42. Probability & moral responsibility with Olivia Gambelin

This month we're speaking to Olivia Gambelin about: what should and shouldn't be automated, the importance of human connection, the call for ethics, what ethics is, where value is created in data, the probability intuition of automated cars and the moral gap, and more.
Date: 4th of May 2020
Podcast authors: Ben Byford with Olivia Gambelin
Audio duration: 51:53 | Website plays & downloads: 333
Tags: Business, Ethicists | Playlists: Business, Philosophy

Olivia is an AI Ethicist who works to bring ethical analysis into tech development to create human-centric innovation. She believes there is strength in human values that, when applied to artificial intelligence, lead to robust technological solutions we can trust. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI Ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University.

Currently, Olivia works as the Chief Executive Officer of Ethical Intelligence where she leads a remote team of over thirty experts in the Tech Ethics field. She is also the co-founder of the Beneficial AI Society, sits on the Advisory Board of Tech Scotland Advocates and is an active contributor to the development of Ethics in AI.


Transcription:

Ben:[00:00:05] Hi and welcome to the forty-first episode of the Machine Ethics podcast. This month, we're talking with Olivia Gambelin of Ethical Intelligence. We talk about what we should and shouldn't automate, the importance of human connection, what ethics is, where value is created in data, the probability intuition of automated cars and the moral gap that might create, and throwing technology at everything when technology might not be needed. If you like this episode, then please check out more episodes at Machine-Ethics.Net. You can support the podcast at Patreon.com/machineethics, and you can find us on Twitter and Instagram at machine_ethics. Thanks again to Olivia, and I hope you enjoy.

Ben:[00:00:54] Hi, Olivia. Welcome to the podcast. Thanks for joining me today. If you could tell us who you are and what you do.

Olivia:[00:01:01] Yeah, thanks for having me, Ben. It's really great to be here. So I am the founder and CEO of Ethical Intelligence; we are an ethics consultancy focussed on helping companies navigate this developing field of AI Ethics. I myself come from an AI Ethics background, philosophy heavy, as I'm a philosopher by trade.

Ben:[00:01:23] Great. So thanks, Olivia. Thanks for coming on. One of the first questions we always ask is: to you what is A.I.?

Olivia:[00:01:31] So I'll give you probably a bit more of a philosophical answer. I think my fellow AI programmers would laugh at me, but I view it as a tool.

Olivia:[00:01:42] It's a tool that can help us as people actually achieve some very interesting solutions. It can really help us with our innovation. But at the end of the day, it is a tool. So I'd rather say that it's augmented intelligence rather than artificial intelligence. I think if we just keep it pigeonholed into being this artificial intelligence that can replace our human intellect, we miss out on a lot of important components that go into what intelligence actually is... coming into play being ethical intelligence, not to quote my company's name, but ethical intelligence, emotional intelligence. Those are very important to us as human beings. And that's something that we can help increase and intensify with the use of A.I.

Ben:[00:02:32] Yeah. So you're saying that it's really something that we can use as a tool to make what we do as humans better, easier, faster, all these sorts of good things.

Olivia:[00:02:43] Yeah. But it should never, in a sense, take away those parts of us that make us human. It shouldn't take over those parts. That's not what we're trying... That's not its use. It should never replace the human. It's supposed to help us. Again, it's a tool. Yeah.

Ben:[00:02:58] Yeah. I guess it might be quite a difficult question, but kind of leading from that: what are the things that you think make us human, given that there are conversations around A.I. taking away some of those things, or having the potential to replace some of the things that humans do? What is it to you that is important, that we shouldn't give away, or that is important for us as human beings to keep doing ourselves?

Olivia:[00:03:25] Yeah. So I think right off the bat, I would say that we should never be using A.I. to, in a sense, replace our time when we actually get to connect face to face, person to person. When I say it's a tool, it's a tool that should allow us to spend more time actually person to person, having conversations person to person, not person to machine. It should help free up time to enable us to do that, if that makes sense. Yeah, that kind of... it creates that emotional connection between people. That's something that truly is human. And I think especially given the time, the circumstances now, when we're all essentially connecting with each other through Zoom, through Google Hangouts, through all of these different digital tools, we're really starting to notice how important it was to sit down in a cafe and have a coffee with someone, and be able to give a high five or give a hug or something. How important that is. I think that's really now starting to highlight that that's something we can't replace.

Ben:[00:04:32] Yeah. So our current predicament is almost helpful in highlighting these kinds of basic human needs in that way.

Olivia:[00:04:39] Yeah, absolutely. I think so. It's been very interesting from my perspective, watching the different technology that's coming out to help us in this time when a lot of people are in quarantine and there's this pandemic, seeing the technology come out to try and help alleviate some of that pain that we're feeling with it. But it's also interesting to see people's reactions to it, because beforehand it was like, oh, we have this new software that allows me to not have to talk to anyone during my day, and it makes me 20 times more productive. And now we're in a situation where it's like, well, we have all of this software coming out that makes us 20 times more productive and more attached to our screens, because we have so many distractions now and so much time attached to them. We don't want that. I'm starting to ramble here now. It's really starting to highlight the fact of, oh, you know, maybe it's better to put the screen down for a little bit and actually do something without technology.

Ben:[00:05:45] I concur. I'm definitely feeling that way inclined at the moment, given that I spend a lot of my day locked up in my house at my screen, and I'm feeling the impact of that for sure, and having to extricate myself from that situation as much as I can. So I'm interested in kind of your background and interests. How did you get into being interested in technology and then, by extension, the work you do now, and being concerned with the impact of technology on society and people's lives?

Olivia:[00:06:20] I will give you the abridged version, otherwise we could be here all day. I originally come from Silicon Valley, Redwood City if you know it. It's right at the heart; we're right next to Palo Alto. So when I go home to visit my family, I drive by Fox headquarters, Facebook headquarters. I went to high school right next to Google headquarters. So I grew up in that tech bubble. And it really, really is a bubble. We were all kind of guinea pigs in a sense. All of these different tech companies in the Valley would use everyone around them as their test cases. And because of that it's very interesting: it's kind of this community that's very technologically advanced, with high technical literacy. And that's why I call it a bubble, because as soon as the tech tried to move outside of this little Silicon Valley bubble, it wouldn't really go anywhere, because it was almost too advanced, too far beyond what other people were willing to accept. But because we were always bombarded constantly with all of this technology, we're like, everything's cool. We've got robots on wheels delivering to us, which now is something amazing, but we've had it in the Valley for 10 years or something like that. It's not new coming out of the Valley. But before I dig too far into that: essentially I started my career out actually in Silicon Valley working in tech start-ups, which was very fun for me, because in a tech start-up you kind of do everything. You have a position, but in all honesty you're just kind of the body there that floats around and does whatever needs to be done, especially if you're just starting out. I eventually ended up kind of taking a digital marketing route. But through this, I saw a lot of the impact, the direct impact, that technology has on people around us. And so we established this, of course... but I also saw the power there that technology had, that kind of influence that you could have over someone depending on the type of technology that you were using, which was very fascinating to me. But of course, I decided to be rebellious when I went on to do my undergrad and went in the complete opposite direction and did philosophy. I think that was a little bit of me going, I don't necessarily like being part of this bubble, I want to do something completely opposite. I ended up studying philosophy and really took to ethics and morality.

Olivia:[00:09:01] I couldn't quite tell you why. I think it was just that intellectual spark every time in these different philosophy courses, trying to understand moral responsibility. I like to think it's the way that I'm trying to understand what life is, and the bigger questions, by studying ethics and morality and understanding how people's actions affect others around them. It's just a way that I am able to understand the world.

Olivia:[00:09:31] And so, from that time spent studying philosophy, I actually moved overseas, which is now more my home than the States is. I moved overseas and I travelled a bit, doing different projects with a couple of different firms. These were more research and, again, marketing. But I found myself working for a consultancy firm in Brussels, and this was during the time of GDPR. I was a researcher for data privacy: GDPR, cybersecurity, all of those big topics, right when they were first starting to come into regulation as well as become these hot topics. So I was attending these meetings; I was getting locked inside the parliament building constantly. And these meetings, these conferences, were constantly about data privacy and what that means. And over and over again there was this call for ethics. People would say, oh, we need ethics. And coming from a background in ethics, I just went: yes, finally, this is my jam. I love ethics, let's talk. But the conversation never went beyond 'we need ethics'. It was like, OK, what do you mean by ethics? Realise, there are all the earmarks, all of these principles there; it's such a rich field. What do you mean by 'we need ethics'? And the answer I always got was that it was something management handles. But I never seemed to find the management that handled this.

Ben:[00:11:12] it's disappointing.

Olivia:[00:11:13] I've yet to see this overseeing management. But essentially, what I saw from that was the love of technology that I had grown up with and this fascination with ethics uniting in this one core subject of data ethics and AI ethics. And it was a flash, for lack of a better word, the light bulb moment, one day during a conference. I was like, oh my God, I have to go study this. I need to go back and study this again. I'm fascinated, and I want to actually start pushing this conversation forward, because I'm tired of looking for the nonexistent management that's supposed to handle this. So through that, I ended up going back to study my master's degree at the University of Edinburgh, and that's where all of this began. During my time at Edinburgh, I co-founded a society called the Beneficial AI Society. It was a social group, and we would bring together people from a computer science background, data scientists, programmers. And then we also had philosophers and political theorists and social researchers; we had some lawyers.

Olivia:[00:12:35] And it was really fascinating. It was just to bring people together to talk about A.I. applications in society. And it was absolutely fascinating to see these two different perspectives, A, learn how to talk to each other, and B, come to solutions that were a bit more robust than we're used to seeing, because all the different angles had been thought out in different directions. Mind you, we were all just a bunch of post-masters and PhDs drinking in a pub discussing this, so these weren't real-world solutions.

Ben:[00:13:13] You make out so it's a lot less formal than you make out to be. It sounded like a, you know, a Grand Republic forum happening.

Olivia:[00:13:21] Absolutely. One of my favourite conversations that we had was on the idea of data privacy. We were talking about: where is the value created in data? Is it created in the individual's data set, or is the value created when it's added to a larger dataset and someone essentially starts pulling intuitions out of that larger set? And it was fascinating, because the computer scientists, everyone from more of a hard science background, all argued there is only value added to the larger dataset, and the person that's pulling these different intuitions out of that larger dataset, that's the person creating value. And then you have the opposite side of the room, from a soft-science, philosophy background, saying, well, the value is created in the fact that there are data points there already. So the individual who created those data points, that's where the value is. And this turned into a night-long discussion of... nothing having to do with A.I., but actually a modern art piece of a urinal.

Ben:[00:14:34] Yep. That's a Duchamp.

Olivia:[00:14:37] Yeah. That's it. The fact that it was such a clear divide, and then the fact that that clear divide, which you wouldn't think would be that big of a deal, actually had a lot more implications for what it meant to actually have data privacy, what it meant to have control over data. And it was fascinating to see both sides of that conversation. And you could tell both sides were going, oh, I did not think someone could have a different opinion than I did. Not a formal setting, but eye-opening in its own way.

Ben:[00:15:16] I'm presuming that you're talking about personal data specifically, not just any random set of data.

Olivia:[00:15:23] Yes. Yes. Personal data.

Ben:[00:15:25] Great. And did you find that there was, in that conversation, some sort of resolution?

Olivia:[00:15:34] Well, I don't think the urinal helped us at all; it was that kind of trap. And honestly, the resolution wasn't: this is the answer, this is where the value's at. I think the resolution was people coming from the hard science background going, oh wait, there's actually an opinion other than ours, and again, the philosophers going, oh, there's an opinion other than ours, and both sides understanding there are people that see value differently than I do, and that's going to have different implications for data privacy going forward. So not a hard-set answer, but I think it was still something to open their eyes: OK, there are different opinions on this matter, and it's important to have those considered. Because, for example, if a programmer is creating something and they think the value is in that larger dataset, they're using personal data, and they go on to create a wider algorithm that's harvesting personal data. And they're like, well, it doesn't matter, because it was a personal data point of when someone brushes their teeth in the morning, something very trivial. And they're like, oh, no problem, this data had no value anyway until I put it into this larger algorithm, which helps dentists or something schedule appointments. I'm making things up right now. But having them realise: oh, just because I think that there's no value in it doesn't mean there isn't. And actually, if I go and talk to the person that is brushing their teeth, they may have a completely different understanding and think, no, I don't want other people to know that. That's my personal time when I brush my teeth, and I don't want people to judge me because I don't brush my teeth until the evening because I forgot to, or something like that. It was very eye-opening in that sense.

Ben:[00:17:24] Yep. Yep. I guess I'm hoping that the people leaving that sort of situation will have the illuminating experience of being able to see things from other people's perspectives, and having slightly more empathy, if you're the developer in the room, for the humanities and the soft sciences, and vice versa, and having that perspective.

Olivia:[00:17:47] Yeah, having different perspectives is vital right now as this technology is emerging, because it's touching all of these different lives. It's not just the life of whoever's creating the programme that it's touching. No, it's touching people beyond borders, beyond our own disciplines, beyond our own understanding, people we'll never even meet. So it's important to understand your opinion is not law.

Ben:[00:18:15] Yep. I mean, I'm hoping that people just understand that, but, you know, obviously not.

Olivia:[00:18:20] I'm afraid I'm being a bit harsh towards programmers right now. I sound a bit harsh, I'm afraid.

Ben:[00:18:32] Yes. So, disclaimer: not all programmers think like this. And probably not all social scientists think like this maybe...? Less so?

Olivia:[00:18:41] More. No. Yeah. One of the biggest challenges, since at my company, Ethical Intelligence, we work with a very interdisciplinary set of experts, one of the biggest challenges of curating that expert network, or curating that knowledge base, has been to find people that don't have an ego. That sounds bad, but I think philosophers and programmers have two of the world's biggest egos. And they are determined sometimes, and you will see this happen, where they're determined to use every single piece of jargon they've ever learnt and try and stump the other one with how much jargon they can throw at each other. All they're doing is talking past each other and not listening. It's like, oh, I know more jargon in my field than you do. And that accomplishes nothing.

Olivia:[00:19:36] So it's difficult to find someone that's like: OK, I know my own field, and I respect that the person that I'm talking to doesn't know my field, so I have to explain a bit more. But also, flipped in the opposite direction, that person also has a huge wealth of knowledge that I haven't even touched before. And so I think we need to be humble about the fact that I have my knowledge base, but there are things that I don't know, which can be a bit scary to admit. So it's not easy. But these kinds of conversations, this kind of knowledge base, is what we need in order to come to solutions within this emerging technology field.

Ben:[00:20:22] Great. So I have a question. This might seem either too simple or too meta, so we'll see how it goes. We asked what AI was to you earlier, and obviously your company is providing kind of ethical guidance, frameworks and consultation, leadership, that sort of thing. Is that right?

Olivia:[00:20:46] Yes. Yes.

Ben:[00:20:48] So in that context, what is ethics?

Olivia:[00:20:56] This is actually a great question. Oftentimes when I speak at conferences, I start by asking the audience that, or by defining it, depending on the setting. I actually spoke at a conference back in February, before conferences were banned, and I started with it.

Olivia:[00:21:22] Actually, I followed a talk. The man who spoke before me was like, oh, we have ethics in our technology, so great. OK, I get up and I go: can anyone tell me what ethics is? And it was just dead silence. This is a room of 100-150 people. No one raised their hand. It was like that awkward situation where the teacher asks a question but no one did the homework. But it was quite interesting to see that, and I think it highlights the fact that ethics can be a bit of a buzzword nowadays, where I know that if I say I have ethical technology, then everyone's like, oh yeah, that's good. But what actually is ethics? And so I usually very, very simply define it as the study of right and wrong. And that's like a super meta definition; I know it's much more complex than that. But that definition itself just kind of clicks with people, and they go, OK, I can understand right and wrong, and so if that's what this field is studying, I can kind of understand what it's supposed to cover. So that is my very meta answer to your very meta question.

Ben:[00:22:35] Great. I love these sorts of questions; it's why I run this podcast, to be honest, to press people on their own understanding and experience. But also, like you were saying with the idea of where the value is in data, some people have really different ideas about some of this stuff. So it's really good to make those definitions and see where people's perspectives on things are coming from. So that's why the question.

Ben:[00:23:07] So what kind of problems is Ethical Intelligence trying to solve? I mean, if you had your perfect client come and give you a phone call, obviously not come into the office to see you at the moment, if they were to appear via email or phone call, what would that client say?

Olivia:[00:23:28] Well, this is interesting. OK, I have a two-part answer. So the first part is the fact that this market, and this field as well, are both very immature, very young. And so even we ourselves at Ethical Intelligence are still grappling with what actually is going on behind the scenes. Where does it help? What does help look like in this situation? Because we're coming in with an expertise in terms of the technology and the ethics, but what does that look like actually in a business setting? It's a completely different world. And so we've already had that barrier of academia, in a sense, having this wealth of knowledge but not really being able to communicate that across the bridge to industry. So the first ideal client would be someone from industry coming to us going: we want to learn what this is in the first place. We understand that there is a need for ethics. We understand that there is that pressure coming from society, from our consumer market, and we want to know what it is we're actually being asked to provide. That would be a first step, a fantastic client, because it shows that they've already done the consideration of: OK, this is something valuable, this is something worth our time, and this is something that we do need to know. It's a new knowledge base that we're expected to know and we need to gain. So that would be the first one, that kind of educational step.

Olivia:[00:25:09] The one after that would be a client that already has an understanding of some of these ethical dilemmas that they're facing. Actually, the perfect client would be someone coming who is developing some kind of cutting-edge piece of technology, and they're stuck in this ethical dilemma of: if we create this, we either create world peace or everything goes to hell in a handbasket. That's extreme, but someone that recognises this is a huge decision that we have to make, and we need to understand all the different aspects that are going into it, because depending on which direction we go with our technology, we could either benefit thousands or we could harm thousands. So a client that comes with that very heavily laden dilemma, and with the recognition that this dilemma that they're facing is not something that they're going to fix down the road; it's something that needs to be decided as they're going forward, so that they can put their best foot forward.

Olivia:[00:26:11] So that would be our second, the one after the education aspect. I know the team and I would have a field day with that; it'd be so much fun. It sounds really bad now that I'm saying that: oh, your moral dilemmas are really fun for us, please bring us more.

Ben:[00:26:30] Yeah. I mean, I guess what you're hoping to provide... those must be fun, per se, but it's like the intellectual challenge of that...

Olivia:[00:26:38] Yeah.

Ben:[00:26:40] ...Mostly interesting in this circumstance when you have something really problematic. Is that what you're saying?

Olivia:[00:26:47] Yeah.

Ben:[00:26:48] And you can get to grips with it and make some sort of decision or helpful package to allow that business to move forward.

Olivia:[00:26:57] Yeah, exactly. We essentially get to see a philosophy thought experiment actually being carried out, and then use what I like to call essentially our training. It's our research, but it's our training if we're speaking in terms of business. Putting it to the test, applying it, it becomes really, really fascinating instead of just these theoretical thought experiments. These are, well, in some cases life and death decisions that we're meant to help with.

Ben:[00:27:34] I mean, on that topic, I saw that you wrote your PhD thesis on automated cars. Is that something that you were interested in discussing here as well? Or have you got a strong opinion on that technology and that use case?

Olivia:[00:27:54] Not a PhD yet, but headed in that direction soon.

Ben:[00:27:56] Sorry. Sorry. That's my fault.

Olivia:[00:27:59] Don't worry about it. Hopefully heading in the PhD direction very soon. But the Masters... yeah, I was looking into, oh God, OK, I was looking into this intuition that often pops up in terms of self-driving cars specifically.

Olivia:[00:28:20] And I called it the probability intuition. It was a pain to write on, because it's only really been discussed in academic circles; it's not something that's been written on before. So my friends ended up joking at one point, they're like, oh yeah, Olivia is off trying to solve the problem that's not actually a problem, that doesn't exist, and we're not quite sure what's going on with her. Essentially, the probability intuition is the fact that we're creating these autonomous cars, and as we create them, obviously we have these moral dilemmas, moral situations, where it's like, OK, if the car is driving down the road, does it run over the grandmother or the baby? But the probability intuition tries to look beyond that, saying: those decisions are important, yes, but as long as we can get the car to the point where it is so perfect at driving that any time it crashes or runs someone over it's like a freak accident, like lightning strikes down in the middle of nowhere, strikes the car, and it runs off a cliff. That's just a freak accident, and you couldn't have predicted that at all. So essentially, get the car so perfect that it never meets those situations where it needs to make that kind of moral decision.

Ben:[00:29:47] What is your reasoning or explanation for potentially getting to a system that does that?

Olivia:[00:29:55] Well, the funny thing was, that was the intuition, but I actually argued against it. What ended up happening was essentially I was concentrating on the moral responsibility aspect; I wasn't per se concentrating on the technological side. And I know that self-driving cars are usually based on probability equations and how they work in their environment. There are just so many different factors that you really can't calculate in a lab.

Olivia:[00:30:34] There's a whole assumption that you have to make, that it is possible in the first place to have a car that drives that perfectly. It also comes with the stipulation that all cars are self-driving as well, essentially eliminating the human unknown factor. I had to act on that; I had to start with those assumptions.

Ben:[00:30:54] Yeah, I think you can almost make those sorts of situations go away without having to think about the technology itself. Like you're saying, if humans are taken out of the equation somewhat, if the whole network were automated cars, and if those roads were fitted with technology-enabling sensors everywhere so the technology doesn't have to think so hard and process so much information. And, you know, if the weather conditions weren't that bad, so it only happened in certain countries, and all this sort of stuff. You're mitigating all these different things away; sort of sneaking them under the rug, I would say.

Olivia:[00:31:34] Yeah. Yeah. Like, I have a magic wand, like, poof, everything is utopia and everything's perfect.

Ben:[00:31:42] Yes, exactly. Yeah, that would be ideal.

Olivia:[00:31:45] Yeah. But the argument essentially goes that as the car gets better and better at driving, the moral responsibility decreases as well. There's that moral responsibility gap: when a self-driving car runs over a grandmother, there is this moral responsibility gap there. We have to blame someone for that death, but it's not clear who to blame. And there's this urgency behind the gap, because it relates to the sense of justice to be carried out. If someone doesn't take responsibility for the death of that grandmother, then it feels like something unfair is happening, an injustice has been committed. And so there's that. But the probability intuition essentially says, well, you know, that doesn't matter, we'll just make the cars so perfect at driving. So as they get better and better, the moral responsibility gap will subsequently, in parallel, decrease as well, up to the point where we don't actually have to solve the problem of who's to blame when someone gets run over by a car. That's the intuition.

Ben:[00:32:52] Yeah, I guess in that stipulation the injustice there is a problem, right? The human reaction to any of these incidents is getting in the way of the system actually working at all. Because, as you say, you've got this gap, and it might be that the public find it too problematic to implement this thing, because it feels to us like the injustice can't be rectified unless we put someone in place, like we are going to blame the company, or we're going to blame the government, or we're going to blame someone, and they are going to be the moral arbiter of that injustice, or the legal kind of entity that we can then deal with. That's interesting. I mean, if we just ignored the human element, then that's obviously not a problem and we can just go on with it.

Olivia:[00:33:53] But we as humans need that kind of sense of justice carried out. Otherwise, I mean, we can't carry on; society collapses in on itself. But we as humans need that justice, we need that blame, to feel at peace with the death as well. And this probability intuition was an intuition that I often ended up in discussions with, which actually relates back to why I wanted to write on this in the first place. In researching it, every time I had a conversation with someone who was coming from a hard science background on, OK, who's to blame, who should we attach that moral responsibility to in the case that the self-driving car runs over a person, the argument I always got in return was the probability intuition: well, you know what, yeah, that's a problem, but it doesn't happen that often, and we'll get to the point where it doesn't matter at all, so we don't actually have to solve for it.

Olivia:[00:34:49] And that always used to... not bug me, but I felt like we kept running into a wall. It's like, that doesn't seem like the correct answer. Why is this intellectually bugging me? I want to dig into this. Why do I have the intuition in the opposite direction? I want to understand that. Is it me just being frustrated that I don't understand the actual coding of the technology, or is there something more there? And so that's actually what I ended up doing in that research: I was looking at whether it was plausible that, if the probability of the car being in an accident decreased, the moral responsibility also decreased.

Olivia:[00:35:36] And I actually ended up arguing the opposite. So essentially, what I ended up concluding, and of course this needs a lot more research done behind it, this was just an initial scratch of the surface, but essentially what I was looking at is the way that we assign praise and blame to a person. What we do when we assign blame to someone is, when someone commits a crime, and I'm talking person to person now, when a person commits a crime and they had a kind of criminal record already behind them, we were used to them always committing crimes, they came from a broken family, they came from a bad home situation, all of those factors that would contribute to someone committing crimes, in that sense we would blame that person less. We would look at them and go: OK, you have all of these other reasons for... you have all these things behind why you would have committed that crime. So, yes, there's blame, but you're kind of less to blame, you're less in control. Whereas if we saw someone with a completely clean record who had absolutely no reason to commit that crime except that they were bored on a Saturday or something like that, we will look at them and go: no, we will blame you more than the other person, because you should have known better. You should have done better.

Olivia:[00:37:17] So blame actually exists on a scale. And so we're looking at the self-driving car, the cars as they are now, or even the cars as they were five years ago, when it was much more likely for the car to run over someone or be in an accident. We were a lot more willing to accept: OK, the technology is still in development, it's OK, we won't blame the entity that's supposed to be blamed, whether it's the car, the company, the programmer. There's less blame here, because we were open to the fact that this was a possibility.

Olivia:[00:37:57] Whereas as the car gets better and better and the accidents get fewer and fewer, accidents do still happen, and we're going to assign a lot more blame, because we're looking at this accident going: the car should have avoided that, that shouldn't have happened, this technology is supposed to be better than that, why isn't it? We're much less forgiving. And so when you look at it in terms of how blame is assigned, and blame being attached to that kind of moral responsibility as the car gets better and better at driving... well, what my conclusion was, and again, I've still only scratched the surface here and I need to go through many more rounds of understanding what exactly this is, but as the car gets better and better at driving and the probability of a crash decreases, the moral responsibility for the crashes that do happen, the blame that comes out, actually increases. So it completely flipped what the original argument, the original theory behind the probability intuition, was.

Olivia:[00:39:11] I'll pause there, sorry. I talked a long time there.

Ben:[00:39:14] Yeah, it's really interesting, because it seems to me that's part of how these systems work: you hope that the overall net benefit, let's say if we're talking from a basic kind of utilitarian point of view, should be better. I mean, why are we even going to bother doing this if it's not better? And I guess, with the assigning of blame or justice into this equation that you were talking about, it has to be considerably better. And I think we've seen that in some of the literature and how people were reporting this stuff in the media. It just is so shocking when something happens, and we might assign so much more moral weight to it. But, like you were saying, the moral gap to these things gets bigger as presumably they get better as a net benefit, which is an odd thing to look at. So I wonder where the tipping point is, where that injustice gets outweighed by the net benefit and it becomes acceptable, you know, as a thing.

Olivia:[00:40:33] Yeah. And it possibly could get to that; we're not sure, we haven't witnessed technology at that point yet. Again, it's all theory and speculation as we're looking at it. But I think the motivation I had behind looking into that intuition in the first place was to force the conversation back onto understanding where that blame is assigned, which is very important for the development of, and actual trust in, this technology in the first place. It's not something that can be skipped over. It's a hard decision to make; it is a hard question. Look at the growing amount of literature written on it. It is not an easy answer, but it's one that we do need to put ourselves to understanding.

Ben:[00:41:22] And I think people can do strange and stupid stuff when they get emotional about things as well, so it could have a knock-on effect. I made this quite silly example in a talk I gave quite a few years ago, where you have an automated car and it's been programmed in a way that, you know, it suddenly transpires through tests that it's knocking over lots of cats, and they change something about the programme and it's not knocking over cats anymore. But then it has this knock-on effect in the future that it's, you know, causing much more damage in a different sphere, maybe something else. So these things can cascade, if you like. We have to be careful that we don't have these kind of shock responses and let that colour our, um, our actions. There we go, I've got the words out, just about.

Ben:[00:42:23] Olivia, this is a question that we usually ask towards the end of the podcast. So with this technology, with A.I. and kind of the future of A.I. and these tools and things that we're making at the moment, what really excites you and what scares you about these technologies and our future?

Olivia:[00:42:43] I am excited by the potential that this kind of technology unlocks for us: the potential to understand our patterns as humans, the potential to impact and help each other, and the potential to reach beyond our normal constraints, whether that be constraints within a community, constraints within a country, and so on. The technology that one person creates can touch the lives of millions, which is very, very exciting. I think it leads us towards development as human beings. It sounds cliché, but it helps us develop as people.

Olivia:[00:43:38] But with that, it's a double-edged sword, and I think that leads into the part that scares me, which is the fact that we are throwing technology at everything. And I think there are things that shouldn't be touched by technology; there are aspects to being human that just don't need a technological solution. Sure, maybe it might speed things up, but is that something that we really want? If we can finish our workday in five hours with the use of A.I., but then don't have any passion or an outlet or something to do with the rest of the day, is that really going to make us happier? So I think, on the flip side, the technology scares me in the sense that we will slowly but surely erode away what makes us human. Our respect for these ethical principles, we'll just slowly chip away at them to the point where it's like, well, we live in a surveillance state because we've chipped away at our understanding of privacy to the point where it doesn't matter anymore. We've chipped away at the principle of trust to the point where you can't trust someone unless they have a tracker on them. That's the usage of technology that starts to scare me, where it starts to ignore the fact that as a person... I'm starting to ramble again.

Olivia:[00:45:06] For example, trust. Trust used to be: OK, I see you, I see what you do, and I know you as a person, and I will trust you. Now trust is taking on a new perspective when it comes to technology, because we can't see the people behind the technology, we can't see what the technology does, and that has changed the way that we learn to trust. But for some reason, to supplement that trust it has become: I need to know all of your actions. No, you need to trust that I have your best values in mind, but technology isn't helping us with that. It's only helping us understand what exactly you're doing. Like, it helps me understand every single step you take during the day, but I don't know whether that step you took was in my best interest. Whereas as humans, we need to look at: OK, how do we encourage each other to have each other's best interests in mind? And technology can help us with that, but it shouldn't come in between, so that I get to see every instance of what you do to make sure that you're abiding by this. That's not trust. That's surveillance.

Ben:[00:46:10] Mm boo.

Ben:[00:46:18] So let's not do that. And guys, I guess is the answer there.

Olivia:[00:46:26] I think I got on my soapbox about trust by accident; I'm sorry, I did not mean to go in that direction. But trust is essential. I think trust is a great example of an ethical principle that we need to protect. It's something very, very human-based, and we need to protect the way that it stands and not let technology slowly creep in on our understanding of what it is to trust one another, if that follows.

Ben:[00:47:01] Yeah, yeah. I'm just trying to imagine, because I come from, I guess, a design technology background, so I'm just trying to work out what a technology kind of manifestation of that would be. It's an interesting problem, and I think there's a wide kind of trust problem in A.I. technologies generally, as well as in Internet technologies. A breakdown of trust, maybe.

Olivia:[00:47:31] Yeah.

Ben:[00:47:32] Yeah. So I wonder what a kind of... Because I think part of the problem is this global situation. You know, we can't just trust... we don't just need to trust our neighbours, we need to trust people from all over the world. And how do we do that, and how do we do that as biological entities who have evolved in some way and have social norms and cultural norms and law and all sorts of stuff? And how do we do that mediated by technology? I guess I don't have the answer; it feels like something we've been hashing out over the last three decades, you know, trying to work out.

Olivia:[00:48:12] Yeah. I wish I had a point-blank, black and white answer for you, but ethics is anything but black and white. That doesn't make it any less worth pursuing, though. I think what we've seen with technology is that we've now understood the black and white answers, and now we're faced with the hard, hard questions, and these are the ones that exist in the grey: what does this actually look like? How do we understand these points about life that we can't touch? I can't physically touch or see human dignity or respect for my fellow man. I can see actions that are caused by that, but I can't physically touch it or see it written out in ones and zeroes. It's something that is a bit nerve-wracking to try and take on, because it is grey; it doesn't have a solid shape or form. But they are still valuable points to life. They're still essential to our understanding of what it means to be human. And technology has opened up the time and space that allows us to now tackle these grey issues that we've grappled with for decades, even centuries.

Ben:[00:49:35] Yes, yes, yes, yes. I mean, like the kind of Internet epoch, for decades, let's say.

Ben:[00:49:43] So that's an excellent place to finish up, Olivia. Thank you so much for joining me on the podcast today. If people want to contact you, follow you, find your work, how can they do that?

Olivia:[00:49:54] So you can find us, Ethical Intelligence, on Twitter and LinkedIn. On Twitter it's ethicalAI_co, and myself, I'm just Olivia Gambelin on Twitter. You can find me as well on LinkedIn; I'm very active, and I love to chat with other people that are interested in this field, slowly bringing together this growing community of brilliant minds of all ages and backgrounds that are really trying to tackle these problems. Oh, and you can also find the website at Www.EthicalIntelligence.co, which carries most of our work and our current research as well.

Ben:[00:50:39] Great. Thanks so much, Olivia. Have a great day. And I'll speak to you soon.

Olivia:[00:50:44] Thank you, Ben. And thank you again for having me and asking difficult questions.

Ben:[00:50:48] Thank you.

Ben:[00:50:51] Hi and welcome to the end of the podcast. Thanks again to Olivia for spending time with us. I was particularly interested in her thoughts on the probability intuition and the moral gap. I was hoping that I could probe further on that, but I think it's maybe a worthwhile conversation for an extended chat on our Patreon, so look out for that.

Ben:[00:51:09] Obviously, go check out Olivia and Ethical Intelligence. I myself also run ethicalby.Design, and we come into organisations to do talks and workshops and provide consultation on AI and AI Ethics. Just a quick note that at the moment I'm working with a company called Tiny Giant, at tinygiant.io, producing a small intro to A.I. called A.I.: That edit, where we talk about A.I., machine learning, ethics, creativity and marketing. So check that out on the Machine Ethics YouTube and also on Tiny Giant's YouTube, too. Thanks again for listening, and I'll see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford