35. Moral reasoning with Marija Slavkovik

This month we're talking to the amazing Marija Slavkovik about a new language for talking about machine intelligence, expert systems and AI history, unchecked bot networks on the internet, how our technology doesn't work for us, and collective reasoning & judgment aggregation.
Date: 24th of September 2019
Podcast authors: Ben Byford with Marija Slavkovik
Audio duration: 52:06 | Website plays & downloads: 555
Tags: Morals, Ethicists, Machine ethics, Academic, Reasoning, Voting, Expert systems, Language | Playlists: Expert Systems, Philosophy

Marija Slavkovik is an associate professor in AI at the Department of Information Science and Media Studies at the University of Bergen in Norway. She works on collective reasoning and decision making, and is specifically interested in these types of problems in machine ethics. Machine ethics basically tries to answer the question of how we program various levels of ethical behaviour into artificial agents. It is a very interesting field for both computer scientists and humanists, and I like it because it pushes very hard reasoning problems back to the surface of AI.

Marija's background is in computational logic and in control theory, and she is also interested in all aspects of automation. She mainly writes scientific articles on computational social choice and multi-agent systems. However, being in a department that is half media studies, she is exposed to a lot of issues in how information spreads in social networks and how information gets distorted after being spread through a network and/or aggregated. Marija is now trying to bring this problem into the machine ethics conversation, because there is a lot of decision automation happening behind the scenes of information sharing; we see a lot of emergent behaviour from systems of artificial agents and people, but we do not fully understand or control it.


Transcription:

Ben:0:00:02 Hi and welcome to the thirty-fifth episode of the Machine Ethics podcast. This month I'm talking to Marija Slavkovik, associate professor at the University of Bergen. We talk about finding a new language for machine intelligence, expert systems and some of the history of AI, unchecked bot networks on the internet and how a lot of autonomous systems are embedded in the network, how technology doesn't always work for you, and we dive into Marija's specialty: collective reasoning and judgment aggregation.

To find out more about the Machine Ethics podcast go to machine-ethics.net to see some of our past podcasts and more information, and you can contact us at hello@machine-ethics.net. You can also support us on Patreon at patreon.com/machineethics, where you'll find other content: videos, reading lists and bits and bobs from the world of AI. Thank you and I hope you enjoy.

So hi Marija.

Marija:0:01:00 Hi Ben.

Ben:0:01:01 Hi, thanks for joining me on the podcast. Could you introduce yourself to our listeners: who you are and what you do?

Marija:0:01:06 So my name is Marija Slavkovik and I'm an associate professor at the University of Bergen in Norway, and what I do, as we were just saying, is research in artificial intelligence; specifically I'm interested in collective reasoning problems in machine ethics.

Ben:0:01:26 Great. The first question I usually ask people, because we get a varying degree of answers here, and it sounds pretty basic: to you, what is AI, what is artificial intelligence?

Marija:0:01:39 Right. So I come from a background that has gone from electrical engineering through computational logic; my PhD was in computational social choice and I did postdocs in verification, so a very technical background. And for me the definition of Russell and Norvig stands: we are attempting to recreate intelligent behaviour in software and hardware. That's what AI is. Of course, the goal of AI for me is to make sure that nobody does a job that they wouldn't want to do, particularly domestic labour, but unfortunately I don't work on that directly.

Ben:0:02:18 Right, so that's like the dream for AI, basically.

Marija:0:02:22 Yes, nobody has to clean their house unless they really, really want to, or do a job they don't want to do, right.

Ben:0:02:28 Maybe I could do it for pleasure, so I'm doing the dishes. Yeah, OK, well, let's get there. So you spoke briefly about this label of, was it, decision making in machine ethics...

Marija:0:02:51 So I mean, I understand that today people have this attitude a lot that AI is like this golden egg, and it's a mystery golden egg and we don't know what's inside, but it's like, wow, it's awesome, and maybe it's going to be scary and do bad things to us. But for me it's not like this. I don't see AI as some kind of mystery technology that we don't understand, or this new entity, and so on. I think we don't have a proper language to talk about cognitive behaviour in machines, so we apply to it the language that we use to talk about cognitive behaviour in people, and then with that we additionally apply certain other expectations of people, and then everything just gets confused. So when we're talking about decision-making, and this is what I'm interested in, even without AI this entire field that I mentioned, computational social choice, is about understanding how information can be put together. In the simplest sense this is something like elections: you have a lot of opinions on who should lead the country, and what is the best decision, what is the decision you want here? And in machine ethics we need to somehow have some kind of moral decision-making capacity built into software, or agents, and so on. So then the question is: who decides what that moral decision capacity is, who decides what's good and bad? And in my opinion, in order to do that we have to somehow put together information, and that's why I kind of put those things together. If you let me go unchecked I can talk for hours.

Ben:0:04:34 I'm gonna try. Let's try and unpack those things, because I imagine people listening to this might have issues with either not knowing these terms or just disagreeing with you there.

Marija:0:04:45 absolutely

Ben:0:04:47 In what kinds of instances do we need machine ethics, or this idea that we give a program or algorithm some agency to make decisions?

Marija:0:04:54 Right. So what really fascinates me about this current narrative of AI is that AI has been with us since 1956 at least, since there have been computers; it's just a moving border as to what we call AI and what we call a computer. And the reason why nobody really talked about AI and ethics up until now was the way we programmed smart machines: experts programmed them, and they were supposed to be used by experts in a very controlled environment. There are two ways to make something behave intelligently. One is to have a very, very simplified behaviour in a complex environment, so you can just, you know, do your thing, move forward, move backwards and that's it. The other is to simplify the environment and get complex behaviour.

This is what we used to do: we simplified the environment to get complex behaviour. You can think of automated trains, for example: you cannot approach the track of the train, because we need that simple straight line so that the train knows what it's doing. But for, I would say, ten years now, probably more, we have had smart systems in complex environments that do complex things, so necessarily these systems interact with people who are not trained to interact with them. You cannot anticipate the behaviour of the system; you don't even necessarily know that you're interacting with a machine. Think of a chatbot, or you're tweeting at a company and it's like, I don't know whether whoever replies to this is a person or not any more. And because you have this unchecked interaction, you need to have a certain level of moral behaviour programmed in. I use behaviour deliberately, not agency, because it doesn't have to be agency, and I know quite a lot of philosophers who are going to vehemently, I think that's how you pronounce that, argue that you cannot have agency within the machine. I stand on the side which says I don't really care whether this is agency or not; what we want is behaviour, because we want the machine not to erode the values of a society, however you define those. And it is because of this new tendency of having complex systems interacting with non-professionals that we need this. It's not about some scary domination of the world; it's just usability, you know, in a way, and caution.

Ben:0:07:28 Mmm-hmm, so it's the circumstances in which these algorithms operate, you know, and who they interact with, that's the main thing, I think.

Marija:0:07:39 I mean, sorry to interrupt you, but whenever you say AI that interacts with people, people immediately imagine this driverless car that runs over people on the street or something, but in reality there are also quite a lot of unembodied algorithms, software, that interact with people, and those are also of concern here.

Ben:0:08:04 Yeah, so it kind of seems like you're spreading the net wider then, to encompass maybe lots of different ways that people interact over social media, over the Internet, and how algorithms interact with the real world, essentially.

Marija:0:08:18 Absolutely yes yes.

Ben:0:08:19 Yeah.

Marija:0:08:20 Because there are many more smart things going on online than there are in driverless cars; the number of driverless cars is small, whereas the number of algorithms that interact with people online is quite large.

Ben:0:08:34 Yeah. Do you have an idea of the basic things that people should be doing when they're creating these sorts of algorithms or interactions? For example, you talked about chatbots and Twitter bots and things like that.

Marija:0:08:49 Yeah

Ben:0:08:50 Could it not just be a simple thing, that you could say this is a bot, or I am Jeff the robot, or, you know, some sort of transparency there?

Marija:0:08:59 Yeah, absolutely, that's one thing that you can do, but then: how transparent is transparent? Because just saying I'm a chatbot, or I'm a machine, does not necessarily tell the person who interacts with it what its abilities are, what the cornerstones of its abilities are. How can I check for myself what this chatbot can do? And then people have very different expectations, from totally silly to, I don't know, some unrealistic sci-fi version of software.

Ben:0:09:37 Yeah.

Marija:0:09:40 And then there is the question of understanding what this means, which is different for different people, right, different people have different expectations and backgrounds. And then the second thing is: OK, if this is a chatbot, how can I influence it? Because when you talk to a person, we have certain experiences of interacting with people, and we know that you can maybe push them emotionally left or right, or appeal to a certain humanity if you need something done, or reason with them, or what-not. But when you interact with a machine, how do I give feedback to this machine? It feels like, OK, is it like talking to a wall or something, is that the case?

Ben:0:10:23 Yeah, or is it maybe talking to a larger system, you know, and this is just a tiny aspect of it which is being...

Marija:0:10:30 Right.

Ben:0:10:31 ...broadcast to you, but actually it then goes off and there's all this behind the scenes doing other things, you know.

Marija:0:10:34 It's also a question of who I'm really talking to, right? I'm typing these messages to this algorithm that does something with them, but then where does the buck stop, in a way? It could be that people are looking at this, and maybe it's people in my country and/or my town, and it's a really, really small town, so I don't really want... oh wait, it's actually my next-door neighbour that realises, I don't know, that I'm a Turk, here are all my private conversations with the chatbot from the medical office, or something, I don't know.

Ben:0:11:08 Yes yeah.

Marija:0:11:09 It's not simple, but...

Ben:0:11:09 Yeah, I mean you alluded to a couple of news stories that have come out recently about big companies listening in on your chats. It was Microsoft or something like that, with Skype, one of their systems: they were sending audio to people to listen to, to make the system better, you know.

Marija:0:11:29 So, I mean, what I do in research is actually very, very theoretical and very mathematical: we are looking at theoretical properties of the aggregation of information from different sources under different circumstances, and so on. But the consequence of this is that it feeds into this big narrative of what happens with information when it interacts; this is what our information does right now. Well, I have to say that currently none of the systems that you use, none of your apps, none of your browser systems, in reality works for you, and especially if it says free then it definitely doesn't work for you. So the assumption always has to be that somebody is reading this and listening to this, and there's some human processing. If you work in AI you are very well aware of the limits of what the state of the art can do, so whenever you see a system like this you know that, yeah, there is a person that looks at it; your only hope is that this person is somewhere far away and absolutely uninterested in meddling with your affairs.

Ben:0:12:42 Yes

Marija:0:12:43 I mean, if you look at the definitions of AI, one of these definitions says it's about building agents that behave ethically, right?

Ben:0:12:53 Right.

Marija:0:12:53 but we don't in fact build agents; we build parts of agents, and then we fill in the other parts with people's agency.

Ben:0:13:04 Yeah, so maybe you build the automated bit, the easy bit, maybe the bit that looks at lots of text, and then if it comes to a problem it will go and ask a human. Is that the sort of system?

Marija:0:13:19 Well, it's actually a lot simpler than that. I'm trying to find a good analogy here, but I guess there isn't one. It's like your brain is not actually one unit, but a bunch of people who all do different things, and then they kind of inform each other about what they have done, but in a very, very bad way. To say that the system asks a person for help would already be too advanced. In the best-case scenario, an example would be: you have an image classifier, is it a cat, is it a dog, right? Some person has already fed the training set, has classified a bunch of examples of is this a cat, is this a dog, and then you have automated classification, and it comes with a certain percentage of confidence: how sure the classification is, how certain it is.

So that means that we have automated the recognition of cats. Then, to make sure that mistakes don't happen, whenever this certainty is below a certain threshold you program another program that says: well, it is below the threshold, do something. That do something can be, you know, try a different classification algorithm, or page somebody who is on duty, and so on. But what the people involved in the automation usually do is either label examples, or check whether examples have been correctly labelled, or look at cases that have low accuracy or confidence in whatever handling they're doing, and try to learn why this has happened. So the actual intelligence comes in from there.
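A minimal sketch of the threshold-and-escalate pattern described here, assuming a generic classifier; all names are illustrative stand-ins rather than any particular library's API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Prediction:
    label: str         # e.g. "cat" or "dog"
    confidence: float  # how certain the classifier is, 0.0..1.0

def classify_with_fallback(
    image: object,
    classifier: Callable[[object], Prediction],
    threshold: float = 0.8,
    on_low_confidence: Optional[Callable[[object, Prediction], str]] = None,
) -> str:
    """Return the automated label, escalating when certainty is too low."""
    pred = classifier(image)
    if pred.confidence >= threshold:
        return pred.label
    if on_low_confidence is not None:
        # e.g. try a second algorithm, or queue the case for a person on duty
        return on_low_confidence(image, pred)
    return "needs-human-review"
```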

Ben:0:15:14 Right. So what I find interesting here is, let's presume that our listeners have that in mind, that these systems are maybe very good statistical engines for working out whether something is true or not, let's say is it a cat or a dog, and it gives you this percentage of confidence, and as the human programmer we can say if it's below 80% then do something else. If this is the case, can you use AI and these sorts of algorithms in making the machine ethics side as well? Because if we've got this brain which is made up of all these different parts, is our ethical part also a similar sort of situation?

Marija:0:16:08 Yes, I think so. But there is this tweet that popped up in my timeline at one point, and I really don't remember who it was from, which is a pity because it's brilliant, and it says: the biggest problem with machine ethics is that we don't understand human ethics. When you understand something very well, and I say this always, it's very easy to build a machine that follows the rules: if I know what the rules are, then I can build something to follow the rules. The problem is how do we build a machine that breaks the rules, because a lot of ethical reasoning is about making exceptions in particular cases. If we had a clear-cut case, say under this condition and that condition and that condition never under any circumstances do this, that's a constraint, and we're great with constraints.

Ben:0:16:50 Yeah.

Marija:0:16:51 But then there is: unless this is the case, but maybe if the other thing is the case, and so on. So there are many schools of thought as to how you approach building ethical reasoning in machine ethics, and there are, I would say, two basic works here. One is a paper and a book by Wallach and Allen, where they say you can build it bottom-up or top-down. We're still interpreting what they meant exactly, but the idea is that you can either have a machine that gradually learns to follow some rules and then comes up with something, or you can take a known ethical theory that is developed to a certain satisfactory level and break it down into simple actions and cases, which basically says: if you are in a chat and you cannot verify who the source of this chat is, do not send nude pictures, or something. And then, in a way orthogonal to this, there is the work of Moor, who says that you can have implicit ethical agents and explicit ethical agents. The implicit one is when you just program your machine to follow a bunch of rules: as long as the machine can recognise the situation it is in, it will follow these rules, and if it doesn't, then it will call somebody, or stop, or fall back to default behaviour. The explicit one is when you try to implement some kind of understanding, understanding in quotation marks, right, machines don't understand anything. You try to enable some capacity in the machine to recognise the situation, map it to a situation that it has recognised before, find out by itself which moral principles apply, what the ethical situation here is, and decide what to do. So the machine uses its autonomy, again, whatever that means, to make a decision. These are kind of two-by-two, in a way, the approaches.

And in all of these cases we are not talking about machines solving hard dilemmas that people couldn't have solved; we are talking more about common-sense reasoning rather than something high-level.
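As a toy illustration of Moor's implicit/explicit distinction, and nothing more than that: the implicit agent below only applies hard-coded rules, while the explicit one first tries to map the situation onto a precedent. All rule names and the matching test are invented for the example.

```python
# Toy contrast between implicit and explicit ethical agents.
# Everything here is illustrative; real proposals are far richer.

RULES = {
    "unverified_chat_partner": "refuse_to_send_private_data",
    "medicine_refused_once": "remind_later",
}

def implicit_agent(situation: str) -> str:
    # Implicit: ethics is baked in as fixed constraints. If the agent
    # cannot recognise the situation, it falls back to a safe default
    # (stop, or call a person) rather than improvising.
    return RULES.get(situation, "stop_and_ask_a_person")

def explicit_agent(situation: str, precedents: dict[str, str]) -> str:
    # Explicit: the agent tries to map the current situation onto one
    # "recognised" before and applies the principle that governed it.
    for known_case, principle in precedents.items():
        if situation_matches(situation, known_case):
            return principle
    return "stop_and_ask_a_person"

def situation_matches(current: str, precedent: str) -> bool:
    # Placeholder similarity test; recognising situations is the
    # genuinely hard part in practice.
    return current == precedent
```

The hard part, situation recognition, is reduced to string equality here, which is exactly what makes the explicit approach difficult in practice.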

Ben:0:19:16 Right, those two examples sound very similar, the implicit/explicit and the bottom-up/top-down.

Marija:0:19:22 Yeah, yeah, but they're not; there are details in which they differ.

Ben: Okay, yeah, cool. So what I like about this is that you alluded to the fact that we don't have a good understanding of what that ethical reasoning should be in any particular circumstance; we could argue in various directions. Given even a very small domain, we could probably find lots of circumstances to which we, across the world, might not have a definitive answer.

Marija:0:19:53 Absolutely.

Ben:0:19:55 Yeah. I mean, is that the vanguard, is that the biggest problem that machine ethics today is battling with?

Marija:0:20:10 So, if you take into consideration that machine ethics has basically existed for ten years, and I'm being very generous here, I wouldn't say there is one big problem; it's more that we are still finding out what the problems are. We are mapping out the field right now and discovering, OK, this is a problem.

The difference, one of the differences, between engineering and research is that in research we are looking for problems and in engineering we are looking for solutions, and in machine ethics we're pressed to do both, while we are still looking for what the problems are. So I don't think there is one solution that fits all; there is not one approach that fits all. And yes, as you alluded, what is OK in one country, in one situation, within one family, is not OK in another country, in another household, for example. Which brings me back to the beginning, when I said you have these collective reasoning, collective decision-making situations, because then who decides what is right? You have the law on one hand, which has aspects of morality in it; then you have society's norms and expectations, and the values we as a society have somehow agreed we are upholding; and then you have personal morals.

In some situations it's fine to just say it is illegal to build a car that behaves this way, and that's OK. But in other cases, if you have something like an assisted-living robot, say, that is supposed to help you in your old age to be independent in the house, then, and this is a very classic scenario in machine ethics work, the robot should decide whether it reminds you to take your medicine. You keep saying no, I don't want to, and of course it has to have respect for your opinions, but at some point a line is crossed if you just continue not taking your medicine. Then the question is: should it report to your children, or whoever is deputised to make medical decisions for you, or to your doctor, or what should it do? There are people who want their children to be involved, and I use children here but it could be really anyone, who want their close ones to be involved and would prefer that they are the ones informed; and then there are people who would rather not, who have this other sense of pride and say no, I would not want that. Which law, in which country, should be employed to make this decision? Nobody should make this decision for you; it should be just you as a person. To me this is the current problem that I'm looking at: how do we put together the ethical opinions of several people into something that can then be defined, that can be the definition of what we need to implement? And then we worry about how we implement it, because that's not simple either.

Ben:0:23:08 Let's move on to that in a second; I just wanted to interrogate this a little bit further. So we have this idea that you want the person to be independent, right, you want their autonomy in their old age, but they might, in your example, not be taking their medicine, and it might be quite detrimental to their health, and there are instances where the robot maybe can call someone, call home and say there's an issue, or it might persist in trying to get them to take their medicine, or whatever the ultimate answer is if they say no. So given that's the case, are we not just in a position, let's say, where this is all just basic reasoning that we all have to do? We live in this new digitised environment and we just have to get to grips with it, and have a lot of people thinking in depth about specific issues or problems, and then we'll have answers. Or do you think it's a bit more fluid and complex?

Marija:0:24:09 Well, it would be basic reasoning, but then the question is, OK, fine, what is basic reasoning? And I'm asking you here: OK, you and three friends are trying to choose which restaurant to go to tonight. So how do you choose? This is a question for you: how do you choose?

Ben:0:24:32 I guess this comes back to your kind of social aggregation stuff, right?

Marija:0:24:37 Right. There's this idea that we have these systems, that somebody has built all these theories, and in a lot of cases we have, and we still use, you know, the less efficient ones, I'm talking directly about voting here. But there is no social choice theory that talks about how you aggregate moral information. I can't even call them preferences, because we don't really know what they are, right?

Ben:0:25:07 Yeah.

Marija:0:25:08 Because there was no need to do this. I mean, why would you, why would somebody aggregate morals? Because this is not how we decide what moral is in our society: we have some kind of iterative process of trial and error, and whatever sticks after a while becomes the moral norm of society. We don't have a vote on what is moral today in our society, right? You don't go and say, well, I vote that this is bad, don't do that. So the theory doesn't exist. Yes, it is basic reasoning, but we haven't done it.

Ben:0:25:45 Do you think... I have a hunch that actually that sounds like an awful place to be. If you give people votes on some sort of moral inclination and they vote for Brexit again, you know, they maybe make a choice which is not advantageous for society as a whole, because everything is individualist, then surely maybe that's not the best route forward.

Marija:0:26:10 Right, so this is the thing: whenever I say voting, people immediately think majority.

Ben:0:26:17 Sure.

Marija: Right, or plurality of some kind. And let me just set the record straight: there are many, many clever ways in which you can vote that do not involve majority or plurality. Majority is great if you are putting together preferences, like where do we go for dinner, but if you're trying to find what the best solution is, there are other ways to do it; if you're trying to find the truth, there are other ways to do this. It's not necessarily majority. Now, that being said, this is the way we use language, right: we kind of apply intention to things without meaning to. I just became aware of this recently, where I was saying consensus and people were immediately like, yes, majority. And it's a very bad idea to use majority here, because it is very personal: just because I happen to be among a group of people who are non-vegetarian, it doesn't mean that I have to start eating meat all of a sudden, or vice versa, right. And that's why I'm saying we don't know how to put things together, because you should somehow take everybody's opinion into account, but there still has to be something that is consistent and coherent. It's new territory, super exciting.

Ben:0:27:31 Yeah, yeah, so you're working on this kind of question now?

Marija:0:27:36 Yes. I really, really don't like majorities; I have been a minority in everything all the time, and personally I'm against this majority idea. Sometimes it's good, yes, but not always, and there have to be other ways. One approach is that sometimes we do it my way and sometimes we do it your way, and then you have to figure out which are those times, to what extent, and so on. And what are you minimising: are you minimising some kind of number, are you minimising some kind of... what are you optimising against when you're making a moral decision? This is all very complicated, in the sense that it involves a lot of disciplines, so it's not something that programmers can do by themselves, or computer scientists, or AI researchers and so on; we have to be informed by philosophers and political scientists and economists and social scientists, who have been doing this for a very long time. But it is exciting too: we learn more about ourselves by building machines.

Ben:0:28:38 Yes, that comment resonates with me. I think I first heard it from Anderson & Anderson as well, the idea that we're learning more about our ethics, basically, and our understanding of ourselves, through having to imbue machines with some sort of decision-making.

Marija:0:28:58 I have to mention this. We have this very ridiculous situation with philosophers sometimes, where we kind of gang up on them. I've organised a couple of these events in which you put together different disciplines to talk about how we build ethical behaviour, and we always wail on them: give us a minimum theory, give us something that is the minimum set of behaviours that something needs in order to be called a moral agent. And then they reply with something like: tell us what you want to do with it, and then we will find it out. Which reminds me of those discussions at family gatherings when your aunt comes to you and says, oh, I want to buy a laptop, tell me which laptop to buy, and you go, well, tell me what you want to do with it and then I will figure it out. And of course she doesn't know what she's going to do with it, she has no idea, she just wants to try it and find out, and eventually she will learn what she didn't know could be done with it. This is the situation that we are in right now: we ask the philosophers, give us the minimal theory, something to start with, and they go, well, tell us what you want to do with it, and we just really don't know.

Ben:0:30:13 OK, do you think it falls under that adage, like pornography, "I'll know it when I see it", sort of thing? So we'll build some stuff and then we'll know, you know.

Marija:0:30:25 It's actually not even like that. It requires, in my opinion, and I'm sure my philosopher friends will disagree on this, a shift in paradigm of how we think about what is ethical. A lot of moral philosophy deals with, I recently learned these are called considered judgments, the things we have somehow agreed are good: you know, killing people bad, hurting people bad, destroying the environment bad, planting trees good, and so on. And then they are building moral theories, which are basically nothing but systems of reasoning about what is good and what is bad, systems of deciding what is good and what is bad, for the difficult cases: you are at a crossroads, you can kill one person or you can kill five people, right. They don't deal with the considered judgments or this easy common sense, and what we actually need is a theory that does that. So we have to do it together.

Ben:0:31:26 So the common-sense theory might be an expert system?

Marija:0:31:29 Might be. I mean, don't trash expert systems, they were a good idea, and there are a lot of people in AI right now who believe that the solution is to encase the statistical methods in a logic reasoning framework.

Ben:0:31:50 Right, and what does that mean for people who maybe can't visualise that?

Marija:0:31:53 So basically it means that you have some kind of processing of information using a statistical method. You have, let's say, a cats-and-dogs classifier, which tells you this is a cat, and this is the certainty I have in that. And you build a scaffold, like a program, around it that tries to find out what the examples on which your algorithm fails have in common, and then builds some kind of if-then around it: if this is the case and that is the case for your data set, then you need more of these types of examples to be put in. That's just really obscure, oh my god, I sound like a category theorist. You make an example to clarify something, and the example is 'let S be a set'. I went for an intuitive example and I ended up with a system with things and another thing.

Ben:0:33:03 Yeah, and then it sort of updates itself, right, in our example?

Marija:0:33:08 Eh, right. I mean, of course this is an abstract idea: the idea is to use rule-based systems to reason about the data set, or to reason about the labels, the classification, that is being produced with a particular data set. The biggest problem with statistics-based systems is that when one fails you do not know why it has failed, and knowing why it has failed is very informative. So that is some kind of intelligence, and we are trying to build that in.
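A hedged sketch of that "scaffold around a statistical method" idea, with an invented classifier interface and deliberately simplistic rules; real neuro-symbolic systems are far more sophisticated:

```python
# The classifier is assumed to return (label, confidence); examples are
# assumed to be dicts of features. The rules are toy stand-ins for the
# if-then reasoning described above.

def scaffold(classifier, examples, threshold=0.8):
    """Collect the classifier's low-confidence cases and apply simple
    if-then rules to decide what to do about them."""
    uncertain = []
    for ex in examples:
        label, confidence = classifier(ex)
        if confidence < threshold:
            uncertain.append((ex, label, confidence))

    actions = []
    # Rule 1: if most failures share a feature (here: dark images),
    # ask for more training examples of that kind.
    dark = [ex for ex, _, _ in uncertain if ex.get("brightness", 1.0) < 0.3]
    if uncertain and len(dark) > len(uncertain) / 2:
        actions.append("collect more low-light training images")

    # Rule 2: anything still uncertain goes to a person on duty.
    if uncertain:
        actions.append(f"send {len(uncertain)} cases for human review")
    return actions
```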

Ben:0:33:47 Yes, and that can help us in certain circumstances, so that we can maybe avoid cataclysmic errors, because you have some sort of feedback loop there to say this is really going to happen because of this dataset.

Marija:0:34:04 It has to be said that cataclysmic errors can be avoided by understanding when you should use statistics-based methods and when you shouldn't; that's step one. And once you have decided it's OK to use them, you can make them work a bit better. But again, I'm not saying I know how to do this; I'm just saying this is the vogue right now in AI research.

Ben:0:34:28 Right. So, did you want to talk about the work you're doing in judgement aggregation a bit more?

Marija:0:34:37 Oh, you're asking me about judgement aggregation; is this good for your viewership?

Ben:0:34:44 I don't know, yeah, we'll see, because I was looking at your publications and, OK, I kind of understand some of these titles; this one I have no idea.

Marija:0:34:58 So I can tell you what it is, OK. This is a more general theory than voting; let's put it this way: in voting you aggregate preferences, and in judgment aggregation, instead of saying I like cheese better than cake, for example, instead of comparing cheese and cake, you have some kind of question which is either true or false. So you have a series of questions: is it true that Brexit will happen on the 31st of October, yes, I went there. Is it true that this will be bad for the economy. If this is bad for the economy, is it true that we should try to not make it happen, and so on. So you have a series of questions, and the answers to these questions are usually true or false, but you cannot freely give answers to the questions, because they are logically related to each other.

The classic example comes from legal theory in the 90s, where they observed what is called the doctrinal paradox; this is the seminal example that kind of started the field. You have a defendant, and you are trying to decide whether or not they are guilty of breach of contract, and you have three judges who are supposed to decide this case: is the defendant guilty? The law says the defendant is guilty if and only if there was a contract and the defendant breached it. So this is a logical relation: there was a contract, proposition A; the contract was breached, proposition B; and the defendant is guilty, proposition C. So C holds if and only if A and B are true together. Each of the three judges decides which of these questions A, B and C are true or false, but they have to be consistent; you cannot violate the law. You cannot say there was no breach of contract but there was a contract, therefore the defendant is guilty, because that is not according to the law. So individually they have to be rational, and then: how do we put together the opinions of these judges so that we get a collective decision that is rational? It was discovered by Kornhauser and Sager, these law scholars in the US, that if you vote issue by issue, so if you first decide by majority whether there was a contract and whether there was a breach, then in some cases you are going to convict, whereas if you instead only look at the judges' individual decisions on guilt, you are not going to convict, and it makes a big difference. So it was discovered that in cases where you have logical relations between the questions you're asking, you cannot just pool the information question by question; you have to do something else. And then it was discovered that this in fact is a general framework for voting, because everything that is an aggregation of preferences you can in fact represent as an aggregation of what we call judgments. And these are complex systems in which you make collective decisions, and this is what I have spent quite a lot of time looking at: how do we make decisions in this setting.
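A minimal worked instance of the doctrinal paradox Marija describes, as a short Python sketch; the judge profile is the standard textbook one, not taken from the episode:

```python
# C ("guilty") holds if and only if A ("there was a contract") and
# B ("the contract was breached") both hold.

judges = [
    {"A": True,  "B": True},   # judge 1: contract and breach -> guilty
    {"A": True,  "B": False},  # judge 2: contract, no breach -> not guilty
    {"A": False, "B": True},   # judge 3: breach claim, no contract -> not guilty
]

def majority(values):
    return sum(values) > len(values) / 2

# Premise-based: take a majority on each premise, then apply the law.
premise_A = majority([j["A"] for j in judges])           # True (2 of 3)
premise_B = majority([j["B"] for j in judges])           # True (2 of 3)
premise_based_verdict = premise_A and premise_B          # guilty

# Conclusion-based: each judge applies the law, then take a majority.
individual_verdicts = [j["A"] and j["B"] for j in judges]   # [T, F, F]
conclusion_based_verdict = majority(individual_verdicts)    # not guilty

print(premise_based_verdict, conclusion_based_verdict)  # True False
```

Premise-based aggregation convicts (both A and B carry a two-of-three majority) while conclusion-based aggregation acquits (two of three judges find the defendant not guilty), even though every individual judge is consistent with the law.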

Ben:0:38:07 So just to go back to that example: what you're saying is that it's better to ask whether A and B are true together, rather than if A is true and then also B is true, is that right?

Marija:0:38:21 It's different. 'Better' depends on whether you are with the defendant or the person whose contract it was, right?

Ben:0:38:26 Lets say theres a grid of true and false and then A, B and C you might get A, B different answers from the different judges and therefore you can't make a definitive answer to see but if you ask each judge whether A and B together are true then they can only say there's less preference there almost, less to deal with.

Marija:0:38:56 So I mean, if you search for 'doctrinal paradox' you will find examples of this, but yeah, it basically illustrates that certain problems are too complex to be represented as a single question, Brexit maybe, maybe, and you have to break them down into simpler questions. But then these simpler questions inform each other, the answers depend on each other, so you cannot just do pooling, you cannot just do majority, because what you get, even though everybody is individually rational, is collective irrationality. In these cases you cannot just pool; you have to do something smart, which happens to be computationally very difficult.

Ben:0:39:45 So you might have lots of people who... let's say there's a hundred judges and there's no definitive answer on A and B from the hundred judges. What do you do there, when there's no majority, let's say?

Marija:0:39:55 No majority, right. So one approach would be to look at the issue with the strongest opinion: say ninety-nine judges say A is true, so you start with that, and you say, OK, A is true. Then you have sixty judges saying B is false, and you ask, can I add B is false to A being true, is this consistent? Yes, so you add B is false, and then for the issue that is left over you deduce its value from what you have already figured out. That's one way to look at it. Another way is to look at the input from each individual source as a piece of information as a whole, and then try to see what the similarity is between it and all possible truth values that you can give to these questions, and then you average out over these similarities; you use what is known as the Kemeny rule, and there are many others. Depending on how you define similarity, and how you put things together with your aggregation function, you get a different result. It is a well-known secret in voting theory that the one who chooses the voting method chooses the winner of the election, and the same rule applies here. So then, to pull it together with moral reasoning, what I'm saying is that in moral reasoning we have yet another problem, in which you have relations between the judges that make the decisions, between the stakeholders in this particular case. It's not just their opinions; they somehow relate to each other: I am influenced by society, I influence society, the law influences me, I influence the law, and so on. So it's a different theoretical problem than just trying to find out what the truth is, as is the case...
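A hedged sketch of the distance-based idea, for the same three-proposition agenda as before: enumerate the judgment sets consistent with C if and only if (A and B), then pick the one with the smallest total Hamming distance to the judges' votes. This is a toy Kemeny-style rule under those assumptions, not a faithful rendering of any specific paper:

```python
from itertools import product

def consistent(a: bool, b: bool, c: bool) -> bool:
    # Only judgment sets respecting the doctrine C <-> (A and B) are allowed.
    return c == (a and b)

def hamming(x, y) -> int:
    # Number of issues on which two judgment sets disagree.
    return sum(xi != yi for xi, yi in zip(x, y))

def kemeny_aggregate(profiles):
    """Return the consistent judgment set closest (in total Hamming
    distance) to the individual judges' judgment sets."""
    candidates = [j for j in product([True, False], repeat=3)
                  if consistent(*j)]
    return min(candidates,
               key=lambda cand: sum(hamming(cand, p) for p in profiles))

# The doctrinal-paradox profile: (A, B, C) per judge.
judges = [(True, True, True), (True, False, False), (False, True, False)]
print(kemeny_aggregate(judges))
# Prints (True, True, True): with this profile several sets tie on
# distance, and min() breaks the tie by enumeration order.
```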

Ben:0:41:53 And when you're talking about truth, you're talking about rational, logical truth, rather than some sort of ethereal truth?

Marija:0:41:59 Again, apologies to the philosophers. What I mean here are specific instances where the truth can be known, if not just now: things that do have a truth value, unlike preferences, for example, which don't have a truth value. I can say red wine is better than white wine, you can say white is better than red, and we can coexist in this world; there is no inconsistency there. But if I say red wine is cheaper in France than white, and you say white is cheaper in France than red, then one of us is right and one of us is wrong, and these are the types of instances I'm talking about.

Ben:0:42:42 But red wine is better than white wine, so there is a truth there, right?

Marija:0:42:45 No comment. Bad white wine is better than bad red wine.

Ben:0:43:01 Yeah, that's the real truth, isn't it. I fear that we could talk all morning, so I was wondering, before I ask the last question, if there's something that you wanted to add which we haven't quite covered yet.

Marija:0:43:12 Oh, well, many things, but I shouldn't go into a rant. I would just like to say that the most crucial thing in machine ethics, in AI, right now is the fact that we use language wrong, and there's a lot of confusion about what AI is and what it can do. In my opinion this confusion comes from the fact that we use the language we use when we talk about people to talk about AI; we need different words. It's the same thing with moral reasoning, and the same thing with moral decision-making: you say moral agency and people immediately imagine the Terminator or HAL or GLaDOS or something, when we're talking about common sense. We're talking about, when you program a robot to bring you a clean cup, and you say, robot, bring me a clean cup, that the robot is able to do so without yanking a cup from somebody's hands, which is what a person wouldn't do.

Ben:0:44:10 Right, so there's a long way to go in the foundation of the common-sense rules to get us to any of the more Terminator-based scenarios, which obviously are easier to talk about in the media and things like that, which is quite aggravating. Do you get equally as upset as I do about that sort of thing when people are talking about this?

Marija:0:44:40 Oh yeah. I have to say that the only way out of this is to just train yourself to understand that when people talk about AI outside of AI research, they're talking about something else; they're not talking about the research, they're talking about this societal, psychological concept rather than the research of AI. That is the only way to sanity, because otherwise you just go: no, it doesn't believe anything, it doesn't feel anything, it doesn't decide anything, it's a program, it's an algorithm, just stop it, stop it, and you just get upset. And of course we can educate people and be more transparent about what AI is, that this is not this big mystery, because if you think of it as a mystery you think that it cannot be learned by common mortals or something. But it is simple in essence, it's very, very simple, and anybody can learn how to do it, or at least anybody can learn what the basic principles behind it are. We should approach it that way.

Ben:0:45:45 Yes, and hopefully if you listen to this podcast you'll have some sort of understanding, and if you go back to old episodes you'll have even more understanding as we talk about the different aspects; specifically in episode 18 we talk about some of the foundational stuff, go and listen. So the last question I usually ask is: are there things that you're worried about within, let's say, AI, the technology we're talking about here, and also what are you excited about within the field of AI research?

Marija:0:46:21 So what I'm worried about is the use of statistics-based methods. There is an excitement about them, and I'm worried that they will be used in situations in which you really should have a person deciding on things that impact people, like, for instance, whether you're going to get a loan, or whether you're going to get paroled. These systems are always built such that they are just advisors: what you get as output from the machine learning algorithm is just advice, and the person should take this advice into consideration and then decide. But what I'm worried about is that people have too much faith in machines and just propagate the decision that is made by the machine, and the machine can be making some very, very stupid errors, right, and that has actual impact on people's lives. But then I talked to some people who work in companies in Norway, and apparently the situation is that people are actually too uncertain about using machine learning, even data analysis, since it's a bit tricky, so maybe it's better than it seems to us up in the academy, where we don't see what people in companies exactly do.

And the second worry is that everybody focuses on driverless cars and robots because we can see them, and nobody, well, nobody is not true, but people are not quite panicky enough about the fact that what is shown to you day in, day out on the Internet is decided by looking at what I call a voodoo doll of you. This is the datafication of your personhood: it incorporates various bits of behaviour that you have exhibited online, all mushed together into a category of this is the type of person you are, and then what is shown to you online, or through your apps, is based on this voodoo doll of you that you have no control over, in a way. This worries me. I like autonomy, and I like to know that I create the world that I see around me, not that some kind of decision about what is presented to me is based on what type of a person some algorithm thinks I am. Facebook currently thinks I'm an established adult, and I take great offence at that; I want to be an unestablished adult. Say you want to change your life: it's very difficult if all that you're shown is established-adult things. That's what worries me; we have to pay a bit more attention to it.

What I really hope for is this: there is a lot of enthusiasm in AI, and, I've heard rumours, a lot of funding for research, and I think we can ride the coattails of this enthusiasm to actually develop really new, smarter systems. We don't really know what intelligence is, but we are pretty certain it's not statistics, and we are doing statistics right now, so we can do better. I think that, given enough time and given enough support from society, we can actually deliver on this. If somebody in 50 years finds this and goes, oh, you were so stupid, I take full responsibility. I hope that we can deliver on the promise of people not having to do tasks that they don't want to do; this is what it is about.

Ben:0:50:00 I think people would get annoyed if I didn't ask you this follow-up question: so what do you think intelligence is, if it's not statistics, like all of these algorithms we're talking about?

Marija:0:50:13 Oh I think that it is in the eye of the beholder.

Ben:0:50:18 Okay, that's a good get-out answer.

Marija:0:50:23 I really think it is in the eye of the beholder. I don't think there is something like an absolute intelligence: something looks intelligent to you, and for your intents and purposes of interaction behaves intelligently, and that is intelligence. Don't ask me about consciousness; I don't know anything about that.

Ben:0:50:38 Okay, awesome. Thank you so much for speaking to me and coming on the podcast. If people would like to follow you, contact you, any of those sorts of things, how can they do that?

Marija:0:50:51 Well, I'm on Twitter. I'm very boring on Twitter because I'm half torn between everybody seeing this professionally and just wanting to rant about things. My publications are typically online, and I do give a lot of public talks, it seems, not on judgment aggregation, mysteriously, but on machine ethics, and where and when these talks are is usually on my webpage; just google my name. There's another person with my name who does research in biology; not that one.

Ben:0:51:31 Great, so thank you very much.

Marija:0:51:33 Thank you very much for having me, it was a pleasure to talk to you.

Ben:0:51:40 Welcome to the end of the podcast. Thanks again to Marija, it was a super interesting conversation, and it was very exciting to be able to ask some of those more specific machine ethics questions of someone who is light-years ahead and doing her own work in this area. So I really urge you to check out some of her work, and find out more about what I think about our conversation on our Patreon, patreon.com/machineethics. Thanks again and I hope you come and listen next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford