47. Robot Rights with David Gunkel
David J. Gunkel (PhD) is an award-winning educator, scholar and author, specializing in ethics of emerging technology. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high-technology with the rigor and insight of contemporary critical analysis. He is the author of over 80 scholarly journal articles and book chapters, has published 12 influential books, lectured and delivered award-winning papers throughout North and South America and Europe, is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University (USA), and his teaching has been recognized with numerous awards, including NIU's Excellence in Undergraduate Teaching and the prestigious Presidential Teaching Professorship.
David recently wrote the book Robot Rights.
Ben Byford[00:00:05] Hi, and welcome to the 47th episode of the Machine Ethics podcast. This episode we're talking with David Gunkel: author, academic and technologist. We chat about AI as ideology, why write the Robot Rights book, what rights are, and the assumption of human rights; computer ethics supporting environmental rights through the endeavour of robot rights; visions of social AI; and much, much more. You can find more episodes from us at machine-ethics.net, you can contact us at firstname.lastname@example.org, you can follow us on Twitter where we are Machine_Ethics, on Instagram MachineEthicsPodcast, and you can support us at Patreon.com/machineethics. Thanks again, and I hope you enjoy.
Ben Byford[00:01:00] Hi David, thanks very much for joining me on the podcast. Thanks for spending this time with me to talk about all the things: AI, robot rights, the books that you've written, all these sorts of things. Could you please first just introduce yourself: who are you, and what do you do?
David Gunkel[00:01:17] Okay, so I’m David Gunkel, I am Professor of Media Studies at Northern Illinois University, where I’ve been for – Lord, it's going to be the past 20 years plus now, so it's been a while. I have my formal training mainly in Philosophy, and in that brand of Philosophy we call Continental Philosophy, but at the same time that I was going to graduate school, I paid for the bad habit of going to school by being a web developer, so I have also a programming background, working with web apps and developing applications for the internet.
Ben Byford[00:01:53] Awesome, great. I was going to ask you about that later on, but we can actually start with that if you want. Where did these kind of things intersect? Or is it just kind of a happy accident that you're both kind of like a technical person and also – having read some of your work – quite a deep lover of different philosophy traditions? There's a lot of reference in your work to lots of different ways of thinking about the subjects that you're trying to talk about, and we'll talk about that in your books and things in a second. But how do those things intersect for you?
David Gunkel[00:02:27] Yeah, it's a really interesting question, because it really is a result of not being able to make a decision as an undergraduate. You know, we ask 18 year olds to know what they want to do for the rest of their lives, and we don't know, right? And we sort of stumble around in the wilderness, hoping to bump into something that makes sense, and when I was stumbling around in the wilderness trying to figure out my majors as an undergraduate, I was drawn in the direction of Media Studies on the one hand, and Philosophy on the other hand.
So the way I resolved that dilemma was to do both. I double majored in Communication and Philosophy, and when I went to graduate school, there were opportunities to work in digital media development, because I was in graduate school at the time the web was just exploding and becoming popular. There was a need for people to develop web content, but no real process by which to credential people to do that, right? At that time we were all making it up on our own, right, and sort of making this thing happen. So I was able to get into it very early, and do some work for industry while I was going to graduate school, which was a nice way to go to graduate school, because it kept me grounded – I think – in the world of actual applications, and took me out of the heady realm of the graduate experience.
Ben Byford[00:03:47] That must have led you into some of your work, because a lot of it covers digital worlds, virtual worlds, artificial agents, remixing, and all that sort of stuff that comes with digital technology. So do you think you were somewhat influenced by having that practical start?
David Gunkel[00:04:11] I guess, yeah, without a doubt. I would say, you know, in the early part of my career, I was very much involved in what was at the time called Internet Studies, and the idea that digital media was creating these virtual spaces and experiences. So there was a lot of talk about that business, I think, in the early stuff I did. And then somewhere in the early 2000s, we began to see artificial intelligence taking a more prominent role in shaping our involvement with our technologies. So somewhere around The Machine Question, there's this pivot to much more emerging tech related to AI and robots, and that's kind of where I've been since that time.
Ben Byford[00:04:56] So at the beginning of the podcast we always ask this one question, which I think probably has a dual answer with you, David: what is AI? And it'll probably also be nice to extend that to what is robotics, or what is a robot?
David Gunkel[00:05:14] Yeah, so these are really good questions, because I think these terms kind of work like time in the Confessions of Saint Augustine, right? We all kind of know what they are, until somebody asks us to define them, and then we don't know what we're talking about, because it's all over the map. So the way that I operationalise both of these for the work that I do is to look at them historically, at how they have evolved over time – how the terminology has been built out by various contributors who've tried to think about these things – but also to look at them from a bigger perspective, one that is not only technological, but also takes into account the social, political and cultural embeddedness of these kinds of ideas. So I would say that although we think of AI as technology, I think AI is an ideology, right? It's an ideology for the way that we think about intelligence, and the manner by which we assume that what we do as intelligent creatures is something that could be manufactured in an artifact.
For that reason, I like to look at AI not just as a set of technologies – because it's not a single technology, it's an ensemble of technologies – and sure enough there are technologies involved in that process, but I also like to look at it as a cultural ideology, having to do with how we think about thinking, and how we think about ourselves as thinking things. “Robot” is also an interesting piece of terminology, because it actually arrives to us out of fiction, right? The term “robota”, which we get out of Karel Čapek’s R.U.R. in the 1920s, is a word that originally meant – and still means today in Slavic languages, like Polish and Czech – “worker”, right? And it's oftentimes connected to forced labour, or slave labour, but even that's a stretch because “robota”, especially in Polish, which I know – I don't know Czech, but Czech and Polish are very close in this area – “robota” is just the word for work. So Čapek sort of repurposes this word in his stage play from 1920 to talk about these artificial servants. And so unlike “artificial intelligence”, which is really the product of the Dartmouth conference that was put together in the 1950s, “robot” is the result of fiction, and we have evolved that term from fiction into science fact, right, but we still – I think – bear the legacy and the burden of how that term was developed in our fiction. I know roboticists will often argue among themselves about how important or unimportant science fiction is to understanding the work that they do. But I do think that, because of the etymology of the word and its connection historically to the stage play, science fiction cannot be extracted from the science fact of the robot as it stands today.
Ben Byford[00:08:30] So you've outlined the cultural beginnings of “robot”, and I really love the idea that AI is basically an ideology – it's almost like, when we realise that ideology, we could probably give it a different name. Because it's actually going to be something other: some sort of combination of technologies which we might actually call something else, while the endeavour of AI is an ideology in itself. I like the way you talk about it like that. Do you think – I mean, on a real practical level – do you have things which you call robots? You know, things which are somewhat divorced from this cultural artifact, this idea of artifice, and that are objects in reality?
David Gunkel[00:09:27] You know, so what I’m going to do is I’m going to borrow from my colleague Joanna Bryson, and she will argue that this thing – the smartphone – is a personal robot, right? It's an object that we carry around with us that supplies us with all kinds of support and connection and information, and really is essential for us to operate in the world today. Most of us, if we were without that object we would somehow feel lost and unable to orient ourselves to the world in which we reside. So yeah, it doesn't walk around, it doesn't look like R2-D2, it doesn't talk to us like C-3PO – although I guess with Siri it does talk to us. But it does play the role that we generally afford to the robot companion in our fiction, and it just takes – you know, its form factor is a little different, but I think the smartphone is a really good example of a personal robot as it stands today.
Ben Byford[00:10:25] And this might be kind of a semantic issue, but why didn't you call your – so your latest book, in 2018, Robot Rights, why wasn't it called AI Rights? Or was that kind of an alliterative issue, rather than anything else?
David Gunkel[00:10:42] Yeah, so it's due to two things. One is the alliterative part: Robot Rights just sounded kind of cool, because then you had the alliteration in the title. But also, at that time there was very little talk of AI rights, while there was a lot of talk about robot rights. And I think it's because of some rather high-profile public events that were put on by Noel Sharkey and Alan Winfield and a few other people in England, where the press was asking them about robots and about moral and legal status. So the terminology “robot rights” kind of became a “thing”, for lack of a better description. It was already in the ether, whereas AI rights hadn't quite been recognised as something that was worth talking about. But I would say that the book develops a strand of thinking that affects both AI and robot rights.
Ben Byford[00:11:40] Yeah, I mean, I think that's how I viewed it, because the distinction isn't always obvious. As you were saying in your introduction, there's this kind of sticky issue where robots inhabit AI-ness – this combination of ideologies which are technologically created. I'm trying not to mix metaphors here, but you have this embodied AI situation, which is quite often how people talk about it. But then you can also have this bodiless AI. And yet a completely bodiless AI is almost not a thing: if you take it to the nth degree, and part of your definition is that it has effects on the world and the world affects it, then there are probably very few AIs that have no body at all. Or, I guess, I'm anthropomorphising those kinds of robots which look more like, behave more like, or which we can relate to more like humanoid-type things. It's almost like we are easily labelling these things, categorising them as robots, because of that fact, rather than for any more sensical, logical reason. I've just kind of gone off on a tangent there.
David Gunkel[00:13:04] No, and you know, part of it is because the book is really a massive literature review, right? I mean, it's looking at the current literature on the subject and trying to take it apart and see who's arguing what from what position, and what it all means. I had to work with the lack of terminological precision that exists in the literature itself, and so you had people using the terms “robot” and “AI” interchangeably, substituting one for the other without any sort of rigour as to what these things delimited. I needed to have at least a flexible enough terminological framework to be able to respond to the range of different literatures that were being developed by people from all kinds of different fields, and the choices that they made for terminology really needed to affect how I took these things up and dealt with them.
Ben Byford[00:13:58] Yeah, the book is called Robot Rights, or robot-type thing rights, I guess. There you go, thank you. What is it that drove you to produce this book, because it's quite a deep dive into how you might think about giving something which is created some sort of personhood, and thinking about how you get to that conclusion. But what is the initial spark that actually makes you want to do that? Why should robots be considered as moral patients and agents at all? Like what drove you and excited you about this area that made you write this book?
David Gunkel[00:14:54] So I think it's probably two interweaving conceptual ideas that came together to make this happen. One is that my undergraduate education in Philosophy was really in Environmental Philosophy, and so I spent a lot of time working on environmental rights, environmental ethics, and things like this. I became fascinated with the way that human beings have divided up their world into “who” versus “what”, right? Who counts as another moral subject, and what is a mere resource that we can utilise and exploit? And the history of moral thinking has been a sort of ongoing challenge to that dividing line between “who” and “what”. So at one point we said, “Well, animals? No, animals don't count for anything. We can abuse them, we can use them, do whatever we want to them.” And Descartes even went so far as to say he did not think that animals even had consciousness, or sentience; they were just resources. And then you get the innovations of animal rights, with Peter Singer, Tom Regan, and a few other people, right? And that starts to change the way we think of animals, and our responsibilities to animals.
Something else happens later, after that, in environmentalism, where we think about the protection of the environment not as something done for our own sake, but because it has a dignity – the Earth itself needs to be cared for. And so you have, I think, a lot of return to indigenous ways of thinking, to try to reason through how Western society arrived at this very exploitative way of thinking about the environment, versus a more indigenous way of thinking about the environment as a companion, as another being to which we owe respect and have responsibilities.
If you push that further, you then begin to look at artifacts. Now in law, we do extend personhood to corporations, and there's a reason we do that, you know: in order to make them subjects of the law. But I began to be fascinated with this idea of moral expansion and moral progress, and whether or not there would be a moment at which we would think that artifacts – robots, AI, whatever the case is – would also factor into this kind of reasoning process, where we would think maybe we have some responsibilities to artifacts of our own design. So it was sort of a thought experiment, but that's the one strand. The other strand that came together to make this book possible is that as soon as I began examining that question – whether artifacts could have moral status or legal status – I discovered that this was a polarising question among technologists, but also among philosophers. It's kind of like the abortion question in technology, right? You're either really for it, or you're really against it. And I became fascinated by the fact that this question had this ability to sort of parse the room into two camps: those who were vehemently opposed and those who were vehemently in favour, and I wanted to know why. I wanted to know why this question – the machine question is what I call it in the earlier book – why this question produced this kind of reaction, and what sort of assumptions, what metaphysical affordances, what sort of way of understanding the world, led to parsing up reality in such a way that allowed people to make these very – almost extreme – arguments one way or the other.
So I didn't have, at the time, a horse in the race. You know, I was just interested in why this argument, why this debate, why this contentious response to what seemed like a rather simple question that extended a rather reasonable way of thinking about the world that had worked for a couple hundred years already.
Ben Byford[00:18:45] Yep, and I feel like you have your kind of “other side of the coin” position from people like Joanna Bryson, who are seemingly diametrically opposed to this way of thinking about machines. Do you feel like that kind of opinion is wrong, or is it just part of that argument, part of that conversation?
David Gunkel[00:19:09] Right, so you know, it's a good question. I wouldn't say wrong, but I would say it is informed by some metaphysical assumptions that don't necessarily hold when you try to build the argument. So what I find on both sides of the argument are ways of constructing the debate that mobilise ways of thinking that I think don't hold up as you begin to pursue them further. So I think, in the people who are opposed to this way of thinking, there is a residual humanism that they just don't want to let go of, and that humanism on the one hand opens up opportunities for a great deal of human history and what we've done, but it can also be faulted for creating the Anthropocene, right? I mean, when we put the human at the centre of the Universe, and make everything else into a resource that serves our needs, we find ourselves exploiting the Earth, right? Exploiting the other creatures that occupy this planet with us. So what concerns me is, you know, what arguments are being mobilised, what assumptions are being utilised, what sort of metaphysical affordances are in play, and what are the consequences of holding to that way of thinking? And what new opportunities might be available to us when we start to open these things up, and crack open these often unquestioned assumptions?
Ben Byford[00:20:43] Yeah, and one of the things that you stipulate in Robot Rights, if I'm getting this correctly, is really interesting: you talk at the beginning about rights, and what rights are – there's a presumption about rights. The presumption is that rights have to be the same across the board, but – and this is really nicely put in your book – “rights” is a term that we can use without it meaning that robots or other things need to have the same rights, or the same types of things, extended to them. But we should probably consider what types of things are extended to those things, and there's the sticky situation of what things you consider “things”, and what things you consider “not things” – or “persons”, or whatever you want to call it. Do you have an intuition about what kinds of rights, if not all rights – as in human rights – robots might benefit from, and why?
David Gunkel[00:21:50] Yeah, so this is a really good question, and I would say if there's anything that I'm currently really working on and concerned with, it's to try to bring to this conversation a much more precise understanding of what rights are, and how rights operate. I think we all value rights. Everyone talks about human rights; it's a very important component of contemporary political discourse. And no one's going to say, “I'm against rights” – that's just not a position anyone's going to occupy. But when you ask people to actually define rights, it's like, “Well, yeah, you know, it's this thing.” We kind of know what it is, but we don't really have a good formulation of what rights are, and because we lack that, the conversation is really deficient in its ability to go anywhere.
Oftentimes, what I find is that the two sides in this debate are debating from different positions of what they understand rights to be, and so they meet in the middle and they have this vehement back and forth, and they're not even talking about the same thing. So what I like to do is take the word “rights”, about which – already in the 1910s – an American jurist named Wesley Hohfeld said, “You know, most people who are in the legal profession don't even understand rights, and they often use the term in very inconsistent ways. Not only in a single decision, but even in one sentence,” right? So he breaks down rights into what he calls the four molecular incidents: claims, powers, privileges, and immunities. And his argument is, if we look at rights in that form very specifically – a very specific claim to something, which then places an obligation on another person to respect it or not respect it, or a power, or an immunity to some imposition – we can get a much better understanding as to what it is we're talking about. And hopefully, when we have these debates, agree upon the very basic foundations of what rights are before we get involved in making wrong-headed assumptions about what we think the other side is arguing.
So the biggest problem right now is, I think, that both sides assume that rights must mean human rights, and that when we say robot rights, we're saying let's give robots the vote, let's give robots the right to life, and all these other crazy things that you might hear from either side of the argument. And I think that really steers the conversation in entirely the wrong direction. I would argue that any rights that would accrue to a robot will be very different from the set of rights that accrue to an entity like a human being. There will be sets of animal rights that will not be the same as the set of robot rights; there will be environmental rights that may overlap with what we recognise for a robot, but would never be the same.
Part of the problem, I think, in addition to defining rights, is that in our legal ontology we operate with only two categories of entity: persons and property. And this is something we've inherited from the Romans. If you go back to Roman law, there were persons, who were subjects of the law and had rights and duties, and there was property. So for example, the pater familias – the only individual recognised as a rights-holding individual in Roman law – was the male head of the household. Everything else – his wife, his children, his animals – was his property that he could dispose of, sell into slavery, or do whatever he wanted to do with. From where we stand right now that's a crazy idea, but that's the way the Romans organised their world. And we did inherit their division between person and property. This is why now, when you want to protect a river, like in New Zealand, the indigenous tribe asks for the river to be recognised as a legal person. It isn't because they think the river is a person, necessarily, but because they recognise that the only way to get legal protections for that piece of property is to get it recognised by the law as a person. So I think in addition to being able to define rights more specifically and clearly, we also need to recognise that we're working with a legal ontology that has some archaic connection to Roman law, and that imposes on us some divisions that are very artificial, and that make our world more constrained than it perhaps should be.
Ben Byford[00:26:21] I mean, for me, is the answer just that we need more categories? And then there will be a hard problem of asserting what fits into which categories. But if we are left in the situation where we have property and personhood – or objects and people – could we fulfil that in some way that would then make it easier for us to attribute some of these rights? Because there would be a subset of rights – not human rights, because they're not persons. They might be something other than persons, with a different set of rights, that we can kind of operationalise in this way for rivers, but also for robots and AI?
David Gunkel[00:27:01] So I think we see our legal system straining in that direction right now. I think you see it in the debates concerning corporate personhood, and the recognition that, with multinational corporations, the status of person seems counterintuitive to the way that we often think about what a person is. We also see it being challenged, I think, in environmental law, where people are using the category of person to gain protections for environmental objects and land features, etc. So I think our legal system is already straining in that direction – straining against those Roman categories, and trying to come up with a more fine-grained way of understanding the legal categories that we'll be dealing with.
But there are two problems here. One is that law changes very slowly, and it will be a long time, I think, before we see any sort of traction on developing something that begins to break out of that binary distinction. Second, law is very local, right? What happens in one jurisdiction will not be the same as what happens in another jurisdiction, so you'll find that in one part of the world the idea of personhood is easily expanded to these things, and in another part of the world it is not. And so you get this patchwork of very different laws across the world, and oftentimes we're dealing with things that are global in perspective, right? So a rainforest in Brazil is not just a Brazilian concern, even though it falls under Brazilian law. Bolsonaro's government can burn it down, and do other things, but it does affect the rest of the planet. So you have to understand that, you know, even though the laws are regional or local, they do have international impact, and I think those two things make this a very complicated procedure. But I do think we're moving in that direction, and we do see signs that there is a desire to move in that direction.
Ben Byford[00:29:12] Well, that's really positive, especially for the environmental picture. It would be great if we could get there sooner, but do you think there's an imperative, from the view of the AI or the robot themselves, to have some sort of rights? There's a lot of reference in the book to agency and patienthood – the ethical terms. As an ethical patient, is there an imperative to have some protections, some claims, some powers and immunities?
David Gunkel[00:29:52] Right. So this is a nuance in this whole debate that is important, and I think crucially important. The reason that we give animals moral patiency, or that we extend to animals various rights, is because of what we think they deserve: because they're sentient, and therefore they deserve not to be harmed. So this idea that there's a sort of dignity to the other that needs to be respected, and that's the reason we confer the rights – that's one strand of thinking in this moral expansionism. That doesn't work when you get to environmental objects, right? A waterway isn't going to feel one way or another if you dam it, right? So when you dam a river, the question of the river's rights isn't a question of what the river will feel when you dam it. It's more a matter of: what will be the impact of that dam on the larger ecosystem? What will be the impact of that dam on the social world in which that river occupies an important role, with regards to the people, the animals, the places that that river serves? I think the AI and the robots will be more of the latter than the former.
I think a lot of arguments go like this – and it's a credible argument: “Well, right now we've got robots, but they're pretty stupid, and they don't really feel anything, and they don't really think, and they don't have consciousness, so we don't need to worry about them. They're just tools. Once they get conscious and can think for themselves, or be sentient, or whatever, then come and talk to me, and we'll have the rights conversation.” That is a way of thinking that I think is really rooted in the animal rights understanding of things. In my understanding of where we're at right now, even these dumb objects might need rights, but might need them for the same reason that the river needs rights. Not because of what the object might feel, but because of what it might do to us. Because of what it might mean for our legal systems, what it might mean for our moral affordances, with regards to the world in which we live. So I would like to look at it in a more macroscopic view, as opposed to this sort of microscopic view that drills down into the entity and says, “Why does this entity need this?” I would rather say, “Why do we need this? Why is this something that serves the needs of the larger community, and how does that play out for us on a larger scale?”
Ben Byford[00:32:23] I think that really makes sense, and the river analogy actually is pretty useful in this way, I think. On a practical note, I think we see those things already happening, right? Anecdotally, you talk about kids talking back to Siri and Alexa, and learning bad habits about how you kind of interact. Like the relational aspect of that – the social relational aspect – how do we relate to things, and how does that actually guide our own development and our own thinking about the world?
So it almost sounds like – have you heard about computer ethics? I read in my research a while ago that computer ethics started as this cute, practical thing concerned with not breaking computers, right? So if you don't shut it down before you turn it off, it's going to break, you know? You're going to ruin the hard drive, because back in the 80s these things were much more fragile and less robust than they are now. So there's this idea about how you treat the machine – how you treat the computer in a way that it won't break, so you can get your work done faster, and all this sort of stuff. But it's almost a nice analogy: we need to treat these things in a certain way, and how we treat them reflects both on us and on how we interpersonally relate to one another, and on how those things are, as a system, linked. You know, how societies develop, and how we tell our stories about cultural artifacts and things like that. If we start kicking the computer, are we going to start kicking dogs? How do these things then continue into our ideologies, and into how we behave? So I think it's already happening, almost, in a way, and maybe this conversation is actually about how we formalise that – how we take computer ethics, or how we take our relationship to things which interact with us on a social level.
David Gunkel[00:34:45] So just two real practical versions of this: so remember hitchBOT, the hitchhiking robot? When hitchBOT was making its way across the United States, it didn't get very far, right? It made its way across Canada, and then they tried it in the US and it got, you know, to the East Coast, and it got to Philadelphia or somewhere nearby, and then it was vandalized. It was destroyed, and there was this outpouring of emotion on social media for the robot, right? From people who were like, “Oh, poor hitchBOT. People killed you,” or you know, “You were attacked brutally by vandals who did not care for you.” That kind of expression, I don't think, is an aberration. I think it's part of being human, right? We express our emotion for things, and we invest in things. Another example is when Boston Dynamics was trying to demonstrate the stability of their Atlas and Spot robots, and they would have them walk around the office building and hit them with a hockey stick, or kick them, or things like this, and again the outpouring of support for the robots on social media, “Oh, poor Spot. Why are they kicking you?”
My colleague Mark Coeckelbergh wrote a really great essay called “Why kick a robot?” or “Is kicking a robot wrong?” and it gets into a lot of this idea that human beings anthropomorphise things, and we oftentimes are told that this is a bug that has to be fixed, but I think it's a feature, all right. Anthropomorphism is a feature of sociality, and we're social creatures, and the way we make sense of others in our world is by anthropomorphising them. Now, I think it should be managed. I think it's a feature to be managed, not a bug to be eliminated. And how we manage it, I think we are only beginning to understand as we get into developing more animal-like objects, more humanoid-like objects, more objects that seem to have intentionality, agency, and all these other kinds of things that we project onto them. But if we're being honest with ourselves, we have to learn how to respond to and respect those things as a social situation.
Ben Byford[00:36:52] And by extension, it might be that if we have all this framework already in place and things, the capacity or capability of things will increase, and we'll get to a stage where actually we need to extend these rights, and we already have a framework in which to do that. You know, we already have some sort of legal or ethical framework to go, “Oh well, they have these rights at the moment, but actually they're showing much more intentionality now in the way that they're implemented, and maybe we need to extend them.”
David Gunkel[00:37:23] Right, and so, you know, part of my argument is – I think – instead of a wait-and-see approach, which says, “Well, let's just wait and see, when the robots come to our law court and ask for rights, then we'll give them rights.” I think that's a very science fiction scenario. We see it in the Matrix – the prequel to the Matrix – we've seen a lot of other science fiction films. Instead of waiting for that, I think we need to get out in front of this, for two reasons. One is we need to be prepared for the challenges that this is going to present to our legal categories, our moral categories, our way of thinking about ourselves and our world. But it will also, I think, help deal with the challenges we face right now with the environment, right? We are living through a moment of intense climate crisis, and I think thinking about these very difficult matters with regards to AI and robots also has a reverse effect in helping us to get the vocabulary and the mechanisms by which to talk about non-human, non-animal things that could have sort of social standing or legal standing, with regards to us.
Ben Byford[00:38:34] Awesome, great. So let's do that then.
David Gunkel[00:38:38] I’m working on it.
Ben Byford[00:38:41] Okay, I’ll give you a couple of days, all right.
David Gunkel[00:38:44] Please do.
Ben Byford[00:38:45] So, we've got a little bit of a general question from Martin Cunneen, who is at the University of Dublin, I think.
David Gunkel[00:38:57] He's at Limerick, but yes.
Ben Byford[00:39:00] Yes, thank you. He was wondering whether you can talk about this area in terms of kind of relational ethics, and that idea – if that factors?
David Gunkel[00:39:12] So this notion of relational ethics is a development in moral thinking that tries to think outside the box of the traditional moral systems of virtue ethics, utilitarianism, and deontology, right. And it is a social approach to ethics that recognises that the moral relationship we have with others in the world is where the ethical impetus comes from. So in my own work, this really evolves out of not only environmental ethics, but also out of the work of Emmanuel Levinas, who developed a form of moral thinking which puts the emphasis on the ethical relationship prior to the ontological decisions. So, as I said before, we divide up the world into “who”s and “what”s, right? That means, before we make moral decisions about how to treat something, we have to make ontological decisions about what something is. And so we have to sit down and say, “This is something that counts, and therefore is afforded moral respect. This is something that does not count, and therefore can be abused, used as a resource, and is just a piece of property.”
But when you think about it, we don't sit in our room and categorise the world and then go outside and start interacting with it, right? We begin by interacting with the world, and so Levinas’s point is we don't first decide the ontology and then do the ethics, we engage with the world in a way that already asks us to make a response to the other. Whether the other is another human being, whether the other is an animal, whether the other is a river, or a robot, that the ontological decisions – the “who” versus the “what” – are done in retrospect, after we've already made choices. After we've already made decisions that have ethical consequences. So the relational approach says, “Well, why not just be honest with ourselves? Why not just say we divide the world up based on how we interact with it as social creatures, and put the emphasis on the social relationship, prior to the ontological decision making?” And that will create, at least in Levinas’s mind, and people who follow him, a different way of doing ethics that is more altruistic, that is more realistic, and that doesn't require all of these metaphysical affordances that typically get us into trouble when we try to make moral decisions.
Ben Byford[00:41:53] Do you think you can get yourself in sticky situations with that way of thinking about ethics, in the way that you might end up having subjective moral outputs instead of more of an objective, logical basis? So, I could see, if you were talking about your embodied social interaction with something other, and then someone else has had a different interaction, and you can come to different outcomes – is there some problem within that sphere?
David Gunkel[00:42:32] So I think it is a potential problem, but it's also an opportunity, because I think it opens up the space for dialogue about these matters that allows for us to challenge each other to say, “You know, your subjective response is this, somebody else's subjective response is that, and how do those two things relate to each other and how does that open the space of conversation about moral matters?” So I don't think this is something that you can decide once and for all, and say, “Okay, go out in the world and have at it.” I think it's a dynamic process of ongoing negotiations and dialogue, where somebody is always being challenged by someone else, with regards to this decision making and that's the way we hopefully make progress in this project.
Ben Byford[00:43:21] I’ve kind of run out of questions, so do you have anything that you want to pull on, threads that you would like to continue talking about?
David Gunkel[00:43:31] I guess the only other thing I would mention that I think is crucial, is to go back to this idea of dialogue that I just talked about. I think a lot of the ways in which these conversations have gone, and the way these decisions have been made has been, for lack of a better description, extremely European and Christian. That we have really imported into AI ethics and robot ethics a very Euro-Christian-centred way of thinking about the world. And that's because not only are these technologies developed and centred in the Global North, but also European philosophy has this kind of footprint that is difficult for us to see outside of, and I think there's a lot to be gained from looking at these matters on a larger global scale. And that there are ways of looking at these problems and these opportunities from different cultural perspectives that I think open up vistas that we didn't previously see, or see available to ourselves.
So I think a lot can be gained, just as we did in environmental ethics, by looking at indigenous thought. I think there's a lot to be gained from indigenous thought with regards to AI. We don't normally think about indigenous thinking and artificial intelligence as somehow being in contact with each other, but I think they need to be. I think there's things there that can happen, and ways of opening up opportunities that we're missing. I think the same has to do with the way other cultures address these questions. If you look at the way the Japanese address a lot of the questions you and I were talking about today, the response you get out of Japanese philosophers and scholars is very different, due to the way that that culture interacts with technological objects. And again, I don't want to say that one culture's way of doing this is better than another, but I do think the conversation between these cultural differences is absolutely crucial to moving us forward, because if we think we got it right, I think that's where we get into problems. I think we’ve got to look at how others have looked at it, and bring them into the conversation, so that we can not only question our own way of thinking about things, but can see other opportunities that we may have missed.
Ben Byford[00:45:45] I think that's the way of thinking in Shintoism.
David Gunkel[00:45:50] So in Shintoism, you have a very strong tradition of animism, right? And in the European tradition, especially out of Cartesian metaphysics, animism is completely put to bed. I mean it's not something that a good European philosopher of the 19th century would take very seriously. I do think there's something to animism. I don't think it's just wrong. I think there's something that is there concerning our experience of the world, and we need to understand where that comes from. What perspectives it opens up to us, and what new opportunities and challenges that it brings to the conversation.
Ben Byford[00:46:36] In the future David, do you think we're heading towards a future where at first we might have more socially adept tools, and then those tools will slowly become members of the family? You know, there's a very strange distinction there, but you know what I’m trying to say is we'll be slowly moving towards some sort of ideological change with the technology. You know, technology is not going to slow down, right? And so we're going to be producing more interesting, cleverer, more “intelligent” systems, that will be able to perform different and more complex tasks, and as those tasks develop we'll anthropomorphise them more – but also, their capabilities will just be greater, so we won't need to actually anthropomorphise them as much, because they will actually be inhabiting some of those behaviours themselves. Is that something that is going to play out, you know? Is that the definite path? And then at that point, we're already in a sticky situation of: what are these things? Are these things going to be incorporated in different ways? Is that the vision that we have?
David Gunkel[00:47:54] So yes, I would say that that probably is a good explanation of how technological progress will affect this, but I also think we have to look at how our own social processes affect this. So, for example, here's the puppy, right? This is a German Shorthair Pointer, and I think of how the dog has changed, with regards not to the dog itself, but regards to how we interact with the dog as a creature in our world. So my grandfather raised these dogs, but in his mind, these dogs were gun dogs. They were for hunting, so they were considered tools, and for a great deal of human history, dogs were tools: for herding, for working, for hunting, you name it. So what did my grandfather do when he had a dog that wouldn't hunt? Well, you did with the dog what you did with any tool – you disposed of it. And for my grandfather, that meant take it out back and shoot it. Now from where we stand, at this current point in time, this seems absolutely barbaric, right? This seems like a really horrendous thing to do, but at the time that he's doing that it is completely reasonable and logical. You have a dog, the dog's job is to be a tool for hunting. This dog doesn't hunt – it's a broken tool – dispose of it.
Today, it’s entirely different, right? This dog is a member of my family. I’m not even a dog owner, right? When I go to my vet, I’m called a “pet parent”. I don't buy a dog, I adopt a dog, right? So the role of the dog in relationship to us is different. Now, the dog hasn't evolved beyond where it was a couple hundred years ago; what has evolved is our social relationship to the dog.
I think just as powerful as the technological advancement will be, there will also be a very powerful sociological advancement in the way that we respond to these objects, because of the way they're created, because of the way they're marketed, because of the way users use these things. Because designers can do anything they want, but users will always subvert designers, so they can design something and say, “I’m designing this intentionally to be a tool,” but users may undermine that completely by the way they incorporate it into their world. So I think we have to keep an eye not only on the technological advancement, we have to keep an eye on sociological advancement, because I think that's going to come first as these things change their position with regards to us.
Ben Byford[00:50:23] Awesome, that's a really good point. I really like that. It's almost what I left out of the picture, right? The cultural relationship that we have and which is going to be alongside all these technological innovations, basically. So we're getting towards the end now. The last question we always ask our interviewees is, what are the things that really excite you and what are the things that might scare you about this technologically-mediated future that we're going to be moving into?
David Gunkel[00:50:59] So the thing that really excites me, I think, is a lot of what we've talked about today. This idea that the way these artifacts are coming into relationship with us is really challenging us to think outside the box, not only about ourselves but also our relationship to our world, to animals, to the environment. And I think in a sense, these can be a kind of object lesson for rethinking our moral commitment to the entire universe of objects that concern us. And that, I think, could only have positive impact as we begin to take greater responsibility for the way we live on the planet and the way we care for the planet, and that I think is of crucial importance right now in the face of climate change and the devastations that we're seeing across the globe. This doesn't mean that we marginalise questions of human rights or human dignity, it means we roll that into a much broader perspective of the world that we occupy that can make room for these other things. That is what really excites me and makes me very optimistic.
I think what makes me pessimistic and worried is that manufactured objects are things that are done by people in power, right? We create these objects and it takes money, and money requires the power to be able to accumulate the capital to make these objects, to distribute these objects, to market these objects. What I’m afraid of is the way that the powerful will be able to use this challenge to their benefit, to the detriment of the majority of the human population. So I think behind everything we've talked about today there is a question of power, there's a question of politics, and if it's done right, it will be open, transparent, and democratic. If it's done wrong, it'll be authoritarian and totalitarian. And I think alongside all of these really interesting innovations, and moral thinking, and social relationships, etc., there is a political responsibility that we have to take up, and take very seriously, because this is not going to happen automatically. It won't produce good results unless we are actively involved in making sure it produces good results, and that we have got to do what we've always done – challenge the power when it is out of control, and when it's abused, in order to assure that we have democratic apparatuses that can respond to these opportunities and challenges.
Ben Byford[00:53:37] Great, thank you very much. That was a really interesting load of things we talked about today. I would love to talk to you some more, maybe another time, if that's alright with you, David. If people want to pick up your book, follow you, and interact with you, how do they do that?
David Gunkel[00:53:57] So you can follow me on Twitter @David_Gunkel, that's my handle. You can get all of my published work, that's available freely on my website, which is gunkelweb.com, and then the two books that we talked about today, The Machine Question and Robot Rights, are available from MIT Press.
Ben Byford[00:54:17] Brilliant, so thank you very much for your time.
David Gunkel[00:54:20] Sure. Thank you, it was wonderful talking to you.
Ben Byford[00:54:26] Hi, and welcome to the end of the podcast. Thanks again to David for spending the time with us. I really liked the idea of environmental ethics coming in from this endeavour of using rights as a way to explore the meaning of rights for different non-human entities, and how we assert those rights, and deal with those rights, and what kind of things those rights could be. I had a really great time with Jess from the Radical AI podcast, so she'll be on the next episode, talking about all sorts of different things, technical and non-technical. So stay tuned again. If you'd like to see more episodes, go to machine-ethics.net. If you want to contact me you can actually go to ethicalby.design, and you can find information about talks and consultation that myself and the other people who work with me do, and thanks again for listening.