75. The moral status of non-humans with Josh Gellers

This episode we talk with Josh Gellers about nature rights, rights for robots, non-human and human rights, justification for the attribution of rights, the sphere of moral importance, perspectives on legal and moral concepts, shaping better policy, the LaMDA/Lemoine controversy, predicates of legal personhood, the heated discourse on robot rights, science fiction as a moral playground and more...
Date: 28th of March 2023
Podcast authors: Ben Byford with Josh Gellers
Audio duration: 01:05:03 | Website plays & downloads: 149 Click to download
Tags: Human rights, Robot Rights, Morals, LaMDA, Science Fiction | Playlists: Philosophy, Rights, Science Fiction

Josh Gellers is an Associate Professor in the Department of Political Science and Public Administration at the University of North Florida, Research Fellow of the Earth System Governance Project, and Expert with the Global AI Ethics Institute. He is also a former U.S. Fulbright Scholar to Sri Lanka. Josh has published over two dozen articles and chapters on environmental politics, rights, and technology. He is the author of The Global Emergence of Constitutional Environmental Rights (Routledge 2017) and Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020).


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the 75th episode of the Machine Ethics Podcast. This time, we're talking to Josh Gellers of the University of North Florida. This episode was recorded on the 7th of March 2023. Josh and I get to talk about nature rights, rights for robots, non-human and human rights, the sphere of moral importance, shaping better policy, the LaMDA/Lemoine controversy, predicates for legal personhood, the heated discourse around robot rights, science fiction as a moral playground, and much, much more.

If you like this episode and want to find more, you can go to machine-ethics.net. You can contact us at hello@machine-ethics.net, or you can follow us on Twitter at machine_ethics or Instagram at machineethicspodcast. And if you can, you can support us by going to patreon.com/machineethics or leaving a review wherever you get your podcasts.

Thanks very much for listening. Cool. Okay. So, hi, Josh. Welcome to the podcast.

If you could please introduce yourself: who you are and what you do. Sure. Well, thanks so much for having me. My name is Josh Gellers, and I'm an associate professor at the University of North Florida in the Department of Political Science and Public Administration, and my primary area of expertise is environmental politics and human rights and, more recently, technology. And so I've been, by way of environmental rights, more recently studying the rights of other nonhuman entities, including robots, and that was the subject of my 2020 book, Rights for Robots: Artificial Intelligence, Animal and Environmental Law.

Awesome. Thanks, Josh. And there's this question that we always ask on the podcast, which is: to you, when we're talking about AI, and you obviously mentioned your book there, Rights for Robots, what are we defining those things as? Robots, AI, those sorts of things? So as you know, right now, that's a particularly thorny issue, but I think that where I'm coming at this issue from is that the very idea of artificial intelligence is an anthropocentric conceit, that human-like intelligence is both something desirable and something that we should strive to emulate in some kind of synthetic form.

So as a conceit. Yes. Yeah. It's hard to extricate our human impulses, our desire to want to play God and replicate ourselves in other forms, from what we're doing technologically. Yeah.

That's really interesting. I feel like you've opened a can of worms, like, straight off the bat there. So there's this idea, I guess, of the AI idea as this striving towards higher intelligence, right, or human-level intelligence and beyond. Is that kind of what you're referring to, or is it maybe other types of technology which emulate maybe animal behavior or some sort of alien behavior? I kind of, like, personally take a broader perspective on this thing, but, obviously, I come from a position of maybe thinking of it more than the general public does, or maybe I've just played too many games or read too many books.

I don't know. No. I mean, that's a good point, and I think that there is certainly a gulf between the way that the public perceives what they consider to be artificial intelligence versus how experts, scholars such as yourself, consider the same issue. And I think that, you know, I have studied social robots, but from a kind of social science and legal perspective. And, you know, thinking back to a more neutral view of artificial intelligence, right, robotics is one of the sort of sub-disciplines under the umbrella of artificial intelligence.

So, yeah, I don't mean to say that intelligence, practically speaking, is the only domain that AI is focused on. It's just the one that probably gets the most media attention. But, yeah, I mean, behavior itself is certainly under that umbrella as well. And so, like you, I think I would take a broader view of the kinds of technologies that are sandwiched in that term. Yeah.

Yeah. Definitely. And I'm really interested in how you came at this, because you mentioned you're coming from this kind of environmental perspective, and then you have this idea that the environment, or the world, the planet, gets some sort of ethical patient status. Right? You can do something to the world, and the world should be a concern.

Right? And then translating that to other things, animal rights, and, by extension, you know, non-living entities. I guess that's probably a thorny issue as well, which we'll get into. But what did that trajectory look like, coming up to and looking into AI rights and robot rights? So I came at this almost by accident because, as I mentioned, you know, my dissertation, my first book, a decade of my career had been focused on environmental human rights and environmental justice.

And initially, I was looking at the origins of environmental human rights and then their impacts. And then I noticed a few years ago that people in my area of the world were starting to focus on the rights of nature, given some of the advances that have occurred in that area, most notably Ecuador's constitution in 2008, which was the first and, as far as I'm aware, only constitution that expressly extends legal rights to nature, although it's now present at a number of other governmental levels, like even at the regional or subnational level, like here in the state of Florida. So, basically, I was looking at what are interesting questions surrounding the rights of nature. And then I think it was because I came across a tweet about Hanson Robotics' humanoid robot, Sophia, being granted honorary citizenship by the Kingdom of Saudi Arabia. Mhmm.

That was the first time I thought, well, wait a minute. So on the one hand, we have something that no one would ever mistake for a human being, that does not possess the kinds of traits that we normally associate with moral or legal status, in nature. And then you have this strange, you know, arguably public relations event where a clearly nonhuman but much more human-looking entity is now given some kind of implicit, if not effective, right, even if it was a joke. And so that made me think, what are the conditions under which certain kinds of robots would be eligible for moral or legal rights? And that was the genesis of, you know, I mean, I've done a little bit of stuff in technology before that, but never about, like, the moral status of things.

It was more, you know, about green building, building automation technology, crowdsourcing, and things like that. Mhmm. But it was because of my background in environmental human rights, and then the rights of nature, that I came across what to me was a very interesting question about the moral and legal status of nonhuman, non-natural, technological entities. And so what I tried to do is examine how the theory and practice on animal rights and the rights of nature could gain some purchase on this machine question. What could we learn from those areas, and how might they apply to this particular case, which is very different in some ways?

And what I realized was that these areas, including human rights, so human rights, animal rights, rights of nature, and even potentially the rights of robots, have proceeded in completely different, albeit parallel, ways. And what I sought to do was try and find common ground where we could develop some sort of coherent theory or approach to any nonhuman. And what I realized by the end of writing the book was that this isn't about robots. This is about rights. This is about the manner in which societies choose what is morally and legally significant, and it's not specific to the kind of entity.

It's about the development of rights discourse, jurisprudence, and how that evolves over time. Given that framing, you could almost then apply it to humans and then see if they satisfy those Mhmm. Conditions. Right? Absolutely.

This is one of the ones that I came across when I was giving a guest lecture in a dignity law class, and I was talking about dignity for nonhumans. And this is, again, a class that's specifically focusing on dignity rights. And even the students there said, you know, it's been a semester, and we still don't really know what dignity is. And in terms of how it relates to rights theory, it's circular reasoning.

It's: humans have dignity, and therefore they have rights, and only humans have dignity, therefore only humans have these, you know, kind of, like, human rights. Yep. And so I view dignity as very unproductive as a way of justifying the extension of rights. And I think that there are other ways that might be more productive, like Nussbaum's conception of flourishing. I feel like that's a closer parallel to the justification used for the extension of human rights than whether something has dignity or not.

So I think dignity is ultimately unhelpful, but it does get us to think about, well, what are the kinds of criteria that would satisfy the extension of rights for a number of different entities, including humans and others. Mhmm. And I guess the question is, if you have this kind of framework that you can apply to different entities, then you have to, I guess, take another view onto what an entity is. And I think you actually pulled this out in some of your articles, that those entities could be a singular robot device, or some sort of multitude of robot devices, or some sort of disembodied AI thing. And there are kind of levels of abstraction that you can then look at to apply some sort of framework?

Yeah. I mean, that's the reality. I borrow from philosophy of technology, especially Latour, and, you know, this is something that I didn't come through philosophy of technology to get to. It came through critical environmental law, which in some ways draws on some of the same ideas. And in critical environmental law, one of the really important insights is that we have this static notion of the universe of legally relevant subjects, when in reality these things are kind of contingent on particular conflicts, moments in time, societal choices, and so on. And so the way that it's described in the critical environmental law literature is the word tentacular, where it's sort of almost amoebic: what we consider to be the sphere of moral or legal importance expands and contracts based on the circumstances.

And this is something that's really hard for a lot of people steeped in western philosophy and western legal theory to kind of wrap their heads around, because we're used to thinking in dyadic terms. Right? You know, we have an adversarial legal system, but also an individualist one. So it doesn't make sense even to talk about the collective rights of indigenous groups, because that runs afoul of some of our assumptions in Western legal theory. So this really disrupts some of our conventional notions about what could be considered a proper legal subject, or even a member of the moral circle, if we start thinking in terms of, actants, assemblages, hybrids, and so on.

And you see the beginning of this in some philosophy of technology and environmental philosophy, but, you know, by and large, it really hasn't caught on in, like, the AI ethics space, with the exception of a really terrific book, whose name I might butcher, so forgive me, but I think it's called Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds. I think that's the title. And so it says, you know, look, we're not gonna be able to say that our legal relationship is a human using an AI in some context. It's going to be distributed.

It's going to be multifaceted. It's going to be multi-agentic, and we need to inject complexity into our legal thinking in order to properly address these kinds of concerns so that we can adjudicate them properly. Mhmm. And I really like some of the stuff you were saying there. I'm quite a visual person, so having that kind of visual idea of this sphere of moral consideration expanding and contracting and growing bigger, that's quite nice.

I can see it. But there is so much complexity there. Like, it almost feels unattainable, you know, dealing with the law aspect and then grappling with how you can tie these things together. I mean, I don't come from a law background, but that seems like quite a task. It does.

Do we have the apparatus for that, almost, or are we kind of going on a kind of parallel track to make these things applicable in a way? How does that work? So, yeah. The short answer is it depends on who you ask. Mhmm.

There's a terrific book by Joshua A.T. Fairfield called Runaway Technology, and his main argument in the book is we really don't need to reinvent the wheel. Law is based on metaphors. Language is based on metaphors. We can look back on historical precedent. We can look back on the way that we have treated other kinds of similarly positioned complex entities throughout time, and draw on those analogies in a meaningful and productive way, so that we're not trying to, you know, cobble together from scratch brand new legal concepts that are gonna be very difficult for uptake and deployment in different jurisdictions.

So that's one approach, which I think is certainly worthy of exploring: how we can dig into our past and excavate things that might be useful. And then there's another perspective, which is that we do need a different kind of language, and employing that language is going to also frustrate some of these existing assumptions that we have in the way that our legal and moral systems work. And that's probably more in the direction that I've gone, again, with the critical environmental law influence that sort of courses through my work. But I think that it's a great opportunity, because these sorts of conversations need to be happening right now. As the title of Joshua's book suggests, technology seems to have this runaway kind of tendency, and it's always the regulators that seem behind the eight ball. But, you know, we see some of these conversations happening, whether it's in the UK Parliament or in the United States Congress or, you know, in other parts of the world presumably.

But I don't think that we've had enough of a diverse conversation about some of the very kinds of thorny issues that you're raising. You know, when the rubber meets the road, do we need to deploy new legal concepts? Do we need to have, you know, brilliant legal theorists come to bear on this issue? Or can we just resurrect some of these old ideas, like ones that we've found in common law or civil law, and use those to address what seem like novel problems? But I think my gripe is that I don't see this conversation happening.

I see that the discourse is all about sort of rhetorically shutting down certain kinds of technologies without thinking more carefully about the balance between, you know, privacy and innovation or things like that. So I'd like to see us, you know, have congressional hearings where you have someone with outrageously outlandish ideas about regulation alongside people who are gonna take more of the Fairfieldian approach where, hey, we just gotta go back into our legal theory and, you know, draw out these ideas, and they're gonna serve us well in this capacity. Mhmm. I feel like that almost feels like a Colosseum-type performance. You know?

We've got the gladiators on one side doing this, like, new stuff, and we've got these other people coming at them on the other side, and they're gonna hit in the middle and something beautiful is gonna come and flourish out of that interaction, maybe. That's a very optimistic take, but that's the one that I'd like to think is possible. Great. We should make that happen and televise it. That'd be quite fun.

Anyway, it might be fun for us. Maybe it won't be fun for anyone else. I guess we've talked a lot about the kind of structural need, or not the need, but the structural nature of this and how you've come at it. What is the need? You mentioned Runaway, which describes to me kind of like a technology taking off and growing exponentially.

We use exponential all the time in these sorts of subjects, but growing massively and becoming a monopoly or something. Are we actually talking about the companies and the money here, like we might be talking about in, say, the EU regulation for AI? Or are we talking about the subjects or the entities themselves, and how we deal with ever more, I'm going to use air quotes here, intelligent systems? Are we talking about the systems themselves? You know, could you give me an insight into where you're coming at that?

The answer is yes. You know, basically, it's both of the above, in the sense that the regulators, the politicians, at least in the United States, are playing catch-up constantly. If you watched the congressional hearings about Facebook, it was blatantly apparent that there's a gap in knowledge between what these technologies are capable of, how they function, what kinds of threats they might pose to democracy, social justice, and so on, and the speed with which these technologies are being rolled out for profit maximization purposes. And so what tends to happen is, you know, what's that expression, move fast and break things.

That tends to be the MO for technology development in this space. And then there's this wave of media attention around some kind of celebrated example where, you know, technology goes wrong, where it does something that's unexpected, or, you know, lots of people begin to interact with it, like in the case of ChatGPT, and it begins to enter the popular zeitgeist in a way that might catch the attention of the regulators, who wouldn't have been thinking about this otherwise. But, I mean, to show you just the tremendous speed with which even something like large language models have emerged onto the public scene: the other day, I saw an advertisement for a technological workshop for educators on ChatGPT.

I mean, this is to me the kind of thing that needs to happen, and it happens way quicker than the regulators can figure out whether this is even a productive technology that we should permit. And I think there's this tension, and I'm not going to say it's a productive one, but there's certainly a tension between the need to regulate and protect citizens from the malfeasance of the corporations that are rolling this stuff out, and the need to allow for the flourishing of innovation, especially where this could potentially result in societally and environmentally beneficial applications. And so you don't really want to stifle innovation and the capacity to realize, in your case, you know, logarithmic gains in the efficiency with which we conduct a lot of our societal, you know, business. But we also need to be wary of the motivations of the companies that are only too happy to roll this out before it's been beta tested, before they have thoroughly conducted audits about the impacts, how it might affect marginalized communities. And so the entities are a product of the motivations of the technological companies. But my argument all along has been, especially for the people who, you know, try and put cold water on these developments: that's fine.

You can hold that perspective that we shouldn't be doing these things. But if you don't tell the regulators to do that today, all you're doing is sort of, you know, blowing smoke into the air. If you don't actively prevent roboticists from developing maximally human-like social robots, then all you're going to be doing is pounding sand for the next couple of decades. So outside of just complaining about things that are happening to you, what are you doing to help shape the discourse, help shape the policy-making process, help educate the politicians about the need to regulate these kinds of technologies, rather than just sort of bellyaching about them in the popular press? I always jump to solutions for these things.

Right? So you've painted quite a damning picture of the way that we are getting to grips with this in democracy. And, you know, we could push younger people into those positions quicker and change the system that way, and that might help us with the technological kind of progression issue. But then the other side of that is, you know, we have this capitalist issue. Right?

And there's stuff we can do there as well. It seems unlikely to me that if we had a more socialist-type system we would even have these problems in the first place, because we wouldn't be maximizing profit in the same way we do for shareholders. We would have some other system which would probably be maximizing, you know, flourishing or social good.

In a way, that is probably more quantifiable than the hand-waving situation that I'm describing right now. But, you know what I mean? It sounds like the issue there is of our own creation. So I wonder, apart from kind of changing the systems, what would you suggest that people do to kind of participate? Well, on the one hand, I think there are mechanisms through participatory democracy that people need to think more seriously about engaging with.

And so, for example, in the Biden administration here in the US, they have rolled out an AI Bill of Rights, and citizens had an opportunity in the lead-up to the rollout of the AI Bill of Rights to participate in its shaping. As a citizen, I wrote a comment to the Office of Science and Technology Policy, just sort of expressing my views on some of the ideas that were tossed out in the preliminary stages. I encourage everyone to participate in those kinds of processes. When I was an intern in the Department of Commerce back in college, one of my tasks was to summarize the constituent comments about a massive free trade agreement that never came to pass. But it showed me that the office was taking this very seriously and wanted to be able to communicate to the leadership in the agency what the American public thinks about this particular initiative.

And so I think that in, especially, democratic societies where these policies are being rolled out, whether it's in the UK or, you know, even parts of the developing world where these sorts of opportunities exist, I think that people should take, you know, five or ten minutes, look over what sorts of things are being presented, and then give their perspective and really think about how it affects them, their community, people like them, how they want to see these technologies deployed in the real world. So that's one thing: citizens can actively participate in shaping what technology policy looks like. The other thing, as you mentioned, and this is a piece of advice I give when discussing environmental justice, is we have seen tectonic shifts in environmental policy, even over the past two administrations. I mean, night and day.

And so if you want to see better policy shaped, better policies that better reflect your values, in some cases this may require, and it doesn't have to be necessarily younger people, although they clearly have a role to play in this process, but I think we should not only elect people who better represent our values, but run for those positions ourselves. You know, that's one of the great things about being in a democracy: you have that opportunity to play a role in the decision making, not only as a citizen, but also as a policy maker. And I think we need to encourage more people, more diverse people, people from historically marginalized backgrounds, to put their name in the hat and to run for these positions so that we don't wind up with the same, you know, old baked-in policies that don't reflect how the world is going. And I think if we had more diversity, if we had a broader set of values and more debate over the kinds of policies that we want to see implemented, we would do a lot better for ourselves in terms of addressing the kinds of existential or very on-the-ground practical concerns that, you know, AI in particular has animated.

So we've talked a lot about one side of the argument that we alluded to earlier. And I think one of the articles that you wrote is a good example of the other side of the argument, which was about Google's LaMDA and the kind of controversy around that whole debacle, as some kind of learning point, almost, to, you know, think about this slightly differently. Would you mind just giving us a quick, brief overview of that and how you're thinking about it? Right. So the Lemoine/LaMDA controversy, to me, signaled an opportunity to have a robust discussion about the prospect of moral or legal status for artificial intelligence.

It kind of began on the wrong foot, which is not Blake's fault necessarily, because he is someone who works in the AI space. He's not a philosopher. He's not a legal scholar. And, you know, the way that that piece kicks off is when I describe how he emailed me early on, when this controversy was not quite yet known to the broader public. And he asked me about soliciting legal representation for LaMDA.

And on the basis that, to him, it had demonstrated its capacity for sentience, and therefore it was sort of eligible for legal personhood. That's not the way that it works. Right? And in my book, I talk about this: the basis for legal personhood is not dependent on sentience. And you can see that even in, you know, the domestic legal context here in the United States when we look at corporate personhood.

Right? Mhmm. Yep. None of the theories of corporate personhood have anything to do with the company itself demonstrating its ability to experience suffering. So I think he didn't understand initially what legal personhood is predicated on, and then from there, took this as, you know, the proper approach by which you could then get legal representation and defend its legal rights.

What wound up happening, which was very predictable, was that once the controversy went public, everyone who is sort of in the elite AI ethics camp, you know, started to do the same things which we have seen in more recent iterations with ChatGPT, which is to, you know, clutch their pearls and say, oh my god, LaMDA is not a person. LaMDA is not sentient. This is so stupid. This guy is crazy, and therefore this conversation should never be happening.

That to me was a real missed opportunity, because what's really going on, and this is something that scholars like Mark Coeckelbergh and David Gunkel have really spearheaded, is that it has nothing to do with the inherent properties that something possesses, how we adjudicate the possession or not of those properties, and how that feeds into its moral or legal status. Because there is no coherent theory at present, although animal rights theorists are very interested in promoting sentience as sort of the sine qua non of moral status and then eventually legal status. Instead, what we saw was that the way in which Blake related to and communicated with LaMDA, to him, in that sort of, you know, minor universe of communication, suggested a kind of moral relationship. And so instead of saying, you know, all we have to do is apply an algorithm, ironically, to determine whether LaMDA is sentient or not, which would then determine whether it is eligible for legal personhood, what we should have been asking is how do people and other entities relate to each other, and what kinds of obligations do those relations generate? And so, again, what's lost in this is that it isn't just about how humans relate to technology or how humans relate to nature, but how those entities may relate to each other.

And one example of this that I came across in the process of writing my book was zebrafish. There's some research that looks at how biotic zebrafish interact with and engage with robotic zebrafish. And, you know, no one really talks about that, because they probably think it's not relevant for ethics. But that's where I would like to see the conversation go. What are our broader kind of metaethical assumptions about the way that the world should look?

And then how does that trickle down into more specific ideas, laws, regulations, policies that govern social life between humans and each other, humans and nonhumans, and nonhumans and themselves? And I think you've kind of demonstrated the higher-level idea of ethical thinking, ethical philosophy: the idea of what is going to be the correct direction, the flourishing, what is our interconnected social relationship, and how does that become better, or how do we decide? And, you know, for me, that's really what this whole thing's about. Right? And it's one of the things I like to talk about when I'm doing talks about AI ethics and things like that.

So it sounds like you don't really want to give machines, or AI things, let's say, what was the word, kind of multitudes or multiple things, you don't want to give these things personhood in the sense that you're wholesale giving them rights and obligations that we humans have attributed to ourselves. You see this as a chance to discern what obligations we have towards other things in the world, possibly the world itself, other animals, and AI entities. Am I getting that right? And if so, is there Yeah.

Yeah. Yeah. Yeah. Sorry. And is there a level where we are suddenly gonna reach a threshold where that becomes more important, or we start dishing out these obligations? Or is it more about having that framework at the ready to apply when it's necessary?

You know, what does that look like? Well, I think you've characterized my overarching perspective accurately, which is, again, it's sort of in line with a Levinasian approach: that we are driven by the ethics, and we have to really consider what we want our ethics to be, what it looks like to be good in the world, to inspire beauty, have a good life, and so on. And then that will kind of dictate how we treat other things and each other. I think, to that point, you know, one of the things that I find absolutely bizarre, especially again among sort of the elite AI ethics folks, is that feminist AI or feminist care ethics, for example, focuses a lot on nonhuman entities, and that care in and of itself is kind of a super-norm or Grundnorm that is an ordering principle for the way that the world should work. And that's pretty much totally ignored by people who wanna have these sort of strict separations between, you know, humans and nature and humans and cultural artifacts.

But, you know, if you look at feminist care ethics, you look at, you know, the literature on queer ecologies, there are all these different ways of thinking about what kinds of obligations there should be on the basis of these, you know, metaethical principles. And so, yeah, that's much more where I'd like to see the conversation go, even though that's maybe at least an order of abstraction or order of magnitude outside of our thinking on, you know, policies and things like that. And once we settle on what some of those ethical principles should be, then we can kind of have them permeate our policies and our regulations in meaningful ways, to sort of guide how social life should operate. Another point that you raised was, you know, that I'm not in favor of giving over wholesale human rights to robots. And I think that that is probably one of the biggest misconceptions.

And, frankly, one of the kind of straw man arguments that's often raised by the anti-robot-rights evangelists. They assume wrongly that when people are talking about the mere possibility of rights for robots, or really any kind of nonhuman entity, even though they focus primarily on robots, that what we want to do is to give the entire array of human rights, first-, second-, third-, and possibly even fourth-generation human rights, to robots. But that is not the case, and I've never seen anyone actually argue that in a paper. Instead, what they're talking about are the kinds of obligations that we might have based on moral or legal status, and then the very specific kinds of rights, and in some cases responsibilities, that those entities might be extended and/or afforded under certain conditions. And so what you're likely to have is not the full suite of human rights being given to anything that even remotely looks like a robot, from Sophia to a thermostat.

What you have instead is a kind of granular, decentralized, and complex approach to the ascription of certain moral and legal rights under certain circumstances, given the type of entity that we're talking about and the context in which it's deployed. So it's much more rich and complex than simply giving the right to vote to a robot. Again, I have never seen anyone actually argue that outside of science fiction. So that is unequivocally not something that people who write about robot rights are actively advocating for. It's a big misunderstanding in the discourse.

What I would like to see is the conversation about what a good life looks like, how to treat everything in our space so that we live more, you know, sustainable, resilient, peaceful lives. And, you know, I think to your point about where conflicts emerge, I think that's where some of these conversations ultimately will get into the issue of, well, what do we do when there is an interest here that may conflict with an interest that a human possesses? Those are perfectly reasonable issues, and I don't think we've quite gotten there yet, although some people have certainly talked about that.

But what's actually happening is that some AI ethicists, as I mentioned, the kind of anti-robot-rights AI ethicists, want to just shut the door on the conversation as if it's gonna go away, which it never will, especially if we bury our heads in the sand. We will ensure that companies are gonna be writing the laws. And to me, that's just entirely unproductive, and so I'd rather see us have a conversation about these specific rights under certain circumstances. And we've seen some empirical literature that's come out in the past couple of years which has begun to prod those sorts of issues. For example, there was a paper by Lima and others, basically a survey that was done online, and they found that the most likely and statistically significant right that could be extended to artificial intelligence would be, and I'm paraphrasing here, a right to be free from cruel treatment and punishment, which sounds a lot like animal rights in a way.

Right? Animal welfare. So I think, and I wrote about this in the latter part of my book, that that may be initially where this conversation is heading. I don't think you're gonna wanna see people in public taking a baseball bat or a knife to, or, heaven forbid, shooting, even a social robot, you know, in the head, on public property. I think that that will just make people feel uncomfortable, even though these are allegedly just tools that have no moral standing of their own.

So I think that, you know, this issue is not gonna go away, and we do ourselves a disservice if we try to imagine it's not going to proceed, if we write op-eds saying that we shouldn't talk about it, or if we just bury our heads in the sand. Mhmm. I feel like there's a lot in there. I'm guessing you don't wanna name names, but are there specific people you're thinking about when you say the AI ethics top-tier people? Oh, yeah.

Sure. I mean, and I don't think I'm spilling the beans by exposing this or anything like that, because they've been quite vocal about their opposition to even literature, conferences, papers that discuss this issue. You know, you have people like Emily Bender, Frank Pasquale, Abeba Birhane, Timnit Gebru. I mean, they are very unequivocal about their opposition to discussing this. Joanna Bryson is another one who's Yeah.

Kind of led the charge for a long time. Again, this is no surprise to people who do work in this space. These are people who have written papers and books, you know, about the subject of artificial intelligence, what is a proper domain of conversation, what sorts of things are verboten. And I think that, you know, as people like Henrik Sætra have argued, it's really not productive to just not discuss things. Even if we disagree, which we clearly are disagreeing on this issue, it's in the interest of the development of science and democratic decision making that these kinds of conversations happen in public, that we hear both sides, that it's not just one side, you know, breathing down the neck of the other or preventing the other from having a say. Because I think what's important is that we illuminate the conflicts, that we give the public an opportunity to hear both sides, and that hopefully it influences policy making as a result.

Great. I don't like to take sides on the podcast, but maybe I'll have some sort of personal rendition of what I think on the Patreon or something. But, so we've got this kind of way of thinking about all this stuff, which is, as you say, it's good that we're having this conversation and that there are people who feel strongly about bringing certain things to light. I have more of a stake, I guess, in the machine ethics idea around this, and we have those people, the detractors, the robotlers, in that space as well, which is always fun to, you know, have these discussions with. So I praise those people.

Keep coming back, and we can get better together. And that's how science works. Right? You know? So, another thing which I think I talk about a lot, which you brought up in some of the stuff I was reading of yours, is this idea that there should be more science fiction.

Right? Or that we should be able to explore these ideas in a multifaceted way. And I'm presuming this is the kind of idea you're talking about. You know? We have these different spaces.

We can explore these ideas, and some of those spaces are in science and political debate, and some of those spaces are cultural. You know? Mhmm. Is that kind of how you think about this? You know, that it can help us.

Absolutely. And I think, you know, even the term robot, right, grew out of entertainment. It was a play. And so it entered the popular lexicon because it was part of our arts and culture, and it has since, you know, obviously influenced the direction of science in this area. And I think it's really a reflexive process by which we learn and expose ourselves to new ideas: through literature, through comic books, through novels, through television shows.

And it gets us to think about how we would suspend some of our existing beliefs and consider these alternate realities, which may or may not ever come to pass. You know, I'll just throw a plug in for political science: my first professor in political science, when I was in college, showed us in one of my classes the Darmok episode of Star Trek, which deals with, you know, communicating with a foreign species, in this case an alien civilization, and the trouble that could brew as a result of miscommunication or misunderstanding. So a lot of the, you know, television and movies around aliens is in some ways also very instructive. But it provides us with the kind of intellectual and creative space to think about alternate trajectories and other ways of worlding, a phrase I like, in which, you know, robots have suddenly become the dominant species on planet Earth, or they become fully integrated into human societies, or they become our partners.

You know, one of my favorite films in this area, which I mentioned in the book, is Her, which features a sort of, I would argue, quasi-disembodied AI that, you know, the main character falls in love with. I think, actually, what for me was so impactful about this was the parallel to the phenomenon of catfishing. And this is one that I brought up in a talk I gave about robot rights a couple weeks ago. You know, catfishing is really interesting because, at least until ChatGPT starts going on dating sites, it involves another human who is deliberately being deceptive, which is something that a lot of people in the AI ethics camp accuse the companies of doing, right, when they're developing these chatbots or even robots: that they're deliberately deceptive. But here you have a social phenomenon where one party is being deliberately deceptive, and the other party doesn't know that.

And they are buying into it wholesale. Mhmm. And they find themselves emotionally involved. And then at the very end, oftentimes, although not always, the person who is, at the wrong end of this experience finds out that they have been lied to all along. The person is not who they said they were.

But what's interesting is that that doesn't change the veracity of the experience and the emotions that they had during the courtship, during the relationship, and so on. That was very real and almost tangible to them during that time. And I think that that's a way to think about what happens between humans and robots or humans and AI. You know, maybe a less controversial example, or in some ways maybe a more controversial example, are those posthumous chatbots that have been developed to simulate a deceased loved one, to kind of keep their personality going into the future so that the people who survive them can continue to interact with them. And that may serve a kind of evolutionary function in terms of how we grieve, how we process our emotions in the wake of someone's passing.

There are also clearly downsides. You know, imagine, for example, that that entity which you believe is your deceased relative, or is a simulacrum of your deceased relative, intersperses in your communication that, you know, hey, you should go change your tires, and here's a $10-off coupon. I mean, there are clearly some, you know, capitalistic implications here that would be very unsavory. But, again, getting back to sort of the popular culture, you know, I think that movie Her is really impactful.

There's another great movie. Well, maybe great's too strong a word, but I saw it on a plane and it made me think a lot: the movie Archive, which is a few years old now, but also has a really great twist, which I won't ruin for anyone, but basically plays off of this idea, or two ideas. One is the sort of evolution of robotics and how we are trying to make them more and more human-like on the one hand, and then also this idea of capturing someone's memory artificially and storing it, and what would be the sort of moral, ontological, legal status of someone's personality if it was housed, you know, in what looks like kind of an old stereo system, and that allows for the continued discourse between themselves and the person that is longing for them. It plays off of these tropes, and so I think that that's a really interesting film to also consider some of these issues.

But again, my main point here is just that there's this sort of dialogical relationship between entertainment on the one hand and what's happening in the real world. And I think it's beautiful, and I think that it is important, because it gives us the space to consider what these worlds could look like if we see technology develop in this direction. And, you know, it provides us with an opportunity to pause before these things are actually rolled out in the real world and to ask questions about whether this would be societally beneficial or what are some of the harms that might be relevant in these cases. So, yeah, I think that science fiction is very generative, and we dismiss it at our own peril. I mean, I think in more recent years, some of those examples from Black Mirror, for example, you know, very often you might find in discussion, oh, that's very dark.

Hey, you know, that's very Black Mirror, or something like that. So you can see how people are relating this cultural artifact to something that actually might be brought about, or where you're seeing it in terms of, oh, actually, that's unethical, or, you know, that's going to be a problem down the line in some way, or it's risky or whatever. And you start having, like you say, this kind of conversation where you're actually pulling the language from culture almost and going, okay, oh yeah, the social credit system, which was featured in Black Mirror.

Yep. You know, I'm currently using an app on my phone as part of my car insurance that monitors my driving habits using an algorithm to determine how safe a driver I am. And that will, depending on if I score well enough, provide me with a discount on my insurance premium. And that's capturing, you know, information about where I'm going, when I'm going, the speed at which I'm driving, how hard I'm braking. And, you know, you can imagine that this kind of information, if it got into the wrong hands, could be really damaging if, for example, it reveals, you know, the frequency and direction of where I'm going with my daughter, and with what regularity. You know, that's why I'm kind of wary of that sort of technology.

So I think that, you know, Black Mirror in particular is certainly focused on, as the name suggests, the dark side of these sorts of applications. But I think it's actually really important, again, to sort of suspend some of our beliefs and to think about what this world would look like if these unchecked technological developments, especially in the hands of corporations, were to proceed without any kind of meaningful regulation. I've also got very strong opinions about insurance, but I won't go into that either. Yeah. And so, obviously, the flavor of the month at the moment: ChatGPT, generative AI, that sort of thing.

How are you feeling about this? Do you have any comments? There's a lot to go into there, but what are you thinking? The conversation right now on ChatGPT, in light of the anniversary of the stochastic parrots paper, is pretty relevant. And, you know, there's all kinds of controversy around that.

There's controversy surrounding the responses to stochastic parrots. You know, Noam Chomsky just wrote an op-ed in the New York Times talking about, from a linguistic perspective, what ChatGPT is capable of and what it is not capable of. Again, which I think actually misses the mark on what is really important, sort of in a humanistic sense, about what's going on in the deployment of this kind of technology, in particular the rapid and expansive use of LLMs. And I think that that is really important and also sheds light on what we mean by artificial intelligence. You know, just over this past weekend, for example, David Gunkel and Timnit, in a rare example of where these two different perspectives on the spectrum might align, both wrote on Twitter about shifting away from using the phrase artificial intelligence.

David talked about cybernetics, which is kind of a much broader term from philosophy of technology. And Timnit wrote, somewhat tongue in cheek, about a blog post written several years ago that calls them SALAMI. It is an abbreviation which means Systematic Approaches to Learning Algorithms and Machine Inferences. And the goal here is to just move away from this rigid construal of artificial intelligence that Noam Chomsky sort of unwittingly falls into in his op-ed by saying intelligence is x and only x, which reifies some of the concerns about, you know, how we view moral and legal status in a very strict way. But it also, I think, animates concerns about, you know, thinking of intelligence in a very singular format, in ways that have been used against people of color in the past.

I don't think that was his intention, by the way. But I think that Yeah. Because of his perspective, he sort of, like, went right into intelligence is this, and so ChatGPT does not satisfy, like, that criteria. Therefore, it is not, you know, mirroring or exhibiting human-like intelligence in any kind of important way.

So I don't know. If any of that stuff is relevant, I'd be happy to talk about that. I think you just have. Okay. Yeah.

I totally understand how people can get caught up in this idea of the category. Right? The artificial-intelligence-ness of whatever we're talking about. And AI has so much baggage and so much culture related to it over so many years that, you know, you have all these subcategories. You have, you know, quite boring algorithms now that were cutting edge and full AI back in the noughties.

You know, you have all these pathfinding algorithms, which are well understood and used in computer games. And then you have expert systems and semantic systems, which were, you know, big in the eighties, and you had these massive learning systems, which you had to give loads of stuff, and they would be able to make connections and logical inferences from the input. And now you have all this machine learning stuff, and by extension you have large language models, as you referred to earlier. And they're, like, built on this idea of large amounts of data fed into very simple algorithms, actually, stacked into massive arrays of numbers, which produces some, Mhmm, you know, almost, let's say, magical output.

But, essentially, like you were saying with the stochastic parrot, it takes data and it finds connections. It finds things which are relevant in that information to be able to fulfill some sort of goal. You know, in this case, the large language models are there to predict the next word: given that we have all this data, we know roughly what the next word could be. And that's a very brief overview of how some new systems work, and there's, like, artificial life and automata and loads of other things of note as well. Do you get hung up on this kind of breadth of this stuff, and that there should be easier ways to talk about it, or there should be more subcategories? Or, you know, the intelligence thing is obviously a problem because it's hard to define.
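As a minimal sketch of that next-word-prediction idea (a toy word-count model purely for illustration; the corpus and function names are made up, and real large language models use neural networks trained over vast token sequences rather than simple counts):

from collections import Counter, defaultdict

# Toy corpus standing in for the "large amounts of data" mentioned above.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which: a bigram model, the simplest possible
# version of "predict the next word given what came before".
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequently observed continuation of `word`, if any.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat", the most common word after "the" here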

Same goes for consciousness and sentience, which we kind of talked about briefly earlier. Are those the sorts of things that you find yourself coming back to? Or do you kind of say, in this instance, this thing is exhibiting these characteristics, and therefore I can talk about it in this way? I feel like I've said a lot there, but, like Yeah. I think it's really difficult.

You know, AI may be a runaway train, in the sense that because universities are staking their reputation on it, because businesses are staking their value proposition on it, I don't know that we'll be able to change the lexicon away from it anytime soon. But I do think that it's useful to revisit what it is that we are saying AI is, and how it's being characterized. I do agree that it's such a large array of technologies that it's almost unhelpful to think of it in terms of artificial and intelligence. I mean, even the phrase, going back to the paper, stochastic parrots, as some people have observed, is neither stochastic nor really about the way that parrots behave. So maybe that's sort of a flawed analogy in a way, but I do think it's productive to continue to talk about what it is that these things are actually doing, the manner in which they're being deployed, and the risks that they pose not just to humans, but to nonhumans as well, to the environment, to animals, maybe even on a planetary scale.

So I'd like to see more of the conversation, you know, less hand-wringing about the verbiage and more focus on, again, these bigger-picture questions. And I don't mean this in the sense of, do we have the right list of AI ethics principles? Because that's already kind of been done to death, and I think enough people have criticized the lack of teeth that these AI ethics principles have; it's not useful if it's not being implemented and enforced. So let's move away from principles. Let's move toward regulations.

Let's have people from varied communities talk about the manner in which these things are causing actual harm. I think that a great rubric for this, which has only recently begun to be discussed in this context, but I'm glad it is, is Sasha Costanza-Chock's book, Design Justice, which has a phenomenal way of looking at design in general, but then at the application of technology in particular, and how we can move away from technology that is designed in ways that negatively impact marginalized communities, through the process of designing artificial intelligence and other kinds of things, like having, you know, hackathons and things like that, which they talk about in their book, and which have been discussed in the context of human-robot interaction. So I think that that is really where I'd like to see things going, because, you know, to the credit of the elite AI ethics folks who are very vocal on Twitter, I think that they do point to a real, clear and present danger that companies pose, especially to people who are vulnerable in our society, through the unfettered rollout of technologies for the purposes of maximizing profit. And those concerns should be heeded, immediately and regularly, and be thought of in the context of our regulatory environment and policy-making processes.

So I think, you know, maybe to kind of put a pin in some of this, or to summarize: I think that the debates over the definitions of these technologies miss the forest for the trees. We need to focus more on the impacts. We need to focus on reining in the ambitions of these companies, because we know that the big technological corporations, you know, they're there to make a profit. They're not necessarily there to serve the interest of people, to promote social justice, to promote environmental sustainability. And so I'd like to see us double down on holding those companies accountable, making our elected officials sensitized to these kinds of issues, and then legislating solutions to the problems posed by AI and other forms of technology.

Wicked. Thank you very much. I think we've reached the end now. I feel like this is a conversation that could keep running and running and running. So, thank you very much for your time, Josh.

It's been a real delight. How can people talk to you, find you, read your stuff? Sure. There are a couple of ways. You know, I'm pretty active on Twitter, at Josh Gellers. I have a website, joshgellers.com.

My email is josh.gellers@unf.edu. And then, you know, I'm on LinkedIn and other social media platforms as well, so look for me there. And I would encourage anyone who is skeptical on the issue of robot rights to at least do the reading, which is something that we always kind of advocate for. And so my book, Rights for Robots, is available for free to download, either through Amazon or through Routledge's website. So you can just download it and read it for free there.

And then, you know, you can feel free to like it or dislike it, but I think being exposed to the different ways in which different scholars and disciplines are thinking about this issue is really crucial. And so just continuing to be part of that discourse and reading widely is really important. Okay. Thank you, Josh, and we'll speak to you soon. Thanks very much.

I appreciate the opportunity. Hi, and welcome to the end of the podcast. Thanks again to Josh for coming on the podcast. I think this one was a long time coming. I think we had contact maybe a year and a half ago about coming on the podcast, and it was just bad timing, with the episode coming out with David Gunkel talking in a similar kind of area.

So also go check out that episode, which is episode 47, Robot Rights. I really like this idea of the sphere of moral importance, and as I said, I'm kind of a visual learner, so that's a nice metaphor for me. And I am also enjoying this kind of heated discourse of robot rights, machine ethics, lots of, like, meta-level kind of ethics and philosophy thinking about, you know, the permissibility of actually doing this research, which is interesting.

You know, should we actually be talking about this stuff? Is that actually detracting? And that comes in all shapes and sizes. So, so yeah. So look out for those.

There's quite a lot of papers to do with, you know, whether we should be doing this stuff at all. So that's really interesting. I think for me, it always comes down to: is it sort of leading into something bigger, and can we learn from it in a meaningful way? And I think with the robot rights stuff, the way that we framed it here with Josh, it's more about understanding where our rights are coming from, what they should be, and what they might look like in the future. And personally, I think the idea of AI and robots having rights in and of themselves is still sort of science fiction, but it doesn't mean that we can't kind of discover what the remit, the level, the conditions, and all these sorts of things might be as and when we get to it, if we get to it.

And given the rate of AI progress and our own ideas around natural rights that we talked about, it's probably a good idea that we have this conversation. That's my opinion, anyway. Thanks again, and I'll see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics Podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford