38. Automation and Utopia with John Danaher

This month I'm talking to the prolific John Danaher about cyborg and digital utopias, why you should hate your job, the idea of robot tax, behaviourism, and theories of moral standing.
Date: 14th of January 2020
Podcast authors: Ben Byford with John Danaher
Audio duration: 44:29 | Website plays & downloads: 842
Tags: Author, Robotics, Academic, Utopia, Work, Morals, Behaviourism | Playlists: Philosophy

John Danaher is a Senior Lecturer in Law at the National University of Ireland (NUI) Galway, author of Automation and Utopia and co-editor of Robot Sex: Social and Ethical Implications. He has published dozens of papers on topics including the risks of advanced AI, the meaning of life and the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. His work has appeared in The Guardian, Aeon, and The Philosophers’ Magazine. He is the author of the blog Philosophical Disquisitions and hosts a podcast with the same name.


Transcription:

Ben:[00:00:03] Hi and welcome to the thirty-eighth episode of the Machine Ethics Podcast. This month I'm joined by John Danaher, lecturer in law at the National University of Ireland Galway and author of Automation and Utopia. We discuss definitions of A.I. and robotics, why we should hate our jobs, what makes for a utopia, horizons of utopia (including possibilities of form, longevity and other horizons), the hope of the digital utopia, the robot tax, ethical behaviourism for robots, and theories of moral standing. It was a real pleasure to talk to John, so thanks again. If you'd like to listen to more episodes, go to machine-ethics.net. If you'd like to get in contact with the show, then email hello@machine-ethics.net. You can support us on Patreon at Patreon.com/machineethics. You can also find us on Instagram and Twitter for notifications, updates, images, what's going on with us and more. Thanks again for listening and hope you enjoy.

Ben:[00:01:06] Hi, John. Thanks for joining me on the podcast.

John:[00:01:08] Hi Ben. Thanks for inviting me to participate.

Ben:[00:01:11] If you'd like to just introduce yourself, who you are and what you do?

John:[00:01:15] Yeah. So I am a senior lecturer in the law school at the National University of Ireland in Galway. And even though I am on paper and by qualification a lawyer, I'm really primarily a philosopher, an ethicist of emerging technologies. I've got broad philosophical interests, but most of my research work and published work is focussed on the ethical, social and legal implications of emerging technologies.

Ben:[00:01:42] So thanks very much for sending me the book. I was able to, I think, do half of it before I spoke to you today, so I have it in my hands right now. You released it a couple of months ago: Automation and Utopia, and it can be found in all good book shops.

John:[00:02:00] You know, I wish that was true. I think it's probably something that you need to order online. It may be in a handful of academically oriented bookshops, but.

Ben:[00:02:09] Right. Yes. And it's a kind of tour through a technologically mediated jobless future, basically.

John:[00:02:17] Yeah... The motivation behind the book was to try and explore, from a philosophical perspective, the, I guess, normative or, to use a phrase I would prefer, axiological implications of a post-work future. So just: what impact will it have on human flourishing and human meaning?

Ben:[00:02:36] And before we just dig into that, I think we always start the podcast with some sort of definition. So in your mind, when we are talking about these technologies, usually pertaining to A.I., robotics, that sort of thing, what are we actually talking about? What do you mean when you're talking about A.I. and robotics?

John:[00:02:55] Yeah. I mean, in the book I use the phrase automated technologies. And perhaps somewhat ironically, I don't spend a lot of time defining that concept precisely in the book, although I have done so in other parts of the academic output that I've produced. So I guess, you know, I would define a robot as an embodied artificial agent of some kind. And I would define an A.I., I guess, using something like the Russell and Norvig framework: a rational problem-solving computer program of some sort, which can be very narrow in its competencies, very domain specific, like it can just be good at playing chess or Go, or maybe it has a slightly broader set of competencies. Also, the long-term hope of many AI engineers is to create a general intelligence, or a reasonably general intelligence.

John:[00:03:50] And yeah, I think A.I. doesn't have to be embodied in the same way as a robot. You can have an AI that just exists inside a computer or a box, a kind of oracle that you talk to. And I think for a robot, it has to be physically embodied in some sense, and can act in and change the world. That said, I think there is a very fluid division between those two things, or a blurry distinction between those two things, because, you know, what does it mean to act in the world?

John:[00:04:14] Do you act in the world if you can communicate with the world outside? So if an AI can communicate with a human being, it is in some sense acting. You know, you could argue about that for a long time. But, yeah, I think the distinction between robotics and AI comes through the embodiment and actuated power of a robot as opposed to an AI.

Ben:[00:04:35] If you would indulge me, I'm gonna start with a pithy comment: John, don't you like your job?

John:[00:04:42] Right. So you're probably commenting on the fact that I have a whole chapter in the book called Why You Should Hate Your Job, which starts with a discussion of why I like my job, or why I say that I like my job. What I've said to other people is that I like my job largely because the conditions of employment are imperfectly enforced upon me. So I'm an academic and I get a lot of autonomy and freedom to decide what I want to read and research and think about. Obviously there are some pressures upon me to show up for teaching, but to a very large extent I am left to my own devices, much more so than in other forms of employment. And I think that I'm lucky and fortunate in the present job market or labour market. But I think a lot of other people are much less fortunate, and that in fact the labour market in most developed economies has settled into a pattern or a set of conditions that are getting bad, or sorry, they are bad and getting worse for most people.

John:[00:05:45] And I often think that technology itself is what is making the conditions of employment worse, partly because it enables more contingent forms of employment and greater surveillance and monitoring of employees, which undermines their freedom. It has a number of other negative effects on how income is distributed across workers, and kind of psychological effects in terms of how dominated people's lives are by the need to work and the need to find employment, and also an increased sense of anxiety and competitiveness around work. You know, we can dive into some of those in more detail if you like, but those are some of the reasons why I think work is a bad thing, even if it's the case that some people have jobs that seem quite pleasant and nice for them.

Ben:[00:06:35] You discuss through the book that technology is taking us to a place where work is becoming redundant, or becoming more automated, and therefore there will be swathes of people, a large percentage of the population, who will be unneeded, unnecessary for the kind of traditional types of work that we have today. And obviously you talk about various kinds of rebuttals to that situation, like the Luddite fallacy that you mention: the idea that new types of work may replace the old, or come about because of the technological change, and that this may or may not be true, but it's probably unlikely.

Ben:[00:07:20] So we're kind of heading towards a future where possibly work as we know it is going to be changing completely, and our relationship to work is going to be changing, which is really interesting. And then you go into stipulating the types of post-work utopia. So, let's presume for a second that our audience is on board with the assertion that work is both bad but also becoming redundant. What types of future scenarios, types of utopian ideals, can we expect to see in the future? Or would you like to see in the future?

John:[00:07:58] So the book itself is structured around an examination of two different utopian possibilities for the future. And the way in which I set this up within the book is that I talk about what impact automated technologies have on the human condition in a broad sense, and not just in terms of work. Work is the starting point for the book, but it branches out into the broader consideration of how these technologies affect our everyday lives as well.

John:[00:08:27] And so the way in which I frame it I take from evolutionary anthropology. There's this theory, which may or may not be entirely correct, but I think it is along the right lines, which is that what is distinctive about the human species... one of the things that's distinctive about the human species is that we have brains that we can use to solve problems. We have cognitive power that we can use to solve problems; we can do that individually or, actually more commonly, we do it collectively through collaboration and coordination with other human beings. And in some sense, that's the secret to the success of humanity for the past forty thousand years, since modern Homo sapiens came into existence: we've used intelligence to gain control over our environments. If people are interested, there's a book which I cite in my book, which goes into this in a lot of detail, by a guy called Joseph Henrich, called The Secret of Our Success, which is all about how culture, collective intelligence, has helped us to become the most dominant animal species on the planet.

John:[00:09:31] And so what do automating technologies do to this dynamic? What happens, I think, is that automating technologies are really a threat to our cognitive dominance, in that we are no longer the best problem solvers in the world. We are now creating machines that are better at solving some problems than us. I mean, at the moment it's narrow, domain-specific problems (going back to the earlier definition of A.I.), but within those narrow domains, the machines that we've created are clearly, according to all objective measures, better than we are. So as this process of creating technologies that are better than humans at cognition expands, we see that our dominance of what I refer to in the book as the cognitive niche is under threat. And so this leaves us with a fundamental existential question, I think, which is: where do we want to go in the future? Do we want to try and maintain that cognitive dominance, that cognitive power, or do we want to retreat from it to something else?

John:[00:10:32] In the book I say this gives us two broad options. We can pursue what I call the cyborg path to utopia, which is that we try to retain dominance of the cognitive niche, and effectively the way we do this is to become more like the machines that are seeming to replace us. Or we can pursue the virtual path to utopia, which is that we kind of retreat from the world into, in a sense, virtual playgrounds. That's a loaded way of putting it, but that's in one sense what I explore in the book.

John:[00:11:06] And just before we go into this in more detail, one thing I would say as well is that even though I frame this in the book in very kind of binary terms, that we either go down this path or that path, obviously it's not true that the choices are binary in that sense. Also, it's not that the cyborg utopia, as I discuss it in the book, is any one thing, or that the virtual utopia is any one thing... They are rather broad umbrella labels for collections of possible futures that we could build.

Ben:[00:11:34] Right. I think you discuss in the book that there is your preferred option. There's this sense that the kind of transhumanist movement would be more cyborgian, whereas maybe this idea of the singularity or, you know, this Matrix-style setup is where we are retreating into virtual environments, and you stipulate that that might be the preferred option, or your preferred viewpoint. Why is that? Why is that the case, do you think?

John:[00:12:05] Yeah. So, to answer that question adequately, I've got to go a long way around it, if you don't mind, and we can sort of have a bit of a back and forth before we finally reach the conclusion as to why I think the virtual utopia is a better option.

Ben:[00:12:22] Right.

John:[00:12:23] But the long way around it is that you have to think in a bit more detail about what makes for a good, flourishing human life and what a utopia would look like. So the middle two chapters of my book are all about this. What is it that makes for human flourishing, and what would a utopian world or society be like? Broadly speaking, when it comes to human flourishing, I adopt a very traditional philosophical framework for thinking about this, which is that to live a good life, you have to be subjectively satisfied with your life, and you also have to do things that are objectively valuable in some sense, that have some objective merit.

John:[00:13:08] Traditionally, the way in which we conceive of the objective merit of things is in terms of the good, the true and the beautiful, to use the classic philosophical phrase: a good life is one that does moral acts in the world, makes the world a better place morally speaking, that contributes to human knowledge, and that contributes to human aesthetic culture as well. So that's the kind of theory of the good. And so to maintain a flourishing life, you have to somehow maintain this kind of link between our subjective satisfaction and the objective merits of our actions.

John:[00:13:44] And one of the threats as well of automated technologies is that they sever this connection between human agency and the objective world. So that's just an initial point. Now, in terms of what a utopian world would look like, this is maybe one of the trickier aspects of the book, because the concept of a utopia, I think, is much maligned and criticized. And as I've learned in discussing the book with people, you know, when they hear that I am arguing for a utopian future, they think that's naive and pollyannaish and silly. "And haven't all the utopian movements historically failed? Why would you want to continue to resurrect this idea?" Not that it's ever gone away. And, you know, I think a lot of the criticisms of utopian political movements and utopian philosophical movements in the past have merit. And so I kind of try to argue for a different understanding or different interpretation of utopianism in the book. It's not wholly novel to me. But to sum it up, what I think is wrong with a lot of historical utopian movements is that they are what I would refer to as blueprint utopian movements: they have a specific model for what the ideal society should look like, and they try to implement that model. So if you've ever read Thomas More's Utopia, you'll know that it has this detailed sketching out of what a utopian society would look like, and there are all these kinds of fixed rules for people. Another classic text in Western philosophy which you could classify as utopian is Plato's Republic, which again is a very rigid blueprint for the ideal city, the ideal state.

John:[00:15:30] And the problem with all those blueprints really is that the blueprints themselves usually don't appeal to most people, and you probably have to have a very totalitarian or authoritarian system of governance to implement them. So what I argue for in the book is an alternative approach to utopianism, which is what I call the horizonal model of utopianism, which is that a utopian society is not something that follows a fixed blueprint, but is rather a society that explores the horizons of possibility. And the horizons of possibility are many: horizons in terms of human form, human embodiment, human lifespan and existence, the kinds of things that, say, the transhumanist movement care about; they care about, in a sense, exploring and pushing the limits of human possibility. There are also geographical horizons, you know, expanding beyond the Earth into space, exploring different horizons of possibility. And there are also, as I talk about in the book, virtual horizons of possibility: different forms of life and ways of living that you can develop in non-physical reality, or even in some altered version of physical reality.

John:[00:16:39] So, yeah, that's how I set it up in the book: we want to build a future that maintains human flourishing and that is also utopian in the sense of the horizonal model of utopianism. And the second half of the book is dedicated to arguing that the cyborg utopia, I think, might achieve some of these goals of maintaining human flourishing and realizing the horizons of utopia, but it is fraught with risk; and that the virtual path to utopia, even though initially it's something that I think a lot of people will reject, arguably holds out more hope. So, you know, I'm an academic, so I don't really come down very firmly in favour of any ideas. So even though I favour the virtual model, I'm somewhat circumspect about it. I think it is also fraught with some risks. I just think it's better than most people would be inclined to think.

Ben:[00:17:31] Yeah, I think that's one of the things I took away from your writing style: you're almost too good at writing your opposing view. So as you're reading the rebuttals to your assertions, you're swayed by those different opinions coming in from different directions. So I was left wondering what you actually thought, almost, or what you actually thought desirable. But maybe that's not the point of the book, and that's not necessarily what you're asserting.

John:[00:18:05] Yeah, I would take that as a compliment. But I know that this has frustrated some people in the past with my style and my approach: they often wonder, what do I really believe? I'm just kind of laying out these arguments, and here's what one side thinks and here's what another side thinks. You know, I try to avoid being completely middle-of-the-road and fence-sitting in the book. I try to come down in favour of certain ideas, but I think the only way to do that effectively and honestly is to evaluate and give a fair hearing to the opposing argument. So I do have a very analytical style. I hope it's readable, I hope it's not too dry, and some people who have read it so far say that it's not overly dry in its approach. It's not exhaustive, but it is rigorous in the sense of how it analyzes both sides of the issue. And so whenever I set out an argument in favour of a particular approach or point of view, I always spend several pages looking at different rebuttals to that argument.

Ben:[00:19:11] So in this horizonal utopia, is it a prerequisite that... I think you were talking about blueprint utopias, but in the horizonal format, is everything else kind of already taken care of? The society has moved beyond requiring base needs, and therefore they are concentrating on this kind of, in my view, research-based meaning in their social construct, or their personal construct maybe.

John:[00:19:44] Yeah, I think that's right. So to qualify as a utopian vision, I think it does have to be a world which isn't riven with too much strife and anguish and deprivation, where people still have a civilization that is capable of catering to basic needs, where everyone has, in a sense, enough of the basic requirements of life so they can concentrate on other things. And this is actually a criticism or concern that's come up from other people who read the book, where they say that what I'm imagining is a world where everyone has first-world problems, in a sense. I think there's an element of truth to that, in that you're living in a world of relative, technologically mediated abundance. But that doesn't mean that the world is going to be stagnant and boring, that we've become this decaying civilization, which some people worry about. We still maintain a kind of dynamism and a forward-looking, research-oriented, explore-the-horizons viewpoint, because we are freed from the basic necessities of life.

Ben:[00:20:47] Because in my view, and this is my assertion here, this is opinion coming in, there's a sidestep of the economic view of this future. And I guess, is that something that is also part of this taken-care-of situation, where your basic needs don't require there to be an economy in the same sense?

John:[00:21:11] Yeah. So when I talk about a post-work future in the book, what I want to say is that there are two basic problems that not working anymore would create for civilization. One is that, given the kind of economy we currently operate and run, that would seem to be no longer sustainable: you could hardly maintain a, broadly speaking, market-based capitalist society unless you have some significant redistribution of income, from the people who own capital, who own the machines, to the people who are displaced. And that income deprivation problem with automation is something that has attracted, I think, a lot of attention and interest in recent years. You see this in the burgeoning movements around a basic income guarantee, or the other kinds of wealth taxes and taxes on robots that people have been suggesting.

John:[00:22:08] You're not a fan of robot taxes?

Ben:[00:22:11] I think it's an impracticality, basically.

John:[00:22:13] For people who are listening: you gave a big thumbs down to the idea of a robot tax. I'm revealing the secret.

Ben:[00:22:20] Yes, that's it.

John:[00:22:21] Well, we can go into this a bit if you want.

John:[00:22:25] But the other problem is the problem of what we do with our time if we're no longer working. Work at the moment effectively dominates the adult lives of most people in most countries around the world: they have to find a job to survive, and they have to make themselves employable. And oftentimes they define their identity and their sense of self-worth and self-esteem around their success in work. What are they going to do if they don't have jobs anymore? And my book is really more about that problem than the income problem. So I do sidestep and skirt around some of the economic issues. I think what I say in the book is that you have to have some solution to the redistribution problem, but I'm kind of agnostic as to the exact form it takes. And this is something as well that I'll just throw in there for people who read this book, and who read any book, which is that no book is comprehensive. No book can include every single topic or address every concern. So you have to kind of pick and choose your battles. I chose a certain target of analysis in my book, and of course I did leave things out. And what I chose to leave out was debates around redistribution and a basic income guarantee.

Ben:[00:23:34] I don't know if you've picked up Stuart Russell's new book, but that...

John:[00:23:39] I have, I have it actually on the shelf behind me just here. I've read the first couple of pages of it.

Ben:[00:23:42] Yeah, that is very rigorously comprehensive in its stipulations before you even get to the meat and potatoes, if you like. I'm pretty sure half of it is the precursor to the meat and potatoes, but it's all good stuff. I've not finished it, but it's good, even though I know quite a lot of the arguments... but anyway, that's not super relevant.

John:[00:24:09] So I'm interested in asking you why you think so. You're not a fan of the idea of a robot tax, you think it's impractical. Why so?

Ben:[00:24:17] I think, as a technologist, someone who codes and plays with technology and is paid to make technology and teach it, I think a robot tax would have to be extremely specific, and I just don't know how it would actually be able to catch the instances in which it would be applicable to that technology. Because, like you said, we might have these robotic systems which are somewhat A.I. driven; they might not be; they might be controlled systems that are running quite basic kinds of physical operations of robots; they might be completely disembodied A.I. systems run by a single company for many companies. There are all these different kinds of instances where you might want to tax the output of these companies, and I don't see how there would be a feasible way to do that, to catch all those instances. And I might be wrong, and I haven't spent a lot of time thinking about the implementation of something like this, but it seems to me to be kind of getting in the way of the real issue, which is the issue that you bring to the fore: the issue of work, rather than the issue of automation generally. I think if we are resigning ourselves to looking at these horizons and pushing ourselves as a society and an organism, then some... I'm not going to say it's inevitable, but some of this stuff is going to happen whether we like it or not. So we should probably move in that direction rather than fighting it with things that, for me, are like a band-aid over a larger problem.

John:[00:26:03] I, like you, haven't dedicated a huge amount of thought to the policy implications of a robot tax, but I do have maybe two objections to it. The first is along the lines of what you're saying: I think it would be difficult, legally speaking, to define the concept of a robot in order to target the tax appropriately. This is a point that other people have actually made about the regulation of artificial intelligence generally as well, that it can be difficult to define the object of regulation. Matthew Scherer wrote a paper about this a couple of years ago, which I wrote about on my website. So, yeah, I think that's an issue.

John:[00:26:38] And then the other point is that a tax on robots would presumably serve as some kind of disincentive to the use of robotics and automating technologies in the workplace. And you might think that's a bad thing, if you follow the arguments that I make in the first part of the book, which are that we should sometimes welcome the automation of work. So yeah, those are the thoughts that I had about the robot tax.

Ben:[00:27:03] So, aside from this book, you've also written a swathe of different things about philosophy, some of them pertaining to technology, A.I. and robotics. You have a TED talk, which people should check out, about sex robots.

Ben:[00:27:26] I have an interest in one of the recent podcasts that you put out about the idea of assigning ethical status to robots. And I was wondering if you could talk about your opinion on whether we will ever be able to assign ethical status, or at what point that could be?

John:[00:27:49] Yeah. So I think I haven't always articulated my position on this with as much clarity as I should have. So I have an opinion about the standards that we should use to decide the question of whether robots have moral status, but I don't necessarily have an opinion about when robots will acquire moral status, or whether any current robots have moral status, for example. Although that'll sound disingenuous to some extent to people who read my work. My view is that it's definitely possible for robots to acquire some kind of moral standing, but I don't know exactly when that will be. In terms of the approach that I have, I favour something that I call ethical behaviourism, which, to put it in its briefest form, is like an ethical variation of the Turing Test, although, as somebody else pointed out to me, what I'm envisioning is in no sense as formal as the Turing Test. So what I think is that if a robot looks and acts and behaves like another creature or entity to whom we would ordinarily grant some kind of moral status or standing, then we should grant the robot the same moral status and standing. And this is irrespective of the fact that we know that the robot is an engineered artefact, or that we know something about the details of how it was created or how it comes to have the performative states that it displays towards us. We can explore that argument in more detail, if you like, but that's the essence of my position.

Ben:[00:29:25] So I think we had someone on the podcast previously who would argue the opposite of that stance, I guess, which would be that the behavioural outputs almost don't matter; it's more important to have an appreciation for the internal workings, and whether there is some sort of internal life or understanding or mechanism (I think there's lots of language that could be used at this point), some sort of way of asserting itself, some sort of conscious knowledge of itself and its own workings, such that its behaviour is a reaction to that, rather than an output from some sort of internal logic which doesn't have any knowledge of its own workings, maybe.

John:[00:30:16] Yeah. So, I mean, I guess we have to step back a little bit here to think in more detail about what it means for an entity to have moral standing and moral status, because I approach this maybe from a different perspective than a technologist or even a cognitive scientist, you know. So, when I say that something has moral standing, in essence what I'm saying is that there are moral constraints or limits on how we treat it that are independent of your own preferences or desires. So you can't just do whatever you like with the object, in a sense, or the entity. If I like, I can tear up my book; I have a copy of my book in front of me. I can rip it up, or I can smash my laptop if I want, because it doesn't have moral standing. But I can't do the same thing to you, or, I would argue, to, let's say, my pet dog. I would argue that they have moral standing or moral status as well, that I have to take their being, their life, into consideration.

John:[00:31:23] Now, there's a deeper question then about, you know, what it is that grounds moral standing or status. And there are many different philosophical theories about what grounds moral standing. So, you know, probably the most popular theory is some kind of sentientism, which is that an entity has moral standing if it is sentient, if it is self-aware. And then there are variations on that: that an entity has moral standing if it has personhood, where it's not just self-aware, it's aware of its own existence across time and it has a kind of robust sense of personal identity across time. And there are other theories of moral standing as well, which would state that in order to have moral status, you have to be a moral agent in some sense. But one of the things that all these theories share is that they believe that the properties that mean something has moral standing are mental in nature: that in order to have moral standing, you have to have an inner mental life that consists of perhaps conscious awareness of the world, but also beliefs and desires and intentions about your relationship with the world.

John:[00:32:34] The problem for me with all these theories is that I agree they sound plausible, that these things should ground moral standing, but I'm not sure how we assess them in practice other than through the behaviour of other entities. So when I look at you, or I look at other people that I interact with on a daily basis, I can't get inside their heads. I can't see what their mental states are. I can only really see what their behaviours are, and I can gauge or guess their mental states from their behaviours. By behaviour I mean here, obviously, physical behaviour or physical interactions with the world, but also the things that they say and the things that they write, the things that they sing about, whatever. So these are all, in my mind, behaviours. And so these are the things that I use to infer what their inner mental state is and whether they have an inner mental life that can ground moral standing. And my point is that we should apply the same standard to our interactions with robots.

Ben:[00:33:36] And I guess that's something that is discoverable in the future, that there is some sort of prerequisite to consciousness. And then maybe the behaviourist argument will become less one-sided, I guess, because there would be some way of assigning conscious, scientific conscious attribution to some sort of system.

John:[00:33:59] Yeah, yeah. I think that the main point of pushback that I get about this theory, definitely from cognitive scientists or even computer scientists, is that they will say that behaviour isn't what determines mentality; it's some kind of neural mechanism that determines whether you have a mental life. But what I would say about that, and this is a very philosophical objection: there's a long history of debating the so-called mind-body problem in philosophy, which is, you know, how can you generate mental states or consciousness from this mechanism that's inside your head? And my view is that at the moment, there's no real good answer as to why a certain functional pattern of activity inside your brain should generate a conscious mental state. We don't really have a good scientific explanation of that. And in fact, the way in which we verify the relationship between functional brain activity and mental states is through behaviour. So what I mean here is that in order to figure out whether a certain pattern of activity in the brain, or a certain region of the brain, is responsible for some aspect of our mental lives, we have to ask people what they're experiencing while we're observing them in a brain scanner, let's say, or we make observations about their behaviour and tie that back to some damage or lesion in their brains. With all the famous patients in the history of neurology who have some kind of mental deficit, what's happened is that they've observed their behaviour first, whether they have a memory and how they interact with other people, and then they've linked that back to a lesion inside their heads. So ultimately, it's behaviour that is verifying the relationship between the neuro-mechanical activity and the mind.

Ben:[00:35:47] Would you concede that perhaps we'll get to a point where we know enough from these sorts of discoveries that we could maybe assign the source of consciousness some parameters? Maybe.

John:[00:36:02] Yeah. So, one of the things that I used to research years ago was brain-based lie detection tests. And that whole field of inquiry is based on the notion that you will be able to come up with a neural measure that is better than behavioural measures of somebody's sincerity or deceptiveness in their behaviour. And it's possible that these things will be successful someday, and thus we will have a sound enough connection that I would say, yes, you do need this kind of mechanical activity in order to have a mental life. I'm willing to concede that. But I will say that I am sceptical as to whether we will ever arrive at that day.

Ben:[00:36:44] And it's possible that in one of those cases we're actually looking at our own makeup rather than the possibility space of other types of being as well.

John:[00:36:54] Yeah. There are a few things that I've written about this. So you referenced this podcast that I have, and a talk that I gave about this, but I also have a longer paper. Some of the thought experiments in that paper are designed to test how committed you are to the notion that a certain mechanism, or a certain kind of material arrangement, is responsible for the properties that ground moral standing. And look, I think these thought experiments may not appeal to non-philosophers, but if you can imagine a creature that walks around and looks and acts like a human and talks to you like a human, but you open up its skull and it turns out there's no brain inside, there's no hunk of matter that we would call a brain inside there, what do you conclude from that? Do you then conclude, well, obviously they're not conscious and they have no mental life and they don't deserve moral standing? Or should you say, well, maybe it turns out that having a brain isn't relevant to moral standing after all? I'm in favour of the latter interpretation of that scenario. But I think a lot of people are more attached to the former, or more equivocal in their judgment than I am.

Ben:[00:38:03] I think our general experience is that it would be more understandable if it wasn't conscious.

John:[00:38:12] Yeah. Again, this is going to be maybe a technical point as well, which is that: sure, our experience is that in order to have a mental life, you have to have a certain kind of neural mechanism to generate that mental life. And so it could be that it is absolutely essential, or physically necessary as I would put it, to have a complex mechanical brain in order to have the kinds of behaviour that we think are determinative of moral standing. But it is at least logically, or metaphysically, possible that there isn't that connection. And my argument in the paper and the work that I've done is merely about that kind of logical or metaphysical possibility.

Ben:[00:38:56] Not the practicality of what the other thing might be. It could be any other situation.

John:[00:39:04] Yeah, exactly. So all I am really trying to do in the work that I've put out there is to say that in the vast majority of these cases, when push comes to shove, it's really behaviour that is the decisive evidence in favour of moral standing, not some kind of inner neural mechanism.

Ben:[00:39:25] Cool. Well, I think we are getting towards the end now. So we have a question that we like to ask at the end of the podcast, and I feel like we've already answered some of this already. But what are the things that excite you and what are the things that scare you about this kind of A.I. and technologically mediated future?

John:[00:39:46] Yeah... In terms of the things that excite me: you know, obviously I argue in the book for some kind of automated future where a lot of our basic needs are taken care of and where we can explore other horizons of possible existence. I'm excited about that possibility. In terms of things that I'm scared about, there are lots of things to be scared about. And this is in some sense a deficiency in the book that I've written, in that I don't spend a huge amount of time exploring all the terrible things that could happen with technology, or with ecological disasters and climate change, let's say. So, you know, I think there are lots of things to worry about. Some people might disagree with some of the substantive details of this, but I think the kind of argument that somebody like Nick Bostrom puts forward in his paper on the vulnerable world hypothesis is plausible: that there are many more ways for things to go wrong in the future than to go right. Hence why I wrote a book that is broadly optimistic about the future, partly as a corrective to, or not a corrective, let's say an antidote to, some of that. If we want to live a hopeful or optimistic life, we have to have something to aim for. And I wanted to try and sketch out something that we can aim for, while fully acknowledging that there are many risks out there that we will need to avoid along the way.

Ben:[00:41:10] Well, I thank you for the positive impact that I hope this book will have, in that way, because I feel similarly. It's almost like writing a depressing song is much easier than writing a really happy one, in my mind. So maybe there are too many people thinking about the bad implications and the bad cultural artefacts and Terminators than there are thinking about the good ones.

John:[00:41:37] So, yeah, I mean, I think that within the world of, let's say, the philosophy of technology or cultural studies of technology, most of the energy is dedicated to articulating negative or pessimistic concerns about technology. Technologists themselves are often criticized for being overly optimistic or naively optimistic, and some of that criticism might be fair. But I sometimes think that people from my camp should be criticized for being sort of knee-jerk pessimists when it comes to technology.

Ben:[00:42:08] Great. Well, thank you so much for coming on the podcast. I'm sure there are lots of ways that people can get involved with your output. So if you'd just like to tell us how people can contact you, follow you, look at your stuff, how can they do that?

John:[00:42:24] So I guess Twitter is one place you can start. I am @JohnDanaher on Twitter, the first person with my name on Twitter, but not the only one. And also, I have a website slash blog called Philosophical Disquisitions. It's a bit of a mouthful, but that's where you can find pretty much everything I've ever written. I have well over a thousand articles on a range of philosophical topics up there. There are also a bunch of academic articles that I've published that are available through that website too, and a podcast with the same name, Philosophical Disquisitions, that you can get through Apple Podcasts or whatever your preferred podcasting service happens to be.

Ben:[00:43:06] Great. Thank you very much for your time and hopefully speak to you another time.

John:[00:43:10] Thanks a lot for this Ben.

Ben:[00:43:12] Hi and welcome to the end of the podcast. Thanks again to John for spending that time with me. Please do check out his podcast and his blog at philosophicaldisquisitions.blogspot.com; the podcast is Philosophical Disquisitions. He is prolific and he's always putting out really, really interesting stuff on many different themes to do with ethics and technology generally, but also a lot of robotics and A.I. and things like that. I am just finishing reading his book, so I can give a proper review of it on the Patreon and a mini review on the Instagram. So check that out.

Ben:[00:43:45] I really enjoyed our talk. I think we could have talked about all sorts of different subjects. As I said, he's spoken and written on lots of different topics, and hopefully we'll take those topics up again in future. On his book and the ideas of utopia that he puts forward: I think they don't necessarily resonate with my own ideas of ideals for the future, or different types of digital utopias. They seem almost polarizing, and I know he's still putting together his thoughts, but they don't necessarily resonate with how I feel instinctively as a human being. But maybe that's a naive position.

Ben:[00:44:23] Again, thanks for listening and I hope you enjoyed it.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics Podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford