90. An ethos for the future with Wendell Wallach

This time we're chatting with Wendell Wallach on moral machines and machine ethics, AGI sceptics, the usefulness of the term artificial intelligence, a new ethic or ethos for human society, ethics as decisions fast and slow, trade-off ethics, the AI oligopoly, the good and bad of capitalism, consciousness, global workspace theory and more...
Date: 3rd of July 2024
Podcast authors: Ben Byford with Wendell Wallach
Audio duration: 01:03:57 | Website plays & downloads: 64
Tags: RoboEthics, Machine ethics, Consciousness, Capitalism, Meditation | Playlists: Philosophy, Machine Ethics

Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence, biotechnologies and neuroscience. Wendell is the Uehiro/Carnegie Senior Fellow at the Carnegie Council for Ethics in International Affairs (CCEIA) where he co-directs (with Anja Kaspersen) the AI and Equality Initiative. He is also senior advisor to The Hastings Center and a scholar at the Yale University Interdisciplinary Center for Bioethics where he chaired Technology and Ethics studies for eleven years.

Wallach’s latest book, a primer on emerging technologies, is entitled "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control". In addition, he co-authored (with Colin Allen) "Moral Machines: Teaching Robots Right From Wrong" and edited the eight-volume Library of Essays on the Ethics of Emerging Technologies published by Routledge in Winter 2017. He received the World Technology Award for Ethics in 2014 and for Journalism and Media in 2015, as well as a Fulbright Research Chair at the University of Ottawa in 2015-2016.

The World Economic Forum appointed Mr. Wallach co-chair of its Global Future Council on Technology, Values, and Policy for the 2016-2018 term, and he is presently a member of their AI Council. Wendell was the lead organizer for the 1st International Congress for the Governance of AI (ICGAI).


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the Machine Ethics Podcast. In episode 90, we're joined by Wendell Wallach on the 25th June 2024. We talk about the time before AI ethics was a thing and the things that academics are talking about, AGI skeptics, the possibility of a new ethic or ethos for human society, Wendell's trade-off ethics, capitalism and the AI oligopoly, the fact that ethics is hard, machine ethics, meditation, and what I'm calling post-rational objective thought, and indeed, consciousness. In this episode, we ran into some internet issues right near the end where we're talking about consciousness and machine consciousness and that sort of thing. So there are lots of edits at that point.

It's a real shame that we lost some of that thinking from Wendell, but hopefully, we'll catch up with him on a later podcast. I also hadn't heard the term satisficing before, so I think I was using it slightly differently than intended. So do excuse my ignorance there. If you'd like to listen to more episodes, you can go to machine-ethics.net. You can also contact us at hello@machine-ethics.net.

You can follow us on Twitter @machine_ethics, Instagram @machineethicspodcast, and YouTube @machine-ethics. And if you can, you can support us on Patreon at patreon.com/machineethics. It was my absolute pleasure talking to Wendell in this episode, so I hope you enjoy.

Wendell, thank you very much for joining me. If you could please introduce yourself, who you are, and what you do. Well, thank you very much for having me. I'm what passes for an expert in the ethics and governance of emerging technologies. I have had many roles, from co-founder of the AI and Equality Initiative at the Carnegie Council for Ethics and International Affairs to chairing technology and ethics studies at the Yale Interdisciplinary Center for Bioethics.

I could go on with that kind of thing, but it's boring as hell, I know. So we'll skip over it. But I've been, what shall I say, one of the earliest people in the world of machine ethics, AI ethics, the governance of AI, and actually emerging technologies more broadly. I did write a book called A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, which has just been published with a new introduction by Sentient Publications just about a month ago.

And that's really an introduction to emerging technologies, their science, ethics and governance much more broadly. But in the context of your podcast, I'm probably best known for a book I co-authored with Colin Allen called Moral Machines: Teaching Robots Right from Wrong. And all of this goes back to a time when there were probably a hundred of us in the world who cared about these issues. So we started mapping out, you know, what the concerns were, what the fields of interest would be. And ever since then, I've sort of been a gadfly trying to think of what's the next thing people are not looking at and tweaking them to begin to look at that.

So now we have 10,000 people who think they are AI ethicists, and so I'm just kinda keeping feeding them fodder for subject matter they should be grabbing on to. Yeah. I actually started this podcast, and when I was thinking about a name, I thought machine ethics was a cool name. But what I didn't realize was that it was actually a thing. And then I looked into it, and I found all this interesting stuff about what machine ethics was.

And that then captivated me into spending more time in the machine ethics sphere. But I think, like you were saying, in the early days there were very few people like yourself. And I think we're probably gonna dig into it a bit more later. But in the machine ethics world, there's still not that many people really. If you're actually looking into it, really.

Yeah. Yeah. That's fair. Yeah. There are some AI people who are taking it more seriously.

There are a lot of people who have talked about value alignment, which is kind of what the computer scientists like to think machine ethics is. But when push comes to shove, there's been very little work that has been done. Mhmm. And there are a few experiments now being formulated around how to create hybrid top-down and bottom-up approaches to AI having sensitivity to ethical considerations. Yeah.

So what we'll do hopefully is we'll just step back one step, and then we'll dive back into machine ethics. So this is probably a ridiculous question, but what is AI? Well, by now you've probably asked enough people that to know that there is not a great consensus around a definition. So I think generally everybody thinks that AI is trying to reproduce or emulate human cognition, or human cognitive capabilities, within machines. But then you get into the thing of whether you're talking about narrowly just emulating or reproducing.

Some people think they're reproducing. I think in most cases they're emulating, because they're not doing it quite the way humans do it. But, you know, most AI is pretty narrow in the sense that it does one task or a few tasks that humans do. Though the futurists, particularly those who believe in artificial general intelligence and artificial superintelligence, think that whether we're reproducing or emulating, we will have machines that far exceed humans in all cognitive capabilities. I'm a bit of a friendly skeptic toward whether that's reasonable, but a friendly skeptic who recognizes that in terms of some human faculties, machines probably exceed us.

Yeah. So do you think that you don't subscribe to the AGI possibility, or the way that they are framing it, or that it's, you know, maybe further away than they think? How do you feel about that stuff? Well, it's a very difficult problem. Okay?

Where it is and who's correct? Obviously, the near-term people are the Ray Kurzweils, who have it happening in this decade. And a lot of computer scientists wanted to say within 50 years in some of the earlier surveys. Mhmm. But with the onset of large language models, many people lowered that.

But I think now that we're understanding the intrinsic problems of generative AI, I think that how far away that is is going back up again in the minds of many computer scientists. I think the central problematic is who believes that the mind and what the mind does can totally be reduced to a computational theory of mind and evolution. And if you add those two pieces together, then you will eventually get artificial general intelligence. And I think the skeptics are those like myself who say that is a somewhat too simplistic and materialistic, or physicalistic, meaning kind of material-science-based, notion of what mind is and even what a human being is. So you presume you can emulate everything, but there are subtleties that are coming into play that are not reducible to either biochemistry or computational wiring and so forth.

But that said, I think all of us have been surprised at how much apparent intelligence you get out of systems that are just playing with language, for example. Yep. I think we're coming to understand that much more of what we recognize as intelligence is intrinsic to the very language we use. But then when a language model starts to do something very stupid, you understand that, yes, some of that intelligence is intrinsic to language itself. Other parts are just in a totally different realm of reasoning or cognition.

Do you think it's almost easy to... I mean, this is a semantic issue, isn't it? But do you think it's more useful to actually just change that intelligence piece and use the kind of, like, branding term smarts or something like that? Something less anthropomorphic. So it's like artificial smarts, because it's doing interesting things which are helpful and are, you know, more or less narrow. Maybe, you know, the LLMs are less narrow, obviously.

But the intelligence piece confuses things maybe. Well, I think it's very unfortunate that we had this term artificial intelligence, and you can blame it on John McCarthy and this invitation he made to what we now think of as the fathers of artificial intelligence, who gathered for a summer workshop at Dartmouth in 1956. But I think it also points to the naivete of what computer scientists think they are doing and how easy they think it is to create. So back then, they thought that within a decade we would have machines that could be grandmasters at chess and which would communicate in natural language, you know. And we didn't beat Garry Kasparov, you know, for another 36 years or something like that.

They actually assigned vision to one graduate student that summer, and we really haven't fully solved the vision problem of artificial intelligence. So, I mean, you know, we've seen this over and over again. We've gone through cycle after cycle in the last, what is it, 68 years now, where almost nothing happens and then there's some breakthrough and everybody thinks whatever.

It's all around the corner. And, you know, the latest of that, of course, is generative AI. But that's not to say that generative AI hasn't opened up fascinating frontiers for AI. I mean, my particular bailiwick for decades has been emphasizing what could go wrong. And I'm not emphasizing what could go wrong in the form of being a dismisser.

I'm just saying, yes, but this is also on the table. And if you don't direct attention to deepfake scams, whatever you want to talk about today, you will not really ever get around to realizing what you think is the potential. Mhmm. And what is the potential? We don't know that.

Even if artificial general intelligence is not on the table, the potential of humans adding that extra bit of magic that we can bring to the table, together with artificial intelligence, does perhaps open up at least a universe of what can be understood scientifically or... Yeah. You know, or computationally. Yep. So they can be applied in lots of different fields and push us further towards things that we want to do, I guess. Yep.

So I noticed that on your website it says you might be working on a new book called Cloud Illusions: Self Understanding and Ethical Decision Making for the Information Age. And I wondered if part of that was you trying to assert whether we need a new type of ethics or a new type of thinking behind how we deal with this stuff. Is that the case? Yeah. It is.

And I can give it a little broader framing, which is what it has now. It has a broader framing. Right. But I think we need a new ethos, you know. Call it Enlightenment 3.0 or whatever you want to.

I think we're at a critical juncture in human history. And the kinds of challenges that we're confronting, such as artificial intelligence, but also climate change, all kinds of things. Yeah. We're confronting things that we don't really have the right tools to deal with. And I have been, what shall I say, trying to create what I think are elements of that ethos.

So my focus is now broken down into three books, and I'm trying to make it click. One is moral intelligence in the digital age. The second is self-understanding in the digital age, and the third is meaning in the digital age. And I've been in the ethics world for a long time, long enough to know that not only are people stupid about ethics, but we don't have the right way of even talking about it. So I don't look at ethics as the application of a process or tools per se or values per se.

I look at ethics as a part of life where we're trying to navigate uncertainty. And the fact is that many of our choices are difficult decisions; many of our challenges are decisions where a confluence of values impinge upon them, where we don't always have the information we need or can't predict the consequences of our actions. So I look at ethics in terms of how are we gonna navigate that, and the ethical systems that have been created were largely attempts to help navigate that. Now the first of the ethical systems were largely about rules and duties. But when you think about that, that's really about thinking fast and slow.

How do you have a quick response to a challenge at hand? And a quick response may be necessary for survival or because there's no time. But you know, what we're dealing with with ethical challenges are those that are devilishly difficult, that require a lot of attention, a lot of work. You know, some of them are no easier than solving the dark matter, dark energy problem in physics. Wow.

Yeah. And they really require a whole community working on them, and they require new tools. And the two new tools that I'm adding to the nomenclature are what I call trade-off ethics and what I call a silent ethic. And a silent ethic is more about human moral psychology, and it obviously plays into things that have come up in meditation and sports medicine and all kinds of things about one's relationship to one's inner states of mind and thought and thinking. But trade-off ethics... and that's easier for people who've been immersed in those worlds for years to understand, harder for people who have never meditated, for example.

Can I say something pithy? It sounds like marriage ethics, doesn't it, trade-off ethics? Compromise? I mean, it's not marriage ethics, but, obviously, in marriage you have to learn about trade-offs.

Exactly. Yeah. But trade-off ethics is more the reasoning part of it, and the silent ethic is more of the moral psychology. Mhmm. Mhmm.

And these are both separate, but they can be seen as complementary. Trade-off ethics is just saying that when you have multiple courses of action, don't look at only the benefits, but what the trade-offs are. So what the price is for each of those courses of action. And if you don't address the downsides as well as you make your choice, you have not made an ethical choice. So it's basically, you know, slamming simple consequentialism. Now, a good consequentialist would say, yeah, we also have to look at the bad consequences.

Yeah. But the fact is that's never been intrinsic to consequentialism or utilitarianism. Does it involve a certain amount of prior, like, information about the environment or the system that you're working within? You probably haven't answered this, but we live in quite complex, interconnected situations. So does that trade-off just work with the knowns, or the information at hand, essentially?

Well, you can only work with the knowns, but you can also look at the unknowns, and what the unknowns are, what doesn't play out. So my assumption is, I do some empirical work, and my assumption is that if I introduce this form of AI, the people who will lose their jobs are x, y, and z. And this percentage of them will be in North America, and that percentage will be in Africa and so forth. So we can do that empirical analysis, but we're in a universe of unknowns and probabilities. So it's not enough to just tag the knowns.

It's also very important to look at what might be the high impact but low probability outcomes. Yep. But we're all still navigating in a world of unknowns, of uncertainties. And that's, you know, a recognition which has become axiomatic for people who have thought about it. You know? So all of that is a part of moral intelligence, but it just says, if you're really gonna make the good decisions, there's a lot of work in it.

And it may be that in some of these areas it needs collective participation from those people who can navigate or have expertise in different facets. And in some situations it needs input from the public, even if the public's response may be naive or conservative. Mhmm. So, you know, if you talk about, you know, tinkering with the human genome, for example, you will probably get responses that don't fully understand what we're talking about in terms of the genetics. But they also do say there are certain outcomes we're very uncomfortable with.

And... Yeah. And we think the pursuit of those, because they serve the advantage of a few individuals but maybe to the detriment of a large percentage of society, really cannot just be ignored, you know; that has to be part of your conversation. Now whether that conversation, you know, goes through a worldwide ban on some technologies, or whether that conversation goes into the regulation of technologies, or whether that conversation goes into, you know, those who want to experiment in these areas understanding that you have this responsibility in your experimentation. Yeah. Yeah.

Yeah. I guess it behooves us to think about this stuff. And it feels like, when I've been talking to organizations, you might have a framework or way of working that you can apply to, let's say, the data science pipeline. So you're making a thing: let's think about the stakeholders and the ethics of using someone's data in a certain way and all this sort of stuff. But at a macro level, it doesn't really tell you whether what you're doing is good in and of itself, in a way that we need something to happen there.

And I think, if I can bring it back to one of your pieces, you wrote a paper about this governance piece. Right? So I think there's this idea that maybe there's too many actors doing things on their own, and we need to, like, play together, essentially. Well, whether we're gonna actually play together, whether we need at least some international governance to moderate, to have deeper conversations. I mean, my problem is I think, in nearly all situations, the actors are incredibly naive, and few of them recognize that.

So... Mhmm. So, you know, I brought together workshops where I brought many of the leaders in AI together with the leaders in philosophy and machine ethics and other things. And these were the first times these people had, you know, encountered each other. And I found that at least the computer scientists were amazingly cavalier about dismissing ethics and what little ethicists knew. Mhmm.

And amazingly presumptuous about how they could throw out a term like value alignment and really understand what the hell they were saying. You know? Kind of like, we can give you a rather bland term, apply it to the problem we wanna solve, which for many of them was superintelligence. Right.

And if we can just align the machine's values with human values, everything is fine. But it's clear that everybody who actually took that problem seriously finds very soon that they're in the world of ethics. And it's very clear that good ethicists, people who think deeply about any problem, even a small one, come to understand how quickly, particularly with technologies, these become sociotechnical artifacts. I mean, they're sitting on the fringe of what technology can do and what's going on in human society, and the two shaping each other. And you get into this exponential growth of consequences that nobody could ever fathom. But it's nice to start with a group of people who have a deep sense of where the problematics lie, you know, where social factors are gonna determine more about the implementation of the tech.

I mean, right now, social factors are basically determining the technologies. AI is in the control of a digital oligopoly, an oligopoly whose primary obligation is to its shareholders. Yeah. And primarily about making money. You know, so that doesn't mean we're developing good AI or we're putting adequate investment into quelling the downside.

And that oligopoly knows that some of the regulations that should be put in place are not in their interest. And they'll do everything they can to quell it, and it's not that hard because our governance structures have corporate capture. Mhmm. The corporations just tell the legislature, well, this is very complicated and you don't fully understand it. So leave it to us.

And by the way, here's $100,000 towards your re-election campaign. Nice. Is that a societal issue that we need to sort out? Who does that come down to? Because, for me, we live in this capitalist situation.

And when you consider the shareholder piece: if we're not making, whatever it is, in this case AI products, for, you know, cash, for money... You might be a public organization. You might be making the AI systems to help healthcare or whatever it is. That's a very different kind of environment to be thinking and creating in. So I was just wondering if the capitalist situation is probably getting in the way of maybe better outcomes for the general social future, I guess.

Right. Well, I mean, that's a great question. You know, I think you know what a great question is. That's what I asked you. And there's obviously those people who have a simplistic answer.

The problem with capitalism is that capitalism has been productive. Mhmm. And that, within certain legal structures, capitalism helped us find a way to innovate, to have corporations and, you know, limited liability so that people could experiment and... Mhmm. We got the industrial revolution. We got the germ theory revolution in medicine, and we got the sanitation revolution in public services and so forth.

I mean, there's a lot of good there. But I think capitalism has gone askew, particularly in America. And, you know, somewhere in history, like in the 1920s, there was, you know, the growth of the labor movements and so forth. And they established some countervailing power, which was, you know, kind of a good thing. But I think what's happened... and we can all look at different historical antecedents as to what went wrong with capitalism.

But today we have capitalism where basically all goods, all productivity gains, go to a small class of people. And there is no obligation. And I don't care how many billions, you know, Bill Gates has given in taxes. Yeah. It doesn't alter the fact that he's in a class of people that none of us are.

Now I'm not in that class of people, but I can own a few pieces of stock. You know? So I am an owner of capital, which is, you know... but the fact is I own a minuscule amount of stock compared to the 1%. And how that happened, I don't know. But, you know, I mean, I can go through a historical analysis of where I think the critical elements are.

There are clearly economists out there who can do a much better job than I can on that. But it's clear that capitalism has gone wrong, and we have political economies that are controlled by the capitalists in ways that, you know, were only speculated about at the turn of the 19th century, when we would write books, you know, like the ruling elite and so forth. So that's a real problem. The real problem is both how do you get a society's value structure realigned to serve the collective good better?

And then how do you, you know, how do you get that implemented in government? I mean, the nice thing is that you see a Bill Gates and a Warren Buffett, you know, come around and say, well, now that I've made all this money, what should happen to it? And, you know, in some cases they're doing good; whether it fully compensates for the role that they played in feeding a capitalist system that is out of whack, you know, I don't know. Yep. I don't know.

I think there's a lot that I could say here, but then it becomes a capitalism podcast and a social futures podcast instead of an AI ethics one. So we'll really have to have that podcast at another time. Yes. Yeah. Yeah.

But, thank you. I stopped you while you were saying... this dual ethic that we were talking about earlier, about this compromise piece, utilitarian compromise? Well, I called it trade-off ethics. Trade-off. Yes.

Yeah. But since utilitarianism is the aspect of ethics that looks at, you know, alternative pathways and picks the greatest good for the greatest number, whatever that is. Depending on your definition of that, it seems like utilitarianism. But you know, I'm not a utilitarian in the sense that I think that utilitarianism is the end-all and be-all of ethics. Yeah.

So I actually think that the greatest good is also intrinsically entangled in certain deontological principles. Right. Right. When you were making this ethical assessment, at that point, you have, like, these probabilities. This may or may not happen.

This is a bit unknown. This is very much known. And you can lay it all out. At that point, could you then say, let's be utilitarian about this now, or let's use virtue ethics and say all these things are wrong because they're not striving for, you know, a virtuous person. Or you could use the other reciprocal part of your ethic that you were going to talk about as well, which we haven't gotten to yet.

That's what I call the silent ethic. Yes. Exactly. I mean, I'm saying ethical decision making is tough. It's hard.

Yeah. We're trying to make it easy. We try to reduce it to thinking fast, to what we can react to, what our habits are and so forth. And that's not enough, you know. So when you really look at some hard problems, consequentialism in its present form is inadequate, deontological systems are inadequate, virtue ethics is inadequate, you know, but all of those have certain truths to them.

All those are languages that we use to work through. And if you can bring all of those into play in an intelligent form, then maybe collectively we can work through some of these challenges, and not, you know, just personal problems. But these are really collective issues that we as a society have to deal with. But some very interesting things come into play when I've tried to look at the different courses of action and what might be the trade-offs of each of those courses of action.

And then I've looked at what can we address. We can't address everything, you know, with the downsides. We don't even know what it's all gonna do. I find that actually, oftentimes, the net benefit between one and the other is not so great. Right.

Okay. Yeah. And, you know, I'm not ready to say that that's intrinsic. I mean, that would be like an ontology that would say nothing you do makes any difference. Right.

Right. Yeah. Yeah. You know, I'm not ready to go to that, but I think it is true in many cases. A lot of the most attractive courses of action don't look very attractive once you look at the downside.

So I think that eliminates the worst of them. You know? That kind of analysis can eliminate the worst of them, because you just say, you know, you can see that humanity doesn't want a course of action that's gonna eliminate three quarters of it. You know, I mean, just to be absurd here for a moment. Yep.

And you get down to a few courses of action, and then the question is, well, which of those are we gonna go with? Yep. And here's where I think ethics does get into virtue ethics, and it does get into a realm where, for somebody who has worked through that, certain kinds of feelings or mental states... Mhmm. Start to indicate you're getting closer to the best of these actions. You know, the best of these.

And you may not fully know why, but for whatever reason you feel that it's probably the best course of action. Or you've even looked at whether... well, you know, I sometimes say I'm a racist. I'm not trying to say that. I say I'm a racist because I get into a lot of situations where I just favor minorities over white people. Right. Yeah.

So I look at that and say, okay, did you go with B because you're a racist? You know? Yeah. I'm afraid that piece is just gonna get cut, and if somebody wants, they'll have a deepfake and they'll destroy me. Luckily, I don't have that kind of power, so nobody needs to destroy me.

You'll get cancelled by some sort of clowns or something. In other words, that's where virtue ethics comes in. That's where self-understanding comes in. You know enough about yourself to understand what your strong prejudices are. You then look at your feelings in that regard, and then you see what you're left with.

And I think your feelings are still pretty good, and it's not just feelings. There's a certain point you get to. And, again, if you haven't meditated for a long time or done this kind of thing, people don't understand this: you get to where you're just quiet. You don't need to think about this sucker anymore. You know?

And for whatever reason, when I go with A, I gotta think much more about my prejudices. When I go with B, I'm just quiet. Mhmm. You know? I know that I'm in the right place at the right time in the right way.

It feels like you could post hoc justify that, can't you? You could say, you know, imagine a politician coming out and going, we've decided the best course of action is this because it felt good. You know what I mean? That would be... Well, that's been the problem with satisficing. You know?

It just felt good. Yes. Right. Felt good means something if you've done the work. Felt good means absolutely nothing if you haven't done the work.

Right. You know? And I think there will be post hoc... I mean, I think there'll be post hoc justification even here, you're right. Mhmm.

What's interesting to me about the silent ethic, and what I think is revolutionary in what I'm saying, in the sense that I don't know anybody else who has said it, is that I think in your decision making process, if you have a degree of self-awareness and relationship to your thought and thinking and mental processes and so forth, when you look at various options and you don't know what to do, you should go with the one that makes you feel relatively quiet. Less thinking, less, you know... there's a dropping away of having to go over this territory again. So the problem with spirituality and meditation and all these things is they have made inner quiet a goal. Because inner quiet is understood to be energizing, you know, a doorway to altered states of mind.

You know? So they make that a goal. The problem is, if you make it a goal, then you're repressing thought and thinking and emotions because they get in the way of that goal. This is actually looking at that from totally the other way around. Right.

There's no problem with the goal, but it's the byproduct of your decision making process. And if you don't understand it as the byproduct of your decision making process, then you will be satisficed. You will be satisficed with something that is short of your best efforts. So if you relate that back to meditation, could you satisfice in working through what your mind is thinking about, to the point where you are satisfied? You are calm because you worked through those different thoughts and you become quiet because... But you are calm.

Naturally calm, as opposed to being satisficed. Satisficing basically says you can only think of, you know, a limited number of options and therefore we go with the one that we feel most comfortable with. So I'm saying that, well, that may be partially true, but that doesn't get you the full distance, and certainly not when we're talking about some of these big problems. Yep. I think it also requires a certain kind of trust of what the subconscious mind is doing, which gets us into a whole other territory that I still haven't fully written about, and I'm not sure that many other people see it in the way I do.

Yeah. Yeah. Yeah. But the point is that if you have a degree of self-understanding and you know pretty much when your ego is active and when you're just quiet, you know, then you can work with that. Then you can work with, okay...

There's still too much Wendell in this. This is still too much about me and what I have. Right. Right. Right.

It's something like that, but I'm not really quiet. And that requires a degree of work, effort, you know, so that's where virtue ethics comes in. Yeah. Yeah. That's where spirituality comes into being. And it doesn't rule out that you go to warfare or anything like that.

It's just, you know, it's just that you have to see that... well, that's who I am and where I stand, which is basically Krishna's argument with Arjuna. So if that's who you are. Right. These may be your cousins and you don't want to go to war, but that's your dharma. Now I can raise questions about, you know, that kind of dharma or whether that's really what Krishna was saying.

But he happens to be saying it in the context of a war, and he happens to be, you know, a warrior. But the point is, virtue ethics has always had courage and, you know, other virtues mixed in there. I look at it a little bit differently. I think each of us is a nexus of consciousness in a sea of relationships. And I'm here, you're there.

And you being there has a whole confluence of history and... Mhmm. What functions you've taken on, and what responsibilities you have, and what you've learned about yourself. And same here, and it's not the same, and we aren't in the same space. We may be able to subsume each other's space for a moment or something and have a conversation like this. Yeah.

But it's still that we are in a sea of relationships where we're all fulfilling various functions in this vast planetary cosmos that works perfectly, not to mention... Yeah. Cosmos. Yeah. It goes on beyond.

And, you know, I mean, I think there's a reason you've done this many podcasts. You see that you're fulfilling a certain kind of need, and somehow that's who you are. You're interested in these questions that you think need to be probed more deeply. And you feel that if you can probe them more deeply, or if you can present to your listeners people who have probed them more deeply, we may have only moved one grain of sand, but we've moved one grain of sand in our collective ability to appreciate what the challenge of the day is. Yeah.

Yeah. Yeah. And I'm hoping, as I hope the listeners are, to be taking this in, taking this on board. Thank you so much. I realize that we don't have that much time left.

So, if you will indulge me, what I wanted... because I was trying to make a list of things that you touch on, that you're interested in, that maybe you've written on again and again. And one of the last things, and there's obviously more you can say about machine ethics stuff, I wanted to hit on the consciousness aspect, because I thought that was both sticky, and also you have an opinion. You've written stuff. So I was interested in the idea of machine consciousness.

Whether we can talk about AMAs, like, artificial moral agents, and whether they are inherently conscious or need to be, or vice versa, whether consciousness demands some sort of morality embedded in it. You know, even if it's just a one-liner on that. Well, I mean, I can say a lot here. Yeah. Exactly.

Yeah. I'm gonna make a few presumptions in this. I'm gonna presume that you and most listeners to this understand that there is a massive field of consciousness studies now. And though it tends to embrace a certain degree of skepticism about whether we know what we're talking about, because it's sometimes called toward a science of consciousness, recognizing a lot of the problematics, a lot of what it's done is get into trying to define terms and recognizing the differences in how people are using words like consciousness. So some of the problem is, well, what is consciousness?

And to what extent is it unique to humans or vertebrates or, you know, other animals. Mhmm. There are certainly aspects of consciousness, or how it's evolved, that we apply, that we don't see much of, you know, in other animals. And the difficulty again is what's innate and what emerged over time? So when we're talking about consciousness, are we talking about an innate faculty, and innate to whom?

Some people like Harnett think it's innate to any animal that can feel. And recently I was looking again at The Origin of Consciousness in the Breakdown of the Bicameral Mind, which was a bestseller in the seventies. Mhmm. And it's basically an argument that consciousness emerged.

It emerged in humans about 3,000 years ago, and the bicameral mind, referring to the two lobes of the brain, was preconscious humans. Humans who were not yet aware of being conscious actually functioned as if they were having auditory hallucinations... The Iliad is really a preconscious document, yeah, a document about a preconscious civilization. You know? So I don't wanna get too caught up in all that, but I think it's a question very much of what, you know, what definition of consciousness we're talking about.

And I tend to think there is something about consciousness that is intrinsic to the very universe we're moving in. That's a panpsychist viewpoint. But it still isn't really what we talk about as consciousness until it's apprehended by a body that feels and... Yes. And absorbs and reflects. So I'm willing to grant consciousness to a vast array, you know, of creatures, but I also say there's always something in consciousness that is evolutionary.

Now when you get into a dualistic system, basically Buddhism is one, there's this pure consciousness that exists, and as an idealistic system it says all of matter has emerged from that. I can feel you want me to move along with this. No. No. No.

No. It's fine. So, I think consciousness is and will remain a great mystery as to what it is. In principle, it would be damn hard. But it doesn't matter, because the science isn't there yet.

So my own view is that consciousness does something. It does something because of the way it's entangled in human actors. And it's not just outside of the human system. I'm not sure that's the right word, but it's doing subtle things, and it's being interacted with by subtle brains that are more comprehensive in their relationship to consciousness than I believe we're going to get with machines in any foreseeable future. There's almost two problems there.

One, we don't really know what consciousness is, and therefore it's hard to attribute it. And secondly, it's probably much more complicated than we could possibly design, or have... like, in one of your Moral Machines sections, there's this idea of emergence, and it feels far-fetched to allow things to, or design things to, emerge. Well, to understand that, in Moral Machines we got Stan Franklin. And Stan Franklin had a theory of consciousness, and he had a theory of machine consciousness that was based on Bernard Baars's global... global consciousness theory. Mhmm.

Global workspace theory. So that was kind of a pragmatic relationship to consciousness. We took Stan up on that and worked together with him and produced some papers on the machine model we developed together with him, in the next-to-last chapter. The superintelligence project presumes that we're gonna have machines smarter than humans in all respects. Mhmm.

You know, we were a little bit more bottom up, kind of saying, well, when you get what we refer to as functionally ethical systems, then how are they going to recognize the ethical significance of problems they encounter and factor through the best decision? That's very different than, you know... it's kind of just, you know, how do you ensure that your autonomous system, which has been given this task, like taking care of or being the companion for a human in their home, how does it deal with the ethical challenges that will come up in that? And, you know, how do you deal with climate change or the inequality of political economies and so forth? Yeah.

So we weren't looking at a model that was gonna solve all problems. We were looking at a model that could just help people think about this from the bottom up. And Bernard Baars and Stan Franklin called this, you know, a theory, or first Bernard Baars, with global workspace theory, called this kind of a theory of consciousness. But, I mean, I've been meditating for 50 years, and I could probably say that anybody who has meditated for that long has apprehended things about mental states and their own consciousness that just seem to defy the simple computational model of what consciousness is.

Mhmm. Now that said, Tononi has a model of consciousness that perhaps could be implemented. You know, there are certainly models of consciousness, such as like generative AI, that get us further down the road than we thought we could get computationally. Yes. But it would be a version of consciousness which is called X, Y and Z, and it's going in this direction instead of going in this other direction, maybe.

Right. And I just think it needs the complementarity of wise human beings to deal with these more complex issues. So when I'm saying we deal with complex issues, I'm not ruling out a role for computer systems in that. I'm just saying that we need, you know... just as these are collective problems, so we can see many kinds of people at the table.

For example, if we're gonna come up with a full new ethos, I'd want economists at the table. Just for an example. Yep. Yep. Yep.

So I might be an ethicist who is more comprehensive than narrow and focused. But, you know, some of those problems are gonna need the more narrowly focused person who is aware that this particular problem is much more nuanced than is naturally thought. I've attended hundreds of lectures, if not thousands, on different ethical problems, you know, enhancements and, you know, bioethical issues around end of life. And every time I go into these lectures, I think I know what the core issue is. And then the lecturer says, well, if you look at this from a deontological perspective, there's this and this and this.

Look at it from a virtue ethics perspective. That's also good. And I've come to see these as just languages of ethics. You know. And simplistically, even those of us who try to be more comprehensive thinkers tend to have, I won't say a superficial understanding because we're often aware of things that others wouldn't be.

But in many subjects, we don't have, you know, great depth. You know, when a human is intelligent, you kind of understand what you don't understand, you know, and you sort of know what the next question is beyond your level of understanding. I feel like that's a

constant thing with age for me. The knowing more and knowing that I don't know so much more. It's a paradox. Yes. Exactly. It's a paradox. So the last question we normally ask is around something that you might have already mentioned, but: what excites you and what scares you or gives you pause in this kind of AI techno-social future that we're creating? So that's a good question, and one I'm often asked in podcasts as the last question.

Yeah. So, I mean, first, let me thank you, because I think this has been a wonderful discussion. And I always feel the discussion is revealing of the intelligence of the person I'm interacting with. And we've gotten into some fascinating corners together. Thank you.

That's very gracious. My concern is that we will never get a handle on managing these technologies, and they're gonna be applied willy-nilly. And they're gonna be applied in ways that serve the interests of small elites at the expense of humanity as a whole. Mhmm. And so it's not like I sit here fearful of artificial superintelligence.

I'm less concerned with artificial superintelligence than with forms of intelligence that humans attribute more intelligence to than the systems really have. And therefore they put them in positions of making critical decisions, totally naive about some of the high impact, low probability... Mhmm. Decisions or actions that these systems could take. So that's the AI part of it.

And I don't see anything in the form of either will or governance yet that makes me feel that we're really gonna get a handle on this, and I'm not sure that that is even likely to happen until we have some relatively major tragedy, and I hope it's not too tragic, where humanity begins to demand that... Mhmm. Of their legislatures. So that's on the AI side.

It's not just AI. It's biotechnologies. It's nanotechnologies. It's geoengineering. Yeah.

It's a whole plethora of new technologies we're dealing with. So my basic sense is we're on the wrong trajectory at the moment. And that trajectory is dominated by corporate interests, and the focus is largely the replacement of human labor, and surveillance. And nudging us onto a more satisfactory trajectory is a difficult task. But what I do is I'm looking for what I call the Salt March moment, which refers to something during Gandhi's career, where a little action actually had quite an impact in nudging the trajectory that humanity was on.

And that, in that case, India and England were on. So I look for the Salt March moments where, through sometimes something very subtle, we could nudge the trajectory just a little bit, and therefore our destination would be radically different than where it would have been otherwise. Mhmm. But I'm not sure that what's taking place is our collective intelligence at the moment. I'm concerned that our tendency to be stupid, human stupidity, could prevail.

And human stupidity, I'm afraid, is going to prevail at least in one form: in attributing more intelligence to computational systems than they have. Right. So, whew. On that bleak note... I kind of want... I'm grasping here. I kinda wanna leave on a positive note.

Do you think there's a... I mean, you obviously talked about the fact there are positive things that AI can do and achieve. A positive note is pretty simple. I mean, I am not a naysayer. I'm not an anti-science person. I'm not a person who wants to shut down these technologies.

I think we can realize all the potential there, but we do need to muster a little bit of will and intention toward managing these technologies, and a willingness to cooperate more broadly than individuals and nations seem to be demonstrating they will. At the moment, my hope is that there will be Salt March moments in the coming years that will right that trajectory. But I have no doubt that we can do it. It's just the pessimistic side says we don't seem to be doing it now.

Yep. Well, thank you again for your time. This has been absolutely fascinating. And like I said before, it's been a real joy for me, because I've been looking at your stuff for a while now, and you've been on my list for a very long time. So thank you.

Hopefully, let us know when your latest books are out, and maybe we will pick those up and check those out. How do people follow you, read your stuff, contact you, all that sort of thing? Well, carnegiecouncil.org is where we've been running the AI and Equality Initiative. We're reflecting though on whether that's going to end or not, but a lot of my, you know, editorial content and articles we have developed together with other leading thinkers on the governance of AI. And they're all on that website.

There's a podcast about many topics that we didn't talk about here, that was just on a couple of weeks ago, of me on that website. There are podcasts I do. My closest colleague is Anja Kaspersen, and her writing and podcasts are there also.

So carnegiecouncil.org is a good place to go, but a lot of material is now showing up on Sentient Publications also. I think they're a .org, and they have various other podcasts, lectures, panels about my latest book, or the latest edition of my A Dangerous Master, on that website. Thank you very much for your time, and hopefully, we'll speak to you again. Thank you. I truly enjoyed this and look forward to our meeting in person one day.

Thank you. Welcome to the end of the podcast. Thanks again to Wendell for joining us. I definitely hope to get him on again in the future. Like I keep saying, it was just awesome to have him on and talk about all these different subjects and get his opinion, as I've been reading his stuff and people associated with him for nigh on 7 to 8 years now.

I really like his idea of the two ethics he put forward, the trade-off ethics, and the other one, which is more kind of the meditative human side or human experience side. I'm personally not 100 percent subscribing to that aspect of all this kind of objectivity of experience becoming part of that decision making process. For me, personally, it feels a little bit fuzzy, but maybe with a few meditations I could come around. I also massively agree with the thrust of his argument that we are heading towards this need, or this kind of evolution of this ethical ethos, that we need to pull into the digital realm, that we need to kind of think about these things slightly differently and maybe consolidate or push forward in a new way. It feels like a pertinent thing that we should be doing, hopefully doing more together as well, as Wendell pointed out.

Thanks again for joining us. If you can, you can support us on Patreon at patreon.com/machineethics. And I'll see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; code, design and data science teacher; and freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers on AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford