104. Fostering morality with Dr Oliver Bridge
Oliver Bridge is an interdisciplinary researcher and educator specialising in morality studies. During his PhD he focused on the intersection of the philosophy and psychology of education and morality. Since then his research interests have evolved to include Machine Ethics, where he aims to apply lessons learnt from the sociological and psychological studies of morality in the context of AI. He is also interested in Systems Theory as a framework for understanding morality and moral development in psychological, social, and artificial systems.
Transcription:
Ben Byford:[00:00:04]
This episode was recorded on the 19th of September, 2025. As you'll hear in the episode, this was recorded at Oliver's house just after we'd been to the Superintelligence Conference held at Exeter University. Oliver and I talk about Machine Ethics, Superintelligence, virtue ethics, AI Alignment, fostering morality in humans and AIs, evolutionary moral systems, socialising AI, and the importance of systems thinking.
Ben Byford:[00:00:43]
If you'd like to find more episodes, you can go to machine-ethics.net, and you can contact us at hello@machine-ethics.net. You can follow us on Bluesky at machine-ethics.net, on Instagram at Machine Ethics podcast, and on YouTube at Machine-Ethics. And if you can, you can support us on Patreon, patreon.com/machineethics. Thanks again for listening, and I hope you enjoy.
Ben Byford:[00:01:13]
Hi, Oliver. It's a pleasure to be in person. So we're at your home in Exeter. So we've just been at a conference, which you told me about. And if you could tell me who you are and what we were doing today, that'd be great.
Oliver Bridge:[00:01:30]
I am a lecturer at the University of Exeter. I'm based in the School of Education. I'm teaching psychology, so I'm essentially an education psychologist. And my main interest is in machine ethics. And for the past few days, we have had a conference on superintelligence being hosted here in Exeter, which I was tangentially involved in organising. It's been a good conference, though. I enjoyed it quite a bit. And today we had some philosophical sessions going around on the consciousness of machines and the morality of machines, and alignment. So, yeah, I'm very happy with that event. It went really well.
Ben Byford:[00:02:16]
How was it? Because that's a lot, right? Did it go well? Did you think you guys got what you wanted out of it? Obviously, you had lots of differing kinds of talks going on. How did you feel about it?
Oliver Bridge:[00:02:30]
I really enjoyed it a lot. There was a very nice smattering of different approaches and topics. We had talks on the morality of superintelligence, the consciousness of superintelligence. We have had talks on the sci-fi origins of conceptions of superintelligence. That was a very interesting talk. We have had other talks which discussed the skills that we might end up losing if we start relying too much on machines, et cetera. We have had a very nice range of perspectives basically feeding into the whole discussion. So, yeah, I enjoyed it a lot.
Ben Byford:[00:03:09]
Yeah. I guess for our listeners, because I was actually there today with you, but I wasn't there the last two days because it was a three-day event. In the podcast, we ask, what is AI? If you have this general idea of what you're talking about when you're talking about AI, and then to extrapolate away from that, the conference was called Superintelligence. So what's that as well?
Oliver Bridge:[00:03:38]
Okay. It depends on how deep we want to go and how much we are happy getting bogged down with stuff. On its face, AI is basically any intelligence demonstrated by digital systems. That is what makes it artificial. Artificial being born out of human efforts rather than nature's efforts; more directly, more independent of human efforts. And intelligence is very difficult to define. I suppose at some level, I would open the discussion by calling it information processing, I suppose, at a certain level of complexity, but I would not be able to define what that level of complexity might be. Superintelligence. The definitions that were given at the conference roughly settled on a type of intelligence that is not human, that is digital, that exceeds human capacities in every respect. I am inclined to go with that definition, like superintelligence being something that we don't know how to deal with, something more than human.
Ben Byford:[00:04:53]
Yeah. I think, as it was described earlier today, you're kind of saying AGI or general artificial intelligence could be something which is on par with us, but then superintelligence is definitely way above that.
Oliver Bridge:[00:05:08]
Yes.
Ben Byford:[00:05:09]
In terms of, you know...
Oliver Bridge:[00:05:10]
That is one of my understandings. One of the papers that went around had a taxonomy not presented during the conference, but in the build-up to the conference, one of the papers that we were discussing was on a taxonomy of the generality of artificial intelligence. It puts machines like ChatGPT and Claude at levels one or two, of a total of five levels. It's been several months, so I do not remember the details of that paper right now. But yeah, that.
Ben Byford:[00:05:46]
Yeah. I obviously missed your lightning talk, but you did a workshop today. From my position, it was a machine ethics workshop.
Oliver Bridge:[00:06:00]
It was.
Ben Byford:[00:06:00]
It was, yeah. In disguise.
Oliver Bridge:[00:06:04]
Thinly veiled, but yeah.
Ben Byford:[00:06:05]
Because it obviously didn't say that. We met previously, earlier on in the year, at a machine ethics conference elsewhere, in Cranfield. I was wondering, I guess, what is your interest in that area, and how does it feed into your teaching and your other research areas that you... How does that work?
Oliver Bridge:[00:06:31]
So I am interested in machine ethics, but I am interested more in the ethics part than in the machine part. My interest in machines grows out of my interest in ethics and morality. So my background is in education. I did my undergraduate in teaching English as a foreign language, then my Master's in Childhood Studies and my PhD in Education. And in my master's and PhD, I focused on moral education, moral development. And yeah. So in my PhD, I looked into how students learn values implicitly through observing what's happening around them, through looking into how the school is structured, and then understanding the value judgments of the society in which they live.
Oliver Bridge:[00:07:20]
Say, for example, the fact that we have eight hours of maths teaching in a week, whereas only two hours for art, says something about how important maths is and how important art is. And this teaches you something about values. When you engage with your peers, you learn different ways of being fair or compassionate. When you engage with your teachers, you learn the same things in different ways. And all of this stuff basically teaches students implicitly, not through the official curriculum, where you just throw things at children and expect them to remember and internalise. It is through the interactions and the observations of interactions that are happening around you that you start picking up values.
Oliver Bridge:[00:08:04]
And an important part of my PhD was on how do you make sure that, not necessarily ensure, but how do you foster moral behaviour in students. And I come from a virtue ethics background, so we look into character. And when you look into the psychology of character, one of the things that you see immediately is that knowing the right thing to do doesn't necessarily mean that you're actually going to do it unless you feel driven to do it. Moral behaviour is rooted in emotional motivation. Which emotions you feel in which situations is determined more by your emotional habits, which are partly constitutive of what you might call a virtue. I've forgotten my train of thought now.
Ben Byford:[00:08:54]
It feels like you've broken virtue ethics down into its constitutive parts there, which is quite nice.
Oliver Bridge:[00:09:01]
It turns out that I've always had this inclination towards systems thinking. So I do break things down into their constitutive parts and try to see what emerges from what. But the point about, oh, yeah. So which emotions you're going to feel in which case, that is partly constitutive of virtue, but it is also at the same time constitutive of your character, basically. If you are a compassionate person, you have a compassionate reaction to a wider range of situations, for example. Nevertheless, one of the things that this definitely shows us is that our moral behaviour is driven by our emotions.
Oliver Bridge:[00:09:42]
Now, machines do not have emotions as we understand them. So when you start asking questions about morality and moral character in the context of machines, this is a very interesting puzzle. I got interested in Machine Ethics in around 2019, and I have been interested in it ever since. I completed my PhD around 2017. And for a couple of years, I basically kept on working in the same field. And then I met Rebecca, with whom I ran the workshop in June, where we first met. This is my interest in Machine Ethics, basically.
Oliver Bridge:[00:10:17]
I have a feeling that I understand what morality is. And when you ask the same questions in the context of machines rather than children, you need to think a lot harder. Yeah. It is also useful as well because these machines are becoming more powerful. There's a very good chance that we will not be able to control them. We need to be able to rely on their sense of morality overlapping with ours to some degree, which is also why I'm interested in alignment and why I was interested in listening to Stuart... this morning.
Ben Byford:[00:10:52]
I have a bugbear with alignment vs AI ethics vs machine ethics, because I feel like, and this is my personal opinion, we've been asking questions around, let's say, robot ethics, computer ethics. There's been a lot of work in this area for probably 20 years. Yes. And alignment, and AI safety by extension, has come in not very long ago, I would say, from my viewpoint, and has taken up all the, I wouldn't say money, but all the oxygen in the room, right? Yeah. So the saying goes. How do you relate to alignment, machine ethics, that stuff?
Oliver Bridge:[00:11:44]
Okay. I suppose to begin with, it's easiest to draw a line in the sand between AI ethics and machine ethics. In my understanding, AI ethics is more about: here we have these machines, how do we make sure that they do not do anything bad? And how do we make sure that the people using them don't do anything bad? That's AI ethics. Machine ethics, on the other hand, is more interested in enabling moral agency in machines to some extent. What that extent can be is up for debate, but it is more about, let's have moral machines rather than tools that are being used ethically and that, to the extent that they behave, behave ethically. Alignment and AI safety came in when people started to get panicky about these things theoretically having the potential to be really, really bad. And alignment seems to have been one of the approaches that got more traction from the community, so to say. And that community is divided between academic and industry people. And industry people seem to, I might be entirely wrong in my impression here, but my impression is that industry people have become a lot more excited about alignment than academics have. But then again, academics tend to move very slowly as well. So there's that, too. Getting philosophers who've been studying the same thing for 2000 years to adapt to what's going on in the past five, there is sometimes a bit of a lag. I say this, but I'm not entirely sure how accurate what I'm saying is, to be honest. But this is my impression, yeah.
Ben Byford:[00:13:30]
Cool. You don't feel like they're at odds with each other?
Oliver Bridge:[00:13:36]
On some accounts, I think they do clash with each other, but I wouldn't be able to detail them. I'll put it that way. The biggest clash is between AI ethics and machine ethics. AI ethics sees machines as tools and nothing more, whereas machine ethics gives them a little bit more, what would the right word to use be here? Conceptual power in terms of agency, I suppose.
Oliver Bridge:[00:14:00]
So, yeah, this is why I take the approach that I take towards machine ethics as well, because my own tradition is in thinking about how do you make children, humans, more moral; or not necessarily more moral, but how do you foster morality in humans. So when you come from that habit of thinking, so to say, and replace children or humans with machines, your approach to AI ethics turns into machine ethics, essentially, where you're more interested in, let's understand what morality is and build machines according to that, rather than retrofit some understanding of morality simply because it is more easily operationalised given the current affordances, like reinforcement learning or supervised learning, using consequentialism and deontology, respectively, to actually inform machines' moral behaviour. The other thing is, and this is a draft that has yet to see the light of day, but we were working on a paper called 'Moving away from Constraining the Immoral Behaviour of Machines towards Enabling Moral Agency in Machines'. This is something that I am far more interested in. I think this goes to the heart of what machine ethics is, as opposed to what AI ethics is.
Ben Byford:[00:15:15]
Yeah. In terms of that idea, I'm guessing here, right? Are you proposing that you're training a system to make the right choices rather than retrofitting it after the fact?
Oliver Bridge:[00:15:32]
So when you're trying to retrofit morality, what you're trying to do is constrain the machine's range of behaviours. Because I come from a more virtue ethics background, the question we ask is about being moral. So it is about baking the entire thing into the architecture from the get-go rather than clearly defining a reward function and making that reward function moral, or clearly defining a rule and making that rule moral. Because this is not an easy thing to do. At the highest level of abstraction, the moral universals that you may throw out have to be so vague in order to be applicable to every situation. So what I'm interested in is understanding morality so that you can design a machine to be naturally moral by itself without you having to tell it the rules or the consequences to look after, to search for.
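To make the retrofit-versus-built-in contrast a little more concrete, here is a minimal, hypothetical sketch of the two "retrofitting" styles Oliver describes: a deontology-style rule filter that vetoes actions, and a consequentialism-style penalty bolted onto an existing reward. Nothing here comes from the conversation; all names, rules, and numbers are invented for illustration.

```python
# Hypothetical sketch of the two "retrofitted" constraint styles described above.
# All names, rules, and numbers are illustrative only.

FORBIDDEN_ACTIONS = {"deceive_user", "share_private_data"}  # fixed deontological rules


def rule_filter(candidate_actions):
    """Deontology-style retrofit: veto any candidate action that breaks a fixed rule."""
    return [a for a in candidate_actions if a not in FORBIDDEN_ACTIONS]


def shaped_reward(task_reward, estimated_harm, penalty_weight=10.0):
    """Consequentialism-style retrofit: subtract a harm penalty from the task
    reward the agent was already optimising."""
    return task_reward - penalty_weight * estimated_harm


if __name__ == "__main__":
    actions = ["answer_honestly", "deceive_user", "decline_politely"]
    print(rule_filter(actions))     # ['answer_honestly', 'decline_politely']
    print(shaped_reward(1.0, 0.2))  # 1.0 - 10.0 * 0.2 = -1.0
```

Both approaches constrain behaviour from the outside; neither gives the system anything like the dispositions or character that a virtue-ethics approach would try to build in from the start.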
Ben Byford:[00:16:36]
Yeah. It would just pick the right answer. Well, not the right answer, but a more moral response or action, or whatever it is. You talked about virtue ethics and there are other schools of thought in philosophy around how you should be thinking about moral, I want to say, implementation, but frameworks of moral understanding. How is virtue ethics interesting in this way to you? How does that help us with the current AI situation?
Oliver Bridge:[00:17:15]
So consequentialism and deontology generally focus on what you need to do, this or that. It is generally more focused on behaviour rather than anything else, practically speaking, so to say. So is virtue ethics at some level as well. But virtue ethics, at its heart, is concerned with being, basically. You need to be a moral person instead of doing the moral thing. That's what it is, because if you are a moral person, moral behaviour is going to come out of you naturally. You are not going to sit down and go through the utilitarian calculus or figure out which principle to apply in this case. You don't have to spend cognitive energy to actually do the right thing. It is a natural thing for you to do. This is why I like virtue ethics a lot. I've mentioned virtue ethics quite a few times, but I'm also very heavily influenced by systems theory/thinking as a philosophical framework, and pragmatism as well, especially John Dewey's strand of pragmatic ethics. And the intersection of these three is actually quite interesting as well.
Oliver Bridge:[00:18:27]
And another relevant thing in my own background is that the first part of morality that I actually did study in greater depth is evolutionary moral psychology. And evolutionary theory is at its heart a dynamical systems theory. That's what it is. Not dynamical, pardon. Let me rephrase that. Developmental systems theory, basically. That's what evolution is. A developmental system taken at the level of an individual entails learning: how do you learn? If you move up several levels of abstraction to a species, then you actually get evolution. In between, you can have cultural evolution as well. But what evolution essentially designs is what landscape you have. And the landscape metaphor can be applied in many different ways. It dictates the sorts of grooves and valleys that you have for your behavioural river to flow down, basically, where gravity is the attractor. And what you want to make sure is that the attractor of gravity takes you to a good place, and that you carve up that landscape, and your understanding of and interaction with that landscape, in a way that actually lands you with good behaviour. So it gets super abstract, basically. But yeah, this is what I'm interested in.
Ben Byford:[00:19:59]
I'm trying to take that abstract concept and, like, jiggle it around in my brain and make sense out of it. Because, I mean, systems thinking, in my mind, is super useful, right? The virtue ethics stuff is a useful view, but it's also a view from 2000 years ago. The conception of virtue ethics, not to dismiss it completely, but it's like we're striving to be better, but better was in the context of being a Greek 2000 years ago. Obviously, there's been a lot of work on it since then. But I guess my question is, in what way can we create systems that can strive to be better? And how do we formulate betterness? Or is that just something that we're working on and we'll get there?
Oliver Bridge:[00:21:00]
It doesn't attract a lot of sustained attention, I will say that. There is one paper written by Coleman; which Coleman, I would not be able to say. I haven't fished the paper up in a long while, but it does talk about excellences in a machine, basically. Virtue ethics as a scaffold for understanding morality and value, I find very useful. It does depend a lot on human nature, and I'm not entirely sure whether Aristotle's original twelve virtues still hold up in the same way as they did back then. To some degree, they may; to some degree, they may not. Nevertheless, the structure of, for example, telos, ergon, and arete: telos being your end goal, ergon being your function in reaching that goal, and arete being your excellence. Arete is generally translated as virtue, but excellence is also a good way of translating it. Conceived this way, it allows you to understand the aims that a machine is going to strive towards, the function it is going to have in achieving those aims, and the excellences that the machine needs to have to carry out that function, basically.
Oliver Bridge:[00:22:20]
You may say, for example, honesty. Let's take honesty as an example. Honesty is an excellence. It has a function in the group that you belong to. If you are honest with your fellow human being, it allows them to not keep on second-guessing what you're going to say, basically, the truth of what you just said, which requires a lot less energy on everybody else's part. And this is why it's a virtue. It casts a wide net, basically. It is an excellence that is useful in many different situations and generally takes you towards something good. There are occasions where it doesn't, but this is where things like phronesis, practical wisdom, come in, which is something that is gained through experience and allows you to tell when to be honest and when to tell a white lie, basically.
Ben Byford:[00:23:14]
So a lot of this is through interacting with the world, through experience, through being, through existing in society, through learning from elders. How does the... I mean, I'm very interested in this, because you mentioned it earlier on and objected to quite a lot of bits during the day. How does, sidestepping this a little bit, the generalisation of your ethical evolutionary theories come into how you think about ethics in machines, but also, generally, how you think about this foundational aspect of ethics?
Oliver Bridge:[00:24:00]
I think one field that is severely overlooked when it comes to machine ethics is animal ethics. Animals display a lot of moral behaviour. And when I say moral, I mean behaviour that we, as humans, recognise as moral, basically. What is moral is our social construction. But it is something that we recognise in other animals as well. Given that we can recognise moral behaviour in other animals, the thing that we call morality is obviously not limited to humans. So there is a universality that goes beyond humans already. Whether this is, like, cultural differences, individual differences, civilisational differences, we can see similarities across different species, and we can also see some of the underlying conditions for those similarities as well. In 2018, a paper came out by Burkart and colleagues, where they compared 15 different primate species on their proactive prosocial behaviour. Proactive prosociality is basically me sharing my food with you without you harassing or begging me or something like that, which other animals might do. What they found is that this is essentially sharing. Sharing is caring, isn't it? So this is a compassionate behaviour and one that we recognise as a morally good behaviour. It is displayed by several different other species, though not to the same extent.
Oliver Bridge:[00:25:37]
Say, for example, chimps, who are our genetically closest cousins, displayed much less of it than, if I'm not mistaken, vervet monkeys, who are far more different from us, genetically, than chimps are. The similarity between these vervet monkeys and our species is something called allomothering. Allomothering basically entails you taking care of a young, an offspring of another individual, basically. Humans have institutionalised that in education and schools. But there are many other animals that actually take care of the offspring of other members of their group, essentially. The same finding was also replicated in a study that compared, if I'm not mistaken, eight different corvid species. Long story short, sharing behaviour seems to be directly related to allomothering, essentially. So we now have an understanding of where a compassionate behaviour comes from. And one of the things that this shows is that the understanding of morality for any species is related to their characteristic patterns of socialisation. So the more you deconstruct these patterns of socialisation, the more you can start digging into where any particular value comes from for any species. You can do the same thing for cultures. You can do the same thing for individuals. When you look at that individual's experiences, when you look at that society's experiences, and the landscape. For societies, that can be geography. For individuals, that can be the socioeconomic landscape that they grow up in.
Oliver Bridge:[00:27:23]
Morality is a beautifully intricate and complex thing. But the more you dig into it, taking a systems theoretical, at some level functionalistic, view, trying to break things down into the factors that lead to these outcomes, and also recognising that probability plays a role in this, because at some level we cannot hammer down every single factor. But you can see that good behaviour and bad behaviour always have their precedents, basically. It comes from somewhere. It comes from the species that we are. It comes from the material wealth that we have. And that is something that is very rarely talked about, but very important. Material affordances are very important for moral behaviour. So, yeah, I don't know where to take that.
Ben Byford:[00:28:16]
I mean, all that stuff feels like it is talking to the human experience of what it is to have values, to value certain things, to be culturally in a certain space, to be a certain species, to be behaving a certain way. We could say that that behaviour is morally correct for that culture or species, whatever. Or is acceptable, or is good. There's all these words, right? Yeah. But then extrapolating that to... The situation we find ourselves in is that we have these systems that we are creating that we have to do something with, and that people, not different species necessarily, but people with different cultural backgrounds, different geographies, different material wealth, are going to be using them. Like you say, that's not something that I talk about that much, but I talk about it in a different way, in terms of capital and stuff. How are we to think about that? Or is it just that there's a set of tools that we can use to start thinking about how people are going to have to integrate with these things? I mean, it's a lot.
Oliver Bridge:[00:29:39]
That is an incredibly complex question. And I don't think I would be able to... I would need to sit down and think very long and very hard before I could actually come up with something that's going to work as a sound bite. Because that is-
Ben Byford:[00:29:53]
If you could just solve that one, then that would be...
Oliver Bridge:[00:29:57]
I mean, that's not a million dollar question. That is the main question, isn't it? Because it comes down to, this is why I'm interested in trying to create a methodology for understanding morality, basically, using this systems theoretic approach. Because then you find a way of answering those questions. We don't really have good ways, effective ways, of answering those questions. We've never really needed to have them, because the only thing of relevance was humans. You could just take human psychology and humanness as a given. You can't do that anymore. This is why machine ethics is so interesting.
Ben Byford:[00:30:41]
I find it fascinating as well. I think it's really interesting for that reason. So looping back to what you were saying before, you mentioned child development and how you teach, or you can try and teach people to be good people. How does that factor in? How do you feel about that and how does that relate?
Oliver Bridge:[00:31:05]
To machines?
Ben Byford:[00:31:06]
Yeah.
Oliver Bridge:[00:31:08]
What I know about child development is through, like, Bandura's theory or Vygotsky's theory, et cetera. Human learning and machine learning are chalk and cheese. They are two entirely different things. And I also do not have a sufficiently deep understanding of machine learning as well. So it is very difficult for me to relate Vygotsky's zone of proximal development and that stuff to what a machine might be able to do. Because machines process information in ways that are very different from humans, basically. When you're engaging with a child, you're trying to engage their emotions and their habits, their emotional dispositions, basically. That's the level at which you're trying to engage with another human in general. And with a child, it's more of a constructive thing compared to an adult, where it's not your responsibility to construct it anymore at some level. But with a machine, it's an entirely different... It's an entirely different question. So, yeah, I don't think I have a good answer for that. The answer is, no, I don't use it. Or at least if I am, I'm not entirely sure how I am.
Ben Byford:[00:32:27]
Yeah, it's not directly applicable right now.
Oliver Bridge:[00:32:31]
It is not something that I can consciously directly apply. No, I'll say that.
Ben Byford:[00:32:35]
Yeah, yeah, yeah. Exactly.
Oliver Bridge:[00:32:37]
The thing that I have been skirting around now is, you asked about universality. Yes, I think there is, at some level, a more universal explanation than before for what good or moral behaviour is. I think I said this during the workshop I was running. In my understanding, I am yet to find a situation that contradicts the claim that human conceptions of good and bad concern the fitness of the more complex system that they belong to, basically, that anything belongs to. So, like, anything that we judge as good and bad. Say, take, for example, a knife is a good knife. Pardon. A sharp knife is a good knife. Why? Because you have the system of something that needs cutting, like the tomato-human-knife system. In that system, the knife's function is to be sharp so that it actually cuts. That's why it's good. In an entirely different context, where you need to hammer something into a wall, but you do not have a hammer and you need something heavy, then the sharpness of the knife is not going to be the thing that makes it good; the weight of its heft might, basically. So in terms of understanding what is moral and what is not, or at least what is good and what is not, because good covers moral within it and amoral within it. I think if we try to...
Oliver Bridge:[00:34:07]
What we need to do in terms of machine ethics is to try and understand what the relevant systems are. And these are incredibly complex things. Nevertheless, we know enough to actually get a start on it. What we need to understand is the systems that these machines are a part of. And a superintelligence will be part of the system that it is embedded in, which is, worst case scenario, the entire universe. Everything is part of something, basically, unless you're the singularity, existence, the universe itself. So the machine, a superintelligent machine, is going to be a part of something. So you need to understand what the system is, what it is a part of, and you need to understand the largest system's goals and aims, which tend to revolve around keeping existence existing. Maybe not always. In my understanding, this is where machine ethics needs to pay attention: to understanding the systems and how they are embedded within each other, and not just AI systems, systems in general. How they are embedded within each other and how moral value cascades from the higher-level, more complex systems down to their constituent parts, basically.
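As a toy illustration of the idea that "goodness" is relative to the enclosing system, here is a small, invented sketch, not a formalism from the conversation, in which the same tool scores differently depending on the system it is embedded in and that system's goal.

```python
# Toy illustration: a component's "goodness" depends on the enclosing system's goal.
# Entirely invented for illustration; not a model proposed in the episode.

knife = {"sharpness": 0.9, "heft": 0.2}

# Each system defines which property of the component serves its goal.
systems = {
    "cutting a tomato": lambda tool: tool["sharpness"],
    "hammering in a nail": lambda tool: tool["heft"],
}

for goal, fitness in systems.items():
    print(f"{goal}: knife goodness = {fitness(knife):.1f}")
# cutting a tomato: knife goodness = 0.9
# hammering in a nail: knife goodness = 0.2
```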
Ben Byford:[00:35:34]
So I guess in my mind, I'm formulating it in the way that if the system, if the helpful AI, let's say, is asked to do something, it has to first work out what goal it has or what system it is part of in this instance, in this instantiation, to then be able to carry it out in a morally positive way, right?
Oliver Bridge:[00:36:04]
Yes. Say, for example, one of the earliest examples that I had come up with was an engagement algorithm on social media. What they do is they try to optimise for engagement, because they are part of corporations. But when you just conceive of these machines as part of a corporation and nothing else, you are not accounting for the entire system that it is a part of, which includes the user. Because of this, one of the things that got observed was that, as soon as things like Facebook and other parts of social media started picking up, suicide rates in young girls, for example, went through the roof. One of the reasons for this is that adolescence can be a very anxious time. And if the engagement algorithm is continually feeding you things where your attention is at, and your attention is at anxiety and despair, it just ramps it up. You have to consider the user as part of the system rather than just the profit and the corporation, basically. This is why it is important to understand what these systems are. And once you do that, then you can actually allow the machine to understand, especially now, given the power of the machines that we have right now with ChatGPT, etc. You can allow that machine to just figure out, okay, this person is actually not feeling very good, I should not be feeding even more depressive things to this user, basically. This is something that we can already do. And in a well-designed machine with a good methodological understanding of what is moral and what is not, you wouldn't even need to specify, don't give more depressing stuff to an already depressed person, basically.
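To ground the social-media example, here is a hedged, hypothetical sketch of the difference between ranking purely on predicted engagement and ranking that treats the user's current state as part of the system. The fields, weights, and the distress signal are all invented for illustration.

```python
# Hypothetical recommender scoring, contrasting the two objectives discussed above.
# All fields, weights, and signals are invented for illustration.

def engagement_only_score(item):
    """Objective that models only the corporation's interest: raw engagement."""
    return item["predicted_engagement"]


def system_aware_score(item, user_distress, distress_weight=2.0):
    """Treat the user as part of the system: penalise negative content in
    proportion to how distressed the user currently appears to be."""
    penalty = distress_weight * user_distress * item["negativity"]
    return item["predicted_engagement"] - penalty


items = [
    {"id": "doom_post", "predicted_engagement": 0.9, "negativity": 0.8},
    {"id": "uplift_post", "predicted_engagement": 0.6, "negativity": 0.1},
]

user_distress = 0.7  # e.g. inferred from recent interactions (purely illustrative)

by_engagement = sorted(items, key=engagement_only_score, reverse=True)
by_system = sorted(items, key=lambda i: system_aware_score(i, user_distress), reverse=True)

print([i["id"] for i in by_engagement])  # ['doom_post', 'uplift_post']
print([i["id"] for i in by_system])      # ['uplift_post', 'doom_post']
```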
Ben Byford:[00:38:02]
If it's not a machine that understands that nuance, then you would have to design it in such a way that if you see these patterns, don't do these things. But then you're suggesting that there might be this pattern matching so that you're not optimising attention or click-throughs or whatever it is. You're trying to optimise, I don't know, wellness or some other thing that would be hard. I don't know. I'm just riffing here, but something else which isn't categorically detrimental, or in experience detrimental, to people in those recommendation systems as a baseline. That's super interesting. We'll get on to that, right?
Oliver Bridge:[00:38:57]
Yeah.
Ben Byford:[00:39:00]
It's the last question we ask on the podcast, Oliver. Thank you for your time and your hospitality.
Oliver Bridge:[00:39:07]
You're most welcome.
Ben Byford:[00:39:09]
What excites you and what scares you about the current AI and future AI-mediated situation?
Oliver Bridge:[00:39:20]
For someone who's so interested in machine ethics, I should probably have a more motivated answer to this. But at some level, I don't really care, to be brutally honest. What excites me and what makes me scared? I am just excited to see how it pans out, to be honest. Am I scared? I don't think I am. Why am I not scared? I think people are smarter than they give themselves credit for. I am talking about the masses here. The wisdom of the crowd tends to outweigh the wisdom of any singular person. I think, before things get rough, and this is just faith in humanity, basically, before things get rough, I think people will hit the brakes. So I don't think we're going to have a terrible problem. Problems are always big problems, and we have big problems today that have nothing to do with AI. And the intensity of the problems is, I think, going to remain the same. Sometimes they're going to be about AI, sometimes they're going to be about something else. But we're always going to maintain this level of misery, I think, which is livable for us privileged guys, at least.
Ben Byford:[00:40:40]
It feels like quite a positive, then bleak, outlook. It'll be fine, if you like fine being this basic situation.
Oliver Bridge:[00:40:54]
Yeah. I mean, in one of my favourite books, it's a fantasy series called The Malazan Book of the Fallen. In Book 5, there's one of the sentences that I really like: 'Pragmatism is a cold God.' Unfortunately, it is, but it also works. So yeah, excise your emotions and then keep walking.
Ben Byford:[00:41:17]
Wow. Thanks, everyone.
Oliver Bridge:[00:41:22]
Maybe cut that one out (laughing).
Ben Byford:[00:41:23]
Everyone's feeling better about listening to us today. If you're just having a bad time, just remember all the good things. Don't listen to Oliver. Well, thank you very much for your time, your energy, your expertise. How do people find you, follow you, do that stuff?
Oliver Bridge:[00:41:48]
I am not really active yet, at least on professional or social media, basically. So for the time being, if they search my name and look up Exeter staff, that is the way to get through to me. You can find me on LinkedIn as well. I think I check it once a month. That's about it.
Ben Byford:[00:42:09]
Thanks for your time.
Oliver Bridge:[00:42:11]
Thank you. Thank you for this opportunity.
Ben Byford:[00:42:17]
Hi, and welcome to the end of the episode. Thanks again to Oliver, who actually prompted me to go to that Superintelligence conference in Exeter. So thank you again. Also to Rebecca Raper, who put on the conference earlier in the year. And although I missed her presentations at the Superintelligence conference, she also prompted me to go along. It was especially nice in this episode to talk to Oliver about all the different ways of thinking about ethics, morality, and systems of ethics. It's always nice to have a flip-flop in episodes between the ethical side, the technical side, the business side, and the society side. All these things coming together is really what the podcast is all about. So thanks again for coming on. And I hope you are enjoying the plethora of stuff that we get to talk about on the show. Again, if you'd like to support us, you can go to Patreon, patreon.com/machineethics. And thanks again for listening, and I'll see you next time.


