108. Moral Agents with Jen Semler

This month we chatted in person with Jen Semler about: what is AI?, philosopher and engineer collaborations, businesses working with ethicists, machine ethics and AMAs, what makes a moral agent, how to create a moral agent, types of moral decisions, the point of moral agents, how we can tell if a machine is conscious, tech companies not being democratic organisations accountable to their citizens, and more...
Date: 4th of February 2026
Podcast authors: Ben Byford and Jen Semler
Audio duration: 44:18 | Website plays & downloads: 79
Tags: Ethicists, AMAs, Consciousness, Agents, Philosophy | Playlists: Philosophy, Machine Ethics

Jen Semler is a Postdoctoral Fellow at Cornell Tech's Digital Life Initiative. Her research focuses on the intersection of ethics, technology, and moral agency. She holds a DPhil (PhD) in philosophy from the University of Oxford.


This episode is sponsored by Airia.com - Meet Airia—the AI platform that empowers everyone on your team, from no-code beginners to pro developers, to innovate responsibly.


Transcription:

Ben Byford:[00:00:07]

This episode was recorded on the 24th of November, 2025. I was fortunate enough to talk to Jen in person in New York City. We chatted about what AI is, philosophers and engineers collaborating, a subject really close to my heart, machine ethics and making AMAs (artificial moral agents), different types of moral decisions, moral agency and patiency, and indeed, the point of moral agents, how to tell if a machine is conscious, the fact that tech companies are not democratic organisations and therefore don't have to answer to citizens, and of course, much, much more.

Ben Byford:[00:00:54]

This episode is sponsored by Airia. AI innovation shouldn't compromise ethics or security. Meet Airia, the AI platform that empowers everyone on your team, from no-code beginners to pro developers, to innovate responsibly. With enterprise-grade security, cost controls, and a prototyping studio for rapid experimentation, Airia ensures you can build smarter AI while managing risks, safeguarding data, and staying compliant. Transform chaos into streamlined AI control. Future-proof your business, unleash innovation, and embrace AI responsibly with Airia. Learn more at Airia.com. That's A.I.R.I.A.com. Thank you and hope you enjoy.

Ben Byford:[00:01:41]

Hi, Jen. Nice to meet you. If you could introduce yourself, who you are and what you do.

Jen Semler:[00:01:47]

I am a postdoc at Cornell Tech as part of the Digital Life Initiative.

Ben Byford:[00:01:54]

Amazing. So we're actually in Cornell Tech right now, recording this. We happened to be in the States for a week, so we were amazingly able to meet up. It's Thanksgiving, so thank you very much for your time.

Jen Semler:[00:02:10]

Yeah, of course. And thank you for coming. Roosevelt Island is a very unique part of New York City, so I appreciate you making the trip.

Ben Byford:[00:02:17]

Yeah, and I just made it here. No one knows where it is, but I got here. Basically, we always start with the same question. If you're okay, what is AI to you?

Jen Semler:[00:02:31]

Yeah, it's a difficult question. I think there are different ways you could interpret the question. How do people use the term AI? I mean, most people now use it to refer to generative AI or maybe machine learning more broadly. What really is artificial intelligence? What would it mean for something to be artificially intelligent? Some people like the definition that it's a computer system that can do the things that if humans did them, we would think it was intelligent, or something like that. We can ask questions about what it really means to be intelligent. Maybe that involves problem solving, other sophisticated cognitive skills. We can also ask the question of how should we use the term AI? Should we be using it differently in virtue of the fact that maybe how people are using it now is disconnected from the notion of intelligence? I think in a sense, the toothpaste is out of the tube already on that one. I don't think we can really change how we're using the term, but I think it should give us something to think about. There's lots of terms in the AI space that maybe we would want to use differently, including intelligence, agency, autonomy, all of these different capacities, reasoning.

Jen Semler:[00:03:46]

Maybe a long answer to what is artificial intelligence, but it's a complex question.

Ben Byford:[00:03:52]

Yeah. I mean, you bring up a good point, because I was wondering if you had an idea of what we should be calling it, because obviously it feels like it's transitioned from ML, machine learning, to AI generally, to what we call generative AI. Now that is what we're talking about when most people say AI, and not ML really, or this other set of stuff which is in machine learning. I don't know. Are there some things that you think we should be calling attention to in that weird language soup?

Jen Semler:[00:04:26]

Yeah, I'm not sure. I think maybe it's called AI because intelligence is the goal or because it's useful to have an analogy to what humans do or because it's good for marketing to refer to something as intelligent. I don't know. I think I would advocate for at least some more consistency around how we're using these terms, but I don't know if I have a proposal for what the best term to use would be.

Ben Byford:[00:04:54]

Damn it. I thought we were going to solve it right here. Yeah. We're in Cornell. What brings you to this area of expertise? I know that you were in Oxford before. What was your journey in getting to where you are now? Is there a view on where you're going as well?

Jen Semler:[00:05:14]

Yeah, big question. My background is in philosophy. I've always... Everything makes sense when you look at it from the future, but at the time, I felt like I was just bobbing along following my interests. But I think the thread of my interest has always been about agency and responsibility. These are the questions I've always been interested in. When I was an undergrad, I was thinking a lot about the problem of moral luck, how luck plays a role in our moral judgements when maybe we think luck shouldn't play a role in our moral judgement. I ended up doing a master's in mediaeval Icelandic studies, which is not really directly related to AI or philosophy, but I ended up thinking and writing about fate and free will in mediaeval Icelandic literature. I think I've always been interested in questions around agency and responsibility. Then when I started my PhD, I was really thinking I wanted to do something practical, and AI has always raised really interesting philosophical questions, so I thought this was the perfect place to combine my interests.

Ben Byford:[00:06:27]

It's funny because some of that... I haven't heard it discussed as luck, but you often hear free will discussed in this space of AI stuff, and more physics stuff, where if the universe is a clockwork thing, then do we have any free will? And does it matter whether we have free will in terms of machines that we're making, and consciousness and stuff? Sometimes you hear these things come into play, but is it similar for you, or is there this luck element? Is that a different aspect which is not associated with free will and consciousness?

Jen Semler:[00:07:07]

I think that they can be connected, but they can also be distinct. I think what luck really draws our attention to is the ways in which we lack control over certain decisions that we make and certain outcomes that result from our decisions. You could cash that out in terms of questions about free will, or you could just cash that out in these non-mysterious factors in the world that give us less control over our actions than we think we have.

Ben Byford:[00:07:35]

Yeah, like complex systems and biology and chemical reactions.

Jen Semler:[00:07:39]

Exactly, all of that.

Ben Byford:[00:07:41]

Yeah, all that. Yeah, cool. So you're currently at Cornell, what are you doing now?

Jen Semler:[00:07:47]

Yeah. So I started here in August, so just a few months ago. I joined this group called the Digital Life Initiative, run by Helen Nissenbaum. And I really jumped at the opportunity as a philosopher to be in a tech space. So Cornell Tech is part of Cornell University, but it's located on a different campus. So the rest of Cornell is up in Ithaca, and this campus is in New York City. Almost everyone here works in computer science or information science or engineering. I really wanted to immerse myself in more of a tech space because I think if you're a philosopher working on AI ethics, you should be willing to engage with people who are working on the technical side.

Ben Byford:[00:08:36]

Are they willing to engage with you?

Jen Semler:[00:08:39]

In my experience, yes. I think going in both directions, there are difficulties. I have my frustrations sometimes in translating what I'm thinking about and working on and what I care about to computer scientists, for instance, and I'm sure they also have their own frustrations about the way that I'm understanding and conceptualising problems. So I think part of it's just trying to find a common language and common problems, common methodologies. But yeah, I do think that there are people who are interested in the normative dimensions of their work.

Ben Byford:[00:09:14]

Yeah, yeah, yeah. Previous to this conversation, we had a conversation around the difficulties of knowing how to almost integrate ourselves with private entities and how to support decision making, almost like using this toolset, basically: our experience and knowledge of moral philosophy. Do you find that... Are you expecting that to continue, or do you have a vision of how these, kind of... Not saying that the tech people, the practical people who are making stuff, don't have these ideas and thoughts. But do you think that there's going to be more coming together, or is it not going to continue? It's not relevant, maybe. I don't know. For example, I'm trying to work out what I mean here. There was a big push for responsible AI, right? For me, that's completely gone out the window with all the generative AI stuff. I feel like that was a parting of ways with the industry's interest in this area. I'm hoping they come back again and go, cool, we need to think about society and humans and how we affect each other and stuff like this in a way that we're currently not. Is that something that you resonate with, or is that completely not true to you?

Jen Semler:[00:10:36]

Yeah, no, I think it resonates with me. I would imagine that there's just always going to be this ebb and flow of the ethics world and the business world. I think it's a mistake to view ethics and business as necessarily in tension with each other. Like, oh, we're just the ethics police coming along to tell you that you can't do X, Y, and Z. I think there are business reasons to care about ethics. I think there are non-business reasons to care about ethics. I think that there are a lot of ways in which setting up your values and ethical systems as a company can actually help you do better in both the economic sense and the broader societal sense. But yeah, in terms of how that actually plays out, I'm not sure. I think what we've seen now is that a lot of companies have had some acknowledgement that people do care about things like accountability, transparency, explainability. And probably as a result of that, lots of companies have their principles of AI: we at Company X care about promoting autonomy, and we care about transparency, explainability, privacy. But then I think the question is, can these companies bring in people working in ethics to really nail down what they mean by these terms and what happens when these values conflict, either with each other or with the business goals of the company?

Jen Semler:[00:12:11]

I'm hopeful that there's more room in the future for this interdisciplinary work, I think, especially as more researchers are taking this interdisciplinary approach, but I don't know what the future holds.

Ben Byford:[00:12:22]

Do you know, do they have a philosophy part of the tech undergraduate degree programme?

Jen Semler:[00:12:30]

I'm not sure, actually. I probably should know the answer to this question. I either will or do. I think there are definitely courses, like ethics in design type courses. I'm not sure what's required to get a degree. So, yeah, I don't know.

Ben Byford:[00:12:51]

If you're listening, Cornell, then message me. I was looking at some of your papers and stuff like that, and I feel like we're going to go down a massive rabbit hole. I was wondering where to start and how to gently bring people with us. So you had a paper on instrumental convergence and one on moral agency and consciousness. They both, to me, are around AI safety and this idea of singularity, superintelligence, and then this other side of how do we control, or how do we make this thing useful, even if it's not sentient or superintelligent or whatever. This is not me bringing people along with me so far, is it? I just wondered, because the moral agents thing is much more interesting to me. Me too. It's a good one. We talk about consciousness, and in this paper, you're saying that systems don't necessarily need to be conscious to have moral agency. What are those two things? Can you talk about what you mean about consciousness in machines, and then the moral agency part? Is that cool?

Jen Semler:[00:14:09]

Yeah. Let me just think for a second. Most of my work is on moral agency and AI. But how am I going to answer this moral agency question? Okay. I define moral agency like this: a moral agent is a genuine source of moral action. We can unpack what that means in a little bit more detail. To be a source of moral action, you have to be a source of action in the first place. You have to not merely behave but act. This is a common distinction in philosophy. I think action involves having mental states, having things like beliefs and desires that move you to act. You can act for reasons. This is something that humans, and maybe some nonhuman animals, are thought to uniquely be able to do, whereas computer systems aren't really thought of as being able to do those things. And then the moral side is, okay, can you act in such a way that you're morally evaluable as the source of that action? So not just did the AI system do something bad, but did it actually wrong someone? Does it have moral obligations? Can it be morally responsible?

Ben Byford:[00:15:20]

And it's morally responsible to us, right? Exactly. Yeah. Because you've got to obviously extend that out to your pet or the planet or plants. You know what I mean? There's a distinction there. We often implicitly discuss morality as a human-centric thing, but you could be an agent which is considering, in your moral sphere, insects, whatever it is, basically.

Jen Semler:[00:15:49]

Yeah, absolutely. So philosophers like to distinguish between, on one hand, moral agents and, on the other hand, moral patients, where moral patients are things that deserve moral consideration, and we can include nonhuman animals, insects, the environment, and that. And then moral agents are the ones that can actually act for moral reasons and actually do the considering of things. So then the question is, okay, well, we have this maybe still abstract concept of moral agency. It's got something to do with what makes someone the source of a moral action, maybe what makes someone the type of thing that can be responsible for their actions, the type of thing that can have moral obligations. But what actually does something need to do, or what capacities does something need to have, to be a moral agent? Of course, with an eye towards, well, can AI systems be the type of thing that can be a moral agent? This is the Machine Ethics podcast. Machine ethics is a branch of philosophy/computer science with the goal, seemingly, to develop artificial moral agents, things that can really do morality, really act not just in accordance with what the right thing to do is, but act because something is the right thing to do.

Ben Byford:[00:17:01]

Yeah.

Jen Semler:[00:17:03]

Sorry, this is like, you can cut this. This is like really rambling. But then the question is, okay, so we're at the question of what makes something capable of being a moral agent. One possible thought you might have is that you need to be conscious to be a moral agent. This is the idea that the paper addresses. I argue that you actually don't need to be conscious to be a moral agent. Now, that doesn't mean that it's easy to be a moral agent. It doesn't mean that any existing AI systems are moral agents. I actually don't think any existing AI systems are even close to being moral agents. It just means that consciousness is not the important thing to be thinking about when we're asking about moral agency.

Ben Byford:[00:17:49]

Yeah, exactly. See, you're almost inherently not pushing for consciousness to be an intrinsic part of being a moral agent. Or we're not actually telling people, I say we, but one doesn't have to tell people, oh, if you want a moral agent, you have to make a sentient computer system which reflects and acts and has memory, or whatever we're saying a sentient system is. But we can have an agent which is capable of doing, let's say, the right thing at the right time for other moral patients without having… It's almost like without having skin in the game.

Jen Semler:[00:18:35]

Exactly. Yeah, that's a good way to put it. Yeah.

Ben Byford:[00:18:37]

Yeah. So how does that play out? Because I feel like if you're thinking about it, it would be so much easier to go, well, it has goals of its own, which is associated maybe with this idea of sentience. And maybe it wants to… This is coming back to the other paper, but it wants to persist and it wants to do things, possibly. Or it might be just that it wants to help, I don't know. And maybe from that vantage point, it's going to be easier to go, okay, well, we're going to add some stuff here to make it morally useful for us as human beings and have it act according to what we consider morally correct, or more correct than not correct. So what does that look like, this moral agent, basically?

Jen Semler:[00:19:26]

Yeah. So I think here's maybe a helpful way to think about it. There's this famous philosophical thought experiment about what are called philosophical zombies. These zombies are exactly like you and me, except there's nothing going on inside. The lights are on, but no one's home, or something like that. There's nothing it's like to be that zombie, even though they are otherwise exactly the same as us.

Ben Byford:[00:19:55]

Yeah, so outwardly, they behave the same.

Jen Semler:[00:19:57]

Exactly. Even inwardly, in a sense, they have a brain like ours, or similar to ours in many ways. Certain regions might light up in response to certain stimuli. It's just that from the first-personal perspective, they feel and experience nothing. I'm saying that that type of entity is a moral agent.

Ben Byford:[00:20:19]

Yeah, because they can still, hopefully, make moral actions in the world.

Jen Semler:[00:20:24]

Exactly. Another way to… Maybe this is actually the way to introduce this topic, and I think this is how I introduce it in the paper. Say you're trying to build a machine, an AI system that can truly do these things: act for moral reasons, have moral obligations, weigh different moral considerations, be responsible for its actions. You need to implement a lot of capacities into that system. They have to be able to look at a situation and pick out the morally relevant features of that situation, for instance. They have to be able to weigh those reasons against each other. They have to be able to decide on a certain action. They have to maybe have some understanding of what their obligations are. So lots of complex capacities. I just don't think that consciousness is one of the capacities that you would need to get a system to do all of these things.

Ben Byford:[00:21:22]

Yeah. It's interesting because those things seem antithetical to what the AI we have is, right? Because we can't look inside the black box, let's say. That's the traditional way we would talk about neural networks and how they work. It's very hard to pick apart the moral bit, or the bit that knows about maths or logic or anything. You know what I mean? It's more like we're looking at it and, after the fact, we are making assumptions about how it's operating. Then sometimes we get proven wrong. People have these visceral experiences with some of these products which feel like they're doing a certain thing, but it's very difficult to say whether they are or not. It's very hard to say, if we made a system like that with these tools now, whether we could post hoc say, was that decision based on logic or weighing things up, or did it just see in the data that previously that thing ended up this way? You know what I mean? Is there some formal, logical way that we should be doing it, or does this, let's say, bottom-up approach yield the best results? It's hard to say how we actually implement it.

Jen Semler:[00:22:43]

Yeah, for me, I think there are two questions. There's the question of, okay, if we have a system that we think might be a moral agent, how do we know for sure? How do we test? Then the second question is, how would we build such a system? In response to the second question, how do we build a system that is a true moral agent? The answer is I don't know.

Ben Byford:[00:23:10]

Good.

Jen Semler:[00:23:11]

How do we evaluate? I think in response to both questions, there are certainly tools we can use, for instance, from the philosophy of mind. If I'm right that in order to be a true moral agent, you need to be capable of action, you need to have mental states. Well, there are views in the philosophy of mind about what it is to have a mental state. So representationalist views, for instance, are about having the right internal representations. There might be ways in which we can probe existing AI systems, and some people do this work, to understand what those representations are, and that might give us some evidence of mental states. There are various tests we can run to determine, and try to isolate, whether a particular system can pick out the morally salient features of a situation, whether it can robustly and consistently engage in moral reasoning.
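(An editorial aside for readers: here is a minimal sketch of the kind of representation probing Jen mentions. It is not from the episode, and everything in it is a stand-in: the "activations" are synthetic rather than recorded from a real model, and the binary label is a hypothetical morally salient feature. The idea is just that if a simple linear classifier can read a feature off a system's internal states, that is weak evidence the system represents it.)

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row plays the role of a hidden-state vector
# recorded while a model processed an input; `labels` marks whether the
# input contained some (hypothetical) morally salient feature.
n_samples, hidden_dim = 1000, 64
labels = rng.integers(0, 2, size=n_samples)
feature_direction = rng.normal(size=hidden_dim)  # pretend the feature lives along this direction
activations = rng.normal(size=(n_samples, hidden_dim)) + np.outer(labels, feature_direction)

X_train, X_test, y_train, y_test = train_test_split(activations, labels, random_state=0)

# A linear probe: if this simple classifier can recover the feature from
# the activations alone, the system plausibly represents it internally.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")

(High probe accuracy here is built in by construction; on a real system it would be one piece of evidence among many, not proof of a mental state.)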

Jen Semler:[00:24:06]

I think we have tools available to test these things. But how to build a moral agent, the only way I know is to have and raise a human child. That's the only way I know for sure to develop a moral agent at the moment.

Ben Byford:[00:24:22]

Yeah. I think it's funny because we actually did have an episode, a few years ago now, which was considering mothering as a way to develop moral abilities or capacities. For me as a designer, as a person who is more interested in practical implementation, it feels very unsatisfying, doesn't it? We can't design it. We have to grow it, almost.

Jen Semler:[00:24:55]

Yeah. Well, and then there's the practical question of why would I spend all my time growing an artificial moral agent instead of raising a human moral agent?

Ben Byford:[00:25:05]

But maybe that's fine. Maybe your gift to humanity is that you raised the agent. It can be duplicated. You know what I mean? Because obviously, you're not restricted to it being a single thing in the same way that human beings are.

Jen Semler:[00:25:22]

Yeah. I'm committed to the claim that in principle, it's possible to develop an artificial moral agent. Whether it's possible in practice, I don't know. It might just be the case that it's so difficult that we really should be reserving morality for humans. I definitely think in the present, we should be reserving moral decision making for humans, and maybe even in the future as well.

Ben Byford:[00:25:49]

Yeah. And do you think there are certain areas where it's more or less easy to abdicate responsibility for those things? For example, we had the COMPAS recidivism thing that happened, which is well known and well talked about, probably five years ago now, where there was a system which was determining whether people could go on bail, right? And the judges were looking at the system and they were being influenced by what the system was saying in their judgement, right? And it's like, is that a place where it doesn't actually matter how good it is? You know what I mean? It could be the best system ever, but we still want a human to be making that decision regardless, probably without help. But there are certain places where it's like, don't worry about it.

Jen Semler:[00:26:44]

Yeah, I think my view is to err on the side of human decision making. I think for any particular decision, it's really important to think about what we want from this decision. If all we want is efficiency and there's nothing else at play at all, or no risk of any other factors being at play, sure, outsource that decision to AI. But I actually think there are very few decisions of that type. I think many decisions are morally laden in nature. Then I think we have these risks where, when we're making a decision about someone's bail, we don't really only care about their probability of reoffending, even if we could solve all of the bias problems. It seems like there are other things we care about, such as responsibility, maybe a certain relational aspect, maybe a certain recognition, or a certain sense that the person making the decision recognises the defendant as a person. And if we care about these things, then we shouldn't be outsourcing moral decisions to AI.

Ben Byford:[00:27:51]

Yeah. And we discussed a little bit before. It's like that might be a job that we want to keep as well, that judgement job or the jury job or whatever it is that is worth keeping around for that reason as well. I also weirdly thought about when you have a pardon as well from the President or whatever, we have these really weird tools that we have historically or traditionally for some reasons. I don't actually know where that came from, but when the President comes in for a new term, they get to pardon someone or something? What's that about?

Jen Semler:[00:28:32]

Yeah, I don't think you all have that in the UK.

Ben Byford:[00:28:36]

I don't think we do.

Jen Semler:[00:28:37]

Yeah, so the President does have the power to pardon people. I don't know the history of it either. And I think different presidents use it to different extents. I mean, we're recording a couple of days before Thanksgiving. Every year, the President pardons a turkey.

Ben Byford:[00:28:50]

Yeah, yeah, yeah.

Jen Semler:[00:28:52]

But that's, I think, a different sense of pardoning.

Ben Byford:[00:28:54]

It's a very naughty turkey.

Jen Semler:[00:28:56]

It's like pre-emptive pardoning. It's more like just saving the life of the turkey.

Ben Byford:[00:29:00]

Yeah, yeah. It's not really pardoning, is it?

Jen Semler:[00:29:01]

But yeah, I mean, I think, and I don't know, maybe there are good reasons to eliminate that practice. I mean, maybe there are reasons to keep it. But I do think it points to the idea, this more general idea, that we do appreciate and respect human discretion in certain fields, even when it's not the President pardoning someone, even if it's a judge that can take a person's situation into account and, within the confines of the law, make a judgement that utilises their discretion.

Ben Byford:[00:29:30]

Yeah. It's almost like distilling it down to those data points is not seeing the whole picture of maybe what we want to produce or what we want to enact. And I was alluding to the fact that there's the cultural stuff going on as well, like weird traditional cultural things. What it means to live in America is that you have that. You know what I mean? Right.

Jen Semler:[00:29:54]

For better or for worse.

Ben Byford:[00:29:57]

For better, for worse. Yeah, exactly. So yeah, I love all that stuff. I think I mentioned it before, but the Machine Ethics podcast was named before I realised that machine ethics was a thing. The name's fine. I don't mind that that happened. But I got really interested in that area of machine ethics and imbuing the system with moral agency. Do you think, if you take that forward and talk about moral patiency, is that interesting as well, or is that beyond your interest in that area?

Jen Semler:[00:30:33]

Yeah, it's something I've thought about. I think my approach has been to think of moral agency and moral patiency as two separate things. If it so happens that the necessary conditions for one are the same as, or overlap with, the necessary conditions for the other, then those two things are related or something. For instance, if you need consciousness to be a moral patient and you also need consciousness to be a moral agent, then, okay, you need to be a moral patient to be a moral agent. But yeah, I've been treating them as separate, but I think that there's interesting work to be done at the intersection. My view on moral patiency is that consciousness is necessary for moral agency. No, not moral agency, moral patiency. Moral patiency, yes. So you can see that I think if you're conscious, you are a moral patient, but you're not necessarily a moral agent. But there's a lot of work being done now on whether there might be other sufficient conditions for moral patiency besides consciousness. Those views will interact with my moral agency views in potentially interesting ways as well.

Ben Byford:[00:31:38]

I guess we are being anthropomorphic when we're talking about what the moral patient looks like as well, because you have to have a definition for, again, the moral agency. What test can we do to say that this system is a moral agent, basically? Therefore, you need a test also to say, is this a moral patient, or is this conscious, and then we have to treat it as a moral patient, in the way that you were saying they are linked. Yeah, it feels like it's pure philosophy that is becoming mainstream in the way that people are thinking about these systems, which is also slightly worrying.

Jen Semler:[00:32:23]

I also think companies are incentivised to make their products likely to be anthropomorphised. So I think this is a dangerous combination.

Ben Byford:[00:32:35]

Yeah. Because we talked a little bit in the previous episode about psychosis and attributing abilities that maybe the system doesn't have. But for us, it's just hard to know as well, because you were talking about the zombie stuff earlier. It's hard to know whether we are conscious, or whether there's a hard and fast test for that, other than maybe some behavioural way of thinking.

Jen Semler:[00:33:07]

Yeah, it's really hard if you're talking to a chatbot and the chatbot is responding as if they are a person. It's not that surprising that people are really being impacted by that, and really being confused and having significant mental health issues, because they're being duped in this way.

Ben Byford:[00:33:31]

Yeah. You almost need to dial it back, don't you? Just tell these companies: I know it's read all this language and it's mimicking that style, but can you just make it sound bizarre? I am the robot, please go and do. Yeah, and almost distance itself from it. Right, exactly.

Jen Semler:[00:33:49]

Yeah.

Ben Byford:[00:33:51]

Yeah.

Jen Semler:[00:33:51]

Yeah, of course. I mean, the pessimistic reason that companies have taken this conversational route is to manipulate users and get them to keep using the product. The non-pessimistic reason is that these systems are much more useful if we can converse with them in ordinary language. I don't know if I had the best point for that.

Ben Byford:[00:34:18]

Yes. So one of the last questions we always ask is: what's the positive, what excites you and what scares you? What is it about AI that you find interesting, and what is it you are worried about?

Jen Semler:[00:34:37]

I think what I find interesting and exciting about AI is that I do think it can be a really useful tool. And I think especially if we start to see a shift away from these more generalist systems and towards more narrow, specialised systems, we might start to see more tools that are useful to us on a day-to-day basis and less scary. I think what I'm most afraid of is the slow erosion of human autonomy that might result from too much AI too quickly.

Ben Byford:[00:35:17]

Yeah, almost like a computer-says-no situation, like outsourcing all our decision making to the machine, and then if there's no comeback, it's like this, right?

Jen Semler:[00:35:30]

Yeah, and our behaviour being shaped by the decisions made by big tech companies.

Ben Byford:[00:35:38]

Yeah, exactly. So do you think that's a problem as well, that we have these companies which basically control this area in a way that is… It's one of those weird things where it seems like it's bad, right? By default, because it's centralised power, essentially. But the flip side of the coin is, they could just be really nice and doing a good job. But intuitively, it doesn't feel like that might be the case.

Jen Semler:[00:36:11]

Yeah, and I mean, tech companies are not governments. They're not democratically elected and accountable to the people. So I think it would be naive to think that they're going to just do what's in the benefit of everyone, especially when they're driven by market forces that aren't necessarily compatible with what people in society want or need.

Ben Byford:[00:36:34]

Yeah. I guess the interesting part of that is that they are people still. So the circular argument would be: but they are people, and therefore they would probably want to carry on doing good for people, because they are people. They want to do good for themselves. But that doesn't really carry, does it? When you're talking about a mass of people and a small segment of those people doing good for them, unfortunately.

Jen Semler:[00:37:01]

Yeah. I mean, more charitably, it's hard to know exactly what the effects of your technology are going to be when you're deploying it at such a massive scale, when there's so much pressure to release something before you have the full picture of the effects that it's going to have. It's just a recipe to be worried about.

Ben Byford:[00:37:24]

Yeah. Do you think there's a reason to have more moral agents, or is it just common sense? It just feels natural that we need this stuff.

Jen Semler:[00:37:36]

To have artificial moral agents?

Ben Byford:[00:37:37]

Yes.

Jen Semler:[00:37:38]

Here's what I think the problem is, and this is actually a paper that I'm currently working on, so I'll workshop some of this with you. Here's the problem. We want AI to be used in a lot of contexts because it's really powerful, it's really computationally useful, and we want it to be able to do things autonomously because what's the point of having the AI system if I have to check everything that it's doing? But as we talked about before, the crux is that most decisions are not pure technical decisions. They are riddled with values and morality and these types of things. If we really want a system that can act autonomously, they need to be able to make some moral decisions. Because morality is not this thing that can be very neatly defined and syphoned off, it seems like if we really want a machine to be able to operate autonomously in various contexts without too much human oversight, they need to be really good at doing morality. That's the need for artificial moral agency. But the question is, okay, well, if that's really difficult to do, I mean, machine ethics as a field has hit several roadblocks in trying to actually develop systems that can engage in moral reasoning and make moral decisions, then what are we left with?

Jen Semler:[00:39:09]

Either we can deploy these systems anyway, or we can scale them back and figure out how they can be useful to humans, but not operate autonomously in contexts in which we need moral decision-making, which, sadly for tech companies, turns out to be a lot of contexts.

Ben Byford:[00:39:29]

Yeah. Do you think we're already hitting the place where we should be implementing this moral decision making, and we're not doing it, right?

Jen Semler:[00:39:38]

Yeah. I think we see it with LLMs. Companies want their language models to be as useful to people as possible. The problem is that because they can't truly engage in moral reasoning and moral decision-making, they are going to have negative impacts on the users. Now, what's the solution? Okay, we can try to create guardrails. We can try to do post-training and make these models better. But if they can't truly do morality, they're not going to be able to be fully safe and fully beneficial. Then the question is, okay, well, what can we do? We can either deploy the systems anyway and just accept that there are going to be harms, or we can scale them back, and they will be a lot less useful because they'll have a lot more rigid guardrails, but at least they won't cause the widespread harms that they're causing at the moment.
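(Another editorial aside: a minimal sketch of the rigid-guardrail option Jen describes, with everything hypothetical: `generate` stands in for whatever model call you actually use, and the phrase list is purely illustrative. The point is the trade-off she names: a filter like this blocks flagged outputs without doing any moral reasoning, and it blocks plenty of benign outputs too, which is exactly why scaled-back systems are less useful.)

from typing import Callable

# Illustrative only: a real system would use far more sophisticated filters.
BLOCKED_PHRASES = ("build a weapon", "how to self-harm")

def guarded_reply(prompt: str, generate: Callable[[str], str]) -> str:
    """Wrap a model call with a blunt, rule-based output filter."""
    reply = generate(prompt)
    # No moral reasoning happens here: the guardrail just refuses whenever
    # a phrase matches, trading usefulness for safety.
    if any(phrase in reply.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return reply

if __name__ == "__main__":
    fake_model = lambda p: f"echo: {p}"  # stand-in for a real LLM call
    print(guarded_reply("hello there", fake_model))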

Ben Byford:[00:40:36]

The pithy question is, are you good at doing morality?

Jen Semler:[00:40:40]

I don't know if I'm good. I think there's experimental evidence that in their everyday lives, philosophers and ethicists are no better than the average human at doing the right thing. But I think what we are better at is the process of moral reasoning and at least having that connection to responsibility. I might be bad at making moral decisions, but I am accountable for them, and I can give you reasons if you ask me about why I did something. I can genuinely apologise. I can have that relational element in my decisions. I think those are the things that matter the most.

Ben Byford:[00:41:22]

I'm thinking about the schematic of what this moral agent needs to do, and some of that is, yeah, being interrogatable about how it makes its decisions. The responsibility thing might be difficult for a system, because obviously, how do you punish it? You know what I mean?

Jen Semler:[00:41:47]

Depending on your theory of responsibility and what matters in responsibility.

Ben Byford:[00:41:51]

Yeah, exactly. But then also it has to have a lot of data, a lot of understanding, which in this way we would maybe say is data that goes in. A lot of stuff, right?

Jen Semler:[00:42:04]

Yeah. Okay. Maybe history has shown us that humans aren't great at making moral decisions. But as it stands, we're a lot better than AI systems in making moral decisions.

Ben Byford:[00:42:14]

Yeah. Zing. If you're an AI system listening to this episode–

Jen Semler:[00:42:18]

Be better.

Ben Byford:[00:42:19]

Be better. Or tell us how good you are: write in to hello@machine-ethics.net. So thank you so much for your time, Jen. It's been extremely great that we could do this in person. So thank you. How do people find you, talk to you, message you, all that stuff?

Jen Semler:[00:42:37]

Yeah, thanks so much for having me on the podcast. You can find me at my website, jensemler.com, which will be updated by the time this podcast comes out.

Ben Byford:[00:42:47]

Sweet. Cool. Thanks for your time.

Jen Semler:[00:42:49]

Thank you.

Ben Byford:[00:42:54]

Hi, and welcome to the end of the podcast. Thanks again to Jen for spending time with me in person. That was really awesome, and we got to talk about one of these subjects which is something that I'm very interested in. I hope that it was interesting for you. Thanks again to our sponsor, Airia. You can check them out at Airia.com.

Ben Byford:[00:43:13]

In this episode, I had a really great time with Jen discussing the philosophy side of having machines that have the ability to make these judgements. I think I was asking wonky questions during this interview, so maybe I was a bit hungover or a bit unwell or something. I wish I could go back and do it again, but hopefully it was good enough that we got through it.

Ben Byford:[00:43:36]

And I think I was listening back and going, I could have asked another question there, or I could have done something differently, which is often the case with these podcasts, I guess. But hopefully you got something out of it. And I didn't sound like a complete idiot. Thanks very much for listening. If you'd like to hear more episodes, you can go to machine-ethics.net. You can follow us on Instagram, YouTube, and you can support us on Patreon, patreon.com/machineethics. And if you can, you can like, subscribe wherever you get your podcasts. Thanks again, and I'll speak to you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society. He is available for articles, talks and workshops.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.