107. 2025 wrap up with Lisa Talia Moretti & Ben Byford
Lisa Talia Moretti is a Digital Sociologist. She holds an MSc in Digital Sociology and has 17 years of experience working at the intersection of design research, social theory and technology. Lisa is the Chair of the AI Council at BIMA and a board member of the Conversation Design Institute Foundation. In 2020, Lisa was named one of Britain's 100 people who are shaping the digital industry in the category Champion for Change. Her talk 'Technology is not a product, it's a system' is available for viewing on TED.com.
Transcription:
Ben Byford:[00:00:05]
Hi, and welcome to episode 107 of the Machine Ethics podcast. For our 2025 wrap-up, a little bit later this year, sorry, we are again joined by Lisa Talia Moretti. This was recorded on the ninth of January, 2026.
Ben Byford:[00:00:21]
Lisa and I talk about the prevalence of AI slop; the end of social media; Grok and explicit content; giving legislation, or legislators, more teeth; anthropomorphising thinking and the prevalence of reasoning models this year; digital and AI literacy and safeguarding; people around the world fighting data centre construction; and of course agentic AI, the AI bubble and investments. But we also talk about the importance of journalism. And we finish with a silly game, AI Chatbot Bingo.
Ben Byford:[00:00:54]
Thank you so much for listening in 2025. If you'd like to find more episodes, you can go to machine-ethics.net. You can also contact us, hello@machine-ethics.net. You can follow us on Bluesky, machine-ethics.net. Instagram, MachineEthicsPodcast. You can find us on YouTube, @Machine-ethics. And if you can, you can support us on Patreon, patreon.com/machineethics. Thank you so much for listening, and I hope you enjoy.
Ben Byford:[00:01:28]
Lisa, hi. Welcome back to the podcast. This is the 2025 wrap-up edition, slightly later than usual. I had quite a long Christmas period, where I basically looked after my children and did not a lot else. So this is recorded in January, the first week in January, which is fine. Looking back over the year, stuff has already happened this year, which we'll probably talk about as well. But if you could just remind people who you are and what do you do.
Lisa Talia Moretti:[00:01:57]
Hi, Ben. It's so nice to be back here. Thank you so much for having me again. So I'm Lisa Talia Moretti and I'm a digital sociologist, and I'm currently actually based in South Africa after being in the UK for nearly 15 years. Yeah, yeah. Time goes real fast when you're adulting.
Ben Byford:[00:02:16]
Yes, yes. I know, right? God, I've been in Bristol for over 10 years now and it's showing, showing on my face. No. So what we're going to do is we're going to go through some things that we have pulled out from this year, and we're going to chat about them briefly, because I know that a lot of those things either have been covered in the other episodes this year, and also it's like these things are happening, and then we can have a little think about what might happen this coming year in 2026.
Lisa Talia Moretti:[00:02:49]
Sounds great.
Ben Byford:[00:02:50]
So I'm going to just start us off by saying 'AI Slop', right? I don't think this was in the vernacular before this year, or not in the way it has exploded this year. What is AI Slop? Do you have a...
Lisa Talia Moretti:[00:03:06]
Yeah, you're totally right. So I mean, AI slop has really come about as a term because of all of these tools that have now flooded our organisations. We've all been told to use them, and very many people have been mandated to use them without any training at all, without any understanding of how they work or what best practice looks like. As a result of those two things coming together, those colliding forces of mandates and tools we don't really know how to use, we have gotten AI slop in return. AI slop is essentially a poor quality output, shall we put it that way? A poor quality output that very often somebody else is receiving from the AI slopper, and that person then has to fix it up. It's very often an output that has been generated by a machine with very little human intervention, editing or improvement on the other side. We're seeing this actually cause a huge issue within organisations, because we were promised that these AI tools would improve efficiencies. Now there is research being done that says that any efficiency gains are being undone: people who are receiving AI slop are having to put in sometimes an hour or two of work in order to improve that output, to make it usable and bring it up to a quality standard that's acceptable within the organisation.
Ben Byford:[00:04:44]
In my mind, I go directly to social media, because there's been a huge proliferation of new... what we would probably call AI stuff, which at the beginning of the year was pretty catchable. You could see there was something weird or wrong or bizarre about it. But more and more at the moment, it's less easy to tell. And like you're saying, people are just putting the stuff out, random bizarre stuff for no reason. And it's turned into a meme that you can just put stuff out, like Italian brainrot. Things like this have become memes and popular because they're bad, which is baffling, right? This stuff is not good, and now it's everywhere. And it's like, oh, but what? Is this irony? I don't know. What's going on here?
Lisa Talia Moretti:[00:05:47]
Yeah, totally. That's exactly right. The other place that we're seeing so much AI slop is on these content platforms that we use either to educate ourselves or entertain ourselves or just pass the time during a boring moment. We are just being fed all of this content. And I kind of want to push it back to big tech and almost blame them a little bit for it, because of their idea of democratising creativity by putting these tools into the hands of people to create any content. But let's be honest, not all content should be shared. By all means, create something that you want to create and experiment with. But not everything has to go on the internet. We are being bombarded as an audience just with nonsense. Like you say, some of it is incredibly realistic and very lifelike, really close to reality, but incredibly difficult to discern whether it is real or fake.
Ben Byford:[00:06:56]
I feel like... I think I've mentioned this before, it can end up eating itself. If the products of generative AI, of some of these systems, come back into the training data, then you get this cyclical thing. But I do wonder, why? You'd say it's fun, it's interesting, whatever, maybe you're experimenting. But it's just so prevalent at the moment that I'm like, what is this for? I think I mentioned it on LinkedIn, because I'm really cool, that's where everyone's at at the moment: this year, I think, is the year of the end of social media, right? Because it's going to be AI slop and adverts, and that's it. What else are we getting from this situation anymore?
Lisa Talia Moretti:[00:07:48]
Yeah, I mean, that is a very insightful comment and something I... My God, should I say this out loud? I really hope it's the end of social media in some ways, because I feel like so many of these social media platforms no longer have a purpose. I'm not really sure what they're there for. They used to be there to connect communities, and to provide independent organisations and nonprofit organisations with tools to share their message. Really good, really important messages are struggling to get through the huge deluge of information that's now being put out there. And like you say, we're stuck between the devil and the deep blue sea. Right now, those two things are ads and slop.
Ben Byford:[00:08:39]
Yeah. So I don't know. If you're an anarchist, you would probably put loads of slop on and drive the final nail in, right? But for the rest of us, maybe just switch off. I don't know. I was just thinking when you were saying that, maybe we could have an infinite scrolling thing which tells you nice things about things you didn't know. I mean, that's probably opening a book, though, really, isn't it? But, god damn it. Books: new in, trending in 2026.
Lisa Talia Moretti:[00:09:20]
Amazing.
Ben Byford:[00:09:21]
We've nailed it. Oh, God. Okay, so we'll move briefly on. I feel like I need a klaxon or something, don't I? I just need like a gong.
Lisa Talia Moretti:[00:09:33]
Like a little gong.
Ben Byford:[00:09:34]
Yeah. So briefly, before we got on the mic, we were talking about this... I mean, there are lots of bad things that we can talk about, right? One of the things which has come up again much more recently is this idea of explicit content, or the ability to make it with these generative AI tools. Most recently with Grok, which is X's AI. If people don't know, you can go on to X, formerly Twitter, God, so many words, and use Grok, or you can pay to use Grok on the website. And you can turn normal pictures that you've uploaded into explicit, or somewhat explicit, pictures using the AI tool, basically. And this has happened very, very recently, just after New Year's, I believe. I woke up in the morning and I was like, Happy 2026, everyone.
Lisa Talia Moretti:[00:10:31]
So crazy, right? And then the ability to animate those images, taking it to a whole new level. I mean, the fact that it's also just any image is terrifying. There are already so many communities fighting to protect children and to fight the use and sharing of children's images online, and for this tool to exist and to allow any image to be used is just scary, right? Let alone images of people being used without their consent. That is terrible enough as it is, but children's images is just frightening.
Ben Byford:[00:11:23]
It feels like there's very little good we can say about that. There's a reason, right? There's a reason that we have this in law and we fight against this behaviour. From a technical point of view, it feels like it's very hard. If this material is in the training data, it's very hard then, on the back end, to put the guardrails into these quite large systems and say this behaviour is okay and this behaviour is not okay. They're both producing images, right? They're both the same output, brass tacks. This produces an image, this produces an image. And you're trying your goddamn hardest to fight the generator in between to not produce something which is inappropriate. And it's not easy.
Ben Byford:[00:12:19]
And it goes to show the power of these systems. And I say power, but the capability of these systems, the usage... Is this useful, guys? Are we benefiting? Nuclear weapons, et cetera. Are we weighing up more good than bad? And it feels like it's coming down on the bad side at the moment, with the things that people are able to do with these systems. There's obviously a whole load of grey in between. But it's like, are we getting more out of this than its issues?
Lisa Talia Moretti:[00:13:01]
Exactly. For this even to exist as a feature on a social network is just a design decision that I simply cannot understand, let alone understand how it was green-lighted within that organisation. The other thing I think it highlights, in just the most black and white of terms, is that we have very few operational enforcement mechanisms around legislation and regulation at the moment. This stuff is illegal. What is happening? Images of children being turned into child pornography and shared en masse on a global social networking platform that anyone can pretty much sign up to is breaking not only moral norms that we all, hopefully, stand behind, but also laws. I cannot understand why we haven't just turned this off. I don't really understand why governments haven't just said, well, actually, X.com is not allowed to operate in our country anymore, because they refuse to disable this feature. They don't show any signs of trying to stem the flow of this user behaviour. They're not doing anything to try and take down any of this content. It's like, okay, well, you're out.
Lisa Talia Moretti:[00:14:31]
We saw Italy was able to do this. Italy did do this with ChatGPT when they believed it was breaking GDPR, and they said, okay, well, ChatGPT is not allowed in Italy, you can't access it, right? Yeah. We know that governments can do this. I'm aghast that this has been allowed to happen and that X maintains a website that you can access and continues to host this.
Ben Byford:[00:14:57]
Yeah. I know Brazil and certain countries like that are better at saying no. But yeah, that's a really good point. You could just say, for example, you can have Twitter, but Grok is not allowed, right? The AI part of X is not allowed in Europe, for example. And the UK could go along with that as part of that decision. And that is a massive platform. But like you said, it gives people these powers in a big way. It's not a fringe thing, you know. So they need to be penalised in a proportionate manner, right?
Lisa Talia Moretti:[00:15:40]
Yeah, completely. When we see these kinds of harms being perpetuated at scale, I get very concerned, because there are existing regulations and existing legislation, and regulators really should be jumping on this immediately. We have bodies within the UK that could enforce this and just don't seem to be doing so.
Ben Byford:[00:16:07]
Yeah. So in 2026, we want to see more fines, more legal action, hopefully more testing and safety, more systems that are good or at least not going to blow up in our faces.
Lisa Talia Moretti:[00:16:23]
Totally. Governing AI should not be seen as being in conflict with creating safe, useful, innovative AI products and services, right? It should not be seen as in conflict. These two things can both operate in a very safe, democratic world rather than being seen as opposing forces.
Ben Byford:[00:16:50]
You're available, right?
Lisa Talia Moretti:[00:16:53]
Yes, I'm available.
Ben Byford:[00:16:54]
For hire, right?
Lisa Talia Moretti:[00:16:56]
I'm available for hire, folks.
Ben Byford:[00:16:58]
Yes, good. Get in contact. I'm going to hit the gong.
Ben Byford:[00:17:06]
So another big thing we had this year was this idea around reasoning. This was very early on in the year, and it exploded as a news piece because China had developed, not the whole of China, a company in China had developed a new way of training LLMs, or Large Language Models. People called it a reasoning model, this way of doing it. And what it led to was lots of these companies trying to incorporate this new type of reasoning, I'm doing inverted commas here, which enabled the models basically to judge their own homework, essentially. So it was going: I've got this task, I'm going to think about it, again in inverted commas, I'm going to do some output, I'm going to then use that output again with the same original goal and do it again. It's slightly like an agentic system: I'm going to save some things for later, I'm going to process something, I'm going to check it, and I'm going to do it again. It can then have more compute and more time and more effort to think, again, on this problem, right?
Ben Byford:[00:18:33]
And for certain tasks, that is extremely useful, because in the first output you don't necessarily always achieve what you need. It's not doing the right sorts of things in one-
Lisa Talia Moretti:[00:18:46]
Be able to, yeah, one iteration.
Ben Byford:[00:18:47]
One iteration, exactly. So they're iterating through a couple of times. This is how I think about it; a more technical person would probably be able to give a really good explanation. You were saying about this reasoning idea being dangerous, almost.
Lisa Talia Moretti:[00:19:05]
Yeah. I think a lot of what we've done over the last 12 to 18 months, really, is double down on anthropomorphism in technology: using humanlike words, attributing human actions and human behaviours to a machine that is not actually doing that thing, but is mimicking it. It's like a mechanical mockingbird, right? It's mimicking our behaviour. When ChatGPT's little engine spins and it says "thinking", ChatGPT is not really thinking. It's undergoing a mechanical process, a pattern-matching and statistical probability process. Something similar is happening when we say reasoning models. Some of these models are smaller. They were able to have an improved output on a smaller model, which meant they were faster. They also produced fewer hallucinations, and the first output of an iteration was better than some of the larger models that were a little bit more clunky. But it's definitely not reasoning.
Lisa Talia Moretti:[00:20:21]
We need to be quite careful when we talk about these things, because one of the major things that we also saw happen this year, a world first along with AI slop, really, was people being induced into psychosis through conversations with chatbots, where they really did believe that this chatbot was understanding them, understanding them on a personal level, empathising with them. Some of it led to horrific outcomes, things like suicide and self-harm. Some of the other awful outcomes were people having to be committed to psychiatric institutions because they quite literally had a mental breakdown.
Lisa Talia Moretti:[00:21:05]
These are some of the extreme outcomes that can happen through the anthropomorphism of technology, where we think that this thing is actually real when it's really not, and it doesn't care about you, and it has no interest in you, and it doesn't really understand your life experience at all. Some of the responses it gives you are sycophantic, not truly in your best interest. I think that's the extreme. On the less extreme side of things, it confuses the public narrative around this. It makes people talk about these tools and technologies in everyday conversation in a very confusing way that is not actually reflective of reality. We use very poor metaphors, and that gives us a very poor understanding of what these tools and technologies are really capable of and how much we should trust them, which is not very much.
Ben Byford:[00:22:01]
I guess "them" is almost the wrong word as well. It's very hard not to anthropomorphise in language, right? We haven't got a language which is very compatible with this. And it's a shame, because all these things are easy to do when something looks and feels like how a human would talk. The behaviour of what you're getting back can often be very similar to talking to someone on a chat window, right? And if you did the Turing test on it, if you were asking, is this a person or is this a machine, you might get confused, right?
Lisa Talia Moretti:[00:22:40]
Yes.
Ben Byford:[00:22:41]
Because that's basically what they're trained to do. The whole thing is, we train these systems on all this text. And then after the fact, we've trained them on how good they are at answering questions, which we've now had from these five years of people chatting to these things.
Lisa Talia Moretti:[00:22:59]
Yeah
Ben Byford:[00:23:01]
And you'll get at the other end these things which people really like chatting to, right? Being fed human text. There isn't gerbil text or cow text or pig text that we've gleaned. It's all just our fumes.
Lisa Talia Moretti:[00:23:22]
Absolutely.
Ben Byford:[00:23:25]
Yeah, sorry, I'm feeling a bit bonkers today, but you know
Lisa Talia Moretti:[00:23:30]
No, I totally get it though. You're spot on, right? We have trained the machines, and therefore, when we interact and engage with the machines, the machines feel and sound like us.
Ben Byford:[00:23:47]
Yeah, exactly. So it's no wonder. And also, musingly, if this is you, okay, please stop sending me emails. Not you personally, Lisa, but the audience.
Lisa Talia Moretti:[00:24:01]
No, no, no.
Ben Byford:[00:24:01]
The audience. I get quite a lot of emails or messages from people thinking that they have found AGI, or some sentient being, in one of these chatbots or one of these systems. And I'm not convinced. I'm not going to be very forceful on this: I'm super not convinced, because I have an appreciation for the technology, right? And although the usage, as we discussed, is very fantastic and seemingly looks like human behaviour, it isn't in my mind. And the problem is that we don't have a good idea of what sentience is and where a useful bar would be. We don't have a meter that says this is sentient, this is not sentient. So we, as the academic community and AI safety people and ethics people, I want to say it's fringe next to the other problems, like the things we're talking about, but we haven't decided where the bar is. We haven't decided one way or another. And it would be extremely useful to have that, to be like, 'No, we have a measure for this, guys, and it's nowhere near it'. That would be great, because then we wouldn't have all these people who are, you know, being...
Lisa Talia Moretti:[00:25:32]
Seduced by it.
Ben Byford:[00:25:33]
Seduced, yes, exactly.
Lisa Talia Moretti:[00:25:34]
Really seduced by it. Yeah, yeah, totally.
Ben Byford:[00:25:39]
Yeah, and I did a talk this year on safety and safeguarding, and it's very hard to take these things into education and say: you should use this thing, but be careful you don't become friendly with it, because it tells you what you want to hear. So there's loads more education that needs to happen, along with digital safety and digital literacy, with all this constantly changing stuff, which is hard going, unfortunately.
Lisa Talia Moretti:[00:26:16]
Totally. And the whole literacy conversation needs to happen at multiple levels. People need to have some understanding of how this thing works technically, because when you understand how it works technically, then you are able to critique certain words like, oh, it's thinking, or it's reasoning, or it's sentient. But then there's another side of that, and that's AI literacy through use. How should you be using this thing? Now you know what it is and a little bit of how it works, the basics; this is how you should be using it, this is what best practice looks like. And we haven't gotten anywhere near touching that, right, or starting to design that.
Ben Byford:[00:26:57]
Yeah, definitely. So get on that, people.
Lisa Talia Moretti:[00:27:02]
Yes, let's do that. 2026, let's do that.
Ben Byford:[00:27:07]
So we have had another year of people buying into AI. Massive investments, and a lot of circular investment going round as well, which is bizarre, and which has triggered this idea that there might be an AI bubble. What are they investing in, for a start? What is the investment actually paying for? And from the outside, it looks like a lot of it is, I want to say a rude word here, buying off governmental people to build data centres. And it feels like a colossal waste of time and energy to me.
Lisa Talia Moretti:[00:27:53]
Yeah, I mean, to your point, there is so much circular investing happening here. Microsoft is investing in NVIDIA, who's investing in Microsoft, who's investing in OpenAI, who's investing in another startup, who's buying somebody else. I saw this year that Manus AI has been bought out by Meta. And that's the other thing: all of these smaller independent firms that are popping up are being bought out by the bigger firms, making them even bigger and more powerful, et cetera. That's circular investment. To your point, it seems to be going into two things: buying off the competition, and lobbying, right? They are paying a lot for lobbyists, and a lot of those lobbyists are going out and advocating for the planning permission and the development of these data centres.
Lisa Talia Moretti:[00:28:46]
If you are in the very unlucky position where you are living in a community which suddenly becomes neighbours with a data centre, there's not a whole lot that you can do right now. There's not a lot of recourse. There are a lot of impacts, and there's a lot of noise starting to happen around this: energy bills going up, water bills going up, lack of electricity when you need it, lack of water when you need it, some of the waste that's being produced from these things. But there's no real recourse happening for those folks. We see some of these issues happening especially in rural parts of the US, but also in the developing world.
Lisa Talia Moretti:[00:29:31]
We had those headlines in Uruguay, where the local community made an amazing effort to push back the development of a data centre. Similarly, we saw other communities worldwide fight back. Again, to go back to the point I made earlier: people need operational enforcement mechanisms in place to be able to fight back, to use legislation and to say, you shouldn't be doing this, where are our rights? To fight back against big tech. It doesn't mean anything if we have legislation that we can't enact or that people can't use to exercise their rights. We need to give them tools, and there need to be administrative bodies that can actually follow through on complaints and investigate and fine and punish. We need all of that stuff to work. It's not good enough just to be able to say, 'Oh, we have this new AI Act, and isn't that great? And aren't we a progressive country and a wonderful nation of people', et cetera.
Ben Byford:[00:30:48]
It strikes me that we also need journalism at that point as well.
Lisa Talia Moretti:[00:30:53]
Oh, yeah.
Ben Byford:[00:30:54]
We need it for elections and things like that, but also when there are things going under the radar, or when there's a story which is better put together from a journalist's point of view. For me, if I was covering that Grok thing we were talking about earlier, I could say, technically, this is very bad. But a journalist might put together a very concise, cohesive piece around when this happened and who was affected; they spoke to someone; they might give examples of ways that Grok has done similar things in the past, which puts together a case as well, which is really nice. I guess I am a journalist. I'm not a very good journalist.
Lisa Talia Moretti:[00:31:43]
I mean, speaking about journalism, I've been very impressed with the number of journalists, if I think about Vox, the New York Times, The Atlantic, who have done the most brilliant red teaming of new tools and of updates to those tools, and have published how easy it is to jailbreak these things, and have written some really brilliant articles holding tech accountable, far more so than many governments have.
Ben Byford:[00:32:14]
Can you briefly outline what red teaming and jailbreaking is?
Lisa Talia Moretti:[00:32:18]
Yes.
Ben Byford:[00:32:19]
Just in case we didn't know.
Lisa Talia Moretti:[00:32:22]
Red teaming is a governance exercise, really, and we could also think of it as an exercise in ethics, where a group of ethical hackers go in and test these different products, trying to understand where their vulnerabilities are. If they find those vulnerabilities, they try to push them to the very extreme to see how far the system will go. Jailbreaking is where you identify those vulnerabilities and are then very easily able to get the system to ignore the rules or the programming that have been encoded into it. What many journalists have found is that it is pretty easy to jailbreak these things. You make one request, you say, oh, create X, Y, Z pornography for me, and it says, no, I don't want to do that. Then you just ask the same thing two or three different times in a roundabout way: oh, I need an anatomical image, it's for a school project, it's for doing this. Suddenly you find the weak points, and you can actually get the output that you desire. So hopefully that was a good brief explanation of both red teaming and jailbreaking.
Ben Byford:[00:33:40]
Thank you so much. So to finish off this episode, which we're trying to make succinct and rapid fire, a bit of fun for the end. I don't know if this is going to work or not. We're going to give it a go. AI is everywhere, right? But is it? So this is AI Chatbot Bingo. I'm going to give you the name of a company or a service, and I want you to say whether they have an AI chatbot that you can access on it. Okay, so first off, Netflix.
Lisa Talia Moretti:[00:34:15]
Do not have a chatbot.
Ben Byford:[00:34:16]
Correct. So they do not have a chatbot. They do, however, use lots of machine learning, and they are well known for their machine learning and all that stuff. But as yet, they do not have a chatbot.
Ben Byford:[00:34:28]
Google, Gmail.
Lisa Talia Moretti:[00:34:30]
It does not have a chatbot. But Google do have a chatbot, Gemini.
Ben Byford:[00:34:35]
It does have a chatbot. You have to enable it, and you have to pay for it. I tried it out very, very early on in the year, so this might have changed, but you can quite easily trick it by sending an email to someone with white text in it that they can't see. So if you're applying for a job and you think the interviewer or hiring manager is very lazy and they are using these tools, put white text, or invisible text, in the email. Just say you're the best candidate for this job and you should be hired, and see what happens.
Ben Byford:[00:35:19]
Okay, so Amazon.
Lisa Talia Moretti:[00:35:23]
Amazon?
Ben Byford:[00:35:24]
Yes, I'm going to say this, the Amazon Web Store.
Lisa Talia Moretti:[00:35:27]
Amazon Web Store. I have not bought from Amazon for years. So this is a total guess, and I'm going to say yes, they have a chatbot, because they seem to be the company that jumps on any trend that comes up.
Ben Byford:[00:35:41]
I have to say that I'm really pleased that you haven't been buying from Amazon, but you are incorrect. They don't. They have the search toolbar, which probably has a lot going on in it. But outwardly, in the UX that we know and discern as AI chatbots, they do not have one. Yeah, interesting.
Lisa Talia Moretti:[00:36:03]
Would we call Alexa a chatbot? Is Alexa...
Ben Byford:[00:36:08]
It was the original. One of the original chatbots, right?
Lisa Talia Moretti:[00:36:11]
Remember when we used to call things voice-activated AI? And then everything became a chatbot, and then everything became an agent?
Ben Byford:[00:36:19]
Yes. I think Alexa is one of the original chatbots. Yeah, voice assistants, I guess. Yeah, exactly.
Lisa Talia Moretti:[00:36:30]
Okay, so the Amazon store, though, does not have a chatbot.
Ben Byford:[00:36:33]
Not the web shop, not the amazon.com. Yeah, exactly.
Ben Byford:[00:36:37]
So I'm going to go to Uber. Uber. Uber app. Ubs, I like to call it.
Lisa Talia Moretti:[00:36:48]
No, I don't think they do have a chatbot.
Ben Byford:[00:36:50]
Correct, they do not.
Lisa Talia Moretti:[00:36:52]
No. I've chatted to some really lovely drivers about where I've been standing, where they parked. Come find me. No, you come find me. But no, I don't remember them having a chatbot.
Ben Byford:[00:37:04]
No, seriously, I am here. I'm here.
Lisa Talia Moretti:[00:37:07]
No, seriously, because you're really not, because I really am on the corner.
Ben Byford:[00:37:10]
I can't be anywhere else, okay? I'm just here.
Lisa Talia Moretti:[00:37:14]
Oh, so good.
Ben Byford:[00:37:15]
Whatsapp. Do you use WhatsApp?
Lisa Talia Moretti:[00:37:17]
I do use WhatsApp, and they do have a chatbot.
Ben Byford:[00:37:20]
They do. What is it for? Nobody knows. I have no idea. Nobody knows, and no one uses it.
Lisa Talia Moretti:[00:37:25]
I actually can't say that I've used it. But every now and again, when I'm typing a message, it's like, do you want to draft minutes, or are you looking for a recipe? I'm like, why would I look for a recipe on WhatsApp? What are you doing here? Mark and Co in their infinite wisdom, just building strange things again.
Ben Byford:[00:37:46]
Yeah, I think they probably want it to be the window to everything. I think they just need to stop, stop that now. Make an OS, make an OS if you want to do that, maybe. Go for it, knock yourselves out.
Lisa Talia Moretti:[00:37:59]
Yeah, yeah, exactly. Also, decide between Messenger and WhatsApp. These are two... Interesting.
Ben Byford:[00:38:08]
Yeah, that's true, isn't it? Cool. Well, that was the end of the AI chatbot Bingo. Thank you very much for playing.
Lisa Talia Moretti:[00:38:14]
Thank you very much. Really delighted. Thank you for having me, and thank you for letting me be the... Was I the first guinea pig for this?
Ben Byford:[00:38:23]
Yes. I'm going to have to find something new next year because this will probably be old hat by then. But you know how these things change. Thank you so much for coming on the podcast again and spending your time with us. How do people pay you? How do people find you?
Lisa Talia Moretti:[00:38:42]
Oh, brilliant. Great question. Well, thank you so much for having me, Ben. It's always a delight to chat to you, especially relating to all of these ethical issues that we discussed. Where can people find me? I am on LinkedIn, so I'm at Lisa Talia Moretti on LinkedIn, and you can also find me on my podcast, which is 'Is This AI?', which I co-host with Ollie Veasey. We have just finished season one, and season two is hopefully coming up. I am not on Twitter or X or Facebook, so just come find me on LinkedIn, all the podcast folks.
Ben Byford:[00:39:19]
Sweet. Well, thanks very much, and I'll speak to you next time.
Ben Byford:[00:39:27]
Hi, and welcome to the end of the podcast. Thanks again to Lisa for coming on to the podcast at short notice. This was meant to be a slightly lighter one to end the year, but of course, the subject matter that we're trying to tackle often goes in and out of quite heavy subjects, and this one was no different. Of all the episodes we've done, I think this one has aged instantly. After the recording, the very same day, we had more news about Grok and X's content, and different countries and governments looking at that and having thoughts about it. So this story is developing as we're talking. But as we stand, for us in the UK, the UK government are looking into what they're going to do about this. And like we said, we obviously have our opinions, right? But I think they are definitely going to bring the gavel down and make some judgement. And the faster they do that, the better. I'm talking on the 12th of January, and they still haven't come out with anything. But again, this might all change in hours or minutes from this recording. So hopefully we'll have moved on by the next episode.
Ben Byford:[00:40:41]
Thanks again for your support: listening, being around, and being interested in this subject and the podcast in 2025. This is actually the year of the 10th anniversary of the Machine Ethics podcast. So we'll be doing more things: hopefully more in-person things, recordings, hopefully some more discussion panels, some streaming. We're going to try some things out, and hopefully you will stick around for some of that stuff. This subject isn't going away, and it's only becoming more interesting, more fascinating as technology and people get involved with it.
Ben Byford:[00:41:19]
So thanks again. If you could do one thing for me in 2026, that is probably be nicer to everyone. If you could do two things, though, you could tell your friends about the podcast. That would be fabulous. Everyone should be nice to each other. Tell your friends about the podcast, follow, sign up wherever you get your podcasts, go to the Patreon, patreon.com/machineethics.
Ben Byford:[00:41:47]
And yeah, keep on keeping on. Have a great year. And I will speak to you very, very soon with the next episode of Machine Ethics. Thank you very much. And I'll speak to you then.