53. Comedy and AI with Anthony Jeannot
Podcast authors: Ben Byford with Anthony Jeannot
Audio duration: 44:30
Website plays & downloads: 80
Tags: Comedy, Recommender systems, AI creativity, Content moderation, Jokes
Anthony Jeannot is a critically acclaimed stand-up comedian who has sold out shows around the world. He also hosts the Highbrow Drivel podcast, where he and his comedy friends engage an array of experts in deep-dive conversations.
Ben Byford[00:00:01] Hi and welcome to the 53rd episode of the Machine Ethics podcast. This episode was recorded on 3rd February 2021. We’re chatting with comedian Anthony Jeannot. We talk about Netflix and recommendation systems, finding comedy in AI and difficulties doing that, AI-written movies and theatre, content moderation, biased algorithms, bringing an AI Ben back from the dead, AI in comedy and constructing jokes recursively, and much more.
Ben Byford[00:00:37] You can find more episodes at Machine-Ethics.net, you can contact us at hello@Machine-ethics.net, you can follow us on Twitter Machine_Ethics, and Instagram at Machine Ethics podcast. If you would like to support the podcast, you can find us on Patreon, at patreon.com/machineethics. Thanks very much and hope you enjoy!
Ben Byford[00:01:03] Anthony. Thank you and welcome to the podcast.
Anthony Jeannot[00:01:05] Thanks for having me.
Ben Byford[00:01:06] If you could please introduce yourself. Tell us who you are and what you do.
Anthony Jeannot[00:01:10] So I am a comedian, when that’s not an illegal thing to do. And yeah, I have a podcast which I was thrilled to talk to you on, where we discuss various things, and I think that machine learning is obviously one of those things that kind of – I guess it’s so different to what people expect it to be, right? Because in sci-fi, they expect Terminator and this and that, and so if I was to ask people about machine learning or like, “What do you think are the ethics of machine learning?” down at the pub,
people would be like, “Anthony, are you stoned again?”
And I’d be like, “No, but actually what do you think?”
And they’d probably say like, “Oh, I just don’t think it affects me.”
Whereas, I look around and I’m like, “Dude, I mean sure, it doesn’t affect you if, you know, you never read the news, or you’re not trying to date someone right now, or if you don’t watch movies and TV on Netflix, but if you do any of those things, I’m pretty sure it does, you know.”
Ben Byford[00:02:12] I’ve never heard of it. What is this “Net Flix?” What is that?
Anthony Jeannot[00:02:15] So I’ve not really looked too much into it, but I believe that it is a kind of decoy activity that you suggest to somebody if you’re hoping to take them to bed that night.
Ben Byford[00:02:27] Ah, okay.
Anthony Jeannot[00:02:28] Do you know what’s an interesting thing? For AI sceptics, I think Netflix is a really great example of the reason why people maybe are like, “Surely they’re not sophisticated,” because I’ve read that Netflix is using AI to inform some of its content creation, right. And there is no way an intelligent AI would ever give Adam Sandler a multi-million dollar movie deal. There’s no intelligence in that decision.
Ben Byford[00:02:59] I think they have algorithms that check to see what’s missing in the catalogue. So you can see lots of people are watching romance films, and lots of people are watching vampire films, and lots of people are watching these documentaries about making super cars, or something like that. And then there’s a Venn diagram of people who like all those things, but there’s no programme which incorporates all those things. And that’s how they can go, “Okay, we’re going to make that.” Which I find is fascinating, because it’s really uncovering these weird diamonds in the rough. And you’re still hoping that it’s going to be a good programme, or it’s going to work out or whatever, but they’re fulfilling a need that is actually wanted. But you can get some really bizarre combinations of things that way.
Anthony Jeannot[00:03:49] Yeah, wish.com I imagine is using the same kind of algorithmic decision making. Because like, “Do you want an ashtray that is also a resume?” No! Nobody wants that.
Ben Byford[00:04:02] I feel like that’s a lot of things in life, though. That there’s lots of things that are wholly unnecessary, but they exist for some reason or another, and I don’t know who’s buying these things. I don’t know. I’ve got this thing about unnecessary products.
Anthony Jeannot[00:04:20] Yup. Like any $2 shop, right? Or Poundland. When you just go in and you’re like, Who woke up today and said, “I need a Christmas decoration on January 3rd that is also a tote bag and can hold my nappies? Also, it can turn into a crown for a costume party, and I need it for a pound and I need it really, really quick.”
Ben Byford[00:04:49] I probably do need that, actually. I mean, I’ve got two kids and maybe you’ve just hit upon the most perfect idea, unbeknownst to you there.
Anthony Jeannot[00:04:51] Tomorrow on wish.com.
Ben Byford[00:05:04] I’ve not heard of this “wish.com”, I don’t know if it’s one of those things – I’m not joking – I’m legitimately, what is that?
Anthony Jeannot[00:05:12] They are, as far as I can tell, a massive, massive advertiser on Facebook, and they are like the $2 store of the Facebook advertising world. And they just have some of the most bizarre products ever – and I can only assume it’s a game of smart algorithmic decision making. And you will get things like an ashtray that is also a sex toy, or a pram that is also a laser beam and a cigarette lighter. And you’re just like, “Why are these things together, and why do you think they interest me, somehow?”
Ben Byford[00:05:52] You know that you obviously fit in a demographic who want to buy tat, and you have bought something in the past that has elucidated them to the fact that you are suggestible in some way. Obviously, following on from the podcast that we did together – which was excellent, so go check that out – the one thing that strikes me is that there is so little comedy and laughter within the portrayal of AI, robotics-y sorts of things. I can think of one thing, and that’s maybe in Interstellar, where they have, I think it’s Case, who has to dial down his humour setting. That’s good. I can’t – can you – think of any other portrayals of –
Anthony Jeannot[00:06:42] I would argue that C-3PO was comedic. I mean, he plays the straight man, but it’s a funny straight man.
Ben Byford[00:06:53] Yeah, yeah. He’s definitely light relief, isn’t he.
Anthony Jeannot[00:06:55] Yeah. Exactly. I have something I’ve come across, an article that is a selection of AI-written jokes, and they are all hilarious, but all just because of the absurdity. It’s like, one of them is:
Q: Why do you call farts of tea?
A: He was calling the game of dry.
Ben Byford[00:07:14] They’re just absurdist, right?
Anthony Jeannot[00:07:15] I believe what it’s trying to do is take the language from well-known jokes and map it together, but what you get instead is hilarious.
Ben Byford[00:07:26] There was a natural language model that produced a film script. A science fiction film script, and it was called Sunspring, or something like that. And it was – I’m not saying bad – but it was just so bizarre. And also, the director had to shoot this thing where it didn’t really make any sense. So you’re cutting between bits of dialogue which don’t really naturally join up, because they’re almost like two monologues happening. So it’s quite hard to watch, but it’s obviously brought to life through the direction and the acting, and all that sort of stuff on top of that, which actually then makes it more appealing. But go check it out, I’ll put a link to it in the show notes as well.
Anthony Jeannot[00:08:16] Just listening to it, it does sound – again not to dump on poor Adam – but it does sound like it’s more worthy of a film contract than Adam Sandler, so, you know.
Ben Byford[00:08:28] You’ve got a real beef with Adam Sandler, man. What’s the deal? What’s he ever done to you? I found out the name of the film was Sunspring, and it’s actually 2016, so that’s quite old –
Anthony Jeannot[00:08:39] Well old.
Ben Byford[00:08:41] – in technology terms. So kudos to them.
Anthony Jeannot[00:08:45] I wonder if they’ll do a sequel?
Ben Byford[00:08:47] I’m pretty sure I’ve seen something about the director doing other things in a similar vein – some sort of more action-based love thing. I’ve seen some of the script and it was mostly choreography for the dancing and stuff. And then these people had to act that out, and you get these sorts of absurdist movements. Like – obviously this is a podcast, so you can’t see me flailing around.
Anthony Jeannot[00:09:19] For those of you who can’t see Ben on video, he looks like, you know at a car sales yard, how they have the flailing arm inflatable, that’s exactly what he reminds me of.
Ben Byford[00:09:30] Yeah, that’s it. “Come and buy the AI thing.” That’s what it’s going to be like. Are you not worried that your – given the puns and the jokes that we can now automatically create – that your future career is going to be gazumped by these AI?
Anthony Jeannot[00:09:49] I mean, we need to get back on stages before I even worry about that.
Ben Byford[00:09:53] That’s true. Yeah.
Anthony Jeannot[00:09:54] It is interesting, right, because I have long suspected that the long-lasting impact on comedy of the pandemic will be that people are more absurd in their humour, because we had to be, by being locked in. And if that is what comedy becomes, then it’s ripe for machines to go over the top, because that’s what they’re going to do best, right?
Ben Byford[00:10:20] Yeah, definitely. I mean I think they should do all your writing, in that case.
Anthony Jeannot[00:10:25] Yeah, get them in on this podcast, you can talk to an AI. How much easier would that be as a podcaster if you could just talk to the machine and never have to be like, “Does 9pm work for you?” No, you just do it off your machine.
Ben Byford[00:10:40] I’m definitely going to have to do that in future podcasts. There is a theatre show, which I haven’t seen, which I would like to see, by some fellow technical lovely people in Bristol, and it’s called I Am Echoborg, and it’s about – your job is to be the voice piece for the AI. And it’s like a job interview. So one of the audience members goes up and is the voice of the AI, and a second audience member comes up and sits and is interviewed by the AI. And it’s quite – I mean, I haven’t seen it – but it’s quite a bizarre thing, because you’re still having a human interaction, right. But the human’s being the puppet in this situation, and they’re having to repeat these things which are sometimes odd and bizarre, or straight. And they’re having to put their own twist on it, too.
So next time, maybe I’ll get you on, but I’ll present all the language to you through an algorithm, and you’re just going to have to speak it however you like to speak it. You’re going to have to perform the interview. How’s that?
Anthony Jeannot[00:11:51] Ah, yeah. Why not? I mean, assuming that the AI is good, I would love to take credit for somebody else’s work, that is right up my alley. That is my modus operandi.
Ben Byford[00:12:06] Done, well we’ll stop here, and then we’ll come back.
Anthony Jeannot[00:12:10] Come back, and I will give you better things to say from a robot.
Ben Byford[00:12:15] Yeah, exactly. These things often portray how ridiculous or how far we’ve come, right. So do you think that comedians have spent enough time with technology, or written enough jokes about the situation that we’re in, because as soon as, like, there must be so much in there to mine. Especially since we’re spending a lot more time in front of our screens?
Anthony Jeannot[00:12:44] That’s a really good question, and I think, giving you a completely unfunny answer, as if to point out the difficulty of comedy on AI, is that the key to comedy is getting to the point really quickly. So you have to use these broad brush strokes that everybody understands, right. And the problem with AI, I think – and obviously we contribute to it by not thinking about it – is there is just so much to unpack, in terms of what people expect it to be. And what – as we already discussed at the top – what it already is that people filter out, out of fear, right.
I think if people actually took stock and listed out, “Okay, AI’s influenced me here, here, here, here throughout the day,” you’d be like, “What the hell? Something or multiple things that I’ve had nothing to say over, nothing to do with are literally influencing everything I discover, and how that impacts who I am.” You’d be like, “I’m not up for that.” But because the cognitive dissonance helps us just, you know, it’s convenient. I could think about the way that Google Maps is absolutely intrusive and doing all these things, but then also where am I going to get my coffee? So it’s a really hard one to unpack on stage.
What I do think may change that – and it’s something that has increased in speed since we spoke last time – is the aggressive roll-out of content moderation on social media platforms. Because I think since Jan 6th, and since they started banning people, those content moderation platforms have got a lot more automated, even since then. I’ve been banned three times since January 6th. And never for doing anything outrageous or horrible. Like the first one – and this is where I think maybe people will want to talk about it more, because it illustrates one of the problems with AI at the moment and the way it’s deployed – is that it’s a crude tool that has no context. So the first time I got banned from Twitter was on the day of the insurrection, when Kevin Sorbo aka Hercules was defending the insurrectionists. I think he quoted something like, “We must respect these brave patriots,” or something like that. And I tweeted back, simply, “I think you mean domestic terrorists,” and then two choice adjectives. And I got banned and he didn’t.
Ben Byford[00:15:36] Yeah, yeah. That seems like a language thing, though. Not like a sentiment thing, which is weird isn’t it.
Anthony Jeannot[00:15:43] Well exactly, that is the problem. The way they’re employed at the moment, they’re crude language tools: “Is this something that we’ve flagged as bullying before or isn’t it, based on this selection of words?” I’m absolutely not a freedom-of-speech guy – I think that’s ridiculous. But if you have, again, unelected, undemocratic, non-transparent machines making decisions that can impact people’s creative freedoms, as comedians or whatever, then I think you’re more likely to have comedians going, “Well, hang on a second. Let’s start to tinker and unpack what is in this,” and then hopefully we can see more about it, because I think comedians are selfish at heart and that’s what it’ll take.
Ben Byford[00:16:35] I don’t know if you’re selfish. That’s mean. You’re fine.
Anthony Jeannot[00:16:40] I mean, it takes a lot of self importance to stand in front of a roomful of people, the only one with a light on you and a loud voice and talk about yourself for an hour.
Ben Byford[00:16:53] That’s true. I was thinking, when we were talking before, about what it would take to stand up in front of people and talk about this stuff, and make it funny. I think that’s a big challenge. Because, like you were saying, you have these shorthands, these big brush strokes that you can defer to, because we want to land with the most amount of people, so we’re going to use this type of language. But if you wanted to get to a joke which involved some sort of machine learning understanding, and how neural networks worked and why these things are stupid, you would have to then build up this underlying layer upon layer of mini-jokes on how these things are created, right?
Anthony Jeannot[00:17:38] Yeah.
Ben Byford[00:17:39] Until we get to the main crux. Then, unbeknownst, they’ve learned about how it all works. And that sounds like a really good challenge, right there.
Anthony Jeannot[00:17:52] I think that is exactly right. I think if you wanted to tackle that subject, that is what you would need to do. Because essentially, to be able to manage your ultimate punchline, you need to take everybody along the way, and everything you need to explain needs to be a good joke, and needs to get to the punchline within about a minute. It gets super-meta. You’ll end up like one of those crazy people with the pinboards and strings and – that could be a great prop for the show.
Ben Byford[00:18:28] Yeah, yeah. I’m expecting that once the shows open, right.
Anthony Jeannot[00:18:32] I’ve got enough time on my hands for that.
Ben Byford[00:18:34] Yeah, exactly.
Anthony Jeannot[00:18:36] I have a question for you on that. So, again, hypothetically but going down this rabbit hole, I imagine as someone who actually pays a lot of attention to AI, you get a bunch of ridiculous and funny examples of it going wrong. What are some of those that, you know, as I’m writing the show I need to be like, that’s where I get them. That’s where I get the laugh?
Ben Byford[00:18:58] The problem is that a lot of the stuff that I do really emphasises the negative, right. It emphasises, “This is a big problem because of x, because this person was misidentified and now they’re really annoyed” and like, you know. It’s to do with that bigger headline stuff.
I was trying to research some stuff for today, but what is stupid is the ways that things can fail. Like face recognition in China for example, was recognising the CEO of some company’s face on a bus as a jaywalker, and you get these lovely things that tie together, because no one’s thought about it. Not that they haven’t thought about it, they haven’t covered off all these strange eventualities in their design of the products. So you get these really wealthy people getting fined on a daily basis, probably with their own technology, which is quite nice. I think that’s a good example.
Anthony Jeannot[00:20:02] Yeah, I mean it is a great example of due diligence in technology. I remember watching a TED talk of a facial recognition designer who, for some of the products she was working on, had to use a white mask to get the machine to recognise her face. I was just like, “You’ve really messed something up quite bad if an expressionless mask is recognised above and beyond an actual human face.”
Ben Byford[00:20:32] Yeah. That’s a whole load of stuff. I think there’s quite a few episodes where we’ve talked about these unintended consequences, these unconscious biases, all these different ways of these sorts of outcomes to trickle into, not just the system, but the way models are implemented, and the way the models are trained. The data, and the feature selection of the data that the data scientists are doing, and the kinds of outcomes they’re using. Like there’s all these different ways that these different things can fit together, right, and all these opportunities to pull the lever, and to pull the lever in a certain direction. And there’s always a human pulling that lever, and that’s the important thing about all these technologies is that we’re not quite at that self-creation bit yet, so – yeah, I think those things are pretty horrific, and pretty stupid. But I guess that’s easy for me to say on this side of the mic, right.
Anthony Jeannot[00:21:32] See, this is the question I have, because I’m going to stick with you. I’m going to say this is stupid, on the basis of this, right. I am not smart enough to put together one of those platforms. I wouldn’t know the first step so it’s clearly very smart, very intelligent people working on these things. There are enough examples of it going wrong for people to be like, “Ooh, maybe we should do some testing before we put it out in public.” And yet, these clearly smart people keep making this clearly dumb mistake, and I get that there’s a lot of levers, but at some point you just test it.
Ben Byford[00:22:12] Yeah. Well, you can imagine how that plays out, right. So you’re in this big company, and you’re like, “Okay, so we’ve got this new face recognition thing, which is going to tell us whether you’re happy or sad,” and – for some reason we want that – and you know, you’re Jeff, and Jeff is a Caucasian British person, and he tests it out on himself, and goes, “Okay, great. It says I’m sad.” Or whatever, and then he goes to Jackie in the next office, and Jackie’s a Caucasian American lady, and she comes in and it works for her, and they test with someone else. And maybe they do some more testing – some outside testing – and it still works. But that testing pool is always going to be much smaller than the millions of people who are going to use it, right. So that’s what you’ll find.
Anthony Jeannot[00:23:07] But the problem is that we can’t test on 7 billion people.
Ben Byford[00:23:09] You can’t test on 7 billion people, but you can test across a demographic of 7 billion people, right. You can do better and worse testing. But it’s not just testing, it’s just like building the thing in the first place better. Incorporating some of those people in that process, in the data, in the conversation.
Anthony Jeannot[00:23:28] But it does come back to something that we kind of touched upon when we spoke last, and it is a really hard thing. If we’re trying to write comedy about it, this would be one of the difficulties as well: it is such a weird thing in the way that AI technology increasingly seems to be a really great funhouse mirror, that just blows up the worst parts of biases that we as straight, white people don’t see in day-to-day life. Then all of a sudden you have this AI going, “Oh have a look, this company forgot to test on some subset of the community.” And so it’s one of these things where we have a broken society, and booming technology that reflects that brokenness, and it gets very hard to go back to the AI and go, “Ooh, bad AI.” People keep making bad AI.
Ben Byford[00:24:20] I’m going to get slightly more technical on you here, to try and explain how I feel about that, but these machine learning systems learn from data, right. So a lot of them aren’t generating causation, they’re not generating, “x does y because of z,” right. They’re generating, “These things are correlated.” So if you give it data that has those biases in it, e.g. human data, because we’re all crazy humans who are narcissistic and alcoholics, and all sorts of strange things that we do. Here’s me, drinking a glass of whiskey – other vices are available. It’s going to pick up on the data it’s given, in a lot of these instances.
The machine learning systems, the AI isn’t going, “Oh, that’s interesting. That must mean that that’s a thing.” It doesn’t know anything’s a thing, right? It’s not making those correlations into causation. It doesn’t understand that. So it’s hard to expect it to be better than us, essentially, in that respect. But where it is better than us, it’s always going to be faster, and it’s going to be possibly more efficient, and it’s possibly going to be more accurate across a range, right. It’s not going to be more accurate than the most accurate human being at a certain process, but it is going to be getting there. So you’d be like, “I can’t look at a hundred million images and work out all the cats in them, in 10 minutes.” You can’t do that. As a human being, you just can’t. But you could probably do that better than a machine given more time, right. You could definitely do that.
Anthony Jeannot[00:26:10] Yeah.
Ben Byford[00:26:10] So that’s not what machine learning things are giving us, they’re giving us other stuff which is speed and efficiency and the correlation stuff. And it’s up to us to then – at the moment, like I was saying – put two and two together, investigate things, work out how things work, and tease that out, and then make slimmer models. Models that don’t incorporate all that crap because we now have learned something. We now know more about the world, and that’s how we should be training it, not, “I’m going to sell you that sex toy with a lighter more efficiently.” It’s coming back round.
Anthony Jeannot[00:26:47] I do think that’s a really good point in terms of speed and efficiency versus accuracy, and I do think – again, it’s one of the reasons perhaps why so many people have misunderstandings about where AI is at – they’ve got a degree of expectation, and then there are all these big news articles about things, and people probably don’t appreciate that it’s about speed and not accuracy. Cambridge Analytica for example, I remember, one of the big headlines around that time was like, “Given all your Facebook data, Cambridge Analytica only needs six interactions to know everything it needs to know about you.” And I just remember seeing that and thinking with most people I only need one: “Do you deny climate change?” That’s all I need to know. “Are you following QAnon?” Don’t need to know anything else.
Ben Byford[00:27:41] Is the world flat?
Anthony Jeannot[00:27:42] That’s all I need to know.
Ben Byford[00:27:43] Yeah, that’s a good point. I think it depends what you think knowing someone really is, as well. For their purposes, maybe that’s all the information they need to rip you off, or to sell your personality category to someone, but that’s not who you are, right?
Anthony Jeannot[00:28:03] In that specific case, all they needed it to do was, you know, nothing too major. Just deter people from voting.
Ben Byford[00:28:11] Yeah, yeah. So, Anthony, what do you think about deep fakes? Are you a deep fake?
Anthony Jeannot[00:28:18] I could be. I think, do you know what would be great? It’s a long way away now, but it would have been great to be able to deep fake my face into really impressive scenarios when I was a single person on Tinder. Just like deep fake myself climbing a mountain, deep fake myself – you know?
Ben Byford[00:28:43] Yup.
Anthony Jeannot[00:28:44] Nothing that I couldn’t pull off and you know, make a conversation about in real life. I don’t want to trick people too much. But just enough to be like yeah –
Ben Byford[00:28:53] – that was me –
Anthony Jeannot[00:28:53] – do you know what else? –
Ben Byford[00:28:54] – do you like my six pack?
Anthony Jeannot[00:28:55] I would have loved the chatbot to have learned my decision making and chat pattern, so it was me. I’d just set it loose and it would notify me when I had a date, right. “Hey, you’ve got a date Tuesday 8pm, it’s all in your calendar.” And I just rock up, and it’s still me, right.
Ben Byford[00:29:11] I think that’s very doable.
Anthony Jeannot[00:29:13] Is that just because I’ve got a limited vocabulary?
Ben Byford[00:29:18] That is – especially the text version – we could do that, alright. Let’s get on that. I am slightly worried, and I don’t know, this might get you worried as well. I’m slightly worried that if at some point in my future, when I pass over, that I’ve got all these recordings, right, that you’re listening to right now of me speaking, and there’s all this text that I’ve produced in my lifetime, writing things down, there’s probably lots of images of me. I think you could just recreate – I’m not sure why you’d want to – but you could just do Ben Byford, what do you think about that?
Anthony Jeannot[00:29:55] Have you not seen this, because I’m about to blow your mind, it’s serendipitous, because I was talking about the future of dating recently, and I came across an article. Microsoft have just patented a technology that I believe the idea is that it will use your social media data, your speech recognition, your speech patterns, and create hologram imaging if video footage is available, and it will be a bereavement tech. It will be like if you were to – knock on wood – tragically pass young, and your family were like, “No, no, we could never move on. It has to be Ben or it’s nobody. I can’t eat, I can’t sleep. Ben or nobody.” And then they can just – Microsoft can – charge them a butt-tonne of money for Robo-Ben.
Ben Byford[00:30:43] Then they can just install me in the middle of the table and I can just float around answering questions.
Anthony Jeannot[00:30:46] Help me Ben Byford, you’re my only hope!
Ben Byford[00:30:48] So, I feel like scared, but also you can see why people are attracted to it. Although it’s on that kind of uncanny valley where it’s weird to know what to think about that, I think.
Anthony Jeannot[00:31:05] I think it’s what I’d put in – one of many I’d put in – a big category of, there is no way I’m going to have the answer to that, so let’s not think about it. Because I think even with like social media and the stuff we’re talking about now in terms of AIs influencing discovery on a host of platforms, it’s developing so fast and so suddenly and we’re so unaware of it, and obviously life is messy and it’s this and that and this and it’s hard to get causation anyway. The impact of the technology I think we’re years away from understanding. I can imagine my grandson saying, “And that’s where they all went wrong” and all of us looking back and being like, “We know that’s where we all went wrong, we thought it was over here.”
Ben Byford[00:31:57] Yeah, yeah, yeah.
Anthony Jeannot[00:31:58] Yeah, it’s just another of those, Could that be where it all goes wrong? Maybe.
Ben Byford[00:32:04] This tiny thing that we did.
Anthony Jeannot[00:32:07] I did see – it’s a meme that I did like, it’s a bunch of dominoes, and at one end it was like, “Guy who wanted to get laid at Harvard in college,” and then the other end it was, “insurrection on January 6th”. So it was like, this was the domino that –
Ben Byford[00:32:24] It could have been so different.
Anthony Jeannot[00:32:25] It could have been so different.
Ben Byford[00:32:07] If you had the opportunity just to have a load of things just rocking round your house, what kind of things would you want? So obviously it’s easy to think about Hoovers, hoovering and going around the house. What other things would you enjoy – robotics and AI things that could help you out?
Anthony Jeannot[00:32:47] Do you know what. A lot of it comes down to – I think I’m a bad friend – a lot of it comes down to conversational stuff. There are a lot of friendships I have where I’m like, “Ah, you are a good friend, but a lot of these conversations are work, so maybe if I could just roll the AI out on that.” I make terrible fashion decisions – I would really like help with that. Do you know what, all it needs to be is a mirror that goes red if I have toothpaste on my face. That’s it. If I’m leaving the house with toothpaste on my face it goes red. I go, “Oh, toothpaste on my face. Go back in.”
Ben Byford[00:33:21] That must happen at some point. There must be some sort of patent for a mirror that tells you how you are today compared to how you’ve been in the past. And what the time is and a calendar of things you have and whether you should take your vitamins. I’m sure that there’s something like that in the works.
Anthony Jeannot[00:33:43] Yeah, I mean Tinder would have a buttload of data on what people are likely to swipe right on or left on or whichever. So all you do is you take that data and you put it on a mirror, and it gives you like, “Percentage someone would swipe on you today” – boom.
Ben Byford[00:33:59] Ah, man. That reminds me of a thing that we used to teach in the data science courses I worked on. And it was around Uber, right. So they have all this locational data and – quite a few years ago now – they put out this blog post that was about “Rides of Glory”, I think they were calling it, or “Fares of Glory”. And it was about this Venn diagram, this correlation of rides that was happening in cities – so they could compare against different cities – of how many people, and at what kinds of times of the day, you would have passengers leaving from probably a work address, or some other type of address, going to an unknown, never-been-before address, and then coming back, like 6 hours later, to their home address, and correlating that with this “Ride of Glory”, right. This one-night stand, basically.
Anthony Jeannot[00:34:54] Yeah.
Ben Byford[00:34:55] Let’s be more vulgar. And they were calling out more promiscuous cities, and stuff.
Anthony Jeannot[00:35:05] I do love when companies use data for – in an anonymised way – for clever stuff like that, like when Spotify were like, “The million people who played All By Myself on Valentine’s day, we see you.” I’m like, “Yes, guys, give us that. Wholesome content.”
Ben Byford[00:35:20] That’s that human nature stuff, that mirror back on us, right. And a lot of this stuff, coming back to the bias and stuff, that is often like betraying what it is that we aren’t seeing about ourselves or what necessarily we couldn’t see, because it’s just amongst all this information. And it goes, “Well there’s a correlation there and it’s because lots and lots of people just happen to do this weird thing at this weird time and we didn’t know and no one talked about it,” and now we can see that.
Anthony Jeannot[00:35:58] Yeah. I have a friend who’s like – because obviously there are bits and pieces of things that are trying to do this right now, unsuccessfully – but he’s got a smart watch, a smart ring. One of them is temperature, one of them is mood and one of them is heart rate, and then they all plug into his smart app, and it’s like, “Ah, you are getting lots and lots of data on yourself. What does it tell you?” I mean I’m sure someone somewhere will find this profitable, probably.
Ben Byford[00:36:28] There is a good website for that, actually, with people who are obsessed with making notes of certain things about themselves. So it’s often people who will note down when they have a stool, like pass a stool, like every day. Upload it to this website, and you can cross compare all this stuff. It is bizarre, but also fascinating. So, yeah, there are all sorts of people doing all sorts of things, so I’m not surprised at that behaviour to be honest. And that’s without the apps, right. It must get even worse.
Anthony Jeannot[00:37:02] I mean, yeah, exactly. Because that is the thing that mobile phones did, right? They’re like, “Here you go, go free with whatever data you want,” and you’re like, “Ah, look at these graphs. Ah, man. Have you seen my graphs?” Like yeah, great.
Strava. Strava’s another one. I’m like, “Do you know what, you went for a run. Great. Do you know who you can tell? Someone else.”
Ben Byford[00:37:27] There was this case where – I don’t know whether it was Strava, or it was the Nike running app. It was one of those, and Strava or Nike put up the information of popular runs in different places, right, across the world. And one of these runs was in an unmapped area of the country that had a military base on it, and this person or people were using the app and doing their exercises with it, but unbeknownst to them were sending this information that they weren’t supposed to be sending out into the world. And they could now – someone could pinpoint where they are, basically. So you get all these odd occurrences where, again, these circumstances haven’t necessarily been thought about, or rather people haven’t been told that this is a problem, right.
Anthony Jeannot[00:38:20] Yeah. Do you know what, though? Tying together multiple parts of our conversation, what we need is an AI movie script about a dude, this lonely dude, he’s running on his military base, unbeknownst to him his perfect match is nearby. She’s a runner, and finds this popular run on Strava, all of a sudden they discover the army base. Oh no, she’s arrested, he breaks her out. Happy ending.
Ben Byford[00:38:58] Nice. Great, and it’s all scripted by AI, as well.
Anthony Jeannot[00:38:57] All scripted by AI, so you get beautiful things like, “The cat is black,” “Yes, but why is it on the floor?”
Ben Byford[00:39:04] And then they kiss. No one knows what’s going on. I like it. I don’t have any cash to throw at it right now though.
Anthony Jeannot[00:39:09] It is funny. I did read the other day that you can’t have conversations in dreams. Your brain isn’t wired to have two people talking to each other. You can have dreams with two sets of dialogue, but they can never intersect. And it sounds a lot like the AIs are writing our dreams.
Ben Byford[00:39:30] That is bizarre. I mean my dreams are so boring right now.
Anthony Jeannot[00:39:36] It is another AI thing, right. Rubbish in, rubbish out.
Ben Byford[00:39:37] Yeah, yeah. That is true. You get these bizarre things coming together in your mind or in this AI which, eh, at the moment, it could be so much more exciting, for sure. I’m sorry, I don’t have any good dreams to tell you about.
Well I’m going to call it. But before I do that, I’m going to ask you another question. Is there something that really excites you and really scares you about our future of automated technology?
Anthony Jeannot[00:40:12] That’s a great question. I think as somebody who, again, works in digital advertising a little bit, does comedy and has been on the wrong end of the bans in the last couple of weeks, I think my immediate curiosity and concern is the way that it impacts discourse now. Which isn’t a funny answer but it’s a true one. It is a huge concern of mine, the way that these echo chambers work, and the way that it seems like people who I disagree with and would call scumbags are better at manipulating them than us – and there’s a little bit of envy there from me as well. All of those things worry me greatly.
I think what I am excited about and optimistic about is the fact that smart people make these things, right. And there is within that the possibility to remove some of the stuff that we discussed earlier. There is the opportunity to rebuild some of these things deliberately, if we take that opportunity. And that can be – in the same way that the biases to begin with were completely unconscious and nobody noticed – undoing them could also be super-unconscious and nobody actually needs to feel hurt by it. It’s that crazy thing of when you talk about privileged people: equality feels like oppression to the privileged, but actually that’s because you’re taking stuff away consciously. Actually, if you were to suddenly remove it in a way that they didn’t know, then all of a sudden you don’t have crybabies saying, “Oh, no. I’m not getting what I want.” No, you just didn’t notice it, right. So I think there’s this huge opportunity to bend the stick back the other way by being deliberate with some of this stuff and removing some of the inequalities that we didn’t know we had built into society to begin with.
Ben Byford[00:42:10] Yeah, nice. I think that that’s a really nice sentiment. I don’t think it’s one of those things where you can just pull the lever and have it overnight, but you know, if you look 50 years into the future you can move towards that. So thank you very much for joining me on this bizarre episode.
Anthony Jeannot[00:42:26] That’s alright.
Ben Byford[00:42:27] Where we rambled about all sorts of things. Pound shops, and –
Anthony Jeannot[00:42:33] wish.com. I apologise if Adam Sandler sues you at the end of this.
Ben Byford[00:42:38] Adam Sandler, yeah, exactly. So if people want to find you, follow you, contact you, how do they do that?
Anthony Jeannot[00:42:44] So, on Twitter, it’s @AnthonyJeannot, my podcast is Highbrow Drivel, so there’s a great episode with Ben there, I highly recommend you check it out, where we discuss a lot of – I guess like you said at the start, the beginnings of this conversation started there. On Instagram I’m Anthony Jeannot. Obviously I want to get Ben back, so if you’ve got any questions from this episode that you would like us to discuss again, hit me up and we’ll discuss doing this again.
Ben Byford[00:43:17] Nice. Thanks very much.
Anthony Jeannot[00:43:17] Lovely, thank you.
Ben Byford[00:43:19] Hi, and welcome to the end of the podcast. Thanks again to Anthony for joining us today. And again for asking me to interview on his podcast. It was a bit of a different episode this time, so hopefully it was a bit more open to people who maybe didn’t know all the lingo and wanted to get up to date, but also have a little bit of fun. I hope that came across, that was the kind of intention. Slightly more light-hearted episode this time, as opposed to our normal output. So thanks very much for bearing with us. If you’d like to give us some feedback, then please, that would be really great. Contact me at hello@machine-ethics.net and follow us on Twitter and Instagram: Machine_Ethics, and Machine Ethics podcast. If you’d like to support the podcast you can do so at patreon.com/machineethics. Thanks again for listening and I’ll see you next time.