87. Good tech with Eleanor Drage and Kerry McInerney

This episode we're chatting with Eleanor and Kerry about what good technology is and whether it's even possible. Topics include: technology as political, watering down regulation, the magic of AI, the value of human creativity, how feminist, Aboriginal and mixed race studies can help AI development, the performative nature of tech and more...
Date: 2nd of April 2024
Podcast authors: Ben Byford with Eleanor Drage and Kerry McInerney
Audio duration: 53:49 | Website plays & downloads: 58
Tags: Feminism, Values, Regulation, Philosophy, Academic, Creativity | Playlists: Philosophy, Values, Feminism

Dr Kerry McInerney (née Mackereth) is a Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she co-leads the Global Politics of AI project on how AI is impacting international relations. She is also a Research Fellow at the AI Now Institute (a leading AI policy thinktank in New York), an AHRC/BBC New Generation Thinker (2023), one of the 100 Brilliant Women in AI Ethics (2022), and one of Computing’s Rising Stars 30 (2023). Kerry is the co-editor of the collection Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023, Oxford University Press), the collection The Good Robot: Why Technology Needs Feminism (2024, Bloomsbury Academic), and the co-author of the forthcoming book Reprogram: Why Big Tech is Broken and How Feminism Can Fix It (2026, Princeton University Press).


Eleanor is a Senior Research Fellow at the University of Cambridge Centre for the Future of Intelligence, and teaches AI professionals about AI ethics on a Master's course at Cambridge.

She specialises in using feminist ideas to make AI better and safer for everyone. She is also currently building the world's first free and open access tool that helps companies meet the EU AI Act's obligations.

She has presented at the United Nations, The Financial Times, Google DeepMind, NatWest, the Southbank Centre, BNP Paribas, The Open Data Institute (ODI), the AI World Congress, the Institute of Science & Technology, and more. Her work on AI-powered video hiring tools and gendered representations of AI scientists in film was covered by the BBC, Forbes, the Guardian and international news outlets. She has appeared on BBC Moral Maze and BBC Radio 4 'Arts & Ideas'.

Eleanor is also the co-host of The Good Robot Podcast, where she asks key thinkers 'what is good technology?'. She also does lots of presentations for young people, and is a TikToker for Carole Cadwalladr's group of investigative journalists, 'The Citizens'.

She is also an expert on women writers of speculative and science fiction from 1666 to the present, the subject of her book An Experience of the Impossible: The Planetary Humanism of European Women’s Science Fiction.

She is the co-editor of The Good Robot: Feminist Voices on the Future of Technology, and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines.

She began her career in financial technology and e-commerce and co-founded a company selling Spanish ham online!


Transcription:

Transcript created using DeepGram.com

Hello, and welcome to the Machine Ethics Podcast. In episode 87, we're talking to Dr Eleanor Drage and Dr Kerry McInerney, both from the Good Robot Podcast. This episode was recorded on 11th March 2024. We chat about the Good Robot Podcast: what is good technology, and is it even possible?

The importance of joy and magic in technology, technology as being inherently political, the value of human creativity, how feminism, Aboriginal, and mixed race studies can help in AI development, and the performative nature of technology. As always, if you like this episode, you can find more at machine-ethics.net. You can also contact us at hello@machine-ethics.net. You can follow us on Twitter, machine_ethics.

Instagram, machineethicspodcast. YouTube, youtube.com/@machine-ethics. And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks again, and hope you enjoy. Hi, guys.

Thanks for coming on the podcast. If you could just introduce yourselves: who you are and what do you do? Kerry, you could start. Sure. I'm Dr Kerry McInerney.

I'm a research fellow at the Leverhulme Centre for the Future of Intelligence, where I co-chair a project called the Global Politics of AI, which looks at how AI development is impacting international relations. But I'm also a science communicator, and I do quite a lot of work as an AHRC/BBC Radio 3 New Generation Thinker, bringing my work on feminist and anti-racist approaches to AI to wide audiences. And I'm really delighted to be on the show. Sweet. Thank you.

And Eleanor? Kerry's introduction is always really professional, and I always feel like I'm trying to, you know, remember the things that I do sequentially. So I'll give it a go. I'm a senior research fellow, also at the Leverhulme Centre for the Future of Intelligence.

And I specialize in applying feminist and anti-racist ideas to making AI better and safer for everyone. And I have the great joy of working with Kerry, my work wife, on a number of projects, one of which is The Good Robot, the podcast that we have, and we've got this new book out together. And then I also do a number of a bit more practical projects, which is quite unusual for researchers. I think both of us are told a lot in our centre, you know, we need to do things that really mean something to the world.

Whether that's giving the BBC a statistic on just how underrepresented women are in science fiction as AI engineers, and it's much worse than in real life, or whether it's applying feminist and anti-racist ideas to making the EU AI Act, this enormous bit of legislation, more impactful, so that when companies are self-auditing to meet the EU's obligations, they do it through a feminist and anti-racist approach. So lots of different things. Awesome. And I apologize, because I got sent the book just this week and I've literally... I've tried. I read the introduction over the weekend, so I'm poised to carry on reading the rest of the book.

So this was really recent. So I guess the podcast and the book and your work kind of all collide. So how does the podcast then turn into this book, I guess, is the question. So the goal overall was to get all the people that, you know, I mean, people that really mean something to us, on the podcast. And we have a diverse range of interests.

Kerry and I love to read. We are, you know... Kerry's been at Cambridge for 8 years, but she also has an enormous range of interests, including cooking, crocheting, music, etcetera. So we wanted to also invite onto the podcast artists, science fiction writers, and all sorts of people that are doing something cool in the AI ethics space. We get bored quite easily, so it's really nice to have that kind of interesting turnover of guests. And then we ask them what should be quite a simple question.

What is good technology? And in fact, that question becomes more complicated, because good technology is not good everywhere. It's good in a particular context. For someone, it might mean their blood glucose monitor, but only when it's working well and has the cap on, not when it malfunctions. For the philosophers, it tends to be an opportunity to talk about, you know, what is good and for whom.

So these larger, more abstract questions. So it's interesting to see how different people approach it. And there's a lot of love that resounds in the book. I think it's something that we don't have enough of in the world at the moment. People really love each other in the book, but also can disagree.

And there's a sense of solidarity, but also of difference, both in the styles and approaches to the question of good technology. But overall, that's what Kerry's and my contribution to the book is: bringing these people together, saying coalitions are really important. It's not just about individual voices. It's about how you edit them, how you bring them together, how you put them in conversation with one another.

I'm really interested in turning the tables on you guys, and I apologize if this is, like... I'm hoping that you've gestated this idea for long enough. So what is good technology? And is it even possible? I love that you're challenging us with our own questions, because I find it always really fascinating to hear people's responses to this question. And sometimes people really respond in quite a negative way to the question itself, because they'll say, well, this is creating a really unhealthy binary around good and bad.

Or, like, is this even the right question to be asking? Which is both incredibly important, but also a wonderfully academic thing to ask: are we asking the right questions? As someone who's been, you know, full-time in academia a long time, I definitely fall into those same kinds of patterns of thinking. But the more that I do this podcast, the more I become convinced that, in the same way that there isn't one kind of bad or one kind of unhappiness, there can also be a multiplicity of goodness.

And I think the only way we can maintain hope in this, like, very bleak world is to believe that what one person's good technology looks like might be different to another's, but that doesn't necessarily mean we can't have this multiplicity of good. And, you know, I know there's a very famous quote about how every happy family is the same, but every unhappy family is unhappy in its own way. And I don't think that's actually true. I think happiness, goodness... like, one of the main purposes of trying to explore all these different approaches to technology is recognizing that there is this plurality to them. So what do I think good technology is?

I see good technology as something that, in both the large things and the small things, helps us live our lives more freely and more expansively, that pushes us towards a world without borders, a world without prisons, a world where we can live without fear. And, you know, I really think that there doesn't need to be a cap on how we imagine those technologies. And that doesn't mean that I think the technologies we use every day aren't very complicit. I love that scene in The Good Place. Sorry.

This is a spoiler for anyone who's not seen The Good Place, but it's been out for quite a while now, so I feel fine: there's a scene where they talk about how hard it is to live a good life now because the world has become so globalised and complex. And that's something I think about every single day when I use technology. But at the same time, I also hope that even within those complicities, we can find moments for resistance, find moments for joy, and find moments for what Luke de Noronha and Gracie Mae Bradley call non-reformist reforms. So making changes which aren't about affirming a system that is unjust, but ultimately are about trying to get us to a better place. So those are all quite abstract ideas.

I know sometimes it can be quite hard when people talk in very abstract terms, so I often use the example of something as simple as knitting needles as a good technology, something which is still, you know, not separated from the world that we live in and how capitalism exploits us, right, when we think about where yarn comes from or where wool comes from. But at the same time, the simple act of knitting, for me, feels quite magical, being able to create something with 2 sticks and a ball of string. So that's how I think about this question of good technology. Wow. What a beautiful response.

Well, we think a lot about what the negative sides of technologies are, and why, you know, good technology is never good for everyone, etcetera, etcetera. But the really important thing for me, in how I make my life livable and not, you know, not too depressing, is trying to retain the sense of enchantment that we have even while we know technologies are all dual use. There can be a negative impact, whether it's... I love this piece of software I use, and I'm not funded by them, but Descript is amazing for editing podcast transcripts. And there is a sense of magic and wonder every time I delete a section of the text file and it deletes a corresponding section of the video. And retaining that kind of magic and wonder is really important for me, because I think that, you know, we can disenchant the world really easily.

There's this idea that technology has deleted enchantment, has made a mockery of magic, and that this has all but disappeared. And actually, with the kind of return of the tarot card, which seems to be really having its moment right now in popular culture, at events that I go to there's always, like, a tarot card reader. And I think people are really kind of hunting for that. I blame the Enlightenment, which was trying to produce this idea of progress devoid of spirituality and a sense of the unknowable.

So that's a really important thing for me to retain. It doesn't mean that we're moving away from science. It just means that there is a way of interacting with technology that needs to be joyful, but considered. That doesn't mean that we should be naive and not realize that, okay, the editing software I'm using actually probably takes me more time, because now we don't have our editor anymore. So the labor impact of that is still very, very clear to me.

So we need to stay with what one of my favorite philosophers, Rosi Braidotti, calls the headache of trying to think through our reliance on technologies that we may hate, that we may disagree with, that have an ethical impact, but that we also take delight in. That is the reality of life, this kind of muddy mess, and we need to try and enjoy it in some way. God, it is a mess though, isn't it? I think those are really great answers, so thank you.

I think one of the things that really struck me... I guess there are 2 things I would like to bring up with both of those answers, which are, I guess, not rebuttals, but things that I noticed. What you were saying, Kerry, strikes me as very political and very kind of values-led. It's this kind of striving for a better situation. Right? Not, you know, putting a plaster over what we have, but actually trying to transition in a more meaningful way over here.

But that kind of presumes this idea of where we're going. Right? And that is embedded with some values, which is, you know, a whole sticky situation. And I guess science fiction, or just fiction generally, has a place in that, as well as philosophy and academia. To be able to know where we're going is a useful thing.

Right? Like you pointed out, it'd be nice to have fewer borders. You know? But then how do you get there, and how does that work, and all that sort of thing, exploring that landscape? I actually believe deeply in that one as well.

So I don't know. Do you find that this whole endeavor at the moment is just very political and very values-led? And is that worrying? Because, obviously, the moral values that people have do change, per culture or ideology and such. Yes. Yes.

No. Absolutely. I think it is deeply political. And sometimes it's helpful just to lay that out and say we recognize that the work we do on the Good Robot podcast, the work we do and all of our incredible thinkers in the book do, is deeply political work, and sees technology and politics as always being really intertwined. And this is something I definitely struggle with, which is, you know, on the one hand, I think it is a crucial role of feminist thinking around technology to try and imagine different kinds of futures.

I get very worried about the rhetoric of us being locked into certain kinds of, quote, AI-powered futures. I get worried about who's propagating that rhetoric: largely big tech companies and national AI strategies. But at the same time, I'm also quite wary of leaving us, you know, in a kind of state where we're saying, okay, there's going to be this feminist progress narrative, and things are always gonna get better because we're always working towards a better world, and that's inevitably going to come. Because, unfortunately, I think we've seen a lot of ways in which there have been massive political and social transformations which have been very positive.

But we've also seen places where we haven't necessarily seen these promises fulfilled either. And so I was talking about this, actually, with Eleanor and with Sabine from Queer in AI. I was using immigration in my home country, Aotearoa New Zealand, as an example, and talking about how my parents say that there was a generation of New Zealanders in, say, the 1950s and '60s that they felt was actually more pro-immigration than the generation that followed, in a kind of direct upsetting of this multiculturalist progress narrative. And they said, actually, I think that the post-war generation was more tolerant or more accepting of immigrants and wanted to see this vision of a multicultural society, whereas by the time the economic recessions hit in the '70s and '80s, you ended up with another generation of New Zealanders who very much positioned immigrants as a threat to their jobs and their economic livelihoods. But I remember at the time finding that quite challenging, because I was a lot younger, and I felt like, of course, things must be getting better for us.

There are many more people in New Zealand who look like me. There are many more people with similar cultural backgrounds, and yet the way that we're portrayed or perceived, our role in that society, maybe is still always in flux and changing. So I agree that we need to really be working to have an active hand on the steering wheel if we want those promises of political change to come true. One of the key messages of feminist thought is that all knowledge is situated. And that means that it's always political.

There's always a politics to the location, to the moment in time, to the context. And I think this can be scary for companies, because they want to try to be as apolitical as possible. And at some point, it's just worth saying, you know, this is our politics, because your approach to AI ethics isn't about trying to make the technologies neutral, because that's impossible. What we need companies to be doing is just stating where they're at, and then it's up to the consumer to decide if that's the right company for them, or if that's the client that they want to be working with. There's a really nice paper, that's really radical, that I like, and it's on algorithmic reparations.

And it takes the idea of reparations, which is still a source of lots of political shouting in the UK, often with my parents at home, and it tries to use that idea to explore why algorithms that do justice, that work against power, need to be really explicit about doing that. And that means not seeking redemption for the past, but trying to work against some of the wrongs of the past, by actively seeking good on behalf of the groups and demographics that had the worst time out of the way that power was distributed in society. And even just to say, okay. No.

That's not what we want to do, and make that explicit. That, I think, would be a movement in the right direction, rather than pretending that it isn't exactly that difficult, because those are the conversations that need to happen. It really needs to be that difficult and that explicit. Yeah. Yeah.

Awesome. I think the idea of just companies fessing up to being somewhat political... and this is a PR nightmare, right? But, like, appreciating essentially their role in society and going, well, we make stuff and stuff has an impact.

And therefore, you know, we're changing the landscape, and that is a political act, essentially. And just being mindful of those things, I mean, would be a massive step forward. And being, like, responsible, accountable. That's it. Accountable for these things.

Because otherwise you can say all sorts of things, can't you? Yes. Yeah. And I think this is unfortunately something we've seen a lot with big tech companies, where there is a public rhetoric around AI safety, and then privately there is a lot of lobbying to water down regulation. I think OpenAI is probably one of the most famous examples here, when it comes to, say, their behind-the-scenes lobbying on the EU AI Act, while at the same time Sam Altman was becoming a leading voice on AI ethics and AI safety on the global stage.

At the same time, I think the point you've raised around companies having politics is interesting, not only in terms of the products they produce, but also increasingly as these companies start to act as political players. And this is something I work on with the AI Now Institute, a policy think tank in New York, which is drawing on the work of people like Swati Srivastava, for example, who puts forward this idea of hybrid sovereignty, asking what kinds of political leadership and political action are emerging outside of the state, or outside the way that we might ordinarily think about politics. And when you look at the size of these companies, I think it becomes increasingly hard to deny that they are political actors, even when we think about this in the most state-centric and traditional international relations way, which is through the lens of the state. I also want to say, when I was at school, I didn't study politics, and I didn't really know what was the political realm and what wasn't. And another key message of feminism is that politics is everywhere, you know.

There's a politics to the domestic space, as there is to the way that we're taught things in school, to education. And, you know, we know certainly that there's a lack of accountability amongst our politicians, but do we connect that to the lack of accountability in big tech more broadly? So I would really love for this to be taught in schools, you know, the politics of the tech industry, of what Kerry just talked about, and how that impacts our day-to-day life, which is also a political life. So I don't know if this is mean, but I kinda wanted to go back to, Eleanor, your response earlier to the original question of what is good technology. Because I feel like I'm just gonna say I'm a realist and see what happens.

But I think the magical, spiritual stuff that you were referring to, and I'm probably not putting this in a particularly useful light, is both anti-progress and tied to the idea of there being a world which is finite. Right? So these tech companies and capitalism rely on this kind of infinite growth, which doesn't really exist. So maybe the magical and kind of more spiritual realms are helping fight that notion. But I also feel like, for me, as a not particularly spiritual person and a very scientifically logical person, it sounds like something I don't want to engage with, basically.

And something which is, you know, again, I think you brought up, not anti-progress or anti, like, learning and understanding, but it kinda feels like that to me. You know what I mean? I'm so glad that you brought this up, because every time I see a Google engineer say that their system is sentient, I think what they're really doing is exploring the magic of it. They're sort of leaving the language of science behind and using the language of the humanities, of consciousness, of philosophy.

So I really deeply see it in engineers. In fact, I see it more in the way that engineers interact with their systems. My mom did electrical engineering, and she was one of the first women to do round-the-world yacht racing competitively. She's now a dinosaur in her seventies. And when she tells me how things work, whether it's, you know, domestic appliances or bigger pieces of kit, there is a magic in her eyes, explaining something that is man-made.

You know, she literally lights up in the process of the explanation. It gives me no magic to look at things and how they're put together, really. My boyfriend also studied engineering, and he loves to see how things work. I quite like understanding how things work, partly because I was single for a long time, so it's very important to know how things work because no one else is gonna help you.

But it doesn't instill in me the same sense of magic and wonder and awe as a great book or, you know, stuff that people who like literature enjoy. So I think what I'd really like to do is give a different language to the way that scientists are finding pleasure in their systems. I don't think, you know, we need to discuss consciousness in a way that's quite unscientific. I mean, let's be honest, the way that Googlers were talking about whether their systems were sentient is not sciency.

And there's lots of evidence to suggest that consciousness is not a singular phenomenon, but is the result of lots of nonconscious cognizers, so thinking things that come together to create what appears to be a single entity. And actually, this is a kind of myth that is grounded in very anthropocentric ideas about what consciousness is. So there's a point at which the scientist moves away from explanation towards magic. And actually, maybe that's the twist to the story: I want to reinject some real insight back into it.

Well, I mean, maybe that's... it feels like ChatGPT and the large language models and things like that are magical in that formulation, because you ask it to do something, or you ask it, you know, to make prose. And it can, in a way that we appreciate and can read, and we can formulate our own kind of ideas in our heads about what that is, you know, how that's touching us and how that feels as an embodied consciousness or whatever. So I guess the magic at that point is how we react to the stimulus. And, you know, if we're talking about how us as beings relate to the world, you know, if it looks like a duck, it's a duck. Right?

You know, if it looks like it's a biological thing that does something, then, you know, we can relate to it in that way almost. I think GPT is a perfect example, as is digital art. The people that know more about how it works actually experience more magic and wonder than the people who don't. I mean, I don't really like digital art, but my friends who work in generative AI love it, partly because they understand the mechanics behind it. And that gives them, you know, a real sense of wonder.

And this is what Einstein said, you know: the more he knew about the universe, the more awe-inspiring it was to him. So I think there's this very false assumption that the more we know, the less magic there is. But actually, it can definitely be the other way around.

Yeah. I hate the poems created by GPT, because I love poetry. I've never met someone who was like, oh, I'm obsessed with Yeats, he was just reciting poems to me, but who also, like, loves ChatGPT's poems. Yeah.

Yeah. I mean, I've actually got a hunch that a lot of the value of human creativity is derived from the stories we tell about the mediums and the people that make it, the authors and all this sort of stuff. So I feel like there's a paper coming along here. But there's a story to be told in the creation of a product. And when there is a limited understanding of the creation process, and the outcome is not necessarily visceral, then I feel like it's difficult to relate to it.

But it's weird, because on the flip side, you do get generated music, which is visceral, and it hits you in a different way. So I'm trying to untangle this at the moment, but it's an interesting place, I think, creativity and AI anyway. Yep. Yeah.

Absolutely. So I know that we've talked about some of this already. So one of the other questions that you guys ask on the podcast, The Good Robot, is how can feminism help? And I guess what we're asking is how feminism can help with the creation of good technology. On this podcast, we talk about AI and algorithms and that sort of thing.

So, beyond what we've already talked about, are there some other explicit ways that you see that your work, and feminism generally, can help? Yes. Yeah. Absolutely. I mean, I think that this has been one of the most exciting movements in the field of AI ethics, very broadly speaking: this really big movement towards understanding how other kinds of movements and activism and areas of research, which have explored injustice and the way that it shapes the world, can be and are extremely relevant to how we think about new technologies. AI poses certain kinds of new problems.

But often, the problems that they pose are either replicating existing problems or entrenching other kinds of inequalities that we already see in the world. And so we can't really fully grapple with how these technologies might be changing our societies or transforming our relationship with technology without thinking about these other relations of power. And so for me, I see feminist thought and activism and histories as a really important way of rewriting not just the stories that we tell about AI, to draw on what you just previously said, Ben, but also of thinking about what our pasts and presents have been with technology and what kinds of futures we want to have with it. And so one way that I think about this a lot is through the lens of Asian American feminism and Asian diaspora feminism, which have always interrogated the intersections of race, gender, and sex, in terms of thinking about what it means to exist in a society that maybe has not been built for you, or a space that has not necessarily been hospitable to you, and what it means to rewrite both our human and our technological relations to be a more equitable and inclusive place.

And sometimes that looks like technological fixes, but a lot of the time it looks like a lot more than that. It needs political fixes. So a lot of the work that I do specifically is thinking about how attributes like gender and race get represented in things like datasets. So I draw on a lot of the insights of things like mixed race studies, which looks at the experiences of people who have never quite fitted into a lot of the data categories that we have, to say, okay, well, if we're scaling up and rolling out a lot of these AI systems, what does that mean for people for whom these categories have never really worked?

And what does this say about the kinds of new data collection practices or new systems we might need to build if we actually want these technologies to work for everyone? Because often, I think, people are very, very well-meaning, but they might not necessarily, a, have grappled with these histories or understood the meaning of these racial categorizations, or, b, they might be under too much pressure to produce profit. They might be under too much pressure to get a result. And so they decide, actually, these populations are simply not statistically significant enough for us to change our methods. We see this a lot with genderqueer people, and with trans people as well, where people say, okay, we recognise that these people have a different experience.

They might need a different categorisation, but we're under a time crunch, and so we're just going to leave that data to the side. And I'm not saying that addressing these problems is easy at all, but rather that with feminist and anti-racist thinking, we do have a long legacy of grappling with what it means to try and defend the rights of people who have been historically oppressed or who are minoritised, and how that actually transforms both our tools and our societies into better places for everyone. Pretty nice. I'll do something a bit different after another of Kerry's very beautiful explanations.

One of the things that I try to do is take radical ideas from feminist theory and apply them to AI. So this is quite far removed from doing, like, a head count of how many women are in the dataset kind of thing. And one of the ideas that we're using quite effectively to show how AI works, better than the way that people say that it does, is the idea of performativity. Now, it came from gender studies. It was an idea that was put forward by a number of different people, including Judith Butler, and then was used by a feminist physicist called Karen Barad, who's fantastic, who explored, through Niels Bohr's experiments with wave-particle duality, how the design of the experiment and the apparatus affects the observed world.

So the way that you do the observation actually determines the results of the experiment. And this idea maps quite nicely onto what Judith Butler said about gender: that the gendered body does not preexist society, but is created through it. And that doesn't mean that there's a body that has society mapped onto it. It means that we literally, materially emerge in a repeated process, not just throughout the day, but, you know, throughout your lifetime, through the way that the body interacts with institutions, whether that's, you know, what happens to you in jail, or as you move through the education system, etcetera.

And this may seem a little complicated, but I'll explain it through a piece of software that was used by a company called Dataminr. And Dataminr started off doing event detection. So you could create a map around an area, and then if there were loads of tweets in that area and images of fire and smoke, it was likely that there was a fire in that area, and then journalists could kind of target in on that story. And this seemed pretty innocuous. I actually met somebody a couple of days ago at Bloomberg, a journalist, who was working for the company.

But then they moved from working with journalists to working with governments and the police, to help them track and monitor protests like Black Lives Matter. And it turns out that it wasn't just a single bit of software, as is often the case. It was a bundle, a cluster, an assemblage of different scraping tools. And these would come together to give the police information about riots: we're in Baltimore.

There's a crowd gathering here. This is what's likely to happen. And what the technology was doing was sending an alert to the police, and that alert was saying there's a potentially violent protest going on in this place. Now the machine was deciding what constitutes a potentially violent protest. What does it look like?

Something that has not yet happened. And it was a kind of predictive policing based in the logic of the police. Right? There's a group of black and brown people holding signs saying defund the police. To you, the police, this is what a potentially violent protest looks like.

And by creating this alert, by creating a lens through which the police viewed the world through their own politics and their own assumptions, they were creating the protest, if that makes sense. The violent protest doesn't preexist the technology's perception of it. And this is an example of how AI can be performative, how it actually produces the world that it claims to observe. Yeah. I think it's really important, because there are various examples of this as well. You know, it's almost like, where are you going to apply technology where it could be beneficial, I guess, in this values-led landscape that we tried to uncover earlier on?

And where is it going to go this other way and be kind of dogmatic, almost? I feel like dogma is my favorite word in AI, because it's like it's inherently dogmatic. Right? You give an AI a dataset, and then it produces the outcome from the dataset. But interestingly, it comes back to what dogma is, the kind of dogma that I think about.

And I think, you know, when we were talking earlier about exorcising religion and mysticism from science... but, yeah, we get a lot of this language that does come from, yeah, from religion. Yeah. Yeah. Yeah.

And I guess, you know, it can continue these things. There's a lot of talk about bias and the kinds of bias that we're introducing into these systems and things like that because of, you know, how humans interact. And that is a form of... for me, it's not necessarily religious dogma, but it's dogmatic. Right? The bias towards a certain way of viewing the world, or going down a particular line on YouTube, for example, and being shown more things which are kind of reinforcing.

Again, it's, like, one of those very simple examples of how these systems actually shape, you know, our experience, almost, and how we as individuals are placed in this environment. I guess, as you were saying before, kind of placed in this world and constantly reconstituting it. Yeah. So it's problematic. So I'm hoping that you guys have got all the answers and you can just sort it out for us.

Is that right? Yeah. Absolutely. Oh, my god. Yeah.

So I really want to ask you the 2 questions that we always ask on the Machine Ethics Podcast, which, one, we usually ask at the beginning, so I'm gonna ask for your brief answers if that's okay. And then we'll move on to the second one, which is the ultimate question. So what is AI, essentially, for you? I know we talked about technologies and how they exist in the world. But more specifically, kind of how do you feel about AI and the ideas surrounding AI?

Sure. I mean, I think this is a great question. So very briefly, I think of artificial intelligence specifically above all as a concept rather than as any singular technology because I think what the last few years have really shown us is the power of this idea, and its mass application to a lot of technologies that 15 to 20 years ago, we wouldn't have called AI. So it's simple things like a decision tree now being called artificial intelligence. And to me, the point there is not to say we need to continually raise the bar of what we count as AI, but rather to interrogate why does this idea hold so much power?

Why are we so interested in it? And again, I don't come from a background in computer science, just to be clear, and I know that there are very specific kinds of techniques, things like machine learning, for example, which people increasingly associate with artificial intelligence. But as a scholar of politics, as someone who's been thinking very, very deeply about the kinds of stories we tell about AI and why they matter, I think we can see this concept taking hold in really powerful ways. And it really shapes the kinds of technologies we see as being desirable and the kinds of technologies we see as being possible. Such a lovely answer.

Me too. I think that it's best to think about these things as an idea, so that you can go back to that stage and say, how should this idea be explored? And why does it make sense, and why does it not make sense? We're building this online tool that helps companies meet the EU AI Act's obligations, and we focus a lot on that ideation stage.

So, the bit before you've even started to build an AI product, you sit there and think, right, who's this for? Who's on our team? Why are we building this? Is there evidence to suggest that AI would be useful in this domain? Is it tried and tested?

Who have we consulted? And I think that stage is the most important one. But we forget that all things, whether it's shoes or combine harvesters, emerged as an idea. And the best products are ideas that responded to a particular need, with the great exception of Apple, which just creates needs, and I'm kind of more than willing to just tap in and buy additional stuff.

So go back to this idea of, like, you know... it's not just, oh, our clients tell us we need to start using AI, so we need to just use anything, but, like, you know, is this the right thing for what we're trying to do? And, I mean, I'm unfortunately so disillusioned at the moment, because I go to company event after company event where they're like, we just need to be using AI in some form. It doesn't matter if it's real AI, or just something that has AI kind of on a bit of the website, or, you know, there's a button that you press that switches on some, like, machine learning functionality, but it's not an integral or core part of the system. So we need to make AI unsexy again.

It needs to be like another banal thing. Also so that some of the companies we work with, who are lovely, who've been like, we've been using AI for ages, it's just the things that we're doing with it are not, like, super exciting. That's what we want: to make AI boring again. Yeah.

I mean, for me, I think there are so many things which are falling out of the funnel, almost. Like, you know, we were talking about AI, I'm gonna say 10 years ago, okay, maybe 8 years ago, and it was all image recognition and, you know, processing data in an interesting way,

with all sorts of different machine learning stuff, and then some deep learning. And now it's just kind of like LLMs. Is AI just LLMs now? When, you know, you talk to people and it's like, oh, yeah, I use AI.

And you're like, yeah, but are you just talking about LLMs? Like, this specific subset of an architecture which is based on neural networks, which is probably distributed by this massive company? It doesn't smack of... you know, for me, that's not what AI is, essentially. It's funny.

But, yeah. So it resonates with me that we need to make AI unsexy again. And you can make really small, interesting things, you know, on your own computer. So it doesn't have to be this massive thing. It should be about play and experimentation.

I don't know why... I think because AI does things at scale, and very quickly, it's become associated with scale, but it's not, you know. And the large... it just sounds good. It's like the kind of big swinging dick thing of having the largest, like, model, or the most data, or the most parameters or whatever.

But a lot of the really cool people that do kind of speculative design in AI that Kerry and I work with think about local, contextual systems. And this sounds quite, you know, eco-friendly, sort of hippy-dippy, but actually, if you're sitting there at home making a system that works for you, that you trust, that's another great example of how AI can be local, as in it solves your local problem. And it's a bit slapdash, and things are kind of, you know, maybe a bit of bricolage, and maybe not the cleanest, most beautiful thing you've ever seen. But it works well for you. You trust it. You know what it does.

You know its limitations. And those are the kinds of AI systems that we really need. A bit of knitting. Yeah. Exactly.

So the last question is: given the AI-mediated environment that we live in, what excites you and what scares you? Yes. Many, many things scare me, as someone who looks at the dark and seedy underbelly of tech most of the time; some things excite me. To start with things that concern me or scare me: you know, as someone who looks at the international politics of AI, the global extraction supply chains, the way that the expansion of digital infrastructures is reshaping world politics, I think it's really important that we start to try and hold big tech companies to account, that we think about technological good not through the nationalistic lens of things like national AI strategies or national success, but rather start to see these technologies as something that, if we're serious about them being good, necessarily has to be a global good, not a localised, sort of nationalistic one. And I'm also very, very concerned about the environmental impact of these technologies.

It's something that I try to emphasize a lot when I talk, which is simply that if you care about being a conscious consumer of plastic, of other kinds of materials, then it's really important you also think about that in relation to technology. And I say that not in a way that says, oh, I'm somehow perfect and I'm not complicit in this environmental damage. Of course I am. I'm talking to you on a computer right now, which I know is probably made through various kinds of resource extraction. But I think, you know, the more that I work in this area, the more intentional I become about framing these kinds of environmental extractions as being part of these sets of global injustices that AI can entrench.

So a favorite scholar of mine, Max Liboiron, has a book called Pollution Is Colonialism. And in this book, they talk about the way that our relationship with the land and the planet is often fundamentally colonial, because we see it as able to withstand a certain amount of pollution or waste extraction before we say, okay, well, now it's polluted. And that extractive relationship is itself the problem. So those are some of the things that really scare me, alongside some of the problems that we discuss in the book and elsewhere to do with how AI can entrench and exacerbate kinds of sexism and racism. I think the things that excite me, apart from, again, you know, the moments of wonderment I have and the kinds of technological play that Eleanor described earlier in this episode, is more the way that I see people galvanizing around these technologies and using the emergence of these technologies to ask these bigger value questions of what do we want from our societies and what do we want from each other.

I'm not saying, again, that the emergence of these technologies is necessarily the best way that this could have happened, but at the same time, I think it's really interesting and positive to see these questions around what is art and why does it matter, where do our products come from and why does this matter. And even though sometimes I can be very, very pessimistic about the tone of that debate, I was really encouraged to hear, for example, the way that interest in the environmental impact of AI has massively taken off. They did a big series of street interviews, I think in Taiwan, and this was one of the biggest flags that people raised: they said, we're worried that these aren't eco-friendly tools. And I found that really encouraging, because I think 2 years ago that was not really a huge public conversation.

At the same time, I also want to avoid that progress narrative that we talked about a bit earlier, because I think this journey has necessarily been a little bit bumpy. So, for example, I was really encouraged in 2020 when we had these big movements around facial recognition in cities being used by police. We had very big agitation in the UK context around the use of the A-level algorithm, and people becoming more familiar with ideas like algorithmic injustice and algorithmic inequality. And I think, unfortunately, ChatGPT set us back, to me, several years in that critical debate. So I remember finding that moment particularly disappointing, because I think people really got sucked in to this huge, huge hype wave propagated by companies like OpenAI.

Nonetheless, even with that caveat in mind, that it's not necessarily a linear road, I think this moment of kind of societal reflection on what these technologies tell us about our societies and ourselves is what makes me a bit excited, or a bit hopeful. I agree. I'm always excited by people asking questions. I think that's what life is all about. And I don't want it just to be the people who are hard done by who are asking those questions.

And that's sort of what we're seeing in big tech: these amazing engineers like Timnit Gebru and Margaret Mitchell, who are supposed to be doing frontier tech work, asking big questions and exploring their excitement, are instead being distracted by having to, you know, respond to things about ethics and culture issues in technology. And so really what I'd like to see is for everyone to be able to explore their excitement, and not be held back by the barriers, the obstacles, and the things that are not going well, and having to ask also different sorts of questions. I would like people's politics and concerns about the world, and it doesn't matter what your demographic is, to also enter into your feelings and your desires around AI too. Excitement shouldn't be unfettered.

It should always be tempered by your concerns and your understanding of the world. And I think that just makes excitement richer. I don't think that draws away from it. Naive excitement is not good for anyone. I'm excited by the rise of tech unions: I had a super smart friend who did law at Oxford and is now a barrister and has moved into representing tech unions.

And that to me is really cool, because I think it signals something. And what it signals is just that we have better, more robust companies with workers who are like, yeah, I'm a scientist, but I'm also kind of, you know, invested in thinking about the systems that I'm making and whether I believe in those systems. Like, you don't have to just be trying to get from A to B to do science. You can also locate science in politics and a particular kind of knowledge, and really think critically. I want critically thinking scientists, but also people from the humanities that know more about STEM.

I think we need to go and do, like, a take-your-friend-to-work day, where you go and visit your friend in their place of work and get to know more about what they're thinking. And just be kinder to each other. Yeah. I'm excited about people gaining knowledge. Awesome.

Thank you. And that's a lovely place to leave it. So thank you both for your time, your energy, your positivity. How can people find out about you both and what you do? Great.

Well, you can hear us on the Good Robot podcast. You can find our latest book in bookstores, online at bloomsbury.com, and also on Amazon, speaking of complicity with big tech. And then, yes, you can also find us on the Leverhulme Centre for the Future of Intelligence website, and you can follow us both on Twitter and LinkedIn, and we also have a Good Robot TikTok and Instagram. What's the book called, Kerry?

It's called The Good Robot: Why Technology Needs Feminism. Great. And it's short essays. It's really easy to read, and it's a perfect entrance into top thinkers around technology. And I promise, I've edited it within an inch of its life, so it's very readable.

Thanks, Ben. Thank you very much, guys. Thank you so much for having us. Bye. Pleasure.

Hi, and welcome to the end of the episode. Thanks again to Eleanor and Kerry. Some of the things that really resonated with me from that episode: this idea of technology being inherently political, and that we shouldn't shy away from that.

Maybe we could call that out. Maybe we could be a little bit more transparent about our own values and capacities and all this sort of thing. Although I wasn't totally sold on the magical nature of technologies, I think Eleanor had a really good point about the performative nature and the way that we react to technology: the unknown, the unknowable, the way that things look and feel organic, or make you viscerally react in certain ways, and things like that, as an extension of this magicality. I'm just gonna make up that word now. I found that really interesting as a point as well.

So thank you to them. Also, check out the podcast and hopefully, I will be on their show very soon. So check that out. And until next time, thanks very much for listening.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford

Previous podcast: What is AI? Vol.3