54. The business of AI ethics with Josie Young

This episode we're chatting with the amazing Josie Young about making businesses more efficient, how the AI ethics landscape has changed over the last 5 years, ethics roles and collaborations, feminist AI and chatbots, responsible AI at Microsoft, ethics pushback from teams and selling in AI ethics, disinformation's risk to democracy and more...
Date: 27th of March 2021
Podcast authors: Ben Byford with Josie Young
Audio duration: 51:28 | Website plays & downloads: 272
Tags: Disinformation, Microsoft, Feminism, Responsible AI, Democracy | Playlists: Rights, Business

Josie Young operates at the intersection of Artificial Intelligence, ethics and innovation. She’s based in Seattle (US) and is part of Microsoft’s Ethics & Society team, partnering with product teams to build technology that embodies Microsoft’s responsible AI principles.

Prior to leaving for the US, Josie was named Young Leader of the Year at the 2020 Women in IT Awards (London, UK) for her work leading ethical deployment of AI in the public sector at consulting group Methods. In 2018, Josie gave a TEDxLondon talk on the design process she created for building feminist chatbots. She has collaborated with the Feminist Internet from time to time, looking at ways to build feminist technologies.

Josie is also the Co-Chair of YWCA Great Britain, a charity dedicated to supporting young women’s leadership.


Ben Byford[00:00:06] Hi and welcome to the 54th episode of the Machine Ethics Podcast. This episode we're talking to Josie Young. We recorded on the 19th of February 2021, and we chat about AI making businesses more efficient but not obviously saving humanity quite yet, how the AI ethics landscape has changed over the last five years, ethics roles and their collaborators, feminist AI and whether chatbots are over, responsible AI at Microsoft, simply doing cool stuff, and much, much more. Thanks for listening. And if you'd like to find more episodes, go to machine-ethics.net

Ben Byford[00:00:42] You can contact us at hello@machine-ethics.net. You can follow us on Twitter at Machine_Ethics or on Instagram at Machine Ethics Podcast. If you can support the podcast, then go to patreon.com/machineethics. And I hope you enjoy.

Hi Josie. It's awesome to have you on the podcast. Thanks for spending time with me again, because we saw you back in Episode 12. I think it was CogX 2017 where we first met. So if you could just introduce yourself: who you are and what you do.

Josie Young[00:01:20] Of course. Hi Ben. It's great to be here. So my name is Josie Young and I am currently living in Seattle. I'm a programme manager and I work in Microsoft's Ethics and Society team, which is super fun.

Ben Byford[00:01:36] Awesome, thank you. So yeah, we met before, back in 2017, and it seems like so much time has passed since then. And we've both acquired children since that time. So life has happened, pandemics have happened, all sorts of things have occurred. You are now working with Microsoft, and you used to work in other places, including in the UK, and you did a Master's as well at Goldsmiths. I've got these in my notes, you see. There's a question that we always ask on the podcast, which I can't actually remember if you've answered before, but it's been such a long time that I would like you to answer again. So, Josie, what is AI?

Josie Young[00:02:18] Such a great question. AI, as we're currently talking about it in the world, is basically super intense machine learning programmes, I think. And also, when we're talking about AI, we're talking about business applications of machine learning programmes, because a lot of what we call AI these days is being driven and developed out of business or industry, often in collaboration with academia. But I think AI at the moment is sort of enterprise-scale machine learning capabilities that are trying to mimic, I guess, human capabilities like image recognition, speech recognition, all that kind of stuff. I mean, obviously I would say that, because now I work at Microsoft and I'm part of that division of the company. So I think it's a very specific perspective.

Ben Byford[00:03:20] It's interesting that you say that, that it's almost transitioned into this kind of software-as-a-service world or area, which maybe, I don't know, five years ago was less so. Do you see a transitional period there?

Josie Young[00:03:38.310] Yeah, definitely. And I think previously it was more research, theory: what would we need to make this happen, and all those lovely utopian things about how we can change the world with AI. Whereas, I don't know if we're changing the world with AI. I think we're making businesses more efficient, and maybe being able to run some, like, cool analytics on things that we wouldn't have been able to run analytics on before, so being able to answer questions that we couldn't previously answer. But whether we're doing it on the scale of saving humanity, I'm not sure that's where the focus of AI primarily is at the moment. And I remember reading something that Genevieve Bell of the Australian National University wrote recently about cybernetics. So cybernetics, and this is all new to me, was a cool, big thing, sort of the [inaudible 00:04:40] 60s and 70s. Genevieve Bell wrote this in The Griffith Review recently, and she's basically talking about how the AI Dartmouth conference in the 1950s sort of kicked off this whole artificial intelligence thing. They were very, very focussed on quite specific applications of AI, basically: how do we use computation to replicate the brain? Whereas cybernetics was more about how do we integrate this kind of technology, this kind of thinking, with human systems, with culture, and they had anthropologists there at their conferences and things like that. And so it's interesting that we're on the AI track, I think, primarily, rather than, say, the cybernetics track.

Ben Byford[00:05:24.490] I guess there's some overlap there as well, because you can't have one without the other. The application of the machine learning stuff could be for transhumanist ideas, or applying them to a more human context.

Josie Young[00:05:38] Yeah, I think so. But I just think it has a whole lot of blind spots, because I imagine if you had had that conference with psychologists and anthropologists as well as computer scientists and information systems people, you wouldn't necessarily have, say, machine learning powered sentencing software that was discriminating against black people, basically. And then when you raise that with the company, they're like, no, there's no bias here. There is, because there's institutional racism in this context and you're amplifying it. And so I just think that by just going down the AI track there are so many blind spots: oh no, data is not neutral; just doing some maths on stuff isn't neutral. It's such a narrow scope sometimes, to our detriment, to be honest.

Ben Byford[00:06:41.260] And I think we've had this discovery period, growing out of big data and the excitement around big data analytics: oh, I've got this data from social media, we've got the Internet doing all these amazing things, and then we have IoT, things producing data. What can we do with that, and how do we process it and make sense of that stuff? In my mind, the analytics or AI machine learning revolution, the summers or winters people like to talk about, has really grown out of the fact that we've got all this data and we have to do something with it. And most of that is, like you were saying, a business area trying to make sense out of the data that they control. It's really interesting that we've had to discover for ourselves the pitfalls of some of that thinking, some of the "I just apply this mathematical equation with this nodal structure and it will just be fine, because it's technology and technology doesn't care about anything, it just does what it says". As we have previously discussed in all these different podcasts, that's not necessarily the case.

Josie Young[00:07:52] And it's so tricky. There's so much nuance as well, in trying to, I guess, explain those intricacies to people, because we're led to believe the scientific method produces objective results. It's as close as we can get to objective truth, I guess, in a way. And so you've been kind of steeped in that, and maybe you've got a science degree of some kind. It doesn't really make sense when the rabid feminist turns up and says nothing is objective, everything has different interpretations, everything is subjective. It's really tricky to try and peel back the nuance and say, your intention can be good and you can still produce something that creates harm in the world.

Ben Byford[00:08:46] It's hard. So there are those sorts of discoveries, those things that we've fallen into accidentally and had to sort out and give some sort of structure. And some of that is to do with what we now call AI ethics, right? So there's this large area, which has overlap all over the place, which is AI ethics. And before, maybe people were talking about digital sociology or digital ethics, and there would be people in universities and humanities departments doing that, and there would be technicians in computer departments doing AI stuff or analytics stuff. And now those kinds of areas are coming together, in my mind anyway, to create this kind of area of ethics. How do you think, as well as those kinds of pitfalls, things have changed over the last three years since we met, or maybe the last five years? How have these things progressed and changed?

Josie Young[00:09:48] It's amazing, when you and I met at the CogX conference, I think you were one of the first people where, when I said, hey, what do you think about this ethics thing, you were like, yeah, totally. People didn't usually agree with me when I said these things, so it was so exciting and awesome. And I think it really is about that. Even though I gave a quite cynical kind of description of what AI is at the moment, we've got all these digital networks around the world that we're now putting all this machine learning, autonomous systems stuff into. That started happening about five years ago, and then we started to see the negative effects of automating these kinds of things at a scale we've never seen before as a society. It's amazing to see how quickly it's changed, and that it is much more mainstream now to be thinking about the societal impacts of this stuff. I think Europe and the UK are really awesome because everyone kind of naturally thinks about things in this way, and the EU is doing a whole lot of work around principles and all that kind of stuff. It's brilliant. And then over here in the US, it's a very different perspective. I think the tendency in Europe and the UK is to think about it top down: policy, principles, governance, regulation and stuff like that. Whereas what I've found in the US, it's much more about where is this meeting communities? How are we doing community activism around this, great grassroots mobilisation, a lot more focus on applied tools to help teams do things differently, rather than necessarily a tendency towards those kinds of frameworks and governance, which is maybe more of a European bent, if you want to compare the two. So I think it is interesting that five years ago no one was really talking about this, and now we can start to see some pretty consistent but divergent themes from the different regions.
And obviously I'm super Anglo-centric, so I'm sure there's other stuff in other places I'm definitely not aware of. I think it's interesting. When you and I met, I think you and I have always talked about how do we do this in an applied way? How do we sit with the teams who are building the technology and help them make different decisions and help them think differently about it? And other very smart people can do the kind of policy stuff and the top-level stuff. Sorry, I'm just rambling at you now. Great.

Ben Byford[00:12:41] I think that's the beauty of podcasts, actually, the less structured, rambly chats. So, when I was researching you briefly, because I already know you a little bit, so that helped, and I already knew that you've done some work in this area, I saw that on your new Twitter account you describe yourself as a feminist AI researcher. So I was just wondering if you wanted to qualify what that meant to you, what it means to be a feminist AI researcher and why someone might do that.

Josie Young[00:13:12] So I think, with anything feminist, it's always completely politically motivated. So when I was doing my chatbot research, when I did my master's back in 2016, 2017, and that was the research I was doing when you and I met, I was trying to understand why we are designing all of our voice bots to be women. So Siri, Alexa, Cortana, Google Assistant all had women's voices, and most of them also have women's names. So from a feminist perspective, I'm not so keen on this, what's going on here? And as I was doing that research, asking what do we currently have in terms of academic frameworks, or maybe more applied perspectives, on how to really disrupt this way of designing that interaction layer with AI systems, I couldn't really find much. Feminists actually have a huge amount of opinions on this, and I found opinions from like the 90s, but I couldn't really find much that was more contemporary and academic. And I thought, all right, well, that's clearly a massive problem. What I also observed in my research was that people were still very narrowly thinking about, if you work in AI, you have to have a PhD in physics or you're a computer scientist, or stats, it's all about stats. And I thought, actually, that's one of the problems with how they're designing AI: our teams are not multidisciplinary, they're way too narrow, they're way too technical-skills driven. And so I thought, obviously what the world needs is a feminist AI researcher who can bridge those two worlds by saying, here is a kind of social science framework and way of thinking about the broader effects of what you're building, then distilling that down in a way that teams who are building stuff day to day can consume and understand and then alter how they work. Because I just don't think it's fair to assume everyone in a product team is a rabid feminist.
I mean, they should be, obviously, but that's not a fair burden to put on everyone. What we do need to do is be able to translate those ideas, that way of understanding how societal impacts are going to manifest based on how you're designing an AI system. So I guess it's from the perspective of saying, hey, we need to have more people with a social science perspective in the space, and for me, that's being a feminist. But also that we need to really challenge who we give permission to be in AI. I'm someone who very proudly has never coded a day in my life, but I still think that I have an absolute right to be in the room where this stuff is being built. And it's actually important that people with social science perspectives, as well as the more technical perspectives, can be working together, so we can build more cool stuff, basically.

Ben Byford[00:16:32] Cool stuff. Yeah. And you were talking about that back in 2016, 17. And I find it really interesting because a lot of the stuff that we were talking about then, the applied nature, how can we actually help teams enact this stuff, is something I have seen starting to happen, which is really nice. So in some of the work I've done, and I'm sure some of the work you do now at Microsoft, you're enacting some of that now, which is really pleasing: to have gone, well, we need these people and we're not seeing them at the moment, and then you're suddenly seeing them. Teams are incorporating some of those learnings into how they operate and then hopefully into those projects, right?

Josie Young[00:17:22] Yeah, exactly. And I think, definitely in the last year, there's been a growth in jobs coming out for people who can do that bridging. So previously it was the hardcore data scientist; this is the person who works in AI. Keep [inaudible 00:17:44] up. That type of person. Maybe you'd have a social science researcher who's doing some academic research on the impacts of AI in society, but those two would never meet. Whereas in the last 12 months, I don't know if you've seen this as well, more and more roles are coming out that are this hybrid, and often it's someone, maybe with a policy or a project management background, who can do the bridging really well and who can work effectively with a range of people in a team to help think through those kinds of social impacts of what you're building, but also provide a structured way to move through that product development lifecycle and actually produce something that's high quality at the end, which is cool and which I think is the way forward. And I'm hoping, and I would say that Microsoft is really investing in this, which is one of the reasons why I was over the moon to get to work in this team: how do we prove to everyone that having these multidisciplinary teams actually drives innovation and you produce better quality stuff at the end? And that in and of itself is good, too, and important, too. So, with the feminist chatbot research that I did, one of the amazing collaborations I got to do after that was with the Feminist Internet. They used my research to design a workshop for students at the University of London to build feminist Alexa prototypes, so fun and so cool. And I think they came up with six prototypes in total.
Just the way that this amazing collection of students across a range of disciplines worked together to build things that were designed to meet meaningful human needs. They thought very carefully about conversation design, they thought carefully about what kind of data or [inaudible 00:19:48]. So they talked more about those things: how would the conversation design of the voice bot handle harassment, for example? It was really fun. One of the voice bots would read out RuPaul quotes, but in a robot voice, and it was the greatest thing I've ever heard. It just proved to me that all of these prototypes were so innovative and so interesting and needed by these different communities. But I couldn't necessarily see, say, Amazon creating those kinds of skills on their own; it would need to have come from a more diverse community. I just think there is something really, really cool we've not explored enough yet as an industry around using these tools and these perspectives to actually make the cooler stuff.

Ben Byford[00:20:41] More cool stuff, I like that. That's going to be a sound bite that goes at the end: make cool stuff. So this is a bit of a [inaudible 00:20:50] comment. But are chatbots over?

Josie Young[00:20:56] I mean, maybe. I think it's all voice assistants now.

Ben Byford[00:21:00] True. True. Yeah. Yes, I think. Yeah, possibly. I mean, I don't think they've rocked my world; the service that they often provide is suboptimal to something else, which could be a search. That's fine, a search is fine. Or it could be direct contact with a person, or possibly direct contact with a database that is going to go crunch something and then come back to me with an answer, and that's kind of a different way to do it as well. But this informational chat that is supposed to be useful hasn't really changed my world view. And again, with voice as well, I might be speaking from my particular viewpoint, which is that obviously the privacy aspects of some of these voice devices are questionable, so I don't necessarily have them in my house. But, a bit like VR or something like that, it hasn't really hit the public consciousness as much. Or that's how I feel about it, anyway.

Josie Young[00:22:23] I totally agree. And I think, I don't have voice bots in my life really. I turn off those features. I think part of it is, you can't separate these systems and products from the companies that build them. I mean, with Alexa, I'm very suspicious of the motives behind that product and how I'm being used as a data point for a massive business that has only made more money as a result of the pandemic. So I'm very suspicious of those things. But also, coming back to chatbots, I think people don't realise how hard it is to make a good one. It's hard to find the right use case or the right scenario where it is the best way forward, and then do the research and the design work to have a good conversation design that sets expectations with the user but lines up with the data you have to make it work. I think when that sweet spot is identified, it can work really well. So there's the Cleo bot, which is a financial services bot that's basically meant to be like a digital friend that keeps an eye on your money for you. And the conversation design is very feminist. They use a lot of GIFs, they have jokes, they do a lot of very rapid research and development work with their customers about the tone of voice and the way that they communicate different things. So I think that works quite well. That's a really great product. They've found that sweet spot and they've invested in doing it well. Whereas when you're a massive company that just wants an FAQ bot, well, you're not going to invest in the same way. You're not going to actually want the bot to have a personality that's identifiable. It'll probably be a bot that's named after the daughter of the CEO, for example. And I think because they can be quite cheap and easy to produce, that doesn't necessarily make them good, which kind of sucks, because you can have some good ones. Like the Cleo one is really cool.
There's also this great example I read about, I think Nesta had a write-up on it, where a chatbot was part of a primary school. Whenever the kids came in, there was a big panel of smiley faces or sad faces or different emoticons representing different emotions, and all the kids would hit the one that they were feeling that day. And it gave the school this pulse check of the vibe at the school that day. And then there was a chatbot that a company did that was all about supporting these kids to be emotionally literate and connecting them with each other, so like peer-to-peer support networks and stuff. That is just such a lovely example of thinking about how we can use this technology to enhance wellbeing in a primary school. And then you've got your privacy concerns, and making sure that it's not hackable, obviously. But I just thought, that's just so nice. That's so cool. So I think there are some spaces where there is some role for a chatbot yet, but it is hard to find that nice sweet spot.

Ben Byford[00:25:47] That sounds really interesting. I'll have to find that and put it in the notes as well. It's nice because, being a parent, you want those activities that you have with digital technologies to have some sort of connection at some point, because obviously, with a lot of them, it's consumption and it's faceless communication, almost. I know that we are actually talking now, and I can see you, so not so faceless. But having a bot, like, ask Jimmy to see if he's okay because he's feeling down that day, that's super cool. That's really nice. I was wondering if you are able to tell us what you have been up to, somewhat, at Microsoft.

Josie Young[00:26:39] Yeah. All right. I'm going to try and give you a rundown.

Ben Byford: Yep. Sorry. I guess some of the aspects I'm interested in, because it's a big company, right, are why they might have hired people like you, the team that you work in and what its realm is, you know, that sort of thing. What can you touch, what can you affect, and what kinds of jobs and activities do you do?

Josie Young: So one of the reasons I've always been quite interested in this team is because, like we've already discussed, one important side of transformation when we talk about ethical AI is supporting the people directly who are building the stuff, and actually translating principles into action is very, very difficult. The Ethics and Society team at Microsoft, I think it was set up in 2017, maybe, by Mira Lane, and she's just our resident genius. She's amazing. It was really thinking about how do we bring a design and research kind of mentality to this and have it connect with product. And I think it's really unique, and the fact that this is a team that Microsoft is resourcing is important. Since then, they've grown out their responsible AI infrastructure in the company. So there's the Office of Responsible AI, which sets the policy and standards. There's the Aether working groups, which are our genius academics who are really trying to keep, I guess, the company aware of trends, and then solutions to different things as they come about. And then with a team like Ethics and Society, we work more closely with product teams, working with them directly to say: thinking about our responsible AI principles at Microsoft, this is how you can bring them to life with the product you're building. And so to me, it's multidisciplinary, just really cool. We've got lots of research and design people just blowing my mind every day, with a lot of sharing within the team. And I'm just like, everyone here is amazing.
It's very tiring being around such smart people. I'm a programme manager, so we've got our PMs as well. And yeah, we just kind of go and work directly with product teams and do cool stuff. So one example that was actually released recently is the Custom Neural Voice product. And that is where people can basically build a synthetic voice, which is really cool. But obviously there's a lot of different things we need to think about with that type of technology. And so it has been released, and we've got a gate over it, so you need to apply to be able to get access to the technology. There's a series of questions trying to understand what you want to use it for and whether that's sort of, I guess, in line with the uses that Microsoft thinks are appropriate with this tech. So I've been involved in that as well, and just really thinking about how do we see, end to end, all the possible kinds of impacts, and work with product teams to, I guess, manage them, hopefully eliminate all the bad ones, and then make sure that, when it's out in the world, we've sorted all that out. So it's a pretty unique space, actually. And the team itself is made up of people who have a real variety of backgrounds, multidisciplinary in terms of experience as well as skills as well as disciplines, which is cool.

Ben Byford[00:30:23] Hmm. Sounds lush.

Josie Young[00:30:26] Yeah. Pretty good. I mean, it's worth moving across the world in a pandemic for, definitely.

Ben Byford[00:30:33] You did move from the UK to the US during the lockdown last year, so. Or not the lockdown. It was just after it, I think.

Josie Young[00:30:40] It was a strange time. The lockdown was lifted, but actually we couldn't see anyone. So we'd been in London for like six years and then left.

Ben Byford[00:30:50] And then you had to quarantine as well. When you got there.

Josie Young[00:30:53] I mean, we didn't know anyone, so it wasn't a hard quarantine.

Ben Byford[00:30:59] Goodness gracious me. Well, that sounds great. This is probably going to be a no, but I was wondering, when you're working with these teams, in this job or your previous positions, whether you maybe get a bit of pushback, being that person in an industry that's traditionally had a "break things, fix them later" sort of style. What kinds of things do people usually have problems with, or gripes with, or need more understanding on?

Josie Young[00:31:38] I think initially it's just a misalignment of mindsets. Quite often I can say something and people will just look at me like I have two heads: why are you here? This is a data science project, why are you talking to me about user research? It doesn't make sense. And I'm sure you've experienced this as well. And again, this is why this kind of bridging idea matters. I'd often front up, probably more so in my previous role, to teams that honestly have never thought about things from this perspective before. And so I'm like an alien, and I'm standing there grinning at them: I'm going to take you on a journey, right, it's going to be something else. And they're like, I didn't sign up for this journey, what on earth are you going to do to me? So I think it's really recognising where people are at and really trying to find that kind of shared language. So a couple of years ago I was working on a data science project, and I'd been trying really hard to sell this idea of, hey, maybe we need to go and talk to the stakeholders who are going to be affected by this thing that you're building, and got quite a bit of pushback on that. And then I realised I had an in with the data scientists that we were working with, because they needed to understand more context around the data that they were using to train all the stuff, and the way we were going to get that context was by going to the stakeholders. And so I almost had to Trojan-horse a couple of things into it all, because this client would definitely agree with everything the data scientists said and was very comfortable not agreeing with anything that I said. So I basically had to say: they understand that framework, they trust that framework.
So I needed to find a way of expressing the need to have user research, the need to think about this from a service design perspective, the need to really understand the context of this huge data set that they were using. I had to try and repackage that into the language and the processes and the approaches that they already trusted, and work very quickly to demonstrate the value: this is how this made the quality of this product better, and that's why it's important. So, a bit of Trojan-horsing going along, and then really trying to convert it into impact and saying, hey, look, this thing was way better because we asked this question; if we hadn't asked it, you would have been stuck with something that you couldn't use in this way.

Ben Byford[00:34:09] I think that's really one of the most difficult bits, because measuring the impact of the work is tricky: if you've done the work, then maybe you wouldn't have seen the negative side of things in such stark reality, but if you hadn't done the work, you would have been stuck with the bad stuff, possibly. So quantifying that is actually quite difficult, I would say. You have to justify it in different ways. Is that the same with you, the justification of your job? Is it difficult, or are people getting on board with that?

Josie Young[00:34:45] No. I think because Microsoft has been talking about their responsible AI principles for a little while now, there is the broader infrastructure across the company. So it's not just me on my own saying, hey, this is important; it's a whole lot more of us. And this was said to me by quite a few people before I joined the company, but I think, under the CEO Satya, there's been a huge change in culture and just the general vibe at the company, and I think the responsible AI conversation has really benefited from that. So a lot of hard work had already been done in different ways to make that easier, which is great. And when you have a company of that size and something does get mandated from the top, it's a very different authorising environment than if you're a consultant, where you've got to come into an existing culture and practice that you don't have any control over and you're not necessarily going to be there long term. So it is quite a different dynamic, actually. But I mean, I do like a challenge. I did enjoy my previous role of just basically turning up and smiling at people for long enough until they started to agree with me. What's been your experience of it? Because the biggest barrier is just getting that trust and buy-in really quickly.

Ben Byford[00:36:23] I think it's the economic side, the value, which is the difficult one. Especially with the work I do, which is often directly to business, you're trying to sell yourself in to a business and you're essentially saying, we need to do these things, otherwise it's going to go badly. It's almost a due diligence-style position. And no one likes risk people; no one likes due diligence people, the legal department, they just say no to everything. So it's about trying to convey the value that you bring for creating better products, which hopefully better suit the people you're trying to target and hopefully are just generally better for society, and trying to convey that idea in terms that some of these people understand is quite difficult. I've found it quite hard. I think, like we were saying earlier, there's just been more coverage of the sorts of things that have gone wrong, the kinds of things that you might actually see in the media now, which has slightly made that easier for us. But like you were saying earlier about some teams just never having thought about it, you still do get these conversations with people: yeah, we haven't really done anything like that, or we haven't done an ethics workshop, or we haven't really thought about the impacts of this, all these sorts of conversations. And you'll have these conversations with people and think, this is mental, you should hire me now, there are a lot of people in this space now. But, yeah, sometimes it is literally mental and gets me a little bit upset when I have these conversations, because sometimes they'll be with people who are dealing with medical data or something, and I'm like, you guys, what the f, come on. So enough of that ramble.

Josie Young[00:38:32] I do think it's funny; sometimes I technically don't talk about ethics, I just never say the word. And I've had experiences where I have said the word and got immediate pushback. So it's really interesting what you say about the risk thing. You know, no one wants people to come in and tell them no, and people are nervous when there's someone with a risk mindset around; they think they're just going to be told everything that's broken and not be given a way to fix it. And also, back to what we said at the start, some people just think that machine learning and data science are objective, and so there's no room for ethics, because either the data is just reflecting the world the way it is, so what can you do about that, or the calculations are inherently good because they uncover more truth, even if there is a bias inherent in that. But that is misleading and can lead to poor decision making. So I think it's really tricky: understanding what's the main source of your resistance to this and then working with that. Ultimately, if you push through that, the quality is just going to be so much better. Trust me on this, this is how I make your product better, right?

Ben Byford[00:39:56] And that's a good justification for more diversity as well, the quality stuff, because it's better ideas, more fitting for your different users, fewer impacts on the users you haven't thought about, because we'll think about them, all that sort of good stuff. Is there stuff which is niggling at you? Things that we haven't really sorted out yet when we talk about AI ethics? What are the things left to do, do you think, or the areas that have more space for research?

Josie Young[00:40:35] One of the things that really struck me coming over to North America is that connection between on-the-ground organising and ethics. In the UK, I realised that I could probably spend a whole heap of time talking about AI ethics and never actually really get to the heart of social justice issues, or organising or mobilising around social justice issues. Whereas in the US, I think that lived experience really drives a huge amount of academic research work, as well as changes to policy and so on. So as a space, we're pretty white, we're pretty privileged, and so it's about really being aware of that and taking steps to connect the AI ethics work that we do to actual on-the-ground political resistance and improvements. Yeah, I think that's definitely a blind spot for me that I've been thinking about. And also, I think we're on our way to having these more multidisciplinary approaches, having a designer and a data scientist in the team and that making sense to people, but I do think we still just need to keep working on those muscles and making that more commonplace. And one thing that I would love to see us doing is attaching a carbon counter to things. When we look at how we're training models, and the amount of effort it takes to then run that model, what is the carbon counter on that? What are you doing about that? How can the carbon impact of your model be one of the deciding factors when you're choosing which model to run and whether to deploy something or not?

Ben Byford[00:42:30] I mean, that's a really good point. I think all this comes down to what you said at the beginning about doing good with this technology. And if we're not, at both ends, making sure that it's doing good, in the making of it and also in the deployment of it, then we're probably failing somewhere on the ethics or the purpose of it. It took a whole year of a small town's electricity to make this, and we're just selling better ads.

Josie Young[00:43:08] I think, because we all love cool stuff, we get so caught up in, oh my God, we can do these amazing new things now, which is great and may have an application in selling more ads. But at the same time, do you remember the conversations when people were saying, oh, but you can't have that, machine learning is a black box, there's no way we could ever interpret the results? Well, now we have lots of ways of interpreting results, because people have put in the effort to build the tools and make that a priority. Everything here is person-made. And so I think it's more about really questioning why we're not investing in that thing. Is it because it's inconvenient? Is it because it's just low on the priority list, rather than because it's impossible? I think that's kind of where we are: in the game of convincing people that this stuff isn't impossible anymore.

Ben Byford[00:44:05] Nice. That's a really good message to leave us with. But I do have one more question for you, Josie, if you have time. The last question we always ask on the podcast is: what are you excited about, and what scares you about this AI, digital, machine-learning-mediated future?

Josie Young[00:44:33] Oh, I'm excited about driverless cars. I want one. It's going to be great. I'm ready. And I'm excited about driverless cars because I think there's an opportunity, if we think about these kind of integrated transport networks and if we apply them in a way that makes movement around a city accessible to everyone, for them to be a really amazing way of addressing fuel poverty, of not, I guess, punishing people because they have to live further out of town, away from their jobs. So I think it could be a really interesting way of approaching social justice and inclusion in our cities; in London particularly it would just be incredible. So I'm really excited about that. And the thing that terrifies me is that we are automating our values and our biases into these systems, thinking we can deploy them on a massive scale, and we've not yet figured out how to take responsibility for the effects of that. And that stresses me out in a significant way.

Ben Byford[00:45:47] And do you have any answers for that or is that just a thing that is being worked out, played out right now?

Josie Young[00:45:53] I think if you look at the recent US election, with all of the bots on Twitter and all these kinds of misinformation campaigns, it's just incredible how quickly something can spread and how quickly it tangibly changes how societies run. That just can't stand; we can't be beholden to that as a society anymore. And it really has to be regulation. It has to be government saying, this is not the type of society that we want to have, and we have a duty to safeguard. So I think there needs to be a limit on how algorithmically driven things are on platforms, especially social media platforms. But, you know, I think in a lot of ways our political leaders are catching up, because this technology is being driven out of business, and that hasn't necessarily had the same kind of oversight or investment from the public sector as other innovations in the past have had; the line of sight is not as clear as it used to be. And so I think governments are playing catch-up. The UK government and Parliament have done a huge amount of work in the last few years to play catch-up and be on top of this stuff, but I think we need to see that happening more and more. So back to that political mobilisation, everybody: write to your local member.

Ben Byford[00:47:20] Write to them and hopefully tell them what the issue is, describe the issue, because one of the things that we're up against, in some of my work, is that these leaders aren't always tech literate or necessarily up on the latest AI trends, right? So there's a certain kind of disconnect there, which is probably part of the problem. But like you said, there is effort there to try and do something about that, so hopefully that'll be good. I don't know if you remember, but did you watch the trial with Facebook and Mark Zuckerberg last year in the US?

Josie Young[00:48:01] I don't know if I caught much of it; I caught some of it. It was really funny, because it was filmed and televised, and the questions and the things that some of the members were asking Mark Zuckerberg were just wholly inappropriate for how the thing works.

Ben Byford[00:48:21] Right. So it's like Mark just sitting there rolling his eyes: another stupid question. Okay, well, no, it doesn't work like that, but tadadada, continuously. And it's so annoying, so frustrating for people who had better questions and would take him to task. It's unfortunate. I mean, they weren't all bad, but a lot of them were bad.

Josie Young[00:48:49] Yeah, I remember a clip of him saying, we make money through advertising, and the guy was like, that doesn't make sense. And I'm like, oh God, this is so terrible.

Ben Byford[00:49:00] Days of this, days.

Josie Young[00:49:04] And I think that's a really interesting kind of symptom. In Europe and the UK, the mindset is that public institutions need to respond to these issues: principles at a government level, regulation, the All-Party Parliamentary Group on AI and so on. That's there and that's working. Whereas over here in the US, any kind of action is actually happening at the state level, and it's been driven by local campaigns and mobilisation; the Algorithmic Justice League, for example, are doing an amazing amount of work in this space. But it's locally driven. They just think about it in a completely flipped way. So I'm not surprised. I mean, it sucked, it shouldn't be like that, but I'm not surprised that Mark Zuckerberg was not really giving away any secrets in that questioning.

Ben Byford[00:50:15] I think we've come to the end now. You've got to go to work. And I have got to go to the weekend because it's Friday night for me over here. So thank you so much for your time. How do people find out about you, follow you and connect?

Josie Young[00:50:31] So I'm on Twitter, @swordstoyoung. I mainly retweet and like things, so I'm a good lurker. You can track me down on LinkedIn as well. Do reach out with any questions about feminist chatbots; I'm your girl.

Ben Byford[00:50:50] So thanks very much for your time, and hopefully we'll speak to you again soon.

Josie Young[00:50:53] Thanks, Ben, thanks for having me.

Ben Byford[00:50:57] Hi, and welcome to the end of the podcast. Thanks again to Josie. Really nice to have an opportunity to speak to her, and it's nice to speak to another embedded ethics person who's down in the trenches doing this work. Please go follow her on Twitter and LinkedIn. Check out more episodes at Machine-Ethics.net, and if you can, support us at Patreon.com/machine_ethics. Thank you so much for listening and I will see you next time.

Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a coding, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.