78. Design and AI with Nadia Piet

This episode Nadia and I chat about how design can co-create AI, what the role of designers is in AI services, post-deployment design, narratives in AI development and AI ideologies, anthropocentric AI, augmented creativity, new AI perspectives, situated intelligences and more...
Date: 19th of June 2023
Podcast authors: Ben Byford with Nadia Piet
Audio duration: 40:56 | Website plays & downloads: 117
Tags: Creativity, Ideologies, Design, Designer, Co-design, Deployment, Narratives | Playlists: Creativity, Design

Nadia Piet is an independent designer, researcher, organizer, and cultural producer with a focus on AI/ML, data, tech, [digital] culture and creativity. She’s the founder of AIxDESIGN [a community of practitioners & living lab for beyond-corporate AI], holds an MA in Data-Driven Design, and explores playful & purposeful tech through freelance projects and self-initiated experiments. Over the past 10 years, she’s worked as Head of Creative Technology at DEPT [a global digital agency], design researcher for emerging technologies at Bit, and across roles and continents with organizations such as Hyper Island, Pi Campus, Forbes, UN, AWWWARDS, DECODED, MOBGEN | Accenture Interactive, Mozilla, and ICO.


Transcription:

This is a machine transcription provided as is.

Ben Byford00:06 Hello and welcome to episode 78 of the Machine Ethics podcast. This time we're chatting with Nadia Piet, and this interview was recorded on the 30th of May 2023. We chat about how design can co-create AI, what the role of designers is in AI services, post-deployment design, narratives in AI development and AI ideologies such as long-termism, economic prosperity, transhumanism and much more, as well as augmented creativity, new AI perspectives and situated intelligences. If you like this episode, you can find more at machineethics.net, or you can contact us at hello@machineethics.net. You can follow us on Twitter at machine_ethics, on Instagram at machineethicspodcast, and if you can, support us on Patreon at patreon.com/machineethics. Thanks very much and hope you enjoy.

Ben Byford01:11 Hi Nadia, thanks for joining us on the podcast. If you could like introduce yourself, tell me who you are and what you do.

Nadia Piet01:19 Of course. Thank you for the invite, and happy we could finally make this happen after way too much back and forth over the past months. Happy to join you. So I'm Nadia Piet. I am a designer, researcher and the founder of AIxDESIGN. I also think of myself as a perpetual sort of slashie, so usually that job title changes every few months depending on what I'm working on. So right now I'm putting together like an arts residency, so maybe I would add like a cultural producer or something like that. And sometimes I teach quite a lot or help develop like educational programs, so then maybe educator fits in there. But it's mostly design and research at the moment.

Ben Byford02:05 Yeah, and we were talking briefly before, so you're kind of working for yourself, so you have all these things that you like to do, and I think some of those things you've just talked about there. And is AIxDESIGN kind of a company, a community? What is that sort of thing?

Nadia Piet02:22 In 2018 I started doing this research around AI and design, which was actually at the time for my bachelor thesis. And I felt like I had learned so much and gone through all these ideas and insights, and I wanted to share them in a way that wasn't just my little paper for school. So then I made a toolkit, which is called the AI meets Design Toolkit, and I just made it a free, downloadable PDF. And then so much response came from that, from people I knew, but especially people I didn't know actually. And I thought, hey, these people should talk to each other, not just to me.

Nadia Piet03:12 That was 2019, 2020, something like that. So it started literally like that, and then it sort of evolved: we started doing events, online mostly, some in person, started writing content or just meeting up in small groups to share ideas or exchange knowledge and so on. And soon enough, of course, these things start taking off and take on sort of a life of their own. So, on the one hand, it's very exciting because it's, like, community driven, and so people have this space to initiate and sort of co-create or just do whatever they find meaningful.

Nadia Piet04:02 But on the other hand, I can't keep doing this 40 hours a week for free, because I have rent to pay like everyone else, and I also want to get other people involved that also have rent to pay like everyone else.

Nadia Piet04:35 So right now, we're very much in this process of figuring out how we can do the type of work we want to be doing, that we find meaningful and exciting, while also figuring out how that can help pay rent, so that we can actually give and make that time for this work. So we're now sort of creating a for-hire branch. AIxDESIGN Community is one branch, and then AIxDESIGN For Hire, in which basically we want to work with all the people in the community, but for companies, organizations and institutes that are also working through these questions and might need some more brains on board.

Ben Byford05:18 Yeah. And if anyone isn't thinking in this way already, what are those questions? What are those things that people in the community are finding together or asking or are trying to work out how their work actually applies to this space?

Nadia Piet05:35 Yeah, there's a lot of them. I could probably make a huge mind map. I started very much from this idea of, like, how can design play a role in shaping AI and AI development? So how can this way of working and thinking, like design thinking and all those methodologies, and sort of this way of thinking about trade-offs and collaborative sessions: how can we bring those methodologies into AI development, which is very much just tech driven and sort of adopts that agile workflow from software, and sometimes forgets that this has also changed? So that was the first thing. And also, what is the role of designers, like UX designers, strategic designers, service designers? Which kind of literacy do they need? What is their role in the team?

Nadia Piet06:29 How can they take that role in an AI team of actually shaping the user experience and the output, even when they don't have this deep technical knowledge? So that was very much the beginning, and that's like design for AI. And then a bit later I also got really interested in AI for design, which is more like, how does AI or machine learning or these sort of, like, computational approaches show up in creative practice? Which could be graphic design, but we're also running other programs exploring, yeah, like writing, any sort of image making, right? Also film or animation, but also music or sound design. So sort of across, even, I don't know, performance art, like choreography. Right?

Nadia Piet07:18 Like, how can AI and ML sort of play an interesting role in that? Which has obviously exploded since. And then, yeah, the past year, so much has happened. So we're asking different questions now, I would say, which are also a bit more critical and a bit more structural, maybe political even. I'm not loving the AI space right now, and the sort of discourse and the ideologies and the narratives surrounding it. Not very into it. So we're just asking: what are alternative ways of thinking about and making AI that do resonate with other types of values, that are not just efficiency and maximizing profit and value for stakeholders? Which I get, we live in that world, but there's more to the world. So now our questions are much more in that space.

Ben Byford08:24 So, to you: we've said AI and we've said design a lot. What is that AI aspect to you? And I put it to you that actually that design thing is quite nebulous as well. And you've outlined some of the stuff that you would say is design or part of that design process. From the sounds of it, you're kind of designing for the people in that AI space, as well as the AI doing some design or being part of design practice. So I just wondered if you had an idea of what that AI bit was to you. What is AI, and also what's the intersection of that AI and design thing?

Nadia Piet09:10 Yeah, absolutely. Big questions, hey. For me, AI is... when people ask that, I often start just sort of rambling all the things it's not, right? It's not like an entity, it's not a sort of person or, like, creature, even though it may feel like that at times, and it may even be helpful to adopt that mental model at times. It's not that. It's statistics on steroids. And another thing that I feel is a sort of myth surrounding it that's not helping: it's actually super labor intensive. And before, you would see that, right? So, like, two years ago, if you were trying to create images with, like, a StyleGAN, you had to go through so much labor of collecting the data and really making it, like cleaning your data set and perfecting it.

Nadia Piet10:09 And if one thing were wrong, the whole thing would break. And sort of that training, which is a very human-labor-intensive process, but also sort of ecological and, like, political: so many decisions are made along the way. AI doesn't just sort of happen. It's not deterministic. Like, we don't have to do it, or do it in any particular way. It's not unavoidable. We choose to make it and we choose to make it in a certain way. But for me, really high level, it is like computers learning, right? So that's the difference from programming, which is like, you have to write the rules and then there's an output, but you have to explicitly set each step and each rule.

Nadia Piet10:55 Yeah, for me it's just machine learning, basically; that definition, and that term, I often find more helpful to think about. Right. So instead of writing out the steps and getting the output, you show the output and ask the machine to sort of retrace those steps, which I think is super fascinating, because it might actually find some stuff that we can't see, that we don't get, just because our brains do not work in that way. I can literally remember a string of, like, five numbers and then my brain can't compute, which is so dumb in a way, when you compare it to the sort of capabilities a machine or computer has. I think that's really exciting as an idea, basically.

Nadia Piet11:45 And then AI for me is just sort of this idea of making computers do things that we thought only humans can do. And that, in a way, which I think is quite a common definition, is quite interesting, because how about all the other types of intelligences? We humans think we're so badass, but there's lots of other very interesting capabilities, like fungi and octopuses and forests, like how trees communicate and stuff. That stuff is insane, right? So we are also creating AI, or thinking of AI, very much from a human perspective, which I guess we can't escape in a way, but I find that quite interesting as well, this intelligence bit: how we define intelligence is very based on how we do it, which is very much like language and stuff.

Nadia Piet12:43 But what if we based it on other intelligences? Suddenly it would take on a whole different sort of shape or form. But yeah, for AI I think it's those definitions, but I'm putting lots of little asterisks around them verbally, like making a million footnotes around it. And then your question around what's at the intersection of AI and design. It's kind of like the ones I mentioned, but there are many ways in which they sort of meet or intersect or are entangled. So one is this idea of design for AI: how can we bring sort of design thinking, process and those things into AI development, sort of human-centered instead of sort of tech driven? But another big one that I'm really into is sort of the UX of AI as well, which is like, great.

Nadia Piet13:37 There are all these decisions that are made along the way and there's this really complex system at work, but ultimately I'm probably interacting on this tiny little screen. The ultimate user interface is just that. So how do you use that tiny little screen to make things explainable, or to allow for more, like, agency and autonomy? And how do you translate those really big ethical principles of what responsible AI is into a freaking button? Because in the end, I am looking at buttons most of the time. Right. So I get that's very limited and very hard, but that is ultimately where most users are interfacing with your system. So I think that's quite an interesting thing. And then, yeah, augmented creativity, right? Like, how can we develop new types or new layouts or new, I don't know, forms, especially storytelling?

Nadia Piet14:39 I think it's really interesting, like new forms of storytelling that are less about sending, and less like maker and viewer, but are much more co-creative and generative between the viewer, the maker and a machine, or whatever model you would be running there. And then there's also just educational design, or sort of, how do we design materials or whatever to help people build up sort of data and digital literacy? Which you could say is not design, but there is a sort of design element to that. So for me, almost everything's design, which I get is maybe not helpful, but I feel like bringing that approach of intentionally creating something with a certain intended output, that's design for me, so that sort of translates to almost any discipline.

Ben Byford15:44 Yeah, I think design for me is like that, I think, like you were saying, but it's also kind of like you're bringing it to the fore. You're saying, we are doing some design and therefore we are reflecting on that design, we are critiquing it, we are making decisions. Exactly: developers and project managers and all sorts of people do some of that design work, but then I guess usually, there's air quotes here again, it is up to the designer, or someone who is taking hold of that design, to then reflect on it, critique it, make sure that it's right for purpose, that it's going to work for whatever user group is intended, that it's not going to have unintended consequences for that user group. You know what I mean? That's maybe the design element, even though it is spread out, almost, really.

Nadia Piet16:40 Yeah, exactly. And it's not like just designers do design, but exactly, highlighting them as design decisions, literally just marking them as that, I think, invites people to be a bit more conscious and reflective and to talk about things more together, which I think is a good space to be in to make decisions, better than, like, I don't have time, I'm just going to go with this.

Ben Byford17:11 Yeah, this works. So that's fine.

Nadia Piet17:14 We always do it like this.

Ben Byford17:16 Yeah, that's right. It's funny, because when I think about a lot of this stuff, I feel like we have the same direction of travel, let's say, but we are talking with different language. So where do you think design can make more ethical services? Let's say, taking the idea we were referring to of responsible AI, how does design play into the ethical, responsible palette when it comes to designing AI services? It's a leading question, but for me, a lot of design, a lot of that, is helpful and it can lead you down that path in a useful way. Or not, I guess, as well.

Nadia Piet18:07 Yeah, a few weeks back, I had to speak at this conference, and my colleague and program manager Ploy had to as well. And she had this slide, and now I'm trying to remember the slide, because she had sort of synthesized all these things that we've been talking about in a very fragmented way, just in our check-ins, into this one perfect slide. But I think it was around how design, or this way of working, shows up in the different stages. So it's like, before you decide to even make a thing, how do you scope it, based on what motivations? Like, who are you addressing or serving or targeting, however you want to put that, right? So even before that. And then, how can you co-create the thing with them? Right?

Nadia Piet19:03 Like user research, but even pushing it a bit further than that, sort of participatory design. And then also very much after, which is something that I actually hear less about, from fewer people. I hear people doing user research, all of these things, but also afterwards, I think you mentioned it, like unintended consequences. Especially if it's AI, it's a big thing, because it's a bit unpredictable sometimes, and you cannot just test through all the happy and unhappy user flows, right? That just doesn't work in this scenario. So I think we need to also look at design sort of post-deployment. What levers or feedback mechanisms do users have to, I don't know, examine, to intervene, to opt out, to ask questions of the system, to flag when something doesn't feel right, for whichever reason? Right.

Nadia Piet20:08 Which is not just like thumbs up, thumbs down, right or wrong. Because, yeah, life is not really just right or wrong. It's often like, maybe you even understand why the system made a certain prediction, but in this specific context it's different. So I think having more of that sort of dialogue as well, if you will, between the user and the system, would be super helpful, and then you kind of go back. So you actually sort of do this iterative design approach, which is ultimately, yeah, I think designers are often used to this iterative approach, which also makes you a bit humble, and you know you don't have it all figured out, and that's okay. And I think that's also a helpful mindset to be in.

Nadia Piet21:01 So I think, yeah, design shows up in all these different stages, but the one that I'm not seeing as much, especially when it comes to responsible AI, and who gets to decide that, right? Like, you do an audit once and it's fine. Someone says it's fine. It's like, well, what if someone else is not fine? Where do they go with their experience? So, yeah, I think especially in responsible AI, I would love to see a bit more of that, like post-deployment design, as well. Yeah, and through all the stages, I guess.

Ben Byford21:39 Yeah, I mean, it's very contextual, isn't it? These types of services could be a whole plethora of things, but you'd hope that something which was serious, impactful, life changing, all that sort of stuff, would have some levers and interactions that someone had built in, so that people don't just get that 'computer says no' situation and you're like, yeah, what do I do now? I need this thing. So any designers out there, get on it, sort it out and join the AIxDESIGN community, is that right?

Nadia Piet22:26 Yes. Yeah, exactly. Because, and I know so much has happened over the past years, but it does actually still feel like it's early, especially when it comes to design. AI capabilities have exploded, just not overnight; it's been years, but it did happen so fast. Like, that exponential factor was very observable and it was quite freaky. But interaction design and those things are not as exponential, because they need people to think through them and test things out and test them in the world, kind of. Yeah, I feel like around those topics, and also responsible AI, which is also part design, but also, of course, law and legislation and regulation and all of that. When it comes to all of those other things, it's so early, there's lots of work to do. Nobody has the answers. But we try.

Ben Byford23:29 Yeah, we're working it out. So, looping back around, when you were talking about the ideology, the prominent kind of thrust in the industry at the moment: I noticed there was a recent post on the Slack group which had this kind of, it's feeling a bit of a downer at the moment because of some of these things. What do you feel that prominent ideology is? And where's that coming from, do you think? What's the issue here?

Nadia Piet24:03 Yeah, I mean, there's so much to unpack, and I do not have the geopolitical brain. I am by no means well-read enough or smart enough to try and summarize this, but I'll try anyway. And I find this quite interesting, because it comes back in so many conversations and panels, right? You're like, what's wrong with AI, and the dark side of AI, and all of these things. And there are problems that are specific or semi-specific to AI, or that get amplified or worsened by it, but a lot of it is just existing problems of the world that then become much bigger.

Nadia Piet24:52 Just like existing inequalities and bias and discrimination and capitalism, and the way that economic divide is structured on a global scale, which is just not really working for anyone except for a few people who are having a grand time. And they are funding so much of this work, right? And so they get to have a say, more say than others, in what this is going to look like. And for me, right now, a lot of these developments are driven by a specific agenda and ideology that is quite dominant and regular, but that I do not subscribe to at all. And it's even some of these things like long-termism, which there has been some critique around, and that one is very obvious.

Nadia Piet25:55 Like, a lot of the people who are funding and being funded and doing huge work in the AI space are public about supporting this ideology of long-termism, which is essentially prioritizing potential future happiness for people who do not even exist yet. Happiness being economic prosperity, right? That is sort of a thinking step in there as well, like, oh, if we can make the future rich, then people will be happy. And I'm like, I'm not really seeing that, because that's not what's happening now. But okay. And then there's so much going on now, there are problems now that we need to solve.

Nadia Piet26:40 There are lived experiences of people breathing at the moment, and why wouldn't we prioritize the here and now and the near future, instead of sort of hiding in a further future, which is intellectually interesting to think about, but just quite a bit detached from making actual impact on things that I feel need attention? So yeah, that's just a bit painful to watch sometimes. And a lot of the problem of AI is the problem with capitalism. So it's like profit over anything, and there's nothing necessarily wrong with money or business or profit or any of these things fundamentally, but there is something wrong with putting that over any other values or things that we want for each other and ourselves. And that's huge. Who knows where to even start with that?

Nadia Piet27:50 But as Abdo, who is also someone I work with a lot at AIxDESIGN, puts it: we don't have to break the machine, we don't have to pull out the plug, we can just poke sticks in the cogs and hope that maybe it jams it up a little bit. So yeah, these are huge structural things, obviously, that are not exclusive to AI, but it does feel like the space right now is so dominated by these things that it's just not pleasant to be in. It sometimes feels like, this is just messed up. Yeah, that's why we're starting therapy in the Slack.

Ben Byford28:31 Yeah, I think that'd be nice. I think that would be especially useful for me, having limited kind of internal experience with some of this. I quite often go to companies and do consultation or discussion, or talk to business leaders or people, but I don't usually have a day-to-day insight into how these decisions actually play out or get made. So it'd be nice to actually take a litmus test of where we're at, you know what I mean, across several companies or such. That'd be quite interesting. I was on a panel a couple of months ago and it was self-selecting, right? It was an AI ethics thing, and a lot of the polls we did were around: are you this kind of person, are you using some of these things? Are you thinking about it?

Ben Byford29:29 And I think maybe half were discussing or using responsible AI techniques or processes, which you might identify as AI ethics, some of these terms. And that was really positive. But like I said, this was a couple of hundred people who were there to see an AI ethics panel. So all those other people who perhaps weren't there, or maybe didn't have time to be there, whatever it is, what are they doing, what are they thinking about? A much broader survey would be extremely illuminating, I think, about how people are feeling about the space. And yeah, personally, I think the main issue is that capitalism aspect, which I've probably talked about in earnest on the podcast before, sorry about that.

Ben Byford30:29 But it's one of those things which is like obvious, like, well, if we just had more social enterprises, this wouldn't be a problem. And that's already part of the system. So cool. Or like, you don't have to really break anything to make some of this stuff work, actually. Or again, if you had some legislation that required audits as a prerequisite for using AI services in certain ways, or whatever, blah, blah, and that actually might happen as well. So there's an interesting transitional period happening, maybe constantly, but I can feel it at the moment: there's this appetite for more direction on how to legislate or how to use it in a useful, responsible way.

Ben Byford31:18 Do you have any clients who are like... I'm presuming that they are self-selecting, and all your clients are like, we really need this stuff because we think you're great and we want this product to be great, and it's going to be great in X way and it's going to be really good for people. Is that the sort of thing people are coming to you for? Or are they being smacked in the face and they're like, quick, help us! Or a bit of both?

Nadia Piet31:43 Bit of both, yeah. Because I think a lot of this stuff isn't just about saying that you want to do it right; it's much more layered and complex than that, which even comes down to an individual level, where you're like, oh, I didn't mean it like that. Or you might do something or say something and later realize, or someone points out, that it was not great, and you weren't really aware of it. It wasn't like a huge thing, but you don't always catch it, because if you did, you wouldn't have done or said it. So it's also about, yeah, sometimes clients think they're doing great, but there are some gaps. Some people think they're doing terrible, but they're actually showing up.

Nadia Piet32:38 So it's like that thing, I don't know what it's called, but you know, that thing where really smart people always think they're dumb and really dumb people think they're really smart. I see that with AI ethics as well, kind of, where how well you think you're trying isn't necessarily reflective of how your product or service actually works and shows up for people. So I think it's a mix of people sort of all across the spectrum. Obviously it is people who want to try, otherwise they would look the other way and be like, this girl's crazy, I don't want none of this.

Nadia Piet33:22 But yeah, it's often much more layered, or even what we were mentioning before, where it's like, oh, they've really taken care of it in part, say they do really rigorous user research and really do it in the most diverse, inclusive, good way, but then there's nothing post-deployment, no way for people to speak back. It's a mix across. And a lot of my work is also more meta. So it's in education, for example, or, like, last year we did a big project with the ICO, which is about design for data rights and how we turn these ideas around non-discriminatory AI into design guidelines that people in companies can actually do something with, which is then not specific to one organization or one team.

Nadia Piet34:18 So a lot of the work is also more meta, or more across different organizations, which is also very interesting because you see lots of different approaches and frictions and so on. But, yeah, of course, there's a bit of self-selection there with the people I actually work with, and also for my own sanity, maybe, because there's something I've been figuring out and thinking about, which is: rather than fighting the system, rather than trying to break the machine or whatever, trying to fight what you don't like, it might be more helpful, fruitful, if anything energizing and sustainable for the individual, to work on what you do want. Build a new machine, invent a new machine, whatever, right?

Nadia Piet35:11 Like, build the thing, or try to imagine and craft and work towards the thing that you do want, even if it's like a s***** approximation, but imagine what that would look like, rather than just critiquing or destructing what is: trying to construct something else. So I think also, yeah, I do choose not to work with some people, because I'm like, it's going to do my head in and I don't want that.

Ben Byford35:40 Okay, cool. So the final question we always ask is what excites you and what scares you about this AI mediated future?

Nadia Piet35:51 Okay, this AI mediated future, can you describe that future for me?

Ben Byford35:57 Let's say, from what you know at the moment, projecting into the future, what are the things that you hope don't happen, and what are the things that you really hope will happen and are excited by?

Nadia Piet36:11 Yeah, so what I hope will not happen is this very big, like, monopoly that some big tech companies are having on AI, and that being the main driver and narrative. Because from that point of view, the systems that do come into our lives, even if they work flawlessly, like computationally, will just have values embedded that will drive us more apart, and more out of ourselves and out of each other. Not a future I want. What I am hopeful about is that I think AI, or machine learning as a technique, is so promising and so interesting. And we see it especially with, I don't know, the natural sciences, or biology or healthcare: we're figuring things out about the world that we didn't for a very long time, because it's offering new perspectives and literally new ways of computing the world.

Nadia Piet37:17 And I think that's really exciting. And I think what would be really cool to see, and again, this is something we've been talking about in the community, is if instead of having artificial general intelligence, which is this idea of one sort of chunky oracle knowing it all, we would have lots of small situated intelligences, where people and communities and groups can take machine learning models and use them for things that they need, and that becomes easier. So I would love it if it's a bit more DIY and hackery, and people can use these technologies to decide for themselves what's going to make their life better, nicer, more meaningful, and have the systems or the tools to sort of do that for themselves. I think that would be really cool. I'd be really excited.

Ben Byford38:17 Awesome. Nadia, thank you very much for your time and for coming on the podcast. How do people find out about you, follow you, all that sort of thing?

Nadia Piet38:25 Thank you. So I've made it really easy: I just have my real name everywhere. So that's Nadia Piet on, I don't know, Instagram, LinkedIn, Twitter, all the bits, Mastodon, Are.na, all the channels, TikTok, which I'm not really on, but I have an account. And then AIxDESIGN is also basically the same name everywhere. The website is aixdesign.co, like community, and from there you can get lost in all the links, all the different channels and all the work we've done. And yeah, if you have any questions or ideas, you want to contribute or join the community, or you're doing other work in this space, then you can always reach out. I'd love to hear from you.

Ben Byford39:24 Awesome. Thanks very much.

Nadia Piet39:25 Thanks Ben, nice to meet you.

Ben Byford39:32 Hi and welcome to the end of the podcast. Thanks again to Nadia, and again, I'm really glad that we managed to put in the time and make it work. So that's awesome. And do check out the AIxDESIGN community for more information and for sharing insights, interesting links, research and all that sort of stuff.

Ben Byford39:50 There's also been a lot of chat with other people I've been talking to about these kinds of AI ideologies, some of which are coming out of West Coast America, and whether they are kind of inherently bad. We talked a bit about this on the podcast with Nadia, and I'm kind of on the fence, because some of the ways of thinking could be useful, or you can take what you need from them. And I'm not necessarily yet in the camp of let's put a name on it, brand them and tell them that they're bad.

Ben Byford40:24 It seems a bit of a binary choice to me. I think there are probably some good things to take and some less good things that we can leave from some of those ideologies, those kinds of narratives that we're being fed in this space, but it's very interesting to have these conversations. So let's keep doing that at least. So again, thanks for listening, and if you can, you can support us on Patreon at patreon.com/machineethics, and until next time, goodbye.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford