89. AI Ethics, Risks and Safety Conference - Special Edition

In this special edition episode we hear vox-pops recorded at the AI Ethics, Risks and Safety Conference in Bristol on the 15th of May 2024. We hear about AI regulations, AI Standards, AI Ethics frameworks, principles, ethics guiding research, awareness of the ethics of AI, and explainable AI.
Date: 24th of June 2024
Podcast authors: Ben Byford and Herbie Robson at the AI Ethics, Risks and Safety Conference
Audio duration: 12:55 | Website plays & downloads: 27
Tags: Conference, Regulation, Standards, Principles, Frameworks | Playlists: Special edition, Conference


Transcript created using DeepGram.com

Hello, and welcome to the 89th episode of the Machine Ethics podcast. In this episode, we have six vox pops from the AI Ethics, Risks and Safety Conference held in Bristol this year on 15th May 2024. The conference brings together businesses and organisations to discuss, share best practice, and learn about upcoming regulations, standards, case studies, and the resources available for organisations and professionals working in artificial intelligence. The conference's main themes were AI regulation, standards, training, and AI ethics frameworks. In these vox pops, you'll hear from some of the people who attended the conference, who also talk about principles, ethics guiding research, awareness of AI ethics, and explainable AI.

I was attending on the day and I also got to emcee the panel on explainable AI towards the end of the conference. As these are vox pops recorded in situ, be aware that the audio is noisy, and thanks again very much to my interviewer on the day, Herbie Robson, for holding the interviews and asking great questions. The first voice you'll hear is conference organizer Karen Rudolph. I hope you enjoy. How's it all going?

Yeah. It's very busy. As you can see, lots of people talking, networking. Lots of interesting conversations and presentations, questions. Yeah.

It's been brilliant. And have the outcomes been what you were hoping for? Yeah. I think I'm really pleased with the audience as well. We've got loads of technical people, data scientists, data engineers, heads of risk.

So it's really the audience we wanted, businesses. Yeah. Yeah. It's really spot on. Thank you.

And have you learned anything interesting yourself from any of the fantastic speakers? Too many things. Yeah. Way, way too many things. My head is just... yeah.

Frameworks, standards, implementation. Yeah. It's all here. So and that's been really good. Great.

Well, thanks again, and congratulations to Sunil Kumar. Thank you. See you next time. I'm in the data protection team at TLT, which is a national, full-service commercial law firm headquartered in Bristol. And I talked about AI regulation at the conference.

So I gave an overview of what's going on globally with AI regulation at the moment, because it's been such a fast-moving area, focusing particularly on the UK's approach and the five principles that the UK has set out to regulate AI, which are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Sure. Okay. My name is James Shiveras. I'm with Zurich UK Insurance Company.

Although I'm here sort of as part of Zurich UK, representing them, I came more for my own interest as well. Zurich is actually very good with ethics. We have a person who's in charge of that, Penny. Penny Jones, I believe her name is. She's in charge of AI governance and ethics within our company, which for a large company is quite... I wouldn't say unusual, but, you know, it's not common that large companies have such a person.

So that's very good. And so I'm just here to find out more information that I can report back to Harrison as well. My role is AI development lead within Zurich. So I run quite a few different projects building AI models and such for them. Was there anything in those talks this morning that was of particular relevance to your team or your role?

I mean, yes. They're all quite relevant, especially to do with fairness, bias and discrimination. Also the regulatory aspect. Personally, the stuff that I'm trying to learn about is more to do with, you know, what's out there, what's happening in the UK and so on. Because although Penny is the one who's in charge of this, we all need to be aware and know how to handle it.

So yeah. Great. And would you say it's been a particularly large step implementing AI into your organization's workforce? Or is it ramping up? Well, I've only been there a few months myself.

So I guess ramping up. I mean, we do have quite a few AI products being developed. However, I believe only one or two are currently in production and actually being used in anger. So far, you know, not a huge amount, so it's definitely ramping up. I mean, insurance companies are not known for being at the forefront of tech, shall we say, but I think Zurich is doing a good job of pushing it forward and, you know, bringing it into the modern day.

So, my name is Grolle. I work for Empirisys. We are a data science startup, and we work with a lot of clients in high-risk organizations. So mainly oil and gas, chemicals, construction, energy, those sorts of areas.

So, yeah, I had the opportunity to come along today, and I think there's definitely an opportunity to learn a bit more about how to implement an AI ethics framework within the company, and also to learn about what varies between regulatory frameworks in the UK, the EU, and internationally as well. So, just a bit of a general learning experience around ethics and, you know, the recent advancements really. That's amazing. So who are you here with and what are you hoping to get out of the talks? Yeah.

So, I'm here with the University of Bristol. I'm a project manager for a research project into financial services and how we can improve them through technology. So, yeah, I'm here today basically to hear about the different challenges around regulation, primarily within financial services and from AI, so that we can effectively support the business community we're working with through research and through support that helps them and enables them to actually implement these technologies in the future. Yeah.

And what would you say are your main research interests within that? Yes. So there are a few different areas that we're exploring, but the main ones are around improving access to financial services for those who are underserved. So, as an example, how can AI help financial advisers make their service more available to more people? That's an obvious example.

Yeah. Things like that, but also operational efficiencies. So we're looking at the whole spectrum of financial services, but exploring specific applications of how we can implement it. Great. And was there anything from this morning's talks that was of particular relevance to what your group does?

Yes. Yes. So the first thing, around the timeline for regulation and the changes around regulation that are coming in from the EU and UK perspective, because that will massively affect the stuff that we deliver and the decisions that we make. Yeah. So I'm Burham.

I'm studying cancer and how you can use deep learning to diagnose it, basically. Obviously, this is, like, really taken off in industry as well. There's really powerful tools in that already. So what I'm trying to focus on is explainability. So deep learning neural networks can be very black box.

You don't know what's going on inside them, which means that their outputs are unexplainable, which is really bad when they're wrong, because if you misdiagnose someone, or you don't diagnose someone, the consequences are huge. So I'm looking at ways you can actually explain the decisions of black-box models in the context of image processing for cancer diagnostics. That's really interesting. And how have the frameworks that have been talked about today... Yeah. Are they at all relevant to being implemented in your research?

So I think the really interesting thing is that in academia, and in smaller research groups like mine, it's not really at the forefront of people's minds to have proper ethical frameworks in place. People just assume that you have your moral values and they'll spill into your research, but I think today has really highlighted that the best way to do it is to actually have ethical foundations and principles guiding the research. That's what I've taken away from today. Fantastic. Yeah.

Have there been any highlights otherwise from any of the talks? Yeah. I think one of the talks was about how regulation and innovation aren't mutually exclusive things, like one doesn't impede the other. They can work together. I thought that was really interesting, really insightful.

Fantastic. Yeah. Thanks very much, brother. I really appreciate it. Okay.

Alright. So what have you found really interesting and exciting from today, this morning or this afternoon? What's been exciting is the amount of work that's being done. I work for a company that's relatively new, Mhmm, in the space, and we are part of the responsible AI team within a larger group.

And it's all a bit loose and we're kind of unsure of where to go, and the amount of resources available, and the minds that are doing this in practice and doing it well, is really exciting and reassuring. Mhmm. What stood out to you about the new legislation that is gonna be applicable to what you're familiar with in the world of AI? Is this UK legislation that you're referring to?

Well, I'm picking up an accent. So where... Yeah. Where are you from? America. Yeah. Yeah.

Yeah. I live in Maine. Okay. In the US, it's all frameworks. There's NIST.

And then there's the AI blueprint... something, something, wrong name, which is guidance. There's no real legislation yet, which is slightly worrisome, but I guess that's how the US kind of rolls as well. There's a lot of freedom for private institutions to make up their own practices. I think eventually we will have to have regulation, or at least a collective of companies agreeing to standards of practice, hopefully. Right?

OpenAI is kind of doing that. So, yeah, we're on the looser side. I like the UK's approach. It's kind of federated, although it's bottom up. Mhmm.

I think a federated system is always good. And at the top, you should be rather general to allow for case-specific regulation and practices, and allow companies to operate successfully within their sphere while also maintaining a certain amount of standards. So yeah. And based on what you've heard today, do you anticipate the development of AI stateside and in the UK and Europe growing at different paces or impacting each other differently? Yeah.

I think there are different styles and everyone has something to learn from each other. Again, if there's more discourse, if there's more communication, if there's more consideration within each body of law of how we best make sure that we're making AI that is beneficial. Right? It's always about scaling. In the US, it's always about "let's scale", because that makes money.

Mhmm. But you've gotta consider whether you're scaling benefits or you're scaling harms to people. Sure. Okay. I will say, Marie Oldfield was definitely the best. Yeah.

Realist. Oh, she's fantastic. Oh, yeah. Like, she's the one who's like, "I make these decisions. I have to talk to people."

"The consequences are very high, and I take as much responsibility as I can." And she should be a model for what other people do. I think she was only allowed one question, maybe because her talk went too long or something, but she should have been allowed to speak for way longer. It was a disservice to her and all of us. Ciao, Marie.

Yeah. Yeah. Alright. Thanks very much for your time, man. We appreciate it.

Cheers. Pleasure to meet you, man. Hi, and welcome to the end of the podcast. We've got lots of new interviews coming very soon on the podcast, so stay tuned. Thanks again to everyone who spoke to us on the day and to Karim for organizing such a great event.

What struck me was that there was a wide variety of people. I think beforehand I had this idea that it might be a bit more business-led than I'd have liked, but actually I found that there were students and academics and data scientists and all sorts of people there, so that was really good. It was actually also really fun, and we had some drinks afterwards as well. So I'm looking forward to next year's. Also, hopefully, with the success of this first one, maybe we'll have a broader scope and more people coming to the next conference next year, if it goes ahead.

So, hopefully, looking forward to that one. Thanks again for listening, and if you can, you can support us on Patreon at patreon.com/machineethics. You can find more episodes like this at machine-ethics.net, and you can contact us at hello@machine-ethics.net.

See you next time.

Episode hosts: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.


Episode hosts: Herbie Robson

Herbie is an engineering graduate with an interest in AI tools for sound processing. He's now taking bookings for music mixing and mastering via email: herbierobson (at) gmail.com