76. The professionalisation of data science with Dr Marie Oldfield

This episode we're talking with Dr Marie Oldfield about definitions of AI, the education and communication gaps around AI, explainable models, ethics in education, problems with audits and legislation, AI accreditation, the importance of interdisciplinary teams, when to use AI or not, and harms from algorithms.
Date: 17th of April 2023
Podcast authors: Ben Byford with Dr Marie Oldfield
Audio duration: 38:10 | Website plays & downloads: 83
Tags: Accreditation, Education, Audit, Diversity, Harms, Legislation | Playlists: Legislation

Marie (CStat, CSci, FIScT) is the CEO of Oldfield Consultancy and Kuinua Coaching. She is an experienced AI and ethics expert. With a background in mathematics and philosophy, Marie is a trusted advisor to government, defence and the legal sector, amongst others. She works at the forefront of ethical AI, driving improvement and development, and has been called upon to validate degrees and provide input to UK universities. Marie was invited to join the Executive Board of the Institute of Science and Technology, to be an Expert Fellow for Sprite+ and to be a member of the College of Peer Reviewers for Rephrain.

Marie is the founder of the IST Artificial Intelligence Group and the IST Women in Tech group, and a Professional Chartership Assessor for the Science Council. She is frequently invited to speak on podcasts, panels and at conferences about her experience and research in AI and ethics. Marie founded Oldfield Consultancy to solve complex problems ethically with the latest technology; it also provides analytical training for technical and non-technical teams. After years of training senior Civil Service and military leaders, Marie founded Kuinua Coaching to provide coaching for professionals and executives in leadership, negotiation and soft skills.

Marie is passionate about giving back to the global community through extensive pro bono work, with a focus on education, poverty, children and mental health. For several years she led the Global Consultants division for pro bono work at the American Statistical Society, where she was honoured to work alongside the UN, Doctors Without Borders and MapAction, notably during the Ebola crisis and the Nepal earthquake. Marie is extremely proud to be able to improve the life chances of the poorest and most vulnerable across the globe, working with Statisticians for Society, the UN, the Royal Statistical Society, the Institute of Science and Technology, Pro Bono Economics, Statistics Without Borders and the Science Council.


Transcription:

Transcript created using DeepGram.com

Hello, and welcome to episode 76 of the Machine Ethics podcast. This time, we're talking to doctor Marie Oldfield. This episode was recorded on the 4th of April 2023. We chat about the education and communication gap around AI, the importance of ethics in education, inherent problems with audits and the legislation of AI, the importance of working with interdisciplinary teams, embedding ethical thinking, and the pitfalls and harms that can be caused by AI and algorithms. If you'd like to find more episodes, you can go to machine-ethics.net.

You can contact us at hello@machine-ethics.net. You can also follow us for updates at machine_ethics on Twitter, or the Machine Ethics Podcast on Instagram. And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks very much for listening, and hope you enjoy. Hi, Marie.

How are you? Thanks for coming on the podcast. If you could just quickly introduce yourself: who you are and what you do. Well, I'm Marie Oldfield. I'm a chartered statistician, chartered scientist, and fellow of the Institute of Science and Technology.

I'm a senior lecturer at LSE. I'm the director of Oldfield Consultancy and an expert fellow for Sprite+. Brilliant. Thank you. So we met about a month ago.

We were doing a panel discussion with lots of interesting people, talking about AI ethics subjects. I think it was actually quite broad, the things we touched on, but I think we found it interesting. So, hopefully, the audience did as well. That was a Data Science Festival panel discussion. But today, you're very gracious to come back to talk to us.

And the first question we always ask on the podcast is what is AI? Oh, good starting question. For me, AI doesn't exist, which is ridiculous because I work in the field. So philosophically, where you've got a definition of AI, you are kind of trying to explain complex concepts to people, so it's easiest to anthropomorphize those and kind of make it into a Siri or an Alexa and then tell people that that is AI. If you look at what intelligence actually is, in terms of a human concept, it's not anything that a machine can currently do.

Machines can work within boundaries. They can use datasets. They can use algorithms, but that is nothing like human intelligence, and it would require something like general intelligence and context to get a machine anywhere near what a human is. So artificial intelligence, for me, the intelligence part of it has not been defined correctly enough for that to exist. If we're saying the intelligence exists artificially, again, for me, that's not a thing.

So I like to kind of compare it to machine learning or data science, where we can actually kind of say machines, and we can look at science, and I think that's more applicable in this context. If you wanted to look at what the popular version of AI would probably be, for me, that would probably be something like Siri, Alexa, ChatGPT, something that is, you know, appearing to be intelligent, but not necessarily is. Yeah. And I guess, so you're saying that these things get kind of put in this bucket of AI, but maybe that is misleading and not telling the whole story. Whereas Siri and Alexa are exhibiting behaviors that we might suggest are AI-like, I guess.

Yeah. And I think it's a language issue. So we try to use the language that we've got to describe things that we don't understand. And in trying to do that, what we might do is mistakenly attribute a word that might not necessarily mean that, but is the closest approximation to what we're actually looking at, and then try and use it. And in doing that, what we do is make it simple enough for general, you know, understanding, but what we also do is remove some of the complexity from the actual theory and the kind of concept.

And then what happens is you get a lot of misunderstanding from people that can't understand the technical depth of the algorithms that we're dealing with, and that induces a lot of risk for the consumers using products that are being described in a certain way, but are not being depicted in the way that they potentially should be. Mm-mm. If we take that forward, would you be more interested in people saying this product contains machine learning, or this has a neural network in it, or, like, being more specific about the technology than just saying it's AI, or is that more kind of a proprietary problem or, like, you know, a trade secrets issue as well? I think we're looking at an education gap and a communication gap, where we haven't really educated people in the school systems to understand these types of complex concepts. So then when we try to describe them, to sell them, we've got to try and use concepts that people generally understand, which is not appropriate, because it leads to a lot of problems later on when the users can't understand the implementation of the algorithm.

And we're also looking at a communication gap because, potentially, either the words don't exist to describe it or we're using incorrect wording. And some of that might come from trying to sell certain products, where we like to say a neural network works like the brain. Well, I mean, does it really work like the brain? We're not even entirely sure how the brain works, so it's amazing how we can say that a neural network works the same. And we need to be able to break down neural networks in a way where we can express them to the general public so they can understand them, because, ultimately, the general public are the people on the receiving end of all this technology that we're creating.

I guess there's also a presumption that we know what's going on as well and how they work. Yeah. Right? Yeah. Yeah.

And actually, maybe with, you know, some black box models Mhmm. We're not very sure how they do work, and that's another issue, because we really should be breaking those down as well. And we shouldn't be making black box models. We should be building explainable models that people can understand, so that when a decision is made about their life by this algorithm, they can either challenge it or understand it. Yeah.

I guess, like, there's a load of stuff in there that comes to mind around, you know, how you might legislate for this stuff or make it clearer, and, you know, there's all these words like trustworthy and all sorts of stuff. And I guess what you're talking about there really is also kind of context, you know, in that context of someone being subject to a decision for some insurance or a loan or whatever it is, or health care or all sorts of stuff. If they are subject to this algorithm or, you know, almost like any process, I guess, you know, it's easier to ask a person, like, how they came to that decision, even if it's kind of, like, post hoc, than it is one of these models. So is it more about that transparency for you than the specifics of the algorithms, maybe, or would those help too? I think so there's a lot in there, and I think that it comes down to process and how you build ethical AI.

And at the minute, my consultancy is partnered with LSE to build software so that developers are able to go ahead and build AI that is ethical. And where we've seen that that's an issue is in funding bodies. We've not got an ethical gateway where we're looking at where developers are building AI or similar algorithms. When I did an empirical study, we were looking at 70% plus who were saying that when they have an issue with their implemented technology, they deal with it on the back end with PR and marketing, and that is not how we should be doing it. We should be looking at the concept phase: how we build AI that is robust and fair to society.

If you wanted to go a step further with that, you'd be looking at legislation. In my opinion, again, we've got a gap. So where we are trying to use words like bias and trustworthiness, we haven't defined them. They're very vague words. So if you look at a lot of the working groups and think tanks that have developed guidance based on these, they've done it in such a way that it's not really granular enough for developers to use it.

So therefore, how can we expect developers to create robust solutions when actually the guidance is not granular enough for them to use? And the ACM recently did come out with something that looked a bit more granular that might be able to be implemented. But, really, the emphasis is on the process of involving the right people in interdisciplinary work, making sure the requirements are correct, the users are involved, you've validated and verified your model, you've software tested the model, you've implemented it correctly, and then you've retested it. And this is not happening in many of the cases, especially in start-ups that are trying to scale really, really quickly.

If you implement legislation, you have to be granular, because you can't just leave loopholes, and you have to make it very clear to people what it is they're trying to achieve. We would then have to be certain on what we were trying to get them to do in order to get them to do it. That risks either stifling innovation or making legislation, and these kinds of words that we use in everyday language, so specific that you need such a definition that you would have to use them in a certain way all the time. So it's very difficult to see a way forward, and I think part of that is within education, because at the minute the subject benchmark statements at university level and A level don't contain ethics in the technical sciences. So to date, we don't actually have them for data science or for AI.

So if we're going to bolt modules onto technical degrees, and then we're going to say, well, actually, you need to do, you know, the data science module. Where is the ethics module? Where is the understanding of how this stuff works and how it's implemented? Because if all you're going to tell people is this is how you build it, then how do you expect people to then reverse on that without being part of a professional body, or without having guidance to then go and reverse engineer ethics into their algorithms? So, because technologies move so fast, the gaps are becoming ever greater between education, communication, language and legislation.

What we risk doing is just going at this too fast and saying, right, let's just put laws in, because laws will stop all this stuff. Laws won't. It's like audit. All they will do is, people will find loopholes in them, or when cases go to court, the legislation won't be mature enough to help lawyers understand what it is that they're supposed to be doing in the legal cases, and that, you know, that's another issue. How do lawyers understand the algorithms?

And that's another thing that we do at Oldfield Consultancy, help them to do that, because it is quite a difficult thing for a non-specialist. And, you know, implementing it that way just takes away the freedom for people to go ahead and develop pilot solutions, test solutions, and it risks making it so specific on the end where you're actually implementing it for users that you have no leeway to start trying to alter your solution. And I'm not really sure how the legislation would work. I'm not sure what part of the process it would drop into. And if they did do an audit checklist, audit checklists are notoriously difficult.

If you ask somebody to do one thing, the interpretation of that from every single company that's building an algorithm will be completely different. It might be anything from a one-page ethical statement to a 25-page ethical statement. And what's in there, and how do you specify what's in there? And then if you specify that, what have you missed? So, again, it's particularly difficult to try to find a way forward.

And I think that legislation immediately is not that way. Education and guidance are. That was, I think, really interesting. I think, obviously, we have a wealth of kind of interest in this area. Like, legislation is, seemingly, occurring in different places.

And I'm really interested in the idea of education, because I've done some teaching, and I like to go into universities and do these ethical technology days, various courses and bolt-on courses. But that seems like an obvious choice. Right? We should just be doing that, in my opinion. Right?

You know, if we're making these technologies which, like, affect millions of people, we should probably think about how they affect millions of people and the kinds of things that we could maybe steer towards and stay away from, you know, in that situation. But with the guidance and education, it's almost like we're not gonna get there fast enough for, like, our current position. So do you think there's a, you know, if we're saying that maybe legislation's not going to be appropriate, what can we do in the short term for what's currently happening? I think legislation has its place. I think we need to be able to define more of what it is that we need to do before we can go down that route fully.

I think in the meantime, as per previous government inquiries, and some of the top consultancies say the same, we need an accreditation. So I've been working with the Institute of Science and Technology to establish an AI accreditation, so that we can guide developers and we can help them network, we can provide methodology, we can tell them how to ethically and robustly model their solutions. And then we also need to look at what the current education system in the UK gives us in terms of ethics. How do we put ethics into the modules that we've already got? And how do we ensure that developers are responsible?

Because if you don't ever teach them that, you can't then expect them to do it in the future, because they may not know about it. They may not be part of a professional body. Then you're putting the burden on them to go and read up on how they do it. And if we're completely honest, there is not a huge amount of best practice around this area in how to model ethically and robustly. We've got the best practice guidance, the Aqua Book, from the UK government.

We've got different workshops giving us different answers. But really it comes down to how we model, and it's how we look at society and how we take that data, what we want to get from that, what question we want to ask, and then how we model it, and what the output is. And is that something that might have a negative impact for society? And following a good best practice process means that we limit the risk and we limit the reputational damage, but that does have to exist before we can actually follow it. Mhmm.

So do you think that will happen then? I mean, I think it's critical that we get there. And that is why I decided to develop my software, because if we haven't got any handrails, we can't expect anybody to follow them. And there's a lot of guidance out there that kind of gives you the high-level values, and it talks about ethical development. But in practice, what does that actually look like?

What does it look like for a developer on the ground? What processes do they need to follow? And it comes down to really basic stuff. I mean, if you look at previous government inquiries, you're talking about leadership, communication, human resources. You're not talking about whether people do coding correctly.

You're talking about all the context around that issue, and that's what's being picked up. If you look at the recent House of Lords inquiry in 2021, or 2020 it might have been, we see exactly the same problems that we were seeing 30 years ago, but we see extra ones. We see a lack of interdisciplinary working. We can't work as an island as developers. We have to have the team for the context.

And if we've not got the context around the model, how can we possibly develop it if we're not understanding it? I mean, I've kind of met people that have said, let's do gender balancing on our model, and we'll say 50% male and 50% female. And the thing with that is you can't reverse engineer data. So if you've got 30% women and 70% men in your dataset, that's probably because either you've collected it statistically incorrectly, or you've collected that and that is what society is, and that's what the reflection is. So we have to work with that and look at the model and see what the outcomes are, and identify the issues that are causing potential problems in our output, not necessarily take these steps to re-engineer the model into what we think it should be, because a utopia is not what we live in.

I've got a caveat to that. I've got, like, a devil's advocate situation. So, this might be getting into the weeds a bit too much, but if you're always going to use the data verbatim, are you always gonna, post hoc, like, change the output? Say you have a situation, let's say a loans thing. Like, we're giving away all the loans to women and none to men.

This is probably the opposite case most of the time. But, let's just say that, pretend this dummy example. Are we gonna say, okay. Let's just that's just what the data is saying. Therefore, it's just that's it.

Our service does that, or should we do something about that? Right? You know, can we change the dials post hoc to do something, or, you know, change the way we're inputting into the model, how we're training our model? Obviously, not changing the data itself, but, like, you know, choosing to do different kinds of training. So the answer at that point is not in your model, it's not in your data.

Your answer is in the people, and you have to ask the people what's happening. I worked on a project like this, and we had amalgamated a lot of different groups of people into one population, because we were using the old category that the government had of BAME. And when we broke down the BAME category into the different cohorts of different types of cultures and people within it, we pinpointed the people that weren't doing so well at university, and we wanted to get their grades up and we wanted to figure out where we could provide the support. When we figured out what cohorts were suffering, we went and spoke to them.

And we found out that actually, in their culture at that time, a lot of the cohort were having to work in their parents' businesses. And if we changed the lectures to an hour later per day, they'd all be able to come to the lectures instead of missing them. That was the takeaway. So the answer wasn't in the data, and the answer is not always in the data. You'll spot issues within the data, but the data won't tell you why.

The only way that you can find that out is going back into society. So with credit card allocation, what might have been missed there when women were being declined for the credit card is that you have some women that stay at home with the children. It doesn't necessarily mean they've got no money. It just means that they do something different, which doesn't necessarily bring an income in. They might be a director of a company, which means that their income is completely different to everybody else's.

It doesn't mean that they can't pay off a credit card. It doesn't mean that they shouldn't have a credit card. It's just that the context around that particular group of people is completely different. In understanding the context around that, we can then change our model to say we can sell to these different groups of people if we use different rules, or we use different guidance, or we get some extra information from them, because it might be that you can't provide your income in the specific format that has been asked for by the credit card company, so you need to provide it in a different way. And this is the issue with AI systems in general.

They do not take account of different cases. So what they'll do is look at the general person that in that instance might work 9 to 5, get PAYE, get some income per year, that's all they get, and that's the end of it, and then you make a decision. If you've got anybody else that's different from that, that has maybe inherited wealth or something else, they're going to fall through the net and get declined, because there's no other way that they can provide their income. So we need to be aware that not everybody fits in the box, and this is true for immigration, it's true for benefits, and we've got to consider that, because some of the people that are at the other end of these algorithms are people on benefits and people that are immigrating.

And they can be seriously disadvantaged to the point of mental health issues and suicide, because of the way that these algorithms just completely leave them with no way forward, and there's no person to contact. Or if there is, they can't do anything about it, because they said the computer says no. So in the way that we implement technologies, we've got to understand that the people that we do it for are society, and the people at the other end are society. So they should be our first priority, not creating a model and looking at the data and then trying to either reverse engineer it or change the data at the front end. And I guess we've had some real-life examples of this, you know, playing out in the Netherlands and in Australia, as well as with, you know, public sector stuff.

But, you know, obviously all sorts of random and horrible, like, disasters have been happening due to algorithms that have just been used and not checked, or, let's say, have had negative outcomes. So do you think there should be, I mean, I think you've already mentioned it, but I think you're pointing out that there should be a professionalisation of the data scientist, and that that could be a profession itself that has accreditation. Maybe it has some oversight. You know, many of the financial bodies and advertising bodies and all sorts of things have some sort of, you know, oversight body that comes out of the government. Do you see that as, you know, a really useful place to get to?

And then when you are making stuff at home, that's different to, like, making stuff as a business that is accredited, sort of thing. I think you're entirely right. And I think that needs to happen, because when you had maths and statistics, you had the Institute of Mathematics and its Applications, you had the Royal Statistical Society, and then all of a sudden we have machine learning, data science, and artificial intelligence. One of the issues that came out of the recent government inquiry in the House of Lords was a lack of respect for experts. So if you've not got respect for the experts that are doing the modeling, how can you possibly move forward with their models?

Because we're meant to be using an evidence-based approach. That lack of respect potentially could come from the fact that there is no chartered body for professionals in these areas, because the technologies move so fast. These areas have kind of sprung into being without any kind of, you know, body behind them. And you're lacking that kind of professional recognition. I mean, we have it for scientists, for the chartered scientists from the Science Council, and we've got it for statisticians.

And statisticians are called into courts, they're legal experts, they can be asked to do all kinds of things, and in policy at the minute, in some areas, it's a legal requirement to have a chartered statistician work on your work. So if we have to do that, then when we've got algorithms that are affecting a massive proportion of society, why do we not have a professional body that is a methodological leader and provides the accreditation for professionals, not only so that businesses can see who they're hiring, but so that the experts can get the recognition? And there is, from my empirical research, a huge problem between businesses and people wanting to work for them, because the businesses don't know exactly what they want to hire, because they're not sure what skills they need, because there isn't really a checklist. And people that want to work in AI aren't really sure what they need, because what coding languages do they need?

How much experience? Do they need contextual backgrounds? Nobody's really sure. And this is causing an issue within the hiring pipeline. So, really, the professional status of data science, machine learning, and AI is being called into question, and there have been repeated requests for professionalisation of these disciplines for a long time.

And I think within the Institute of Science and Technology, we're making a move towards that. But my problem is that the way you get a chartership for a body is particularly difficult, and they're really not handed out anymore. So if we are going to invest 950 million in AI innovation in the next few years, and we're going to put AI at the forefront of the UK agenda for the next decade or two decades, then why on earth are we not going to open that chartership route for professionals in these fields, so that not only can they be recognized and have that recognition for their skills, but they can be recognized by businesses and the people that they're working for, so that we know that they can develop ethical AI? We know that they're capable of doing that. It seems to me a huge oversight.

Yeah. And I guess, like, you pointed out just at the end there that part of that situation would be the kind of responsible, ethical training or methods or process, whatever, as part of that accreditation. Yeah. No. That's what we get from other professional bodies.

The training, the guidance, the debates, the discussions, the understanding of our disciplines. So the fact is that that is missing for these disciplines. And you might argue that there are special interest groups for AI, machine learning and data science in different professional bodies, but we can argue that they should really be having their own. Yeah. And I guess that they would be well positioned to maybe have some oversight or auditing capacity or, you know, fining capacity as well.

You know, at the moment they're talking about that being under the ICO or other places where, you know, they have to make up their own remit for how that would work and stuff like that. So, yeah, maybe that would be interesting to have. Yeah. I think it's critical, because without the ability to have conferences, without the ability to have expert speakers, to network in the community, being able to publish in journals, being able to have discussion groups, I just think that we're really lacking in giving opportunities to people that are coming through that are really going to be the future of the workplace.

And not only are we disadvantaging businesses and people that work in these disciplines, ultimately we disadvantage society by not doing things that are very, very easy. Putting critical thinking and ethics into education, a professional body, a professional chartership, none of these things are difficult. But when we had the subject benchmark review recently, there was no move towards putting ethics into the curriculum. So I'm not entirely sure whether people are going out into the workplace and looking at what's actually happening and doing some research, and then going on to these panels and saying, well, why do we need that? It's absolutely critical, because otherwise we just see a huge amount of negative impact, and that's not what anybody wants, especially when it could be your family member that suffers from this and you've implemented the algorithm. You know, we need to try and bridge these gaps as soon as possible.

What kinds of things would be, like, very important for you to impart in those educational settings, you know, if we're dealing with just kind of general stuff, maybe younger years, not younger years, but, you know, secondary, A level. But then at university, more specifically, what are the things that need to happen, essentially, in your mind? In order for me to get where I am now, I did a degree in maths, a master's in applied statistics, a master's in philosophy and a doctorate in computer science. I think that that journey doesn't need to be that lengthy. We really should be learning a bit of philosophy earlier on if we're going to do things like AI and we're going to look at complex concepts and we're going to look at ethics; philosophy has got a lot to teach us in that area.

So if we, you know, look at aspects such as ethics, we can even put those into computer science, and it's actually really simple. It's just about telling people that are building models, whether it's a basic statistics graph that says height is affected by something, or if you're taller you've got a bigger shoe size: what are the implications of that? How do you spot when your data is incorrect? How do you collect your data correctly?

How do you make sure that your solution is ethical? Who do you work with? Who are your interdisciplinary team? It's that simple. It's not like you have to go to town and create new curriculums, new degrees, new master's courses.

You just need to make sure that people can critically think and understand what they're doing in the context in which they're doing it, which at this point is not officially taught. It's not an assessed criterion. Yeah. So you need some way of, like, demonstrating that you've critically thought about the process of, you know, creating a thing and its impacts and all that sort of stuff. Yeah.

Because it's not as simple as just producing a correlation graph anymore. It's about building software and then testing that software. We've got many disciplines all kind of crashing into each other at this point. And if you're going to build models, you've got to understand how you code them. And part of that is, when you're trying to look at assumptions that are made within society and you want to code those into a program, you can't very easily code how people think.

When we've had these issues before within code, it could be one line of code, 3,000 lines down, that has that assumption, and you have to go in and correct it. And not only understand very clearly what the specialist wants you to write, but also how you translate that into coding language for the model to actually do it. And that requires a huge amount of thinking and testing, and that's something that we are not trained on in the technical disciplines at this point. Yeah. And even if it's possible at all, or even if it's appropriate at all, you know, and the... Exactly.

Yeah. The gaydar research comes to mind, because they had that idea that you could match someone's face to their sexual orientation. And it's like, can you? Like, why is this useful? And to, you know, what purpose would this ever actually be a positive move for us in society?

Like, there's loads of ideas where people can find data and ask a question, but the question that they're asking, or the goal that they're trying to achieve with the data, could be wholly inappropriate or fantastical, and is unlikely to yield actual results which are kind of statistically relevant. Or, like, the thing is with these models, they are always going to try and find some correlations. Right? Even though correlation is not necessarily causation. Anyway, I digress.

I was looking at some of your articles on your website, and I was quite excited by the one titled AI has a PR problem. I wondered if you could talk a little bit about what you were thinking there. So this article is currently under review, and it's the empirical research I conducted. So it was to look at how people not only perceive AI, but is there a difference in perception between users and developers of what AI actually is? And we found that there was.

And that leads them to view it more optimistically or negatively. And that was within a much larger study that I conducted, which was about what kinds of methods you use within your business to ensure that you're modeling robustly and ethically. And that was where we found out that businesses prefer to use PR to solve their problems rather than robustly model, and it came down to the fact that they didn't know where to get the correct guidance from, or they just didn't want it because they were trying to scale too quickly. And I mean, let's be fair about it. It's quite easy to sell a new shiny thing to a lot of people, but when it fails, it's okay if it fails in isolation, but when it fails and then impacts society, then you've got a huge problem.

And I like what you said earlier because you said why, and that's the question that I ask all the time. Why are we doing it? Why are you building these models? What is the reason? And if your reason is, well, I just want to make a load of money, well, then you need to consider what you're building because you might want to make a lot of money, but you need to not be impacting society in a massively negative way.

And some of the things that I've seen, because I'm on a lot of entrepreneur networks and I do a lot of reviewing for funding proposals, a lot of the proposals that I've seen and the solutions I've seen have got no mention of ethics, and they've not got any robust modeling process. So then I would be asking, well, why? What's the purpose? Do you just want the funding? Do you just want the money?

Do you just want the prestige of getting this grant? Or are you actually building this because you've found a reason to do this and it actually benefits society? Because that's one of the reasons why we should be doing things. And that's why I've done so much voluntary work in my career and so much pro bono and disaster relief work, because I think that we should be using our skills for good. And it's like I was saying the other day in a talk, you can use a lot of this stuff for good or for evil, and it really comes down to where your moral boundaries and your ethical boundaries lie as to what you're prepared to do to either solve a problem or earn money.

You know, I'm not in this, you can see from our company website, I'm not in this to just go out and make millions and millions of pounds. I'm in this because I've seen people affected by this type of modelling. I've seen people die, I've seen people commit suicide, I've seen that happen. And when I've questioned the modeling, there's been no opportunity for challenge, and nobody's gone back and fixed it. So what we've done is we've either sent people out knowing that they might die, or been that negligent that they've still died, or the algorithm that's been put in place has led people to commit suicide because they could not get any help.

And I do not think that in today's world that's the place that we should be in, and that's why I started my company, because I was absolutely sickened by it. When I do my work, I make 100% sure, with every effort that I can possibly put in, that our solutions are fit for purpose. I look at ethical and robust modeling, I consult on this, I go in and I fix models that are broken, that haven't gone through this process. And I look at society and I try to ensure, every single time, that I do things with integrity, transparency and explainability, so that the users can understand and are not left behind, and so that developers know what we've done and how we've done it. Because I think anything else is just not fair on your industry, because you're just letting it down completely, and there's just no reason for you to do that.

Thank you very much for your time. Getting towards the end now, the last question I'd like to ask is: what excites you and what scares you about this AI-mediated future? I'm not scared, because I see it as a really good challenge. I see it as something where there's a lot to be done in terms of technology. But I think that there are a lot of benefits from it as well.

So we shouldn't be scared of it. We should move forward and see it as something that we can use as humans, not necessarily to improve our lives, but to our benefit. And we should look at these challenges and we should address them, and then be able to move forward into a world that is slightly better than the one that we're in now. Awesome. Thanks.

Very succinct. Brilliant. Thank you so much for your time. How do people follow you, find out about you and all that sort of stuff? You can look on my website, oldfieldconsultancy.co.uk.

You can Google me. I'll come up on LinkedIn. I'll come up on Twitter. I'm also on ResearchGate where all my academic work is as well. Awesome.

So thanks very much for your time and your expertise and your passion, and, hopefully, we'll see you again. Thanks very much. Hello, and welcome to the end of the episode. Thanks again to doctor Marie Oldfield. It was a real pleasure talking.

We first got together, as we mentioned in the episode, at the panel discussion with the Data Science Festival, and it was really nice to kind of dig in a little bit further, just the two of us, on those more specific aspects, which was really cool. We actually carried on talking for quite a while after the episode finished, which is somewhat unusual, because quite often people have to get off quickly and get on with their workdays and such. So that was really nice. So thanks again, Marie. Also, I'd love to actually get some more of those panel discussions going together in future, or to do a livestream episode of the podcast.

So do reach out. Let me know, either on Twitter or on the email, if that sort of thing would be interesting, and we can endeavor to get that done. Anyway, for more of my ramblings and things, please, again, support us on Patreon at patreon.com/machineethics. And, again, thanks for listening. Speak to you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford