86. What is AI? Vol.3

This is a bonus episode looking back over answers to our question: What is AI?
Date: 19th of March 2024
Podcast authors: Ben Byford and guests
Audio duration: 14:47 | Website plays & downloads: 82
Tags: Bonus episode, What is AI | Playlists: Special edition


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the Machine Ethics podcast. This time we have a bonus episode, our third edition of What is AI, where we have a selection of answers from the last year or two of our interviewees.

In this episode, in this order, we'll hear from Reid Blackman talking about software that learns by example; Madhulika talking about patterns and stats; Sarah Brin talking about different types of intelligences; Roger Spitz talking about AI as functions that are normally done by humans; Ryan Carrier on what is AI as a hard question and its fuzziness; Ricardo Baeza-Yates talking about AI as mimicking the human brain; Mark Coeckelbergh on the different narratives of AI; Harriet Pellereau on AI's lack of emotions; Josh Geller's quip about something that we shouldn't be striving for; Dr Marie Oldfield on why AI doesn't exist; Marc Steen on machines as tools; Guy Gadney on why the term is almost too broad; and finally, Mitchell Ondili with how AI can be a family of technologies.

If you'd like to find more episodes from us, you can go to machine-ethics.net. You can also contact us at hello@machine-ethics.net. You can follow us on Twitter at machine_ethics, on Instagram at machineethicspodcast, and on YouTube at machine-ethics. And if you can, you can support us on Patreon: patreon.com/machineethics.

I always love this question, and I ask it constantly because we get so many different answers to it. So I hope you enjoy this plethora of different examples of what people think of when they are asked what AI is to them.

Software that learns by example. That's it. Obviously, AI is a broad term. Machine learning is a subset of AI, but it's the vast, vast majority of AI that any business is designing, developing, and deploying right now.

So it's machine learning. What's machine learning? Machine learning is just a fancy phrase for software that learns by example.
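That phrase, "software that learns by example", can be made concrete with a toy sketch (an editor's illustration, not something from the episode): a tiny 1-nearest-neighbour classifier that is never given hand-written rules, only labelled examples, and generalises from them.

```python
# Toy illustration of "software that learns by example":
# instead of hand-coding rules, we give the program labelled
# examples and it generalises (here via 1-nearest-neighbour).

def train(examples):
    """'Training' here is simply remembering the labelled examples."""
    return list(examples)

def predict(model, x):
    """Label a new point with the label of its closest example."""
    nearest = min(model, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# Hypothetical examples: (hours of study, outcome) pairs.
model = train([(1, "fail"), (2, "fail"), (6, "pass"), (8, "pass")])

print(predict(model, 1.5))  # near the "fail" examples
print(predict(model, 7.0))  # near the "pass" examples
```

The point of the sketch is that the behaviour of `predict` comes entirely from the examples passed to `train`; change the examples and the software's behaviour changes, with no code edits.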

Yeah. I feel like that's a question that trips me up every single time I hear it. The best way I think about it is that AI is really being able to predict something that comes next in a particular pattern.

I know that sounds vague, in part because AI is used in so many different ways. Right? It's used to predict health outcomes. It is being used, perhaps inappropriately, to predict, in another instance, whether someone could be a criminal.

But it's also used to identify, say, which candidate should proceed to the next stage in the hiring process. So the way I look at it, it's really being able to predict what comes next.

There's, of course, a lot of predictive AI that is neither artificial nor intelligent. So that's something that I really want to center, even when I think about what AI really means and what it doesn't.

So the first part is easy, right? That's artificial. For the purposes of this conversation, we can say: made by a machine. Intelligence is the harder, trickier one. Right?

So we can say that intelligence might mean the ability to solve problems or complete tasks; that's the shorthand I often use. But you might also know that there's a whole bunch of different types of intelligence.

Right? So there's kinetic intelligence. There's interpersonal intelligence. And I think that the current definition of AI has a very limited sense of what intelligence is and can be.

We use artificial intelligence as a broad term, part of, I guess, the computer science field, which is seeking to create machines capable of thinking and learning.

That doesn't mean that they are, but, effectively, they perform certain cognitive functions which typically require human intelligence.

So maybe perceiving, reasoning, some degree of learning, or problem solving, and we can talk about what that means in terms of decision making.

More fundamentally, I guess, day to day, there's an element around recognizing speech, playing with language or processing language at least, and visual perception.

And we'll talk about it as well, but, you know, playing a role in some form of decision-making value chain.

Does it mean that they understand everything? That's a great question. So I use, in our mission statement, AI and autonomous systems, and we do that for a very specific reason.

Rather than define AI, which becomes very hard to do. In fact, the technical words we use now are artificial intelligence, algorithmic, and autonomous systems.

Not because I necessarily know where AI finishes and algorithmic systems start, but because that actually gives us the all-encompassing perspective of the systems that we aim to have some governance, oversight, and accountability over.

But I do have an answer for you, which is we view artificial intelligence as replacing the human brain or the human decision making process in systematic form.

And then we partner that with autonomous systems because now we can replicate the rest of the human experience, my dexterity, my fingers, my mouth, my eyes, my senses.

The sum of which, the brain and the thought process and problem solving combined with that human dexterity, is how the combination of AI and autonomous systems allows us to replicate human tasks.

And so by thinking in this holistic way, what we feel like we're capturing is how the human is either in the equation or maybe being replaced by the equation.

And we want to govern that process, or make sure we're examining those systems. Yeah. That's a hard question.

I think AI is different for every person. I would say that the standard definition is trying to mimic the human brain, but I think that's a definition that is a bit naive, in the sense that computers are not like us.

So maybe it's really artificial in the sense that it's different. But still, it's not so artificial that it doesn't, for example, use resources that are important to us.

Yeah. I mean, AI has a long history. There are different types of AI.

I think what we are talking about today when we talk about AI is usually machine learning. So if I mention AI, I mean machine learning. Another point I always make is that AI is not just this one thing.

First of all, because it's part of bigger technical systems and infrastructures, but also because, for me, AI is not only a thing, but also a story.

It's also something that people tell, about AI, about the future of machines. People have all kinds of imaginations about that.

So for me, AI is also that, and it is also interesting, as a more humanities kind of person, to look at what the narratives about AI are, and how they shape how we think about AI.

What is AI? I think that the first thing that is sensible to do is to actually define intelligence.

Because, you know, that's a kind of component of that. And we could describe that as the ability to understand, learn, problem solve, and adapt.

But also to reason. Obviously, artificial intelligence is when those behaviors, those actions are being completed by something that's been created by man, like a computer.

Yeah. So I've sort of been thinking a lot about what, fundamentally, is different between the two.

I do think a lot of the kind of questions that we have are around the fact that artificial intelligence is slightly limited in its ability to reason.

You know, human reasoning is informed by facts and logic, although neuroscience has actually shown that emotions are integral to decision making.

I think there are some studies. There's a study from the nineties where there were some patients who had the emotional regions of their brains damaged.

Was it the amygdala and the prefrontal cortex? Those sorts of areas, which are key to emotion.

And in those states, the patients were rendered unable to make decisions. And so, yeah, I think that whilst we can definitely attribute some modes of intelligence to artificial intelligence, ultimately there's the lack of emotion. Computers fundamentally lack emotion, and therefore they can't perform reasoning to the same extent that humans can.

And so I would say that artificial intelligence is slightly misleading as a term, because it suggests an intention and an ability to reason that I don't think they can truly have in a way comparable to humans.

So as you know, right now that's a particularly thorny issue, but I think where I'm coming at this issue from is that the very idea of artificial intelligence is an anthropocentric conceit: that humanlike intelligence is both something desirable and something that we should strive to emulate in some kind of synthetic form.

Oh, good starting question. For me, AI doesn't exist, which is ridiculous because I work in the field.

So, philosophically, when you've got a definition of AI, you are kind of trying to explain complex concepts to people, so it's easiest to anthropomorphise those, make them into a Siri or an Alexa, and then tell people that that is AI.

If you look at what intelligence actually is, in terms of a human concept, it's not anything that a machine can currently do.

Machines can work within boundaries, they can use datasets, they can use algorithms, but that is nothing like human intelligence, and it would require something like general intelligence and context to get a machine anywhere near what a human is.

So with artificial intelligence, for me, the intelligence part of it has not been defined correctly enough for that to exist. If we're saying the intelligence exists artificially, again, for me, that's not a thing.

So I like to compare it to machine learning or data science, where we can actually say machines, and we can look at the science. I think that's more applicable in this context.

If you wanted to look at what the popular version of AI would probably be, for me that would be something like Siri, Alexa, or ChatGPT: something that is appearing to be intelligent, but not necessarily is.

Yeah. As you've noticed, I said what people call AI. Frankly, I think AI is a bit of a silly term. I know people have been using it since, what, the 1950s, with this Dartmouth conference?

Silly in the sense that it draws so much attention to mimicking people. It's intelligence, like human intelligence, the first thing that comes to mind, and then artificial.

So I don't like the term so much. I would rather talk about tools that we can use, or instruments through which we can perceive the world, or machines that help us do things.

For example, a spade, you don't call that artificial hands, do you? Or a bike, you don't call that artificial legs. So why don't we call these machines or tools?

I mean, they're machines and tools.

Well, I think AI, broadly, is either the best thing that's ever going to happen or the worst thing that's ever going to happen, and probably the truth is somewhere in between, on a broad level.

For me, it's an interesting one, because it is such a broad umbrella.

I'm sort of old enough to remember when the web first started, and I was like, oh, it's the Internet. What is the Internet? And, actually, the answer is that it depends.

It depends on what the exact question is. So AI, as we know, is this incredibly broad term. Okay. I think the problem is that, because of how ubiquitous it's become as a word, AI currently means everything and nothing.

The AI Act, I think, gets it right when it refers to it as a family of technologies. And I think the idea is that it's a superset of machine learning that performs certain functions that are recognized somewhat as intelligent functions, though I think that's a bit of a misnomer.

But, yeah, I think that's the easiest way to put it: it's a set of processes that leads to outputs that we currently qualify as intelligent.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design, and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists, and designers about AI, automation, and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology, and philosophy.

@BenByford