85. New forms of storytelling with Guy Gadney
Guy Gadney is CEO of Charisma.ai, bringing to life the Future of Storytelling using advanced Artificial Intelligence.
With Charisma, Guy is transforming interactive entertainment through the use of advanced technology, producing projects for Warner Bros, NBCUniversal, Sky, the BBC, Oxford University and many others. He has also recently led the adaptation of John Wyndham’s novel The Kraken Wakes into an immersive narrative game powered by Charisma.
Guy is also on the Board of Oxford’s Story Museum, and a co-founder of The Collaborative AI Consortium, researching the impact of Artificial Intelligence on the Creative Industries.
Transcription:
Transcript created using DeepGram.com
Hi, and welcome to episode 85 of the Machine Ethics podcast. This episode, we're talking with Guy Gadney. This was recorded on the 6th of December 2023. Guy and I talk about new forms of storytelling and placing people inside a story, the LLM hype and the data of LLMs, quantum computing, awareness of data bias, the data lake of murky stuff and data provenance, copyright infringement and the assets of the creative industries, and possibly the destructive ideology of innovation. If you'd like to listen to more episodes,
you can go to machine-ethics.net. You can contact us via email at hello@machine-ethics.net. You can follow us on Twitter @machine_ethics, on Instagram @MachineEthicsPodcast, and on YouTube @machine-ethics.
And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks very much, and I hope you enjoy. Hi, Guy. Welcome to the podcast. Thank you.
Thanks for coming on. If you could give us a brief synopsis of who you are and what you do. So, my name is Guy Gadney. I run a company called Charisma.ai. We're really interested in how AI is gonna impact the creative industries.
And I've long held a sort of passion, almost a calling, for how storytelling might change due to technology in its broadest sense. And AI solves a lot of the problems there. So my focus really is on the creative industries, on storytelling, and on how AI might spawn wonderful new forms of storytelling. Awesome. Thank you, Guy.
I've already got questions for you just from that intro, so that's awesome. The question we always ask at the head of the show is kind of: what is this thing? What is AI to you? What are we talking about?
Well, I think AI, broadly, is either the best thing that's ever gonna happen or the worst thing that's ever gonna happen, and probably the truth is somewhere in between. For me, it's an interesting one because it is such a broad umbrella. I'm old enough to remember, you know, when the web first started, and it was like, oh, it's the Internet. What is the Internet? And, actually, the answer is that it depends.
It depends on what the exact question is. AI, as we know, is this incredibly broad term. Specifically within Charisma, we started with natural language processing: the ability to understand what someone is saying to one of our characters, and then machine learning that sits alongside that to build up the datasets and the data management systems that we have sitting way under the hood.
You know, we try and hide a lot of that technology from creative people who are writers and just want to write good stories. So for me, there is this very broad church of AI; for us specifically it's natural language processing and then machine learning. And increasingly, as time has ticked on over the last couple of years, it's naturally started to be infused with elements of generative AI, about which we've long held a very bullish view, actually. But again, we can talk about that in a minute. So in your intro, you kind of introduced this idea that there are other forms of entertainment or story writing, different kinds of modes.
How do you kinda see that? How do you see these AI technologies enabling new ways of telling stories? Well, let's keep it at a very high level to begin with. I think the way in which people are looking at AI, and also how they're fearful of AI, falls into two camps. The first one is when it's applied to the current state of something.
Generally, you can spot that because people use words like efficiency or productivity. Very often that's where it may be to do the same thing but faster, or the same thing but with fewer people; you get the idea. And that is a whole debate, but let's put that to one side for a second, because the area that interests me is actually the other side of that, which is: what new can be done with this? Where is the innovation? What does it enable us to do that we couldn't do before?
And for me, in storytelling, it's actually a very simple conundrum, which is: what happens when you place the audience inside a story? Mhmm. And that very simple statement has enormous ramifications, because you start to think about what the impact is. How far do you go? Is it an open-world simulator, in a sort of Elon Musk, you know, real-life thing?
Or is it GTA 6? Is it No Man's Sky, which is more, you know, procedurally generated worlds in the games industry? Our view, and the way that we built the technology, starts from the art of storytelling. We didn't build chatbot tech and then try to bolt on storytelling or bolt on emotions or memories or these sorts of things. We started by thinking: how does a story work, and how can we make that story new by putting the audience inside the story?
So in essence, what Charisma does is allow a player, a user, whatever terminology we want to use, to speak to the characters in the story, and for the characters to understand and contextualize what you've said. That then changes the story because it changes their emotions. Naturally, if you've got a spy story, for example, and you are able to build up a sense of trust with that spy, then they will tell you more information. If the spy does not trust you, then they're not gonna tell you the information, and the story goes down a different pathway. So that to me is fascinating.
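To make that trust mechanic concrete, here is a minimal toy sketch of trust-gated branching. It is purely illustrative: the Spy class, the word lists, and the thresholds are assumptions for the example, and the crude word-counting function stands in for real sentiment analysis; this is not Charisma's actual engine or API.

```python
import re

def score_sentiment(utterance: str) -> int:
    """Crude stand-in for real NLP: +1 per friendly word, -1 per hostile word."""
    friendly = {"please", "thanks", "trust", "help"}
    hostile = {"liar", "hate", "threat"}
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    return len(words & friendly) - len(words & hostile)

class Spy:
    """A character whose willingness to share information depends on accumulated trust."""

    def __init__(self) -> None:
        self.trust = 0

    def respond(self, utterance: str) -> str:
        self.trust += score_sentiment(utterance)
        if self.trust >= 3:      # trusted branch: the spy reveals the secret
            return "Meet me at the docks at midnight."
        if self.trust <= -3:     # hostile branch: this pathway closes
            return "I have nothing to say to you."
        return "Why should I tell you anything?"  # neutral holding pattern

spy = Spy()
print(spy.respond("Please, I only want to help."))  # trust = 2, still guarded
print(spy.respond("You can trust me. Thanks."))     # trust = 4, secret revealed
```

The point of the sketch is only that the same player input can route the story down different pathways depending on the relationship state built up so far.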
And naturally, there are questions about, well, how many pathways are there, and so forth. But for us, a lot of those more finite questions are solvable with AI, because of the approach that we're taking to storytelling. So for me, that's the interesting bit: what new forms of storytelling can we generate? And just to wrap that piece up, it was sort of the inspiration behind Charisma. I wanted to tell these stories, and I couldn't find a technology, you know, that wasn't Excel or Word or HTML or IBM Watson, god forbid, that wouldn't absolutely break me as I tried to tell these stories.
So that was really one of the inspirations behind Charisma.ai. Yeah. So what you're saying there is that with the advent of this technology there are these sets of tools, essentially, because you're obviously talking about various parts of the stack. The NLP bit, are we talking about sentiment, and about pulling kind of themes from what someone's saying? And then maybe there are other parts of that stack as well that you're using, depending on what you're trying to produce, I guess. Yeah.
Yeah. And you've touched on a couple of things there. As you say, sentiment analysis is great. Everyone's doing sentiment analysis, and they should, and that's part of the mix. Yep.
Themes are interesting because now what happens is that sentiment becomes contextual. And quite rightly: I mean, try it in real life, you know, if you go up to someone and say "I love you", the context in which you say that is so layered that you sort of get the analogy. Right? Yeah. It could have many different impacts depending on the context of that particular moment.
And that's where story comes in. You know? That contextualization around what's happened before, your relationship with those characters, and, critically, what the narrative in the universe is, changes that response. So that's a bit of the magic. That's, you know, the "here be dragons" in some ways.
So you were alluding before to the idea that you could brute force it, maybe. I'm imagining someone writing a lot of text, right? And maybe having some sort of arcane ID system to tie all these texts together, loads of variables going around to try and make this all work into some sort of big story soup. Are you kind of just taking that ginormous task, which seems like it could have happened before but would have been impractical, intractable?
And it's now this thing that you can kind of go away and do. And maybe you almost need to open people up to the idea: I can imagine a writer who hasn't had this tool before and may not instantly be able to work out what the end product might be, essentially. I guess that's two questions, but... Well, they are two questions. I'll take the latter one first, which is that it depends on the type of writer. When we were first starting to design the Charisma tech stack, we ran writers' rooms.
We ran them with the BBC. We ran them at StoryFutures, in Bath and Bristol. And a lot of that was really to gauge how we should design the product and tap into existing workflows. Interestingly, some writers, surprisingly for us, games writers especially, were resistant to it because they had been used to writing in certain ways. The most overlap came from theatre and, within that, immersive theatre.
Mhmm. And my interpretation of that was that as a theatre writer, as a theatre practitioner, you're so much more aware of the audience than you are working in TV or film or even games, where you're writing your piece but the player's not there, the viewer's not there. In some ways, even though it may be branching, it's still a monologue. Whereas in theatre, you're aware of the energy. You're aware of a performance night, and they say at the end, you know, thank you.
You've been a great audience, and there's meaning behind that. And I'm sure they wouldn't say it if it was a lousy audience that night. And then immersive theatre even more so, because the player, sorry, the visitor, the audience, on an immersive theatre production is contributing, and you need to think about that audience. You need to cast them in the story.
So I think there's a definite fit there. Back on to that sort of structural piece, I think the big shift and the innovation that we're trying to bring, again, is very simple, which is natural language. You know, the way that Bandersnatch worked on Netflix, or the way a lot of interrogation scenes in video games work, is you have four buttons. You have the ubiquitous A, B, C, D; choose between those. You're like, no.
I don't wanna choose those. I wanna choose something else. I wanna say something else, because what's in my mind now, as an imaginative person, is something other than what's up on screen. And the moment you decouple that interface and you put it into a natural language environment where I can now speak to that character, it's infinite. Managing that is a whole other piece, but that again is where we've spent a lot of time, to create the tools that can create these sorts of environments for writers and make it easier for them, not having to write, you know, infinity.
Yeah. Yeah. Yeah. Yeah. Exactly.
Like, you don't wanna end up having to write for all eventualities, because then, well, you know, you're back to square one, essentially. Yep. Yeah. So why so bullish? It feels like the recent kind of explosion in LLMs, large language models, would be something like a boon to what you're doing, or could be utilized in some way, or what's the tension there?
Oh, I think it's all in the way you use it, you know. And we were very aware of the hype cycle of things like NFTs and the metaverse, and something just didn't smell right there. You know? Did they pan out?
Not really. They may come back further down the track, but having seen a lot of these cycles come and go, those particular ones I was a little bit nervous of. AI, you know, the hype cycle arguably started in the 1950s. So we're not looking at something which is suddenly new; we're looking at an evolution which has then spawned, you know, sudden growth over the last 12 months.
So, you know, you can look at a number of different launch points, but ChatGPT clearly was one that leapt into the public consciousness. I think there is hype around LLMs, and specifically a lot of the finance industry and VC is like, how many LLMs have you got? Oh, well, this company's got 40 LLMs and that one's got 50; therefore, they must be valued more.
I mean, that's just a completely ridiculous way to judge anything, and incredibly short-lived, actually. I think that will come and go. So for me, the bullishness comes from the long gestation period of AI and the various AI winters that have happened in the past, which have allowed a series of foundation stones to be placed, onto which we build where we are today. Then you start to look at where we are today also in terms of the incredible levels of innovation that are coming, and the equal and potentially opposing questions that are raised around that innovation. That is sort of where we are today, and I think there's gonna be that slight battle.
So I'm very bullish about its growth, without a shadow of a doubt, and I've often said that AI is not this thing that exists in a bubble. That's part of its difference. It is a tsunami in the sense that it is broad and it is deep, and the volume of change that it's bringing touches every element of society, from the manufacturer of the coffee cup on my desk through to chipsets, through to video games, through to the creative industries and manufacturing and transport, all of these different areas. That's different to metaverses or NFTs. It's a very, very different model.
So the bullishness, I think, is largely born out of having charted the history of it and then starting to see where it's gonna head. It's simply not going away. It simply is not. And I'm 100% convinced that it's what we do with it next that is interesting. Yeah.
And in your mind, I know we asked that question at the beginning, but are those things, would you say, machine learning things, or are they all of the things that you might associate with AI technologies? So you mentioned NLP, obviously. You might have expert systems. You might have neural networks, but also all the other things that contribute to machine learning techniques, like support vector machines, decision trees, the whole gamut of those things. Or are you talking very specifically about one specific technology, do you think? Oh, it's the breadth.
And that's part of it: it's not just one piece. This is a recipe with multiple ingredients, and the way in which you combine those ingredients is very variable, and that only expands the innovation. So it accelerates its growth, broadly speaking, as AI. And by the way, that's before we even factor in quantum computing, which is a whole other podcast. I mean, like, a whole other podcast channel, Ben.
You know? We will meet again in a couple of years, and we can talk about quantum stuff. Cool. Yeah.
That's coming down the line. That's another booster rocket to be fired into this whole thing at some point. Yeah. Yeah.
I guess the promise of quantum is the kind of instantaneous, you know, producing answers to numerical questions. I haven't really dived into the depths of what that means for machine learning techniques, really, but I can imagine it instantly does stuff right there. A lot of cryptographers are scared of what's coming down the line. But is that a silver bullet for, like, bigger AI techniques, do you think? Well, I'll tell you where my head's at today, as we sit here almost at Christmas 2023.
I've actually got more interested in smaller language models recently, rather than the larger ones. I think there is a breadth which can breed a sort of vanilla output in a lot of ways. And for certain use cases, that's great and is fine. It's sort of, you know, one thing does everything. Mhmm.
But also, it tends to breed a homogeneous approach to the outputs, whereas something where you're tied to a specific set of data might have a more focused result. And, you know, pulling back, you look at the amount of data, the demographic of the data, which sounds a weird thing to say, and I don't like personalizing technology too much, but let's call it the demographic of the data that is currently in play at the moment. It is very broad. It is very Western. It is very English.
Arguably, it's very white. It's very middle class. And if it's the Internet, it's very angry. You know, the characteristics of it are probably things we should just keep in mind. That's what we started looking at back in 2018, 2019, when, prior to building anything to do with machine learning at Charisma, we were exploring unconscious bias in AI datasets.
Part of what came out of that paper was an answer which, you know, there was a lot of detail in there around how practically people could think about it. But actually, the call was: just think about it. Think about the fact that there is a bias in all of us, but also in the datasets. Are you okay with that dataset? Having thought about that, I think, is probably the equivalent of shifting from neutral into first gear.
At least you're moving forward. Yeah. You know? Yeah. Yeah.
I mean, this is the podcast, right? This is everything we are here to kind of expose, get out there in the world, try and get people to think harder about: some of these technologies and practices. And part of my work, and of some of the people I've interviewed, is that thing. It's a bit like getting yourself data ready.
There was this thing, it's probably still a thing, but about five years ago everyone was getting data ready, or going digital first, and all these sorts of terms for businesses, because they had these distributed systems and they didn't really know how to use some of this technology coming in. They had data, but it was in different pots and different shapes and all this sort of stuff. Getting data ready was part of that situation where you can actually utilize that data and do some basic analysis. Right? People want to do basic things with data. And if you want to do more exciting things with data, you can start leaning on some of those mature machine learning techniques as well.
But it feels like we need to be getting into this kind of: have you thought about your data situation? Like, it's not a big data lake. It's a lake of bias, right? It's a lake of murky stuff.
Have you thought about the murkiness of your data? Yeah. Absolutely. So again, back in probably 2019, I think, I forget the exact chronology, I was doing a fellowship with a group of South West universities, and it was a fellowship on automation. Mhmm.
And it was a wonderful opportunity, at a particular moment in time, to have a bit of time to think deeply about this. That's what the purpose of it was. A couple of insights came out of it for me. The first one was something I sort of categorized by saying, you know, AI is the cause, automation is the effect. And we were focused on automation. That is really interesting, because suddenly you're starting to look not at, like, your question was around which bits of AI it is. It's like, okay.
You know what? It doesn't matter. What matters is the effect and what effect you are having. And it's automation which is the effect. So let's start there and then work backwards.
So I think there was that piece around the philosophy of it. And at the same time, there were a couple of other key things that happened during that period. One was, I was researching data provenance. Yep. Where does this stuff come from?
And looking a lot at big tech. I started diving into a legal case between the Authors Guild of America and Google, starting in 2015, which stemmed from Google Books and Google scanning books, and copyright. And I started to look at Google, not only Google Books but also things like Street View, and the fact that Google was going into museums, like here in Oxford, which are open, you know, technically speaking open source, Creative Commons, whatever the particular categorization they want to put on it. But in essence, they're open.
And yet when I looked at the Street View version of a museum, which is open, there's the little sign at the bottom right-hand corner saying copyright Google. Like, that's not okay. It's just not okay. And you're not thinking, oh, well, it's just a little label. You're thinking: what is the motivation behind someone making the active, conscious decision to have gone in and said we're gonna scan your collection.
We're gonna scan your museum. We're gonna scan your art. And the pitch, and I know because I've had that pitch myself when I was working for a content owner, is: oh, well, what it's gonna do is create new awareness for you. It's gonna create new revenue because it's more people coming, more awareness of it, more visitor footfall, whatever.
And yet, actually, what was going on at the very least was then copyrighting that material. So that was a red flag to me. Potentially, it was then using that data to train models. And as we know at the moment, with the various cases that are around, it is incredibly hard to prove whether or not that has been done. However, what the CEO of the Authors Guild said at the time around that case was very simple, and is very true: that there is a massive redirection of wealth away from the creative industries to the tech industry.
And, indeed, I was at a conference a couple of weeks ago in London where one of the speakers said it in even more visceral terms, which is: the tech industry has been attacking the media industry for years, just no one's ever called it that. It's an absolute outright attack, and it's an attack on advertising. It's an attack on publishing. It's an attack on copyright.
It's an attack on everything. And so what we need to do then, and what I've been keen to energize, is an element of awareness within the creative industries that what the creative industries own is incredibly valuable. You know? Copyright is important in that, but so is creativity, so are stories as a way to communicate between people. So when you ask about data in its own sense, it reminds me of when I was working at Penguin Books back in the day, which was sort of CD-ROM days. I had an IT guy working in the team there, and he talked about all the books that we published as data.
And you can imagine how popular that was. But that was the first time that I'd heard poems or short stories or biographies or, you know, Harry Potter or whatever collectively referred to as data, and it abstracted them from the things that we love into something that seemed to be something we didn't really care about. It was a moment, I think, where the two banks of the river moved away from each other, and I've been keen to bridge that ever since. And ironically, I mean, not ironically, coincidentally, it's why I sit on the BridgeAI advisory group representing the creative industries, because I'm keen to build that bridge. And that bridge needs to be built by understanding, and by making bloody sure that we are very clear about the landscape into which we're moving.
And ethics is a strong part of that, but actually it's more detailed than that. Mhmm. So, I'm gonna put my technologist hat on here. I feel like they would probably say that the landscape is changing, and the media industry or the creative industries need to catch up with the coming tide, or the change in technology. And if they are not going to play in that arena, then they are not going to be around for very much longer.
I guess the other side of that is, as you're saying, they are undermining the creative aspect of the thing in its own right, and probably undermining the economic flow back to the creator as well. So, for me, when you were talking about the museum, I'm just kind of riding on the things you said, it felt like the copyright just isn't appropriate, essentially. You know, they might have got permission, but maybe copyright isn't actually the right thing to be slapping on the bottom of that picture. You know what I mean? It doesn't mean enough to make sense, you know.
It might be that the copyright of that image is Google's, or it might be shared copyright of that image, but the things that it's taking pictures of could be the commons. And then, how are we squaring that? Is copyright the right vehicle for that? Probably not. So, yeah, it's interesting how the landscape is changing, and maybe the economics is going the way it is because we haven't actually caught up legally or ethically with those things. I mean, to a degree.
I think that, you know, there are clearly some very prevalent one-liners around at the moment. Things like: oh, copyright, you know, yes, but I read a book.
I read a book, and then I write another book as a result of reading that book. You know, I'm not infringing copyright. And it's exactly the same as ingesting that into a large language model. Well, no, it's not.
It's a completely different technological process. That argument is anthropomorphizing computers for a start. It's giving compute creativity where there is none. It's a machine. Mhmm.
So that argument, I think, is completely null. But, unfortunately, you know, it carries weight. And then, as we look into innovation and ethics and so forth, and the battles recently at OpenAI, for example, clearly that's what that battle was about. And it's fine as long as one's very clear about the vested interests and the motivations that sit around it. It's not as clear as saying it's innovation versus regulation.
It's much more complicated than that. And there are some interesting quotes from Andreessen Horowitz around this. Basically, remember that they are funding this, and they are dictating a lot of the strategy. If you want specific examples, look at the recent case of Stability AI, who've moved away from open sourcing into more commercial entities, as dictated by their investors. Okay? That's in the press.
That was reported, and that was part of the move around it. So there are other interests in this space. Now, the motivation of the VC is to turn a profit. That's the motivation. It's very simple: for themselves and for their portfolio companies.
Yes, but actually it's for the people who are providing them with the funds to be able to go out and invest. And having seen this story before with the dot-com one, there are elements of it replaying themselves as well. So as we look at things like, well, shouldn't we be allowed to innovate? Yes.
But I might say to you, Ben, tell you what, I want to innovate. What I want to do is, I don't really like the way that you've painted your house, so I'm gonna paint it. In fact, you know what I'm gonna do? I'm just gonna take out the front wall, because that's innovation. And I'm gonna turn it into a cafe for people.
And by the way, if you don't mind, I'm then going to take the revenue from that, because I think it's really cool and it's my idea, and how cool is that? Now, you might have something to say about that. You might not. That'd be fine. Yep.
But you might say, well, hang on a second. You're infringing. I own this house. This is my house. You know?
Like, well, I know, but come on. Don't be so hung up on ownership. Really, granddad? Come on, get with the program.
We're doing innovation here. And you start to see that actually there are motivations behind it which are slightly different. So this is complex. We're operating in quite a complex, nuanced environment, at a moment in humanity where, unfortunately, nuance is not the flavour of the day, and, you know, politics is more binary and populist volume gets the piece. So it's complicated.
But, and I keep going back to our point about datasets and unconscious bias, as long as we're thinking about it, you know, then that's a good thing. And for people who are on the technology side, if you are coding it and you're thinking about it, that's a good thing. There is absolute design in the algorithm. So, actually, let's circle back to that.
When you were writing that paper, what kinds of data were you talking about? Was it writing? Was it numbers? You know? Oh, it was so interesting, because back at that point there was very little to work off.
We knew that, because there wasn't that level of training in the same way that there is now. Around that time, which was probably just as GPT-2 was launching, I was in touch with a guy called Connor Leahy, who's now working under the Stability AI umbrella, and he had just hacked GPT-2 and was gonna release the source code online. I connected with him and we had a fascinating call. Connor is, you know, incredibly smart and philosophical and technological. So we started to think about what datasets were there and what were not, and he then went on to be one of the founders of EleutherAI, which was behind The Pile and various other things.
But arguably, at that point, the datasets were either rare, if they were public open source, or highly proprietary, which meant Google. So we were having to make assumptions around a lot of what the lay of the land was at that point. But also, you know what? That gave us a little bit of freedom to think slightly more outside the box and not get tied down into moments in time, into detail around a specific dataset, but to pull back and look at it as more of a landscaping exercise, which made, I think, that paper have more longevity. Yeah.
Yeah. Yeah. Well, you'll have to send me a link afterwards. Yeah. Well, it's up anyway.
So we spoke a bit about this kind of technological attack on the creative industries. Do you see the LLM situation, with large language models being the prominent term there, as being problematic, especially for storytelling, but also maybe for the prospects of telling stories and the economic situation around that, you know? To a degree, yes. But let me be clear. The concept of the attack, that was very much me quoting someone else.
I think where I'm close to what the Authors Guild said is that there is an unprecedented redistribution of wealth, you know, and the problem with that is if the pendulum swings too far. And, technologically, there is the problem of training models on synthetic data, and it starts to get recursive. The purer the data, and I'm pausing on the word because pure is an odd word to use, but let's say the more human-created the data, as far as we consider at the moment, the better it is to train on that data. The more synthetic and recursive that data is, in other words, the more it's been produced by AI,
the less value you get in that data, which means that the input of the human is important, and therefore there's value associated with it, and therefore we should appreciate that. And that's really, I think, the key. Certain things can be automated, and for certain things, actually, for a number of technological reasons, we still need humans in the mix, and that's okay. You know? It's okay.
We don't need to automate the hell out of absolutely everything. I mean, again, there's a car analogy, which is that I'd far prefer driving a manual than an automatic, because it's more responsive. It's more visceral. I like it. I enjoy it. Yeah.
I feel I'm a part of it. Yeah. An automatic is just slow, you know, slow to respond. Do you like driving an automated car? Frankly, I don't really like driving anyway, and I can't wait for, you know, self-driving cars, because I think the whole thing is, again, sort of statistically ridiculous, that so many accidents and traffic issues are caused by humans.
But I don't really enjoy driving, you know, an automatic. No. I find it boring. Yeah.
Yeah. I think I'm with you. I think the quicker they can usher in automated vehicles, even if it's buses, you know. Yeah. Buses that are everywhere,
that you can get in, and you can quite happily read the paper or, you know, the e-ink screen, you know, this is the future world, on the way to work, and not have to worry about being conscious enough and having had my coffee to, like, get in the car. Yeah. You know what? This is such a good point, this, because of where AI has focused. You look at the big headlines from DeepMind and various others over the years. It's like: hey, AI has beaten the chess master.
AI has beaten the top Go player. AI has beaten, like, beaten. It's like, bah. It's beaten it once.
Like, why? You know, AI has managed to recreate, you know, an artwork or Bach. Okay. Right. Well, again, there's context to that, which is what is Bach and what is not.
To me, it's very simple. If Bach composed it, it's a Bach. If it's not, you know, it's like me trying to do it, which is not gonna end well. Now okay. There's that.
But surely, you know, traffic issues, environmental issues, energy issues, there are some bigger things at play here which you could have started to move on. Now, the technology people listening to this might be infuriated by that statement, on the basis of, well, you've gotta get from one step to another. Yeah. But there's a direction, you know. So thank you very much for your time, Guy.
The last question we always ask on the podcast is around this idea of, you know, something you might have already said, but what excites you and what really kind of frightens you about our AI-mediated future? I mean, the creative part of my mind is incredibly excited by it. And I think, as we sit here in 2023, if we were to fast forward to 2030, and, again, for context, just this week was the first birthday of ChatGPT. So let's fast forward those years. I don't think we're at 5% of the innovation that we'll see, you know, of where AI is gonna go and its impact, which is probably an easy thing to say.
So I think that the way in which we use it is incredibly exciting, as are the new innovations and the new benefits that we will see coming out of it. I'm really excited. Specifically, I'd love for people to start to turn the lens of AI onto societal issues. I think that's an important piece, and hopefully it will become a positive action. You know, BridgeAI makes a good effort there; in the UK, Innovate UK is looking at, yes, the creative industries, which I'm sort of involved with, but also manufacturing, construction and transport as the other three. So there is an ongoing focus on that which excites me.
And in terms of my fear about it, I think fear is an interesting one, because I don't really think about the concept of fear in its broader sense there. I think more about how we might solve fear with education. I look again at sort of the national strategies, what Finland did early on with their AI strategy, and it all revolved around educating the country such that the fear goes away, and they can start to look at a clear line of sight and a pathway moving forward. You know? That, to me, is both the fear and the excitement in one go.
Guy, thank you so much for your time. I know that you've got to head off. How do people find out about you, follow you, find your work? I mean, charisma.ai is where we focus our time for everything to do with Charisma. Otherwise, generally, I'm on Twitter a bit, on LinkedIn a lot, and relatively easy to find online.
Right. Thank you very much. Thank you, Ben. Great to see you. Hello, and welcome to the end of the podcast.
Thanks again to Guy for coming to talk to us. He obviously has strong opinions about technology and its application in the cultural industries. I find the idea of an unprecedented redistribution of wealth probably true, but, also, it seems like techno-determinism, doesn't it? It's hard to see around that, or the different possibilities of how that could have played out. I'm hoping, especially with this LLM and copyright infringement issue at the moment, that we're in a position of maybe thinking again about what copyright is, who it is for, how we can use it, and how it operates in a technologically mediated world.
If you have any thoughts on that, or would like to come on the show, then do email us at hello@machine-ethics.net. And as always, tell your friends about the podcast. And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks so much for listening. Bye.