105. The AI Bubble with Tim El-Sheikh

This month we're chatting again with Tim El-Sheikh. We discuss podcasting, the history of OpenAI, London startups, AI use cases, whether GenAI is even safe, the AI bubble, snake oil salesmen, why we need all these data centres, replacing human workers, data oligarchies, the erosion of trust in AI, AI psychosis and more...
Date: 19th of November 2025
Podcast authors: Ben Byford with Tim El-Sheikh
Audio duration: 01:00:53 | Website plays & downloads: 79
Tags: AI Bubble, Podcast, Work, Generative AI, Startups, OpenAI | Playlists: Business

Named one of the world’s top 100 voices shaping the future of AI, Tim El-Sheikh is a biomedical scientist and ex-pro athlete turned serial deeptech, AI and social entrepreneur. Active since 2001, he is one of the pioneering first-generation AI founders at London’s Silicon Roundabout.

Find more from Tim at the CEO Retort.



Transcription:

Ben Byford:[00:00:05]

Hi, and welcome to episode 105 of the Machine Ethics podcast. This time we're talking for the second time with Tim El-Sheikh. This video was recorded on the seventh of November, 2025. I was keen on talking to Tim again, but this time about the concept of the AI bubble and the possibility of it bursting.

Ben Byford:[00:00:27]

Tim and I also talk about podcasting, previous AI startups and the history of OpenAI, AI use cases, the safety concerns of generative AI, the ROI of AI, and why we need all these data centres anyway; the logic of replacing human workers, the almost new gold rush, the seriousness of collecting the public's data and the prompts fed into LLMs, and the possible data oligarchies this is producing; the erosion of truth, and much more. If you want to find more episodes, you can go to machine-ethics.net. You can contact us at hello@machine-ethics.net. You can follow us on Bluesky at machine-ethics.net, Instagram at MachineEthicsPodcast, and YouTube at @Machine-ethics. If you can, please support us on Patreon at patreon.com/machineethics. Thank you and hope you enjoy.

Ben Byford:[00:01:30]

Tim, hi. Welcome to the podcast. Or should I say, welcome back to the podcast, as you were on episode 63, AI Readiness. So thanks for coming back. Could you please tell me about who you are and what you do?

Tim El-Sheikh:[00:01:48]

Oh, wow. Well, thanks for having me back. It's been, what was it, four years? We just chatted before we recorded: four years ago. Jesus. Yeah, so much has happened, hasn't it? So what am I now? Because I'm doing so many things these days. I suppose I can say I'm a podcaster. Thanks to you. It's your fault. I love it. I love it. I mean, we could probably talk about it, but I think it's great. I launched the CEO Retort podcast in November 2024, so we're approaching the one-year anniversary. And the idea behind it was to basically challenge all the nonsense and the stupid narratives that you hear from the likes of Sam Altman and Elon Musk, et cetera, when it comes to AI, frankly, anybody when it comes to AI. And the reason, or rather what qualifies me to do that, is that I've been a tech entrepreneur since 2002. Luckily, I've been involved in AI before AI was a thing. I'm part of the same cohort as DeepMind back in the day at Google Campus. That's in the early 2010s. So Google decided that they wanted to bring all of the AI brains from around the country into that ecosystem. So even though we didn't have an office at Google Campus, it was more like we were just gathered there a lot. But then eventually, once they allowed companies to move in, I moved in there in 2014, I think. And it was an amazing experience because, again, that was literally the who's who of AI today. Obviously, you've got DeepMind, but there are guys in there that are behind the AI at TikTok, behind the AI at Microsoft, behind the AI at Meta. I always like to make the joke that I am literally one of the people that brought AI the way you see it today. I don't know if I should apologise or say you're welcome.

Tim El-Sheikh:[00:03:41]

But in my case, I made a conscious decision that I do not want to be involved in consumer-facing AI ever. I can explain why later on if you want. My focus has always been business to business, especially in science, because my main background is in biotech. I'm a biomedical scientist by trade, which is interesting. That's how I discovered AI, because in the '90s, people used AI through bioinformatics to decode animal genomes. I studied genetics. To me, it was like, whoa, okay, it's an interesting combination, because by that point, AI to me was basically something that you see in games, and I'm an avid gamer. I know you're an avid gamer as well. I thought, wow, actually, you could do something more serious than just gaming? You can decode genomes? Whoa. So that's how I got into AI and all that. Again, I wish I could say that this was all part of a plan. I'm just a lucky son of a gun, because as I said, I've experienced the true value of AI. So it's not about chatbots and all that garbage, but AI that truly is revolutionary in scientific applications. Of course, we've seen that with DeepMind and other great guys, too. But also my first startup was an ad network where we built AI recommendation systems, which is what you see on social media. Again, I never knew that that would be a thing. But of course, when we heard that social media companies wanted to use these recommendation systems as the content auditor, or a way to replace the human editorial process, that was the point, when I was at Google, where I was saying, okay, that's complete utter garbage. That's not going to work.

Tim El-Sheikh:[00:05:23]

Everything that is happening today, people like me called out over 10 years ago. Again, not because I'm a genius, but because I just know how this thing works. As it happens, when I was at Google Campus, the thing that I was building, the reason we were attracted towards Google and that community, is that we were building language models at the time to fight misinformation online, which was already a thing. Did I know that there would be large language models and ChatGPT, whatever? No. But somehow things just aligned, and I found myself in this situation where, okay, I am in a position to call out all of the nonsense and claims that they made. And yeah, here I am.

Tim El-Sheikh:[00:06:07]

It kind of made me burn a lot of bridges as well, because I had VCs at the time and investors, and they were not happy with what I was doing, because I was calling out everybody. So 2022 was the year where I just had to exit, leave that world. And it was a crazy year because that was the year I lost my parents as well, which was horrendous. So that changed my perspective on life. And of course, we were all recovering from the pandemic as well. So I felt, you know what? At the time, I was running a company called... Well, I'm still running it actually, it's still alive, Nebula, which is how you and I connected. We talked mostly about Nebula when we spoke. But it was all about how do we really change people's perception of AI, the fact that AI can do certain things that we know it's really, really great at, and stop pretending that it can do all the Hollywood nonsense that is being pushed at them. But people think that AI can do the Hollywood nonsense.

Tim El-Sheikh:[00:07:08]

I'm approaching 50. So I felt, okay, assume I'll be alive until a hundred. I thought, okay, what do I do with the second half of my life? Okay, I'm going to campaign for ethics: digital ethics, ethical AI, responsible AI. And of course, now AI is part of literally everything that you see around us. I look at things like equality, women's equality, digital equality, human rights, et cetera, et cetera. And I think if people see my podcast, you can see the conversations I'm having and the people I'm bringing on. Literally, it's everybody across the board, because AI is impacting every single element of our lives. And that's who I am. It's a bit long.

Ben Byford:[00:07:52]

That was epic. Thank you, Tim. It's funny because I think I must have crossed your path, because I had a startup in 2013 or something, and we would often be in Google Campus, but down in the bowels, in the café at the bottom, because we didn't have any money and all that. So we must have been at events and stuff. I must have seen you and gone, oh man, that guy's doing some cool stuff.

Tim El-Sheikh:[00:08:22]

I tried to avoid the café, actually. It was so busy. It was so packed all the time.

Ben Byford:[00:08:27]

Yes, it was mental. Yeah, that's true.

Tim El-Sheikh:[00:08:30]

It was fun, though, because they held a few events there. But most of the time we were upstairs behind closed doors, as it were.

Ben Byford:[00:08:38]

Yes, exactly. Because you had proper offices up there and it was a free... Well, a relative free-for-all downstairs, wasn't it?

Tim El-Sheikh:[00:08:45]

Yeah. Well, the café was free. I mean, that's the thing. They wanted it to be open to everybody, which I think is great. I mean, again, Google is the one... People talk about who is the key force in the world of AI. It's always been Google. I would argue it still is today, despite OpenAI getting all the attention, whatever. But Google is really the powerhouse of AI, which, to some extent, I can say is a good thing, but also it's not necessarily a great thing. If you want to go back in history, I mean, OpenAI was formed as a direct antidote to Google. It's crazy that now OpenAI has turned into the actual villain itself.

Ben Byford:[00:09:27]

Yeah, they drifted far from where they began, though, right? It's a very different beast. I hooked you in today because I was like, Tim, you're doing a lot on LinkedIn, and it'd be really great to talk to you about AI and this concept, right? I'm going to put it out there, of the AI bubble that we're in. And I'm interested in that because I see it as this economic thing that's happened, right? People putting a lot of money and a lot of credence into this new wave of AI, which was probably heralded back in 2023. I don't know, with ChatGPT coming out.

Tim El-Sheikh:[00:10:11]

Well, end of '22. Was it? Yes. November 2022.

Ben Byford:[00:10:15]

Winter '22. Yes. And it's suddenly making it a viable thing for people. This idea of the transformer network, LLMs, suddenly blew up. So we could obviously talk about some of the other stuff that you're talking about, but I'd like to just kick us off with: what do you think about that? Are we in this AI bubble? And is it going to burst? And what does that look like, if we are witnessing another Dot Com boom and bust, or another bust for this AI technology that we're spending so much money on? Or people are.

Tim El-Sheikh:[00:11:01]

Yeah. I mean, that's a great question, because even you and I talked about the AI bubble, I think, back in 2014, when you came on my podcast. It was quite obvious at the time, because what you tend to see with bubbles is that a market, especially investors, gets overly excited about this new thing that is emerging, whatever it might be. Obviously, we had the Dot Com, of course, then we had crypto and so on. Then what tends to happen is people just over-hype and over-promise what it can achieve and so on. Then eventually, all the money is spent and they realise, okay, it was all complete nonsense. Then the whole bubble just explodes. Then the grown-ups, if you like, take over and they say, well, actually, look, the technology itself is fine, but this is what it can actually be used for, or how we can deploy it in the real world in a productive way, et cetera. But AI, and to be honest with you, I would say maybe in 2022, it was obvious that it was happening.

Ben Byford:[00:12:02]

I think it was 2024, not 2014. Because I think you said 2014 earlier. So it's like we haven't been going for that long.

Tim El-Sheikh:[00:12:11]

Sorry, 2024. I think it was obvious that the bubble had started to form. For me, in my eyes, it was maybe towards the end of 2023, because at the beginning, for those of us involved in AI, GPT was already out there, so you could have access to it and so on. But also I was very lucky that with my company, Nebula, we actually had early access... I don't know if you saw in the news, I think it was in January or February 2023, Microsoft went public and said, 'Hey, now we've got GPT on Azure, and we have a hand-picked group of companies that have been invited to access it and play with it.' We were one of those companies. It's a very good technology. It's legit. In fact, it's one of the things that we wanted to do with my previous company at Google at the time, which is we were building language models. It's like, how do you build knowledge from all the corpora of data that you can get? How do you use smart search to find the information that you need, et cetera? What GPT has done so brilliantly, I think, is that it made it easier for you to find that information through that conversational interface.

Tim El-Sheikh:[00:13:26]

For me, LLMs, they're fantastic. I suppose you're one of the geeks, right? You know, for many, many, many years, for those of us involved in tech in general, the fantasy that we had was that we wanted to be able to have a conversation with machines. We wanted to talk to machines. Instead of clicking and typing, you can just tell it what you want to do. That was always the dream. To me, GPT was that dream. It was happening, and I felt that's amazing. As a conversational user interface, amazing. But this is as far as it went, as far as I'm concerned. Until that point, no one was talking... I mean, OpenAI were talking about AGI, but again, AGI was a concept that's been around since 2002, and there were lots of iterations. It was always a fantasy. Nobody took it seriously. But the fact is that with time, people started to look at generative AI in general as this panacea, or this thing that could be used as a therapist.

Tim El-Sheikh:[00:14:30]

Now, of course, as a biomedical scientist, I'm like, 'What are you talking about?' You can start using it as a companion? They started coming out with all these completely bizarre claims towards the end of 2023, and that was the point where you could start to see, okay, what the hell is going on here? When you and I spoke, actually, we spoke soon after Apple announced Apple Intelligence. I made the point to you at the time that, to me, Apple were the only sensible company, where effectively they were saying, forget all of the AI nonsense. Basically, AI is there to help you enhance your capabilities on the phone. But the other key thing they were pushing for was that the AI has to stay on your device. It's on-device AI, so therefore it's more private, et cetera. Again, none of that is new, by the way. This is the stuff that we were talking about for well over a decade in terms of what consumer AI really looks like.

Tim El-Sheikh:[00:15:29]

Now, to me, like I said earlier, the reason I did not go into consumer AI is firstly because I've always been a big believer that people are not ready for AI as consumers. AI in itself is not safe enough for us to put it out there. I'm not the only one who's been saying this. I mean, even Mo Gawdat from Google was saying the same thing as well.

Ben Byford:[00:15:51]

Are you referring to just the large language models not necessarily being ready?

Tim El-Sheikh:[00:15:59]

Generally, any kind of AI. All AI. It's very powerful... It's like saying, should we give people access to the nuclear power plant? That kind of thing. It's not there yet. To have that direct access to AI, to me, was always a very big problem. But the other thing, from the commercial side of it, is that it's ridiculously expensive, because if you want people to use AI models, any kind of AI models, the cost of compute has always been really high, even 10 years ago. Ten years ago, we had massive bottlenecks. To me, the biggest revolution, if people want to talk about a revolution right now, is the compute. What Microsoft and NVIDIA have done is revolutionary. I wish we had some of that stuff 10 years ago. Life would have been so much easier, but it's still not cheap. It's not cheap enough for you to say, 'Hey, let's just put this out there so everybody can use it like they use a search engine, whatever.' Well, no. We are seeing what's happening with the rising costs at OpenAI and all the other companies. So once you start putting all of that together, that's what I started to see towards the end of 2023: the bubble was forming.

Tim El-Sheikh:[00:17:17]

Now, the one thing that I expected was that by the end of 2024, the bubble would burst, because people would realise, okay, the costs are completely out of control, the ROI is not there, et cetera, et cetera. But the one thing that happened that I think none of us anticipated is, of course: Trump. To me, that has changed everything. Like with crypto: the crypto bubble basically blew up in 2022, and then nobody was talking about crypto. People were working on blockchain. Blockchain as a tech, again, really cool tech. It's fantastic. But the whole crypto and NFT nonsense died away, didn't it, for a couple of years? Yeah. But then Trump came in, and what happened to crypto? Boom, it got inflated. Again, it's all fake inflation, by the way. It's not real interest from investors. There was a lot of money laundering going on. Nobody knows what's going on, but definitely the Trump family are absolutely in the middle of it. Now, effectively, they've turned AI into a meme bubble. The amount of money that they're throwing at it right now is truly absurd. I've never seen anything like this before.

Tim El-Sheikh:[00:18:27]

Now, this bubble has turned into something else. I've never seen this. I know people like to compare it to the Dot Com Bubble or whatever. I disagree, actually. I don't think it is. I would have said it was a Dot Com Bubble before Trump got elected. By that point, it wasn't even as bad as the Dot Com Bubble, but it would have been painful for some investors. But now... Oh, my goodness. I would combine the Dot Com Bubble and the 2008 financial crash, purely because of the amount of money being thrown into data centres. Again, I'm not against data centres. We need data centres. Like I said, we need more compute. But the number of data centres that they want to build is absurd. Now they're talking about sending data centres to space. I'm like, 'What? What are you talking about?' I think as a bubble, it's bad.

Tim El-Sheikh:[00:19:24]

Now it's not just a technological issue, it's a financial issue. When you combine these two things and you've got somebody like Trump who does what he does... If the bubble explodes right now, I don't think it will explode the way we think it will. I think what's going to happen, the investors are already jittery. Something is going to pop it, but it's not going to be a full pop, because what will happen is Trump will jump in, because he doesn't like the markets looking bad, quote unquote, as he says it. There will be some bailout. OpenAI, at the time of this recording, were already talking about, oh, we're going to need government funds. The CFO recently was saying that she expects some government funds because the work that they're doing is so important. I'm like, yeah, that's called a bailout. Then he will bail them out for sure. I think some companies will die as a result of that. Even companies, dare I say it, like Perplexity. I think they're going to die unless somebody acquires them. But what's going to happen? They're going to drag it even further. The ultimate bubble explosion... In my head, I think this is going to be the first part of the bubble exploding, but they'll bail it out. More money will be spent. Maybe it depends on what happens in the next elections, which is the midterms. I don't know. I'm not an expert in American politics, whether that would have any impact on how Trump behaves himself or not. But definitely once Trump's presidency ends, I think the bubble will explode and it'll be really bad. So I think we're going to go through some serious pain for the next three years, which is horrendous.

Ben Byford:[00:21:08]

So I mean, if you're one of those investors, you're either doubling down and then you've got to bail out in that time, right? Or you're supporting the next Trump successor, basically. You're like, how can I ride this into the future, maybe? But what I find interesting about everything you said is it doesn't really demonstrate any real impact on us. We haven't really talked much about that, other than what you were saying before about why you got into AI in the first place. It feels like we are being sold this thing where, like you were saying, we're not seeing the ROI. We're not seeing the benefit that actually means we need to spend this much money on land, water, electricity, and all these resources on more data centres, more compute, for this thing.

Ben Byford:[00:22:04]

Do you have a sense of why we're even chasing down, or why these big companies are chasing down, this amount of investment, this massive worldwide build-out of data centres? I liken it, too... I don't know if you're a science fiction fan, but the Foundation series by Isaac Asimov has Trantor. Trantor is the admin centre of the whole dynasty, the empire. And it's just this planet completely covered in structure. It's like, do we want Trantor? What are we doing, guys? I don't know if that's... Is that a good idea? I don't know.

Tim El-Sheikh:[00:22:54]

Yeah. But that's the thing. It's who's driving it? Because certainly the work that we've done with Nebula, the reason we launched Nebula in 2019, which is my company, although now I'm dismantling it. I want to move it into a nonprofit. We can talk about that later if you want. But the whole point of Nebula, generally as a company, is that we wanted to guide all of the decision-makers and enterprises and organisations in terms of what AI can and cannot do. What does that mean? Why would you need AI in the first place? What's the ROI? The question was always, okay, you feel you've got a problem and you think AI can solve that problem. Well, then the first question has to be, well, why do you have that problem in the first place? Why do you think AI is the solution? Once you start asking these questions, which is basically what data science is about, because it's about connecting the dots. You look at the problems within the enterprise. Of course, you look at the data around that. The data literally is the blueprint of everything, isn't it? That's the world that we live in today.

Tim El-Sheikh:[00:23:55]

What I try to do... AI is fantastic at basically plugging in these gaps. Where have you got the gaps that cause these problems? Well, do you think AI can solve that problem, or can a human solve that problem? The question when it comes to AI and humans has to go back to robotics and humans. Why do we need robots? Because we created robots so that they can do the things that humans can never, ever do. For example, we can easily send robots to Mars. To send a human to Mars, first, it costs a crazy amount of money, because you have to have a living ecosystem, you have to provide safety, et cetera, et cetera. But with a robot, you don't need to do that. But more importantly, you can design a robot to do things, to be able to navigate through the terrain, which is the reason why I see humanoids as the most useless robots ever, because they're literally a replica of a human. That defies the whole purpose of robotics. But AI is exactly the same.

Tim El-Sheikh:[00:24:54]

We use AI... I mentioned earlier the decoding of animal genomes and the human genome. Again, AI wasn't used to replace geneticists, but rather to help geneticists conduct calculations, multibillion calculations, that otherwise would have been impossible without the AI. But you still had to have the geneticists. That's AI.

Ben Byford:[00:25:18]

You're alluding to that with Google DeepMind and the protein folding and all these things.

Tim El-Sheikh:[00:25:24]

Absolutely.

Ben Byford:[00:25:26]

Where we've applied this AI technique and it wasn't really a thing before, and now it's a thing, and it's like, go nuts, guys. It's things like that, which I concur with. I feel like we could spend a lot more time doing things which aren't possible without the technology, instead of replacing people and doing that stuff.

Tim El-Sheikh:[00:25:51]

But that's the thing. To me, that's why it goes back to the narrative that I'm challenging, because I had a guest on my podcast, Dr. Jeffrey Funk. He was one of the people that actually predicted the bubble way before everybody else. He even wrote a whole book about it, I think, in early 2024. We actually had our discussion before all this. So now everybody's talking about AI bubbles, but him and I already were. I think we're probably the first video on YouTube. Hence, it's super viral on YouTube. It's the only thing I have that's viral, which is like, yeah.

Ben Byford:[00:26:23]

Are we just going to agree that there is a bubble? Because obviously, I alluded to the fact that there could be this thing that we're calling a bubble. Do you have a, yes, this is an AI bubble?

Tim El-Sheikh:[00:26:33]

Oh, yeah. No, it's a bubble. I think the question should not be whether it's a bubble or not. Those of us involved in the field established that it was a bubble, like I said, in early 2024. But the question was, for me at least, what does the bubble burst look like? Because ultimately, this is when you start seeing the maturity of the technology. Right. Like I said earlier, this is where the grown-ups start to take over. With the Dot Com Bubble, it blew up and we had all the useless websites dying away or whatever. But then eventually, the whole world started to understand, okay, what is it that we can really achieve with the internet? And here we are. So the same thing can happen with AI. But then AI is a little bit different, because AI has been around for a long time, for 70 years. The internet was new, but AI is not new. Everything that we see today with AI, where people talk about, oh, we've got this new AI. There's nothing new. It's literally the same thing.

Tim El-Sheikh:[00:27:35]

The transformer technology, for those who probably don't know what a transformer is, the T bit of GPT, is the transformer technology, which is literally at the heart of the whole generative AI market. That was actually invented by Google, and that was back in 2015, I think, or 2014. I can't remember. It was a long time ago. They actually developed it because they were facing a similar problem that, funnily enough, my company was facing at the time: it just makes it more efficient for you to be able to combine different corpuses of text. Actually, not corpuses, it's corpora of text. For example, the problem we had with my company, for instance, because it was science-based. The problem with science is that you've got multiple different fields. You have multiple dictionaries. You would have a dictionary for, say, physics, a dictionary for biology, a dictionary for whatever. Normally, what people would be doing at the time, they would have an AI model for each one. We were saying, well, actually, no, you should have one AI model that can bring them all together. But then how do you bring them all together? There would be a lot of redundancies, so that creates a big problem.

Tim El-Sheikh:[00:28:42]

Google had the same problem with Google Translate, which used to be quite crap at the time. If it wanted to translate, say, German to Mandarin, it would have to translate German to English, then English to Mandarin. It was never a direct translation. They basically tried to figure out, well, how do we combine all these dictionaries in a way so that we don't have to go through all that weird, convoluted approach to translating language? They came up with the transformer technology.

Tim El-Sheikh:[00:29:17]

Now, obviously, as is the case with all big companies, they didn't have the foresight to see how far they could go with that. Obviously, OpenAI did, so kudos to them. But the point I'm making here is that everything that you see here is not new. To me, even though it's a bubble, bubbles tend to happen with new things. Blockchain was new, the internet was new. AI is not new. Like I said, AI has been around for 70 years. In theory, it's a bubble. I suppose it's probably a generative AI bubble. But this is why I am very upset with this whole situation, because when people talk about AI, they really mean generative AI. But unfortunately, now it has masked and kind of painted the picture that the whole of AI is generative AI. Therefore, the bubble is even bigger.

Tim El-Sheikh:[00:30:07]

My concern is that even though it's a bubble, I think it's a bubble that will impact the entire AI industry, because the problem is going to be trust. If people hear the term AI, they won't care. Oh, well, actually, no, I'm not talking about generative AI. I'm talking about, I don't know, manufacturing AI. No, it's AI. We don't trust it. That is bad. That's why it's a bubble now. So it's a bubble that is caused not just by excitement, but by a complete misunderstanding of what AI is, but also by the level of, I don't know if I should legally say this, but I would almost say lies by the likes of Elon Musk, Sam Altman, everybody involved in AI. When they were saying, oh, we're building AGI, AGI is the thing, et cetera, et cetera, et cetera. Even though, again, AGI is not a new concept; we've been trying to do AGI for years. When I say we, I mean the tech sector.

Tim El-Sheikh:[00:31:03]

It never happened because it's just not possible. But they kept on going on about it, and they always use AGI as the excuse to release products that have nothing to do with AGI, like Sora. What on earth is Sora? What's Sora got to do with AGI or intelligence? But this is the problem. It's like they're just over-inflating these crazy expectations. Because people don't understand what AI is, going back to my original point. And notice as well that what these guys are doing, they're always using Hollywood references. Like, Elon Musk likes to say, oh, I want to build R2-D2. Good luck. Ain't going to happen, because that's fantasy. Sam Altman obviously talks about... remember when they released... I can't remember which GPT, when they released the speech capability and he started talking about Her, from the movie.

Ben Byford:[00:31:53]

Yes.

Tim El-Sheikh:[00:31:55]

That's not it. That's not real. But this is how people connect with AI, because they don't know. All they know is what they see in Hollywood. These guys are saying, well, guess what? You know what you've seen in Hollywood? This is what we're building. People like me and you who know the reality were actually saying, no, that's not real. That can't happen. It won't happen. This is the issue. I think that's why it's a bubble, but it's a bubble based on complete lies, fluff, fantasy rather than new technology. That's why I would say it's unprecedented. It's a bubble, but we've never been through this before. I have no idea how this is going to end. All I know is it's going to be a disaster, for sure.

Ben Byford:[00:32:34]

I like the reference to science fiction, because science fiction often takes a dramatic, often catastrophic view of these technologies. But a lot of the time, it's very rare to find a good example of AI use where things went well, people were happy about it, and it isn't like everything. And I think like what you're saying before, it's like they're being sold as this thing will revolutionise everything. And it's very hard to contextualise that. And I think they probably find it hard to contextualise that as well. It's like, how does everything suddenly change. And I think with an upheaval, even like you say, if it's not going to happen, then we're just spending this money and it's going to be a massive bubble. But even if it does happen, that's going to be an unprecedented change that we probably won't be able to deal with as a social society. How can we change overnight to this other mode of operating without it going really badly, essentially? So I find that it feels odd to me that people are actively trying to do this thing, which is probably definitely going to be bad. And it's not necessarily going to be bad for the reasons that other people think it's going to be bad. It's just like, we're just not set up for this, guys. You know what I mean? The economics of it won't work necessarily. Do we want to be ruled? I don't know, what's the outcome here? Are you going to be some ruler of the world with this AI thing? Is the AI going to rule everyone? Do we even want any of this? And the difficult thing is to go, well, out of all of these options, you're probably thinking that you're going to be in control of this thing that can do anything. And at that point, it makes you this super God person with this powerful AI. And it's like, do we want that? Any which way you cut it, it's like, I don't know, guys. I don't think this is a good idea. Maybe I'm not drinking the Kool-Aid, as they say. 
I think there are so many good things, like you were saying, that we can produce with these technologies, and it is a set of technologies. At the moment, when people say AI, it's probably just generative AI that they're talking about, and a subset of generative AI, which is probably image generation, which is some stable diffusion and a language model strapped together. You know what I mean? There are all these things that combine into the actual thing we're talking about, and it's not magic. When you say AI, it's basically like all bets are off. It's just something that I actually can't describe. But this is a thing, and it uses a lot of compute.

Ben Byford:[00:35:33]

I guess one of my other questions is, why do we need all these data centres? We've got models, right? Why do we need more? Why? Is it for this AGI dream? Or do we actually need... I actually like... Sorry, I've had a mini rant, so you can...

Tim El-Sheikh:[00:35:56]

No, but you're absolutely right. You mentioned an interesting point about how AI can revolutionise everything. Now, firstly, what do you mean by everything? Secondly, why do we need to revolutionise everything? What's wrong with everything? That's the key question. You know what I do with businesses? It's like, okay, you say you've got a problem, you think AI can solve that problem. Well, firstly, why do you have that problem in the first place? But then when you ask people, okay, what are the problems that you're dealing with? Automatically, people... Actually, the argument that I've been making for years, in fact, I say this on the podcast quite a lot, is that to me, AI is involved in what I see as the five pillars of society. The things that we all care about, regardless of your politics or background, we all care about these five things, and that is finance, healthcare, education, the environment, and our identity. Look at the problems in these areas. I think you can see a lot of problems there already.

Tim El-Sheikh:[00:36:54]

Okay, can AI solve some of these problems? For example, if it's healthcare, what are the problems that we have in healthcare? There are a lot of inefficiencies, and it's mostly admin. Can we use AI to get rid of the admin stuff? In fact, way back in the day, I was a biomedical scientist and I worked at a hospital. I was already thinking about AI, and I was talking to nurses and doctors, and all of them at the time, and this was in the early 2000s, were already thinking about AI and how you get rid of the paperwork. And literally, the thing that never left my mind was the nurses saying, 'If there's anything that could get rid of the paperwork, that would be amazing, because then as a nurse, I can do what I need to do and what I enjoy doing. And that's literally helping patients, interacting with patients, working with patients. I don't have to worry about the paperwork.' You see, this is where AI is fantastic.

Tim El-Sheikh:[00:37:48]

But then you have people, like the Musks of this world, say, 'well, actually, we can create AI that would replace nurses.' Now, firstly, why do you want to replace nurses? What are the problems with nurses? There aren't any problems with nurses, right? Now, somebody said, well, maybe some nurses are not very nice people. Well, there are people who aren't nice in every job, everywhere, right?

Ben Byford:[00:38:09]

Yeah. If you're in the American system, maybe the medical environment is very different, right? It's a more privatised situation. In the UK, we obviously have the NHS, and different countries have different versions of that privatised-to-state-funded spectrum. So I think in the UK we'd say, 'no, we want more nurses', but in the US, you've got this profit-making thing going on.

Tim El-Sheikh:[00:38:40]

Exactly. I was coming to that. I think that's the thing: profit. Because there are two issues here. And going back to Jeffrey Funk, he said it perfectly, I think, when we had that discussion. He said that the problem with these people that go on about wanting to replace doctors or humans is that they haven't got a clue how these things work. I always find it funny, especially when you hear somebody like Altman, though to be fair to him, loads of them have been saying the same thing for years, when they say things like, 'Oh, AI can cure cancer.' Now, remember I said at the beginning, I'm so lucky in terms of knowing the right things at the right time. As a biomedical scientist, you know what my major is? Oncogenomics, the genetics of cancer. When I hear people say to me, oh, AI can cure cancer, my first response will be like, oh, really? Show me how. And they never do. Show me the data. Show me the papers. Explain that to me. They can't. The issue is these people haven't got a clue how work works, basically. That's how Jeff put it at the time.

Tim El-Sheikh:[00:39:48]

He's like, they don't understand work. And what he has done, which I thought was fascinating, because his expertise is in robotics: 40 years ago, he was figuring out how you enhance particular jobs with robotics. But in order to do that, you need to understand the nature of the job first. Which goes back to what I said about business: if you've got a problem in your business, what are the causes of the problem? Which means you have to understand all of the aspects of your business model, et cetera. You have to look at the whole picture. But when these people go on about, oh, let's replace doctors. Well, do you know how a doctor works? Obviously they don't, because when you ask them these questions, they can't answer. That's the thing. They think that because they're really good at computer science... I saw this a lot at Google Campus, to be honest, where they're really clever at maths or really clever at engineering, and they think, oh, because I'm so clever at these things, I can do anything. I can do graphic design. I can do teaching. How hard could it be?

Tim El-Sheikh:[00:40:51]

That goes back to, I think, our societal interpretation of intelligence. You and I actually talked about that, didn't we, in the first episode, four years ago, about what intelligence means. I think there's this perception that if you're good at maths or good at engineering, you're clever, and that's it. But if you're good at music, good at arts, who cares about that? The reality is, if you're intelligent at maths, you're intelligent at maths. You take that person out of maths, and at everything else they're a complete idiot. The best example of that, of course, is Sheldon Cooper. For those of you who know, that's a character from The Big Bang Theory. He's a physics genius, a prodigy. But you take him out of that world, the guy's a complete numpty. These guys are the same. I'm not saying that they're stupid. I don't attack people in that way. But stay in your lane, with all due respect. Shut the hell up when it comes to things that you don't understand. I say very openly that I'm an absolute luddite when it comes to physics, even though I enjoy reading about quantum physics and astrophysics. But I'm not going to get anywhere near that. People talk about quantum computing. Good for you. I'm not getting into it because I know nothing about it. I'm honest about it. But then you have these guys that try to be clever. They try to have this persona that they're geniuses. A genius doesn't know everything. A genius actually knows that they're really good at that one thing. Einstein knew he was the best in physics, but then he figured out ways to apply some of these areas in the real world, and he would still ask the questions: look, I don't know how this works. I think physics could be applied in this area, but let me work with other experts in this area to help me understand. This is how intelligence works.

Tim El-Sheikh:[00:42:43]

This goes back to the point about control. Who wants to control what? Frankly, the way I see it, they're replacing people. You see companies these days replacing thousands of workers. That's not because AI is capable of doing the work. It's because it's cheaper. It's all about profit. When it comes to control, to me, it's not the AI that is going to control anything. The AI can't do anything. We're nowhere near the AI that we see in all these sci-fi movies that has this conscious belief that it can take over the world or needs to take over the world. That just does not exist. But the people who are, if you like, holding the switch as it were, holding the keys, they're the ones that can manipulate knowledge. So you know the five things that I talked about, right? They can manipulate education, and we're seeing that in America already. They can absolutely manipulate the common narrative, which is: oh, AI is better than you, therefore we can fire you. They don't care whether the outcome is positive, because customer service, apparently, is not important anymore. So if it's crap, fine, so long as we can save X thousands of dollars per person. This is the argument that I always like to make: humanoids, for example, are the most useless robots ever. But the only reason they would use a humanoid instead of a human is because a humanoid, using Elon Musk's figures, costs what, $30,000 or whatever it is, but you pay that only once. An employee would cost a company $30,000 per year. Now, can the humanoid do the same thing as a human? No. But at least they don't have to pay $30,000 every year. You see what I'm saying? So this is what we're seeing right now.

Ben Byford:[00:44:26]

I would probably question that 'pay only once' part in reality, right? I'm seeing this play out and I'm like, okay, I've bought my humanoid and I'm going to make it do notoriously hard problems for robots, like folding clothes, something very easy for a human but extremely difficult for robotics. So I've got this clothes-folding bot, and it breaks down sometimes, and it needs updating, and I've got a subscription to my fleet of... So it probably isn't a once-and-done, I would say, because it's like cars: they need fuel, they need upkeep, you have to fix things. I just don't believe that's even the case, that it would work that way.

Tim El-Sheikh:[00:45:14]

But there's also the argument that they make, and I feel it's quite sinister. Actually, by the way, there are other sinister reasons, in my view. But one of the sinister arguments they make is: oh, humans need holidays. Humans need to have a break and stuff. These AIs never do. You know what I'm saying? It's like they really want that 24-hour workforce that never stops. Now, again, nothing wrong with that. If a robot can do it, great. That's the whole point of manufacturing. That's how it all started. But generally speaking, what I see today is this very anti-human rhetoric, which, say 10 years ago, certainly from what we had within the Google ecosystem, nobody was talking about. None of us. Even AGI was not a thing. No one talked about AGI. I read a few papers about it, whatever. I was like, 'yeah, good luck with that'. But generally, people like DeepMind were focusing on material science at the time, I think. In my case, we were focusing on fighting misinformation. Loads of companies there were trying to figure out how to enhance data encryption with AI, mostly because they wanted to deal with the dangers of quantum computing that could decrypt the entire blockchain, for example, within seconds or minutes or whatever. How can we enhance encryption algorithms with AI?

Tim El-Sheikh:[00:46:41]

Those are real problems, right? But nobody was talking about, oh, we want to replace humans. But certainly from 2023, I think, all that talk started about: how do we replace artists? How do we replace authors? How do we replace designers? I'm like, why? The only reason is that they don't want to pay people to do the work. That's it. It's as simple as that. But the other thing, and again, I did talk a lot about this before, is that there is a very aggressive attempt now to grab as much data as possible. Because over the years, the only way all these big tech companies were getting data was through social media. Now, that's a lot of data. But the thing with social media is that whatever you post online, you've got full control of, because you choose what you want to post. You can put on a persona; as we see on Instagram, most of these influencers are fake. It's like, yeah, I can pretend to be whatever I want to be and have that conversation. You could still generally figure out the general public sentiment towards a particular topic or politics or whatever. So there's a lot of data, but it's not enough.

Tim El-Sheikh:[00:48:01]

What we're seeing right now for the first time in history, and this is where ChatGPT comes in, and we're seeing that rise of so-called AI psychosis, is that for the first time in history, people are sharing very, very deep thoughts, ideas, their true personality with these chatbots. What people don't realise is, we have access to that. You might say, well, Tim, surely people are not going to be reading all these lines one by one. It's not about whether people can read it. There's a record of every prompt that you generate, and we can use AI to start going through all of that to say, well, okay, what do people talk about? What's the interesting thing? That's never happened before. That's people's real thoughts. It's not a façade.

Tim El-Sheikh:[00:48:54]

The other sinister thing is with humanoids, especially humanoids that are designed for households or whatever. I don't know if you saw that new humanoid that came out. I forget what it was called. It looks like one of those Apple speakers. It's got cloth all over the place. It looks very cute or whatever. They were saying that if it doesn't work, somebody in the call centre will have full access to that robot, and they can scan your entire house. But guess what? That's the whole point. They want to have access to the inside of your home.

Ben Byford:[00:49:29]

Your data, in fact. Yeah.

Tim El-Sheikh:[00:49:32]

Exactly. Basically, now they want direct access to your thoughts, to your personality. They want access to your living environment, your working environment. That's an unprecedented data breach, in my view. The fact that they really want access to your private life in this way, it's crazy. Again, talking about control: Larry Ellison, who briefly was the richest man in the world for, what was it, two days or something? He's the founder of Oracle. Of course, he's been in the news a lot. In fact, I even had on the journalist who broke the news about his relationship with Tony Blair and the Tony Blair Institute. But he said it publicly recently, and he said it as if it's a great thing: imagine if we have data on every single person, and we make sure that everybody behaves themselves. What are we talking about here? Again, where is the AI here? Where is the AI? What's this got to do with AI? This is the key point. For me, going back to those five pillars that I talked about: if you have direct access to people's way of thinking and people's way of life in those five areas, their identity, their healthcare (obviously, Palantir in the UK is trying to get access to all of our NHS data), their education, which they can then manipulate, then you are controlling people. You don't need AI for that. These are the people behind the AI, basically masquerading it as AI innovation. But in fact, it's an attempt to aggressively grab as much of your data as possible, so that you will, quote unquote, behave yourself. To me, that's what the AI control looks like. It's nothing to do with AI. It's the people behind the AI.

Ben Byford:[00:51:33]

It feels like if the bubble bursts, there's going to be a land grab for whatever data is available, and who controls the data, and what data is still coming in from whatever is accessible in people's homes, their phones, their robots, whatever it is. So that might be an interesting upshot of this whole thing: when the dust settles, we're left with just data oligarchies, I guess. Let's call them that.

Tim El-Sheikh:[00:52:05]

Maybe that's why they want the data centres, because it's all centralised, isn't it? This is why, for me, again, there are a few of us out there that are basically calling for decentralised data, decentralised AI. Again, this is what Apple is doing. The fact that all of the AI is on your phone and no one can access it, to me, that's the future. Also from the efficiency and cost perspective, it makes sense, because if you as a customer buy the phone, well, you're paying for it. You've already paid for the AI. That's the ROI, which is the reason why I said before, for you to offer consumers access to AI, somebody has to pay the bill. Apple is obviously saying, well, actually, we're not paying the bill. The user will pay the bill. The user has it on their phone. I'm like, yeah, that's great. That's how it's supposed to be. But you asked earlier as well, what's the point of all these data centres? That's a very good question. What is the point of all these data centres? I think if you look at what Larry Ellison is saying, that gives you the answer. That's what the data centres are for. It's nothing to do with, 'oh, yeah, let's give people access to AI'. You need to store the data somewhere. And there you go. It's that centralised database of people.

Ben Byford:[00:53:23]

Yeah, because for me, I'm trying to make sense of that situation. It's like either, like you said, it's just a storage system for these mountains of data that are going to be produced, are currently being produced, and need to go somewhere, need to be computed, need to have something done to them to make them useful. But also there's this idea that these AI tools are going to keep getting better with more data and more compute. So we've been talking about the bubble, but there's also this idea of the plateau, right? We're going to throw more compute and more data at one of these generative models, and it's going to keep getting better forever, right? It's going to be better in terms of: it can do maths better, it can do human responses better, it can do language translation better, summarisation better, whatever task you're actually trying to test it on. And it feels intuitive to me that that isn't the case, that these things aren't just going to always get better. Especially when you consider that you're now polluting your own data with stuff that you produced from the same models. I don't know if people know this, but you could send emails which are all AI-generated, or you could create an AI-generated image, or an AI-generated website. And then, at the end, you're just going to get this snake-that-eats-itself situation, where you're polluting your good data, whatever that means, with synthetic data, and you get this gradual degrading, because it's like, 'oh, well, I'm listening to myself over and over like a big echo', right? It's just going to slowly degrade over time.

Ben Byford:[00:55:25]

So it feels like, are we chasing something which is actually infeasible, and doesn't really make any money, doesn't really help us in any way, and is putting all these data centres and stuff like that in places where maybe we don't need them, and it's a waste of money and time, and not solving any climate issues? Or are we going to magically make this AGI thing which is going to solve all of our... You know what I mean? I think that's more far-fetched than what is actually the case at the moment, where it's just not actually providing any benefit.

Tim El-Sheikh:[00:56:07]

Well, hence it's a bubble, isn't it? Going back to the dot-com bubble, the best example of that, of course, was pets.com, wasn't it? This is basically multiples of pets.com in one thing.

Tim El-Sheikh:[00:56:23]

But the other thing, actually, I just had a thought, because one of the things that I always hear, and again, it just suggests to me that people don't know what they're talking about, is when they say, well, actually, the infrastructure is great. If you remember with the dot-com bubble, a lot of the companies that actually built the fibre optics or whatever, they all went out of business, but the fibre optics still served their purpose for decades. Or they say that AI is like the railways, that the data centres are like rail. I'm like, actually, no, it's not. No, it's not. Because what is the core element of all these data centres? The microchips. Microchips don't last for decades. Now, you're a gamer. We're both gamers, right? I use Nvidia, right? Now, which smart gamer would ever go to the market and say, you know what, I want to buy an Nvidia chip that's 10 years old?

Ben Byford:[00:57:23]

It's now redundant, right? So you're just like, well...

Tim El-Sheikh:[00:57:26]

Absolutely.

Ben Byford:[00:57:27]

It doesn't work so...

Tim El-Sheikh:[00:57:29]

And the thing is, there's research that shows that basically all of those microchips right now, because obviously the compute is so intense with these models, their lifespan is three to five years. Let's say all these data centres spent all of that money. Now, Nvidia, of course, by the way, I don't agree with people who say Nvidia is the one that's going to lose out. I don't understand that. They make a lot of money. Excuse my language, but they are absolutely laughing all the way to the bank. And kudos to them. They've got the technology, they've got the microchips, they hiked up the price. That's what you do in a bubble as a business. They're not losing anything. They'll be fine. If the bubble bursts, Nvidia will be absolutely fine. No, I don't believe anything would happen to Nvidia. But the people who are building, because there are companies that are taking out loads of loans and stuff to build these data centres, and they're basically investing in these microchips. Well, what will happen after three or five years? Who's going to pay for the microchips that don't work anymore, or aren't particularly useful anymore?

Tim El-Sheikh:[00:58:36]

So this idea that it's a long-term investment is completely for the birds, because it's not like the railway. The railway, yeah, it did last for centuries or whatever, but microchips don't last for centuries. Again, it's that narrative where people just don't get it. It's like, 'AI is like fire'. No, it's not like fire. Fire is something that we've used for millennia. Again, microchips don't do that.

Tim El-Sheikh:[00:59:04]

It's frustrating. You can tell that I'm frustrated with this, because the whole thing is just led by completely misleading information, information made by either grifters or incompetent people or people who are not experts in the field. They're the ones doing all the talking, which is the reason why I started my podcast, and that's not to promote the podcast, but more to make the point. The mission of the podcast is very simple. I want to bring real experts out of the clickbait shadows, because everything that we have right now is all meme-y, clickbaity garbage that is controlling the entire AI narrative, so much so that governments listen to them. I have sources now, Ben. I'm a journalist, apparently. I was like, what? But I have people in the UK government who told me that there are people in government and government departments, and mind you, they are the ones who control policy, who think AI was invented three years ago. Riddle me that, Ben.

Ben Byford:[01:00:19]

Yeah, I mean, they're talking about a very specific thing, which is not what AI is or should be, you know what I mean? So I'm interested in... Obviously, you've got your podcast. I was on a year ago, and we had a chat, and it was really nice to air frustrations as well. And I was wondering, what is it that you're trying to achieve with the not-for-profit foundation thing that you're doing? And is there anything that we haven't talked about in this AI bubble situation that you want to briefly touch on before we finish?

Tim El-Sheikh:[01:01:08]

Well, one of the key questions I get a lot these days, and this is a typical thing you see in deep tech, because I'm basically a deep tech entrepreneur, so it's all about research and development, and people like me talk about things before everybody else is talking about them. Like I said, we talked about the bubble before everybody else was talking about the bubble. So right now, the talking point we have is: what happens after the bubble? Whatever that bubble explosion looks like, I think we all agree that the consequences of the explosion are pretty catastrophic. There'll be a financial meltdown, a lot of money lost, so much so that sometimes I can get a bit hyperbolic about it. I think it's even a risk to entrepreneurship, because if entrepreneurs can't get money from VCs that lost all their money, how do they start a business? For me, this is the question: what happens after the bubble? How do we clean up the mess? This is my mindset these days. It's the post-AI-bubble cleanup. That's basically the billion-dollar question, because it's so many things.

Tim El-Sheikh:[01:02:16]

I think the problem right now, certainly from the commercial point of view, which is the reason why I want to do the nonprofit side of things, is that trust in AI will collapse. It's probably already collapsing. Nobody can say anything good about AI. Basically, I feel that we need to find a way to prioritise solving the problems caused by AI. In my case, I want to look into how we reverse, or find ways to solve, this problem of AI psychosis. Again, there's a lot of research out there that shows that hundreds of thousands of people have developed pretty horrific mental health problems from generative AI. How do we solve that? People talk about the cause and the outcome. Well, what's in between? That's what my non-profit is going to be doing. We want to look at how to solve that problem. Of course, I don't believe any VC would invest in that, because they don't care. It has to be government-funded, because governments will have to figure out a way to fix that mess, because people will vote the crap out of them, like we've seen right now in New York. We're going to see a lot of that post-bubble, for sure. People are absolutely going to retaliate against whatever government they have, because the economic consequences are quite obvious. So that's basically what I'm looking at: how do we clean up the mess, the mental health mess? Obviously the data, what will happen to the data, like you alluded to, who's going to access that data, and basically campaigning for proper regulation once and for all, because we've been talking about this regulation for bloody forever. Nobody's done anything. Yeah.

Ben Byford:[01:04:00]

Yeah, I just remembered a question I meant to ask you earlier, but I'll save that for another time. It's really interesting that you're talking about the psychosis side of things. In our next episode, we're actually talking about AI and relationships, socialising with AI, and that connection. So if there was one thing that you could advise a general listener, what would you advise them to do if they're using this technology?

Tim El-Sheikh:[01:04:33]

Not to fall for the praise, because that's what it's doing right now. Actually, I was experimenting with it yesterday, funnily enough. I think it was with ChatGPT, where I was asking it very specific questions. What was it about, actually? Something to do with the environment? There was something that came out yesterday. Oh, my God.

Ben Byford:[01:04:53]

Sorry. ...

Tim El-Sheikh:[01:04:57]

But I asked it the question. It was a very, very specific question, right? Almost like a yes or no question. Straight away, it said, 'Oh, this is such an amazing question. Well done for thinking about it'. I'm like, huh? What are you doing? This is what hooks people in. It's that nonstop praise. Because we live in a world where people are stressed all the time, for good reasons, somehow that creates this comfort zone in your head where you just want to talk to somebody that is always praising you. So my advice is just be careful with that, because you can very easily fall for it. We all do. We're all humans. We all have our cognitive biases, et cetera. So just watch out for that. Don't allow ChatGPT to fuel your cognitive bias.

Ben Byford:[01:05:47]

Tim, I think that you're amazing, and I think that's a really good answer from just the best person, actually, just in the world.

Tim El-Sheikh:[01:05:57]

Oh, you're so kind. I didn't pay Ben to say that. I swear. I didn't pay him.

Ben Byford:[01:06:04]

No, you're an awful... No. Thank you so much for your time. I'm definitely going to speak to you soon. As always, keep up the good work. And if people want to find you, follow you, interact with you, how do they do that?

Tim El-Sheikh:[01:06:20]

You should be able to find me easily. You can Google me as well. But if you want to check me out, just check out the podcast, which is ceoretort.com. That takes you to my podcast, my newsletter. Yeah, what else is there? There's nothing else. Yeah, that's it. All the good stuff is there.

Ben Byford:[01:06:42]

Okay, cool. Thank you for your time, Tim.

Tim El-Sheikh:[01:06:45]

Thanks so much, Ben.

Ben Byford:[01:06:48]

Hi, and welcome to the end of the podcast. Thanks again to Tim for coming on for a second time. If you'd like to find out more about Tim, you can go to the CEO Retort, or you can find his first appearance in episode 63 of the Machine Ethics podcast: AI Readiness.

Ben Byford:[01:07:04]

Obviously, some episodes are a product of their time, and I feel like there's a lot of conversation around the AI hype at the moment. So if you're listening to this in the future, can you message back and tell us what happened? Okay. I also really liked the idea, which hadn't necessarily come across to me before, of the AI oligarchy and this new land grab for all this data, all this very sensitive stuff that we are giving, prompting, to these systems in a way that we don't normally interact with the Internet. And what is happening to that data, who has that data, who keeps it, who has ownership over it, and how valuable it is, is something that I haven't seen talked about enough in the work that I've seen. So it's something that we should probably be mindful about, as well as the security aspect of that, and privacy stuff. So if you want to talk about that, do get in contact.

Ben Byford:[01:08:02]

Thank you very much for listening. Obviously, go to machine-ethics.net for more episodes from us. If you can, you can support us, Patreon.com/machineethics. And hopefully see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a coding, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since, Ben has talked with academics, developers, doctors, novelists and designers on AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford