79. Taming Uncertainty with Roger Spitz

This time we chat with Roger Spitz about how to think about the future, what a futurist does, thriving on disruption, the idea of a chief existential officer, virtuous inflection points, delegating too much authority and decision-making, and our inappropriate education system
Date: 11th of July 2023
Podcast authors: Ben Byford with Roger Spitz
Audio duration: 01:13:36 | Website plays & downloads: 125
Tags: Education, Futurist, Existential Risk, Disruption, Decision making | Playlists: Existential risk, Philosophy

Based in San Francisco, Roger Spitz is an international bestselling author, President of Techistential (Climate & Foresight Strategy), and Chair of the Disruptive Futures Institute. Spitz is an inaugural member of Cervest’s Climate Intelligence Council, a contributor to IEEE’s ESG standards, and an advisory partner of Vektor Partners (Palo Alto, London), an impact VC firm investing in the future of mobility. Techistential, Spitz’s renowned strategic foresight practice, advises boards, leadership teams, and investors on sustainable value creation and anticipatory governance. He developed the Disruptive Futures Institute into a preeminent global executive education center that helps organizations build capacity for futures intelligence, resiliency, and systemic change.

Spitz is an advisor, writer, and speaker on Artificial Intelligence, and has invested in a number of AI startups. From his research and publications, Roger Spitz coined the term Techistentialism which studies the nature of human beings, existence, and decision-making in our technological world. Today, we face both technological and existential conditions that can no longer be separated. Spitz chairs Techistential's Center for Human & Artificial Intelligence. He is also a member of IEEE, the Association for the Advancement of Artificial Intelligence (Palo Alto), and The Society for the Study of Artificial Intelligence & Simulation of Behaviour (UK).

Spitz has written four influential books as part of “The Definitive Guide to Thriving on Disruption” collection, which became an instant classic. He publishes extensively on decision-making in uncertain and complex environments, with bestselling books in Business Technology Innovation, Future Studies, Green Business, Sustainable Economic Development, Business Education, Strategic Management & Forecasting.

To learn more about Roger Spitz's work:
The Definitive Guide to Thriving on Disruption: www.thrivingondisruption.com
Techistential: www.techistential.ai
Disruptive Futures Institute: www.disruptivefutures.org


Transcription:

Transcript created using DeepGram.com

Hello, and welcome to episode 79 of the Machine Ethics Podcast. This episode was recorded on 13th June 2023. This time, we're talking to Roger Spitz. We chat about what it's like to be an AI. What does a futurist do?

Thriving with disruption, the idea of a chief existential officer, and virtuous inflection points. Are we delegating too much authority, or do we have a loss of agency or decision making when it comes to our use of AI? Whether our education system is appropriate for our AI-mediated future, and much more. If you'd like to find more episodes from us, you can go to machine-ethics.net, and you can contact us at hello@machine-ethics.net. You can follow us on Twitter at machine_ethics, and on Instagram at Machine Ethics Podcast.

And if you can, you can support us on Patreon at patreon.com/machineethics. A big thank you to all our Patreon supporters, especially those who've been supporting us over the years as we move into our 7th year doing the podcast. And thank you for listening; I hope you enjoy. Hi, Roger. Thanks for joining me on the podcast.

It's a pleasure to have you, all the way from the sunny States at the moment. It's very sunny here in Bristol. How is it there? San Francisco is not the sunniest part of the world. We got shortchanged when we left from London, I think, from the California dream, but hey.

Oh, it's yes. It's like the fog, isn't it? That's that's the yeah. So if you could quickly just introduce yourself, who you are, and what you do. That's great.

Thanks, Ben, and fantastic to be with the Machine Ethics podcast. So really, really a pleasure, etcetera, zeitgeisty, and really important topics. So I'm Roger Spitz. I used to be an investment banker. I was global head of mergers and acquisitions for one of the large European banks, advising CEOs, founders, and investors on their most strategic acquisitions or divestments covering technology.

I did that based in London, globally, and then San Francisco for a few years. And over the past few years, believe it or not, I retrained as a professional foresight practitioner, aka futurist. So I have a foresight practice called Techistential, which we'll talk about, technology and existential. And I have an education platform, which is the Disruptive Futures Institute, which probably we'll also talk about, which helps make sense of and evaluate situations and how to respond to our complex, unpredictable, and nonlinear world. So there's this question we always ask at the beginning of the podcast.

I think there's there's so much that we've just said that we can dig into, which is awesome. But to kick us off, what is AI? And I guess the lead on question from that is you've talked about disruption quite a lot. So what do you mean by disruption as well? And I guess, is AI part of that equation?

Yeah. Yeah. Good, good starting point. And it's interesting how these questions of semantics and meaning and understanding are just so fundamental. So we use artificial intelligence as a broad term, part of, I guess, the computer science field or what have you, which is seeking to create machines capable of thinking and learning.

Doesn't mean that they are, but, effectively, they perform certain cognitive functions, which typically require human intelligence. So maybe perceiving, reasoning, some degree of learning, or problem solving, and we can talk about what that means in terms of decision making. More fundamentally, I guess, day to day, there's an element around recognizing speech, playing with language or processing language, at least, visual perception. And, and we'll talk about it as well, but but, you know, playing a role on some form of a decision making value chain doesn't mean that they understand everything. So at its basic level, that's that's what we understand by AI.

I find that it's really important to just briefly touch upon two or three elements, subsets of AI, because of the impact they're having today. One is machine learning, which is effectively, you know, what is allowing machines to code for themselves, what's allowing machines to learn without being explicitly programmed, whether it's from data or what have you. And the other one, I guess, is deep learning, where the artificial neural networks are trying to replicate the brain. Again, not saying they're doing it, not saying it's effective, not saying computers are brains, but that kind of aspiration to really allow computers as best as possible to replicate the human brain, and what that means.

So that's how we kind of think about AI. I guess the final point, maybe, if you take some of the sort of categories of AI: let's take Nick Bostrom's, you know, from his book Superintelligence and his work at the Future of Humanity Institute. I guess he and most people use similar clusters. They're not necessarily always the same terms that are used, but, effectively, there's artificial narrow intelligence, probably what today most people call AI. It's quite specific in terms of precisely defined tasks.

That kind is helpful around pattern recognition and data. And, really, here, computers have done a pretty decent job: they can play a sort of specific activity, you know, chess, and win; or autonomous driving, although that's still not completely mastered, but progressing; natural language processing with, you know, your Amazon Alexas, again, which seem pretty lame, but in reality they may not be that far from moving from 95% to 99% comprehension, which will make a big difference; things like image and facial recognition. So that's kind of ANI, the artificial narrow intelligence. The second one, I guess, is artificial general intelligence, and this is where a lot of the debate is going, which is: can AI have effectively comprehensive cognitive capability across a wide number of tasks? Can you reach what, you know, Ray Kurzweil labeled the singularity, whereby computers achieve human level intelligence?

So here, what that would look like, you'd have machines that have self-aware consciousness, the ability to solve problems, the ability to learn, and, you know, maybe even plan for the future. And the third one, I guess, is artificial superintelligence. So here, not only are computers able individually to reach human intelligence, but there's some kind of swarm, collective, combined cognitive ability of the entire humanity and networked systems. And that's, you know, the most famous one doesn't exist, but I refer you to Stanley Kubrick's and Arthur C. Clarke's 2001: A Space Odyssey, with HAL, and that's what they kind of envisioned as what that superintelligence could look like. And then the debate is really around, today, probably, the artificial general intelligence.

And so for the purposes of our conversation, and certainly in terms of risks and that, a lot of the debate is really on AGI, I guess. That's... there's a couple of questions I wanted to ask, I think. Mhmm. One of the points that you were making was around AGI and consciousness being part of that equation. Is that something that you personally believe, that the more general an intelligence gets, there is a prerequisite towards some sort of stuff.

You know, we could call it a consciousness or some sort of awakening or maybe even like, an internal, being or what it's like to be an AI. Or or do you think I mean, I'm obviously leading this as a leading question. Or or do you think it's like something else? Or, you know, you can be, let's say, extremely, intelligent and do extremely interesting things and wide variety of things without having this kind of sentience piece, involved. Yeah.

That's so, listen, I think and, again, I'm, you know, I'm an ex investment banker today working as a foresight practitioner, and it's multidisciplinary, but I'm obviously not probably the world's expert on consciousness. But I think the the considerations around consciousness are probably quite, you know, nuanced and and, developed and, had really many facets to it for humans, let alone for machines or advanced, you know, systems. But my personal view and I don't know if I I mentioned that in my kind of illustration, but my personal view is that it's more kind of self awareness consciousness than than really purely consciousness per se. And so, I kind of see it slightly more as as machines that are becoming self aware so that when there's some considerations around, you know, planning for the future or thinking about problem solving, and that is some some self awareness, and I kind of label that as consciousness. But I I think your point is an important one because one of the things we look at in terms of advanced AI systems moving up the value chain is is whether it matters or not that they understand necessarily what they're thinking about, whether they're actually able to reach the same functionality of the brain or whether, in a sense, it's the outcomes that are kind of proxies for decision making and evaluation.

And you could also argue that the human brain is also very different, that the levels of understanding are very different, and that the decisions taken on the information are incomplete and not always rational or whatever. So, you know, you can have that same debate around the human brain: to what degree do we have full understanding of it, to what degree is it fully consistent across humans, etcetera, etcetera. I don't know if that helps answer the question. Yeah. No.

No. I I think it's one of those things where it spawns more questions, and I think we'll be talking about that for the whole episode of if we don't kind of, like, you know, go, cool. Can sidestep that. Maybe we'll come back to it. Who knows?

We'll see. So I think I was really interested to learn more about kind of this transition. You decided to go into this kind of forecasting, prediction sort of role, which is interesting because it kind of mirrors what machine learning kind of does. I wonder, you know, what is it that a futurist does, and why you put together your kind of Techistential. Yes.

Yeah. Yeah. Yeah. So would you call that company, or is it a, institute or something? Yeah.

So there's basically a foresight practice, which is more effectively a consulting firm, which we call Techistential. And then the Disruptive Futures Institute is educational. So we help capacity build for futures intelligence and decision making in complex environments, etcetera. Yep. So I guess you got through these different facets, and I wondered why you went into futurism, writing the book, doing the institute, consulting, all that.

Thank you. Yeah. But you're right. They all have a common theme, which is how to think more, you know, structurally about the future. So the first thing is maybe clearing the elephant in the room in terms of what futurists do, what are futures, which is in a sense your question.

I guess it's a personal thing why I decided to move towards that, and then what is actually a futurist? Maybe the point on the common misconception around the foresight field is that futurists don't seek to predict, paradoxically. So we believe that the future is actually unpredictable. None of the scenarios or things we imagine are likely to materialize exactly as we kind of envisage them. But by thinking constructively or systemically around the future and imagining sometimes the unthinkable, the unimaginable, we believe that that has a number of benefits.

One is that you're likely to anticipate certain situations better because you've kind of thought quite broadly. Two is that you might be able to inform decision making and build resiliency or other kinds of capacity building, because you've thought about all kinds of different eventualities. And three, you might be able to orient the outcomes towards your preferred futures because you're consciously thinking about not only what the future could look like, but what role you could have in shaping that. So a futurist is really someone who's exploring many possible futures systemically, as well as the drivers for change, but always to inform short-term decision making. So that's another misconception, that you're just thinking, oh, what will happen a few decades ahead?

Well, actually, all we're doing is trying to ask the why, why not, what if, what if not, so what, to inform decision making today. So our departure from normal or more conventional strategic planning is effectively that we think about longer-term time horizons. We think in a more emergent way. Things kind of evolve by trial and error. They're not necessarily predetermined milestones.

We're seeking, in a sense, questions as opposed to just looking and diving into answers from certain assumptions. And we think that the world is systemic, nonlinear, complex, that things are connected and spill over in ways that are not predicted, or that you might have a difference between the first, second, or third order implications. And it's that kind of plurality which means you're constantly looking for signals as opposed to really just being presumptuous or assuming certain outcomes. For the anecdote, I think the foresight field, and certainly scenario planning, developed considerably after World War 2. So what happened then is that the world was confronted, maybe for the first time in a kind of direct way, with the previously unprecedented possibility of nuclear annihilation.

And so from that, people like Herman Kahn at the RAND Corporation, or Pierre Wack and others at Royal Dutch Shell in the seventies, looked at, effectively, in one case for military and nuclear strategy, in the other case for business strategy, what it looks like when you're acknowledging that the future might be radically different from today, imagining a world that doesn't necessarily exist. And maybe the final comment I'll make, which is very important, is that in addition to what we talked about as helping be more anticipatory and all that, you know, if you ask the what if questions, and this includes things that can be very positive, you know, what if I could cure cancer? What if, you know... it kind of focuses the mind not only on the discontinuity of every day or your strategic plan, but it's really gonna push you maybe to actually make these things happen. And a lot of it's in a way the non-dystopian aspect of science fiction, you know, asking the what if question.

So that's hopefully given some elements around, you know, what a futurist does. My personal journey, as you're asking, and that links to not only why I became a futurist but why we wrote the Thriving on Disruption collection, is that I was in M&A for 25 years, as I mentioned. And when you're in investment banking, you feel you're pretty much on top of the world. You feel you're dealing with all the decision makers. Everybody knows what they're doing.

Everything you're dealing with is quite strategic. You feel you're looking quite long term. What I started realizing over the, you know, the the sort of last few years when I was still in the industry is that this thing called disruption, or we'll come back to what it is, but nonlinear change is actually quite unpredictable. And, actually, there are more and more questions as to what is happening around us. And at some point, I just got very interested in in understanding better change.

And when our CEO asked me to spend time and to move to San Francisco from London 7 years ago with the same role in M&A, I promised myself to go down that rabbit hole. So I did a lot of courses around, you know, complexity and systems thinking with the Santa Fe Institute. I did courses around foresight and futures thinking with the Institute for the Future in Palo Alto and the University of Houston, which has a very good foresight course, the oldest in the world, actually.

And courses at MIT on AI and strategy, the d.school at Stanford on, you know, design thinking and innovation, Singularity University. And after a year or two of these kinds of courses, the penny dropped. I realized that I was kind of not living my life in a consistent way with how I was seeing it. It wasn't just alignment in terms of values and purpose. It was literally that I didn't feel the way business leaders and organizations were driving their strategy was consistent with the reality of our complex, unpredictable, nonlinear world, and so I decided to kind of unpack that.

To wrap this question, basically, and link it to the book, I started talking about these topics more and more when I left investment banking 3 years ago, and that was just before the pandemic. And one of the focus areas was decision making in complex, uncertain, and unpredictable environments. Now what happened is that when the pandemic hit, I kind of suddenly realized that a lot of what was happening was not related to the pandemic per se. The world is what it is and had been what it was for a certain time, but it fleshed out a few things: the degree to which society and humanity was unprepared to understand the true nature of our complex world; the velocity of systemic transformations that can suddenly cascade, and what that means; and the public and private institutions, whether governments, agencies, educational systems, or incentive structures, that are basically meant to kind of guide or support all that, being increasingly ineffectual.

And putting all that together, basically, I sort of thought, wait a minute. This is this is a problem for the whole world. And fast forward to today, basically, events have shown that this is better to be thinking about the world as I've described it and as it really is as opposed to assuming it's controllable, linear, and predictable. And we kind of just suddenly went through the roof in terms of demand for courses, for talks, for publications, and so we wrote this definitive guide. We give executive courses for capacity building, for future intelligence.

And, effectively, it was two degrees of serendipity plus a kind of feeling that these things were important. And I guess the last comment to your question: do I think that AI falls within these kinds of disruptions or things to think about? And I haven't yet defined disruption. We can come to it at your convenience, but I do definitely think that AI plays a role in the drivers of fundamental change, and these are not, in the way we think about disruption, isolated discrete events. They're kind of systemic, and have, you know, broad ramifications as they kind of self-reinforce other eventual changes.

So with, I'm gonna take a stab and say the idea of the the kind of systemic idea of what you're saying that the world exists in this untamable way, and you are, utilizing tools and, systems to be able to better, kind of react to the realities as you see them and anticipate the next kind of events. And and AI is a current event maybe, but there will be other events. And these are sort of the kinds of things you might talk about, and and I'm guessing that I was reading your, some of your your work and there's this idea of the triple a. So and and one of those a's is agility. So you have this idea that, perhaps you have to concentrate, some time on being agile, in kind of not a project management sense, maybe a project management sense as well, but as a business sense.

You know, the the business has to be able to respond to external events, complex systemic events which happen in the world, which may be outside of your direct control, but you can respond to them. Is that kind of the the vibe? I've just kind of had a little mini rant there. Sorry. No.

No. These are I mean, you're spot on. These are all these are all elements of of that. And listen. Language is is tricky.

Right? It's helpful because it allows us to to have the conversation we're having. It also has a lot of different meanings for different people. Right? So I think the the two two factors which are important in kind of then unpacking the triple a and specifically the, you know, reacting to to the how agility plays a role with that.

I think the first element is is just kind of so disruption is really a constant, and it's simultaneous, and it's systemic. So we don't use it in the sense of Clayton Christensen as disruptive innovation for specific, you know, company or product innovation or technology. We use it as as really change is constant, and there are many drivers of change, and they're self reinforcing. And so that's the first element. The second element is is by complex, and and you use the word untamable, which is very interesting.

Indeed, we're unlikely to tame the environment per se, but maybe, and agility is one way we believe of doing it, we can tame, quote unquote, being better prepared for that untamable outside world. And so the element around complexity, and I'm sure you're, you know, you're familiar with this, but for some of the listeners who may not be, the distinction with complicated is that with complicated, you have what's called known unknowns. Cause and effect can be established ex ante. You can rely on science and experts, and, basically, there is an answer to that.

And that is actually where AI is very good, and that is, you know, how you send a probe to Mars, how you fix a plane. These are not trivial, but with expertise you can develop it, and there are relatively predictable outcomes you can derive. Complex is nonlinear, so that means that if you have kind of something that arises, the outcomes and the outputs can be completely disproportionate. That can be good. You know?

None of this is necessarily good or bad. You know? If it's social media and you're sending the right message, and you're trying to make a social movement around equality or social justice or around climate initiatives, that's great. That is nonlinear. If it's other things that jeopardize democracy with disinformation, that's not good.

But nonlinear is effectively that idea that it's not proportional. It's not predictable in the sense that you can't establish beforehand what the outcomes will be. It's like the Amazon river. You know? If you move something, how does it impact something else?

You you know? You have no idea. So there's an element of emergence, trial and error, etcetera. And then multiple drivers of change, you can't just isolate a or b. That's why we use, you know, disruption in the sense of systemic disruption.

So coming to agility and our triple a, what we're kind of saying is in this world of systemic disruption, which is complex with the features we've described, there are certain things which even if you can't predict the outcomes, because by definition, that's what we're saying. It's it's not predictable. There are certain things you can do to still be comfortable and maybe even thrive on disruption. So the first one, we'll talk about agility because you mentioned it first. And, indeed, we don't use it in the in the kind of more conventional sense.

We use it in the sense of having the agility to bridge short term with long term. So it's really sense making. So you might have that vision of of of what possibilities may arise in the different futures, which are 5, 10, 15, 20, 30 years away. But, effectively, only the present exists, and so you need to constantly have the agility to reconcile, to zoom in, and to zoom out, between different time frames, and and that requires, you know, feedback loops, experimentation, different types of agility. The emergent agility is is emerging real time in the here and now, and the strategic agility is is reconciling maybe longer time periods with the the shorter term.

So that's kind of the agility bucket. Anticipatory, which is the second A of the triple A, is really some of the things we talked about earlier, what a futurist does. You're thinking about the fact that the world is unexpected, about the multiplicity of possibilities. You accept that you have agency to maybe detect and evaluate some of these and drive towards a preferred future. You're qualifying signals as opposed to just looking at trends.

For us, trends are really the starting point. They're the past. But, really, you look at signals and think about the next order implications of those possible signals. You're qualifying and challenging assumptions all the time. You're kind of having more of a, you know, to use a term from Buddhism, a beginner's mind.

And you appreciate, even though we're not wired as humans to think nonlinearly, you appreciate that it is nonlinear. So you can have sudden exponential changes which seem very unexpected, but, actually, they just had an exponential profile, etcetera. So those are the features of anticipatory. And then the third A is really using Nassim Taleb's term around antifragility, which is, you know, there are certain features which will strengthen if there are shocks, or at least be resilient if there are shocks. And so that idea of antifragility is really, you know, around incentives. For instance, if you're an airline and you buy back all your shares to try and get your stock price up for 5 minutes, and you're using cash from your balance sheet to do that, that's fragile.

Because if there's a shock, if there's a pandemic, if there's anything that goes wrong, you don't have that cash anymore. You can't sustain those shocks, and you've really done it for nothing because you're just kind of fudging a share price for 5 minutes. So these are sort of some of the things about antifragility. And the idea, just to kind of wrap on that, is that these are within our control. We're not trying to tame, to use your word.

It's a great word. I haven't used it in that context, but I love it. We're not trying to tame the outside world because it is unpredictable. We we can't. However, we can think about being more anticipatory.

We can think about building antifragile foundations, and we can have that agility where we're not having to do an either or between short term emergencies because we're unprepared versus longer term visioning. We can find the agility to zoom in and zoom out between those different time frames. I'm just really curious. Do people do that share buyback thing quite often? Or Oh my god.

You have no idea. I hope I'm not saying anything wrong, but I think in some of the years, you know, before the pandemic and before the financial crisis and that, we're talking about somewhere between 500 billion and a trillion dollars, I think, of share buybacks globally. And listen, there's a lot of, you know... if you go and do an MBA at Harvard or any finance degree, I mean, I did a master's in finance, so I understand, you know. I advised companies to do that all the time.

I understand the different reasons why one might do that. I'm not saying it's all entirely ridiculous, but what I am saying is that it assumes a stable, predictable, and linear world, and it is fragile. So if you have a shock or things that are unexpected, which we're having an increasing amount of, that's what happens. So if you Google it, you know, or just research it even superficially, you'll find that it's a big thing. And then these same airlines and banks or whatever, you know, come begging for money from the governments because they're in a particular situation. And then think about how, you know... or it's even the same with being hyper-optimized.

You know? You have all these, you know... I have absolutely no respect for them, but some of these strategic consultants, the likes of McKinsey, hopefully you're not sponsored by them. But anybody who wonders why I don't like McKinsey, just read When McKinsey Comes to Town and some of the amazing investigative journalism from The Wall Street Journal. But the idea, whether it's McKinsey or others, is, you know, hyperefficiency is a similar concept to this buyback, which is fragile. So, you know, you hyper-optimize the supply chain, zero stock, zero inventory.

You know? But the problem is, as you've seen during the pandemic or other shocks or the geopolitical events recently, the minute anything goes wrong, basically, the point of failure collapses the whole system. So you're relying on... basically, it's wonderful to be hyper-optimized and hyperefficient and zero inventory and all that, and only have outsourced to the best countries to outsource to, etcetera, unless there are sanctions, unless countries get taken over, unless there are shocks in the system, and any one of those is a point of failure for the entire system. So, basically, on paper, share buybacks and hyper-optimized systems are great. But, actually, building in that slack is what allowed Zoom to go from nothing to the most established video conferencing platform overnight, because it had that slack. If it was hyper-optimized, it would not have been able to be what it is today.

Anyway, there are many examples of that. One of my favorites is Starlink versus, you know, the normal satellite systems. You know, ViaSat: when Russia invaded Ukraine, they were hoping to just knock out ViaSat and the whole satellite system and have Ukraine go dark, which they managed to do for a moment. Starlink, which is the system from, you know, Elon Musk, is basically decentralized, lower satellites, which, you know, if you go to do an MBA at Harvard, they'll tell you is inefficient, that there's a lot of wastage, all kinds of things. The only thing is that when this happens, Starlink is still working in Ukraine, and they have full coverage.

And so after that, basically, China started wondering how you can be more like Starlink and not the sort of typical conventional satellite systems. And the US DOD started stress testing and making sure that their future kind of satellite systems would be set up, you know, so they're not hyper-optimized like that. So these seem trivial, but, ultimately, they all boil down to a single point of failure in systems, which can basically cause, you know, catastrophic, even existential consequences if it hits critical infrastructure or power or communications or health care. So it's all kind of linked to that. And what we're not trying to do is control the world, but think about how to prepare for it.

I think I feel like that's almost doubly important for, like, our public services and our institutions, that those things do have a certain amount of slack. And, obviously, there's a certain amount of lack of slack here in the US... not in the US, in the UK at the moment, with our health system. But Mhmm. I don't have enough information to really go into that, personally. It's just one of those recurring news items for us anyway.

I like, from your Thriving on Disruption, the idea you had. I almost don't know if it was a passing comment, but it was this idea of a chief existential officer. I like that. It sounds quite tongue in cheek, but is that a situation where there is actually someone kind of doing this work in an organization, and that's their job essentially? Or is it that they are just sitting there worrying about things, tearing their hair out? You know?

Yeah. That's what it sounds like. No. Listen. It's a, you know, there's an element of tongue in cheek where we kind of coined the term for a role we envisage.

We're not aware that it exists, and we don't have it in a formal way, although we believe we do that as our DNA and as a matter of course. But the role is indeed chief existential officer, or kind of CEO too. And while part of it is tongue in cheek, a lot of it is also a realization. What you have is an increasing number of low probability but very high consequence risks and opportunities in organizations or cities. And, incidentally, you mentioned tearing your hair out.

Yes. A lot of it is indeed risks and things that are kind of untoward, but it works both ways. Sorry, disruption and change and these existential possibilities can also be dual, and be opportunities. So if you consider something that's a low probability but very high consequence, it can also be a very high opportunity.

You know? So new developments, new inventions, curing certain diseases, opportunities in climate. There are trillions of dollars of value that will be generated. So it kind of works both ways. It's not just a risk management role, and it comes back to this nonlinearity and the fact that it's systemic disruption.

So think of it as kind of self-reinforcing. We see the chief risk officer as someone who focuses on conventional regulatory compliance and technology risks, and, like it or not, most of them do it in a linear way. So if the probability of something is low, they'll be happy with that. But what if the probability is low, but because of the nonlinear world and possible outcomes and self-reinforcing events, that low probability wipes out your entire country, or your city (New York, if you're not doing certain things for rising water), or your company, if you're a company. That is existential at that level. Or you're not doing, you know, the right kind of skilling and education for yourself to be future savvy and relevant in the future.

That is existential at a personal level. And if it's certain other things such as biodiversity, climate, pandemics, cyber, or disinformation, it can even be existential at a country level, or even for humanity, conceptually. You know? So the idea is really to accept and understand that there are these extreme risks and that they do filter through also to organizations, you know, cyber and AI and other things. And so it's thinking about both the opportunities and the risks, thinking about how you can't isolate yourself from them.

And, basically, these are outsized opportunities which often accompany outsized risks. So they're not necessarily disconnected from the duality of that, and they tend to be, you know, systemic. So this is why and how we kind of think about that, and it's really, again, thinking about not just probabilities or likelihood or the usual kind of more conventional risk analysis, but thinking about it more systemically. There are two quotes I'm just gonna share with you, because we find them just amazing, and we put them in our book to contextualize the chief existential officer. The first one is Bruce Sterling, who kind of says, you know, when you can't imagine how things are going to change, that doesn't mean that nothing's going to change.

It means that things will change in ways that are unimaginable. And you see it constantly in the press and everywhere: unthinkable, unimaginable, never happened before. Well, maybe for some of those there were signals that we didn't pick up, or that we picked up but decided to do nothing about, pandemics or other things. And so maybe we just either didn't have the imagination or chose not to imagine them, and that's one thing. And the other quote is from, you know, the American humorist Kurt Vonnegut, and it's phenomenal.

And I just wanna kind of say this slowly, because for me this sums it all up compared with, you know, what you learn in conventional strategy or business school and that: we'll go down in history as the first society that wouldn't save itself because it wasn't cost effective. So the idea of the chief existential officer is, you know, this idea of saving yourself or creating exceptional value, hopefully sustainable value. Whereas if you just go through the conventional strategic planning and risk assessments and all that, you might find that certain things are not cost effective or not risky or not whatever. And then, effectively, we're not gonna save ourselves. You know?

So it seems slightly tongue in cheek, but, actually, we are quite serious about the ramifications. And I think organizations, society, and the world are toying with some of the risks of not thinking like a chief existential officer. I feel like it's an apt quote for our environmental situation for sure as well, you know? It seems like a lot of us are doing business as usual, which is probably not super useful for our current situation. And I know that you obviously feel strongly about the environmental situation.

So I was wondering if you could just give me a a a brief overview of that feeling. And then I was just wondering then the kind of flavor of the day is the AI situations that we can go into, maybe what you feel the existential risk of the the current AI debate is about, because it's kind of it feels like it's just raging at the moment. On my, on my Twitter and LinkedIn, you know, it's just exploding. So I don't know how you feel about those things. Yeah.

So I'll accept to treat them kind of in a similar way and link to the chief existential officer, because you'll see why in a minute. But we do believe that AI is an existential consideration, not necessarily for the same reasons and definitions as a lot of the kind of noise, but we'll come to that. So starting with climate, maybe just kind of stating the obvious: like everybody, or like many people, should I say, unfortunately, I'm, you know, very concerned about it. And in my more recent life, I've kind of tried to convert the concern into how, at my level, can I be helpful to contribute to agency and to outcomes, however little they might be? And a few observations maybe.

And then this happened, I wouldn't say, by coincidence. You know? I tried to be a kind of thoughtful citizen and and that. But it also happened because of how linked it is to the topics we've been talking about. If you think about, well, existential risk for 1, if you think about systemic change, if you think about, you know, exponentiality and complexity, etcetera.

And so one or two of the things maybe just to share in terms of concretely how we think about climate. And, actually, we're, you know, we're soon launching our own sustainability and climate academy to kind of help build knowledge and understanding of this. But one aspect is really thinking about inflection points. We often talk about inflection points in the context of climate as tipping points which are irreversible for things which are untoward, and that's a very important and key definition. And for those who follow, or are even just reading the media, I mean, it's impossible not to be aware of how many tipping points, unfortunately, are being crossed on the negative side.

But the idea of an inflection point is kind of neutral, or dual, and can actually be positive. So we try and think about what are the ingredients which can allow virtuous inflection points, and that's when you see alignment between legislation, education, incentives, governance, and regulation. So take, for instance, electric vehicles. And I understand there are a lot of negative externalities.

But so far today, it seems like moving towards addressing some of the externalities and having non-gasoline cars is probably something favorable. And in doing so: if you have certain countries that are putting dates in terms of, you know, banning gasoline cars; if you're having certain incentives, tax incentives or subsidies, to help the consumers so that they shouldn't be the ones footing the sacrifice; if you have certain disclosure and regulatory reporting that forces companies to disclose what they're doing in certain ways; if you have a certain maturity of the sector, like now what's happening with the infrastructure in the US, where you have, you know, Tesla and General Motors and Ford becoming more consistent, and we're moving towards better kind of infrastructure, plus the improvements, hopefully, in battery life and, hopefully, in some of the negative externalities. Once you put all that together, you can move to a virtuous inflection point.

If you kind of do the same around some of the food issues or water or construction, you're suddenly hitting a big proportion of the world carbon emissions. And so one of the things is understanding how do you get to virtuous inflection points. And, basically, it's the same it's the difference between innovation and kind of, you know, shiny next gadget from Silicon Valley or whatever, which is, you know, clearly not effective, versus transformative innovation and systemic change. And that's where you need to think that in complex systems, you have levers for change that don't have an equal ranking. So the the biggest lever to make effective change and transformational change in complex systems is through education.

So changing the mindsets, the assumptions we make, the education. Education in the broader sense, including for leadership teams. And then the structures and incentives are important, but not as effective. But, effectively, these are not either/or questions. You need to intervene at the different leverage points with an understanding that some actually have a greater weight than others.

And so, you know, to kind of cut the discussion around climate, this is kind of how we contribute. I'm not a climate scientist. I'm not, you know, building the next crazy invention or whatever, but I'm kind of trying to be more conscious, within the realm of what we focus on and the following we have in terms of an education platform, of how we can support the energy transition and the understanding of these things. The second aspect of your question, which I'm putting in the same bundle because, you will see why now, we do consider it to be an existential risk, is how we think about AI from an existential perspective. We've built typologies of all the references of who's saying what and that.

We put them in three buckets: you know, kind of dystopian AI for the dystopians, pragmatic AI for the pragmatic AI perspectives, and then utopian AI, we call it, for those who are utopian. And we see also some that are more liminal, shifting and metamorphosing between categories, whether it's tactical or not. And we understand the debate as to, you know, with all the externalities of AI and other urgent questions that are concrete and today and affecting a lot of people, why are you putting in an existential bucket something that's so far away, so far fetched, so uncertain? So I'm gonna sort of spend a few minutes on this because I think it's potentially interesting for your listeners, and it's the question you're asking me. So I think the first thing is to understand what we mean by existential.

And we understand existential through existentialism. The roots of existentialism are philosophical. It's the work of Kierkegaard, of Nietzsche, of Husserl, of Heidegger, of people like Sartre, around human existence, agency, and choice. And, really, for me, that is a fundamental aspect: where we are today, standing on the edge of our free will and thinking about the fundamental concept of choice. And the thing about computational, you know, rational technology is that it's no longer neutral, because it drives away contingency for certain circumstances of life.

And for those who are interested, you know, Martin Heidegger was writing about this, I don't know, 70 or so years ago, I think in the 1950s. I forget exactly, but pretty much, you know, mid 20th century. And this is not a new topic, and the idea is that our freedom as individuals is determined by our own choices and actions. And if technology is determining outcomes on our behalf, is it curtailing the choice of different outcomes?

Are we therefore having decision making that's beyond our control? And I'm not only talking about the important topic around, you know, algorithms deciding your mortgage. I'm talking about being deskilled because we're no longer deciding, no longer actually able to make sense of our complex world and make decisions in it. So that's the first point, and that's why we called our practice Techistential many years ago, before this debate. And we actually have what we call Techistentialism, as in existentialism 2.0: thinking that in the 21st century, you can't look at the human condition separate from our technological environment. And therefore, we have lost exclusivity on human decision making.

Humans have lost exclusivity on that. So the point I'm trying to make here, and, again, this is subjective and people are very welcome to disagree, is that we don't think existential is only an extinction risk. Existential risk does not necessarily mean the same thing as existential catastrophe, and does not necessarily mean the same as extinction. And if you look at the definition by Nick Bostrom, for instance, you know, he'll say an existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development. And don't get me wrong.

Again, a lot of this is neutral. We believe in the amazing things that happen with computers and technology and science and AI: drug discovery, support in terms of helping with climate and that. But we can't ignore that machines are continuously learning and increasingly performing higher human functions. To cut to the chase, my concern and the existential risk for us is not the evolution of machines.

I don't believe there'll be robo killers. I don't believe that it's imminent that we'll necessarily have artificial general intelligence. I'd almost say that isn't the question. The question for me is, what is humanity doing with its educational systems, with its governance structures, with its incentive structures, to stay relevant in this technological world? And so it's that agency to make sure that humans stay relevant, and that's what we define as our triple A.

To stay relevant, we believe, in today's world where machines are learning fast, to understand and make sense of complex and unpredictable worlds where things are nonlinear, we need to be more anticipatory. We need to have more agility in how we think about impacts and time frames, and we need to have antifragile foundations. And if we don't, with machines learning fast, we will no longer be able to make sense of our complex world. We will be losing the position we have in terms of dominance and agency in driving decision making in complex environments. We'll no longer know how to do that.

We'll be forced to have delegated authority and delegated decision making to machines. And that, for me, is an existential risk, because there's not just deskilling, there's the potential for mass unemployment. There's the question of what humans do in that context of lacking understanding and the ability to make decisions. So long story short, that is how we believe it's an existential risk. Concretely, that existential risk is materializing, in our humble opinion, today, with very tangible and concrete things.

And, fortunately, there are very tangible remedies that can be addressed if you change the educational system, the governance structures, and the incentive structures. The challenge and our concern is not so much what machines are doing, but the lack of agency and the lack of doing what humanity should be doing to ensure that humans stay relevant in this technological world. So is it just because you're saying that it's it's really the the agency piece, the autonomy that humans have that we are, offloading? We're giving the machines or enabling the machines to have some, capability to make decisions for us. And then we are also then offloading those decisions.

And and actually that we should be more mindful, in what kinds of decisions that we are delegating, essentially, and also decide that we are changing how we operate to to meet that, new capability almost. If I was a science fiction writer, I'd probably describe environments where those things were good and bad. Right? So you could you could probably project forward and go, well, actually, if we delegate all our decisions, that you'll be a literal nanny state where we just don't have to do anything really, and we can just exist. And the machines or the AIs or, whatever, can provide for us if if that came to the the extreme example.

And and I guess you're saying that is not what you want. Like, that's that's not where we're going, and that's not what we want to get to. We have, we have some decisions that we wanna hold on to, essentially. Is that is that kind of your feeling? So there's an element of that, and then there's thinking about the next order implications of what that means.

So the next order implications and thinking about the multiple world of possibilities is that, yes, that is a possible outcome, and one could argue that those advanced AI systems maybe could do a better job for the environment, for humanity, for equity, for social justice, for provisioning and the sustainability of the world and humans. So that's that's very possible. The the challenge is, do we assume that's the case and and rely on that? Mhmm. Mhmm.

And what are the implications of that, knowing that even if that were a likely outcome, or an outcome that materializes, if things go wrong, if the machines themselves are too complex and become unreliable... we appreciate more and more that machine learning, and it could be the case tomorrow, as soon as today, that some of these technologies are incomprehensible and complex and suffer the same features of, you know, superintelligence meets super stupidity, or accidents, or what have you. So I don't have a problem with that scenario, given the lobbying and corruption and short-sightedness of much of the way the world is run. The question is whether you can rely on that outcome happening, and if it does happen, whether you can rely on that outcome remaining like that. The second thing in terms of next order implications is not so much, oh, we've delegated a few decisions, and so what, etcetera. It's the implications of that if you think about next order implications.

What, in the current configuration of society, remains sustainable for us as humans? It's, you know, what is the substitute then for work and means if we are being automated out of some of the more automatable tasks by narrow artificial intelligence, and we are also not finding alternatives where human decision making has the edge, and we may no longer be able to be as creative or make sense of the world, etcetera? Are we increasing the chances of being able to make sense, operate in, make decisions, find worth, value, and economic sustainability in the world? Or might an outcome be that there's mass unemployment, and that that transition to AI systems becoming, you know, virtuous either takes some time with a lot of damage on the way, or doesn't materialize like that? Then there's the irreversibility: if AI and machines reach a certain point, and if humanity reaches a certain point of relevance, irrelevance, capabilities, or what have you, how can you reverse that if one chooses to? So the impact, you know, is like CRISPR and chromosomes and DNA.

I have no problem with some of the innovations, and if you can cure disease and that, I'm just mindful that some of these might perpetuate themselves and be irreversible for all the future generations if you change the DNA of humanity. So it's that idea of of irreversibility, which I I wouldn't ignore. So once you start putting all this together, the ramifications are are quite broad, and they're going beyond just, you know, the relying on that singular potential favorable or neutral outcome. It's it has irreversible outcomes. It has unpredictability.

And then the outcome you've described is one of the outcomes. That's where maybe you might have others, whether it's by accident, by design, or by self-awareness and consciousness, where those advanced AI systems choose another outcome than looking after humans in a satisfactory way. So you're kind of opening the door to many things, some of which are irreversible, and some of which today do look like a material degradation of human existence. Yeah. And I guess, by that last sentence, I'm guessing that you would say that's not an ideal situation.

Well, I think that losing agency and moving towards something irreversible, which has a number of configurations which are absolutely existential in the sense that's used all the time, including extinction and catastrophe. Yep. And where you're losing agency to alter it or for decision making, because as machines continue to learn, and if humans become kind of less so, the relative difference increases, and that reduces the agency I believe we currently have. So, you know, it's like the Collingridge dilemma. You know, the earlier you impact and influence an emerging technology, the less you understand the data and where it's going, but the more control you have over it.

The longer you wait, the more data you have, the more you see where it's going, but the less influence you can have on the outcomes. So the quandary of uncertainty and timing is an important one. Also, if you consider that we live in a world where, you know, there are these existential risks I'm talking about more broadly, where there are these small events that have more significant outcomes, some of which can be catastrophic, and you're kind of pushing all those aside and losing the agency over how you might address them. I do believe that that's taking maybe an unnecessary kind of risk, and of an existential nature, even today. Yep.

Yeah. And I think there's probably, like, a whole huge truckload of cultural artifacts that says we should definitely not do that, as opposed to maybe a single small shelf full of things that say we should. So I think our imagination has already gone there and decided that's probably not an especially useful end point, regardless of, like you said, the deviations on the road to that particular place. But, Ben, just on that, if I may say, just for the avoidance of doubt, because it's an interesting point.

I'm not sure these are either/or decisions. In other words, I'm not necessarily saying you have to rein in or cancel or stop AI. I actually think that even if we tried to, we wouldn't be able to. I believe there are certain dynamics of change around technology and AI which have a velocity and trajectory, some of which is already not fully within our control, because these are themselves complex systems which are not fully controllable. What I am saying is that we are not having, in the debate, in my humble opinion, enough voice and noise and initiatives and agency around what humans need to do to stay relevant and to cohabitate with that evolution which is taking place.

So it's not to say that the debates around ethics and privacy and IP and copyright aren't essential. They are essential, but these are not either/or decisions. What I'm not seeing is which countries, except for Finland and Israel or one or two others, are changing their educational systems to make humans relevant and better able to make decisions and appreciate complexity. So this goes beyond whatever you impose on Google or whoever. It really is fundamental: what are the levers for change in complex systems?

It starts with education. And so unless humans are upgrading themselves, and I'm not using that in a transhuman sense, but really just a better ability for critical thinking and all the things we know about in terms of the skills we need for our complex 21st century. Unless we're actually changing the educational system for that, it stays knowledge driven. It's still preparing you for jobs which we assume will exist, etcetera, etcetera. And that is where I don't understand the debate, because I don't understand why more isn't being said and done about what humans are doing to increase the chances of staying relevant.

And so that's where, I mean, when I say I don't understand, yes, I understand that it's complicated to change the education system. I understand that a lot of people are incentivized not to want to change it. But ultimately, that is, for me, the existential risk we're taking.

Yep. And do you have a sense of what the key things are that you would incorporate in the education system? Yeah. So it's everything that moves away from just relying on knowledge and the assumption of specific jobs. Right?

So it's critical thinking. It's first principles, you know, beginner's mind. It's an appreciation and an understanding of uncertainty, that the world is uncertain, so you're not frazzled because something changes. In certain Eastern philosophies, change is constant. You know?

We understand that we're transient. It's trial and error, an appreciation of exploration. It's appreciating that failure goes hand in hand with innovation. It's the distinction between innovation and invention, what's incremental versus what's really, you know, first principles.

It's creativity. It's being very fluent in the language of data and information. It's a language, you know? That would mean we're better prepared for disinformation and for other manipulations, because we understand and we speak that language of data.

It's our relationships between humans. It's debate. It's acceptance of different viewpoints. It's how we cohabitate with machines and interact with them, appreciating that relationship and what human strengths are. It's the development of agency.

So it's all these things, which we capture as a triple A: to be more anticipatory, to have that agility, and to have antifragile foundations, because the premise should be that our world is unpredictable, uncontrollable, nonlinear. If you think about the educational systems, they're preparing you to better answer known questions which have known answers, as opposed to preparing you to ask questions that don't yet exist, that we haven't thought about, and to invent something that goes with that. Mhmm. All of our complex challenges, whether it's societal, whether it's climate, whether it's polarization or geopolitics, all of these are complex challenges.

It's not to say science isn't important, but, you know, that is following its course, and AI is good there. So AI will continue to develop expertise for things that are more reliant on expertise, that are known unknowns, that are predictable. What we need to do is develop a better ability to problem-solve humanity out of these complex challenges and problems. And so all of the above could be what education brings you. Education in the broader sense: learning, unlearning, and relearning through life.

You tell me, but I think, with a few exceptions like Finland and Israel and others, that's not what the educational systems are doing. Take some of the best mathematics teaching and education in the world, like Singapore. They're still relying on that slightly more predictable, comprehensible, cause-and-effect world. As a parent, I think my job at the moment is, you know, to look after them, obviously, but also to prepare them to think. Right?

To boil it down, and it's a massive simplification, but to be inquisitive, to be able to think through problems, and to be savvy, you know, to know what the tools are that they can use to accomplish some sort of problem solving. And it almost doesn't matter what that thing is or entails. Correct. Yeah.

I feel like that's my new job. So I've got my normal job, and I've got my parent job on top as well. Hopefully, I'll do a good job. We'll see, if we're still doing the podcast in, what?

They're 6 and 2 at the moment, so it might take a bit of a while. I fear we're getting towards the end, although there's so much more that could be said. Before I ask the last question, which I think we've actually talked about quite a bit, but where I'd maybe like a more personal answer. Mhmm. Before then, I just wanted to ask quickly: you specify these camps.

So you have the utopian, the dystopian, and the pragmatic camps. Do you situate yourself in any of those camps, or are you kind of staying out of it altogether? No. I'm happy to put myself in the dystopian camp, though not because I have a dystopian view of the world.

Actually, our work, Thriving on Disruption, and our work as the institute, is really to show the duality of change and disruption and to show the agency we have to change it. But the reason I've comfortably put myself in the dystopian camp is to acknowledge the existential risk and to hopefully trigger and force ourselves to have the agency to effect change. And I don't think the debates are fair. I understand the vantage points across the spectrum of debates, and a lot of them are motivated for obvious reasons, but some of them are in good faith. But I do believe that discarding and dismissing the existential nature of some of the next-order implications, if we continue with the status quo, is dystopian. We just don't mean dystopian in terms of robo-killers or superintelligence.

We mean it as making sure humanity doesn't make itself superfluous, to be blunt. So I guess that carries into my final question, which you've probably already answered, but: what scares you and what excites you about this kind of technologically mediated future? Yeah. It's interesting to put the two aspects of the question together because, funnily enough, it's very much the same answer, because we see the duality of much of the world, including technology, including AI. We accept the tensions, the contradictions, and the paradoxes of situations.

We don't need a kind of binary thinking, meaning that the things that excite me are the possibilities. There's no doubt that technology is very powerful. There's no doubt that AI can be very powerful. There's no doubt that it can be used, and that it is already used in many instances, in a very virtuous way.

It's saving lives. It's allowing mobility and vision and things which would be unimaginable without it. It's discovering drugs. Literally, you know, AI has driven drug discovery, as of course you'll know, with things being rolled out, etcetera. So the virtues are phenomenal.

The possibilities are phenomenal, and that's exciting. But, fundamentally, it's the agency of humanity that's exciting, in terms of how we develop technology, how we might be able to use it, and how we might be able to work with it. What scares me is indeed, as you say, things we've touched upon: we can't ignore the trajectory and velocity of machines learning fast and what that means. And we can't ignore the way our current educational systems, government structures, and incentives are driving the outcomes of the world while relying on a stable, predictable, and linear world, which is not the one we live in. If we add to that some of the drivers of change which are self-reinforcing and which sit on the more dangerous side of the spectrum, you're moving towards a humanity which is potentially less and less able to keep the clock from midnight, if you take, you know, the Doomsday Clock kept by our friends at the Bulletin of the Atomic Scientists, and it's currently very close to midnight.

And, you know, what scares me is how little it can take, especially in unpredictable, complex, nonlinear environments, for anything to overflow. And I'm including society in that, even the US, you know, the difference between the US and Venezuela, I don't know how big it is, or the UK or anywhere, or nuclear bombs or other events. So really, it's that duality, the paradoxes, the tensions, and all that.

And the starting point is the awareness of some of the things we're doing; then we need to have the agency to do something about it. But one of the reasons we're writing what we're writing and doing the programs we're doing, and why I got interested in these topics, is also because I feel there's not sufficient awareness around these topics, whether among decision makers or simply the 8 billion people on earth. And if you're not aware of them, you're that much less likely to do something about it. Roger, thank you very much for your time. How do people contact you, follow you, and give you money, all that sort of thing?

Now that's amazing. And listen, thanks so much for such a genuinely rich two-way exchange. I mean, what you're doing, the focus area you have, I know you're not a fashion victim. You've been doing this for a very long time, but it so happens that it's extremely important today. So thank you for doing that and for having me on.

Listen, reaching out is quite simple. Disruptive Futures Institute is the content and education platform. We're trying to give away a lot of information for free as well, to help raise awareness on social media. So just Google that, and you'll get all the social media links.

I'm on LinkedIn as well, quite active. Follow the Disruptive Futures Institute. Follow me as Roger Spitz. Connect. DM me.

So please, just keep an eye out. We have a lot of things planned, in the pipeline, and we're moving from just enterprise and corporates to the general public on that. So stay tuned. Sweet. Thanks very much for your time.

My pleasure. Hi, and welcome to the end of the podcast. Thanks again to Roger. I really liked his ideas around this chief existential officer. I know it's still a bit tongue in cheek, but it's this idea that maybe a CEO or another person takes on this new role of kind of futurising, or making the company or organisation appropriately structured to deal with future disruption.

Another idea which keeps cropping up is our education system, or education systems around the world: whether they are appropriate, and whether they're changing in a way that makes sense. And another idea I think we're all grappling with at the moment is how much authority, how much decision making or agency should we give up? How much of this is a good idea? And how much do we maybe want to hold on to?

You know, are there certain places where AI could work, but, you know, maybe we just don't need to use it, it's just inappropriate, that sort of thing? It would be nice to have more of a concrete answer there, or a list of things. So if anyone has any ideas about that, do get in contact. Again, if you want to support the podcast, you can go to patreon.com/machineethics.

Though we're not sponsored this episode, I just wanted to highlight that we've got lots of talks and workshops happening, involving myself and various other partners. If you and your organisation are looking to get up to date with AI generally, AI ethics, and the process of AI ethics, let's say how to formulate that in your own company and incorporate it into your own business practice, or indeed get someone in to do some consultation, then please go to ethicalby.design, where you can find some of our services and inquire there too. Thanks again, and see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford