51. AGI Safety and Alignment with Robert Miles

In this episode we chat with Robert Miles about why we even want artificial general intelligence, general AI as narrow AI whose input is the world, and when predictions of AI sound like science fiction. We cover terms like AI safety, the control problem, AI alignment, and the specification problem; the lack of people working in AI alignment; why AGI doesn't need to be conscious; and more.
Date: 13th of January 2021
Podcast authors: Ben Byford with Robert Miles
Audio duration: 56:04
Website plays & downloads: 24
Tags: youtube, Communicator, AI safety, Alignment, Consciousness, AGI
Rob Miles is a science communicator focused on AI Safety and Alignment. He has a YouTube channel called Rob Miles AI, and runs The Alignment Newsletter Podcast, which presents summaries of the week's research. He also collaborates with research organisations like the Machine Intelligence Research Institute, the Future of Humanity Institute, and the Centre for the Study of Existential Risk, to help them communicate their work.
No transcript currently available for this episode.
Previous podcast: Privacy and the end of the data economy with Carissa Veliz