Pages tagged: Academic

Good tech with Eleanor Drage and Kerry McInerney

Dr Kerry McInerney (née Mackereth) is a Research Fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where she co-leads the Global Politics of AI project on how AI is impacting international relations. She is also a Research Fellow at the AI Now Institute (a leading AI policy thinktank in New York), an AHRC/BBC New Generation Thinker (2023), one of the 100 Brilliant Women in AI Ethics (2022), and one of Computing’s Rising Stars 30 (2023). Kerry is the co-editor of the collection Feminist AI: Critical Perspectives on Algorithms, Data, and Intelligent Machines (2023, Oxford University Press), the collection The Good Robot: Why Technology Needs Feminism (2024, Bloomsbury Academic), and the co-author of the forthcoming book Reprogram: Why Big Tech is Broken and How Feminism Can Fix It (2026, Princeton University Press).


Eleanor is a Senior Research Fellow at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and teaches AI professionals about AI ethics on a Master's course at Cambridge.

She specialises in using feminist ideas to make AI better and safer for everyone. She is also currently building the world's first free and open-access tool that helps companies meet the EU AI Act's obligations.

She has presented at the United Nations, The Financial Times, Google DeepMind, NatWest, the Southbank Centre, BNP Paribas, The Open Data Institute (ODI), the AI World Congress, the Institute of Science & Technology, and more. Her work on AI-powered video hiring tools and gendered representations of AI scientists in film was covered by the BBC, Forbes, the Guardian and international news outlets. She has appeared on BBC Moral Maze and BBC Radio 4 'Arts & Ideas'.

Eleanor is also the co-host of The Good Robot Podcast, where she asks key thinkers 'what is good technology?'. She also does lots of presentations for young people, and is a TikToker for Carole Cadwalladr's group of investigative journalists, 'The Citizens'.

She is also an expert on women writers of speculative and science fiction from 1666 to the present, and is the author of An Experience of the Impossible: The Planetary Humanism of European Women’s Science Fiction.

She is the co-editor of The Good Robot: Feminist Voices on the Future of Technology, and Feminist AI: Critical Perspectives on Algorithms, Data and Intelligent Machines.

She began her career in financial technology and e-commerce and co-founded a company selling Spanish ham online!


The Politics of AI with Mark Coeckelbergh

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna and the author of more than 15 books, including AI Ethics (MIT Press), The Political Philosophy of AI (Polity Press), and Introduction to Philosophy of Technology (Oxford University Press). Previously he was Vice Dean of the Faculty of Philosophy and Education, and President of the Society for Philosophy and Technology (SPT). He is also involved in policy advice; for example, he was a member of the European Commission's High-Level Expert Group on AI.


Algorithmic discrimination with Damien Williams

Damien Patrick Williams (@Wolven) researches how technologies such as algorithms, machine intelligence, and biotechnological interventions are impacted by the values, knowledge systems, philosophical explorations, social structures, and even religious beliefs of human beings. Damien is especially concerned with how the consideration and treatment of marginalized peoples will affect the creation of so-called artificially intelligent systems and other technosocial structures of human societies. More on Damien's research can be found at AFutureWorthThinkingAbout.com


Privacy and the end of the data economy with Carissa Véliz

Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI, and a Fellow at Hertford College at the University of Oxford. She works on privacy, technology, moral and political philosophy, and public policy. Véliz has published articles in media such as the Guardian, the New York Times, New Statesman, and the Independent. Her academic work has been published in The Harvard Business Review, Nature Electronics, Nature Energy, and The American Journal of Bioethics, among other journals. She is the author of Privacy Is Power (Bantam Press) and the editor of the forthcoming Oxford Handbook of Digital Ethics.


Robot Rights with David Gunkel

David J. Gunkel (PhD) is an award-winning educator, scholar and author, specializing in ethics of emerging technology. Formally educated in philosophy and media studies, his teaching and research synthesize the hype of high-technology with the rigor and insight of contemporary critical analysis. He is the author of over 80 scholarly journal articles and book chapters, has published 12 influential books, lectured and delivered award-winning papers throughout North and South America and Europe, is the managing editor and co-founder of the International Journal of Žižek Studies and co-editor of the Indiana University Press series in Digital Game Studies. He currently holds the position of Professor in the Department of Communication at Northern Illinois University (USA), and his teaching has been recognized with numerous awards, including NIU's Excellence in Undergraduate Teaching and the prestigious Presidential Teaching Professorship.

David recently wrote the book Robot Rights.


Moral Machines with Rebecca Raper

Rebecca is a PhD candidate in Machine Ethics and a consultant in Ethical AI at Oxford Brookes University's Institute for Ethical Artificial Intelligence. Her PhD research is entitled 'Autonomous Moral Artificial Intelligence', and as a consultant she specialises in developing practical approaches to embedding ethics in AI products.

Her background is primarily in philosophy. She completed her BA and then an MA in philosophy at the University of Nottingham in 2010, before working in analytics across several different industries. As an undergraduate she had a keen interest in logic, metametaphysics, and the topic of consciousness, which spurred her to return to academia in 2017 to undertake a further qualification in psychology at Sheffield Hallam University before embarking on her PhD.

She hopes to combine her diverse interests to solve the challenge of creating moral machines.

In her spare time she can be found playing computer games, running, or trying to explore the world.


Art & AI with Eva Jäger & Mercedes Bunz

Eva Jäger is Assistant Digital Curator at Serpentine Galleries London and Co-Investigator of the Creative AI Lab with Mercedes Bunz. She is also one part of Studio Legrand Jäger, a multi-disciplinary creative practice researching design and technology together with Guillemette Legrand.

Mercedes Bunz is Senior Lecturer in Digital Society at the Department of Digital Humanities, King's College London. Her latest book is an open access publication on machine communication, looking at interfaces (University of Minnesota Press/meson press, 2019), written with Finn Brunton and Paula Bialski. Her most recent journal article is ‘The calculation of meaning: on the misunderstanding of new artificial intelligence as culture’ in Culture, Theory and Critique.

The Creative AI Lab website: https://creative-ai.org/


AI alignment with Rohin Shah

Rohin is a 6th-year PhD student in Computer Science working at the Center for Human-Compatible AI (CHAI) at UC Berkeley. His general interests in CS are very broad, spanning AI, machine learning, programming languages, complexity theory, algorithms, and security, so he started his PhD working on program synthesis. However, he became convinced that it is really important for us to build safe, aligned AI, and so he moved to CHAI at the start of his 4th year. He now thinks about how to provide specifications of good behaviour in ways other than reward functions, especially ones that do not require much human effort. He is best known for the Alignment Newsletter, a weekly publication covering recent content relevant to AI alignment, which has over 1,600 subscribers.


Automation and Utopia with John Danaher

John Danaher is a Senior Lecturer in Law at the National University of Ireland (NUI) Galway, author of Automation and Utopia and coeditor of Robot Sex: Social and Ethical Implications. He has published dozens of papers on topics including the risks of advanced AI, the meaning of life and the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. His work has appeared in The Guardian, Aeon, and The Philosophers’ Magazine. He is the author of the blog Philosophical Disquisitions and hosts a podcast with the same name.


Moral reasoning with Marija Slavkovik

Marija Slavkovik is an associate professor in AI at the Department of Information Science and Media Studies at the University of Bergen in Norway. She works on collective reasoning and decision making, and is specifically interested in these types of problems in machine ethics. Machine ethics essentially tries to answer the question of how we program various levels of ethical behaviour into artificial agents. It is a very interesting field for both computer scientists and humanists, and she likes it because it pushes very hard reasoning problems back to the surface of AI.

Marija's background is in computational logic and control theory, and she is also interested in all aspects of automation. She mainly writes scientific articles on computational social choice and multi-agent systems. However, being in a department that is half media studies, she is exposed to many issues around how information spreads in social networks and how it gets distorted after being spread through a network and/or aggregated. Marija is now trying to bring this problem into the machine ethics conversation: there is a lot of decision automation happening behind the scenes of information sharing, and we see a lot of emergent behaviour in systems of artificial agents and people, but we do not fully understand it, nor can we control it.


Respecting data with Miranda Mowbray

Miranda Mowbray is a lecturer at the University of Bristol, where her research interests include cyber security and big data ethics. She was an invited speaker on AI and cybersecurity at the Global Cybersecurity Summit in 2017. She has a long-term interest in topics relevant to this podcast: her paper “Ethics for Bots” was published in 2002.

Miranda’s PhD is in algebra, from the University of London. She is a Fellow of the British Computer Society. She spent last summer doing a research project with two Master's students on subverting the security of a swarm of a hundred small autonomous robots.


#AIRetreat

Interviewees:

Links from participants:

Find images on Twitter #airetreat or Instagram #airetreat


How to design a moral algorithm with Derek Leben

Derek Leben is Associate Professor of Philosophy at the University of Pittsburgh, Johnstown. He works at the intersection of ethics, cognitive science, and emerging technologies. In his new book, Ethics for Robots, Leben argues for the use of a particular moral framework for designing autonomous systems based on the Contractarianism of John Rawls. He also demonstrates how this framework can be productively applied to autonomous vehicles, medical technologies, and weapons systems. Follow on Twitter: @EthicsForRobots.
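As a rough illustration of the kind of maximin reasoning associated with a Rawlsian, contractarian approach, here is a minimal sketch (for intuition only, not Leben's own formulation; the action names and welfare numbers are invented):

```python
# Minimal sketch of a maximin decision rule ("choose the action whose worst-off
# affected party fares best"), in the spirit of a Rawlsian/contractarian framework.
# Actions, parties, and welfare scores below are purely illustrative assumptions.

def maximin_choice(actions):
    """Return the action that maximises the welfare of the worst-off affected party."""
    return max(actions, key=lambda a: min(actions[a].values()))

# Hypothetical manoeuvres for an autonomous vehicle, each scored by the
# (made-up) welfare outcome for every affected party.
actions = {
    "swerve": {"passenger": -2, "pedestrian": 0},
    "brake":  {"passenger": -1, "pedestrian": -1},
}

print(maximin_choice(actions))  # -> "brake": its worst outcome (-1) beats swerve's (-2)
```

The design choice illustrated here is simply that the agent ranks actions by their worst-case outcome for any affected party, rather than by total or average welfare.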


Evolution and AI with Tim Taylor

Tim works in academic research and commercial development of Artificial Life (ALife) and Artificial Intelligence (AI) technologies, with a particular interest in the foundational issues of true autonomy and open-ended creative evolution. He is also interested in the historical development of these ideas, and has recently written a book on the (very) early history of the idea of self-reproducing and evolving machines ("The Spectre of Self-Reproducing Machines: An Early History of Evolving Robots", currently under review with a publisher). He holds an MA in Natural Sciences from the University of Cambridge (specialising in Experimental Psychology), followed by an MSc (with distinction) and a PhD in Artificial Intelligence from the University of Edinburgh. He has held a wide variety of positions in academia and in tech companies, including work on evolutionary techniques in the games industry (MathEngine PLC, Oxford), postdoctoral research on swarm robotics (University of Edinburgh), and co-founding and serving as CTO of a company developing continuous learning AI systems for fund management (Timberpost). He is an elected board member of the International Society for Artificial Life and an associate examiner for the University of London Worldwide.


The meaning of life with Luciano Floridi

Luciano Floridi is Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab, Oxford Internet Institute, University of Oxford. He is also Professorial Fellow of Exeter College, Oxford and Turing Fellow and Chair of the Data Ethics Group of The Alan Turing Institute. The philosophy and ethics of information have been the focus of his research for a long time, and are the subject of his numerous publications, including The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Oxford University Press, 2014), winner of the J. Ong Award.


Machine Ethics with Susan and Michael Anderson

Michael Anderson, professor emeritus of computer science at the University of Hartford, earned his Ph.D. in computer science and engineering at the University of Connecticut. Susan Leigh Anderson, professor emerita of philosophy at the University of Connecticut, earned her Ph.D. in philosophy at the University of California, Los Angeles. They have been instrumental in establishing machine ethics as a bona fide field of study, co-chairing/authoring the AAAI Fall 2005 Symposium on Machine Ethics, an IEEE Intelligent Systems special issue on machine ethics, and an invited article for Artificial Intelligence Magazine on the topic. Further, their research in machine ethics was selected for Innovative Applications of Artificial Intelligence as an emerging application in 2006. Scientific American (Oct. 2010) featured an invited article on their research in which the first robot whose behavior is guided by an ethical principle was debuted. They published "Machine Ethics" with Cambridge University Press (2011).


Robotics and autonomy with Alan Winfield

Alan Winfield is Professor of Robot Ethics at UWE Bristol: an engineer, roboethicist and pro-feminist. He is interested in robots as working models of life, evolution, intelligence and culture.

Links:

Alan's blog
EPSRC principles of robotics
Robotics: A Very Short Introduction


Robot transparency with Rob Wortham

Rob Wortham is currently undertaking a Computer Science PhD at the University of Bath, researching autonomous robotics with a focus on domestic applications and ethical considerations. How does human natural intelligence (NI) interact with AI, and how do we make the behaviour of these systems more understandable? What are the risks and benefits of AI, and how can we maximise the benefit to society whilst minimising the risks? He is interested in real-world AI for real-world problems.

He was previously Founder and CFO of RWA Ltd, a major international company developing IT systems for the leisure travel industry.


Retrospective cast 1


AI as slaves with Joanna J Bryson

Dr Joanna J Bryson is a Reader at the University of Bath and an Affiliate of the Center for Information Technology Policy at Princeton University. Her research interests include Artificial & Natural Intelligence; Cognition, Culture, & Society; and AI Ethics, Safety, & Policy.


Technology narratives in culture with Sam Kinsley

Sam Kinsley is a Lecturer in Human Geography at the University of Exeter. His teaching, research and associated writing examine the politics, experiences and spatial imaginations of technology. Recently Sam's work has focussed on how knowledge claims are made about the world through social media data and the ways stories about automation have become normalised in the popular imagination. In the longer term, Sam’s research has been concerned with the ways ideas about particular kinds of future are produced and performed through technology research and development practices. As part of this research, he has undertaken field work in Silicon Valley and was embedded within the Pervasive Media Studio in Bristol.