Pages tagged: Academic

Technology narratives in culture with Sam Kinsley

Sam Kinsley is a Lecturer in Human Geography at the University of Exeter. His teaching, research and associated writing examine the politics, experiences and spatial imaginations of technology. Recently Sam's work has focussed on how knowledge claims are made about the world through social media data and the ways stories about automation have become normalised in the popular imagination. In the longer term, Sam’s research has been concerned with the ways ideas about particular kinds of future are produced and performed through technology research and development practices. As part of this research, he has undertaken field work in Silicon Valley and was embedded within the Pervasive Media Studio in Bristol.


AI as slaves with Joanna J Bryson

Dr Joanna J Bryson is a Reader at the University of Bath and an Affiliate of the Center for Information Technology Policy at Princeton University. Her research spans artificial and natural intelligence; cognition, culture and society; and AI ethics, safety and policy.


Retrospective cast 1


Robot transparency with Rob Wortham

Rob Wortham is currently undertaking a Computer Science PhD at the University of Bath, researching autonomous robotics with a focus on domestic applications and ethical considerations. His work asks: how does human natural intelligence (NI) interact with AI, and how do we make the behaviour of these systems more understandable? What are the risks and benefits of AI, and how can we maximise the benefit to society whilst minimising the risks? He is interested in real-world AI for real-world problems.

He was previously Founder and CFO of RWA Ltd, a major international company developing IT systems for the leisure travel industry.


Robotics and autonomy with Alan Winfield

Alan Winfield is Professor of Robot Ethics at UWE Bristol: an engineer, roboethicist and pro-feminist, interested in robots as working models of life, evolution, intelligence and culture.

Links:

Alan's blog
EPSRC Principles of Robotics
Robotics: A Very Short Introduction


Machine Ethics with Susan and Michael Anderson

Michael Anderson, professor emeritus of computer science at the University of Hartford, earned his Ph.D. in computer science and engineering at the University of Connecticut. Susan Leigh Anderson, professor emerita of philosophy at the University of Connecticut, earned her Ph.D. in philosophy at the University of California, Los Angeles. They have been instrumental in establishing machine ethics as a bona fide field of study, co-chairing the AAAI Fall 2005 Symposium on Machine Ethics, co-editing an IEEE Intelligent Systems special issue on machine ethics, and co-authoring an invited article on the topic for Artificial Intelligence Magazine. Their research in machine ethics was also selected as an emerging application at the Innovative Applications of Artificial Intelligence conference in 2006. An invited article on their research in Scientific American (Oct. 2010) debuted the first robot whose behavior is guided by an ethical principle. They published "Machine Ethics" with Cambridge University Press in 2011.


The meaning of life with Luciano Floridi

Luciano Floridi is Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the Oxford Internet Institute, University of Oxford. He is also a Professorial Fellow of Exeter College, Oxford, and a Turing Fellow and Chair of the Data Ethics Group at The Alan Turing Institute. The philosophy and ethics of information have long been the focus of his research and are the subject of his numerous publications, including The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Oxford University Press, 2014), winner of the J. Ong Award.


Evolution and AI with Tim Taylor

Tim works in academic research and commercial development of Artificial Life (ALife) and Artificial Intelligence (AI) technologies, with a particular interest in the foundational issues of true autonomy and open-ended creative evolution. He is also interested in the historical development of these ideas, and has recently written a book on the (very) early history of the idea of self-reproducing and evolving machines ("The Spectre of Self-Reproducing Machines: An Early History of Evolving Robots", currently under review with a publisher). He holds an MA in Natural Sciences from the University of Cambridge (specialising in Experimental Psychology), followed by an MSc (with distinction) and a PhD in Artificial Intelligence from the University of Edinburgh. He has held a wide variety of positions in academia and in tech companies, including work on evolutionary techniques in the games industry (MathEngine PLC, Oxford), postdoctoral research on swarm robotics (University of Edinburgh), and serving as co-founder and CTO of a company developing continuous learning AI systems for fund management (Timberpost). He is an elected board member of the International Society for Artificial Life and an associate examiner for the University of London Worldwide.


How to design a moral algorithm with Derek Leben

Derek Leben is Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He works at the intersection of ethics, cognitive science, and emerging technologies. In his new book, Ethics for Robots, Leben argues for the use of a particular moral framework for designing autonomous systems, based on the Contractarianism of John Rawls. He also demonstrates how this framework can be productively applied to autonomous vehicles, medical technologies, and weapons systems. Follow him on Twitter: @EthicsForRobots.


#AIRetreat

Interviewees:

Links from participants:

Find images on Twitter and Instagram using the hashtag #airetreat


Respecting data with Miranda Mowbray

Miranda Mowbray is a lecturer at the University of Bristol, where her research interests include cybersecurity and big data ethics. She was an invited speaker on AI and cybersecurity at the Global Cybersecurity Summit in 2017. She has a long-term interest in topics relevant to this podcast: her paper “Ethics for Bots” was published in 2002.

Miranda’s PhD is in Algebra, from the University of London. She is a Fellow of the British Computer Society. She spent last summer doing a research project with two Master's students on subverting the security of a swarm of a hundred small autonomous robots.


Moral reasoning with Marija Slavkovik

Marija Slavkovik is an associate professor in AI at the Department of Information Science and Media Studies at the University of Bergen in Norway. She works on collective reasoning and decision making, and is specifically interested in these types of problems in machine ethics. Machine ethics tries to answer the question of how we can program various levels of ethical behaviour into artificial agents. It is a very interesting field for both computer scientists and humanists, and she likes it because it pushes very hard reasoning problems back to the surface of AI.

Marija's background is in computational logic and control theory, and she is also interested in all aspects of automation. She mainly writes scientific articles on computational social choice and multi-agent systems. However, being based in a department that is half media studies, she is exposed to many issues around how information spreads through social networks and how it gets distorted after being spread through a network and/or aggregated. Marija is now trying to bring this problem into the Machine Ethics conversation: there is a lot of decision automation happening behind the scenes of information sharing, and we see a lot of emergent behaviour in systems of artificial agents and people, but we do not fully understand it, nor can we control it.