
Machine Ethics with Susan and Michael Anderson

Michael Anderson, professor emeritus of computer science at the University of Hartford, earned his Ph.D. in computer science and engineering at the University of Connecticut. Susan Leigh Anderson, professor emerita of philosophy at the University of Connecticut, earned her Ph.D. in philosophy at the University of California, Los Angeles. They have been instrumental in establishing machine ethics as a bona fide field of study: they co-chaired the AAAI Fall 2005 Symposium on Machine Ethics, co-edited an IEEE Intelligent Systems special issue on machine ethics, and co-authored an invited article on the topic for AI Magazine. Their research in machine ethics was also selected as an emerging application at the Innovative Applications of Artificial Intelligence conference in 2006. Scientific American (Oct. 2010) featured an invited article on their research, debuting the first robot whose behavior is guided by an ethical principle. They published "Machine Ethics" with Cambridge University Press (2011).


How to design a moral algorithm with Derek Leben

Derek Leben is Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He works at the intersection of ethics, cognitive science, and emerging technologies. In his new book, Ethics for Robots, Leben argues for a particular moral framework for designing autonomous systems, based on the Contractarianism of John Rawls. He also demonstrates how this framework can be productively applied to autonomous vehicles, medical technologies, and weapons systems. Follow on Twitter: @EthicsForRobots.


Moral reasoning with Marija Slavkovik

Marija Slavkovik is an associate professor in AI at the Department of Information Science and Media Studies at the University of Bergen in Norway. She works on collective reasoning and decision making and is specifically interested in these types of problems in machine ethics. Machine ethics essentially asks how we can program various levels of ethical behaviour into artificial agents. It is a very interesting field for both computer scientists and humanists, and she likes it because it pushes very hard reasoning problems back to the surface of AI.

Marija's background is in computational logic and control theory, and she is also interested in all aspects of automation. She mainly writes scientific articles on computational social choice and multi-agent systems. However, being in a department that is half media studies, she is exposed to many issues in how information spreads through social networks and how it gets distorted after being spread and/or aggregated. Marija is now trying to bring this problem into the machine ethics conversation: a great deal of decision automation happens behind the scenes of information sharing, and we see much emergent behaviour in systems of artificial agents and people, but we do not fully understand or control it.


Moral Machines with Rebecca Raper

Rebecca is a PhD candidate in machine ethics and a consultant in ethical AI at Oxford Brookes University's Institute for Ethical Artificial Intelligence. Her PhD research is entitled 'Autonomous Moral Artificial Intelligence', and as a consultant she specialises in developing practical approaches to embedding ethics in AI products.

Her background is primarily in philosophy. She completed her BA and then her MA in philosophy at The University of Nottingham in 2010, before working in analytics across several industries. As an undergraduate she had a keen interest in logic, metametaphysics, and the topic of consciousness, which spurred her to return to academia in 2017 for a further qualification in psychology at Sheffield Hallam University, before embarking on her PhD.

She hopes to combine her diverse interests in solving the challenge of creating moral machines.

In her spare time she can be found playing computer games, running, or trying to explore the world.