Pages tagged: Machine ethics

Rights, trust and ethical choice with Ricardo Baeza-Yates

Ricardo Baeza-Yates is Director of Research at the Institute for Experiential AI of Northeastern University. He is also a part-time Professor at Universitat Pompeu Fabra in Barcelona and Universidad de Chile in Santiago. Before that he was CTO of NTENT, a semantic search technology company based in California, and prior to these roles he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and between 2012 and 2016 he was elected to the ACM Council.

Since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, and his areas of expertise are web search and data mining, information retrieval, bias and ethics in AI, data science and algorithms in general.


AI ethics strategy with Reid Blackman

Reid Blackman, Ph.D., is the author of “Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI” (Harvard Business Review Press), Founder and CEO of Virtue, an AI ethical risk consultancy, and a volunteer Chief Ethics Officer for the non-profit Government Blockchain Association. He has also been a Senior Advisor to the Deloitte AI Institute, a Founding Member of Ernst & Young’s AI Advisory Board, and sits on the advisory boards of several startups. His work has been profiled in The Wall Street Journal and Forbes, and he has presented it to dozens of organizations including Citibank, the FBI, the World Economic Forum, and AWS. Reid’s expertise is relied upon by Fortune 500 companies to educate and train their people and to guide them as they create and scale AI ethical risk programs. Learn more at reidblackman.com.


Moral Machines with Rebecca Raper

Rebecca is a PhD candidate in Machine Ethics and a consultant in Ethical AI at Oxford Brookes University's Institute for Ethical Artificial Intelligence. Her PhD research is entitled 'Autonomous Moral Artificial Intelligence', and as a consultant she specialises in developing practical approaches to embedding ethics in AI products.

Her background is primarily in philosophy. She completed her BA and then an MA in philosophy at The University of Nottingham in 2010, before working in analytics across several different industries. As an undergraduate she had a keen interest in logic, metametaphysics, and the topic of consciousness, which spurred her to return to academia in 2017 to undertake a further qualification in psychology at Sheffield Hallam University before embarking on her PhD.

She hopes she can combine her diverse interests to solve the challenge of creating moral machines.

In her spare time she can be found playing computer games, running, or trying to explore the world.


Moral reasoning with Marija Slavkovik

Marija Slavkovik is an associate professor in AI at the Department of Information Science and Media Studies at the University of Bergen in Norway. She works on collective reasoning and decision making and is specifically interested in these types of problems in machine ethics. Machine ethics essentially tries to answer the question of how to program various levels of ethical behaviour into artificial agents. It is a very interesting field for both computer scientists and humanists, and she likes it because it pushes very hard reasoning problems back to the surface of AI.

Marija's background is in computational logic and control theory, and she is also interested in all aspects of automation. She mainly writes scientific articles on computational social choice and multi-agent systems. However, being in a department that is half media studies, she is exposed to many issues around how information spreads in social networks and how it gets distorted after being spread through a network and/or aggregated. Marija is now trying to bring this problem into the Machine Ethics conversation: a lot of decision automation happens behind the scenes of information sharing, and we see a lot of emergent behaviour in systems of artificial agents and people, but we do not fully understand or control it.


How to design a moral algorithm with Derek Leben

Derek Leben is Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. He works at the intersection of ethics, cognitive science, and emerging technologies. In his new book, Ethics for Robots, Leben argues for the use of a particular moral framework for designing autonomous systems based on the Contractarianism of John Rawls. He also demonstrates how this framework can be productively applied to autonomous vehicles, medical technologies, and weapons systems. Follow on Twitter: @EthicsForRobots.


Machine Ethics with Susan and Michael Anderson

Michael Anderson, professor emeritus of computer science at the University of Hartford, earned his Ph.D. in computer science and engineering at the University of Connecticut. Susan Leigh Anderson, professor emerita of philosophy at the University of Connecticut, earned her Ph.D. in philosophy at the University of California, Los Angeles. They have been instrumental in establishing machine ethics as a bona fide field of study, co-chairing/authoring the AAAI Fall 2005 Symposium on Machine Ethics, an IEEE Intelligent Systems special issue on machine ethics, and an invited article for Artificial Intelligence Magazine on the topic. Further, their research in machine ethics was selected for Innovative Applications of Artificial Intelligence as an emerging application in 2006. Scientific American (Oct. 2010) featured an invited article on their research, in which the first robot whose behavior is guided by an ethical principle was debuted. They published "Machine Ethics" with Cambridge University Press in 2011.