60. Responsible AI Research with Madhulika Srikumar

This time we're talking AI research with Madhulika Srikumar of Partnership on AI. We chat about managing the risks of AI research, how the AI community should think about the consequences of its research, documenting best practices for AI, OpenAI's GPT-2 research disclosure as an example, considering unintended consequences and negative downstream outcomes, considering what your research may actually contribute, promoting scientific openness, proportional ethical reflection, research social impact assessments and more...
Date: 25th of August 2021
Podcast authors: Ben Byford with Madhulika Srikumar
Audio duration: 40:38 | Website plays & downloads: 176
Tags: Research, Responsible AI, GPT-2, Risk | Playlists: Existential risk, Legislation

Madhulika Srikumar is program lead for the Safety-Critical AI initiative at Partnership on AI, a multistakeholder nonprofit shaping the future of responsible AI. Core areas of her current focus include community engagement on responsible publication norms in AI research, and diversity and inclusion in AI teams. Madhu is a lawyer by training and completed her graduate studies (LL.M) at Harvard Law School.

Managing the Risks of AI Research: Six Recommendations for Responsible Publication


No transcript currently available for this episode.

Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began speaking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford