Pages tagged: Alignment

AI alignment with Rohin Shah

Rohin is a 6th-year PhD student in Computer Science at the Center for Human-Compatible AI (CHAI) at UC Berkeley. His interests in CS are broad, spanning AI, machine learning, programming languages, complexity theory, algorithms, and security, and he started his PhD working on program synthesis. However, he became convinced that building safe, aligned AI is critically important, and moved to CHAI at the start of his 4th year. He now works on ways to specify good behaviour other than reward functions, especially ones that require little human effort. He is best known for the Alignment Newsletter, a weekly publication summarizing recent work relevant to AI alignment, with over 1600 subscribers.

AGI Safety and Alignment with Robert Miles

Rob Miles is a science communicator focused on AI Safety and Alignment. He has a YouTube channel called Rob Miles AI and runs The Alignment Newsletter Podcast, which presents summaries of the week's alignment research. He also collaborates with research organisations such as the Machine Intelligence Research Institute, the Future of Humanity Institute, and the Centre for the Study of Existential Risk to help them communicate their work.