Press "Enter" to skip to content

U.S. AI Safety Institute Bolsters Leadership with Top Experts from OpenAI and Stanford, But Not Without Controversy

The U.S. AI Safety Institute, housed in the National Institute of Standards and Technology, has added five new members to its leadership team. The new members, including experts from OpenAI, Stanford University, and the White House Office of Science and Technology Policy, will help execute tasks outlined in President Joe Biden’s executive order on AI. They will focus on designing and testing AI models, overseeing agency operations, implementing broader agency strategy, and fostering international cooperation.

Ars Technica profiled one of them: Paul Christiano, a former OpenAI researcher known for his work on AI safety and his predictions of potential AI doom, who has been appointed head of AI safety at the U.S. AI Safety Institute. While some view this appointment as a risk due to Christiano’s “AI doomer” views, others believe his expertise makes him well-suited for the role. The appointment has sparked controversy within NIST, with some staff members expressing concerns that it could compromise the institute’s objectivity and integrity. Christiano’s responsibilities will include monitoring current and potential risks, conducting tests of AI models, and implementing risk mitigations.

Why do we care?

If you haven’t said something controversial, you haven’t said anything interesting. I’m going to take a wait-and-see approach here and note he’s one of five new voices. This move aligns with the strategic goals set forth in President Joe Biden’s executive order on AI, which aims to establish leadership, reduce risks, and foster international cooperation in developing and deploying AI technologies.

And where’s the best place for arguments like this? NIST.