New Commerce strategy document points to the difficult science of AI safety
The Department of Commerce on Tuesday released a new strategic vision on artificial intelligence and unveiled more detailed plans about its new AI Safety Institute.
The document, which focuses on developing a common understanding of and practices to support AI security, comes as the Biden administration seeks to build international consensus on AI safety issues.
AI researchers continue to debate and study the potential risks of the technology, which include bias and discrimination concerns, privacy and safety vulnerabilities, and more far-reaching fears about so-called artificial general intelligence. In that vein, the strategy points to myriad definitions, metrics, and verification methodologies for AI safety issues. In particular, the document discusses developing ways of detecting synthetic content, model security best practices, and other safeguards.
It also highlights steps that the AI Safety Institute, which is housed within Commerce’s National Institute of Standards and Technology, might take to help promote and evaluate more advanced models, including red-teaming and A/B testing. Commerce expects the labs of NIST — which is still facing ongoing funding challenges — to conduct much of this work.
“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety,” Commerce Secretary Gina Raimondo said in a statement. “Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”
The AI Safety Institute is also looking at ways to support the work of AI safety evaluations within the broader community, including through publishing guidelines for developers and deployers and creating evaluation protocols that could be used by, for instance, third-party independent evaluators. Eventually, the institute hopes to create a “community” of evaluators and lead an international network on AI safety.
The release of the strategy is only the latest step taken by the Commerce Department, which is leading much of the Biden administration’s work on emerging technology.
Earlier this year, the AI Safety Institute announced the creation of a consortium to help meet goals in the Biden administration’s executive order on the technology. In April, the Commerce Department added five new people to the AI Safety Institute’s executive leadership team.
That same month, Raimondo signed a memorandum of understanding with the United Kingdom focused on artificial intelligence. This past Monday, the UK’s technology secretary said its AI Safety Institute would open an outpost in the Bay Area, its first overseas office.