White House unveils AI governance policy focused on risks, transparency
The White House released its much-anticipated artificial intelligence governance policy Thursday, establishing a roadmap for federal agencies’ management and usage of the budding technology.
The 34-page memo from Office of Management and Budget Director Shalanda D. Young corresponds with President Joe Biden’s October AI executive order, providing more detailed guardrails and next steps for agencies. It finalizes a draft of the policy that was released for public comment in November.
“This policy is a major milestone for President Biden’s landmark AI executive order, and it demonstrates that the federal government is leading by example in its own use of AI,” Young said in a call with reporters before the release of the memo.
Among other things, the memo mandates that agencies establish guardrails for AI uses that could impact Americans’ rights or safety, expands what agencies share in their AI use case inventories, and establishes a requirement for agencies to designate chief AI officers (CAIOs) to oversee their use of the technology.
Vice President Kamala Harris highlighted those three areas on the call with the press, noting those “new requirements have been shaped in consultation with leaders from across the public and private sectors, from computer scientists to civil rights leaders, to legal scholars and business leaders.”
“President Biden and I intend that these domestic policies will serve as a model for global action,” Harris said.
In addition to the memo, Young announced that the National AI Talent Surge established under the order will hire “at least 100 AI professionals into government by this summer.” She also said OMB will take action later this year on federal procurement of AI and is releasing a request for information on that work.
Under the policy, agencies are required to evaluate and monitor how AI could impact the public and mitigate the risk of discrimination. That includes things like allowing people at the airport to opt out of the Transportation Security Administration’s use of facial recognition “without any delay or losing their place in line,” or requiring a human to oversee the use of AI in health care diagnostics, according to a fact sheet provided by OMB.
Additionally, the policy expands the disclosures that agencies must share publicly and annually in their inventories of AI uses. Those inventories must now identify whether a use is rights- or safety-impacting. Thursday’s memo also requires agencies to submit aggregate metrics about use cases that aren’t required to be included in the inventory; in the draft, that requirement applied only to the Department of Defense.
The policy also requires agencies to designate a CAIO within 60 days of the memo’s publication to oversee and manage AI uses. Many agencies have already started naming people to those roles, which have tended to go to chief information, data and technology officials.
“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said of the CAIO role.