White House releases final guidance for 2024 AI use case inventories
The Biden administration has finalized guidance for federal agencies’ 2024 artificial intelligence use case inventories, teeing up a more comprehensive process than in previous years and setting a mid-December deadline.
That new document is dated Aug. 14, but was published publicly on AI.gov Friday, according to a website tracking tool used by FedScoop. While much of the document remains the same as the draft, the final version includes several changes, such as narrowing the scope of excluded use cases and adding a section on deadline extension requests for compliance with risk management practices.
The guidance also establishes a clear deadline for inventories to be submitted to the White House Office of Management and Budget: Dec. 16, 2024. Agencies are required to disclose the information about their AI uses via an OMB-managed form and subsequently post a “machine-readable CSV of all publicly releasable use cases on” their website.
“Artificial Intelligence is helping agencies advance their missions and modernize the services we are delivering to the public,” Clare Martorana, federal CIO and chair of the Chief AI Officers Council, said in a statement. “And with increased AI use comes increased responsibility which is why annual AI use case inventory reporting is so important. It provides the public with greater transparency and helps preserve trust by demonstrating a commitment to ethical standards, accountability and responsible AI usage.”
The new guidance is the latest iteration of how non-Department of Defense and intelligence community agencies will go about collecting and producing lists of their planned, new, and existing AI uses. The inventories were first established under a 2020 Trump-era executive order on AI, later enshrined into statute, and now have been expanded upon by the Biden administration.
In their first several years, the annual AI inventories, which are also required to be shared publicly, suffered from inconsistencies and even errors. Similarly, the disclosures have varied widely in terms of things like the type of information contained, format, and collection method.
New process
Under the new guidance, however, the inventories will include more standardized categories and multiple choice responses for agencies.
For every individually inventoried use case, agencies are required to report things like the name of that use, its intended purpose, its outputs, and whether that use is rights- or safety-impacting as defined by OMB’s memo on governance of the technology.
Agencies will also have to provide more granular information for a subset of AI use cases based on their development stage. That information includes categories such as whether a use involves personally identifiable information, whether a model disseminates information to the public, and whether custom-developed code is required.
Additionally, agencies may not remove use cases from inventories if those uses have been retired and instead must mark them as no longer in use, which the Department of Homeland Security has already started doing.
Other additions to the process this year include a requirement to report aggregate metrics about uses that don’t have to be individually inventoried, as well as mechanisms by which agencies can tweak the practices they’re required to follow for certain uses. Agencies can waive one or more of the required risk management practices for rights- and safety-impacting uses under OMB’s AI memo, and they can determine that a use case presumed to fall under one of those categories doesn’t actually match the definitions in the memo. All of those requirements have a public reporting component.
Alterations from draft
Notably, the final guidance for agencies refines what uses are excluded from reporting in the inventories. The final guidance excludes just two categories of uses: research and development use cases and when AI is being used as part of a national security system or in the intelligence community.
But the draft had also excluded using “an AI application to perform a standalone task a single time,” unless that task is done repeatedly or used for related tasks. It had additionally excluded uses “that are implemented using commercial-off-the-shelf or freely available AI products that are unmodified for government use and are used to carry out routine productivity tasks, such as word processors and map navigation systems.”
In fact, under the final guidance, agencies must now report whether each use case in their inventories is “implemented solely with commercial-off-the-shelf or freely available AI products.”
The final guidance also grants agencies new permission to remove information from their public inventories. While the document preserves the prohibition on removing retired use cases, it adds a line noting that “agencies may remove use cases that no longer meet inclusion criteria.”
Meanwhile, a new footnote adds detail on what a “planned” use case means. The document now defines that as a use that “has been initiated through the allocation of funds or resources or when a formal development, procurement, or acquisition plan has been approved.”
The final guidance also removes references in the draft to specific information that would be included in the public disclosures for aggregate metrics and disclosures about waivers — though it still notes that public reporting is required for both.
For aggregate metrics, OMB’s memo requires agencies (and DoD) to report the number of rights- and safety-impacting uses and compliance with risk management practices. For waivers, the guidance states agencies must release a summary and justification.
This story was updated Aug. 26, 2024 with a statement from Federal CIO Clare Martorana.