
Data, talent, funding among top barriers for federal agency AI implementation 

Agencies disclosed obstacles to responsible use of AI in compliance plans recently shared with the White House.

Federal officials are citing several common barriers to carrying out the Biden administration’s directives on artificial intelligence, including preparedness and resource issues, according to recent compliance plans shared by agencies with the White House.

A FedScoop analysis of 29 of those documents found that data readiness and access to quality data, a dearth of knowledge about AI and talent with specific expertise, and finite funding levels were among the most common challenges that agencies reported. Agencies also disclosed obstacles when it comes to their IT infrastructure, limitations in government-ready tools, and testing and evaluation challenges, among other issues.

The compliance plans, which were required to be completed and posted publicly in late September, are one of the first windows into how the executive branch is developing methods to ensure responsible AI use in alignment with President Joe Biden’s executive order on the technology and the Office of Management and Budget’s corresponding guidance.

Among the questions officials were asked to address in those plans was whether there have been barriers to the responsible use of AI and what steps the agency has taken, or intends to take, to address them. The responses reveal long-standing government modernization issues that are now posing challenges for AI efforts.


About 75% of plans reviewed by FedScoop listed or described examples of specific barriers the agency is facing, though the documents varied widely in terms of detail. Of those, roughly a dozen mentioned data hurdles, six mentioned talent or knowledge gaps, and six underscored funding limitations.

In response to a FedScoop inquiry to OMB about the themes, a spokeswoman said the office “continues to work with agencies as they implement AI risk management practices to ensure the responsible and ethical use of AI in their operations.”

Alexander Howard, a digital government expert who currently blogs about emerging tech and public policy issues through his independent publication, Civic Texts, said that the barriers reflect longer-term issues for the federal government. 

When the federal government was working to implement new technologies in 2009, for example, Howard said the two biggest challenges were around procurement and talent. While there’s been progress since then, he said “it’s interesting to see that 15 years later, these barriers are still there.”

Similarly, improving enterprise data management is an ongoing issue for the government, and that issue is particularly salient now with AI. Making government AI-ready “comes down to data,” Howard said. “It can’t do something if the data isn’t there for it to work on.”


Reliable, AI-ready data

Agencies that noted data barriers specifically cited issues like outdated storage methods, limited AI-readiness, and a lack of trusted training data.

The Department of Energy said legacy methods of storing data, such as warehouses and databases, are outdated and weren’t designed to account for AI. “As a result, DOE faces obstacles in ensuring that data used for AI training and use in AI models is high quality, well-curated, and easily accessible,” the agency said.

That has manifested in “fragmented data sources, inconsistent data quality, and inconsistent and insufficient data interoperability standards,” DOE’s compliance plan said.

The Department of Veterans Affairs, meanwhile, said barriers for its efforts were “access to authoritative data sources for training, testing and validation of AI models and ensuring that these data sources have documentation describing how they are cleaned and refined to support model audits.” 


VA pointed to its enterprise data platforms and creation of an enterprise data catalog as examples of work to address that barrier, saying those platforms are “crucial” to being able to access personally identifiable information and protected health information securely.

Some agencies noted the status of existing data improvement efforts. The U.S. Department of Agriculture said it’s still working to implement a portion of the Foundations for Evidence-Based Policymaking Act — known as the OPEN Government Data Act — which requires public government data to be machine-readable. The agency plans to chart the next steps for data readiness in its upcoming AI strategy and said that making more progress on that front “would contribute significantly to USDA’s readiness for AI.”

NASA, which also cited data readiness as an issue, said it was conducting workshops within the agency to address the problem. Those workshops, in part, will attempt to “identify data enhancements required to fuel transformation with data and AI.”

‘Ripe for improvement’ 

That data issues were a top barrier is notable given how much AI tools hinge on reliable data sources.


Valerie Wirtschafter, a fellow in the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, said that while the commonalities in agencies’ barriers weren’t necessarily surprising, the most acute of the issues seemed to be the data challenges. 

“This strikes me as an area that is ripe for improvement and also one that can be addressed through more detailed guidance,” Wirtschafter said in an emailed response.

Howard said that a potential answer might be having a team within government dedicated to structuring “data that’s currently trapped in PDFs or elsewhere” and implementing the OPEN Government Data Act. 

“But that’s not where the ethos is right now,” he said. Howard added that it seems like “the current leadership is people who understand how to use tools to publish, but it’s not people who use tools to make.”

Ensuring that federal data is AI-ready has already attracted attention from officials and lawmakers.


A working group within the Department of Commerce’s Data Governance Board has taken on the issue and is looking to develop standards for AI-ready government data. While the standard of machine readability is necessary, the department’s top data official, Oliver Wise, previously said that it’s “not sufficient” to meet the expectations of users in the AI age.

And in Congress, a bipartisan pair of Senate lawmakers seeking to extend the life of the Chief Data Officers Council have included language in their bill that would task the group with specific work on AI data management practices.

Talent, funding wanted

On talent, agencies said they need to improve understanding of AI tools within their current workforces and hire more people for roles dedicated to advancing the technology.

“Like most federal agencies, USDA does not have sufficient AI literacy and AI talent today,” the agency said in its plan. “Without a significant investment to increase workforce literacy in AI and attract AI talent to USDA, our ability to execute the upcoming AI Strategy will be limited.”


In the Nuclear Regulatory Commission’s case, one barrier is employees’ aversion to the technology itself. NRC said its “workforce has expressed trepidation as well as a general lack of knowledge of AI capabilities.” As a result, the agency said it would continue to enable “effective change management” so workers can take advantage of those capabilities. 

Hiring new AI talent and upskilling the workforce have been important parts of the Biden administration’s goals when it comes to federal actions on the technology. The administration currently aims to hire 500 AI and AI-enabling workers by 2025.

Agencies, which were also required to report information about AI talent, said that hiring efforts have already included using the Office of Personnel Management’s direct hire authority for AI and AI-enabling workers — a mechanism aimed at making the hiring process easier — and training staff.

USDA said it posted several positions using that authority, which has reduced time to hire and made the agency’s AI positions “more competitive than before.” The agency further noted that it’s expanded its number of U.S. Digital Corps fellows for AI efforts, is working to mature its training program for data science and has launched a generative AI training course through a partnership with Skillsoft. 

Agencies also said they’re training up their existing workforce. The Department of the Treasury said it’s working to acquire training developed by the General Services Administration. It expects to embed that training into its employee learning platform by the end of the year and is encouraging employees to educate themselves on AI with other training provided by GSA and free open-source videos.


On funding, agencies said limited financial resources increase risk and make it difficult to test and review uses. USDA said the lack of dedicated funding increases “risk of improper deployments.” 

The Department of Commerce, whose National Institute of Standards and Technology experienced a cut in the last fiscal year’s appropriations, said “AI governance remains a broadly unfunded requirement, severely impacting Commerce’s ability to thoroughly analyze and track the responsible use of AI.”

Similarly, NRC said it’s “only able to assess, test, implement, and maintain new capabilities where resources have been made available to do so.” The agency said its IT and AI leaders will continue to express resource needs during budget formulation and execution.

In the case of the U.S. Trade and Development Agency, having limited staffing and funding as a small agency means it “leans upon the lessons learned and best practices of the interagency.” 

The Export-Import Bank of the U.S., another small agency, said “AI use cases compete for funding and staffing with other important priorities at the Bank including non-IT investments in core EXIM capabilities, cyber security, and other use cases in our modernization agenda.”


While funding and talent pose barriers, they’re among the issues that Wirtschafter said federal agencies are more equipped to deal with than others. 

The federal government has made strides to bring talent in through the Intergovernmental Personnel Act, direct-hire authority and the U.S. Digital Corps, she said. “Funding is also always a challenge, but I do think there are specific funding pools available for modernization efforts as well, and pay for a lot of these roles is typically quite reasonable,” Wirtschafter said.

Ultimately, the information on challenges provided in the compliance plans serves as something of a preview of forthcoming strategies to identify and address barriers to responsible AI that are required under OMB’s governance memo. Those strategies must be published publicly by March 2025 and will include information about the status of each agency’s AI maturity and a plan to ensure AI innovation is supported.

While most agencies provided some list of challenges they’re facing, others noted there’s still work to be done. The Securities and Exchange Commission said it “plans to establish a working group that will be responsible for identifying any barriers to the responsible use of AI, including with respect to IT infrastructure, data practices, and cybersecurity processes.”


Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.
