OpenAI reveals first federal agency customer for ChatGPT Enterprise

USAID will use the tool to reduce admin burdens and ease partnerships, OpenAI’s Anna Makanju said in a Q&A with FedScoop.
A photo of a smartphone and a laptop displaying the logos of OpenAI and ChatGPT. (Photo by MARCO BERTORELLO/AFP via Getty Images)

Just a few days after OpenAI’s multimodal AI model received a FedRAMP High authorization as a service within Microsoft’s Azure Government cloud, the generative AI company says it has partnered to offer ChatGPT Enterprise to its first federal agency customer: the U.S. Agency for International Development.

Anna Makanju, OpenAI’s vice president of global affairs, told FedScoop that USAID plans to use the technology to help reduce administrative burden and “make it easier for new and local organizations” to partner with the agency.

Makanju said OpenAI is “actively pursuing” a FedRAMP Moderate authorization for ChatGPT Enterprise, which would clear the generative AI platform to handle moderately sensitive federal data, such as personally identifiable information or controlled unclassified information, outside of Microsoft’s Azure Government service. She said the company had nothing to share, at least for now, regarding potential work with cloud providers beyond Microsoft.

ChatGPT Enterprise, released last August, is aimed at larger organizations and is designed to offer more advanced analytics and customization, though other federal agencies have already started using the software in more limited ways.

That USAID is ChatGPT Enterprise’s first federal agency customer isn’t surprising. Under Administrator Samantha Power, USAID has made artificial intelligence a key focus. Earlier this year, she met with both Makanju and the CEO of OpenAI competitor Anthropic, Dario Amodei. Both of those discussions focused on how generative AI could be used in the context of distributing aid, among other topics. Overall, the agency’s focus on artificial intelligence has ramped up, with new work meant to guide both potential use cases of the technology and possible threats to safety, security, and democratic values.

Generative artificial intelligence applications can have myriad use cases, but the federal government still faces significant concerns about the security of government data and potential ingrained bias within the software, among other issues. The Biden administration’s October executive order on artificial intelligence discourages federal agencies from outright banning the use of generative AI, but several agencies have taken steps to limit or block use of the tools.

“We of course understand some agencies have hesitations or questions about how this technology can be integrated safely and effectively into their operations,” Makanju told FedScoop. “We also continue to advocate for the development of policies that delineate appropriate use cases, such as limiting the use of consumer tools to public, non-sensitive data — akin to how government agencies utilize search engines.”

Makanju recently answered a series of emailed questions about OpenAI’s approach to government customers. The following has been edited for clarity and length. 

FedScoop: OpenAI has met with USAID and is working with Los Alamos National Laboratory. Can you say what other federal agencies OpenAI is in conversation with — and what use cases for generative AI do you imagine for the government? 

Anna Makanju: I believe that the best way for government officials to understand advanced AI models is to use these tools. These tools can also enable governments to serve more people more efficiently — and already, nearly 100,000 government users across federal, state, and local levels are utilizing the consumer version of ChatGPT. USAID recently became the first U.S. federal agency to adopt ChatGPT Enterprise. The agency plans to leverage ChatGPT to reduce administrative burdens for staff, and make it easier for new and local organizations to partner with USAID.

We are also trying to make it easier for the U.S. government to use our services by making ChatGPT Enterprise accessible through multiple Government-wide Acquisition Contracts.

We’re also focused on education and hands-on experimentation with AI technology within government agencies. Recently, we supported the GSA Federal AI Hackathon on July 31 and participated in the FHFA Generative AI Tech Sprint, demonstrating our dedication to innovation and practical AI applications.

While the technology is primarily an efficiency tool today, we hope to see significant breakthroughs enabled by it. In the private sector, we have seen Moderna accelerate its ability to conduct clinical trials. We hope to see the company bring life-saving vaccines to market faster with our tools. We hope to see similarly impactful results for [U.S. government] agencies.

FS: What impact did the Biden administration’s executive order have on OpenAI, particularly around government use of generative AI systems?

AM: The executive order sought not only to encourage the development of safe, secure, and trustworthy AI but to encourage the U.S. government to use the technology. While we have been encouraging agencies to save time with these tools for several years, there was a lot of uncertainty on what was permissible or desirable in terms of that use, and the EO seeks to help agencies with these questions. Notably, the executive order’s establishment of chief AI officers within agencies is playing an important role in promoting AI fluency and encouraging generative AI pilots across the government. 

FS: Right now, OpenAI use within the government is primarily happening through Microsoft’s Azure cloud. Do you envision that happening through another cloud provider anytime soon?

AM: Government customers can engage with OpenAI directly through our ChatGPT products or APIs, both of which are hosted on Microsoft’s Azure Cloud. Additionally, Microsoft offers its own separate product, Azure OpenAI. While we continuously assess opportunities to expand our offerings, we do not have any updates to share regarding integration with other cloud providers at this time.

FS: How is OpenAI thinking about the Federal Risk and Authorization Management Program, or FedRAMP? 

AM: OpenAI is actively pursuing FedRAMP Moderate Accreditation, recognizing the importance of meeting the rigorous security and compliance standards expected by federal agencies. The introduction of FedRAMP’s Emerging Technology (ET) Prioritization Framework, as highlighted in the recent executive order, underscores the government’s commitment to integrating innovative solutions securely. 

AI continues to evolve, so we hope to work closely with federal stakeholders to ensure that the FedRAMP security risk evaluation process allows government users to access the latest AI tools as they come online. 

FS: The Biden administration just announced progress on some of its AI goals, and, in particular, that Apple had signed onto the voluntary commitments. To what extent did signing onto the voluntary commitments change OpenAI’s approach? Were there stipulations or procedures that OpenAI had already committed to, or were there particular changes that the company made in response to these White House guidelines? If so, what were they? 

AM: The voluntary commitments introduced by the Biden administration closely align with what OpenAI had been doing for some time. These commitments not only reinforced our existing practices but also provided a valuable framework to standardize and formalize efforts across the industry and internationally.

Areas such as external security testing, information sharing regarding AI risks with governmental bodies, and the development of systems to identify AI-generated content were already focal points for us. The voluntary commitments have spurred us to accelerate initiatives in these domains. 

We view the growing participation of industry leaders like Apple as a positive step toward unified standards in AI safety. This collective effort enhances the overall trustworthiness of AI systems and promotes a collaborative environment for addressing the challenges and opportunities presented by AI.

FS: What has OpenAI’s involvement with the Department of Homeland Security’s AI Safety and Security Board looked like thus far? How is the company working with DHS on some of the safety risks the agency has started to point out, particularly in regard to large generative models?

AM: OpenAI is actively engaged with the Department of Homeland Security’s AI Safety and Security Board and our CEO, Sam Altman, is a board member. OpenAI has participated at both the CEO and staff level, and we have provided our views as to the role of AI developers in identifying and mitigating risks to critical infrastructure.

This involvement underscores our commitment to collaborating with government, other industry leaders, and civil society to ensure the safe and secure deployment of AI technologies. And as part of our research around preparedness, we continue to be in dialogue with DHS, and have briefed them on our work, including how we assess the risks associated with LLMs and biological threats.

FS: We’ve been covering the generative AI guidance issued by federal agencies extensively. Some agencies seem to be blocking ChatGPT, while others have slowly moved ahead with examining the technology. What do you make of these varied responses?

AM: Adoption of new technologies raises new considerations and takes time — especially within government. We are encouraged by agencies like DHS that are proactively exploring how generative AI can support their missions, including issuing guidelines for using commercial generative AI services, like ChatGPT.
