NSF CIO on generative AI: ‘It takes a lifetime to build a reputation; it takes a moment to lose it’
Like many agencies across the federal government, the National Science Foundation is taking a measured and risk-based approach to the adoption of generative AI, its newly appointed CIO said Wednesday.
“It takes a lifetime to build a reputation; it takes a moment to lose it,” Terry Carpenter, who was announced as NSF’s CIO in January, said of generative AI and its risks at the Elastic Public Sector Summit, produced by FedScoop.
The concern, Carpenter elaborated, is that however powerful these AI tools may be, if they are not used responsibly and thoughtfully, they can lead to unintended and harmful outcomes.
“There’s some reality to that fear,” he said. So far, the agency’s approach in considering generative AI models has been: “Where can we use this effectively to maintain our high standards, and can we assure that the outcome that we get from it is what we intend?”
Carpenter continued: “So when you think about … our reputation of giving money to the right research for the betterment of society, it’s really important to us that we uphold the standards of the merit review process.”
While NSF hasn’t outright forbidden the use of generative AI large language models across its enterprise, Carpenter said the agency has banned the use of such tools that are “public-facing” — like commercial versions of OpenAI’s ChatGPT or Google’s Gemini — to review research proposals submitted to the agency for funding consideration.
“So when we looked at the realities of what ChatGPT and other large language models could provide, you know, when they’re riding out there on that external data, you have to really think about: Well, what data am I giving it? Where is that data? And how sensitive is that data?” Carpenter explained. “So we looked at those kinds of things and we said, ‘You know what, we’re not ready yet.’ So we’re going to not allow the public-facing tool sets to be applied to review proposals.”
That led to the creation of a policy at NSF “that says you cannot use [public-facing generative AI] to review a proposal,” he said.
However, that’s not a blanket ban. Carpenter highlighted that the agency is conducting an internal generative AI pilot: “a chatbot to try to help our partners and customers out there that are trying to seek grant monies to know whether or not they should try to write a proposal, and if they do write a proposal, how can we help them in that process,” he said.
The pilot is in the early stages and hasn’t yet been opened to the public.
The decision to move forward with the chatbot came about after NSF made an open call for ideas, through which it received 79 submissions, according to the CIO. After matching those ideas to the key areas in which NSF leadership thinks generative AI is applicable — insight generation for the grant-awarding process, internal process enhancement, and improvement of customer and employee experience — and running them through what Carpenter called “a risk-based decision model,” the agency landed on just that one.
On top of that, while NSF employees are forbidden from using generative AI for reviewing proposals, the proposers themselves are allowed to use the technology in developing their submissions, Carpenter acknowledged. The agency just requires those who do so to disclose how they used the technology in their proposal “to gain learning and transparency in that,” he said.
“I think there’s a lot of learning to go,” Carpenter said. “We have to figure this out. It’s not going away. And I think a lot of the commercial industry and our partners are helping us to think through how do we apply our own internal data, and where can we assure the protection of the proprietary data that we’re given in the proposal process. We have a duty to protect that for the people that propose, and that’s what we’re thinking about.”