State Department shutters AI-based project that aimed to forecast violence and COVID-19
The State Department is no longer pursuing an artificial intelligence project that aimed to “test the statistical relationship between social media activity overseas and activity by violent extremist organizations,” an agency spokesperson told FedScoop.
The shuttered pilot is one of several initiatives disclosed in the agency’s AI use case inventory and is still listed on the State Department’s website.
The pilot’s inclusion on the list comes amid questions over the approach that federal agencies take to cataloging the technology. A recent investigation by FedScoop found a lack of standardized processes for disclosure across the government.
On the website, the use case is currently titled “forecasting” and is described as “using statistical models, projecting expected outcome into the future.” The online page also says that the tool has been “applied to COVID cases as well as violent events in relation to tweets.”
The use case was attributed to the “R” bureau, a designation that commonly refers to the agency’s public affairs division.
The State Department did not say why it is no longer working on the project, including whether the pilot ended because the technology did not work or because the use case did not align with the responsible AI principles established by Executive Order 13960, issued by the Trump administration in 2020. The State Department previously told FedScoop that, in response to that executive order, it was “employing a rigorous review process and making necessary adjustments or retirements as needed.”
The pilot is a reminder that the State Department is looking to apply artificial intelligence to increasingly sensitive domains. Researchers have questioned the efficacy and appropriateness of deploying similar kinds of technology, though it’s not immediately clear how this particular system was designed or tested.
State did not address FedScoop’s questions about whether outside companies were involved in developing the system or how accurate it was at predicting outcomes.
The State Department is also working with a deepfake detection system developed by the Global Engagement Center, a State Department organization focused on combating disinformation. A spokesperson described the technology, which was also disclosed in the inventory, as “a custom AI-based algorithm to detect synthesized social media profile pictures on foreign social media accounts that are engaged in spreading disinformation and propaganda that could undermine U.S., ally, and partner national security interests.”
The State Department is looking at other technologies, developed both inside and outside the agency, to “identify synthetic media or altered media at scale,” the spokesperson added.
Overall, the State Department’s public inventory now includes nearly 40 use cases.
As the government continues to weigh its approach to artificial intelligence, the State Department has been particularly focused on integrating the technology into its own operations, as recent interviews with officials have shown. The agency is also expected to release its enterprise AI strategy soon.