AI deepfake detection requires NSF and DARPA funding and new legislation, congressman says
Lawmakers warned of the dangers of AI-generated deepfake content during a House Oversight subcommittee hearing Wednesday, pushing for additional funding for key federal agencies as well as new targeted legislation to tackle the problem.
There was bipartisan agreement during the “Advances in Deepfake Technology” hearing that the government should play a role in regulating deceptive, AI-generated deepfake photos and videos that could harm people, particularly when the content involves fake pornographic material.
Approximately 96 percent of deepfake videos online are nonconsensual pornography, and most of them depict women, according to a study by the Dutch AI company Sensity.
Rep. Gerry Connolly, D-Va., ranking member of the House Oversight Subcommittee on Cybersecurity, IT, and Government Innovation, said additional funding for the Defense Advanced Research Projects Agency and the National Science Foundation is “critical” to creating advanced and effective deepfake detection tools.
Dr. David Doermann, the interim chair of computer science and engineering at SUNY Buffalo, said during the hearing that DARPA was taking the lead within the federal government on tackling deepfakes, but added that the agency could do more.
“I think the explainability issues of AI are things that DARPA is looking at now,” Doermann said. “But we need to have the trust and safety aspects explored at the grassroots level for all of these things” within DARPA.
Connolly noted that the Biden administration’s recent AI executive order included productive steps to tackle deepfakes, leaning “on tools like watermarking that can help people identify whether what they’re looking at online is an authentic government document or a tool of disinformation.”
“The order instructs the Secretary of Commerce to work enterprise-wide to develop standards and best practices for detecting fake content and tracking the provenance of authentic information,” Connolly added.
Legislation to tackle deepfakes was introduced in May by Rep. Joe Morelle, D-N.Y. The “Preventing Deepfakes of Intimate Images Act” would make the sharing of nonconsensual deepfake pornography illegal.
The proposed bill includes provisions to ensure that consenting to the creation of an AI-generated image does not equate to consent to share it. The bill also seeks to protect the anonymity of plaintiffs who sue to protect themselves from deepfake content.