Experts: Government can provide resources to support AI development
The government can support the development of artificial intelligence by helping educate the public on the technology’s potential applications and by establishing an ethics policy on its use, several experts told the White House in response to a recent call for public input on the future of AI.
The White House Office of Science and Technology Policy issued a request for information earlier this summer to help shape its stance on artificial intelligence, and several organizations responded by outlining the benefits and challenges they believe researchers and policymakers will encounter as AI develops.
[Read more: White House seeks public input on artificial intelligence]
The government needs to educate the public on artificial intelligence and dispel theories that it will lead to a robot apocalypse, Joshua New, a policy analyst at the Center for Data Innovation, told FedScoop.
“Destigmatizing it in the public sphere, saying, ‘This is like a great technological benefit, we should be pursuing it aggressively,’ I think that’s the most important thing that the government can do,” New said.
The Center for Data Innovation responded to the RFI by arguing that the government should continue to talk about the benefits of AI and work to dispel fears of the technology.
“It is encouraging to see OSTP proactively working to better understand the technology, promote its research and development and set the record straight about the potential opportunities associated with the technology,” the center’s response says.
It concludes: “OSTP should also play an active role in dispelling the prevalent alarmist myths about AI, particularly concerns that AI will lead to higher rates of unemployment and even eradicate the human race, which, besides being wrong, threaten the acceptance and advancement of this technology.”
In its response to the White House RFI, IBM outlines the benefits of AI across many applications and explains which government initiatives would support its growth.
The company notes that its opinion on the growing technology comes from “decades of research and commercial application of AI.”
“We believe that many of the ambiguities and inefficiencies of the critical systems that facilitate life on this planet can be eliminated. And we believe that AI systems are the tools that will help us accomplish these ambitious goals,” IBM said.
IBM’s response also acknowledged the need to develop a system of ethics for AI so the public can learn to trust it.
“Trust will also require a system of best practices that can guide the safe and ethical management of AI; a system that includes alignment with social norms and values; algorithmic accountability; compliance with existing legislation and policy; and protection of privacy and personal information,” IBM notes, adding that it is developing such a system in collaboration with its partners.
Some worry that algorithms will be built on biased data or used to justify biased decisions.
The Center for Data Innovation acknowledged the need for accountability of algorithms to address those concerns.
Some have proposed that “algorithmic transparency” would solve the problem, but New said mandating that all algorithms be released to the public is “preposterous,” because it “removes any sort of economic incentive to invest in proprietary technology.”
Figuring out exactly what the policy should be, though, is a complicated question.
The Center for Democracy and Technology also responded to the RFI. Lisa Hayes, CDT’s vice president of programs and strategy, told FedScoop the organization has been developing a tool to share with companies to help them self-audit for algorithmic bias.
“Our other big call for the administration and for lawmakers is to really think about how to reduce algorithmic bias and to figure out how to promote a diverse workplace,” Hayes said, “so that the inputs going in… reflect the society that we actually live in rather than just a handful of people who are doing the programming and deciding what data should be collected and stored and how it should be categorized.”
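What such a self-audit might involve is easier to see with a concrete check. The sketch below is purely illustrative and is not CDT’s tool: it assumes audit data in the form of (demographic group, model outcome) pairs and applies a simple demographic-parity screen, flagging groups whose positive-outcome rate falls below four-fifths of the best-served group’s rate, a common rule of thumb. The group names, data and threshold are all assumptions for the example.

```python
# A minimal, hypothetical sketch of one kind of algorithmic bias self-audit:
# comparing a model's positive-outcome rate across demographic groups.
# Illustrative only -- not CDT's actual tool. The 0.8 threshold (the
# "four-fifths rule") and the toy data are assumptions for this example.

from collections import defaultdict


def positive_rate_by_group(records):
    """Return the share of positive model outcomes for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += 1 if outcome else 0
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}


def parity_flags(rates, threshold=0.8):
    """Flag groups whose positive rate falls below `threshold` times the
    highest group's rate -- a rough screen for disparate outcomes."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}


if __name__ == "__main__":
    # Toy audit data: (demographic group, did the model say "approve"?)
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False), ("B", False)]
    rates = positive_rate_by_group(decisions)
    print("Positive rates:", rates)   # e.g. A: 0.67, B: 0.25
    print("Flagged groups:", parity_flags(rates))  # B flagged in this toy case
```

A real audit would go well beyond this single ratio, examining error rates, training-data coverage and how the categories were defined in the first place, which is the point Hayes raises about who collects and categorizes the data.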
There’s also the common fear that artificial intelligence could take away people’s jobs.
At a recent White House conversation on artificial intelligence, panelists Robin Chase, co-founder of Zipcar, and Martin Ford, author of the book “Rise of the Robots: Technology and the Threat of a Jobless Future,” said increased automation could lead to a significant loss in lower-to-middle class jobs.
[Read more: White House: U.S. wants to be at the forefront of automation policy]
But in its response to the RFI, IBM was optimistic about the workforce changes: “We believe that new companies, new jobs and entirely new markets will be built on the shoulders of this technology.”
The company noted that government could work with partners and universities to help make education in building or using AI systems more accessible to a range of students and professionals.
Adams Nager, an economic policy analyst at the Information Technology and Innovation Foundation, said the shift toward automation isn’t approaching as quickly as some suggest.
“Automation isn’t happening that fast,” Nager said. “Simply put, our productivity growth has actually been really sluggish lately.”
He added: “This is going to happen gradually. We are already seeing it happening, but in very — kind of low impact areas.”
But like it or not, Hayes said, lawmakers need to prepare for the changes AI will bring.
“We are feeling incredibly optimistic about the future of AI,” Hayes said. “We also think that it is somewhat inevitable, even for those policymakers who are not feeling so optimistic about it, they should still be addressing it and working on thinking through what skills and training are going to be necessary for our country to properly adapt to AI — because it is coming rapidly.”