Defense Innovation Board debuts AI ethics principles for defense
The Defense Innovation Board debuted a shiny new document on Thursday — a report on the principles the board believes should guide the ethical use of artificial intelligence by the Department of Defense, with an emphasis on maintaining control and accountability for whatever is built.
The report’s publication marks the end of a 15-month process of both public and private debate that kicked off when the secretary of defense first asked the DIB to weigh in on the topic.
Broadly, the document explores DOD’s existing ethical framework, defines artificial intelligence and what makes the technology different, proposes principles for AI ethics in the department and makes recommendations for the implementation of those principles.
The report defines AI as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task.” It also distinguishes AI from autonomy.
Both combat and noncombat uses of the technology, it says, should be:
- Responsible. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DOD AI systems.
- Equitable. DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
- Traceable. DOD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable. DOD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
- Governable. DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.
During Thursday’s public meeting, a bit of live debate broke out among the board members over the fifth principle, centered on what board Chairman Eric Schmidt called the “off button concern.” Ultimately, the board agreed to add language to that principle making it clear that humans or automated functions should be able to deactivate systems that aren’t working as intended.
The report also includes 12 recommendations for implementing the principles, ranging from enhancing workforce training programs to convening an annual conference on AI safety and security.
All of these principles, the DIB is careful to note, exist “within the context of the existing DoD ethical framework.”
“We have found the Department of Defense to be a deeply ethical organization,” the principles document reads. Existing authorities, including the U.S. Constitution, the Law of War and international treaties, already guide how the department approaches questions of ethics in warfare, the DIB says.
But as the principles themselves attest, AI is new and different enough to warrant its own ethical considerations.
“Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about norms of AI development and use in a military context — long before there has been an incident,” the DIB’s document reads.
During Thursday’s meeting, board member Richard Murray called the document an “opportunity to lead.”
As an independent advisory board, the DIB can only make recommendations to DOD. “This is our proposal to them, this is not their policy,” Schmidt, a former Google CEO, said Thursday. “I, for one, do not know what their response will be.”
That said, the DIB has a fairly solid track record of seeing its recommendations adopted, in some form, by the department — the creation of the Joint Artificial Intelligence Center (JAIC), for example, stems at least partially from a DIB recommendation.
The approved principles document will now be transmitted to the secretary of defense, who will decide whether to adopt the principles as department policy.