The Obama administration is calling for the government to prioritize basic and long-term research on artificial intelligence, and develop a governmentwide policy on autonomous and semi-autonomous weapons.
The recommendations were part of an extensive report the White House released Wednesday that looks to outline the federal government’s part in developing rapidly evolving AI technology.
The administration also advised agencies to look at creating a Defense Advanced Research Projects Agency-like organization to support “high-risk, high-reward AI research and its application,” and suggested that agencies “prioritize open training data and open data standards in AI.”
The recommendation to nail down policy on autonomous and semi-autonomous weapons follows White House Chief of Staff Denis McDonough’s comments this summer that the White House is “looking, through the Department of Defense, at how we can use artificial intelligence — machine learning — to increase our defenses.”
He noted then that the administration is "also very clear-eyed about making sure that there’s an appropriate human interaction in any kind of weaponization of any of this artificial intelligence." He also said "arms control arrangements and negotiations" might be needed to take some capabilities "off the table."
“This I think we can be a world leader in, and we’re designing our policy to be a world leader on establishing a code of conduct or a set of understandings,” McDonough said.
According to the report, “the administration is engaged in active, ongoing interagency discussions to work toward a governmentwide policy on autonomous weapons consistent with shared human values, national security interests and international and domestic obligations.”
Respondents to the White House RFI advised against broad regulation of AI, the report says, and they recommended instead adapting existing regulations to account for the technology.
The administration has already been working to integrate emerging technologies safely in certain industries — for example, the Federal Aviation Administration recently issued new rules on small unmanned aircraft, and the Transportation Department recently unveiled a framework for regulating autonomous vehicles.
The administration recommends in the report that the DOT “continue to develop an evolving framework for regulation to enable the safe integration of fully automated vehicles and UAS, including novel vehicle designs, into the transportation system.”
In a recently published interview with WIRED magazine, President Barack Obama said that early in the development of artificial intelligence technology, "government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research."
But when the technology is more mature, government needs to get more involved, he said.
“As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more,” Obama said. “Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad base set of values. Otherwise, we may find that it’s disadvantaging certain people or certain groups.”
Another thing agencies need to consider, the report says, is how AI influences cybersecurity — and vice versa.
“Agencies involved in cybersecurity issues should engage their U.S. government and private sector AI colleagues for innovative ways to apply AI for effective and efficient cybersecurity,” the report recommends.
In the WIRED conversation, Obama said his national security team should be concerned about specialized AI, not generalized AI. In particular, he noted someone could develop an algorithm with one job, like "go penetrate the nuclear codes and figure out how to launch some missiles."
“If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems,” he said.
“I think my directive to my national security team is, don’t worry as much yet about machines taking over the world,” Obama said. “Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who might deploy these systems are going to be a lot better now.”
In conjunction with the report, the administration also released a “high-level framework” of goals for the executive branch related to research and development of AI. The framework is mainly organized around a set of seven strategies, which include goals to make long-term investments, “develop shared public datasets and environments for AI training and testing,” and ensure safety and security of artificial intelligence-driven systems.
In 2015 the federal government spent about $1.1 billion on unclassified R&D in artificial intelligence-related technologies, according to the strategic plan. Federal investment, the strategic plan explains, should be focused around “areas of strong societal importance that are not aimed at consumer markets.”
The plan notes that “although these investments have led to important new science and technologies, there is opportunity for further coordination across the federal government so that these investments can achieve their full potential.”
Taking a mostly positive view of artificial intelligence, the plan outlines ways AI could benefit the economy in many sectors, improve education or quality of life and enhance national security. But the plan also acknowledges the risks of AI, including “the potential disruption of the labor market as humans are augmented or replaced by automated systems, and uncertainties about the safety and reliability of AI systems.”
The administration plans in the coming months to release a follow-on report “exploring in greater depth the effect of AI-driven automation on jobs and the economy.”