Experts say the Obama administration’s first-ever report and national strategy for artificial intelligence R&D is by and large a positive outgrowth of this year’s events and research on the subject, which culminated Thursday with a White House conference.
[For more on the report, read: White House unveils new goals for AI]
Lisa Hayes, the Center for Democracy and Technology’s vice president of programs and strategy, told FedScoop “the report was terrific.”
“In the community more broadly I think the report was very well received,” Hayes said. “Folks are delighted that the administration is laying down a roadmap for future administrations to follow.”
She said CDT, which responded to the White House’s request for information on the topic, mostly agrees with the report’s recommendations.
Greg Brockman, co-founder and chief technology officer of OpenAI, said in a statement, “It’s great to see that the administration thinks AI will have a huge impact. We’re particularly encouraged by its policy recommendation relating to open training data and open data standards.”
OpenAI is a nonprofit sponsored by big names in technology, like Elon Musk and Peter Thiel, among others.
The Partnership on AI to Benefit People and Society said in an official statement it was “happy to see the report on the future of AI from the White House Office of Science and Technology Policy.”
“We agree that AI can be a major driver of growth and social progress,” said the partnership, whose members include Amazon, Google, IBM, Facebook and Microsoft. “Harnessing advances in AI to their fullest potential will involve collaboration by industry, civil society, government, and the public on the broader social, legal, and ethical influences of AI.”
Joshua New, policy analyst for the Center for Data Innovation, told FedScoop he thinks both the report and strategy were “very positive.”
“They definitely incorporated a lot of the feedback from the comments I saw in the comment period they had, and some of the feedback from the events [the White House has] been doing,” he said.
CDI also responded to the White House’s request for information.
Like OpenAI, New said the report’s emphasis on open training data and data standards was a good thing, and added that its attention to the skills gap in the workforce was another positive.
One question New flagged as left unanswered in the research and development strategy was whether the government should set artificial intelligence standards for industry.
“The government can absolutely be involved in helping and encouraging the development of these standards, but it shouldn’t be setting these standards for industry in general,” New said. “The strategic plan doesn’t say that the government is going to be doing this, but it doesn’t say that they’re not going to be doing it.”
What should happen with regulations?
But the work of defining federal involvement in the technology isn’t over. At the White House’s first Frontiers Conference on Thursday, one panel grappled with exactly what role the government should play going forward in regulating applications of the technology.
Andrew McAfee, co-director of the MIT Initiative on the Digital Economy, noted there is “a real intellectual and important battle going on between two schools of thought” — one holding that innovators shouldn’t have to ask for government permission to innovate, and another holding that the government should sign off on new technologies before they go out into the world.
The White House report notes that most of the respondents to its request for information on artificial intelligence advised against broad regulation right now. The report recommends instead adapting existing regulations to account for AI as needed, such as evolving the Transportation Department’s regulations to enable integration of fully autonomous vehicles and unmanned aircraft.
Hayes told FedScoop, in a conversation unrelated to the conference, that “the existing regulatory framework is a really good starting point for most AI regulation, which I think the report does a nice job of acknowledging and discussing ways that can be expanded and improved as we move forward.”
“I think the approach laid out by the report of keeping a firm hand and a close eye on existing regulatory frameworks and making sure that they’re prepared to quickly adapt and change to address the needs of AI as it advances is the right approach,” she said.
During the panel discussion, McAfee advised against heavy-handed regulation, or “signing off,” on technology before it is allowed to be used. He said he is “uncomfortable with the mismatch” of putting a fast-moving driver of the economy into the hands of a slow-moving part of the country: the government.
Others on the panel, however, stood firmly in the opposite camp.
“The thing that could hold back technology the most is actually for us to make some horrible early AI mistakes,” said Andrew W. Moore, dean of the Carnegie Mellon University School of Computer Science.
Moore said that if industry tips the balance between experimentation and protecting safety too far toward experimentation, he worries society as a whole could give up on AI for years out of fear of the technology’s dangers.
Moore said he was personally “appalled” when Tesla began testing its self-driving cars with real people.
“From that they get very, very valuable data, data worth hundreds of millions of dollars, in terms of being able to quickly develop great, and probably very safe self-driving technology,” Moore said. “But in doing so, that data is coming at the cost of real risk for humans. And that risk has actually resulted in at least one death so far.”
Ryan Calo, a law professor at the University of Washington, said that when human lives are involved, the government should sign off on new technology.
“My basic view is that someone is going to have to sign off on this stuff, especially when we’re talking about artificial intelligence that acts directly with the world in a physical way,” Calo said.
When asked about Tesla, Hayes called the example “a bit of a red herring.”
“Man-powered vehicles, as a proportion, are involved in far more accidents every day than vehicles that are being driven by artificial intelligence or robotics,” Hayes said. “Accidents will certainly happen in both and are tragic, but I haven’t seen anything suggesting there’s a need for immediate regulation while these technologies are still being developed and are not yet widespread.”