Transparency and consistent human oversight are two of the key factors in ensuring that federal agencies use artificial intelligence ethically, according to a new report by International Data Corporation, a market intelligence and consulting firm.
The emphasis is on how government AI might affect individuals: “Responsible and ethical use of AI includes protecting individuals from harm based on algorithmic or data bias or unintended correlation of personally identifiable information (PII) even when using anonymous data,” the report says. “And since machine learning is trained by humans, these biases may be inherently and inadvertently built into machine-based AI systems.”
The IDC analysts cite examples in health care, cybersecurity and other fields where AI has been deployed for mission-critical operations in government. In any use of the emerging technology — which has drawn more and more interest from federal officials — “there are technical challenges in machine learning that can result in errors, such as false positives or false negatives that trigger automated actions or other events,” the report says.
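The error types the report warns about are straightforward to quantify. A minimal sketch, using invented labels and predictions, of how false positive and false negative rates might be measured before an AI system is allowed to trigger automated actions:

```python
# Hypothetical illustration: measuring the false positive and false negative
# rates the IDC report warns about. The labels and predictions below are
# invented for the example.
def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Example: one false alarm and one miss out of eight cases.
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]
fpr, fnr = error_rates(y_true, y_pred)
print(fpr, fnr)  # 0.25 0.25
```

In a mission-critical setting, an agency would set acceptable thresholds for both rates and route cases exceeding them to a human reviewer rather than to an automated action.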
Human oversight of the systems, including subjecting the algorithms to “performance review” style audits, is key to mitigating these risks, the report argues. “Agencies should also provide mechanisms to protect citizen interest, especially when sharing information with ecosystem partners,” it states. “Formal governance policies should be created to address who owns, who analyzes, and who has access to each granularity of data to protect confidentiality and PII.”
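One way a "performance review" style audit could work in practice is to compare a model's outcomes across demographic groups and flag disparities. A hedged sketch, where the group names, records, and the 0.2 gap threshold are all invented for illustration:

```python
# Hypothetical sketch of a periodic algorithm audit: compare the rate of
# positive model decisions across groups and flag any gap above a chosen
# threshold. Groups, records, and the threshold are assumptions, not from
# the IDC report.
from collections import defaultdict

def audit_by_group(records, threshold=0.2):
    """records: list of (group, prediction) pairs with prediction in {0, 1}.

    Returns a dict of per-group positive-decision rates and a boolean
    indicating whether the largest gap between groups exceeds the threshold.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in records:
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > threshold

records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates, flagged = audit_by_group(records)
print(rates, flagged)
```

A flagged result would prompt the kind of human review and formal governance escalation the report recommends, rather than any automatic correction.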
The report also encourages federal agencies to be transparent around the use and impact of AI systems, both with citizens and with employees. Agencies should provide explanations of where and how algorithms are being deployed, it states. And on the workforce side, “leadership should also explain how workers are impacted by AI and how the nature of work will change.”
“AI can improve quality of life for government workers by processing a vast number of documents and identifying those that are optimal and required to solve a specific case,” the report states, echoing a favorite talking point of government officials. “However, specific roles may change, and over time, some positions may be eliminated while new ones are created.”
The report includes a number of other suggestions as well, including that agencies develop contingency plans in case “something goes wrong.”
“For nations to thrive, the ethics of AI must catch up with the technology so that governments, industries, and individuals learn to trust this transformative technology,” Adelaide O’Brien, research director at IDC Government Insights and author of the report, said in a statement.