The value of ethics in AI, from development to implementation

AI experts explain why ethics and proper training matter as agencies consider developing autonomous applications.

As federal agencies look to acquire, adopt and develop artificial intelligence, they must inject ethics into the models that AI is built upon. And, according to experts, ethics cannot be an afterthought — it must play a central role from the start and get consideration at every step of development.

During a panel discussion on ethical AI, Krista Kinnard, director of the General Services Administration’s AI Center of Excellence, and Gretchen Stewart, Intel’s public sector data science technical director, discussed the importance of bringing ethics into the AI equation early on. The SNG Live event was hosted by FedScoop on Nov. 17, 2020.

“So even from the design phase, right, you need to be asking yourself questions like how is this going to be used? Who is using it? What are some unintended consequences that we may be having from this, this idea that we want to build, right?” Kinnard said.

She added that it’s not just a matter of recognizing that there is a problem, but of having leaders who understand and treat ethics as a decision-making framework to ensure the application is being developed the right way, because every engineering decision ends up being an ethical decision.

Stewart pointed to the need to document those questions every step of the way. With facial recognition, for instance, bias has run rampant because most images used to train models don’t represent the actual diversity of the world. The result is that these systems are run against “pictures of a middle-aged African American, a 12-year-old Asian, and they came up with a 37-year-old white man.”

“It was totally unintended, but absolutely should have been something that they thought about from the beginning,” Stewart said. “If we’re going to do this, and we’re going to think about facial recognition, we need to have the training sets. And we need to start with what the globe looks like.”
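Stewart’s point about starting “with what the globe looks like” can be made concrete: before training, audit how the dataset’s categories are actually distributed. The sketch below is purely illustrative; the metadata fields, records and categories are hypothetical placeholders, not anything described by the panel.

```python
# Illustrative sketch: audit the demographic makeup of a training set
# before model training. All metadata fields and values here are
# hypothetical placeholders, not from the panel discussion.
from collections import Counter

# Hypothetical per-image metadata, e.g. parsed from a dataset manifest.
training_metadata = [
    {"age_group": "18-29", "region": "North America"},
    {"age_group": "30-49", "region": "East Asia"},
    {"age_group": "50+",   "region": "Sub-Saharan Africa"},
    # ... one record per training image
]

def audit_distribution(records, field):
    """Report each category's share of the training set for one field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

for field in ("age_group", "region"):
    for category, share in sorted(audit_distribution(training_metadata, field).items()):
        print(f"{field:>10} | {category:<20} {share:.1%}")
```

Comparing those shares against a reference population is one way to surface, before any model is trained, the kind of skew Stewart described.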

Documentation is also critical for explaining the results an AI application produces.

“We need to work together so that we can, at a very minimum, document our process,” Kinnard said. “Because, sure, if you choose a logistic regression model to make the predictions about whatever it is you’re building, we can maybe explain that. But some of these deep neural networks, we can’t explain how they’re making their decision.”
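Kinnard’s contrast between a logistic regression and a deep neural network comes down to interpretability: a logistic regression’s learned coefficients can be read directly as feature influence, while a deep network’s weights offer no such direct reading. Here is a minimal sketch of that inspection, assuming scikit-learn, with synthetic data and hypothetical feature names:

```python
# Minimal sketch of why a logistic regression is "maybe explainable":
# its learned coefficients map directly to feature influence.
# Feature names and data are synthetic placeholders. Assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "tenure_years", "prior_defaults"]  # hypothetical
X = rng.normal(size=(200, 3))
y = (X[:, 2] > 0).astype(int)  # synthetic label driven by one feature

model = LogisticRegression().fit(X, y)

# Each coefficient is the change in log-odds per unit of that feature --
# the kind of direct explanation a deep neural network does not offer.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:<15} log-odds weight: {coef:+.2f}")
```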

She continued: “It’s not as simple as, ‘Oh, just follow this framework and check off these questions.’ The complexity of the problem, the complexity of the data, and the potential impact of that problem are going to dictate the important questions being asked. But the key thing here is developers and owners of the system need to be working together and iteratively asking those questions.”
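One lightweight way to “at a very minimum, document our process,” as Kinnard put it, is to record each design decision alongside the questions asked about it in a running log. The sketch below shows one hypothetical shape for such a record; the field names and values are illustrative, not drawn from the panel or any standard.

```python
# Hypothetical sketch of process documentation: one record per design
# decision, capturing the questions Kinnard describes (intended use,
# who uses it, unintended consequences). Field names are illustrative.
import json
from datetime import date

decision_record = {
    "date": str(date.today()),
    "model_choice": "logistic_regression",
    "intended_use": "triage incoming benefits claims",   # hypothetical
    "who_uses_it": "claims-processing staff",
    "unintended_consequences_considered": [
        "systematically delaying claims from one region",
    ],
    "training_data_notes": "audited age/region distribution before training",
}

# Append to a running log so each iteration of question-asking is kept.
with open("ai_decision_log.jsonl", "a") as f:
    f.write(json.dumps(decision_record) + "\n")
```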

View the full panel discussion on “Ethical AI”.

This article was produced by FedScoop and underwritten by Dell Technologies and Intel.
