The government’s eye in the sky is working on a way to understand more about what’s happening inside its own workers’ heads.
The National Geospatial-Intelligence Agency has been building a suite of software to assess the likelihood its employees could leak classified information, harm someone in the workplace or harm themselves. And now it’s planning to try a tool that “looks for emotion” by analyzing text in emails, work chats, social media and more.
Bob Lamon, director of NGA’s Insider Threat Program, told FedScoop that the software, which will be deployed as a pilot project, isn’t entirely devoted to catching violators. In particular, it will help analysts easily identify and dismiss false positives, which can be a costly problem for an agency with about 14,500 workers who provide crucial intelligence about what’s happening on the ground all over the world.
“We get a lot of hits that we have to investigate that we find out are absolutely nothing,” Lamon said. “We’re hoping that something that is looking at the sentiment of what people are writing can help us eliminate those false positives.”
The new emotion-gauging software would only pick up traffic on work devices, Lamon said. Agency personnel know that “the rule of thumb is, if they do it on our systems, we know about it,” he said.
The pilot project comes as NGA and other intelligence agencies are looking for novel ways to expand the use of analytics to root out insider threats. Infamous leakers like Edward Snowden and Chelsea Manning have created an intense focus on blocking unauthorized disclosures. But Lamon is quick to point to the NGA project’s potential for helping to prevent workplace violence and threat-to-self incidents.
“While, OK, espionage and unauthorized disclosures and all that are a huge problem, it’s really to protect our people,” Lamon said of the new tool. “It’s to protect our data. And it’s to protect our facilities.”
‘Advanced psycholinguistics and proprietary algorithms’
NGA officials referred questions on the specific technical specs of the proprietary technology to Stroz Friedberg, the Washington, D.C., company that developed it. Stroz Friedberg did not respond to multiple emails and phone calls, but a promotional video on its website provides more insight into the way the technology operates.
The application, called SCOUT, “uses advanced psycholinguistics and proprietary algorithms protected by 10 U.S. patents to identify troubling content in emails, texts and other communications sent through your networks,” according to the video.
An expert at NGA did say the software looks at words and combinations of words, and it compares those to examples the company has collected that it believes indicate certain sentiments.
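In broad strokes, the mechanism Lamon describes — matching words and combinations of words against examples the vendor associates with certain sentiments — resembles a simple lexicon-based classifier. The sketch below is purely illustrative; SCOUT's actual lexicon, weighting and algorithms are proprietary, and every phrase and category here is invented for the example:

```python
# Toy lexicon-based sentiment flagger. This is NOT SCOUT: the categories
# and phrases below are invented to illustrate the general approach of
# matching text against examples associated with particular sentiments.

SENTIMENT_LEXICON = {
    "disgruntlement": {"fed up", "sick of this place", "unfair"},
    "anger": {"furious", "can't stand"},
}

def flag_sentiments(text: str) -> dict:
    """Count how many lexicon phrases from each category appear in the text."""
    lowered = text.lower()
    return {
        sentiment: sum(phrase in lowered for phrase in phrases)
        for sentiment, phrases in SENTIMENT_LEXICON.items()
    }

hits = flag_sentiments("I'm fed up and furious about the review.")
# matches one phrase each for "disgruntlement" and "anger"
```

A real system would go well beyond phrase lookup — weighting terms, modeling context and comparing against a baseline for each writer — which is presumably where the "advanced psycholinguistics" in the vendor's marketing comes in.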
“When you look at it, that is what they’re saying — that they can detect emotion within the text,” Lamon said. “That is why we’re doing a pilot, to see if that truly is something that we can leverage.”
The SCOUT tool, Lamon said, tracks only text — not images or video. NGA employees can use social media such as Facebook "for a limited amount of time" at work, and that activity would be tracked if it was done on the NGA network, he said. The agency also has a classified chat platform for employees "that can be understood" by SCOUT, Lamon said.
When asked to specify what emotions SCOUT might track, Lamon said: “I mean I think they’re probably hard to put your finger on. I mean obviously depression or things like that that we might focus in on, and use other data sources to clarify or to understand.”
In the Stroz Friedberg promotional video, when the narrator mentions the need to track "early insider threat signals," words flash on the screen: disgruntlement, victimization, revenge, anger and blame.
The technology has been deployed successfully in government agencies, according to the promotional video, but the company does not specify which ones.
“The tool in particular that we’re looking to pilot we have heard through a variety of means that it has some capability,” Lamon said. “We want to look at that in a pilot program and see if it aids us in what we’re charged with to do in the building.”
The agency is still working through a contractual issue with the company to get the technology in the building, but Lamon said the agency hopes to begin the pilot within the next month.
At an Atlantic Council and Thomson Reuters event on Jan. 13 in Washington, NGA Director Robert Cardillo acknowledged that insider threat technology “is not a perfect science.”
“But there’s some correlation between disgruntlement with your job, some, or anger with a lack of a promotion or whatnot, that could correlate to doing something inappropriate with the data, whether it’s revenge or whatnot,” Cardillo said at the time.
The technical journey to predicting espionage
Asked if the new tool was chosen based on successful deployment in another agency in the U.S. intelligence community, Lamon noted that NGA’s system “is not predicated on anybody else’s.”
“Insider threat systems around the community are all different in a lot of ways, with their tools and technologies,” he said. “This one, again, we felt like it may be helpful in helping us eliminate the false positives we’re getting. But I can’t speak to other folks in the community.”
Congress has been working to assert its oversight of insider threat programs. For example, the House recently passed a bill to codify the Department of Homeland Security’s activities for detecting and mitigating insider threats.
An expectation of privacy?
At the Atlantic Council event, Cardillo noted that NGA hires sign paperwork indicating they are giving up their right to privacy while on agency networks.
“By the way, when you log on to your workstation in the morning the first thing you see is that same disclaimer: Congratulations you are now online on the NGA system. Everything you do on this system now will be tracked, and sorted and scored,” Cardillo said. “OK, so everyone knows what’s happening, this is not a covert action.”
He added: “It’s part of the trade you make to work in an intelligence organization. But only at work.”
But “the real key” to tackling insider threats, according to Lamon, “is to put a suite of tools together to make a risk assessment on every one of your folks that allows you to understand risk associated with each one of your folks.”
At NGA, Lamon said identifying a potential insider threat is not an automated decision, but one made by humans aided by automated tools.
“There are people in the middle that look at all this data and assess the totality of the person and the information where that issue needs to go,” he said.
And identification is not usually based on one comment, he said, “although it could be if that one comment were very discrete. More often than not, it’s based on a variety of information sources before we would ever come to that determination.”
For example, an employee would not be flagged just for being disgruntled.
“It’s not against the rules to be unhappy at work,” he said. “And until it is something that we need to look at, from their ability to retain classified information or that kind of thing, we don’t act on it at all.”