Deep learning is cutting-edge artificial intelligence. It's what Google's DeepMind used to build AlphaGo, which beat the world's top Go player earlier this year in China. It's what powers much of the machine vision in self-driving cars, giving them human-level accuracy. And it's being used by many of the world's top tech companies as the basis for recommender systems, fraud detection and cybersecurity.
Government should be using deep learning, because it is a sophisticated tool that can help agencies fulfill their missions in use cases as diverse as risk profiling, cost forecasting and the analysis of satellite imagery.
An additional benefit, one that aligns with both recent governmentwide policy and tight budgets, is that most of the best deep-learning algorithms are open source. That is, they are already implemented in free, human-readable code that agencies can test and pilot without large, multi-year procurements. Even the most advanced research is being published openly.
Government stands to gain significantly from open-source AI, just as it has benefited from other open-source big data tools such as Hadoop, Spark, Kafka, Elasticsearch and Cassandra. Open-source AI offers the same advantages: flexibility rather than vendor lock-in, permissive licensing, transparent and customizable code, and strong quality assurance.
Given the waves of catastrophic hacking that engulfed government bodies over the past few years, exposing weaknesses at both the state and federal levels, government should be doing everything it can to accelerate the adoption of technology that can better protect against and contain threats. Being conservative about AI has a huge downside, because other actors and nation-states are investing in it aggressively.
Beyond being open source, the deep-learning tools that will serve government most effectively should be scalable and fast enough to handle nation-state-level data volumes, and they should integrate well with the government's existing technologies. Newer AI technologies shouldn't require the government to rip out and replace hundreds of millions of dollars' worth of existing infrastructure.
Working with the existing stack should be a requirement; otherwise, AI becomes just another cost center rather than a source of greater effectiveness and efficiency.
As an open-source software company, Skymind has had the chance to see how large organizations both inside and outside the public sector manage their technology stacks, and we’ve witnessed the pitfalls and obstacles that AI solutions run into.
Integrations are a big one. One of the dirty secrets of data science and AI projects is how many never make it to production. That is, they never see real-world use and never produce real value. A few insights are generated and discussed in meetings, and then they're forgotten. One reason is that most data science tools don't integrate well with the enterprise production stack. For one, most data scientists work in Python and R, while many enterprise stacks run on Java and the JVM.
It was this experience that led us to develop Deeplearning4j, the most widely used deep-learning tool for Java, and it's why the Department of Homeland Security chose our framework to perform risk profiling for the Global Traveler Assessment System as it seeks to identify persons of interest among the high volume of travelers making their way to the U.S.
Tools that plug into previously adopted frameworks can ensure that data science doesn’t die on insight; that is, some data science tools produce insights but can’t be easily integrated into production systems, so those insights are difficult to apply to real-time data. By the same token, an AI solution has to serve multiple teams: not just data scientists but also the data engineers building data pipelines, and the DevOps teams that maintain the solution in production.
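To make the integration point concrete, here is a minimal, hypothetical sketch of what JVM-side model scoring can look like. Nothing here is a real library API: the `RiskScorer` interface and `LogisticScorer` class are illustrative names, and the fixed-weight logistic model stands in for a real trained network. The idea it shows is that production code can depend on a small scoring interface owned by the engineering team, while the data science framework is hidden behind it.

```java
import java.util.List;

// Hypothetical sketch: inference running inside the JVM stack itself,
// rather than in a separate Python environment. All names are illustrative.
public class JvmScoringSketch {

    // A minimal scoring interface. Production pipelines depend on this,
    // not on whichever framework the data scientists trained the model with.
    interface RiskScorer {
        double score(double[] features);
    }

    // Stand-in implementation: a fixed-weight logistic model. In practice
    // this would wrap weights trained elsewhere and loaded from storage.
    static class LogisticScorer implements RiskScorer {
        private final double[] weights;
        private final double bias;

        LogisticScorer(double[] weights, double bias) {
            this.weights = weights;
            this.bias = bias;
        }

        @Override
        public double score(double[] features) {
            double z = bias;
            for (int i = 0; i < weights.length; i++) {
                z += weights[i] * features[i];
            }
            return 1.0 / (1.0 + Math.exp(-z)); // sigmoid: probability in (0, 1)
        }
    }

    public static void main(String[] args) {
        RiskScorer scorer = new LogisticScorer(new double[] {0.8, -0.5}, 0.1);
        // Records are scored exactly where the rest of the JVM pipeline runs.
        List<double[]> batch = List.of(
                new double[] {1.2, 0.3},
                new double[] {0.0, 2.0});
        for (double[] record : batch) {
            System.out.printf("risk=%.3f%n", scorer.score(record));
        }
    }
}
```

Because the interface is plain Java, the same scoring path is testable by data engineers and deployable by DevOps without either team needing the data scientists' training environment.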
Open-source software is the bedrock of enterprise and government applications, from Linux through to Hadoop. The next layer to go open source is AI, and that's great news for government agencies. But open source alone is insufficient: agencies should make sure their tools still play well with others in the stack, so that they can march their AI solutions to the finish line. During our time in the government-focused startup accelerator DCode42, the Skymind team learned firsthand the kinds of partnership and collaboration that agencies and departments require to adopt and implement new technology.
Chris Nicholson is CEO of Skymind.