A revolution is afoot. A set of decades-old algorithms has finally met with adequate data and computing power. Organizations around the world are using this artificial intelligence to make better decisions; government agencies are not far behind either. AI has shown tremendous potential and unbelievable promise. It is only natural that AI be applied to automate workflows based on something each citizen uses every day — language.
Major companies like IBM, Amazon and Microsoft, as well as upstarts like ours, Coseer, are investing in AI for language. The obvious course is to start with algorithms that have been so successful elsewhere. For example, deep learning, one such technique, can recognize objects in pictures and videos better than humans. It has predicted crop yields better than USDA professionals, and diagnosed cancers better than physicians. Deep learning and related algorithms should succeed with English as well, or so it has been presumed.
It turns out that natural language is different. We have found that deep learning and related algorithms do not work for natural language based use cases.
- To train effectively, deep learning needs lots of data that looks the same, or is “homogeneous”. In our experience, most organizations do not have ready access to such data.
- Deep learning needs training data to be annotated by subject matter experts (SMEs). To train an AI based on deep learning, every aspect and every feature of interest must be manually tagged. The cost of SME time becomes prohibitive. This is the single biggest barrier to adoption of AI for natural language.
- Because of these onerous requirements, data scientists using deep learning for natural language often can’t iterate multiple times. Such iteration is the key to AI systems’ accuracy.
Silicon Valley is already responding with alternatives. For example, something called Calibrated Quantum Mesh (CQM) begins with a simple idea – the same set of keywords can mean different things in different contexts. Think about “Good Day.” In San Francisco it implies a certain temperature, solar insolation, wind chill factor, and so on. However, if Byrd Station in Antarctica ever got to the same temperature, it would be closer to “end of the world” than to a “good day.” Or consider the word “loss,” whose meaning changes significantly when you move from general English to a specific domain like financial regulation.
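The intuition can be sketched in a few lines of code. This is not CQM itself, just a toy illustration of context-dependent interpretation; the location names and temperature ranges below are hypothetical, chosen only to mirror the “Good Day” example.

```python
# Toy sketch (not CQM): the same phrase resolves to different meanings
# depending on contextual signals such as location.
# All ranges and thresholds here are hypothetical, for illustration only.

def interpret_good_day(location: str, temperature_c: float) -> str:
    """Resolve the phrase "good day" against a simple context model."""
    # Hypothetical per-location ranges of a "pleasant" temperature (deg C).
    expected = {
        "San Francisco": (15, 25),   # mild coastal climate
        "Byrd Station": (-40, -20),  # typical Antarctic conditions
    }
    low, high = expected.get(location, (10, 30))
    if low <= temperature_c <= high:
        return "good day"
    if temperature_c > high + 30:
        # Far above anything this context expects.
        return "alarming anomaly"
    return "not a good day"

print(interpret_good_day("San Francisco", 20))  # → good day
print(interpret_good_day("Byrd Station", 20))   # → alarming anomaly
```

The point is that no keyword lookup alone can tell these cases apart; the interpretation only falls out once context is part of the model.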
CQM then goes on to develop a completely new AI technique for natural language, which is already seeing success. It has been applied to use cases like natural language search, intelligence gathering, and classification/redaction of sensitive information. Even a basic application like extracting structured knowledge from unstructured data begins to work much better. There is no need to annotate the training data, and the need for the data itself is reduced to a fraction.
Each government executive passionate about their mission already knows this – specific tools for specific goals yield much better results. It’s time they started thinking about specific kinds of AI for their biggest use cases – an AI specifically designed for English.
This past year Coseer went through the Dcode Accelerator, which supports companies like Coseer in their public sector go-to-market. The program better prepared us as a company for the demands of the government market. But just as importantly, it helped expose different types of emerging technology to the government executives and technologists who know what a valuable tool looks like for their mission. As Dcode ramps up for another AI cohort, we encourage all civil servants interested in emerging tech to get involved, and all AI and Big Data companies interested in the public sector market to apply.
Praful Krishna is CEO of Coseer.