Artificial intelligence can help transform health care by improving diagnosis, treatment, the delivery of patient care, and the efficiency of health care systems. But AI is only as good as the data it’s built on. AI developers in the health sector face multiple challenges, including limited access to health data, poor data quality, and concerns over the ethical use of data for AI.
The Office of the Chief Technology Officer (CTO) in the U.S. Department of Health and Human Services (HHS) is now exploring the potential for a department-wide AI strategy to facilitate the development of AI for health. As one important first step, the HHS Office of the CTO and the Center for Open Data Enterprise (CODE) co-hosted a roundtable in April to gather input from stakeholders outside of HHS. The “Roundtable on Sharing and Utilizing Health Data for AI Applications” convened more than 70 experts in AI and health data, bringing together HHS leaders with colleagues from other federal and state government agencies, industry, academia, and patient-centered research and advocacy organizations.
CODE has now published its report on the roundtable with recommendations for developing AI for health, and particularly for HHS in developing its AI strategy. Based on the roundtable and other research before and after the event, CODE has identified five main types of AI applications in health care:
- Reducing costs and administrative burden. AI can improve workflow for both clinicians and administrators. Natural language processing (NLP) and other AI tools can rapidly process electronic health records (EHRs), automatically transcribe handwritten medical notes, and free up time and reduce errors in other ways. The Centers for Medicare and Medicaid Services (CMS) uses AI to identify fraudulent and improper payments – a program that CMS says has saved the government about $42 billion so far.
- Connecting patients to resources and care. AI can be a critical tool to connect patients to resources and care, especially in rural areas where resources are scarce. New applications are using sensors and mobile health technology to help patients monitor their health in real time and provide coaching for conditions including diabetes and Parkinson’s Disease.
- Informing population health management. Population health management (PHM) involves using population-level data to identify broad health risks and treatment opportunities for a group of individuals or a community. Researchers can use AI to combine PHM with clinical data to identify individuals at risk for opioid abuse or other health concerns.
- Improving diagnosis and early detection. AI can help physicians accurately diagnose medical conditions in their patients and treat disease at an early stage. Using technologies like image recognition, NLP, and deep learning, AI can help doctors diagnose diabetic retinopathy, prevent brain deterioration following injury or coma, and assess the risk for diseases including cancer and heart disease. The goal is not to replace the doctor’s clinical judgment but to help physicians by providing “augmented intelligence.”
- Developing new drugs and therapeutics. AI can improve drug development across the entire development lifecycle. AI can also be particularly useful in matching patients to appropriate clinical trials, as several companies demonstrated in a recent Health Tech Sprint run by HHS. And new startups are using AI to find correlations between biomarkers and drug activity that can help clinicians personalize cancer treatment.
HHS now has the challenge of advancing these high-impact AI applications in the larger context of federal and departmental AI policy. A February 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence laid out a half-dozen objectives to guide federal agencies’ work on AI. These objectives, as well as the June 2019 National AI R&D Strategic Plan, are broadly aligned with six high-level recommendations in CODE’s report:
- Invest in IT infrastructure and expertise to support AI. HHS, like other federal agencies beginning to invest in and use AI, needs IT and data infrastructure as well as staff expertise to make AI applications possible. In addition to building this capacity within the department, HHS has the opportunity to create collaborative environments where data and code for AI applications can be tested, stored, and shared.
- Ensure access to data for AI while protecting privacy. Privacy concerns will become more pressing as individual-level health data is used for AI applications, especially as developers tap into data from wearable devices and other sources that are not now protected by privacy guidelines. HHS will need to provide guidance on privacy-protection strategies ranging from data de-identification to restricting access for qualified and vetted researchers.
- Use standards to improve data quality and interoperability. Roundtable participants identified many challenges in integrating data and metadata from multiple sources. HHS can help promote existing common data models, which make it easier to integrate different datasets.
- Remove administrative barriers to data sharing. The HHS Office of the CTO has been working to improve data sharing among HHS operating divisions, including CMS, the Food and Drug Administration (FDA), and the National Institutes of Health. One strategy is to create a set of standard data use agreements (DUAs) that HHS divisions can use to share data more efficiently and quickly. Standard DUAs for use within HHS could also become a model for agreements between HHS and outside partners.
- Clarify appropriate use of patient-generated data. Increasingly, patients are generating data about themselves – for example, through sensors and wearables and through social media – that can be used together with research and clinical data to generate new insights. Many of these data sources do not fall under regulations established by HIPAA, the Health Insurance Portability and Accountability Act. HHS can develop guidelines for patient-generated data to encourage its use under appropriate protections.
- Address concerns about accountability and bias. Many AI applications that use health data are being developed as a “black box” without clear information about the algorithms and data being used to make decisions. Guidelines are needed to ensure accountability and transparency and to reduce the risk of bias in health-related AI applications. The FDA may also have a role to play in overseeing AI applications and other health-related software.
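To make the de-identification strategy mentioned above more concrete, here is a minimal, illustrative Python sketch loosely inspired by the HIPAA Safe Harbor approach of removing direct identifiers and generalizing quasi-identifiers. The field names and record are hypothetical, and real de-identification requires far more rigor (e.g., expert determination and re-identification risk analysis, plus NLP for identifiers buried in free-text notes):

```python
# Illustrative sketch only: rule-based de-identification of a single
# patient record, in the spirit of HIPAA Safe Harbor. Field names here
# are hypothetical; any real dataset and policy would differ.

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "street_address"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Safe Harbor keeps only the first three digits of a ZIP code
    # (and only when the corresponding area is sufficiently populous).
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3] + "00"
    # Ages over 89 are aggregated into a single "90 or older" category.
    if "age" in out and out["age"] > 89:
        out["age"] = 90
    return out

record = {"name": "Jane Doe", "zip": "20201",
          "age": 93, "diagnosis": "type 2 diabetes"}
print(deidentify(record))
# {'zip': '20200', 'age': 90, 'diagnosis': 'type 2 diabetes'}
```

A production pipeline would layer approaches like this with the report’s other suggested safeguards, such as restricting access to qualified and vetted researchers, rather than relying on field stripping alone.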
These recommendations are one step in framing the issues and opportunities surrounding AI for health. “AI has the potential to revolutionize medical diagnosis, treatment, and the provision of healthcare,” says Mona Siddiqui, Chief Data Officer in the HHS Office of the CTO. “This roundtable and CODE’s report will help HHS develop a department-wide strategy to realize AI’s benefits.”
Katarina Rebello is Director of Programs at CODE, where Paul Kuhne is Project Manager for the roundtables program. This roundtable and CODE’s report were funded through a Patient-Centered Outcomes Research Institute (PCORI) Engagement Award Initiative.