Early physician involvement key
The panel also included Eric Just, senior vice president for product development at Health Catalyst, which builds analytics and decision support tools for health systems; Chris Mansi, MD, a neurosurgeon and the co-founder and CEO of Viz, a deep-learning medical imaging company; and Eric Williams, vice president of data science and analytics at Omada Health, a company that uses machine learning to track behavior change in patients managing chronic disease. These experts emphasized that while AI can be a boon to health care efficiency, it is essential to maintain transparency and to include physicians early in the development process.
“Transparency is a big deal,” said Just. “It is not enough to show [physicians] a risk score. You have to show risk factors so they can see the reasons why.”
There are multiple rationales for involving physicians early in the development of AI-enabled health care solutions. One is that physician input can identify key checkpoints where human judgment improves algorithm performance, which also boosts physician confidence in the conclusions generated. Another is that humans are better at handling anomalies or conflicting pieces of clinical information that software might overlook in favor of the dominant pattern.
“That strategic insight a physician can have—the ability to handle edge cases that break the rules and the patterns—is really important and we can’t lose that,” said Khoury.
Khoury also noted that physicians might be better equipped to identify biases than programmers working from decontextualized data. As AI technologies become more sophisticated and begin to rely on “deep learning” techniques—which learn patterns directly from large volumes of data rather than from explicit human-written rules—it’s important that we avoid passing along unconscious human biases, including those about gender and workplace roles. Khoury cited research published in Science showing how machines can learn biases from the language content commonly found on the web.
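The kind of learned bias described in the Science research can be illustrated with a small sketch. The word vectors below are invented toy values, not taken from any real model, and the probe is a simplified stand-in for the association tests reported in that work: it checks whether an occupation word sits closer to one gendered pronoun than another in the learned vector space.

```python
import math

# Toy word vectors standing in for embeddings learned from web text.
# These values are illustrative only, not drawn from any real model.
vectors = {
    "doctor": [0.9, 0.2, 0.1],
    "nurse":  [0.3, 0.9, 0.2],
    "he":     [0.8, 0.1, 0.3],
    "she":    [0.2, 0.8, 0.3],
}

def cosine(a, b):
    """Cosine similarity: higher means the model treats the words as more related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A simplified bias probe: does "doctor" associate more strongly with
# "he" than with "she" in the learned space? A nonzero gap means the
# model has absorbed a gendered association from its training text.
bias_gap = (cosine(vectors["doctor"], vectors["he"])
            - cosine(vectors["doctor"], vectors["she"]))
print(f"doctor gender association gap: {bias_gap:+.3f}")
```

A positive gap here would mean the model links “doctor” more closely to “he,” which is exactly the sort of absorbed association a physician reviewing an AI tool’s reasoning could flag before it shapes clinical output.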
Panelists noted that the best use of AI likely lies in striking a balance between automation and human judgment.
“The way we think about it is to build a system that finds the most effective balance between human and artificial intelligence,” said Williams, referring to tests at Omada Health designed to determine when AI or intervention from non-physician health coaches results in better care.
Still, there are areas of health care that AI already seems suited to address.
“There are real barriers to being able to manage population health through our health systems,” said Khoury. “Some of those are scale challenges, some of them are technological, some come from existing shortages or limited access to care and coverage. These technologies, if they’re successful, have the potential to help overcome these barriers.”
Another example of an innovative AI-enabled health care solution is the Human Diagnosis Project, or Human Dx for short. By combining the collective intelligence of physicians with machine learning, Human Dx intends to enable more accurate, affordable and accessible care for all. Research on its application to clinical decision making is currently underway in partnership with some of the world's leading medical institutions.
AMA has voiced its support for Human Dx as part of the application Human Dx submitted to the John D. and Catherine T. MacArthur Foundation’s 100&Change competition. The winner of the competition will receive a $100 million grant to fund a single proposal that, according to the foundation’s website, “promises real and measurable progress in solving a critical problem of our time.” Human Dx has since been announced as one of eight semifinalists from a pool of nearly 1,900 applicants.