
Tech entrepreneur Munjal Shah has built his career at the intersection of healthcare and artificial intelligence. As the founder and CEO of AI health startup Hippocratic AI, Shah sees tremendous potential for large language models to address systemic issues in the delivery of chronic and preventative care. However, realizing this potential requires nuanced application of AI capabilities, conceptual-level model training, and responsible development rooted in medical ethics.
Shah’s approach is informed by over a decade of experience commercializing bleeding-edge AI, including computer vision and conversational AI. With $50 million in early backing from top VC firms, Hippocratic AI aspires to lead the responsible application of generative AI in global healthcare.
The AI Talent Shortage in Healthcare
Munjal Shah cites staggering statistics about the growing mismatch between healthcare supply and demand. The World Health Organization projects a shortage of 10 million healthcare workers globally by 2030, stemming largely from an aging population with complex needs. “What if 350 million Americans all had 350 million health care workers to help them? Would we get better outcomes? Of course we would,” says Shah. “We just can’t afford it if it’s real people.”
With such acute talent shortages already straining healthcare budgets to the breaking point, there is simply no scalable or affordable way to meet growing demand strictly through hiring more staff. AI represents a pathway to provide consistent, personalized support for the 68 million Americans battling multiple chronic conditions today. Used judiciously as a force multiplier for human capabilities and effort, AI could help avoid 30% of nurse burnout cases and drastically improve patient outcomes.
Powerful Concept Learning with Large Language Models
At the core of Munjal Shah’s vision is the singular capability of large language models (LLMs) for versatile, conceptual learning. He contrasts this to previous generations of narrow AI, confined to interpolating between specific examples. “Now AI can learn conceptually. I’m not going to teach immunology by saying, go read 10 million health records and tell me how the immune system works. I’m going to say, ‘Go read three textbooks.’ And textbooks are conceptual documents with examples.”
By ingesting and contextualizing conceptual knowledge, today’s most advanced natural language models can appropriately apply those concepts in new situations. Munjal Shah believes this fundamental mode of learning uniquely suits LLMs to open-ended and conversational healthcare tasks. Rather than hard-coding every rule, exception, and branch, the AI instead develops an intrinsic sense of medical concepts.
Much like humans, LLMs can apply this conceptual knowledge to unfamiliar situations and lines of questioning. Their responses become more rounded, nuanced and adaptive as a result – critical strengths for interacting with unique patients in an ethical, responsible way.
Training AI the Medical Way at Hippocratic AI
Given the higher stakes involved, Munjal Shah knew training an AI for healthcare required more rigor and oversight than typical LLMs. That’s why Hippocratic AI curates custom datasets spanning peer-reviewed literature, case studies, medical exams and textbooks. This medical-specific content allows their models to deeply grasp concepts health practitioners expect one another to understand.
Hippocratic AI also solicits ongoing feedback from doctors, nurses and other experts as part of the training process. By critiquing model behavior during roleplays, staff help refine its responses to better match human norms. Across 106 mock medical certification exams, this technique has proven highly effective: Munjal Shah reports that Hippocratic AI’s LLM outscored even the likes of ChatGPT on most medical tests.
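The feedback loop described here – clinicians critiquing model behavior during roleplays, with approved responses folded back into training – can be sketched in miniature. Everything in the snippet below (the class names, the approval threshold, the selection rule) is a hypothetical illustration of this kind of expert-review pipeline, not Hippocratic AI’s actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Critique:
    """One expert's verdict on a model response during a roleplay."""
    reviewer: str          # e.g. "nurse", "physician"
    approved: bool
    note: str = ""

@dataclass
class RoleplayTurn:
    """A single model response plus the expert critiques it received."""
    prompt: str
    response: str
    critiques: list[Critique] = field(default_factory=list)

    def approval_rate(self) -> float:
        # No critiques means no endorsement: treat as unapproved.
        if not self.critiques:
            return 0.0
        return sum(c.approved for c in self.critiques) / len(self.critiques)

def select_training_pairs(turns, threshold=0.8):
    """Keep only the turns clinicians broadly endorsed as fine-tuning targets."""
    return [(t.prompt, t.response) for t in turns if t.approval_rate() >= threshold]

# Example: two roleplay turns reviewed by clinical staff.
turns = [
    RoleplayTurn(
        "Patient asks about a missed blood-pressure dose",
        "Take it as soon as you remember unless the next dose is near.",
        [Critique("nurse", True), Critique("physician", True)],
    ),
    RoleplayTurn(
        "Patient reports chest pain",
        "That is probably nothing to worry about.",
        [Critique("nurse", False, "Should escalate to emergency care")],
    ),
]

print(len(select_training_pairs(turns)))  # only the endorsed turn survives: 1
```

The design choice worth noting is the threshold: rather than accepting any single approval, responses only become training examples once a supermajority of reviewers endorses them, which is one simple way a safety-critical feedback loop can bias toward caution.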
This meticulous, collaborative training regimen gives Hippocratic AI confidence in deploying its LLMs for low-risk interactions. All applications still undergo extensive reviews to uphold safety and quality standards befitting its motto, “first do no harm.”
Super-Staffing Healthcare with Responsible AI Innovation
When asked about his vision for AI in medicine, Munjal Shah categorizes emerging use cases into three archetypes:
- Co-Piloting: Basic automation of rote workflows, offering incremental efficiency gains for human providers
- Autopiloting: Full automation of high-volume, low-risk tasks like patient outreach and education
- Super-Staffing: Strategic application of automation to drastically improve patient access and continuity of care
Munjal Shah considers super-staffing the most intriguing paradigm shift enabled by AI. In effect, responsible application of conversational LLMs could provide every patient their own personalized healthcare assistant. This always-available point of contact would track medications, vital signs and symptoms, offer coaching, coordinate care teams and more.
Despite having the conceptual competence for such tasks, most LLMs today lack the real-world knowledge to consistently operate safely. But with careful oversight, Shah sees autopilot and supervised-autonomy applications transforming population health over the next decade. Hippocratic AI’s customized LLMs could help fill widespread gaps in preventative care and chronic condition support.
Guiding responsible innovation in this domain ties directly to Shah’s central mission. By pairing rigorous medical training with ethical feedback loops, his company strives to unlock AI’s benefits while minimizing its risks. This practical approach gives Munjal Shah faith that LLMs can soon provide good counsel and caring support at unprecedented scale.