I trained as a physician, then spent two decades building software for clinical workflows, first as a clinician, then as a product leader. Three years ago I became obsessed with LLMs and why they work so well. Now I want to make sure they work safely.
I've lived three technology waves in healthcare. Safety didn't appear until regulations forced it.
EHRs, cloud, and now AI. Each time the pattern repeats: commercial pressure determines where the technology gets applied, and the hard questions get deferred.
I was there when electronic health records transformed the industry, and I watched the technology get applied in ways that served capital incentives more than patients or clinicians. Billing got inflated, and a disproportionate share of technology attention went into revenue cycle management, the cat-and-mouse game between providers and payers. Just because a technology is here does not mean it will be adopted for the greater good.
At Google Cloud, I was the healthcare go-to-market lead. The job was translating platform capability into market entry points and reading which problems had organizational readiness, not just technical feasibility. That experience gave me a window into how a technology wave reshapes not just healthcare but entire industries. The dynamics of AI deployment rhyme with what I observed during the cloud wave, but with higher stakes and faster timelines.
At Carrum Health, I led an organization-wide AI transformation across the technology and care operations functions. The strategy included voice agents, SMS-based nudging, real-time in-call assistance, and a conversational analytics layer. Four principles emerged: standardize before you automate, choose the right human-AI model (Centaur vs. Cyborg), maintain a skip list of what not to automate, and prove ROI with flat-file pilots before committing engineering resources. I saw first-hand the gap between "this works in a demo" and "this is running reliably in production."
Probabilistic systems are entering high-stakes domains, and few hard questions are being asked.
I've built software for clinical environments for two decades. I know what it means when a system influences a care decision.
The agentic AI platforms I evaluated at Carrum all invested in monitoring and observability: they can tell you what the model did. But none of that observability was purpose-built for healthcare. The monitoring frameworks these platforms use were designed for general customer service, not for clinical workflows, where the consequences of a wrong action are fundamentally different. Getting observability right for healthcare is itself an unsolved problem.
Beyond observability lies a harder question: why did the model do what it did? That is the domain of interpretability, where researchers like Chris Olah, Joshua Batson, and Neel Nanda, and teams at startups like Goodfire.ai, are doing foundational work. I have followed it closely, and I believe it is the path to genuine reliability and safety in high-stakes domains like health. But interpretability research, like observability, needs industry context to be actionable. The people doing this science need collaborators who understand the nuances of clinical workflows, patient safety, and the regulatory environment. I want to be one of those collaborators, helping bridge the gap between research output and clinical practice.
I trained as a physician. I worked in clinical environments where software is part of the daily routine of care delivery. I have a visceral, not theoretical, sense of what is at stake when a probabilistic system enters that environment. Most people reasoning about AI safety in healthcare do so abstractly. I've been inside the workflow. That is a different kind of context.
Making an intentional change toward the safety side of AI
After five years as CPO, I'm choosing to move toward the work I believe matters most. This is the third time I've made a change like this:
- In 2001, I left the predictable career of an anesthesiologist to pursue my passion for computers.
- In 2014, I exited a promising corporate career and took a leap of faith building startups.
- In 2026, I'm doing it again, because the most important work now is making sure AI does not cause catastrophic harm.
I see two paths for this work.

Operator path: Working inside or closely adjacent to a frontier lab or company to help make advanced AI systems safer in high-stakes domains. Healthcare is the first of those domains where I can contribute uniquely. This path feels more concrete and more closely tied to my existing experience.
Institutional path: Helping build governance structures, standards, and evaluation frameworks for advanced AI risk across sectors. This is a genuine interest that I want to explore further.
Both paths need people who have been in the room where deployment decisions get made.
I'm looking for teams working on AI safety, evaluation, and governance, especially in high-stakes domains. If that's you, let's talk.