Michelle Mello
Her Thinking — "The AI Arms Race in Health Insurance Utilization Review"
Published: January 2026, Health Affairs (top-tier health policy journal)
DOI: https://doi.org/10.1377/hlthaff.2025.00897
Co-authors: Artem Trotsyuk, Abdoul Jalil Djiberou Mahamadou, Danton Char
The Frame
She calls AI in UM an "arms race" — vendors selling tools to payers to deny faster, and to providers to appeal faster. Both sides automating against each other. She maps the full vendor landscape across prior auth, concurrent review, claims adjudication, and appeals.
Key Stats She Cites
- 84% of large health insurers using AI for some operational purpose (NAIC 2024 survey, 93 insurers, 16 states)
- 37% using AI for prior auth; 44% for claims adjudication; 56% for UM broadly
- 70% of large-employer group market using or exploring AI for prior auth
- Medicare Advantage: 93%+ approval rate on prior auth, but 82% overturn rate on appeal (meaning most appealed denials were wrong)
- ACA Marketplace: 20% denial rate, <1% appealed, but ~50% of appeals won
The Promise (What AI Could Do Right)
- Auto-approve the easy stuff. Most requests get approved anyway. AI can clear them instantly, freeing reviewers for hard cases.
- Fix the information gap. Providers submit bad documentation because non-clinical staff prepare it. AI can pull EHR data, check completeness, explain medical necessity.
- Lower barriers to appeals. Predictive tools identify winnable appeals, generative tools draft appeal letters. Could rectify wrongful denials at scale.
The Problems (What's Actually Happening)
- Toothless "humans in the loop." Payers say a human reviews every denial, but AI assembles the evidence and presents a pre-baked recommendation. Anchoring bias. Productivity pressure (decisions per hour). Some companies actively discourage overriding AI.
- Users don't understand AI. Front-line insurance staff can't explain how generative AI works, don't know it can be biased, but have high confidence in it. Recipe for automation bias — especially under volume/time pressure.
- Opacity. Predictive models give little info about what drives a classification. Hard to challenge. Fewer than 25% of insurers tell providers when AI was used. Only 50% even have a process for deciding when to disclose.
- Underperformance. Models often omit social determinants of health because they draw only on structured EHR data. Training data may not reflect a specific insurer's policies. Coverage policies change frequently — models go stale. Models trained on human-era decisions may not predict AI-era decisions.
- Perverse incentives. Provider-side tools trained on historical appeal outcomes will deprioritize claims where the insurer has been recalcitrant — effectively rewarding bad-faith denials. Insurer models trained on past mistakes will encode and perpetuate those mistakes.
- Uneven governance. 25%+ of large insurers don't document model accuracy or test for bias. ~40% have no governance committee reviewing AI tools. Some insurers running 1,000+ AI applications — impossible to monitor all of them.
Her Recommendations
- Stronger institutional governance. Vet tools before adoption. Monitor performance. Don't deploy faster than you can govern.
- Monitor for underperformance. Look at persistently denied requests for patterns suggesting bias. Check training data representativeness. Collect user feedback on hallucinations.
- Train front-line staff. Simulations on common errors and biases. Users need to know where tools fail.
- Meaningful human review. Her strongest position: "AI should only be used for approvals. Route denials to medical professionals with NO AI pre-workup presented." No AI-curated files for denial decisions. Less time saved, but "conducting a clear-eyed, thoughtful review before denying coverage is paramount."
- Transparency. Clearer denial explanations. Public disclosure of AI use. CMS should require reporting on which tools were used and outcomes.
The Two Futures
She frames it as a fork:
- Sunny: AI approves the easy stuff fast, improves communication, conserves reviewer time for hard cases
- Dark: AI makes prior auth cheaper to administer, lowering the barrier to expanding its use — "supercharging flawed processes"
"Insurers have financial incentives to move in both directions; AI can facilitate their best and worst impulses."
Vendor Landscape She Mapped (Exhibit 1)
Insurer-facing tools:
- AuthAI (Availity) — predictive, prior auth auto-routing
- Cohere Health — predictive, auto-approve routine requests
- nH Predict (Optum) — concurrent review, length-of-stay forecasting
- InterQual AutoReview (Optum) — concurrent review, medical necessity screening
- TriZetto Facets (Cognizant) — claims adjudication
- Cotiviti, Optum Payment Integrity, Codoxo — postpayment audit
Provider-facing tools:
- TransUnion — eligibility/coverage discovery
- Waystar, Notable Health, AKASA, Myndshft — prior auth automation
- Doximity GPT — drafts PA request letters and appeals
- Epic (Denials Appeals Assistant + Likelihood of Payment) — appeals drafting + win probability
- Counterforce Health — reads denials, drafts appeals
- Waystar Altitude Create — payer-specific appeal packets
- Anomaly Insights — parses EOBs, classifies appeal opportunities
Collaborative platforms:
- Rhyme — gold carding analytics (identify low-risk providers to waive PA)
- XSOLIS Dragonfly — shared payer-provider scoring + UM note drafting
Why She Matters for DaisyAI
- She's mapping the battlefield we're on. This article is the definitive policy framework for AI in UM as of January 2026. Every investor, regulator, and health system exec reading Health Affairs will encounter this frame.
- Her "AI for approvals only" position is directionally aligned with us. If we can build a tool that helps payers make better, faster, more defensible decisions — not just cheaper denials — we're on the right side of where Mello says regulation should go.
- The vendor landscape exhibit is our competitive map. We should know every company on that list.
- Her concerns about the "arms race" frame are real. DaisyAI working for Premera on appeals defense while Dan builds appeal letters for providers — we're literally living the arms race she describes. The payer-provider clearing vision (Dan's end state) is her "collaborative platforms" category.
- Perverse incentives concern is worth internalizing. If we build tools that help payers deny more efficiently, we're the villain in her story. If we build tools that help them approve correctly and review denials more thoughtfully, we're the hero.
- A relationship with her = regulatory foresight + academic credibility. Her recommendations will likely influence CMS rulemaking and state regulation. Knowing where she's pushing means we can build ahead of the regulatory curve.
Awards & Recognition
- National Academy of Medicine (elected 2013)
- Alice S. Hersh New Investigator Award (AcademyHealth)
- Robert Wood Johnson Foundation Investigator Award in Health Policy Research
- Greenwall Faculty Scholars Award in Bioethics
- Open Science Champion Prize
- John A. Benson Jr., MD Professionalism Article Prize
Teaches
- Health Law: Improving Public Health
- Torts
Interactions
| Date | Type | Notes |
|---|---|---|
| — | — | (need to backfill — how did we connect? what's been discussed?) |
Notes
- The policy narrative cluster is getting strong: Jeremy Friese (CMS/WISeR credibility), Ross Hoffman (payer exec + policy interest), Michelle Mello (academic health law authority). This is a real strategic asset for fundraising and positioning.
- Her Stanford Health Care AI governance work means she's reviewing tools like ours in production — she has practical, not just theoretical, perspective.
- The Health Affairs piece will be widely read by exactly the people we're selling to and raising from. Worth referencing in pitch materials.
Who
- Professor of Law at Stanford Law School + Department of Health Policy at Stanford School of Medicine
- One of the leading empirical health law scholars in the country
- Elected to the National Academy of Medicine (2013) — one of the highest honors in health/medicine
- BA '93, previously at Harvard School of Public Health (directed Program in Law and Public Health)
- 250+ publications on medical liability, public health law, AI, data ethics, pharmaceuticals, research ethics
- Editorial boards: JAMA Health Forum, Journal of Health Politics, Policy and Law
- Co-authors with Stanford Health Care AI governance team — she's not just writing theory, she's reviewing actual tools in production
Connection to Us
- Writing about UM / utilization management topics — directly relevant to DaisyAI's space
- (How did we connect? Need to backfill)