The Vision
What DaisyAI is building — and why the center is the only place to build it from.
The Real Problem
Healthcare's biggest source of waste isn't on the payer side or the provider side. It's at the interface between them.
When a patient is admitted, two organizations start working the same case from opposite ends. The hospital's case manager is documenting clinical status. The health plan's UM nurse is reviewing medical necessity. They're looking at the same patient, often the same clinical data, trying to answer the same fundamental question: does this patient need this care?
But they can't see each other's work. They're operating from incomplete, siloed versions of the same clinical reality. The payer builds an interpretation. The provider builds a different interpretation. When the interpretations don't match, the result is a denial. Then an appeal. Then a peer-to-peer review. Then sometimes litigation. Billions of dollars a year spent fighting over what the right answer was — when in most cases, if both sides had been looking at the same complete picture and applying the same rigorous analysis, they would have agreed.
The adversarial dynamic isn't inevitable. It's an artifact of bad infrastructure. Both sides are actually trying to answer the same question. They just can't collaborate on the answer because the systems weren't built for collaboration. They were built for each side to defend its position.
The Orientation
DaisyAI does not optimize for payers. DaisyAI does not optimize for providers. DaisyAI optimizes for the correct clinical answer.
This is not a philosophical stance. It's a business position. Because the correct clinical answer — arrived at rigorously, transparently, with full access to the relevant data — is the cheapest possible outcome for everyone. Correct first-time decisions eliminate denials, appeals, peer-to-peer reviews, and the administrative overhead that surrounds all of it. The payer saves money. The provider saves time. The patient gets the right care without bureaucratic delay.
When you optimize for one side, you create waste. A tool that helps payers deny more efficiently makes providers fight harder. A tool that helps providers appeal more effectively makes payers tighten criteria. The arms race escalates. Everyone's costs go up. The patient sits in the middle, waiting.
When you optimize for the right answer, both sides benefit. That's the only sustainable position. And it's the only position from which you can build something that both sides will trust.
The Core Insight
AI can do anything you can explain in words. LLMs operate on language — anything that's been articulated, structured, and encoded, they can reason about, act on, and scale. The ceiling on what AI can do in healthcare operations is set entirely by how well someone has translated the implicit knowledge of the system into explicit, operational form.
That implicit knowledge exists on both sides. The head nurse at the health plan who knows which cases are going to be problems. The case manager at the hospital who can feel when the documentation won't support the status. The medical director who understands why the criteria say one thing but the clinical reality is more nuanced. None of this is written down. None of it is in the software systems. It's carried in people's heads, in relationships, in institutional memory, in the things nobody says out loud.
DaisyAI's work is the translation of that implicit knowledge — from both sides — into a shared clinical intelligence layer that AI can operate on. Not the payer's version of the truth. Not the provider's version. The actual clinical picture, analyzed rigorously, made transparent and accessible to everyone who needs it.
The Translation Pipeline
The work has three stages. Each one is a different kind of hard.
Implicit → Explicit (The Human Work)
This is the part that requires being in the room. Sitting with nurses during case reviews. Listening to medical directors explain why they override criteria in certain situations. Understanding why the VP of Clinical Operations built that workaround in 2019 and why nobody's touched it since. Hearing what's not being said. Naming the elephant in the room.
This work happens on both sides of the interface. Inside the health plan, understanding how UM decisions really get made. Inside the hospital, understanding how clinical documentation actually flows and where the gaps are. At the interface, understanding why two reasonable people looking at the same case reach different conclusions.
This is not engineering work. It's not consulting work in the traditional sense. It requires genuine care — you have to actually invest in understanding these people, their pressures, their frustrations, their expertise. Nurses know when someone is there to "optimize" them versus when someone is there to help them. The translation only works if the translator has earned real trust. You build as if you're building for someone you care about, because you do.
The output: articulated workflows, named decision points, documented edge cases, structured understanding of how work actually gets done — on both sides — versus how the org chart says it gets done.
Explicit → Encoded (The Synthesis Work)
This is where healthcare understanding meets AI-native thinking. You take the explicit knowledge from both sides and structure it into forms that LLMs can operate on — shared ontologies, clinical decision frameworks, criteria mappings, agent specifications.
The key word is shared. The encoding doesn't privilege the payer's interpretation or the provider's interpretation. It encodes the clinical reality: what data exists, what criteria apply, what the evidence supports, where the judgment calls are. Both sides can connect to this. Both sides can see the reasoning. The analysis is honest.
This requires people who are LLM-native — who think in terms of what can be articulated versus what can't, who understand how to structure knowledge for AI consumption, who see the abstractions that connect one organization's workflow to another's. These people don't come from traditional engineering or traditional healthcare. They come from the intersection, and they're rare.
The output: encoded systems — agents that can analyze cases, reason about medical necessity, synthesize clinical documentation, and surface the shared clinical truth that both sides need. Not payer tools. Not provider tools. Shared intelligence.
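To make the idea of a shared encoding slightly more concrete, here is a deliberately minimal sketch, in Python with entirely hypothetical names — nothing below reflects DaisyAI's actual schemas or criteria — of what one encoded decision point might look like: a medical-necessity criterion tied to evidence that both payer and provider can inspect, so the reasoning is visible rather than buried in either side's workflow.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One piece of clinical documentation bearing on a criterion."""
    source: str      # e.g. "nursing note", "lab result" (illustrative)
    supports: bool   # does this evidence support the criterion?
    excerpt: str     # the language a reviewer on either side would see

@dataclass
class Criterion:
    """A single named decision point in a shared criteria mapping."""
    id: str          # hypothetical identifier, e.g. "IP-RESP-01"
    description: str
    evidence: list[Evidence] = field(default_factory=list)

    def met(self) -> bool:
        # Deliberately simple rule for illustration: met only if some
        # evidence supports the criterion and none contradicts it.
        return (any(e.supports for e in self.evidence)
                and all(e.supports for e in self.evidence))
```

The point of the sketch is not the logic, which is trivial here, but the shape: the criterion, the evidence, and the judgment are all explicit objects that either side can query, rather than conclusions locked inside one organization's review process.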
Encoded → Operational (The Deployment Work)
The encoded systems go live. Integrated with the existing tech stacks on both sides — Jiva, TruCare, Epic, whatever they run. Working alongside existing teams. Producing real results: faster decisions, fewer disputes, better clinical accuracy, less administrative waste.
The operational systems generate feedback. Cases where the analysis was wrong. Edge cases nobody anticipated. New patterns in how the payer-provider interface creates friction. That feedback flows back through the pipeline — surfacing new implicit knowledge, improving the encoding, making the operations better.
The pipeline is a loop, not a line. Each cycle makes the shared intelligence layer smarter — and makes the adversarial dynamic a little less necessary.
The Data Position
Nobody owns the clinical data. It belongs to the patient's care episode. The payer has some of it. The provider has some of it. Neither has the complete picture.
DaisyAI's position is at the data layer — not owning it, but being the trusted analytical layer that both sides provision access to. We don't hoard data. We don't build proprietary databases. We build the intelligence that makes the data useful — the ontologies, the reasoning, the clinical analysis that turns raw clinical information into a shared understanding of what the patient needs.
Both sides connect to this. Both sides benefit from the analysis being right. Both sides trust it because it's not optimizing for either one — it's optimizing for the correct answer.
This is what a clearing house looks like in the AI era. Not a claims processing system. Not an interchange format. A shared clinical intelligence layer that both sides of the healthcare transaction can rely on.
What Compounds
Every engagement produces compounding value at multiple layers:
For the client (payer or provider): Their implicit operational knowledge becomes explicit, encoded, and operational. They get better outcomes, faster decisions, and institutional intelligence that persists regardless of staff turnover.
For the system: Each engagement deepens the shared intelligence layer. The understanding of how UM decisions actually get made — not theoretically, but in practice, across multiple organizations, on both sides — becomes the most comprehensive in the industry. The clinical ontologies get richer. The edge cases get mapped. The translation gets faster.
The moat is the accumulated translation from both sides. A competitor that only works with payers has half the picture. A competitor that only works with providers has the other half. Nobody else has sat on both sides of the interface, earned trust from both, and encoded the understanding into shared systems. That's the position we're building toward.
The People
You cannot hire 50 generic engineers to do this work. You cannot start from an existing organizational ontology — the legacy consulting playbooks were built for a pre-AI world. And you cannot hire people who think in terms of payer-first or provider-first. You need people who think in terms of the right answer.
The people who do this work are:
- LLM-native — they think in terms of what AI can and can't do, naturally
- Emotionally invested — they build with care and precision, as if building for someone they love, because the people on the other end of these systems are real nurses, real doctors, real patients
- Cross-domain — they hold healthcare knowledge and technical capability in the same mind, not in separate departments
- System-oriented — they see the payer-provider interface as one system, not two adversaries. They refuse to optimize for one side at the expense of the other
- Translators — they can sit in a room with a VP of Clinical Operations or a hospital CMO, hear what's not being said, and turn it into shared system specifications
Scaling the company means developing more of these people. The founders' intuition — the ability to see the system as a whole, to earn trust on both sides, to encode what they learn into shared intelligence — has to become teachable. The company scales when the process of translation has itself been translated into institutional practice.
12 / 24 / 36
12 Months: Build the Foundation
Deep forward-deployed engineering (FDE) engagements starting with utilization management. Premera is the first client — a health plan. But the orientation from day one is the interface, not the payer.
What this means practically: when we're inside Premera building UM systems, we're not building denial optimization. We're building rigorous clinical analysis that gets the medical necessity decision right the first time. The output should be something that a provider could look at and say, "That's a fair and thorough analysis." If the provider would object to our reasoning, we've built the wrong thing.
What we're proving:
- Organizations will pay for this translation work (revenue validation)
- The patient-centered orientation produces better outcomes than side-optimized tools (value validation)
- The encoded systems work in production (technology validation)
- The ontologies partially transfer across organizations (compounding validation)
What we're learning:
- What the shared clinical intelligence layer actually looks like in practice
- How much of the encoding is reusable versus organization-specific
- Where the payer-provider interface creates the most friction and waste
- How to build trust with both sides simultaneously
24 Months: Cross the Interface
The ontologies from the first engagements have been tested. Patterns are emerging in how the payer-provider interface creates waste. We've been inside payer operations deeply enough to understand one side. Now we start working with providers — not to optimize their side, but to complete the picture.
With engagements on both sides, the shared intelligence layer starts to take real shape. We can see the same case from both perspectives. We can identify where the disagreements come from — and in most cases, it's not a difference of opinion, it's a difference of information. The analysis that resolves the disagreement is usually straightforward once both sides' knowledge is encoded.
Existing clients are expanding into adjacent workflows. New clients deploy faster because the foundational ontologies carry over. The team is growing — not by hiring bodies, but by developing translators who carry the company's orientation: the right answer for the patient, not for either side.
36 Months: The Clearing House Emerges
DaisyAI is the trusted analytical layer that sits at the payer-provider interface. Both sides connect to it. Both sides benefit from it. The shared clinical intelligence is the most comprehensive in the industry — built from thousands of hours of translation work on both sides of the interface, across multiple organizations.
New engagements deploy in weeks because the foundational work has been done. The adversarial dynamic around UM is measurably reduced at organizations where DaisyAI operates — fewer denials, fewer appeals, faster decisions, better outcomes. Both sides see this in their numbers.
The company has earned something no one else has: the trust of both payers and providers simultaneously. Not because we're neutral — we're not neutral, we're biased toward the correct clinical answer — but because both sides have seen that optimizing for the right answer serves them better than optimizing for their own position.
The Longer Arc
The clearing house position, once established, extends naturally:
Beyond UM: Every payer-provider interaction has the same structural problem — siloed data, adversarial incentives, duplicative work. Care management. Prior authorization. Quality measurement. Network adequacy. The shared intelligence layer applies to all of them.
The data becomes infrastructure. As more organizations connect to the shared analytical layer, the clinical intelligence compounds. Patterns that are invisible at one organization become visible across many. Best practices emerge from data, not from consultants' opinions. The system gets smarter in ways that benefit everyone connected to it.
Payer-provider integration becomes real. Not as a mandate or a regulation, but as a practical reality. When both sides are already using the same shared intelligence layer, the barriers to real-time collaboration dissolve. Prospective medical necessity determination instead of retrospective review. Shared clinical documentation instead of duplicative charting. Aligned incentives through transparency rather than adversarial review.
This is the long game. It starts with one UM engagement at one health plan. It ends with the infrastructure layer that makes healthcare operations work as a system rather than as two sides fighting.
Why This — Not Something Else
Why not just build for payers? Because payer-optimized tools make the adversarial dynamic worse. And because you only have half the picture. The most valuable position in healthcare isn't on either side — it's at the center, where you can see the whole system.
Why not just build SaaS? Because the ceiling on healthcare AI is set by the quality of the translation from implicit to explicit. SaaS vendors skip this step. They ship generic tools and hope the customer figures out how to apply them. In healthcare, customers don't.
Why not just do consulting? Because consulting doesn't encode. The knowledge walks out the door. We go in, understand the problem, and encode it into systems that keep running and keep learning.
Why not wait? Because the technology isn't the bottleneck — the translation is. And the translation requires trust, which requires time in the room. Every month we're doing this work, we're building an advantage that a later entrant will take years to replicate. On both sides.
What We're Asking Investors to Believe
Three hypotheses:
- The correct clinical answer is the cheapest answer. Most waste in healthcare operations comes from getting decisions wrong and then fighting about it. A shared intelligence layer that gets it right the first time saves money for both sides. This is the market.
- The translation compounds across both sides. What we learn inside a health plan partially transfers to the next health plan. What we learn when we cross the interface to providers completes the picture in ways that unlock new value. The accumulated understanding of the payer-provider interface is the moat.
- Trust at the center is buildable. Both payers and providers will work with a company that refuses to optimize for either side, if that company demonstrates that the patient-centered approach produces better results for everyone. This trust is the hardest thing to build and the hardest thing to replicate.
If all three are true, DaisyAI becomes the shared clinical intelligence layer for American healthcare — the infrastructure that makes the system work as a system. We're raising $2M to start building it.