docs/thinking/2026-03-03-conversation-first-ui

Conversation-First Clinical UI

March 3, 2026


The Insight

The default assumption in healthcare software is: build a dashboard, add buttons, train users. Every utilization review (UR) tool looks like an EHR bolt-on — queues, forms, structured workflows, mandatory training sessions.

But if you have an LLM doing the work, the interface question flips entirely.

Natural language is the zero-training interface. A nurse doesn't need to learn a layout, find buttons, or understand a new workflow. She just talks to the agent like a colleague.

The conventional wisdom — "buttons are easier than typing" — is true for power users who already know the software. It's wrong for someone encountering a new tool for the first time with no desire to learn it. Saying "approve" is easier than finding the green button on a screen you've never seen before.

The Product Implication

The primary interface is conversational, not a dashboard with a chat sidecar. The structured elements — case summaries, criteria matches, evidence — are things the agent presents within the conversation, not pages the nurse navigates to.

This scales naturally to complexity:

  • Simple case: Agent says "this looks straightforward, approve?" Nurse says "yes."
  • Complex case: Agent says "this one's tricky, here's why, what do you think about X?" Nurse engages deeper.

Same interface for both. The conversation adapts to the complexity. The UI doesn't need to.

The Throughput Question

If a nurse is handling 30+ cases a day, does conversation slow her down compared to rapid-fire clicking through a queue?

Maybe not — if the agent is processing everything autonomously and only surfacing what needs human input, the volume she actually touches drops dramatically. The agent batches, pre-processes, and presents a stream of "here's what I did, here's what I need from you."

No queue. No dashboard. Just a conversation about her caseload.

Why This Matters Strategically

  1. Zero training — the product's learning curve is "can you have a conversation?" This is a massive adoption advantage in healthcare, where training costs kill rollouts.

  2. Different from everything else — every competitor looks like enterprise software. DaisyAI looks like talking to a smart colleague. That's a memorable pitch.

  3. Agent-native architecture — instead of building a traditional app and wedging AI into it, you build around the conversation and let structured UI be the exception, not the rule.

  4. The Claude Code parallel — Claude Code proved that a conversational interface can be the primary way to get complex work done, even when structured tools exist. DaisyAI applies the same insight to clinical work: the agent does the work, shows its reasoning, and the human responds naturally.

What This Changes

  • Don't over-invest in complex dashboard UI. The dashboard is a secondary view, not the primary workspace.
  • The primary design problem is: how does the agent talk to a nurse? Tone, pacing, information density, when to summarize vs. show detail.
  • Train the agent on how to talk to nurses, not nurses on how to use the software.
  • The interface might look closer to iMessage than Salesforce.

Origin: Strategy conversation between Michael and Claude, March 3, 2026. Started from "what's the generalization path from Claude Code" and arrived here by working through what zero-training actually means.
