FDE Engagement Playbook: Technical Deep Dive
For Thomas Startz - DaisyAI February 2026
This is the technical reference for running FDE engagements with health plans on utilization management. It covers what you'll encounter inside a health plan's IT stack, how to run the engagement phases, what to learn, and what to build.
Table of Contents
- Health Plan Technical Architecture
- What FDE Engagements Actually Look Like
- Technical Skills to Develop
- Learning Resources
- What to Build Now
1. Health Plan Technical Architecture
1.1 The Core Systems Stack
A health plan runs on layered systems. You need to understand all of them because UM touches every layer.
┌─────────────────────────────────────────────────────┐
│ MEMBER-FACING LAYER │
│ Member portal, mobile apps, call center systems │
├─────────────────────────────────────────────────────┤
│ PROVIDER-FACING LAYER │
│ Provider portal, EDI gateway, fax intake, Availity │
├─────────────────────────────────────────────────────┤
│ CARE MANAGEMENT / UM LAYER │
│ Jiva (ZeOmega), TruCare, GuidingCare, InterQual, │
│ MCG, custom UM workflow engines │
├─────────────────────────────────────────────────────┤
│ CORE ADMIN PROCESSING SYSTEM (CAPS) │
│ QNXT, Facets, HealthEdge HealthRules Payer, │
│ PLEXIS, Optum CAPS │
├─────────────────────────────────────────────────────┤
│ INTEGRATION / DATA LAYER │
│ InterSystems HealthShare, MuleSoft, Rhapsody, │
│ FHIR APIs, EDI translators, data warehouse │
├─────────────────────────────────────────────────────┤
│ INFRASTRUCTURE LAYER │
│ Azure/AWS, on-prem data centers, Kubernetes, │
│ SAN storage, SQL Server/Oracle/Postgres │
└─────────────────────────────────────────────────────┘
1.2 Core Admin Processing Systems (CAPS)
These are the "heart and soul" of a health plan. They handle membership enrollment, claims adjudication, provider network management, billing, and benefits configuration.
TriZetto QNXT (Cognizant)
- Enterprise healthcare admin platform
- Modules: Claims, Enrollment, Provider, Billing, UM Workflow
- QNXT Utilization Management Workflow tracks pre-auth requests and medical necessity reviews
- Tech: Windows/.NET ecosystem, SQL Server backend
- Deployed at 200+ health plans
- Integration: SOAP/REST APIs, EDI, HL7v2
TriZetto Facets (Cognizant)
- Competitor/sibling to QNXT under same parent
- Modular: claims, billing, enrollment, provider networks
- Typically larger plans (Blues, national payers)
- Tech: Mainframe origins, Java middleware, Oracle DB
HealthEdge HealthRules Payer
- Modern alternative, gaining market share
- API-first architecture, SOA design
- HealthEdge Source for payment integrity
- Single API integration with any CAPS
- Auto-adjudication engine with configurable rules
- Content libraries updated biweekly
PLEXIS Healthcare Systems
- Claims adjudication + admin for smaller/mid-size plans
- Rules engine, API integration, HIPAA-compliant
Oracle Health Insurance
- Claims adjudication and pricing
- Data model: claim → claim lines → messages → diagnoses → status history
- Adjudication workflow: intake → validation → pricing → benefits → COB → payment
- Pend reasons tracked at both claim and claim-line level
- Limit consumption data for utilization tracking
1.3 Utilization Management Systems
These are where DaisyAI's AI agents plug in. This is your primary integration target.
ZeOmega Jiva (Best in KLAS 2022-2025 for Payer Care Management)
- Population health enterprise management platform
- UM capabilities: medical necessity assessment, care review, stay requests, procedure review
- Real-time auto-adjudication based on pre-configured or client-defined rules
- Provider self-service: providers enter auth requests and get real-time approvals
- FHIR-based interoperability gateway
- Supports Patient Access API, Provider Directory API, Payer-to-Payer Data Exchange
- AI-powered analytics and workflow automation
- EHR-enabled care plans with decision support
TruCare (Conduent)
- Care management and population health
- Competes with Jiva for care coordination workflows
- Workflow configuration, task routing, clinical documentation
GuidingCare (HealthEdge)
- Care management platform integrated with HealthRules Payer
- UM, case management, disease management modules
- API-based integration with CAPS
Custom/Legacy Systems
- Many large plans (Blues especially) have homegrown UM systems
- Often decades-old, mainframe-connected
- Integration usually via batch files, HL7v2 messages, or screen scraping
- This is where the pain is highest and the opportunity is greatest
1.4 Clinical Decision Support
These are the criteria engines that nurses use to make medical necessity determinations. Understanding how they work technically is critical for DaisyAI.
InterQual (Optum/UnitedHealth)
- Clinical decision support for payers, providers, government
- Evidence-based criteria organized by level of care (inpatient, observation, outpatient, etc.)
- Software includes UI for reviewers to input clinical data points and get pass/fail against criteria
- Stricter, more granular clinical benchmarks than MCG
- Licensing: per-seat, annual subscription
- Integration: Optum provides embedded modules; custom integration via proprietary APIs
- UnitedHealthcare recently shifted to InterQual from MCG, signaling industry consolidation
- Key limitation: criteria are rigid decision trees; can't handle clinical complexity alone, hence the physician advisor override pathway
MCG (Hearst Health)
- "Milliman Care Guidelines" - the other major criteria set
- Simpler and more user-friendly than InterQual
- MCG Cite AutoAuth: automated authorization when criteria met, pends to nurse reviewer when not
- MCG Health AI API: new AI-powered clinical decision support endpoint
- Integration via Availity platform (many Blues plans use this pathway)
- 28th Edition currently in use
- Evidence-based but updated less frequently than InterQual
- Digital integration: configurable rules engine that plugs into UM workflow platforms
Milliman (separate from MCG now)
- Health cost guidelines for actuarial/financial analysis
- Less commonly used for individual case review
- More relevant for population-level analytics
How criteria engines work technically:
Input: Clinical data points from patient chart
- Diagnosis codes (ICD-10)
- Procedure codes (CPT/HCPCS)
- Vital signs, lab values
- Current medications
- Comorbidities
- Functional status
Process: Decision tree evaluation
- Does the patient meet criteria for [level of care]?
- Each criterion is a boolean check against a threshold
- All required criteria must pass (AND logic)
- Some criteria are alternative pathways (OR logic)
Output:
- MEETS CRITERIA → auto-approve or approve with nurse sign-off
- DOES NOT MEET → pend for physician advisor review
- INSUFFICIENT INFO → request additional documentation
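The AND/OR evaluation logic above can be sketched in a few lines of TypeScript. This is a conceptual model only, with illustrative type names, not InterQual's or MCG's actual API:

```typescript
// Conceptual sketch of criteria-engine logic: required criteria use AND
// logic; criteria sharing an alternativeGroup use OR logic within the group.
type CriterionResult = 'PASS' | 'FAIL' | 'NO_DATA';

interface EvaluatedCriterion {
  id: string;
  required: boolean;          // must pass (AND logic)
  alternativeGroup?: string;  // any one in the group suffices (OR logic)
  result: CriterionResult;
}

type Determination = 'MEETS_CRITERIA' | 'DOES_NOT_MEET' | 'INSUFFICIENT_INFO';

function evaluate(criteria: EvaluatedCriterion[]): Determination {
  // Missing data on a required criterion -> request more documentation.
  if (criteria.some(c => c.required && c.result === 'NO_DATA')) {
    return 'INSUFFICIENT_INFO';
  }
  // Every required criterion must pass.
  if (criteria.some(c => c.required && c.result !== 'PASS')) {
    return 'DOES_NOT_MEET';
  }
  // For each OR group, at least one member must pass.
  const groups = new Map<string, CriterionResult[]>();
  for (const c of criteria) {
    if (c.alternativeGroup) {
      const list = groups.get(c.alternativeGroup) ?? [];
      list.push(c.result);
      groups.set(c.alternativeGroup, list);
    }
  }
  for (const results of groups.values()) {
    if (!results.includes('PASS')) {
      // Nothing passed, but missing data means more docs could change it.
      return results.includes('NO_DATA') ? 'INSUFFICIENT_INFO' : 'DOES_NOT_MEET';
    }
  }
  return 'MEETS_CRITERIA';
}
```

For example, an inpatient pneumonia case with a confirmed diagnosis (required) and any one passing severity marker (OR group) evaluates to MEETS_CRITERIA.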
1.5 The Premera-Specific Stack
Based on public information, here's what Premera Blue Cross runs:
Infrastructure
- Azure Kubernetes Service (AKS) - migrated from on-prem
- Reduced TCO by 60% with AKS migration
- Microsoft ecosystem: Windows Server, SQL Server, Outlook
- Dell EMC SAN for storage
- Cloudflare CDN/security
- Cloudera for big data/analytics
Integration Platform
- InterSystems HealthShare: unifies claims + clinical data
- Connected to Health Information Exchanges (HIEs) across 11 US states
- Receives inpatient/ED alerts from 400+ EMRs
- InterSystems Health Connect (Ensemble) as integration engine
- HealthInsight for analytics
- Cache database (InterSystems proprietary, now IRIS)
Data Sources Premera Aggregates
- Claims data (their own adjudication system)
- Clinical data from HIEs (ADT feeds, lab results, discharge summaries)
- EMR data from 400+ provider EMR systems
- Pharmacy claims
- Provider directory data
Notable: Premera uses VIM (now Vim.ai) for real-time provider alerting - sends secure messages to PCPs when their patient has an ER visit, including the ED physician's name and medications prescribed. This tells you they're technologically progressive and open to AI/real-time integrations.
1.6 Data Flow: Claim → UR Review → Determination → Appeal
1. SERVICE REQUEST
Provider submits prior auth request via:
├── EDI X12 278 transaction (electronic, ~35% of requests)
├── Provider portal (web form)
├── Fax (still 50%+ of clinical documentation)
└── Phone call (transcribed by intake staff)
2. INTAKE & TRIAGE
UM system receives request:
├── Validate member eligibility (270/271 check)
├── Check if service requires prior auth (benefit config)
├── Classify urgency: standard (7 days) vs urgent (72 hours)
├── Auto-approve if meets AutoAuth rules (MCG Cite, etc.)
└── Route to appropriate review queue if not auto-approved
3. CLINICAL REVIEW (where DaisyAI fits)
Nurse reviewer:
├── Opens case in UM system (Jiva, TruCare, etc.)
├── Reviews submitted clinical documentation
│ ├── Progress notes, H&P, discharge summaries
│ ├── Lab results, imaging reports
│ ├── Medication lists, treatment history
│ └── Often: 20-100 pages of faxed/scanned PDFs
├── Applies clinical criteria (InterQual or MCG)
│ ├── Inputs clinical data points from chart into criteria tool
│ ├── Tool evaluates against evidence-based thresholds
│ └── Returns: meets / does not meet / insufficient info
├── If meets criteria → APPROVE
├── If insufficient info → request additional documentation
└── If does not meet → escalate to physician advisor
4. PHYSICIAN ADVISOR REVIEW
(Only for cases that don't meet initial criteria)
├── MD reviews nurse's findings + original documentation
├── May conduct peer-to-peer call with requesting provider
├── Can override criteria based on clinical judgment
└── Final determination: APPROVE, MODIFY, or DENY
5. DETERMINATION & COMMUNICATION
├── Decision recorded in UM system
├── Auth number generated for approved services
├── Notification sent to provider (portal, fax, 278 response)
├── Member notification (letter, portal)
├── Claims system updated with auth for adjudication matching
└── CMS reporting: approval/denial rates, turnaround times
6. APPEAL (if denied)
├── First-level appeal: internal review by different physician
├── Second-level appeal: external independent review
├── Expedited appeal available for urgent cases
└── State regulatory oversight of appeal timelines
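The six-step flow above can be modeled as a small state machine. The types and transitions below are an illustrative sketch of the lifecycle, not any vendor's actual data model:

```typescript
// Illustrative lifecycle states for a prior auth case, following the
// intake -> review -> determination -> appeal flow above.
type CaseState =
  | { stage: 'INTAKE'; channel: 'EDI_278' | 'PORTAL' | 'FAX' | 'PHONE' }
  | { stage: 'AUTO_APPROVED'; authNumber: string }
  | { stage: 'NURSE_REVIEW'; urgency: 'STANDARD' | 'URGENT' }
  | { stage: 'PHYSICIAN_REVIEW' }
  | { stage: 'DETERMINED'; outcome: 'APPROVE' | 'MODIFY' | 'DENY' }
  | { stage: 'APPEAL'; level: 1 | 2; expedited: boolean };

// Allowed transitions per the flow: nurses approve or escalate, and only
// the physician-advisor path can produce a DENY.
function nextStages(s: CaseState): CaseState['stage'][] {
  switch (s.stage) {
    case 'INTAKE': return ['AUTO_APPROVED', 'NURSE_REVIEW'];
    case 'AUTO_APPROVED': return [];
    case 'NURSE_REVIEW': return ['DETERMINED', 'PHYSICIAN_REVIEW'];
    case 'PHYSICIAN_REVIEW': return ['DETERMINED'];
    case 'DETERMINED': return ['APPEAL'];          // only if denied
    case 'APPEAL': return ['DETERMINED', 'APPEAL']; // second-level appeal
  }
}
```

Encoding the transitions explicitly makes the key invariant auditable in code: there is no path from NURSE_REVIEW directly to a DENY outcome.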
1.7 Integration Points & Standards
EDI X12 Transactions (HIPAA-mandated)
| Transaction | Purpose | DaisyAI Relevance |
|---|---|---|
| X12 278 | Prior auth request/response | Primary UM transaction - this is the electronic PA pathway |
| X12 837P/I | Professional/Institutional claims | Claims data for retrospective review and analytics |
| X12 835 | Payment/remittance advice | Outcome data - what got paid, denied, adjusted |
| X12 270/271 | Eligibility inquiry/response | Member eligibility verification |
| X12 275 | Additional documentation attachment | Sending clinical docs with 278 requests |
X12 278 Structure (what you need to know):
ISA*00* *00* *ZZ*SENDER... (Interchange header)
GS*HI*SENDER*RECEIVER*20260216*1200*1*X*005010X217 (Functional group)
ST*278*0001*005010X217 (Transaction set)
BHT*0078*11*REF123*20260216*1200 (Beginning of transaction)
HL*1**20*1 (Loop 2000A - Utilization Mgmt Org / Payer)
NM1*X3*2*PREMERA BLUE CROSS...
HL*2*1*21*1 (Loop 2000B - Requester / Provider)
NM1*1P*1*SMITH*JOHN****XX*1234567890
HL*3*2*22*1 (Loop 2000C - Subscriber)
NM1*IL*1*DOE*JANE****MI*ABC123
DMG*D8*19800101*F
HL*4*3*EV*1 (Loop 2000E - Patient Event)
UM*HS*I*2*11:B (UM info: Health Services Review, Inpatient, Admission)
DTP*472*RD8*20260301-20260305 (Service dates)
HI*BK:M5460 (Diagnosis: ICD-10 code)
SV1*HC:99213*... (Procedure/service info)
SE*...*0001
GE*1*1
IEA*1*...
HL7 FHIR (the future, mandated by CMS)
Da Vinci Prior Authorization Support (PAS) Implementation Guide:
- FHIR R4.0.1 based
- Submits a PAS Bundle containing a Claim resource (FHIR's PA mechanism) plus supporting resources
- Intermediary converts FHIR Bundle to/from X12 278 (bridge pattern)
- CMS enforcement discretion: payers can use FHIR PAS API instead of X12 278
CMS-0057-F Rule (critical timeline for DaisyAI):
| Deadline | Requirement |
|---|---|
| Jan 1, 2026 (NOW) | Operational: 72-hour urgent / 7-day standard PA decisions |
| Mar 31, 2026 | Public reporting of PA metrics: approval/denial rates, appeal outcomes, turnaround times |
| Jan 1, 2027 | Five FHIR APIs required: Patient Access, Provider Access, Payer-to-Payer, Provider Directory, Prior Authorization |
| Jan 1, 2027 | Prior Auth API must support: checking if PA required, surfacing doc needs, electronic submission, electronic decisions |
This is a massive tailwind. Every payer is scrambling to comply. DaisyAI can position as a compliance accelerator.
Other Integration Patterns
| Pattern | Where Used | Notes |
|---|---|---|
| HL7v2 ADT (A01/A03/A04) | Hospital admit/discharge/transfer feeds | Real-time census for concurrent review |
| C-CDA documents | Clinical document exchange | Structured clinical summaries |
| Direct Secure Messaging | Provider-to-payer clinical docs | HIPAA-compliant email alternative |
| SFTP batch files | Claims data, eligibility files | Legacy but ubiquitous |
| REST APIs | Modern integrations | Provider portals, mobile apps |
| SOAP/WSDL | Older enterprise systems | QNXT, many CAPS systems |
1.8 What the Data Actually Looks Like
Claims Data (X12 837 → adjudicated)
member_id: ABC123
claim_id: CLM-2026-00451
service_date: 2026-02-10
provider_npi: 1234567890
facility_npi: 9876543210
diagnosis_codes: [M54.5, G89.29] // Low back pain, chronic pain
procedure_codes: [99213, 72148] // Office visit, lumbar MRI
place_of_service: 11 // Office
billed_amount: 1250.00
allowed_amount: 875.00
paid_amount: 700.00
member_responsibility: 175.00
auth_number: PA-2026-08123 // Links to UM system
status: PAID
adjudication_date: 2026-02-15
Authorization Data (UM system)
auth_id: PA-2026-08123
member_id: ABC123
requesting_provider_npi: 1234567890
servicing_provider_npi: 9876543210
service_type: OUTPATIENT_IMAGING
requested_service: 72148 (Lumbar MRI)
diagnosis: M54.5 (Low back pain)
clinical_info_source: FAX // vs PORTAL, EDI, PHONE
criteria_set: MCG_28TH_EDITION
criteria_result: MEETS_CRITERIA
determination: APPROVED
determination_date: 2026-02-08
effective_dates: 2026-02-01 to 2026-03-01
reviewer_id: RN_JONES_4521
reviewer_type: RN
physician_advisor_review: false
turnaround_hours: 18
documents_received: 3 // Number of clinical documents attached
urgency: STANDARD
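Note that auth_number is the join key between the two records above. A sketch of how outcome analytics can link them, using field names from the examples (trimmed to the fields the join needs):

```typescript
// Sketch: link adjudicated claims back to their authorizations via
// auth_number, e.g. to track turnaround times against paid outcomes.
interface ClaimRecord {
  claim_id: string;
  auth_number?: string;
  status: 'PAID' | 'DENIED';
  paid_amount: number;
}

interface AuthRecord {
  auth_id: string;
  determination: 'APPROVED' | 'DENIED';
  turnaround_hours: number;
}

function linkClaimsToAuths(claims: ClaimRecord[], auths: AuthRecord[]) {
  // Index auths by ID for O(1) lookup during the join.
  const byAuthId = new Map(auths.map(a => [a.auth_id, a]));
  return claims.map(c => ({
    claim: c,
    auth: c.auth_number ? byAuthId.get(c.auth_number) : undefined,
  }));
}
```

Claims with no matching auth (or no auth_number at all) surface as `auth: undefined`, which is itself a useful signal for retrospective review.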
Clinical Documentation (the unstructured mess)
This is typically a multi-page faxed PDF containing:
- History & Physical (H&P)
- Progress notes (handwritten or EHR-generated)
- Lab results (CBC, BMP, imaging reports)
- Medication lists
- Operative reports
- Discharge summaries
- Referral letters
- Sometimes: entire chart dumps of 50-100+ pages
The clinical data arrives as images (fax) or PDFs. It is not structured. This is exactly where LLM-powered extraction creates massive value.
2. What FDE Engagements Actually Look Like
2.1 Lessons from the FDE Pioneers
Palantir (originated the model)
- FDEs called "Deltas" - small teams (4-5 people) operating autonomously at customer sites
- Onsite 3-4 days/week, learning actual business processes
- Senior leadership sets objectives ("Auftragstaktik"), FDE teams decide everything else
- Key insight from Nabeel Qureshi (8-year Palantir FDE): "The obstacles were mostly political, not technical. When we were allowed to work, it tended to work very well."
- FDEs did "cruft work" manually at customer sites, which led product engineers to build platform tools: Magritte (data ingestion), Contour (visualization), Workshop (web app UI)
- The model looks inefficient through a SaaS lens - individual deployments can have terrible margins - but that misses the point: each implementation is primarily an opportunity to build and learn
OpenAI (adopted 2024-2025)
- FDE team grew from 2 to 52 engineers in one year
- Embeds with Fortune 500s to apply generative AI
- Focus on "zero-to-one" problem solving, not long-term consulting
- Use cases: wealth management (Morgan Stanley), semiconductor verification, automotive supply chain
- FDEs fine-tune models, build agentic workflows, prove business cases
- Goal: production deployments generating tens of millions to billions in value
Ramp (adopted 2023)
- Started fall 2023 with 2 engineers, now 16 FDEs in pods
- "Implementation is the most complex and demanding part of enterprise - it's usually the bottleneck on ability to scale"
- Four core operating principles (not publicly detailed)
- Expanded across product domains and verticals in 18 months
2.2 The DaisyAI FDE Engagement Timeline
Pre-Engagement (Weeks -4 to -1)
What to do before you show up:
- Research their public filings, press releases, job postings (tells you their tech stack and pain)
- Map their org chart: who owns UM, who owns IT, who owns clinical operations
- Identify their CAPS (QNXT? Facets? HealthEdge?) and UM platform (Jiva? Custom?)
- Check their CMS star ratings and prior auth metrics (public data)
- Prepare a "day-zero demo" - show DaisyAI working on a representative UM case
- Get BAA signed, security questionnaires completed, VPN access requested
Week 1: Discovery & Relationship Building
Monday-Tuesday: Shadow and listen
- Sit with 2-3 UR nurses for full shifts
- Watch them work cases end-to-end: receive request → find docs → review → apply criteria → determine
- Note every friction point: how many clicks, how many screens, where do they wait
- Count: how many minutes per case, how many cases per day, what % are auto-approved vs nurse review vs MD review
- Ask: "What's the worst part of your day?" and "If you could change one thing..."
Wednesday: Map the technical landscape
- Meeting with IT: what systems, what databases, what APIs exist
- Get read-only access to test/staging environments if possible
- Understand their data warehouse and analytics capabilities
- Document every integration touchpoint between UM system and other systems
Thursday: Stakeholder alignment
- Present initial observations to clinical leadership
- Frame in their language: "I watched your team spend 40% of review time just finding the right information in faxed documents"
- Propose 2-3 quick wins and 1-2 strategic opportunities
- Get agreement on a "first win" target
Friday: Internal synthesis
- Write up findings in ops/calls/format
- Map their architecture (actual, not documented)
- Identify the fastest path to demonstrating value
- Plan week 2
Weeks 2-4: First Win Delivery
Goal: Ship something that saves nurses time within 30 days.
The most likely first win for DaisyAI at a health plan:
CLINICAL DOCUMENT TRIAGE & EXTRACTION
Input: Faxed/scanned clinical documents (PDF)
Currently: nurse manually reads 20-100 pages per case
Process:
1. OCR + document classification (what type of document is each page?)
2. Clinical entity extraction (diagnoses, procedures, lab values, medications)
3. Summarization (1-page clinical summary per case)
4. Criteria pre-screening (map extracted data to InterQual/MCG criteria points)
Output: Structured case summary with criteria mapping
Nurse reviews summary instead of raw documents
Saves 15-30 minutes per case
This is achievable in 2-3 weeks with Claude + Vercel AI SDK + good prompt engineering. You don't need deep system integration for this - you need PDF input and structured output.
Month 2-3: Integration & Expansion
Once you've proved value with the document triage tool:
- Integrate into their actual UM workflow (API into Jiva/TruCare/custom system)
- Add auto-determination for straightforward cases (meets criteria → auto-approve with nurse confirmation)
- Build dashboards showing turnaround time improvement, cases per nurse per day
- Start working on the harder problems: peer-to-peer prep, appeal analysis, retrospective review
- Feed learnings back into DaisyAI core product
Month 3-6: Operationalize & Expand
- Full production deployment with monitoring and alerting
- Expand to additional service types (imaging → surgical → DME → behavioral health)
- Build reporting for CMS compliance (CMS-0057-F metrics)
- Train clinical staff on new workflows
- Document everything for the next engagement
- Begin discussions on long-term SaaS contract vs continued FDE engagement
2.3 Discovery Session Playbook
Session 1: Clinical Operations (2 hours)
Attendees: UM Director, Lead UR Nurse, Medical Director
Questions to ask:
- Walk me through a typical prior auth case from receipt to determination
- What criteria set do you use? InterQual or MCG? What version?
- What percentage of cases are auto-approved vs nurse review vs MD review?
- What's your average turnaround time? Where do delays happen?
- How does clinical documentation arrive? (fax %, portal %, EDI %)
- What's your current staffing model? How many cases per nurse per day?
- What's your denial rate? Appeal overturn rate?
- Where do nurses spend the most time that feels unproductive?
Session 2: IT Architecture (2 hours)
Attendees: IT Director/CTO, Integration Engineer, DBA
Questions to ask:
- What is your core admin processing system (CAPS)?
- What UM platform do you use? Version?
- How does data flow between CAPS and UM system?
- What's your integration architecture? (MuleSoft, Rhapsody, InterSystems, custom?)
- Do you have FHIR APIs deployed? What maturity level?
- Where is your data warehouse? What tools access it?
- What's your cloud strategy? (Azure, AWS, hybrid, on-prem?)
- What are your security and compliance requirements for vendors?
- Do you have a test/staging environment we can work in?
Session 3: Strategic Alignment (1 hour)
Attendees: CTO, CMO, VP of Operations
Questions to ask:
- What are your top 3 operational priorities this year?
- How are you approaching CMS-0057-F compliance?
- Where does AI fit in your technology roadmap?
- What has worked/failed with AI or automation initiatives before?
- What does success look like for this engagement in 90 days?
2.4 Building Trust with Client Engineering Teams
This is critical. Client engineers can be your strongest allies or your biggest blockers.
Do:
- Ask questions before proposing solutions. Show you respect their existing work.
- Offer to pair program. Write code together, not just for them.
- Share your screen. Show how you work with Claude, AI SDK, modern tooling. Engineers are curious.
- Fix a small bug or improve a small thing in their system. Goodwill gesture.
- Acknowledge technical debt without judging. "This is clearly load-bearing code" beats "why is this so messy?"
- Use their tools and processes. Git workflow, JIRA/whatever, deployment process.
- Invite them into the solution. "What if we built X together?" not "I'll build X for you."
Don't:
- Come in with a "we're going to modernize everything" attitude.
- Bypass IT to go directly to clinical stakeholders.
- Introduce tools or processes without discussing with the team first.
- Move faster than they're comfortable with. Earn trust incrementally.
- Promise things that depend on their systems without their buy-in.
2.5 Common Failure Modes
From research across Palantir, OpenAI, Ramp, and enterprise AI deployments:
1. Building one-off solutions instead of reusable components
- Symptom: Every engagement starts from scratch
- Fix: "Service extraction" rule - every deployment must produce at least one reusable component
- DaisyAI application: Every Premera feature should be abstractable into the core product
2. Pilot-to-production gap
- Symptom: Demo works great, production deployment stalls
- Fix: Build for production from day 1. No "demo mode" code.
- DaisyAI application: Use real clinical documents in testing (with proper de-identification), test with actual UM workflow, not toy examples
3. Late governance/compliance
- Symptom: Legal/compliance kills the project at month 3
- Fix: Embed compliance from the start. Get BAA signed before writing code. Build audit trails into every feature.
- DaisyAI application: HIPAA audit logging, PHI handling protocols, model output documentation - all in week 1
4. Insufficient risk containment
- Symptom: AI output is 80-90% correct, the remaining 10-20% causes harm
- Fix: Human-in-the-loop checkpoints, deterministic fallback responses, output validation
- DaisyAI application: AI never auto-denies. AI assists nurses, doesn't replace them. Every AI output has confidence scores and source citations.
5. Political obstacles
- Symptom: Technical solution works but adoption fails due to organizational resistance
- Fix: Identify and align with internal champions early. Show value to end users (nurses), not just executives.
- DaisyAI application: The nurse who saves 2 hours per day is your best advocate to leadership
6. Scope creep into services company
- Symptom: Revenue per engagement flatlines, custom work accumulates
- Fix: Track what percentage of each engagement feeds back into product vs stays custom
- DaisyAI application: Maintain a strict "product backlog" fed by engagement learnings. Every custom build gets evaluated: "Is this a product feature or a one-off?"
2.6 Balancing "Build for This Client" vs "Build for the Product"
The Palantir framework:
Week 1-2: 100% client-specific (understanding their world)
Week 3-4: 80% client / 20% product thinking (what generalizes?)
Month 2: 60% client / 40% product (building reusable components)
Month 3+: 50/50 (client gets custom config of general platform)
For DaisyAI at Premera, concretely:
- Client-specific: Premera's InterSystems HealthShare integration, their specific MCG version config, their nurse workflow screens
- Product-general: Clinical document extraction engine, criteria matching engine, case summarization, turnaround time analytics
- Rule of thumb: If 3+ health plans would need it, it's product. If only Premera needs it, it's custom.
3. Technical Skills to Develop
3.1 Healthcare Data Standards
HL7 FHIR (highest priority)
FHIR (Fast Healthcare Interoperability Resources) is the future of healthcare data exchange. CMS is mandating it.
Key resources to work through:
- FHIR R4.0.1 specification: https://hl7.org/fhir/R4/
- Resource types you need to know:
- Claim (used for prior auth requests in FHIR), ClaimResponse (PA decisions)
- Patient, Practitioner, Organization
- Condition (diagnoses), Procedure, ServiceRequest
- DocumentReference (clinical documents)
- Bundle (wraps multiple resources for submission)
- Da Vinci PAS Implementation Guide: https://hl7.org/fhir/us/davinci-pas/
- This is the FHIR-based prior auth standard
- Defines how to submit PA requests as FHIR Bundles
- Includes mapping to X12 278
Study path:
- HL7 FHIR Fundamentals course (4-week online workshop, official HL7): https://www.hl7.org/training/fhir-fundamentals.cfm
- Google's Getting Started with FHIR: https://developers.google.com/open-health-stack/resources/getting-started-with-fhir
- Udemy: HL7 FHIR Mastery course
- Build something: write a FHIR client that creates a Claim resource for a PA request
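As a starting point for that "build something" exercise, here is a minimal FHIR R4 Claim resource for a PA request, echoing the lumbar MRI example used elsewhere in this doc. The references and display text are placeholders, and this is not a validated Da Vinci PAS bundle:

```typescript
// Minimal FHIR R4 Claim with use = "preauthorization" (FHIR's PA mechanism).
// All Reference values (Patient/example etc.) are placeholders.
const priorAuthClaim = {
  resourceType: 'Claim',
  status: 'active',
  type: {
    coding: [{ system: 'http://terminology.hl7.org/CodeSystem/claim-type', code: 'professional' }],
  },
  use: 'preauthorization', // distinguishes PA from an actual claim
  patient: { reference: 'Patient/example' },
  created: '2026-02-16',
  provider: { reference: 'Practitioner/example' },
  insurer: { reference: 'Organization/example-payer' },
  priority: { coding: [{ code: 'normal' }] },
  diagnosis: [{
    sequence: 1,
    diagnosisCodeableConcept: {
      coding: [{ system: 'http://hl7.org/fhir/sid/icd-10-cm', code: 'M54.5' }], // Low back pain
    },
  }],
  item: [{
    sequence: 1,
    productOrService: {
      coding: [{ system: 'http://www.ama-assn.org/go/cpt', code: '72148' }], // Lumbar MRI
    },
  }],
  insurance: [{ sequence: 1, focal: true, coverage: { reference: 'Coverage/example' } }],
};
```

In a Da Vinci PAS submission, this Claim (plus supporting resources such as DocumentReference) gets wrapped in a Bundle and submitted to the payer's PAS endpoint.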
X12 EDI (must understand, won't love it)
X12 is the legacy standard that still handles 90%+ of electronic healthcare transactions. The 278 is the prior auth transaction.
Key transaction sets:
- 278: Prior authorization request/response
- 837P/837I: Professional/Institutional claims
- 835: Payment/remittance advice
- 270/271: Eligibility inquiry/response
- 275: Additional documentation
Structure: flat text files with segment delimiters
ISA*00* *00* *ZZ*SENDER_ID *ZZ*RECEIVER_ID *...
- Segment terminator: ~ (tilde)
- Element separator: * (asterisk)
- Composite separator: : (colon)
- Hierarchical loops: HL segments define parent-child relationships
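Given those delimiters, a toy tokenizer is only a few lines. Real EDI parsers read the delimiters out of the ISA segment itself and track HL loop hierarchy; this sketch hardcodes the common defaults to show the mechanics:

```typescript
// Toy X12 tokenizer using the default delimiters described above:
// '~' terminates segments, '*' separates elements, ':' splits composites.
interface Segment {
  id: string;          // segment identifier, e.g. 'ST', 'UM', 'HL'
  elements: string[][]; // each element split into its composite parts
}

function parseX12(raw: string): Segment[] {
  return raw
    .split('~')
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .map(seg => {
      const [id, ...elements] = seg.split('*');
      // Each element may itself be a composite split on ':'.
      return { id, elements: elements.map(e => e.split(':')) };
    });
}
```

Parsing `UM*HS*I*2*11:B~` yields a `UM` segment whose fourth element is the composite `['11', 'B']` - exactly the structure you need in order to pull the request category, service type, and facility code out of a 278.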
Resources:
- Stedi X12 837 reference: https://www.stedi.com/edi/x12/transaction-set/837
- X12.org 278 examples: https://x12.org/examples/005010x217
- IntuitionLabs X12 EDI guide: https://intuitionlabs.ai/articles/x12-edi-transactions-guide
ICD-10 and CPT/HCPCS Codes
You need to be conversational with these. DaisyAI already has the ICD-10 MCP.
- ICD-10-CM: Diagnosis codes (e.g., M54.5 = Low back pain)
- ~70,000 codes, hierarchical structure
- First 3 characters = category, remaining = specificity
- Use the ICD-10 MCP tools: lookup_code, search_codes, validate_code
- CPT (Current Procedural Terminology): Procedure codes (e.g., 99213 = Office visit)
- Published by AMA, requires license for full access
- Categories: E&M (99xxx), Surgery (1xxxx-6xxxx), Radiology (7xxxx), Pathology (8xxxx), Medicine (9xxxx)
- HCPCS Level II: Supplies, DME, drugs (e.g., J0129 = Abatacept injection)
- Used for Medicare/Medicaid billing
- Especially relevant for Part B coverage decisions
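The code families above can be sketched as a rough classifier. This is deliberately simplified: real CPT category boundaries have exceptions and subranges, and E&M codes sit inside the 9xxxx block, so treat it as a mnemonic, not a billing-grade lookup:

```typescript
// Rough classifier for the code families above. Simplified on purpose:
// actual CPT categories have exceptions (Category III codes, etc.).
function codeFamily(code: string): string {
  // HCPCS Level II: one letter + four digits, e.g. J0129.
  if (/^[A-V]\d{4}$/.test(code)) return 'HCPCS Level II';
  if (!/^\d{5}$/.test(code)) return 'unknown';
  if (code.startsWith('99')) return 'E&M';       // e.g. 99213 office visit
  const first = code[0];
  if (first >= '1' && first <= '6') return 'Surgery';
  if (first === '7') return 'Radiology';         // e.g. 72148 lumbar MRI
  if (first === '8') return 'Pathology';
  if (first === '9') return 'Medicine';
  return 'unknown';
}
```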
CMS National/Local Coverage Determinations
These are Medicare's official coverage policies. The CMS Coverage MCP gives you direct access.
- NCDs: National-level coverage decisions
- LCDs: Regional (MAC-level) coverage decisions
- Use search_national_coverage and search_local_coverage MCP tools
- Critical for understanding what Medicare considers "medically necessary"
3.2 Clinical Criteria Systems - Technical Deep Dive
How InterQual Actually Works (for an engineer):
Think of it as a multi-dimensional decision tree with clinical thresholds:
// Conceptual model of how InterQual criteria evaluate
interface CriteriaEvaluation {
levelOfCare: 'inpatient' | 'observation' | 'outpatient' | 'snf' | 'home_health';
serviceType: 'acute' | 'rehab' | 'behavioral_health' | 'substance_use';
// Each criterion is a clinical checkpoint
criteria: ClinicalCriterion[];
// Logic: ALL required criteria must pass
// Some criteria have alternative pathways (OR branches)
evaluationLogic: 'AND' | 'OR_BRANCHES';
}
interface ClinicalCriterion {
id: string;
description: string; // "Heart rate > 100 bpm"
dataType: 'vital_sign' | 'lab_value' | 'diagnosis' | 'functional_status' | 'medication';
// The actual check
threshold: {
operator: '>' | '<' | '>=' | '<=' | '==' | 'present' | 'absent';
value: number | string | boolean;
unit?: string;
};
required: boolean;
alternativeGroup?: string; // For OR logic - any one in group satisfies
}
// Example: Inpatient admission criteria for pneumonia
const pneumoniaAdmissionCriteria: CriteriaEvaluation = {
levelOfCare: 'inpatient',
serviceType: 'acute',
evaluationLogic: 'AND',
criteria: [
{
id: 'dx_confirmed',
description: 'Confirmed pneumonia diagnosis',
dataType: 'diagnosis',
threshold: { operator: 'present', value: true },
required: true
},
{
id: 'severity_marker_1',
description: 'Temperature > 38.3C or < 36C',
dataType: 'vital_sign',
threshold: { operator: '>', value: 38.3, unit: 'celsius' },
required: false,
alternativeGroup: 'severity' // Any severity marker suffices
},
{
id: 'severity_marker_2',
description: 'O2 saturation < 92% on room air',
dataType: 'vital_sign',
threshold: { operator: '<', value: 92, unit: 'percent' },
required: false,
alternativeGroup: 'severity'
},
{
id: 'severity_marker_3',
description: 'Respiratory rate > 24/min',
dataType: 'vital_sign',
threshold: { operator: '>', value: 24, unit: 'breaths_per_min' },
required: false,
alternativeGroup: 'severity'
},
{
id: 'outpatient_failure',
description: 'Failed outpatient antibiotic therapy',
dataType: 'medication',
threshold: { operator: 'present', value: true },
required: false,
alternativeGroup: 'severity'
}
]
};
How MCG Cite AutoAuth Works:
Rule evaluation pipeline:
1. Auth request enters UM system
2. MCG AutoAuth rules evaluate structured data:
- Service type + diagnosis → applicable guideline identified
- Clinical data points compared against guideline criteria
- If ALL required criteria met → AUTO-APPROVE
- If ANY required criteria NOT met → PEND for nurse review
3. Auto-approved cases get auth number immediately
4. Pended cases enter nurse review queue with pre-populated criteria checklist
Key metric: What % of cases auto-approve?
- Industry average: 40-60% auto-approval
- Well-configured systems: 70-80%
- DaisyAI opportunity: increase auto-approval by improving documentation quality at submission
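The routing rule plus the auto-approval metric can be sketched as follows. This is an illustrative model of the pipeline described above, not MCG's actual API, and the auth-number format is made up:

```typescript
// Illustrative AutoAuth routing: approve when every required criterion is
// met; otherwise pend to the nurse queue with the checklist pre-populated.
interface AuthRequest {
  id: string;
  requiredCriteriaMet: boolean[]; // one entry per required criterion
}

type Routed =
  | { id: string; route: 'AUTO_APPROVE'; authNumber: string }
  | { id: string; route: 'PEND_NURSE_REVIEW'; checklist: boolean[] };

function routeRequest(req: AuthRequest, seq: number): Routed {
  const allMet = req.requiredCriteriaMet.every(Boolean);
  return allMet
    ? { id: req.id, route: 'AUTO_APPROVE', authNumber: `PA-2026-${String(seq).padStart(5, '0')}` }
    : { id: req.id, route: 'PEND_NURSE_REVIEW', checklist: req.requiredCriteriaMet };
}

// The key operational metric: what share of requests auto-approve?
function autoApprovalRate(routed: Routed[]): number {
  if (routed.length === 0) return 0;
  return routed.filter(r => r.route === 'AUTO_APPROVE').length / routed.length;
}
```

Note the asymmetry: a pended case carries its partially-evaluated checklist forward, which is what lets the nurse queue show pre-populated criteria instead of starting from zero.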
3.3 NLP/AI Techniques for Clinical Document Review
Document Processing Pipeline:
RAW INPUT (fax/PDF)
│
▼
┌──────────────┐
│ OCR Layer │ Convert images to text
│ (Tesseract, │ Handle handwritten notes, poor fax quality
│ Amazon │ Output: raw text per page
│ Textract, │
│ Azure DI) │
└──────┬───────┘
│
▼
┌──────────────┐
│ Document │ What type of document is each page?
│ Classifier │ Types: H&P, progress note, lab result, imaging report,
│ (LLM-based) │ med list, operative report, discharge summary, referral
└──────┬───────┘
│
▼
┌──────────────┐
│ Clinical │ Extract structured entities:
│ Entity │ - Diagnoses (mapped to ICD-10)
│ Extraction │ - Procedures (mapped to CPT)
│ (LLM + │ - Lab values with units and reference ranges
│ NER) │ - Vital signs with timestamps
│ │ - Medications with dosages
│ │ - Functional status descriptors
└──────┬───────┘
│
▼
┌──────────────┐
│ Clinical │ 1-page summary per case:
│ Summarizer │ - Chief complaint and diagnosis
│ (LLM) │ - Key clinical findings supporting medical necessity
│ │ - Treatment history and response
│ │ - Current clinical status
│ │ - Requested service and rationale
└──────┬───────┘
│
▼
┌──────────────┐
│ Criteria │ Map extracted data to InterQual/MCG criteria:
│ Pre-Screen │ - Which criteria points are addressed?
│ (LLM + │ - Which are missing (need more documentation)?
│ rules) │ - Preliminary meets/does-not-meet assessment
│ │ - Confidence score per criterion
└──────┬───────┘
│
▼
STRUCTURED OUTPUT → Nurse review interface
Key LLM Techniques for This Pipeline:
- Structured Output with Zod Schemas (Vercel AI SDK)
```typescript
import { generateObject } from 'ai';
import { z } from 'zod';
import { anthropic } from '@ai-sdk/anthropic';

const ClinicalExtraction = z.object({
  documentType: z.enum([
    'history_and_physical',
    'progress_note',
    'lab_result',
    'imaging_report',
    'medication_list',
    'operative_report',
    'discharge_summary',
    'referral_letter',
    'other'
  ]),
  patient: z.object({
    name: z.string().optional(),
    dob: z.string().optional(),
    mrn: z.string().optional(),
  }),
  diagnoses: z.array(z.object({
    description: z.string(),
    icd10Code: z.string().optional(),
    isPrimary: z.boolean(),
    clinicalEvidence: z.string(), // quote from document
  })),
  procedures: z.array(z.object({
    description: z.string(),
    cptCode: z.string().optional(),
    date: z.string().optional(),
    findings: z.string().optional(),
  })),
  vitalSigns: z.array(z.object({
    type: z.string(), // "blood_pressure", "heart_rate", etc.
    value: z.string(),
    unit: z.string(),
    date: z.string().optional(),
    abnormal: z.boolean(),
  })),
  labResults: z.array(z.object({
    testName: z.string(),
    value: z.string(),
    unit: z.string(),
    referenceRange: z.string().optional(),
    abnormal: z.boolean(),
    date: z.string().optional(),
  })),
  medications: z.array(z.object({
    name: z.string(),
    dosage: z.string().optional(),
    frequency: z.string().optional(),
    route: z.string().optional(),
  })),
  clinicalNarrative: z.string(), // 2-3 sentence summary
  medicalNecessityIndicators: z.array(z.string()), // key phrases supporting necessity
});

const result = await generateObject({
  model: anthropic('claude-sonnet-4-20250514'),
  schema: ClinicalExtraction,
  prompt: `Extract structured clinical information from this document.
Be precise with medical terminology. Map diagnoses to ICD-10 codes
where possible. Flag abnormal lab values and vital signs.

Document text:
${ocrText}`,
});
```
- Criteria Matching Engine
```typescript
const CriteriaMatch = z.object({
  criteriaSet: z.string(), // "InterQual 2025" or "MCG 28th Edition"
  levelOfCare: z.string(),
  criteriaChecklist: z.array(z.object({
    criterionId: z.string(),
    criterionDescription: z.string(),
    status: z.enum(['met', 'not_met', 'insufficient_info', 'not_applicable']),
    supportingEvidence: z.string().optional(), // quote from clinical docs
    confidence: z.number().min(0).max(1),
  })),
  overallAssessment: z.enum(['likely_meets', 'likely_does_not_meet', 'needs_more_info']),
  missingInformation: z.array(z.string()), // what docs/data are needed
  summary: z.string(),
});
```
- Multi-document synthesis: for cases with 50+ pages across multiple document types, you need a multi-pass approach:
Pass 1: Classify and extract each document independently (parallel)
Pass 2: Merge extractions, resolve conflicts, build timeline
Pass 3: Synthesize into case summary + criteria evaluation
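Pass 2 is the step teams underestimate. As an illustration, here is a minimal merge sketch under assumed shapes (the `Extraction` type below is a stripped-down stand-in for the `ClinicalExtraction` schema): dedupe diagnoses across documents, keyed by ICD-10 when available, and sort all events into one timeline.

```typescript
// Illustrative Pass 2 merge; these types are stripped-down stand-ins
// for the full ClinicalExtraction schema.
type Diagnosis = { description: string; icd10Code?: string };
type ClinicalEvent = { date: string; source: string; note: string }; // ISO dates
type Extraction = { diagnoses: Diagnosis[]; events: ClinicalEvent[] };

function mergeExtractions(extractions: Extraction[]) {
  // Dedupe diagnoses: key on ICD-10 when present, else normalized text.
  const byKey = new Map<string, Diagnosis>();
  for (const ex of extractions) {
    for (const dx of ex.diagnoses) {
      const key = dx.icd10Code ?? dx.description.trim().toLowerCase();
      if (!byKey.has(key)) byKey.set(key, dx);
    }
  }
  // One timeline across all documents (ISO strings sort lexicographically).
  const timeline = extractions
    .flatMap(ex => ex.events)
    .sort((a, b) => a.date.localeCompare(b.date));
  return { diagnoses: [...byKey.values()], timeline };
}
```

Real conflict resolution (e.g., two documents disagreeing on a dosage) is harder; flag conflicts for the nurse rather than silently picking a winner.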
3.4 Compliance & Security
What Premera (and every health plan) will require before you touch their data:
- BAA (Business Associate Agreement)
  - Must be signed before any PHI access
  - Covers: permitted uses of PHI, breach notification obligations, subcontractor requirements
  - DaisyAI status: need a standard BAA template reviewed by a healthcare attorney
- Security Assessment Questionnaire
  - Expect 100-300 questions covering:
    - Encryption (at rest and in transit)
    - Access controls (RBAC, MFA)
    - Audit logging
    - Incident response plan
    - Employee training
    - Vulnerability management
    - Penetration testing
    - Data retention/destruction
  - Reference: HIMSS IT Security Practices Questionnaire
  - Key frameworks: NIST CSF, SOC 2 Type II, HITRUST CSF
- SOC 2 Type II
  - Not legally required but increasingly expected by health plans
  - Covers: Security, Availability, Processing Integrity, Confidentiality, Privacy
  - Takes 6-12 months to achieve the first time
  - Consider using Vanta, Drata, or Secureframe to accelerate
  - Pragmatic approach for a 2-person startup: get SOC 2 Type I first (point-in-time), then work toward Type II (audited over an observation period, typically 3-12 months)
- HITRUST CSF
  - Gold standard for healthcare security certification
  - More comprehensive than SOC 2
  - Expensive and time-consuming ($50K+ and 12-18 months)
  - Defer this until you have revenue to justify it
- Technical Requirements
  - Encryption: AES-256 at rest, TLS 1.2+ in transit
  - PHI must not leave approved environments (no sending PHI to external APIs without a BAA)
  - Anthropic offers BAA coverage for the Claude API - verify current status
  - All PHI access must be logged: who, what, when, why
  - Data must be stored in US-based data centers
  - Need the ability to delete all client data on contract termination
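The "who, what, when, why" logging requirement maps cleanly onto a structured audit record. A minimal sketch (field names are ours, not a regulatory schema):

```typescript
// Minimal PHI access audit record: who, what, when, why.
// Field names are illustrative, not a regulatory schema.
type PhiAuditEvent = {
  actorId: string;        // who: authenticated user or service account
  action: 'read' | 'write' | 'delete' | 'export';
  resource: string;       // what: e.g. "case/12345/clinical-docs"
  occurredAt: string;     // when: ISO-8601 timestamp
  justification: string;  // why: ties the access to a business purpose
};

const auditLog: PhiAuditEvent[] = [];

function recordAccess(event: Omit<PhiAuditEvent, 'occurredAt'>): PhiAuditEvent {
  const entry: PhiAuditEvent = { ...event, occurredAt: new Date().toISOString() };
  auditLog.push(entry); // in production: an append-only store, not memory
  return entry;
}
```

In practice the log must be tamper-evident and queryable for audits, so back it with an append-only table or log service, not application memory.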
4. Learning Resources
4.1 FDE Model & Enterprise Deployment
Essential Reading (start here)
| Resource | Author | Why |
|---|---|---|
| Reflections on Palantir | Nabeel Qureshi | 8-year Palantir retrospective. Best single resource on FDE culture, what works, political obstacles. |
| Forward Deployed Engineering (Ramp) | Ramp Engineering | How Ramp scaled FDE from 2→16 in 18 months. Practical lessons on pods, operating principles, scaling. |
| Understanding Forward Deployed Engineering | Barry McCardel (ex-Palantir) | Culture and costs of FDE. Why it appears inefficient but works. Auftragstaktik doctrine. |
| Forward Deployed Engineers (Pragmatic Engineer) | Gergely Orosz | Industry overview: who's hiring FDEs, compensation, how the role differs from SE/SA. |
| FDE: The Essential 2026 Guide | Rocketlane | Comprehensive guide covering history, role definition, failure modes. |
| FDE Profession of the Future | Max Dauber | Why AI specifically needs FDEs and the implementation gap. |
Podcasts
| Podcast | Episode | Duration | Why |
|---|---|---|---|
| Lenny's Podcast | How Palantir built the ultimate founder factory - Nabeel Qureshi | ~1.5 hrs | Deep dive into Palantir culture, FDE model, building trust with customers. Best audio resource. |
| Lenny's Podcast | Palantir deployment strategist episodes | Various | Day-in-the-life of deployment roles |
Palantir Blog Posts
| Post | Focus |
|---|---|
| Day in the Life of a Palantir FDSE | Concrete day-by-day work of a Delta: data pipelines, access controls, customer workflows |
| Forward Deployed Infrastructure Engineering | Infrastructure side of FDE: deployment, monitoring, production systems |
| How We Do Agile | Palantir's approach to agile across FDE teams |
4.2 Healthcare IT & Standards
HL7/FHIR Training
| Resource | Format | Cost | Notes |
|---|---|---|---|
| HL7 FHIR Fundamentals | 4-week online workshop | ~$500 | Official HL7 course. Hands-on exercises. Best structured learning path. |
| HL7 Fundamentals | Self-paced, 12 weeks | Varies | Covers V2, CDA, and FHIR. Good for understanding the full HL7 family. |
| Google: Getting Started with FHIR | Free online guide | Free | Practical developer guide |
| HL7 Courses on Demand | Self-paced | Varies | Official HL7 education catalog |
| Udemy: HL7 FHIR Mastery | Video course | ~$20-50 | Budget option, good for quick ramp-up |
| Firely: CMS-0057-F Decoded | Blog | Free | What payers actually need to implement for 2026-2027 compliance |
X12 EDI
| Resource | Notes |
|---|---|
| Stedi X12 Reference | Best developer-friendly X12 documentation |
| X12.org 278 Examples | Official 278 request/response examples |
| IntuitionLabs X12 EDI Guide | Practical walkthrough of 270/271 and 278 |
| PilotFish 278 Example | Integration-focused 278 examples |
Healthcare IT General
| Resource | Notes |
|---|---|
| HIMSS (Healthcare Information and Management Systems Society) | Annual conference, learning center, certifications |
| Da Vinci PAS IG | FHIR Prior Authorization Support - the implementation guide you'll be coding against |
| CMS Interoperability Final Rule | The regulatory driver for everything |
4.3 Clinical Knowledge
| Resource | Why |
|---|---|
| Utilization Management - StatPearls (NCBI) | Clinical overview of UM: pre-auth, concurrent review, retro review. Free. |
| Nurse's Guide to MCG and InterQual | How nurses actually use criteria tools day-to-day |
| MCG Care Guidelines 28th Edition overview | MCG documentation from a payer perspective |
| CMS Coverage MCPs | Use search_national_coverage and search_local_coverage to understand Medicare coverage policies |
| NPI Registry MCP | Use npi_lookup and npi_search for provider verification |
4.4 AI in Healthcare
| Resource | Notes |
|---|---|
| LLMs in Healthcare and Medical Applications (PMC) | Comprehensive review of LLM applications in healthcare |
| John Snow Labs Healthcare LLM | Clinical NER, entity extraction, de-identification |
| AI-Powered Medical Record Review (LevelShift) | Practical guide to AI medical record review |
| Cohere Health | Study this competitor closely. They're doing AI prior auth at scale. |
| Vercel AI SDK Structured Data Extraction | How to use generateObject + Zod for clinical extraction |
| AI SDK Core: Generating Structured Data | Official docs for structured output with Claude |
4.5 Books
| Book | Author | Why |
|---|---|---|
| The Hard Thing About Hard Things | Ben Horowitz | Enterprise sales, organizational politics, startup survival |
| Inspired (and Empowered) | Marty Cagan / SVPG | Product management in enterprise contexts, SVPG wrote about FDEs |
| The Mom Test | Rob Fitzpatrick | Discovery conversations - how to ask questions that reveal truth |
| Crossing the Chasm | Geoffrey Moore | Enterprise technology adoption - critical for understanding health plan buyer psychology |
| The Innovator's Prescription | Clayton Christensen | Healthcare-specific disruption theory |
| Trillion Dollar Coach | Schmidt, Rosenberg, Eagle | Building trust-based advisory relationships |
5. What to Build Now
5.1 Tools That Would Impress a Health Plan CTO in Week 1
1. Clinical Document Intelligence Demo
Build a working demo that takes a multi-page faxed clinical document (PDF) and produces:
- Document classification (what type is each page)
- Structured clinical entity extraction
- 1-page case summary
- Criteria pre-screening against MCG/InterQual patterns
This is the single highest-impact demo you can build. Every health plan CTO has seen their nurses spending 30+ minutes per case reading faxes. Show them 30 seconds.
```typescript
// Skeleton architecture
// /app/api/clinical-extract/route.ts
import { generateObject, generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

export async function POST(req: Request) {
  const start = Date.now();
  const formData = await req.formData();
  const file = formData.get('document') as File;

  // Step 1: PDF to text (OCR if needed)
  const pages = await extractPagesFromPDF(file);

  // Step 2: Classify each page
  const classifiedPages = await Promise.all(
    pages.map(page => classifyDocument(page))
  );

  // Step 3: Extract clinical entities from each page
  const extractions = await Promise.all(
    classifiedPages.map(page => extractClinicalEntities(page))
  );

  // Step 4: Synthesize into case summary
  const caseSummary = await synthesizeCaseSummary(extractions);

  // Step 5: Pre-screen against criteria
  const criteriaMatch = await prescreenCriteria(caseSummary);

  return Response.json({
    pages: classifiedPages,
    extractions,
    caseSummary,
    criteriaMatch,
    processingTime: Date.now() - start,
  });
}
```
2. Prior Auth Turnaround Dashboard
Build a dashboard that visualizes:
- Cases by status (pending, approved, denied, in review)
- Average turnaround time by service type
- Nurse productivity (cases/day)
- Auto-approval rate
- CMS-0057-F compliance metrics
Feed it with synthetic data that mirrors real UM data structures. When you walk into Premera, you can say: "This is what your data could look like. Let's plug in your actual numbers."
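A sketch of how the synthetic feed and headline metrics could hang together (the `UmCase` shape is ours; mirror the plan's actual UM export once you have it):

```typescript
// Illustrative case shape for the synthetic feed; adjust to mirror
// the plan's real UM export.
type UmStatus = 'pending' | 'approved' | 'denied' | 'in_review';
type UmCase = {
  status: UmStatus;
  serviceType: string;
  submittedAt: number;  // epoch ms
  decidedAt?: number;   // epoch ms; absent until a determination exists
  autoApproved: boolean;
};

function metrics(cases: UmCase[]) {
  const decided = cases.filter(c => c.decidedAt !== undefined);
  // Average turnaround in hours, over decided cases only.
  const avgTurnaroundHours =
    decided.reduce((sum, c) => sum + (c.decidedAt! - c.submittedAt), 0) /
    Math.max(decided.length, 1) / 3_600_000;
  const autoApprovalRate =
    decided.filter(c => c.autoApproved).length / Math.max(decided.length, 1);
  // Case counts by status for the dashboard's top row.
  const byStatus = cases.reduce<Record<UmStatus, number>>(
    (acc, c) => {
      acc[c.status] += 1;
      return acc;
    },
    { pending: 0, approved: 0, denied: 0, in_review: 0 }
  );
  return { avgTurnaroundHours, autoApprovalRate, byStatus };
}
```

Slice the same computations by `serviceType` to get the per-service turnaround view the dashboard calls for.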
3. FHIR Prior Authorization Client
Build a working Da Vinci PAS client that can:
- Construct a PA request as a FHIR Bundle
- Submit to a test FHIR server
- Parse the response
- Display the determination
This proves you understand the CMS-0057-F mandate and can help them comply. Very few vendors can demo this today.
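As a starting point, here is a heavily simplified sketch of the request Bundle: a FHIR `Claim` with `use: 'preauthorization'` inside a collection Bundle. The real Da Vinci PAS profiles require many more elements and extensions, so treat this as the skeleton shape only.

```typescript
// Skeleton PAS-style request Bundle. The real Da Vinci PAS profiles
// require far more (provider, insurer, extensions); this is shape only.
type FhirResource = { resourceType: string; [key: string]: unknown };

function buildPasRequestBundle(opts: {
  patientId: string;
  procedureCode: string;  // CPT/HCPCS
  diagnosisCode: string;  // ICD-10-CM
}): { resourceType: 'Bundle'; type: 'collection'; entry: { resource: FhirResource }[] } {
  return {
    resourceType: 'Bundle',
    type: 'collection',
    entry: [
      {
        resource: {
          resourceType: 'Claim',
          status: 'active',
          use: 'preauthorization', // distinguishes PA from a payment claim
          patient: { reference: `Patient/${opts.patientId}` },
          diagnosis: [{
            sequence: 1,
            diagnosisCodeableConcept: {
              coding: [{ system: 'http://hl7.org/fhir/sid/icd-10-cm', code: opts.diagnosisCode }],
            },
          }],
          item: [{
            sequence: 1,
            productOrService: {
              coding: [{ system: 'http://www.ama-assn.org/go/cpt', code: opts.procedureCode }],
            },
          }],
        },
      },
      { resource: { resourceType: 'Patient', id: opts.patientId } },
    ],
  };
}
```

Validate the output against a test FHIR server first; production PAS submission goes through the `Claim/$submit` operation defined in the implementation guide.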
5.2 Claude Code Skills & Automations for Engagements
Skill: /clinical-extract
- Input: PDF or text of clinical documentation
- Output: Structured extraction (diagnoses, procedures, labs, vitals, meds) + summary
- Use: Live demos, development testing, case processing
Skill: /criteria-match
- Input: Clinical summary + criteria set (MCG/InterQual) + level of care
- Output: Criteria checklist with met/not-met/insufficient-info status per criterion
- Use: Demonstrate AI-assisted medical necessity determination
Skill: /fhir-build
- Input: Patient demographics + service request details
- Output: Valid FHIR PAS Bundle (JSON) ready for submission
- Use: Integration testing, CMS compliance demos
Skill: /coverage-check
- Input: Diagnosis code + procedure code
- Output: NCD/LCD coverage determination from CMS MCPs
- Use: Quick coverage lookups during case review
Skill: /case-prep
- Input: Raw clinical documents for a UR case
- Output: Full case preparation package: document classification, entity extraction, clinical summary, criteria pre-screen, coverage check, missing information list
- Use: End-to-end demo of DaisyAI's value proposition
5.3 Knowledge Bases to Compile
1. UR Decision Tree Reference
Create a structured knowledge base mapping service types to criteria pathways:
## Inpatient Admission Criteria Map
### Cardiac
- Acute MI → InterQual: Acute Care, Cardiology, Acute MI
- Key criteria: troponin elevation, ECG changes, hemodynamic instability
- Typical auth: 3-5 days initial
### Orthopedic
- Total Knee Replacement → MCG: Surgical, Musculoskeletal
- Key criteria: failed conservative therapy (3-6 months), functional limitation, imaging findings
- Typical auth: 1-2 days inpatient (trend toward outpatient)
### Behavioral Health
- Inpatient Psych → InterQual: Behavioral Health, Acute Care
- Key criteria: danger to self/others, inability to care for self, acute psychosis
- Typical auth: 3-7 days, frequent continued stay reviews
2. Health Plan Systems Cheat Sheet
Build a reference mapping common health plan systems, their APIs, and integration patterns. Update it with every engagement.
3. CMS Compliance Tracker
Track CMS-0057-F deadlines, requirements, and DaisyAI's readiness for each:
- Which APIs are required?
- Which metrics must be reported?
- What's the timeline?
- How can DaisyAI help plans comply?
5.4 Prototype: Clinical Document Parser
The highest-value prototype to build today. Not pseudocode - actually build it.
Architecture:
/app
  /api
    /extract            # POST: PDF → structured extraction
    /summarize          # POST: extraction → case summary
    /criteria-match     # POST: summary → criteria evaluation
    /coverage-check     # POST: dx + procedure → NCD/LCD lookup
  /dashboard
    /cases              # Case management view
    /analytics          # Turnaround time, productivity metrics
  /demo
    /upload             # Upload clinical docs, see real-time extraction
    /compare            # Side-by-side: raw doc vs AI extraction
/lib
  /clinical
    /extraction.ts      # Clinical entity extraction with Zod schemas
    /classification.ts  # Document type classification
    /summarization.ts   # Case summarization
    /criteria.ts        # Criteria matching engine
  /fhir
    /bundle.ts          # FHIR PAS Bundle construction
    /client.ts          # FHIR server client
  /data
    /icd10.ts           # ICD-10 code lookup (via MCP)
    /ncd-lcd.ts         # Coverage determination (via MCP)
    /npi.ts             # Provider lookup (via MCP)
Priority build order:
1. Clinical entity extraction (`/lib/clinical/extraction.ts`) - the core value
2. Document classification (`/lib/clinical/classification.ts`) - page-level triage
3. Case summarization (`/lib/clinical/summarization.ts`) - nurse-facing output
4. Upload demo (`/app/demo/upload`) - visual proof of concept
5. Criteria matching (`/lib/clinical/criteria.ts`) - the harder, higher-value layer
6. FHIR bundle construction (`/lib/fhir/bundle.ts`) - compliance play
7. Analytics dashboard (`/app/dashboard/analytics`) - executive-facing metrics
5.5 What Would Actually Win the Premera Engagement
Based on everything above, here's the sequence of demonstrations that would build maximum trust:
Day 1: "I understand your world"
- Show you know their tech stack (InterSystems HealthShare, Azure AKS, etc.)
- Reference their CMS-0057-F compliance timeline
- Speak their language: InterQual, MCG, concurrent review, peer-to-peer, turnaround time
Week 1: "I can solve your immediate pain"
- Demo the clinical document parser on a representative case
- Show: 30-page fax → structured summary in 30 seconds
- Have the lead UR nurse validate the output for clinical accuracy
- Quantify: "This could save your nurses 15-30 minutes per case. At 20 cases/nurse/day, that's 5-10 hours/day recovered per nurse."
Week 2: "I can integrate into your workflow"
- Show a prototype plugged into their UM system (even if it's a mock integration)
- Demonstrate the criteria pre-screening output mapped to their actual criteria set
- Present the turnaround time dashboard with their metrics
Month 1: "I'm already delivering value"
- Nurses are using the tool on real cases
- Turnaround times are measurably improving
- Auto-approval rates are increasing (better documentation extraction → more cases clearly meet criteria)
- CMS compliance metrics are being tracked
Month 3: "You can't imagine going back"
- Full production deployment
- Expanding to new service types
- Building the case for long-term SaaS contract
- Your champion (lead nurse or UM director) is advocating internally
Appendix A: Competitor Landscape
| Company | Approach | Funding | Notes |
|---|---|---|---|
| Cohere Health | AI-powered PA platform for payers | $106M+ | Biggest direct competitor. Processes millions of auths. Clinician-trained AI. EHR-integrated. FHIR-native. |
| Myndshft | PA automation for providers and payers | ~$20M | Real-time PA decisions, benefit checks |
| eviCore (Evernorth/Cigna) | Specialty benefit management + PA | Subsidiary | intelliPath for electronic PA. Massive scale. |
| Waystar | Revenue cycle + PA automation | Public (WAY) | Broad RCM platform with PA module |
| Olive AI | Healthcare automation (pivoted/struggled) | $900M+ raised | Cautionary tale: raised too much, tried to do everything, laid off 450+ |
DaisyAI Differentiation: FDE model means you're embedded, not just a SaaS vendor. You understand their specific workflow, build for their specific integration points, and improve faster because you're in the room. Cohere is a platform play; DaisyAI is a platform + embedded engineering play.
Appendix B: Premera-Specific Preparation Checklist
- Review Premera's public CMS star ratings and quality metrics
- Check their job postings for UM/IT roles (reveals pain points and tech stack)
- Read their InterSystems HealthShare case study in detail
- Understand their Azure AKS migration and cloud strategy
- Review their provider manual for PA requirements by service type
- Check EDI/claims submission guides on premera.com/provider
- Identify their InterQual vs MCG usage (call provider relations if needed)
- Prepare demo with Pacific Northwest-relevant clinical scenarios
- Get BAA template reviewed by healthcare attorney
- Prepare security questionnaire responses
- Set up Azure-based development environment (mirrors their infrastructure)
Appendix C: Key Acronyms
| Acronym | Full Name |
|---|---|
| CAPS | Core Administrative Processing System |
| PA | Prior Authorization |
| UM | Utilization Management |
| UR | Utilization Review |
| MCG | Milliman Care Guidelines |
| NCD | National Coverage Determination |
| LCD | Local Coverage Determination |
| EDI | Electronic Data Interchange |
| FHIR | Fast Healthcare Interoperability Resources |
| PAS | Prior Authorization Support (Da Vinci IG) |
| HIE | Health Information Exchange |
| ADT | Admit/Discharge/Transfer |
| C-CDA | Consolidated Clinical Document Architecture |
| BAA | Business Associate Agreement |
| PHI | Protected Health Information |
Sources
FDE Model & Enterprise Deployment
- Nabeel Qureshi: Reflections on Palantir
- Lenny's Podcast: How Palantir Built the Ultimate Founder Factory
- Ramp Engineering: Forward Deployed Engineering
- Barry McCardel: Understanding Forward Deployed Engineering
- Pragmatic Engineer: Forward Deployed Engineers
- Rocketlane: FDE Essential 2026 Guide
- Max Dauber: FDE Profession of the Future
- SVPG: Forward Deployed Engineers
- Palantir: Day in the Life of a FDSE
- Everest Group: Palantir Category of One
- Sundeep Teki: Forward Deployed AI Engineer Guide
- OpenAI FDE Job Posting
- ZenML: OpenAI Forward Deployed Engineering for Enterprise LLM Deployments
Health Plan Architecture & Systems
- Premera + Azure AKS (Microsoft Case Study)
- InterSystems + Premera: Clinical Data Integration
- Premera Clinical Data Strategy (Health Data Management)
- ZeOmega Jiva Platform
- ZeOmega Utilization Management
- HealthEdge HealthRules Payer
- HealthEdge Payer-Source Integration
- QNXT Overview (Sumble)
- TriZetto Facets (Alivia Analytics)
- Oracle Health Insurance Claims Adjudication
Healthcare Data Standards
- Da Vinci PAS Implementation Guide
- Da Vinci PAS Formal Specification
- CMS-0057-F Final Rule
- Firely: CMS-0057-F Decoded
- Stedi X12 837 Reference
- X12.org 278 Examples
- IntuitionLabs X12 EDI Guide
- Invene: Healthcare EDI Transactions Decoded
- HL7 FHIR Fundamentals Course
- HL7 Fundamentals
Clinical Criteria & UM Process
- MCG Health AI API
- Nurse Fern: Guide to MCG and InterQual
- Optum InterQual Solutions
- Utilization Management - StatPearls
- Cohere Health: AI-Powered PA
- Cohere Health: CMS PA API Compliance
AI/NLP in Healthcare
- LLMs in Healthcare (PMC)
- John Snow Labs: Healthcare NER
- Clinical NLP Overview (AI21)
- Deep Learning NLP Pipeline for EHR Documents (PMC)
- Vercel AI SDK: Structured Data Extraction
- AI SDK Core: Generating Structured Data
Compliance & Security
- HIPAA for Startups: SOC 2 vs HITRUST Guide (IntuitionLabs)
- HIPAA-Compliant AI Guide (Glacis)
- HIPAA Vendor Onboarding (Censinet)
- HHS Guidance on Risk Analysis
- HIPAA-Compliant AI Coding Guide (Augment Code)
Last updated: 2026-02-16 Internal reference only. Not for external distribution.