
FDE Technical Craft — What Top Engineers Actually Build

Research compiled February 16, 2026


The Market Reality

FDE job postings were up 800-1000% in 2025. From Bloomberry's analysis of 1,000 job postings:

Skill                               % of Postings
Python                              66%
Working directly with customers     55%
Building/deploying AI/ML systems    37%
TypeScript                          35%
Integrating systems/APIs            32%
LLM experience                      31%
SQL                                 ~30%
RAG                                 12%
OpenAI                              8%
Anthropic/Claude                    7%

Key insight: FDEs are full-stack. TypeScript at 35% means they're building customer-facing dashboards, not just backend pipes.


What Top FDEs Actually Build (Real Projects)

Palantir Deployments

BP Oil & Gas — Data integration across offshore platforms (North Sea, Gulf of Mexico, Oman). Ontology maps physical assets to digital twins. Real-time production dashboards.

Airbus Skywise — 25,000+ airline users. Fleet maintenance data + sensor telemetry + flight operations. Predictive maintenance reducing flight delays.

Ferrari F1 — Millions of high-frequency sensor data points per race. Data cleaning pipeline → ontology framework → real-time decision support during races.

NHS COVID-19 — Stood up in days. COVID Data Store integrating infection rates, bed availability, medicine stocks, vaccine rollouts across fragmented NHS systems. Evolved into £330M Federated Data Platform contract.

OpenAI Deployments

Morgan Stanley (AskResearchGPT) — Internal chatbot for financial advisors. Document retrieval efficiency: 20% → 80%. 98% adoption rate. Custom eval framework. Iterated with users for months.

Klarna — Parameterized instructions and tools per customer intent. Wrapped each intent with evaluation sets. Became a reusable pattern extracted into product features.

Key insight: OpenAI's FDE team grew 2 → 39 in one year. Every engagement extracts learnings into reusable products (Swarm → Agents SDK). The FDE team is a product discovery engine, not a consulting org.

Ramp

  • 2 → 16 FDEs, organized into pods
  • Custom ERP integrations (SAP, Oracle, NetSuite) via SFTP, API, Workato
  • 30+ HRIS integrations via Merge Unified API
  • "Heavy users of Cursor and Claude Code"
  • Each pod focuses on a vertical (healthcare, government, etc.)

Salesforce Agentforce

  • Pods of 1 deployment strategist + 2 FDEs
  • 3-month full-time engagement per client
  • FDEs discovered need → product team built Agentforce Observability

Anterior (Direct Competitor — $40M Series, Feb 2026)

  • "Forward Deployed Clinician" model — embeds clinicians alongside health plan staff
  • 5-day average deployment timeline
  • 99.24% clinical accuracy (KLAS-validated)
  • Reduced clinical review cycles by 75% across hundreds of nurses
  • Deployed at Geisinger, integrated with HealthEdge
  • 50M lives covered
  • Investors: NEA, Sequoia

Common FDE Project Patterns

First Wins (Week 1-2)

  1. Dashboard replacing a spreadsheet — find the Excel report everyone hates, rebuild it live
  2. Automation script — find manual data copy/paste, automate it
  3. Bug fix — fast trust-builder
  4. Data quality report — surface problems everyone knows about but hasn't quantified
  5. Integration connector — connect two systems requiring manual data transfer

Lock-In Projects (Create switching costs)

  1. Ontology/data model — structure client data in your platform
  2. Operational workflows — daily operations run through your system
  3. AI decision support — clinical, financial, or operational
  4. Multi-system integration layer — become the connective tissue between legacy systems

The FDE Technical Toolkit

Palantir Foundry Platform

Tool                 Purpose                            DaisyAI Equivalent
Pipeline Builder     Visual ETL, data integration       dbt + custom transforms
Code Repositories    Web IDE, production code           GitHub + Cursor/Claude Code
Contour              Interactive data exploration       DuckDB + notebooks
Workshop             Low-code app builder               Next.js App Router
Slate                Flexible dashboard builder         React + Tailwind
Quiver               Point-and-click analytics          Recharts/Tremor dashboards
Ontology             Typed data model (THE moat)        Postgres schema + FHIR models
AIP Agent Studio     AI assistants                      Vercel AI SDK + Claude
OSDK                 Generated SDKs (Python/TS/Java)    REST API + client SDKs
Palantir MCP         AI IDE integration                 Claude Code + MCP servers

Core Palantir concept — The Ontology: FDEs map raw data into typed objects (Patient, Order, etc.) with properties and relationships. Everything else operates on this. This creates lock-in.
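The ontology idea can be sketched in plain Python: raw rows become typed objects with properties and relationships, and everything downstream (dashboards, AI tools, workflows) operates on those types. The names here (Patient, AuthRequest) are illustrative, not Palantir's API.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    name: str
    plan_id: str

@dataclass
class AuthRequest:
    request_id: str
    patient: Patient        # a relationship, not a bare foreign key
    procedure_code: str
    status: str = "pending"

def from_raw(row: dict, patients: dict[str, Patient]) -> AuthRequest:
    """Map an untyped source row onto the ontology."""
    return AuthRequest(
        request_id=row["id"],
        patient=patients[row["patient_id"]],
        procedure_code=row["cpt"],
    )
```

The lock-in follows from this layer: once workflows and apps are written against the typed objects rather than the raw tables, swapping out the platform means rewriting everything above it.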

OpenAI FDE Toolkit

Tool                     Purpose
Agents SDK (ex-Swarm)    Multi-agent orchestration with handoffs
Function Calling         Structured tool use in LLM responses
Structured Outputs       Guaranteed JSON schema conformance
Eval Frameworks          Per-customer accuracy measurement
Guardrails               Input/output validation
RAG Pipelines            Document retrieval + generation

Key pattern: "Evaluation flywheel" — build evals first, then iterate prompts/architecture until evals pass.
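A minimal sketch of the flywheel, with the model call stubbed out (in practice it would hit the LLM pipeline under test; the cases and "prompt version" mechanics here are invented for illustration):

```python
# Define graded eval cases FIRST, then iterate until they all pass.
EVAL_CASES = [
    {"input": "knee MRI, failed 6 weeks PT", "expected": "approve"},
    {"input": "knee MRI, no conservative treatment", "expected": "pend"},
]

def model_under_test(prompt_version: int, text: str) -> str:
    # Stand-in for the real pipeline; v2 "fixes" the failure mode.
    if prompt_version >= 2 and "no conservative treatment" in text:
        return "pend"
    return "approve"

def run_evals(prompt_version: int) -> float:
    passed = sum(
        model_under_test(prompt_version, c["input"]) == c["expected"]
        for c in EVAL_CASES
    )
    return passed / len(EVAL_CASES)

# Iterate until the suite passes.
version = 1
while run_evals(version) < 1.0:
    version += 1  # in reality: edit prompts/architecture, not a counter
```

The point is the ordering: the eval suite exists before the first prompt iteration, so "better" is always measurable per customer.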

What an FDE's Workspace Actually Looks Like

  • IDE: Cursor or VS Code (Ramp: "heavy Cursor + Claude Code users")
  • AI: Claude Code, GitHub Copilot, ChatGPT
  • Languages: Python (primary), TypeScript (dashboards), SQL (data)
  • Frameworks: Next.js/React, FastAPI, LangChain/LangGraph
  • Data: PostgreSQL, BigQuery/Snowflake, pgvector/Pinecone
  • Cloud: AWS (most common), Azure, GCP
  • CI/CD: GitHub Actions, Docker, Kubernetes
  • Monitoring: Datadog, custom eval dashboards
  • Comms: Slack (customer channels), Linear/Notion (project tracking)

Day 1 Kit — What FDEs Bring to Engagements

Discovery Phase

  1. System/Data Landscape Map — structured questionnaire: what systems, where data lives, formats, manual processes, leadership reports
  2. Stakeholder Map — decision-makers, end users, IT gatekeepers, data owners
  3. Pain Point Prioritization — pain severity × frequency × data availability × quick-win potential
  4. Architecture Assessment — current state vs. desired state diagram
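The prioritization formula in step 3 can be sketched as a multiplicative score, so a zero on any dimension sinks the candidate. The 1-5 scales and example candidates are illustrative assumptions:

```python
def priority(severity: int, frequency: int,
             data_availability: int, quick_win: int) -> int:
    """Pain severity x frequency x data availability x quick-win potential."""
    return severity * frequency * data_availability * quick_win

# Hypothetical discovery output, each factor rated 1-5.
candidates = {
    "weekly Excel report": priority(4, 5, 5, 5),
    "legacy EDI migration": priority(5, 3, 2, 1),
}
best = max(candidates, key=candidates.get)
```

A multiplicative score (vs. a sum) encodes the quick-win logic: a severe pain with no accessible data is not a week-one project.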

Demo/POC

  1. Demo Environment with Synthetic Data — working instance with realistic fake data matching client domain
  2. Pre-Built Dashboard Templates — "Here's what this could look like with YOUR data"
  3. Integration Testing Framework — automated tests for data connectors, APIs, pipelines

Business Case

  1. ROI Model — quantified before/after (time per review, auto-approval rate, FTE savings, TAT reduction)
  2. Technical Design Doc Template — architecture, data flow, security, timeline, milestones
  3. Engagement Runbook — week-by-week plan

Implementation

  1. Data Mapping Templates — source fields → target schema
  2. Security/Compliance Checklist — BAA, SOC2, HIPAA
  3. UAT Scripts — structured test cases for end users

"The first deliverable is not code — it's a map."


10 Replicable Project Patterns for DaisyAI

Pattern 1: Replace the Spreadsheet (Week 1-2)

What: Find the Excel report the UR team runs weekly. Replace with live dashboard.
Stack: Next.js + Postgres + Recharts/Tremor
Value: Eliminates 4-8 hrs/week manual reporting. Gets you in front of leadership.

Pattern 2: AI-Powered Auth Request Triage (Week 3-6)

What: AI reads incoming PA requests. Auto-routes: approve obvious cases, flag complex for review, identify missing docs upfront.
Stack: Claude + structured outputs + CMS criteria RAG + FHIR/PDF parsing
Value: Reduces nurse review burden 30-50% on routine cases. Anterior claims 90% automation.
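The routing half of this pattern is simple enough to sketch. Here `call_claude` is a placeholder for a structured-output LLM call (returning canned JSON so the routing logic stands alone); the field names and thresholds are assumptions to tune per client:

```python
import json

def call_claude(request_text: str) -> str:
    # Placeholder for a structured-output model call.
    return json.dumps({
        "recommendation": "approve",
        "confidence": 0.93,
        "missing_docs": [],
    })

def triage(request_text: str) -> str:
    result = json.loads(call_claude(request_text))
    if result["missing_docs"]:
        return "return-to-provider"  # identify missing docs upfront
    if result["recommendation"] == "approve" and result["confidence"] >= 0.9:
        return "auto-approve"        # obvious cases
    return "nurse-review"            # flag complex cases for a human
```

Note the asymmetry: only approvals are automated; anything short of high-confidence approval falls through to a nurse.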

Pattern 3: Clinical Criteria Knowledge Base (Week 3-6)

What: RAG over medical policies, NCDs/LCDs, InterQual/MCG. Nurses ask natural language questions, get structured answers with citations.
Stack: pgvector + embeddings + Claude + CMS Coverage MCPs
Value: 15-30 min research per case → 2-3 min. Standardizes decisions.
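The retrieval step might look like the following, assuming a Postgres table `policy_chunks(id, source, text, embedding vector)` populated with embedded policy passages (the table name and columns are illustrative). pgvector's `<=>` operator is cosine distance; the retrieved chunks then go to Claude with a cite-your-sources prompt:

```python
# Parameterized pgvector query: nearest policy chunks to the question.
RETRIEVE_SQL = """
SELECT source, text
FROM policy_chunks
ORDER BY embedding <=> %(query_embedding)s::vector
LIMIT %(k)s
"""

def build_context(rows: list[tuple[str, str]]) -> str:
    """Format retrieved chunks with their source tags for the prompt,
    so the model can cite [source] inline in its answer."""
    return "\n\n".join(f"[{source}] {text}" for source, text in rows)
```

Carrying the source tag through to the answer is what makes this usable for UM: a criteria answer without a citation is not defensible in an audit.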

Pattern 4: System Integration Bridge (Week 8-12)

What: Connect DaisyAI to existing UM platform (HealthEdge, Jiva, QNXT). Bi-directional.
Stack: REST APIs, FHIR R4, HL7v2, SFTP, OAuth2/SMART
Value: Eliminates "another login" adoption blocker. The integration IS the moat.

Pattern 5: Denial Analytics + Appeal Automation (Expansion)

What: Analyze denial patterns, auto-generate appeal letters with clinical evidence and criteria citations.
Stack: SQL analytics + Claude + PubMed RAG + template engine
Value: Better appeals = recovered revenue. Immediately quantifiable ROI.

Pattern 6: Nurse Productivity Copilot (Week 6-8)

What: AI co-pilot during clinical review. Pre-fills template, suggests criteria, highlights missing docs, drafts determination letter.
Stack: Claude function calling + structured outputs + criteria RAG + doc parser
Value: Review time 20-30 min → 5-10 min per case. Same staff handles 30%+ more volume.

Pattern 7: Compliance Audit Trail (Week 6-8)

What: Auto-document every AI-assisted decision: criteria applied, evidence considered, AI recommendation vs. human decision, timestamps.
Stack: Event logging + Postgres + exportable reports
Value: CMS/state audit readiness. De-risks AI adoption.
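A minimal sketch of the event shape, assuming append-only storage (field names are illustrative; the real schema should match what CMS/state auditors actually ask to see):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)  # immutable: audit events are never edited
class DecisionEvent:
    request_id: str
    criteria_applied: list
    ai_recommendation: str
    human_decision: str
    decided_at: str

def log_event(event: DecisionEvent, sink: list) -> None:
    """Append as JSON; in production, an INSERT into an append-only
    Postgres table rather than an in-memory list."""
    sink.append(json.dumps(asdict(event)))

event = DecisionEvent(
    request_id="PA-1001",
    criteria_applied=["MCG A-0341"],
    ai_recommendation="approve",
    human_decision="approve",
    decided_at=datetime.now(timezone.utc).isoformat(),
)
```

Capturing AI recommendation and human decision as separate fields is the key design choice: the disagreement rate between them is both an audit artifact and a free eval signal for the models.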

Pattern 8: P2P Review Prep (Expansion)

What: When denial → P2P review, generate prep packet: case summary, criteria, evidence, anticipated arguments, talking points.
Stack: Claude + criteria RAG + case extraction + structured docs
Value: High pain, low competition. Most UM platforms don't touch this. Differentiator.

Pattern 9: Submission Quality Scoring (Week 1-2)

What: Score incoming requests for completeness. Flag missing docs, wrong codes, mismatched diagnosis/procedure. Route back to providers.
Stack: Rule engine + Claude + ICD-10 MCP + NPI MCP
Value: 20-30% of requests are incomplete. Catching upfront saves massive nurse time.
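The rule-engine half of this pattern is deterministic and cheap to build first; the LLM layer handles the fuzzier checks (diagnosis/procedure mismatch) that predicates can't express. Rule names and request fields below are illustrative:

```python
# Each rule is a predicate over the parsed request payload.
RULES = {
    "has_clinical_notes": lambda r: bool(r.get("clinical_notes")),
    "has_diagnosis_code": lambda r: bool(r.get("icd10")),
    "has_procedure_code": lambda r: bool(r.get("cpt")),
    "has_ordering_npi":   lambda r: bool(r.get("npi")),
}

def score_submission(request: dict) -> tuple[float, list[str]]:
    """Return a completeness score and the list of failed checks,
    so the failures can be routed back to the provider verbatim."""
    failed = [name for name, check in RULES.items() if not check(request)]
    return 1 - len(failed) / len(RULES), failed
```

Returning the named failures (not just the score) matters: the provider-facing message "missing clinical notes" is the part that actually reduces resubmission churn.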

Pattern 10: Executive Command Center (Week 8-12)

What: C-suite dashboard: auth volume, auto-approval rates, TAT, nurse productivity, denial outcomes, compliance scores, AI accuracy, cost savings.
Stack: Next.js + Postgres + automated reports + alerts
Value: Gets you from "department tool" to "enterprise contract."

Recommended Sequencing

Week 1-2:   Pattern 9 (Submission Quality) + Pattern 1 (Dashboard)
            → Quick wins, build trust, get data access

Week 3-6:   Pattern 2 (Auto-Triage) + Pattern 3 (Criteria KB)
            → Core AI value, the reason they bought

Week 6-8:   Pattern 7 (Audit Trail) + Pattern 6 (Nurse Copilot)
            → Deepen adoption, reduce risk

Week 8-12:  Pattern 4 (Integration) + Pattern 10 (Exec Dashboard)
            → Lock in, expand, get the enterprise contract

Parallel:   Pattern 5 (Denial Analytics) + Pattern 8 (P2P Prep)
            → Expansion upsell projects

FDE Learning Roadmap

Books (Priority Order)

  1. The Trusted Advisor (Maister, Green, Galford) — Trust Equation: (Credibility + Reliability + Intimacy) / Self-Orientation
  2. The First 90 Days (Watkins) — Perception shaped in first 3 months at client site
  3. Working Effectively with Legacy Code (Feathers) — Every health plan runs legacy systems
  4. The Phoenix Project (Kim) — How enterprises think about IT and change management

Certifications (ROI-Ordered)

Cert                                 Time             Why
HL7 FHIR Fundamentals                4 weeks          Health plans recognize this. The standard you'll integrate daily.
dbt Analytics Engineering            6+ months exp    Signals data fluency. Health plan data teams use dbt.
AWS Solutions Architect Associate    2-3 months       Gets past procurement gatekeepers.

Healthcare Standards to Learn

  • HL7 FHIR — Modern standard. Resources, Bundles, Search params. CoverageEligibilityRequest/Response for UM.
  • X12 EDI — Legacy standard that still runs healthcare. 278 (PA), 837 (claims), 835 (remittance), 270/271 (eligibility).
  • HL7 v2 — Dominant messaging inside hospitals. ADT, ORM/ORU messages.
  • OMOP CDM — Open standard for healthcare observational data. The Book of OHDSI (free).
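To make the FHIR bullet concrete, here is roughly what a minimal R4 CoverageEligibilityRequest looks like as a Python dict; the references and CPT item are placeholder values, and a real integration would validate against the R4 schema and POST to the payer's FHIR endpoint:

```python
# Sketch of the UM-relevant FHIR resource shape (illustrative values).
eligibility_request = {
    "resourceType": "CoverageEligibilityRequest",
    "status": "active",
    "purpose": ["auth-requirements"],  # ask what PA documentation is required
    "patient": {"reference": "Patient/example"},
    "created": "2026-02-16",
    "insurer": {"reference": "Organization/payer"},
    "item": [{
        "productOrService": {
            "coding": [{
                "system": "http://www.ama-assn.org/go/cpt",
                "code": "73721",  # MRI lower extremity joint
            }]
        }
    }],
}
```

The `auth-requirements` purpose is the one that matters for UM: it asks the payer what prior-authorization documentation a given service needs, which is exactly the upstream question Patterns 2 and 9 depend on.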

Key Repos to Study

Repo                    Why
Medplum                 Open-source FHIR platform. TypeScript/React/Node. Most relevant to your stack.
medspaCy                Clinical NLP: negation detection, NER, section segmentation. Rule-based complement to LLM extraction.
HAPI FHIR               Java FHIR server. Most widely deployed. Understand the architecture.
Awesome-FDE-Roadmap     Comprehensive FDE learning roadmap.
Palantir Open Source    See how they build reusable infra.

AI/ML Engineering

  • Anthropic: Building Effective AI Agents — Sequential, parallel, evaluator-optimizer, orchestrator-worker patterns
  • RAG state of the art (2026): Agentic RAG with autonomous agents orchestrating multiple retrieval strategies. Semantic chunking (0.79-0.82 faithfulness) vs naive (0.47-0.51). Semantic caching cuts costs 68%.
  • LLM Observability: Portkey (Gartner Cool Vendor), Arize Phoenix (open source), Galileo (healthcare-certified)
  • Human-in-the-loop: Confidence-based routing, approval flows, context preservation, audit trails
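The semantic-caching idea mentioned above (reuse a prior answer when a new query is close enough in embedding space) can be sketched in a few lines. The embedding function here is a deliberately crude stand-in, and the 0.95 threshold is an assumption; production would use a real embedding model and a vector index:

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: character-frequency vector (NOT semantic).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

CACHE: list[tuple[list[float], str]] = []

def answer(query: str, llm, threshold: float = 0.95) -> str:
    q = embed(query)
    for cached_vec, cached_answer in CACHE:
        if cosine(q, cached_vec) >= threshold:
            return cached_answer  # cache hit: skip the LLM call entirely
    result = llm(query)
    CACHE.append((q, result))
    return result
```

In a criteria KB, nurses ask near-identical questions all day, which is why cited cost reductions from semantic caching can be large; the risk to manage is a false cache hit returning the wrong policy, so the threshold should be tuned conservatively.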

Daily Habits of Excellent FDEs

The First Hour at a Client Site

  1. Walk the floor (or Slack channels). Observe before proposing.
  2. Find the "power user" — the nurse/analyst who actually knows how things work
  3. Ask: "Walk me through what you did this morning." Not "what's broken?"

Time Allocation

  • 60% coding (protected build blocks)
  • 25% client meetings (discovery, demos, standups)
  • 15% documentation (field notes for product team)

Ratio shifts: early = more meetings, mid = more building, late = more training/handoff.

Documentation Pattern

Keep a running "Field Notes" doc: Date | Observation | Product Implication | Priority

Weekly 15-min product sync:

  1. What patterns repeat across clients?
  2. What am I building that should be in the platform?
  3. What's the platform missing?

Political Navigation

  • Map the org chart day 1: sponsor (pays), champion (wants you to succeed), blocker (feels threatened), end user
  • "Ensure your work solves one of the CEO's top five problems" — this gives air cover
  • Never go around someone. Involve your sponsor to resolve blockers.
  • Deliver small wins early. "I noticed you spend 2 hours on this report. Here's a 5-minute version."
  • Listen 80%, talk 20% in the first 30 days.

The Strategic Frame

From a16z: "If you only copy the embedded-engineer part, you end up with thousands of bespoke deployments impossible to maintain. Every pattern must be built on shared primitives."

DaisyAI's product IS the reusable platform. The FDE engagement customizes it per client. Each engagement should make the product better for ALL future clients.

When to say no:

  • Can't serve 3+ future clients → consulting, not product
  • Requires forking core codebase → push back hard
  • Client's ask reveals a platform gap → gold — extract the abstraction

Sonnet · read-only