FDE Technical Craft — What Top Engineers Actually Build
Research compiled February 16, 2026
The Market Reality
FDE job postings grew 800-1,000% in 2025. Bloomberry's analysis of 1,000 job postings found:
| Skill | % of Postings |
|---|---|
| Python | 66% |
| Working directly with customers | 55% |
| Building/deploying AI/ML systems | 37% |
| TypeScript | 35% |
| Integrating systems/APIs | 32% |
| LLM experience | 31% |
| SQL | ~30% |
| RAG | 12% |
| OpenAI | 8% |
| Anthropic/Claude | 7% |
Key insight: FDEs are full-stack. TypeScript at 35% means they're building customer-facing dashboards, not just backend plumbing.
What Top FDEs Actually Build (Real Projects)
Palantir Deployments
BP Oil & Gas — Data integration across offshore platforms (North Sea, Gulf of Mexico, Oman). Ontology maps physical assets to digital twins. Real-time production dashboards.
Airbus Skywise — 25,000+ airline users. Fleet maintenance data + sensor telemetry + flight operations. Predictive maintenance reducing flight delays.
Ferrari F1 — Millions of high-frequency sensor data points per race. Data cleaning pipeline → ontology framework → real-time decision support during races.
NHS COVID-19 — Stood up in days. COVID Data Store integrating infection rates, bed availability, medicine stocks, vaccine rollouts across fragmented NHS systems. Evolved into £330M Federated Data Platform contract.
OpenAI Deployments
Morgan Stanley (AskResearchGPT) — Internal chatbot for financial advisors. Document retrieval efficiency: 20% → 80%. 98% adoption rate. Custom eval framework. Iterated with users for months.
Klarna — Parameterized instructions and tools per customer intent. Wrapped each intent with evaluation sets. Became a reusable pattern extracted into product features.
Key insight: OpenAI's FDE team grew 2 → 39 in one year. Every engagement extracts learnings into reusable products (Swarm → Agents SDK). The FDE team is a product discovery engine, not a consulting org.
Ramp
- 2 → 16 FDEs, organized into pods
- Custom ERP integrations (SAP, Oracle, NetSuite) via SFTP, API, Workato
- 30+ HRIS integrations via Merge Unified API
- "Heavy users of Cursor and Claude Code"
- Each pod focuses on a vertical (healthcare, government, etc.)
Salesforce Agentforce
- Pods of 1 deployment strategist + 2 FDEs
- 3-month full-time engagement per client
- FDEs discovered need → product team built Agentforce Observability
Anterior (Direct Competitor — $40M Series, Feb 2026)
- "Forward Deployed Clinician" model — embeds clinicians alongside health plan staff
- 5-day average deployment timeline
- 99.24% clinical accuracy (KLAS-validated)
- Reduced clinical review cycles by 75% across hundreds of nurses
- Deployed at Geisinger, integrated with HealthEdge
- 50M lives covered
- Investors: NEA, Sequoia
Common FDE Project Patterns
First Wins (Week 1-2)
- Dashboard replacing a spreadsheet — find the Excel report everyone hates, rebuild it live
- Automation script — find manual data copy/paste, automate it
- Bug fix — fast trust-builder
- Data quality report — surface problems everyone knows about but hasn't quantified
- Integration connector — connect two systems requiring manual data transfer
Lock-In Projects (Create switching costs)
- Ontology/data model — structure client data in your platform
- Operational workflows — daily operations run through your system
- AI decision support — clinical, financial, or operational
- Multi-system integration layer — become the connective tissue between legacy systems
The FDE Technical Toolkit
Palantir Foundry Platform
| Tool | Purpose | DaisyAI Equivalent |
|---|---|---|
| Pipeline Builder | Visual ETL, data integration | dbt + custom transforms |
| Code Repositories | Web IDE, production code | GitHub + Cursor/Claude Code |
| Contour | Interactive data exploration | DuckDB + notebooks |
| Workshop | Low-code app builder | Next.js App Router |
| Slate | Flexible dashboard builder | React + Tailwind |
| Quiver | Point-and-click analytics | Recharts/Tremor dashboards |
| Ontology | Typed data model (THE moat) | Postgres schema + FHIR models |
| AIP Agent Studio | AI assistants | Vercel AI SDK + Claude |
| OSDK | Generated SDKs (Python/TS/Java) | REST API + client SDKs |
| Palantir MCP | AI IDE integration | Claude Code + MCP servers |
Core Palantir concept — The Ontology: FDEs map raw data into typed objects (Patient, Order, etc.) with properties and relationships. Everything else operates on this. This creates lock-in.
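As a concrete sketch, an ontology-style typed model can start as a few linked record types. This is a minimal illustration in Python dataclasses, not Palantir's actual implementation; the object and field names (Patient, Provider, AuthRequest) are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical ontology objects: typed entities with properties and relationships.
@dataclass
class Provider:
    npi: str
    name: str

@dataclass
class Patient:
    member_id: str
    name: str
    plan: str

@dataclass
class AuthRequest:
    request_id: str
    patient: Patient        # relationship: each request belongs to a patient
    provider: Provider      # relationship: submitted by a provider
    procedure_code: str     # e.g. a CPT code
    status: str = "pending"

# Downstream tools (dashboards, AI triage, audit logs) operate on these typed
# objects instead of raw rows — which is exactly what creates the lock-in.
req = AuthRequest("PA-001",
                  Patient("M123", "Jane Doe", "Gold PPO"),
                  Provider("1234567890", "Dr. Smith"),
                  "27447")
print(req.status)  # pending
```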
OpenAI FDE Toolkit
| Tool | Purpose |
|---|---|
| Agents SDK (ex-Swarm) | Multi-agent orchestration with handoffs |
| Function Calling | Structured tool use in LLM responses |
| Structured Outputs | Guaranteed JSON schema conformance |
| Eval Frameworks | Per-customer accuracy measurement |
| Guardrails | Input/output validation |
| RAG Pipelines | Document retrieval + generation |
Key pattern: "Evaluation flywheel" — build evals first, then iterate prompts/architecture until evals pass.
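A minimal sketch of that flywheel: fix the eval cases first, then swap implementations until they pass. `classify_intent` here is a trivial keyword stand-in for the real LLM call; the cases and threshold are invented:

```python
# Eval cases are defined BEFORE the system is built; they are the target.
EVAL_CASES = [
    ("I need prior auth for an MRI", "prior_auth"),
    ("What's the status of claim 123?", "claim_status"),
    ("Update my mailing address", "account_update"),
]

def classify_intent(text: str) -> str:
    # Placeholder for the real LLM call; a keyword router for the demo.
    if "auth" in text.lower():
        return "prior_auth"
    if "claim" in text.lower():
        return "claim_status"
    return "account_update"

def run_evals(fn, cases, threshold=0.9):
    passed = sum(fn(text) == expected for text, expected in cases)
    accuracy = passed / len(cases)
    return accuracy, accuracy >= threshold

accuracy, ok = run_evals(classify_intent, EVAL_CASES)
print(f"accuracy={accuracy:.2f} pass={ok}")  # accuracy=1.00 pass=True
```

Each prompt or architecture change reruns the same harness, so regressions surface immediately.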
What an FDE's Workspace Actually Looks Like
- IDE: Cursor or VS Code (Ramp: "heavy Cursor + Claude Code users")
- AI: Claude Code, GitHub Copilot, ChatGPT
- Languages: Python (primary), TypeScript (dashboards), SQL (data)
- Frameworks: Next.js/React, FastAPI, LangChain/LangGraph
- Data: PostgreSQL, BigQuery/Snowflake, pgvector/Pinecone
- Cloud: AWS (most common), Azure, GCP
- CI/CD: GitHub Actions, Docker, Kubernetes
- Monitoring: Datadog, custom eval dashboards
- Comms: Slack (customer channels), Linear/Notion (project tracking)
Day 1 Kit — What FDEs Bring to Engagements
Discovery Phase
- System/Data Landscape Map — structured questionnaire: what systems, where data lives, formats, manual processes, leadership reports
- Stakeholder Map — decision-makers, end users, IT gatekeepers, data owners
- Pain Point Prioritization — pain severity × frequency × data availability × quick-win potential
- Architecture Assessment — current state vs. desired state diagram
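The prioritization above can be made explicit with a simple multiplicative score. A toy sketch — the pain points and 1-5 ratings are invented for illustration:

```python
# (severity, frequency, data availability, quick-win potential), each 1-5.
pain_points = {
    "weekly Excel report": (4, 5, 5, 5),
    "manual eligibility check": (5, 4, 2, 2),
    "denial letter drafting": (3, 3, 4, 4),
}

def score(ratings):
    severity, frequency, data_avail, quick_win = ratings
    return severity * frequency * data_avail * quick_win

ranked = sorted(pain_points.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{score(ratings):4d}  {name}")
```

The multiplication matters: a severe pain with no accessible data (the eligibility check above) scores below a moderate pain you can ship against this week.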
Demo/POC
- Demo Environment with Synthetic Data — working instance with realistic fake data matching client domain
- Pre-Built Dashboard Templates — "Here's what this could look like with YOUR data"
- Integration Testing Framework — automated tests for data connectors, APIs, pipelines
Business Case
- ROI Model — quantified before/after (time per review, auto-approval rate, FTE savings, TAT reduction)
- Technical Design Doc Template — architecture, data flow, security, timeline, milestones
- Engagement Runbook — week-by-week plan
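A minimal before/after ROI sketch; the case volume, review times, and loaded nurse cost are placeholders to be replaced with the client's own numbers:

```python
# Illustrative inputs — replace with client data during discovery.
cases_per_week = 400
minutes_before, minutes_after = 25, 8   # avg review time per case
nurse_cost_per_hour = 55.0              # loaded hourly cost

hours_saved = cases_per_week * (minutes_before - minutes_after) / 60
weekly_savings = hours_saved * nurse_cost_per_hour
print(f"{hours_saved:.1f} nurse-hours/week -> ${weekly_savings:,.0f}/week")
```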
Implementation
- Data Mapping Templates — source fields → target schema
- Security/Compliance Checklist — BAA, SOC2, HIPAA
- UAT Scripts — structured test cases for end users
"The first deliverable is not code — it's a map."
10 Replicable Project Patterns for DaisyAI
Pattern 1: Replace the Spreadsheet (Week 1-2)
What: Find the Excel report the UR team runs weekly. Replace it with a live dashboard.
Stack: Next.js + Postgres + Recharts/Tremor
Value: Eliminates 4-8 hrs/week of manual reporting. Gets you in front of leadership.
Pattern 2: AI-Powered Auth Request Triage (Week 3-6)
What: AI reads incoming PA requests and auto-routes them: approve obvious cases, flag complex ones for review, identify missing docs upfront.
Stack: Claude + structured outputs + CMS criteria RAG + FHIR/PDF parsing
Value: Reduces nurse review burden 30-50% on routine cases. Anterior claims 90% automation.
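The routing step might look like the following sketch, assuming the LLM has already returned a structured verdict (e.g. via JSON-schema-constrained output); the field names and thresholds are hypothetical:

```python
# Triage routing on a structured LLM verdict. Field names and the 0.95
# auto-approve threshold are illustrative, not a product spec.
def route(verdict: dict) -> str:
    if verdict.get("missing_docs"):
        return "return_to_provider"   # incomplete: bounce back upfront
    if verdict["recommendation"] == "approve" and verdict["confidence"] >= 0.95:
        return "auto_approve"         # obvious case: no nurse touch
    return "nurse_review"             # everything else gets a human

print(route({"recommendation": "approve", "confidence": 0.98, "missing_docs": []}))
# auto_approve
```

Keeping the routing logic outside the prompt makes the thresholds auditable and tunable per client.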
Pattern 3: Clinical Criteria Knowledge Base (Week 3-6)
What: RAG over medical policies, NCDs/LCDs, InterQual/MCG. Nurses ask natural-language questions and get structured answers with citations.
Stack: pgvector + embeddings + Claude + CMS Coverage MCPs
Value: Cuts 15-30 min of research per case to 2-3 min. Standardizes decisions.
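The retrieval core can be illustrated with in-memory cosine similarity; in production the same ranking is a pgvector query (`ORDER BY embedding <=> :query`), and the tiny vectors and policy snippets here are toy values:

```python
import math

# Toy corpus: policy chunk -> pre-computed embedding (illustrative 3-d vectors).
CHUNKS = {
    "MRI lumbar spine: document a conservative therapy trial first": [0.9, 0.1, 0.0],
    "Total knee arthroplasty: functional impairment criteria": [0.1, 0.8, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    ranked = sorted(CHUNKS.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.0]))  # top match: the lumbar MRI chunk
```

The retrieved chunks are then passed to Claude with an instruction to answer only from, and cite, the supplied policy text.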
Pattern 4: System Integration Bridge (Week 8-12)
What: Connect DaisyAI to the existing UM platform (HealthEdge, Jiva, QNXT), bi-directionally.
Stack: REST APIs, FHIR R4, HL7v2, SFTP, OAuth2/SMART
Value: Eliminates the "another login" adoption blocker. The integration IS the moat.
Pattern 5: Denial Analytics + Appeal Automation (Expansion)
What: Analyze denial patterns; auto-generate appeal letters with clinical evidence and criteria citations.
Stack: SQL analytics + Claude + PubMed RAG + template engine
Value: Better appeals = recovered revenue. Immediately quantifiable ROI.
Pattern 6: Nurse Productivity Copilot (Week 6-8)
What: AI copilot during clinical review. Pre-fills the template, suggests criteria, highlights missing docs, drafts the determination letter.
Stack: Claude function calling + structured outputs + criteria RAG + doc parser
Value: Cuts review time from 20-30 min to 5-10 min per case. Same staff handles 30%+ more volume.
Pattern 7: Compliance Audit Trail (Week 6-8)
What: Auto-document every AI-assisted decision: criteria applied, evidence considered, AI recommendation vs. human decision, timestamps.
Stack: Event logging + Postgres + exportable reports
Value: CMS/state audit readiness. De-risks AI adoption.
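A minimal sketch of the audit event shape; the field names are illustrative and the criteria identifier is a placeholder:

```python
import datetime
import json

# Append-only audit event for one AI-assisted decision. Field names are
# hypothetical; the real schema would be driven by CMS/state audit requirements.
def audit_event(case_id, criteria, ai_rec, human_decision):
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,
        "criteria_applied": criteria,
        "ai_recommendation": ai_rec,
        "human_decision": human_decision,
        "overridden": ai_rec != human_decision,  # flags human disagreement
    }

event = audit_event("PA-001", ["criteria-123"], "approve", "approve")
print(json.dumps(event, indent=2))
```

The `overridden` flag is the interesting analytics hook: override rates per criteria set show where the AI is least trusted.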
Pattern 8: P2P Review Prep (Expansion)
What: When a denial goes to P2P review, generate a prep packet: case summary, criteria, evidence, anticipated arguments, talking points.
Stack: Claude + criteria RAG + case extraction + structured docs
Value: High pain, low competition. Most UM platforms don't touch this. Differentiator.
Pattern 9: Submission Quality Scoring (Week 1-2)
What: Score incoming requests for completeness. Flag missing docs, wrong codes, mismatched diagnosis/procedure pairs. Route incomplete requests back to providers.
Stack: Rule engine + Claude + ICD-10 MCP + NPI MCP
Value: 20-30% of requests arrive incomplete. Catching them upfront saves massive nurse time.
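The completeness check can start as a plain rule engine before any AI is involved. A sketch with hypothetical required fields:

```python
# Hypothetical required fields for a PA submission; real rules would come
# from the client's intake policy and ICD-10/CPT reference data.
REQUIRED_FIELDS = ["member_id", "cpt_code", "icd10_code", "clinical_notes"]

def score_submission(request: dict) -> dict:
    issues = [f"missing {f}" for f in REQUIRED_FIELDS if not request.get(f)]
    # Consistency rules (e.g. does the CPT plausibly match the ICD-10?) would
    # be layered on here, consulting code-set reference data.
    return {"complete": not issues, "issues": issues}

result = score_submission({"member_id": "M123", "cpt_code": "27447"})
print(result["issues"])  # ['missing icd10_code', 'missing clinical_notes']
```

Deterministic rules first, LLM judgment second: the rules are explainable to providers and cheap to run on every submission.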
Pattern 10: Executive Command Center (Week 8-12)
What: C-suite dashboard: auth volume, auto-approval rates, TAT, nurse productivity, denial outcomes, compliance scores, AI accuracy, cost savings.
Stack: Next.js + Postgres + automated reports + alerts
Value: Gets you from "department tool" to "enterprise contract."
Recommended Sequencing
Week 1-2: Pattern 9 (Submission Quality) + Pattern 1 (Dashboard)
→ Quick wins, build trust, get data access
Week 3-6: Pattern 2 (Auto-Triage) + Pattern 3 (Criteria KB)
→ Core AI value, the reason they bought
Week 6-8: Pattern 7 (Audit Trail) + Pattern 6 (Nurse Copilot)
→ Deepen adoption, reduce risk
Week 8-12: Pattern 4 (Integration) + Pattern 10 (Exec Dashboard)
→ Lock in, expand, get the enterprise contract
Parallel: Pattern 5 (Denial Analytics) + Pattern 8 (P2P Prep)
→ Expansion upsell projects
FDE Learning Roadmap
Books (Priority Order)
- The Trusted Advisor (Maister, Green, Galford) — Trust Equation: (Credibility + Reliability + Intimacy) / Self-Orientation
- The First 90 Days (Watkins) — Perception shaped in first 3 months at client site
- Working Effectively with Legacy Code (Feathers) — Every health plan runs legacy systems
- The Phoenix Project (Kim) — How enterprises think about IT and change management
Certifications (ROI-Ordered)
| Cert | Time | Why |
|---|---|---|
| HL7 FHIR Fundamentals | 4 weeks | Health plans recognize this. The standard you'll integrate daily. |
| dbt Analytics Engineering | 6+ months exp | Signals data fluency. Health plan data teams use dbt. |
| AWS Solutions Architect Associate | 2-3 months | Gets past procurement gatekeepers. |
Healthcare Standards to Learn
- HL7 FHIR — Modern standard. Resources, Bundles, Search params. CoverageEligibilityRequest/Response for UM.
- X12 EDI — Legacy standard that still runs healthcare. 278 (PA), 837 (claims), 835 (remittance), 270/271 (eligibility).
- HL7 v2 — Dominant messaging inside hospitals. ADT, ORM/ORU messages.
- OMOP CDM — Open standard for healthcare observational data. The Book of OHDSI (free).
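For orientation, here is a heavily simplified FHIR R4 CoverageEligibilityRequest (the UM-relevant resource named above). A real resource carries more elements and references that must resolve on the server; the values here are placeholders:

```python
import json

# Simplified FHIR R4 CoverageEligibilityRequest — illustrative values only.
# status, purpose, patient, created, and insurer are required in R4.
resource = {
    "resourceType": "CoverageEligibilityRequest",
    "status": "active",
    "purpose": ["benefits"],
    "patient": {"reference": "Patient/example"},
    "created": "2026-02-16",
    "insurer": {"reference": "Organization/example-payer"},
}
print(json.dumps(resource, indent=2))
```

Reading a handful of real resources like this is the fastest way to internalize FHIR's reference-based graph model.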
Key Repos to Study
| Repo | Why |
|---|---|
| Medplum | Open-source FHIR platform. TypeScript/React/Node. Most relevant to your stack. |
| medspaCy | Clinical NLP: negation detection, NER, section segmentation. Rule-based complement to LLM extraction. |
| HAPI FHIR | Java FHIR server. Most widely deployed. Understand the architecture. |
| Awesome-FDE-Roadmap | Comprehensive FDE learning roadmap. |
| Palantir Open Source | See how they build reusable infra. |
AI/ML Engineering
- Anthropic: Building Effective AI Agents — Sequential, parallel, evaluator-optimizer, orchestrator-worker patterns
- RAG state of the art (2026): Agentic RAG with autonomous agents orchestrating multiple retrieval strategies. Semantic chunking (0.79-0.82 faithfulness) vs naive (0.47-0.51). Semantic caching cuts costs 68%.
- LLM Observability: Portkey (Gartner Cool Vendor), Arize Phoenix (open source), Galileo (healthcare-certified)
- Human-in-the-loop: Confidence-based routing, approval flows, context preservation, audit trails
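Semantic caching and confidence-based routing share the same primitive: compare a similarity or confidence score against a threshold. A toy semantic-cache sketch, with invented vectors and threshold:

```python
import math

# Toy semantic cache: reuse a prior answer when a new query's embedding is
# close enough to a cached one. Vectors and the 0.95 threshold are illustrative.
cache = []  # list of (embedding, answer)

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(query_vec, llm_call, threshold=0.95):
    for vec, cached in cache:
        if cos(query_vec, vec) >= threshold:
            return cached, True        # cache hit: no LLM spend
    result = llm_call()                # cache miss: pay for the call once
    cache.append((query_vec, result))
    return result, False

first, hit1 = answer([1.0, 0.0], lambda: "42")
second, hit2 = answer([0.99, 0.05], lambda: "never called")
print(hit1, hit2)  # False True
```

The same threshold-comparison shape drives human-in-the-loop routing: below the confidence bar, the case goes to a reviewer instead of the cache or the auto path.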
Daily Habits of Excellent FDEs
The First Hour at a Client Site
- Walk the floor (or Slack channels). Observe before proposing.
- Find the "power user" — the nurse/analyst who actually knows how things work
- Ask: "Walk me through what you did this morning." Not "what's broken?"
Time Allocation
- 60% coding (protected build blocks)
- 25% client meetings (discovery, demos, standups)
- 15% documentation (field notes for product team)
Ratio shifts: early = more meetings, mid = more building, late = more training/handoff.
Documentation Pattern
Keep a running "Field Notes" doc: Date | Observation | Product Implication | Priority
Weekly 15-min product sync:
- What patterns repeat across clients?
- What am I building that should be in the platform?
- What's the platform missing?
Political Navigation
- Map the org chart day 1: sponsor (pays), champion (wants you to succeed), blocker (feels threatened), end user
- "Ensure your work solves one of the CEO's top five problems" — this gives air cover
- Never go around someone. Involve your sponsor to resolve blockers.
- Deliver small wins early. "I noticed you spend 2 hours on this report. Here's a 5-minute version."
- Listen 80%, talk 20% in the first 30 days.
The Strategic Frame
From a16z: "If you only copy the embedded-engineer part, you end up with thousands of bespoke deployments impossible to maintain. Every pattern must be built on shared primitives."
DaisyAI's product IS the reusable platform. The FDE engagement customizes it per client. Each engagement should make the product better for ALL future clients.
When to say no:
- Can't serve 3+ future clients → consulting, not product
- Requires forking core codebase → push back hard
- Client's ask reveals a platform gap → gold — extract the abstraction
Sources
- Bloomberry: 1000 FDE Job Postings Analysis
- Pragmatic Engineer: Forward Deployed Engineers
- a16z: The Palantirization of Everything
- a16z: Trading Margin for Moat (Services-Led Growth)
- SVPG: Forward Deployed Engineers
- Palantir: A Day in the Life of an FDSE
- Ramp Engineering: Forward Deployed Engineering
- Salesforce: 5 Skills for FDEs
- Baseten: What I Learned as an FDE
- Anterior: $40M Forward Deployed Clinician Model
- OpenAI: Enterprise AI Playbook
- Anthropic: Building Effective AI Agents
- Medplum: Open Source FHIR Platform
- Awesome-FDE-Roadmap