Combine two complementary FHIR specifications—SQL on FHIR v2 ViewDefinitions and FHIR R4 Subscriptions—to create an event-driven system where materialized analytic views stay current in real time as clinical data changes. This transforms traditional batch ETL pipelines into reactive data flows.
Target Presentation: DevDays 2026
Branch: feature/subscription-sqlonfhir (based on feature/subscription-engine, merged with latest main)
| # | Phase | Task | Status |
|---|---|---|---|
| 1 | Foundation | Rebase feature branch onto main | ✅ Done |
| 2 | Foundation | Add Ignixa NuGet packages | ✅ Done |
| 3 | Foundation | IElement adapter layer | ✅ Done |
| 4 | Foundation | Ignixa integration smoke test | ✅ Done |
| 5 | Materialization | SQL Table Schema Manager (sqlfhir schema) | ✅ Done |
| 6 | Materialization | Incremental row updater | ✅ Done |
| 7 | Materialization | Full population background job | ✅ Done |
| 8 | Materialization | Materialization integration tests | ✅ Done |
| 9 | Subscription | ViewDefinition Refresh Channel | ✅ Done |
| 10 | Subscription | Auto-subscription registration | ✅ Done |
| 11 | Subscription | End-to-end flow test | ✅ Done |
| 12 | Multi-Target | Parquet materializer for Fabric | ✅ Done |
| 13 | API | $viewdefinition-run operation | ✅ Done |
| 14 | API | Materialization status tracking | ✅ Done |
| 15 | Docs | Documentation and ADR | ✅ Done |
- ViewDefinition: A portable JSON format for defining tabular projections of FHIR data. Each targets a single resource type and uses FHIRPath expressions for columns, filters (`where`), and unnesting (`forEach`, `forEachOrNull`, `repeat`).
- SQLQuery: A FHIR Library profile for shareable SQL queries that join/aggregate materialized ViewDefinition tables.
- HTTP API: `$viewdefinition-run` (sync), `$viewdefinition-export` (async bulk), `$sqlquery-run`, `$sqlquery-export`.
- Key constraint: A single ViewDefinition targets exactly one resource type. Cross-resource joins happen downstream in the analytics layer.
- View Runners: "In-memory" (ETL-style, resource→rows→output) vs "In-database" (translate ViewDefinition to SQL over FHIR-native schema). We'll likely need an in-memory runner for incremental updates.
- SubscriptionManager: Caches active subscriptions in memory, syncs from FHIR store.
- SubscriptionsOrchestratorJob: Triggered per transaction, evaluates subscription filter criteria using in-memory search indexing (reuses the existing `SearchIndexer` + `SearchQueryInterpreter`).
- SubscriptionProcessingJob: Delivers notifications via pluggable channels.
- Channels: RestHook, Storage (Azure Blob), DataLake (NDJSON to ADLS).
- Filter matching: Parses criteria like `Patient?name=John`, builds an expression tree, and evaluates it against an in-memory index of transaction resources.
- Validation pipeline: `CreateOrUpdateSubscriptionBehavior` validates and activates subscriptions via handshake.
- Heartbeat: A background service sends periodic heartbeats.
- Status: Functional but limited test coverage. R4-only (uses backport profile).
Ignixa FHIR (MIT license) is a modular .NET FHIR ecosystem that includes three NuGet packages that solve our biggest problems:
- Compile-time optimizations: Constant folding, short-circuiting, algebraic simplification
- Expression caching: Compiled expressions cached for repeated use
- Compiled delegate mode: 80% faster for common patterns (simple paths, where clauses, first())
- Custom function registration: Extend with `getResourceKey()` and `getReferenceKey()` via subclassing
- Works with the `IElement` abstraction: Not tied to Firely models
- NuGet: `dotnet add package Ignixa.FhirPath`
Already implements the SQL on FHIR v2 spec! Key classes:
- `ViewDefinition` model: `Resource`, `Select`, `Where`, `Constant` — all parsed
- `SelectGroup`: `Column`, `ForEach`, `ForEachOrNull` — unnesting logic built in
- `ViewColumnDefinition`: `Name`, `Path`, `Type` — column definitions
- `WhereClause`: FHIRPath boolean filters
- `ViewConstant`: Parameterized constants
- `SqlOnFhirEvaluator`: Core evaluator — takes a ViewDefinition + resources → produces rows
- NuGet: `dotnet add package Ignixa.SqlOnFhir`
Usage is exactly what we need:
```csharp
var evaluator = new SqlOnFhirEvaluator(schema);
var rows = evaluator.Evaluate(viewDefinition, resources);
// Each row is a dictionary of column_name → value
```

- `CsvFileWriter`: Write ViewDefinition results to CSV
- `ParquetFileWriter`: Write ViewDefinition results to Parquet files
- Perfect for Fabric/OneLake/ADLS materialization targets
- NuGet: `dotnet add package Ignixa.SqlOnFhir.Writers`
Instead of building a ViewDefinition runner from scratch (the hardest part), we:
- Reference Ignixa NuGet packages for the runner + FHIRPath engine
- Build only the integration layer: SQL Server materializer + subscription channel
- Get Parquet output for free via `Ignixa.SqlOnFhir.Writers`
This cuts Phase 1 from "build a FHIRPath engine and runner" to "integrate existing NuGet packages."
- Ignixa uses the `IElement` abstraction, not Firely's `Base` model. We'll need an adapter between the FHIR server's resource model and Ignixa's `IElement` interface.
- Ignixa targets .NET 9.0 (same as our FHIR server's global.json SDK version)
- MIT licensed — compatible with our project
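The adapter layer above can be sketched roughly as follows. Note the `IElement` interface shown here is an illustrative stand-in (the real member names must be checked against the Ignixa packages), and the dictionary-shaped resource stands in for the server's internal model; a real adapter would wrap Firely's `ITypedElement` instead.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for Ignixa's IElement abstraction (illustrative only; the real
// interface lives in the Ignixa packages and may differ).
public interface IElement
{
    string Name { get; }
    object? Value { get; }
    IEnumerable<IElement> Children(string? name = null);
}

// Adapter over a dictionary-shaped FHIR resource (standing in for the
// server's resource model). The point: expose the same tree to Ignixa's
// evaluator without copying or re-serializing the resource.
public sealed class DictionaryElementAdapter : IElement
{
    private readonly object? _node;

    public DictionaryElementAdapter(string name, object? node)
    {
        Name = name;
        _node = node;
    }

    public string Name { get; }

    // Leaf values are primitives; complex nodes expose children instead.
    public object? Value => _node is IDictionary<string, object?> ? null : _node;

    public IEnumerable<IElement> Children(string? name = null)
    {
        if (_node is not IDictionary<string, object?> dict)
            yield break;

        foreach (var (key, value) in dict)
        {
            if (name != null && key != name) continue;
            // Arrays fan out into repeated same-named children.
            if (value is IEnumerable<object?> items && value is not string)
                foreach (var item in items)
                    yield return new DictionaryElementAdapter(key, item);
            else
                yield return new DictionaryElementAdapter(key, value);
        }
    }
}
```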
The UsCoreBloodPressures ViewDefinition is the ideal demo example:
```json
{
  "resource": "Observation",
  "name": "us_core_blood_pressures",
  "constant": [
    {"name": "systolic_bp", "valueCode": "8480-6"},
    {"name": "diastolic_bp", "valueCode": "8462-4"},
    {"name": "bp_code", "valueCode": "85354-9"}
  ],
  "select": [
    {"column": [
      {"path": "getResourceKey()", "name": "id"},
      {"path": "subject.getReferenceKey(Patient)", "name": "patient_id"},
      {"path": "effective.ofType(dateTime)", "name": "effective_date_time"}
    ]},
    {"forEach": "component.where(code.coding.exists(system='http://loinc.org' and code=%systolic_bp)).first()",
     "column": [
       {"path": "value.ofType(Quantity).value", "name": "sbp_quantity_value"}
     ]},
    {"forEach": "component.where(code.coding.exists(system='http://loinc.org' and code=%diastolic_bp)).first()",
     "column": [
       {"path": "value.ofType(Quantity).value", "name": "dbp_quantity_value"}
     ]}
  ],
  "where": [{"path": "code.coding.exists(system='http://loinc.org' and code=%bp_code)"}]
}
```

Why this highlights incremental updates:
- A hospital records thousands of BP Observations per day
- Batch ETL: Re-process ALL Observations nightly → hours of compute, 24h stale data
- Subscription-driven: New BP recorded → subscription fires → runner evaluates that ONE Observation → one row inserted into the `us_core_blood_pressures` table → sub-second freshness
- The `where` filter means non-BP observations are ignored by the subscription (no wasted work)
- When a BP is corrected (updated), only that row is replaced
The ConditionFlat ViewDefinition shows forEachOrNull with coding arrays:
- When a condition's `clinicalStatus` changes from `active` → `resolved`, the subscription fires
- The runner re-evaluates that Condition → updates the `clinical_status` column in the materialized table
- Downstream queries (e.g., "all active diabetics") immediately reflect the change
| Aspect | Batch ETL | Subscription-Driven |
|---|---|---|
| Data freshness | 24h (nightly) | Sub-second |
| Compute cost | Full re-scan of all resources | Only changed resources |
| Complexity | Custom pipeline per view | Standard ViewDefinition + auto-subscription |
| Failure blast radius | Entire pipeline re-run | Retry single resource |
| Adding a new view | Build new ETL pipeline | POST a ViewDefinition JSON |
- Pros: Already the FHIR server's data store; enables joins with FHIR data; no external dependencies; low latency
- Use case: Real-time operational analytics, CDS, quality dashboards
- Schema: `sqlfhir.*` schema in the same database
Fabric is the natural next step and a compelling DevDays demo angle:
- The existing subscription engine already has a DataLake channel that writes NDJSON to Azure Data Lake Storage (ADLS)
- Fabric's Lakehouse sits directly on OneLake (which is ADLS Gen2 under the hood)
- Approach: A "Fabric Channel" writes Parquet files (not NDJSON) organized by ViewDefinition name
- Fabric auto-discovers Parquet in OneLake → tables appear in the SQL Analytics Endpoint
- Power BI, Spark notebooks, and SQL all work immediately
- Incremental benefit: Append new Parquet files per subscription event; Fabric handles compaction
- Demo: Show a Power BI dashboard over a Fabric Lakehouse that updates as FHIR resources change
- The SQL on FHIR spec's `$viewdefinition-export` operation explicitly supports Parquet as an output format
- Parquet is columnar, compressed, and the lingua franca of analytics tools
- Use case: Research data exports, bulk analytics, ML training datasets
- Works with: Spark, Databricks, BigQuery, Snowflake, DuckDB, Pandas
```
Subscription Event
        │
        ▼
┌──────────────────┐
│ ViewDefinition   │
│ Refresh Channel  │
│                  │
│ ┌─────────────┐  │
│ │   Runner    │  │  (evaluates ViewDef → rows)
│ └──────┬──────┘  │
│        │         │
│ ┌──────▼──────┐  │
│ │ Materializer│  │  (pluggable output target)
│ │  Interface  │  │
│ └──────┬──────┘  │
└────────┼─────────┘
         │
   ┌─────┼─────────┬──────────────┐
   ▼     ▼         ▼              ▼
┌──────┐ ┌───────┐ ┌────────────┐ ┌──────────┐
│ SQL  │ │Parquet│ │  Fabric/   │ │ Future:  │
│Server│ │ File  │ │  OneLake   │ │ Snowflake│
│      │ │       │ │            │ │ BigQuery │
└──────┘ └───────┘ └────────────┘ └──────────┘
```
This makes the ViewDefinition Refresh Channel a two-part design:
- Runner (spec-standard): ViewDefinition → rows (shared across all targets)
- Materializer (pluggable): rows → target-specific storage (SQL INSERT, Parquet write, API call)
When a ViewDefinition is submitted, a FHIR query populates its data into a SQL table. The same criteria are then registered as a subscription, and subscription triggers keep the materialized view current.
- Elegant spec synergy — Uses two standard FHIR specs together, each doing what it's designed for.
- Event-driven > batch — Eliminates polling/scheduling; views update as data changes.
- Natural mapping — ViewDefinition's `resource` field maps directly to subscription resource type filtering.
- Existing infrastructure — The subscription engine already does in-memory search filtering and has pluggable notification channels—a "ViewDefinition refresh" channel is a natural extension.
- SQL Server is the right target — The FHIR server already uses SQL Server; materialized views alongside FHIR data enables powerful joins.
- ViewDefinition `where` clauses use FHIRPath (e.g., `code.coding.exists(system='http://loinc.org' and code='8480-6')`)
- Subscription criteria use FHIR search parameters (e.g., `Observation?code=http://loinc.org|8480-6`)
- Risk: Not all FHIRPath filters can be expressed as search parameters. The auto-generated subscription may be broader than the ViewDefinition filter, causing unnecessary refreshes (but not correctness issues—just efficiency).
- Mitigation: Use a "best-effort" subscription filter (match on resource type + key search params), then re-evaluate the FHIRPath `where` during materialization to filter false positives.
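The two-stage filter can be sketched as follows. This is a minimal sketch, assuming a dictionary-shaped resource; the FHIRPath `where` check is modeled as a predicate delegate standing in for a compiled Ignixa.FhirPath expression.

```csharp
using System;
using System.Collections.Generic;

// Stage 1 (coarse): the subscription criteria decide WHEN the channel runs.
// Stage 2 (exact): the ViewDefinition's FHIRPath `where` decides WHAT is
// materialized, dropping false positives from a too-broad subscription.
public static class ViewRefresh
{
    // Stage 1: broad subscription match — resource type only.
    public static bool SubscriptionMatches(string viewResourceType,
                                           IDictionary<string, object?> resource) =>
        resource.TryGetValue("resourceType", out var t) && Equals(t, viewResourceType);

    // Stage 2: re-apply the exact filter before touching the table.
    public static bool ShouldMaterialize(string viewResourceType,
                                         Func<IDictionary<string, object?>, bool> whereFilter,
                                         IDictionary<string, object?> resource) =>
        SubscriptionMatches(viewResourceType, resource) && whereFilter(resource);
}
```

Because stage 2 always runs, a subscription that over-triggers costs only wasted evaluator invocations, never wrong rows.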
- The proposal implies a full re-query on each subscription event. This doesn't scale.
- Better: Incremental upsert—when a resource changes, re-evaluate the ViewDefinition for just that resource and upsert/delete its rows in the materialized table.
- The subscription notification already includes the changed resource(s), so we have exactly what we need.
- ViewDefinitions with `forEach`/`forEachOrNull`/`repeat` can produce multiple rows from a single resource (e.g., a Patient with 3 addresses → 3 rows in `patient_addresses`).
- Challenge: Incremental update must delete all existing rows for a resource before inserting new ones (not a simple upsert by resource ID alone).
- Solution: Use a composite key of `(resource_key, row_index)`, or simply `DELETE WHERE resource_key = X` then re-insert all rows for that resource.
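The delete-then-insert pattern can be sketched as SQL-text generation (table and column names here are hypothetical; the real statements would run inside a single SQL transaction with proper parameters, never string-concatenated values):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Builds the two statements of the incremental update: wipe all rows for
// the changed resource, then re-insert whatever the runner produced
// (zero rows ⇒ an effective delete).
public static class IncrementalSql
{
    public static string DeleteSql(string table) =>
        $"DELETE FROM [sqlfhir].[{table}] WHERE [_resource_key] = @resourceKey;";

    public static string InsertSql(string table, IReadOnlyList<string> columns)
    {
        // _resource_key ties every generated row back to its source resource.
        var cols = new[] { "_resource_key" }.Concat(columns).ToList();
        var names = string.Join(", ", cols.Select(c => $"[{c}]"));
        var parms = string.Join(", ", cols.Select(c => $"@{c}"));
        return $"INSERT INTO [sqlfhir].[{table}] ({names}) VALUES ({parms});";
    }
}
```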
- If a resource is deleted, or is updated so it no longer matches the `where` filter, its rows must be removed.
- The subscription engine fires on creates, updates, and deletes—so we have the signal. On delete: remove all rows for that resource. On update: re-evaluate, and if zero rows result, it is effectively a delete.
- When a ViewDefinition is first submitted (or the server restarts), the materialized table needs to be fully populated from existing data before incremental mode can begin.
- Solution: `$viewdefinition-run` or a background job does the initial full scan. The subscription kicks in for subsequent changes. Need a state machine: `Creating → Populating → Active → Error`.
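The lifecycle above can be sketched as an enum plus a transition table. The allowed-transition set is an assumption (e.g. whether `Error` may retry back to `Populating`, and whether a schema change sends `Active` back to `Populating`, are design choices to confirm):

```csharp
using System;
using System.Collections.Generic;

// Materialization lifecycle: Creating → Populating → Active → Error.
public enum MaterializationState { Creating, Populating, Active, Error }

public static class MaterializationLifecycle
{
    private static readonly Dictionary<MaterializationState, MaterializationState[]> Allowed = new()
    {
        [MaterializationState.Creating]   = new[] { MaterializationState.Populating, MaterializationState.Error },
        [MaterializationState.Populating] = new[] { MaterializationState.Active, MaterializationState.Error },
        // Re-population after a schema change (assumption).
        [MaterializationState.Active]     = new[] { MaterializationState.Populating, MaterializationState.Error },
        // Retry path (assumption).
        [MaterializationState.Error]      = new[] { MaterializationState.Populating },
    };

    public static bool CanTransition(MaterializationState from, MaterializationState to) =>
        Array.IndexOf(Allowed[from], to) >= 0;
}
```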
- ViewDefinition columns define the table schema. What happens when a ViewDefinition is updated with new columns?
- Solution: Schema evolution—add new columns (nullable), or drop-and-recreate. Flag to user that schema changes may require re-population.
- High-throughput FHIR servers may see thousands of writes/second. Each triggering a ViewDefinition re-evaluation could be expensive.
- Mitigation: Batch incremental updates. The orchestrator job already batches by `MaxCount`. Group multiple resource changes and apply them in a single SQL transaction.
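Batching also allows coalescing: within one batch, only the latest change per resource matters, because earlier versions would be deleted-and-reinserted anyway. A minimal sketch (the `ResourceChange` shape is hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative change record; the real notification carries the resource itself.
public record ResourceChange(string ResourceKey, int Version, bool IsDeleted);

public static class ChangeBatcher
{
    // Coalesce a batch to one change per resource key (highest version wins),
    // so each resource costs at most one delete-then-insert per batch.
    public static IReadOnlyList<ResourceChange> Coalesce(IEnumerable<ResourceChange> changes) =>
        changes
            .GroupBy(c => c.ResourceKey)
            .Select(g => g.OrderByDescending(c => c.Version).First())
            .ToList();
}
```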
```
┌─────────────────────────────────────────────────────────────────┐
│                        FHIR Server (R4)                         │
│                                                                 │
│  ┌──────────────┐    ┌──────────────────┐    ┌───────────────┐  │
│  │ FHIR REST API │───▶│ MediatR Pipeline │───▶│  Data Store   │  │
│  └──────────────┘    └────────┬─────────┘    └───────────────┘  │
│                               │                                 │
│                    ┌──────────▼──────────┐                      │
│                    │ Subscription Engine │                      │
│                    │ (Orchestrator Job)  │                      │
│                    └──────────┬──────────┘                      │
│                               │                                 │
│              ┌────────────────┼────────────────┐                │
│              ▼                ▼                ▼                │
│      ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐     │
│      │   RestHook   │ │   DataLake   │ │  ViewDefinition  │     │
│      │   Channel    │ │   Channel    │ │ Refresh Channel  │     │
│      └──────────────┘ └──────────────┘ │      (NEW)       │     │
│                                        └────────┬─────────┘     │
│                                                 │               │
│                                       ┌─────────▼──────────┐    │
│                                       │  ViewDefinition    │    │
│                                       │ Runner (In-Memory) │    │
│                                       │  - FHIRPath eval   │    │
│                                       │  - Column mapping  │    │
│                                       │  - Row generation  │    │
│                                       └─────────┬──────────┘    │
│                                                 │               │
│                                       ┌─────────▼──────────┐    │
│                                       │  Materialization   │    │
│                                       │       Layer        │    │
│                                       │  - SQL DDL mgmt    │    │
│                                       │  - Incremental     │    │
│                                       │    upsert/delete   │    │
│                                       │  - Full refresh    │    │
│                                       └─────────┬──────────┘    │
│                                                 │               │
│                                       ┌─────────▼──────────┐    │
│                                       │    SQL Server      │    │
│                                       │  (Materialized     │    │
│                                       │   View Tables)     │    │
│                                       └────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘
```
- Parse/validate ViewDefinition JSON submitted to the server
- Store ViewDefinitions as FHIR resources (custom resource or use Binary/Basic)
- API: `POST /ViewDefinition`, `GET /ViewDefinition/{id}`, `$viewdefinition-run`
- Evaluate FHIRPath expressions from the ViewDefinition `select` against a FHIR resource
- Handle `forEach`, `forEachOrNull`, `repeat`, `unionAll`
- Map FHIRPath results to typed columns
- Support `getResourceKey()` and `getReferenceKey()` functions
- Output: `IEnumerable<Dictionary<string, object>>` (rows of column name → value)
- Schema Manager: Translate ViewDefinition columns → SQL DDL (`CREATE TABLE`)
  - Column type mapping: FHIR types → SQL types (string→nvarchar, dateTime→datetime2, etc.)
  - Include a `_resource_key` column for incremental update tracking
- Incremental Updater:
  - `DELETE FROM [view_table] WHERE _resource_key = @resourceKey`
  - Re-run the ViewDefinition for the single resource
  - `INSERT` new rows
- Full Populator: Background job that runs the ViewDefinition against all matching resources
- Table naming: `sqlfhir_[viewdefinition_name]` (namespaced to avoid conflicts)
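The Schema Manager's type mapping and table naming can be sketched as follows. The string→nvarchar and dateTime→datetime2 pairs come from the plan above; the remaining rows and the chosen lengths are assumptions to validate against the SQL on FHIR type notes:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class SqlSchemaMapper
{
    private static readonly Dictionary<string, string> TypeMap = new()
    {
        ["string"]   = "nvarchar(4000)",  // length is an assumption
        ["code"]     = "nvarchar(256)",   // assumption
        ["dateTime"] = "datetime2",
        ["date"]     = "date",            // assumption
        ["boolean"]  = "bit",             // assumption
        ["integer"]  = "int",             // assumption
        ["decimal"]  = "decimal(18,6)",   // assumption
    };

    // Unknown FHIR types fall back to a wide string column rather than failing.
    public static string ToSqlType(string fhirType) =>
        TypeMap.TryGetValue(fhirType, out var sql) ? sql : "nvarchar(max)";

    // Table naming: sqlfhir_[viewdefinition_name], sanitized to a safe identifier.
    public static string ToTableName(string viewName) =>
        "sqlfhir_" + Regex.Replace(viewName.ToLowerInvariant(), "[^a-z0-9_]", "_");
}
```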
- Implements `ISubscriptionChannel`
- On notification: receives changed resource(s) + subscription info
- Looks up associated ViewDefinition(s)
- Runs the ViewDefinition against changed resources
- Applies incremental updates to the materialized table
- Channel type: `"view-refresh"` (custom)
- When a ViewDefinition is submitted and materialization is requested:
  - Create/update the SQL table schema
  - Kick off the full population job
  - `ViewDefinitionSubscriptionManager` generates N Subscription resources:
    - At minimum: one broad subscription with `criteria: {ViewDefinition.resource}?`
    - Optionally: narrower subscriptions with search param equivalents of `where` clauses
    - All subscriptions share `channel.type: view-refresh` and `channel.endpoint`: an internal reference to the ViewDefinition
  - The manager tracks the 1:N relationship (ViewDefinition → Subscriptions)
- Subscription engine handles the rest
- On ViewDefinition removal, the manager deletes all associated Subscriptions
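The broad-subscription default can be sketched as a small factory. The `GeneratedSubscription` record is a minimal illustrative projection, not the full R4 backport Subscription model:

```csharp
using System.Collections.Generic;

// Minimal projection of the Subscription fields the manager cares about.
public record GeneratedSubscription(string Criteria, string ChannelType, string Endpoint);

public static class ViewDefinitionSubscriptionFactory
{
    // Phase 1: one broad subscription per ViewDefinition — criteria is just
    // "{resource}?", which guarantees no missed updates.
    public static IReadOnlyList<GeneratedSubscription> Generate(
        string viewResourceType, string viewDefinitionId) =>
        new[]
        {
            new GeneratedSubscription(
                Criteria: $"{viewResourceType}?",                // broad: all resources of the type
                ChannelType: "view-refresh",                     // custom channel type from the plan
                Endpoint: $"ViewDefinition/{viewDefinitionId}"), // internal back-reference
        };
}
```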
- Rebase feature branch — Get `feature/subscription-engine` up to date with `main`
- Add Ignixa NuGet packages — Reference `Ignixa.SqlOnFhir`, `Ignixa.FhirPath`, `Ignixa.SqlOnFhir.Writers`
- IElement adapter — Bridge between the FHIR server's resource model and Ignixa's `IElement` interface
- Smoke test — Evaluate a PatientDemographics ViewDefinition against a Patient resource using Ignixa
- SQL Table Schema Manager — Translate ViewDefinition columns → CREATE TABLE DDL in the `sqlfhir` schema
- Incremental Updater — Delete-then-insert for a single resource's rows
- Full Population Job — Background job: scan all resources of the type, run the ViewDefinition via Ignixa, bulk insert
- Materialization integration tests
- ViewDefinition Refresh Channel — New `ISubscriptionChannel` implementation using the Ignixa evaluator
- Auto-subscription registration — On ViewDefinition Library submit, auto-create Subscription
- End-to-end flow — Submit ViewDefinition → table created → data populated → resource CRUD → table updated
- E2E tests
- Parquet materializer — Use `Ignixa.SqlOnFhir.Writers.ParquetFileWriter` for Fabric/ADLS output
- $viewdefinition-run operation — Sync evaluation endpoint per spec
- Status tracking — ViewDefinition materialization state (Creating/Populating/Active/Error)
- Documentation & ADR
- Show the ViewDefinition JSON for `patient_demographics` (from the spec example)
- POST it to the FHIR server with `?materialize=true`
- Show the auto-created SQL table with existing patients
- Show the auto-created Subscription
- Create a new Patient via FHIR API
- Query the SQL table — new patient appears within seconds
- Takeaway: Zero-config analytics table that stays current
Scenario: ICU Blood Pressure Tracking
An ICU needs real-time BP trends across all patients. Today this requires custom integrations.
- Show the `UsCoreBloodPressures` ViewDefinition (from the spec — uses constants, forEach, where filter)
- Materialize it → SQL table `sqlfhir.us_core_blood_pressures` auto-created
- Show existing data: `SELECT patient_id, effective_date_time, sbp_quantity_value, dbp_quantity_value FROM sqlfhir.us_core_blood_pressures`
- Post a new BP Observation (systolic=145, diastolic=95) via FHIR API
- Query the table again — the new row appears in under a second
- Post a non-BP Observation (e.g., heart rate) — table is unchanged (subscription filter ignores it)
- Show the before/after comparison:
- Batch: Re-scan 500K Observations nightly, rebuild entire table → hours, stale
- Subscription: Process 1 Observation → 1 row insert → milliseconds, fresh
- Takeaway: Only changed resources are processed; irrelevant resources are filtered out
Scenario: Population Health Dashboard
- Create two ViewDefinitions: `patient_demographics` + `condition_flat`
- Materialize to Fabric OneLake (via the Parquet materializer channel)
- Show Parquet files appearing in Fabric Lakehouse
- Open SQL Analytics Endpoint — tables auto-discovered
- Open Power BI dashboard showing patient demographics + condition distribution
- Create a new Patient with a diabetes Condition via FHIR API
- Dashboard updates automatically (Fabric picks up new Parquet file)
- Takeaway: FHIR server → Fabric → Power BI with zero custom ETL pipeline
- Walk through the subscription engine flow with diagrams
- Show the pluggable Runner + Materializer architecture
- Show the incremental update path (delete old rows → re-evaluate → insert new rows)
- Show how adding a new output target (Fabric, Snowflake) is just a new Materializer implementation
- Takeaway: Clean, extensible architecture leveraging existing FHIR specs
- Hospitals submit quality measures (eCQMs) to CMS quarterly
- Current workflow: Nightly ETL extracts FHIR data → transforms → loads into analytics DB
- Pain points:
- Staleness: Data is always 24+ hours old
- Complexity: Custom ETL pipelines for each measure
- Brittleness: Schema changes break pipelines
- Cost: Full re-extraction even for small changes
- Define quality measure data needs as ViewDefinitions (standardized, portable)
- Materialized views update in real-time as clinical data changes
- Quality dashboards always show current data
- Adding a new measure = adding a ViewDefinition (no ETL pipeline to build)
- Clinical Decision Support: Real-time views of patient medications, allergies, conditions for CDS rules
- Population Health Management: Materialized views of chronic disease cohorts, updated as diagnoses change
- Research Cohort Discovery: Views filtering patients by inclusion/exclusion criteria, always current
- Operational Analytics: Views of appointments, encounters, wait times for operational dashboards
- Public Health Reporting: Syndromic surveillance views that update as new encounters arrive
- ViewDefinition as a FHIR resource type → Library resource with ViewDefinition extension
  - Store as a `Library` resource with a profile/extension containing the ViewDefinition JSON
  - Enables standard FHIR CRUD, search, versioning
- External vs internal SQL tables → Same database, `sqlfhir` schema
  - Materialized views live in the `sqlfhir.*` schema in the FHIR SQL Server database
  - Enables joins with FHIR data while keeping concerns separated
- FHIRPath engine → Ignixa.FhirPath (see details below)
  - Use the Ignixa FHIR compiled FHIRPath engine
  - This also gives us the complete Ignixa.SqlOnFhir ViewDefinition runner for free
- Concurrency during full population → Queue incoming subscription events
  - Events arriving during initial population are queued
  - Applied after full population completes, using a watermark timestamp
- Multi-tenancy → Not in scope
  - Single-tenant only for this implementation
- Spec contribution → Yes, write up after the implementation works
  - Document how subscription-based refresh could be incorporated into the SQL on FHIR spec
- ViewDefinition-to-Subscription cardinality → 1:N (one ViewDefinition, many Subscriptions)
  - A single FHIR Subscription supports only one criteria string (e.g., `Observation?code=http://loinc.org|85354-9`). It cannot express multiple independent filter queries.
  - A ViewDefinition's `where` clauses use FHIRPath, which may not map cleanly to a single FHIR search query — or may require multiple search queries for full coverage.
  - Design: One ViewDefinition can produce N Subscriptions, all pointing to the same ViewDefinition Refresh Channel: `ViewDefinition (1) ──► (N) Subscriptions ──► (1) ViewDefinition Refresh Channel`
  - A `ViewDefinitionSubscriptionManager` owns the lifecycle: when a ViewDefinition is registered for materialization, it generates the appropriate subscription(s); when removed, it cleans them up.
  - Safe default: At minimum, one broad subscription per resource type (`Observation?`) guarantees no missed updates. Narrower criteria are a pure optimization — the ViewDefinition evaluator always re-applies FHIRPath `where` clauses, so over-triggering is safe (just less efficient).
  - Criteria generation strategy (phased):
    - Phase 1: Resource-type-only (`Observation?`) — simple, correct, no missed updates.
    - Phase 2: Pattern-match common FHIRPath idioms to search params (e.g., `code.coding.exists(system='X' and code='Y')` → `?code=X|Y`). Multiple patterns may produce multiple subscriptions for the same ViewDefinition.
    - Future: Reverse-match against FHIR search parameter FHIRPath definitions for broader coverage.
  - Correctness guarantee: The evaluator's FHIRPath `where` filtering is the single source of truth. Subscription criteria only control when the evaluator runs — a broader subscription means more evaluator invocations (cost), but never incorrect results.
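The Phase-2 idiom matcher can be sketched with a single regex: recognize `code.coding.exists(system='X' and code='Y')` and emit `?code=X|Y`; anything unrecognized returns null, so the caller falls back to the Phase-1 broad `{resourceType}?` subscription. Supporting more idioms (or `%constant` references) would mean more patterns; this shows only the one example from the plan.

```csharp
using System.Text.RegularExpressions;

public static class CriteriaGenerator
{
    // Matches exactly: code.coding.exists(system='X' and code='Y')
    private static readonly Regex CodeExists = new(
        @"^code\.coding\.exists\(system='(?<system>[^']+)'\s+and\s+code='(?<code>[^']+)'\)$");

    // Returns a narrower search criteria string, or null when the FHIRPath
    // idiom is unrecognized (caller then uses the broad "{resourceType}?").
    public static string? TryTranslate(string resourceType, string fhirPathWhere)
    {
        var m = CodeExists.Match(fhirPathWhere.Trim());
        if (!m.Success) return null;
        return $"{resourceType}?code={m.Groups["system"].Value}|{m.Groups["code"].Value}";
    }
}
```

Because the evaluator re-applies the FHIRPath `where` anyway, a wrong-but-broader translation here can only cost efficiency, never correctness.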
- Start from `feature/subscription-engine`
- Rebase onto latest `main`
- Create a new branch: `feature/sql-on-fhir-subscriptions`
- Work in phases, PR each phase back to the feature branch
- Eventually PR the complete feature to `main`
- SQL on FHIR v2 Spec
- ViewDefinition Structure
- SQL on FHIR HTTP API
- FHIR Subscriptions R5
- FHIR Subscriptions Backport IG
- feature/subscription-engine branch
- Ignixa FHIR Server — Source of FHIRPath engine, SQL on FHIR runner, and Parquet writer
- Ignixa.FhirPath README
- Ignixa.SqlOnFhir README
- Ignixa.SqlOnFhir.Writers (CsvFileWriter, ParquetFileWriter)
- SQL on FHIR Reference Tests — 20+ JSON conformance test fixtures
- SQL on FHIR JS Reference Runner — `sof-js` reference implementation