Process Healthcare Price Transparency Machine-Readable Files (MRFs) using Snowflake Openflow's Apache NiFi engine.
Under the Transparency in Coverage rule, U.S. healthcare payers must publish monthly MRF files detailing negotiated rates. This demo shows how to ingest these large, deeply nested JSON files into Snowflake using Openflow.
What this demo does:
- Downloads MRF files from payer websites (e.g., Blue Cross Blue Shield)
- Parses nested JSON structures for negotiated rates and provider information
- Writes data to Snowflake tables using Snowpipe Streaming
- Provides sample analytical queries for price analysis
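The nesting the flows unpack can be sketched in Python. This is a hedged illustration only: the field names follow the CMS Transparency in Coverage schema that the demo's tables mirror, but the miniature document and the `flatten` helper are hypothetical and not part of the demo code.

```python
import json

# Hypothetical miniature of an "in-network" MRF document. Real files are
# multi-gigabyte; the field names follow the CMS Transparency in Coverage
# schema that the HEALTH_PLAN_RATES columns mirror.
mrf = json.loads("""
{
  "in_network": [
    {
      "name": "PPO Standard",
      "description": "Office/outpatient visit, established patient",
      "billing_code": "99213",
      "billing_code_type": "CPT",
      "negotiated_rates": [
        {
          "provider_references": [101, 102],
          "negotiated_prices": [
            {"negotiated_type": "negotiated", "negotiated_rate": 84.50}
          ]
        }
      ]
    }
  ]
}
""")

def flatten(doc, file_url):
    """Yield one flat row per negotiated price, shaped like HEALTH_PLAN_RATES."""
    for item in doc["in_network"]:
        for group in item.get("negotiated_rates", []):
            for price in group.get("negotiated_prices", []):
                yield {
                    "FILE_URL": file_url,
                    "NAME": item.get("name"),
                    "DESCRIPTION": item.get("description"),
                    "NEGOTIATED_TYPE": price.get("negotiated_type"),
                    "NEGOTIATED_RATE": price.get("negotiated_rate"),
                    "BILLING_CODE": item.get("billing_code"),
                    "BILLING_CODE_TYPE": item.get("billing_code_type"),
                    "PROVIDER_REFERENCES": group.get("provider_references", []),
                }

rows = list(flatten(mrf, "https://example.com/in-network-rates.json.gz"))
print(rows)
```

In the demo itself this unpacking happens inside the NiFi flow, not in Python; the sketch only shows which nesting level feeds which table column.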
- Snowflake Account: Capacity account or on-demand account with credit card (trial accounts cannot use Openflow)
- ACCOUNTADMIN Role: Required for initial setup
- Non-ACCOUNTADMIN Default Role: Users whose default role is ACCOUNTADMIN cannot log in to Openflow
```sql
-- Run sql/01_setup_openflow_admin.sql as ACCOUNTADMIN
```
- Navigate to Ingestion → Openflow
- Click Launch Openflow
- Click Create a Deployment
- Select Snowflake as deployment location
- Wait 15-20 minutes for deployment creation
```sql
-- Run sql/02_setup_database_and_tables.sql
-- Run sql/03_setup_runtime_role.sql
-- Run sql/04_setup_network_access.sql
```
- In Openflow Control Plane, create a new Runtime
- Size: Large (for optimal processing)
- Min/Max Nodes: 1 (increase for larger files)
- Snowflake Role: `OPENFLOW_RUNTIME_ROLE_PRICE_TRANSPARENCY`
- External Access Integration: `PRICE_TRANSPARENCY_INTEGRATION`
- Wait 5-10 minutes for runtime creation
- In the Openflow canvas, drag the Process Group icon onto the canvas
- Click the browse icon and select `flows/in_network_processing.json`
- Double-click the process group
- Right-click canvas → Enable All Controller Services
- Right-click canvas → Start
- Return to root canvas (right-click → Leave group)
- Drag another Process Group and import `flows/provider_reference_processing.json`
- Enable controller services and start
```sql
-- Run queries/sample_analytics.sql
```

```
price-transparency-dev/
├── README.md
├── sql/
│   ├── 01_setup_openflow_admin.sql          # Create OPENFLOW_ADMIN role
│   ├── 02_setup_database_and_tables.sql     # Create database and tables
│   ├── 03_setup_runtime_role.sql            # Create runtime role with grants
│   ├── 04_setup_network_access.sql          # Network rules and EAI
│   └── 99_cleanup.sql                       # Remove all demo resources
├── flows/
│   ├── in_network_processing.json           # NiFi flow for IN_NETWORK array
│   └── provider_reference_processing.json   # NiFi flow for providers
└── queries/
    └── sample_analytics.sql                 # Sample analytical queries
```
HEALTH_PLAN_RATES stores negotiated rates from the IN_NETWORK array:
| Column | Description |
|---|---|
| FILE_URL | Source MRF file URL |
| NAME | Plan name |
| DESCRIPTION | Service description |
| NEGOTIATED_TYPE | Type of negotiation (fee schedule, etc.) |
| NEGOTIATED_RATE | The negotiated price |
| BILLING_CODE | CPT/HCPCS/DRG code |
| BILLING_CODE_TYPE | Type of billing code |
| PROVIDER_REFERENCES | Array of provider reference IDs |
Stores provider information from the PROVIDER_REFERENCE array:
| Column | Description |
|---|---|
| PROVIDER_GROUP_ID | Reference ID (links to HEALTH_PLAN_RATES) |
| TIN_TYPE | Tax ID type (EIN, NPI) |
| TIN_VALUE | Tax identification number |
| NPI | Array of National Provider Identifiers |
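The link between the two tables runs through PROVIDER_GROUP_ID. The sketch below uses made-up rows to show the join shape; in Snowflake you would typically FLATTEN the PROVIDER_REFERENCES array and join on PROVIDER_GROUP_ID.

```python
# Made-up rows shaped like the two demo tables (not real demo data).
rates = [
    {"BILLING_CODE": "99213", "NEGOTIATED_RATE": 84.50,
     "PROVIDER_REFERENCES": [101, 102]},
]
providers = {
    101: {"TIN_TYPE": "ein", "TIN_VALUE": "12-3456789", "NPI": [1234567890]},
    102: {"TIN_TYPE": "ein", "TIN_VALUE": "98-7654321", "NPI": [1098765432]},
}

# Expand each rate row once per referenced provider group, mirroring a
# SQL join on PROVIDER_GROUP_ID after flattening PROVIDER_REFERENCES.
joined = [
    {**rate, "PROVIDER_GROUP_ID": ref, **providers[ref]}
    for rate in rates
    for ref in rate["PROVIDER_REFERENCES"]
    if ref in providers
]
print(joined)
```

One rate row referencing two provider groups yields two joined rows, which is why rate/provider analyses in the sample queries count provider groups rather than raw rows.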
To process different MRF files:
- Stop the process group
- Double-click the InvokeHTTP processor
- Change the HTTP URL to the new file URL
- Start the process group
Blue Cross Blue Shield of Illinois (smaller, ~10 min):
https://app0004702110a5prdnc868.blob.core.windows.net/output/2025-07-18_Blue-Cross-and-Blue-Shield-of-Illinois_Blue-Options-or-Blue-Choice-Options_in-network-rates.json.gz
UnitedHealthcare of Washington (larger, ~9 hours with 5 nodes):
https://mrfstorageprod.blob.core.windows.net/public-mrf/2025-11-01/2025-11-01_UnitedHealthcare-of-Washington--Inc-_Insurer_Choice-EPO_561_in-network-rates.json.gz
For larger files, increase runtime nodes:
- Go to Runtime tab
- Click three dots → Edit
- Increase Min/Max nodes (5-10 recommended for files >10GB)
- Click Apply
```sql
-- Run sql/99_cleanup.sql
```
Also suspend/delete the runtime in the Openflow UI before dropping roles.
Normal during initial writes. As long as data appears in tables, ignore this error.
Your default role is ACCOUNTADMIN. Change it:
```sql
ALTER USER YOUR_USERNAME SET DEFAULT_ROLE = OPENFLOW_ADMIN;
```
Ensure the EAI is attached to the runtime in the Openflow UI (not just created in SQL).
This demo includes a skill file for fully automated deployment using Cortex Code (Snowflake's AI coding assistant).
- Account Type: NOT a trial account (Openflow requires capacity or on-demand with credit card)
- Role: ACCOUNTADMIN or role with CREATE ROLE, CREATE COMPUTE POOL, CREATE OPENFLOW INTEGRATION privileges
- Browser: Snowsight must be accessible for Openflow UI automation
Open this project in Cortex Code and simply say:
deploy this demo
Or use any of these trigger phrases:
- "deploy price transparency"
- "install"
- "fresh deployment"
- "start deployment"
The skill handles the entire deployment process:
- SQL Setup: Creates roles, database, tables, network rules, and integrations
- Openflow Deployment: Automates browser interactions to create deployment (15-20 min wait)
- Runtime Creation: Creates and configures the NiFi runtime (5-10 min wait)
- Flow Import: Uses NiFi REST API to import flow definitions programmatically
- Controller Services: Enables all controller services via API
- Start Processing: Starts all processors to begin data ingestion
- Analytics: Runs sample queries once data is loaded
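As a rough sketch of the Flow Import step: NiFi 1.x exposes a flow-definition upload endpoint under `/nifi-api`. The helper below only constructs the request and sends nothing; the base URL, process-group ID, and form-field names are assumptions to verify against your runtime's API, not taken from the demo skill.

```python
from urllib.parse import urljoin

def flow_upload_request(nifi_base, parent_pg_id, flow_path, name, x=0.0, y=0.0):
    """Describe the multipart POST that uploads a flow-definition JSON.

    Endpoint and field names assumed from NiFi 1.x; no request is sent.
    """
    return {
        "method": "POST",
        "url": urljoin(nifi_base,
                       f"process-groups/{parent_pg_id}/process-groups/upload"),
        "files": {"file": flow_path},
        "data": {"groupName": name, "positionX": str(x), "positionY": str(y)},
    }

# Hypothetical runtime URL and root process-group ID.
req = flow_upload_request(
    "https://runtime.example.com/nifi-api/",
    "root",
    "flows/in_network_processing.json",
    "in_network_processing",
)
print(req["url"])
```

The skill drives this same API with authentication supplied by the Openflow runtime; the sketch is only meant to show why flow import can be scripted instead of clicked through.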
Ask Cortex Code to tear down when done:
| Level | Command | What it Does |
|---|---|---|
| Suspend | "suspend the runtime" | Pauses compute, preserves data |
| Full | "full teardown" | Deletes everything (runtime, deployment, SQL objects) |
Important: Full teardown must delete Openflow components (runtime, deployment) BEFORE dropping SQL objects to avoid orphaned deployments.