Commit 77e4517

[DOCS] Health AI Suite - Release Notes article (open-edge-platform#1886)
Co-authored-by: Wiktor Iwaszko <wiktorx.iwaszko@intel.com>
1 parent d025534 commit 77e4517

7 files changed: +111 additions, -101 deletions

Lines changed: 6 additions & 9 deletions
@@ -1,4 +1,4 @@
# Initial Application: Multi-Modal Patient Monitoring

The Multi-Modal Patient Monitoring application helps medical AI developers and systems engineers at medical OEMs/ODMs (GE Healthcare, Philips, Mindray) evaluate Intel® Core™ Ultra processors for AI‑enabled patient monitoring. It demonstrates that you can run **multiple AI workloads concurrently on a single Intel‑powered edge device** without a discrete GPU.

@@ -15,29 +15,26 @@ The solution is intended to:
- Showcase multi‑modal AI capabilities of Intel Core Ultra
- Run on Ubuntu 24.04 with containerized workloads
- Be startable with a **single command** from a clean system (end‑to‑end setup and launch targeted in ≤ 30 minutes)

Secure provisioning (for example, Polaris Peak integration) is not part of the initial implementation, but the architecture is intended to be extensible for future security integrations.

## Get Started

To see the system requirements and other installations, see the following guides:

- [Get Started](./docs/user-guide/get-started.md): Follow step-by-step instructions to set up the application.
- [System Requirements](./docs/user-guide/get-started/system-requirements.md): Check the hardware and software requirements for deploying the application.
- [Run the application](./docs/user-guide/run-multi-modal-app.md): Run the Multi-Modal Patient Monitoring application.

## How It Works

At a high level, the system is composed of several microservices that work together to ingest patient signals and video, run AI models on Intel hardware (CPU, GPU, and NPU), aggregate results, and expose them to a UI for clinicians.

![System design](./docs/user-guide/_assets/system-design.png)

## Learn More

For detailed information about system requirements, architecture, and how the application works, see the:

- [Full Documentation](./docs/user-guide/index.md)

health-and-life-sciences-ai-suite/multi_modal_patient_monitoring/docs/user-guide/get-started.md

Lines changed: 5 additions & 6 deletions
@@ -14,7 +14,6 @@ ensure your environment meets the recommended hardware and software prerequisite
If you have not already cloned the repository that contains this workload, do so now:

```bash
git clone --no-checkout https://github.com/open-edge-platform/edge-ai-suites.git
```

@@ -44,7 +43,7 @@ To configure these:
1. Open `configs/device.env` in a text editor.
2. Locate the entries for `ECG_DEVICE`, `RPPG_DEVICE`, `MDPNP_DEVICE`, and `POSE_3D_DEVICE`.
3. Set each to the appropriate device string supported on your system (typically `CPU` or
   `GPU`, and `NPU` where available and supported).

When you run `make run` or `make run REGISTRY=false`, the compose file reads
`configs/device.env` and passes these values into the corresponding services so that each
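As a concrete illustration of the steps above, a filled-in `configs/device.env` might look like the following (the variable names come from this guide; the device assignments are hypothetical and must match what your hardware actually supports):

```shell
# Hypothetical contents of configs/device.env — adjust per your hardware.
# Typical values are CPU or GPU, plus NPU where available and supported.
ECG_DEVICE=CPU
RPPG_DEVICE=GPU
MDPNP_DEVICE=CPU
POSE_3D_DEVICE=GPU
```

`make run` then passes these values through the compose file into each service's environment.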
@@ -102,9 +101,9 @@ The rPPG service provides a simple HTTP control API (hosted by an internal FastA
start and stop streaming:

- **Start streaming:**
  - Send a request to the `/start` endpoint on the rPPG control port (default 8084).
- **Stop streaming:**
  - Send a request to the `/stop` endpoint on the same port.

Exact URLs and endpoints may differ slightly depending on how the control API is exposed in
your environment; refer to the rPPG service documentation for details.
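For example, the start/stop calls above can be scripted; the host below is a placeholder, and 8084 is the default control port named in this guide (the HTTP method may vary with how the control API defines these endpoints):

```shell
# Placeholder host; substitute the machine running the rPPG service.
RPPG_HOST="localhost"
RPPG_PORT=8084
START_URL="http://${RPPG_HOST}:${RPPG_PORT}/start"
STOP_URL="http://${RPPG_HOST}:${RPPG_PORT}/stop"
echo "$START_URL"

# On a live system you would then issue, e.g.:
#   curl -X POST "$START_URL"
#   curl -X POST "$STOP_URL"
```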
@@ -116,14 +115,14 @@ the `metrics` directory on the host, and may also expose summarized metrics via
- Inspect raw logs under the `metrics` directory mounted in the compose file.
- Combine these metrics with the rPPG output and UI dashboards to evaluate accelerator
  utilization and end‑to‑end performance.

## Next Steps

- Learn more about [How It Works](./how-it-works.md) for a high-level architectural overview.
- Experiment with different `RPPG_DEVICE` values to compare CPU, GPU, and NPU behavior.
- Replace the sample video or models with your own assets by updating the `models` and `videos`
  volumes and configuration.

<!--hide_directive
:::{toctree}

health-and-life-sciences-ai-suite/multi_modal_patient_monitoring/docs/user-guide/get-started/system-requirements.md

Lines changed: 51 additions & 54 deletions
@@ -10,94 +10,91 @@ This section lists the hardware, software, and network requirements for running
## Hardware Requirements

- **CPU:**
  - 4 physical cores (8 threads) or more recommended.
  - x86_64 architecture with support for AVX2.

- **System Memory (RAM):**
  - Minimum: 16 GB.
  - Recommended: 32 GB for smoother multi‑service operation and development work.

- **Storage:**
  - Minimum free disk space: 30 GB.
  - Recommended: 50 GB+ to accommodate Docker images, models, logs, and metrics.

- **Graphics / Accelerators:**
  - Required: Intel CPU.
  - Optional (recommended for full experience):
    - Intel integrated GPU supported by Intel® Graphics Compute Runtime.
    - Intel NPU supported by the linux‑npu‑driver stack.
  - The host must expose GPU and NPU devices to Docker, for example:
    - `/dev/dri` (GPU)
    - `/dev/accel/accel0` (NPU)
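A sketch of how such device exposure might appear in a Docker Compose service definition (the service and image names are illustrative; the device paths are those listed above):

```yaml
# Illustrative compose fragment — exposes GPU and NPU nodes to a container.
services:
  rppg:
    image: example/rppg-service:latest       # hypothetical image name
    devices:
      - /dev/dri:/dev/dri                    # Intel GPU
      - /dev/accel/accel0:/dev/accel/accel0  # Intel NPU
```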

3333
## Software Requirements
3434

3535
- **Docker and Container Runtime:**
36-
- Docker Engine 24.x or newer.
37-
- Docker Compose v2 (integrated as `docker compose`) or compatible compose plugin.
38-
- Ability to run containers with:
39-
- `--privileged` (for metrics‑collector).
40-
- Device mappings for GPU/NPU (for rPPG and metrics‑collector).
36+
- Docker Engine 24.x or newer.
37+
- Docker Compose v2 (integrated as `docker compose`) or compatible compose plugin.
38+
- Ability to run containers with:
39+
- `--privileged` (for metrics‑collector).
40+
- Device mappings for GPU/NPU (for rPPG and metrics‑collector).
4141

4242
- **Python (for helper scripts and tools):**
43-
- Python 3.10 or newer recommended.
44-
- Used primarily for asset preparation scripts and local tooling; application containers
45-
include their own Python runtimes (for example, Python 3.12 in the rPPG service image).
46-
43+
- Python 3.10 or newer recommended.
44+
- Used primarily for asset preparation scripts and local tooling; application containers
45+
include their own Python runtimes (for example, Python 3.12 in the rPPG service image).
4746

4847
- **Git and Make:**
49-
- `git` for cloning the repository and managing submodules.
50-
- `make` to run provided automation targets (e.g., `make run`, `make init-mdpnp`).
48+
- `git` for cloning the repository and managing submodules.
49+
- `make` to run provided automation targets (e.g., `make run`, `make init-mdpnp`).
5150

5251
## AI Models and Workloads

The application bundles several AI workloads, each with its own model(s) and inputs/outputs:

- **RPPG (Remote Photoplethysmography) Workload:**
  - **Model:** MTTS‑CAN (Multi‑Task Temporal Shift Convolutional Attention Network)
    converted to OpenVINO IR (`/models/rppg/mtts_can.xml`).
  - **Input:** Facial video frames (RGB) from the shared `videos` volume.
  - **Output:** Pulse and respiration waveforms, heart rate (HR) in BPM, and respiratory
    rate (RR) in BrPM.
  - **Target devices:** Intel CPU, Intel integrated GPU, or Intel NPU via OpenVINO
    (`RPPG_DEVICE`).

- **3D‑Pose Estimation Workload:**
  - **Model:** `human-pose-estimation-3d-0001` from Open Model Zoo, converted to OpenVINO
    IR (`/models/3d-pose/human-pose-estimation-3d-0001.xml`).
  - **Input:** RGB video of a person in motion (`face-demographics-walking.mp4` under
    `/videos/3d-pose` is provided for demonstration purposes).
  - **Output:** 3D human keypoints and pose estimation, streamed to the aggregator for
    visualization.
  - **Target devices:** Intel CPU and GPU via OpenVINO.

- **AI‑ECG Workload:**
  - **Models:** OpenVINO IR models for ECG rhythm classification located under
    `/models/ai-ecg`, for example:
    - `ecg_8960_ir10_fp16.xml`
    - `ecg_17920_ir10_fp16.xml`
  - **Input:** Preprocessed multi‑lead ECG time‑series segments of supported lengths (e.g.,
    8960 or 17920 samples).
  - **Output:** Rhythm classification labels (e.g., Normal sinus rhythm, Atrial Fibrillation,
    Other rhythm, or Too noisy to classify) with associated waveforms and timings.
  - **Target devices:** Intel CPU, GPU, or other OpenVINO‑supported devices configured via `ECG_DEVICE`.

## Network and Proxy

- **Network Access:**
  - Local network connectivity to access the UI (default: `http://<HOST_IP>:3000`).
  - Optional outbound internet access to download Docker base images, models, and assets
    (if not pre‑cached).

- **Proxy Support (optional):**
  - If your environment uses HTTP/HTTPS proxies, configure:
    - `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY` in the shell before running `make`.

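For instance, the proxy setup described above might look like this (the proxy address is a placeholder; substitute your organization's values):

```shell
# Placeholder proxy settings; export them before invoking make so that
# builds and downloads inherit them from the shell environment.
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
export NO_PROXY="localhost,127.0.0.1"
echo "$HTTP_PROXY"
```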
## Permissions

- Ability to run Docker as a user in the `docker` group or with `sudo`.
- Sufficient permissions to access device nodes for GPU and NPU (typically via membership in
  groups such as `video` or via explicit `devices` configuration in Docker Compose).

## Browser Requirements

health-and-life-sciences-ai-suite/multi_modal_patient_monitoring/docs/user-guide/how-it-works.md

Lines changed: 19 additions & 19 deletions
@@ -26,11 +26,11 @@ The **patient-monitoring-assets** service is responsible for preparing all AI as
by the workload:

- Downloads or generates AI models (for example, the MTTS‑CAN model used by the rPPG service)
  and converts them to OpenVINO IR format.
- Downloads reference video assets and places them in shared volumes (for example, the `videos`
  volume consumed by rPPG).
- Writes models into a shared `models` volume, making them available to downstream services
  without embedding them directly in each container.

This service typically runs to completion at startup and then exits once all artifacts are
prepared.
@@ -40,10 +40,10 @@ prepared.
The **patient-monitoring-aggregator** service is the central gRPC endpoint for vital signs:

- Exposes gRPC APIs that accept waveform and numeric vitals from producer services such as
  rPPG.
- Maintains per‑patient (or per‑device) state, including time‑series histories of vitals.
- Computes or stores aggregated metrics that can be consumed by the UI or other downstream
  components.

In the default configuration, the aggregator listens on a gRPC port (for example, 50051) and
is reachable by all AI producer services over the host network.
@@ -53,15 +53,15 @@ is reachable by all AI producer services over the host network.
The rPPG service performs remote photoplethysmography on patient video streams to estimate heart rate (HR) and respiratory rate (RR):

- **Video ingestion:** Reads frames from the shared `videos` volume (for example, `sample.mp4`)
  using `VideoHandler`, with optional looping and frame‑rate adaptation.
- **Preprocessing:** Crops a region of interest (ROI) on the face and resizes frames to the
  model input size using `Preprocessor`. Frames are accumulated into batches.
- **AI inference:** Runs the MTTS‑CAN OpenVINO model using the `InferenceEngine`, targeting
  Intel GPU or NPU (`RPPG_DEVICE`) with automatic fallback to CPU when needed.
- **Post‑processing:** Converts raw model output into pulse and respiration waveforms and
  derives numeric HR and RR estimates using `SignalPostprocessor`.
- **Streaming to aggregator:** Packages results into waveform and numeric vitals and streams
  them to the patient‑monitoring‑aggregator via the `RPPGGRPCClient`.

The rPPG service also exposes a small HTTP control API to start/stop streaming, allowing dynamic control during demos or testing.

@@ -71,11 +71,11 @@ The **metrics-collector** service gathers hardware and system metrics from the h
observability of AI workloads:

- Runs in a privileged container with access to host devices (for example, `/dev/dri`,
  `/dev/accel/accel0`) and system paths under `/sys` and `/proc`.
- Collects GPU, NPU, CPU, memory, and power statistics from telemetry tools and kernel
  interfaces.
- Writes raw logs (for example, qmassa JSON and NPU CSV) into a shared metrics directory, and
  can expose summarized metrics via its own API.

These metrics are useful for validating that AI workloads are correctly utilizing Intel
accelerators and for performance benchmarking.
@@ -86,7 +86,7 @@ The **ui** service provides a web‑based dashboard for clinicians or developers

- Connects to the patient‑monitoring‑aggregator to retrieve current and historical vitals.
- Visualizes waveforms (e.g., pulse and respiration) and numeric vitals (e.g., HR, RR) in real
  time.
- May also integrate system‑level metrics from the metrics‑collector to show hardware utilization alongside clinical signals.

The UI is typically exposed on a configurable HTTP port (for example, 3000) and accessed via a standard web browser.
@@ -96,14 +96,14 @@ The UI is typically exposed on a configurable HTTP port (for example, 3000) and
Putting the pieces together:

1. **Assets initialization** – patient-monitoring-assets populates the shared `models` and
   `videos` volumes.
2. **RPPG inference** – the rPPG service reads video frames, preprocesses them, and runs the
   MTTS‑CAN model on Intel hardware (CPU/GPU/NPU) via OpenVINO.
3. **Vitals aggregation** – rPPG streams waveform and numeric vitals to
   patient-monitoring-aggregator over gRPC.
4. **Monitoring and observability** – metrics-collector continuously records hardware
   utilization and other system metrics.
5. **Visualization** – the UI queries the aggregator (and optionally metrics endpoints) to
   present vitals and system status to end‑users.

This modular architecture allows each component to be developed, deployed, and scaled independently while sharing common assets and infrastructure.

health-and-life-sciences-ai-suite/multi_modal_patient_monitoring/docs/user-guide/index.md

Lines changed: 7 additions & 6 deletions
@@ -17,27 +17,27 @@ consolidated monitoring for a virtual patient.
It combines several AI services:

- **rPPG (Remote Photoplethysmography):** Contactless heart and respiratory rate estimation
  from facial video.
- **3D-Pose Estimation:** 3D human pose detection from video.
- **AI-ECG:** ECG rhythm classification from simulated ECG waveforms.
- **MDPNP:** Collects metrics from three simulated devices: ECG, BP, and CO2.
- **Patient Monitoring Aggregator:** Central service that collects and aggregates vitals from
  all AI workloads.
- **Metrics Collector:** Gathers hardware and system telemetry (CPU, GPU, NPU, power) from
  the host.
- **UI:** Web-based dashboard for visualizing waveforms, numeric vitals, and system status.

Together, these components illustrate how vision- and signal-based AI workloads can be orchestrated, monitored, and visualized in a clinical-style scenario.

## Supporting Resources

- [Get Started](./get-started.md) – Step-by-step instructions to build and run the application
  using `make` and Docker.
- [System Requirements](./get-started/system-requirements.md) – Hardware, software, and network requirements, plus an overview of the AI models used by each workload.
- [How It Works](./how-it-works.md) – High-level architecture, service responsibilities, and
  data/control flows.

> This application is provided for development and evaluation purposes only and is _not_ intended for clinical or diagnostic use.

<!--hide_directive
:::{toctree}
@@ -46,6 +46,7 @@ data/control flows.
get-started.md
how-it-works.md
run-multi-modal-app.md
release-notes.md

:::
hide_directive-->
Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
# Release Notes - Multi-modal Patient Monitoring

## Version 1.0.0

**Release Date**: 2026-03-25

This is the initial release of the application; therefore, it is considered a preview version.

### New

The initial feature set of the application is now available:

- Monitoring of heart and respiratory rates
- Integration with medical devices
- Pose estimation with joint tracking
- ECG analysis with 12-lead classification
