-and [xdp-tools](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/3.0/SPECS/xdp-tools/xdp-tools.spec). Click on any of the names to go to
-their SPEC files and learn about specific patches related to TSN optimizations.
+and [xdp-tools](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/3.0/SPECS/xdp-tools/xdp-tools.spec) packages.
docs/developer-guide/emt-bootkit.md (6 additions & 4 deletions)
@@ -1,6 +1,8 @@
-:::
-orphan: true
-:::
+<!--hide_directive
+```{eval-rst}
+:orphan:
+```
+hide_directive-->
 
 # Edge Microvisor Bootkit
 
@@ -18,7 +20,7 @@ the resulting image are defined in [edge-image-bootkit.json](https://github.com/
 as well as
 [Bootkit specific packages](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/3.0/toolkit/imageconfigs/packagelists/bootkit-packages.json).
 
-Before you can build the image, make sure you have [installed prerequisites and built the toolchain](./get-started/emt-building-howto.md).
+Before you can build the image, make sure you have [installed prerequisites and built the toolchain](../get-started/emt-building-howto.md).
 
 To build the Bootkit OS image, run the following command:
-| Real-time & deterministic workloads | Run latency-sensitive workloads with guaranteed bounded jitter and repeatable execution timelines across one or more hosts, maintainable under steady-state and failure-recovery conditions | <br> - Bounded end-to-end latency & jitter <br> - Repeatable scheduling windows under load <br> - Cross-host timing consistency for distributed stages <br> - Fast, predictable recovery without violating SLOs | <br> - [PREEMPT_RT kernel](../emt-architecture-overview.md#preempt-rt-kernel) <br> - [Resource Director Technologies](../emt-architecture-overview.md#resource-director-technology) <br> - [Intel GPU RT](../emt-architecture-overview.md#intel-device-plugins-for-kubernetes) <br> - [CPU & Scheduler Isolation](../emt-architecture-overview.md#isolcpuslist) <br> - [Memory Determinism](../emt-architecture-overview.md#preempt-rt-kernel) <br> - Time & Clocks <br> - [Network Determinism (TSN)](../emt-architecture-overview.md#time-sensitive-networking-support) | - [PREEMPT_RT](../architecture/emt-extensions-and-patches.md#preempt_rt) <br> - [Time-Sensitive Networking](../architecture/emt-extensions-and-patches.md#time-sensitive-networking-tsn) |
-| VM-based workloads on Kubernetes with shared GPUs | Run multiple virtual machines on Kubernetes that concurrently share one or more physical GPUs, with predictable fairness, isolation, and policy-driven placement—using a KubeVirt stack extended for GPU sharing | <br> - Stable, repeatable GPU performance per VM under contention <br> - Hard/soft sharing policies (fair-share, priority tiers, or quotas) <br> - Safe isolation between tenants/VMs (memory, contexts, resets) <br> - Schedulable resources with clear admission signals (no surprise fails) <br> - Operational guardrails: health checks, graceful drain/eviction, rollback | <br> - [SRIOV](./deployment/emt-vm-host.md) <br> - [Intel GPU](../emt-system-requirements.md#discrete-gpu) <br> - [kubevirt](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node/blob/main/standalone-node/docs/user-guide/desktop-virtualization-image-guide.md) <br> - [Host virtualization](./deployment/emt-vm-host.md) <br> - [Intel GPU device plugin](../emt-architecture-overview.md#intel-device-plugins-for-kubernetes) | - [SR-IOV](../architecture/emt-extensions-and-patches.md#sr-iov) <br> - [DRM](../architecture/emt-extensions-and-patches.md#drm) |
-| AI & Vision workloads | Enable AI inference and computer-vision workloads on edge nodes using Intel GPU and NPU acceleration, exposing unified hardware-assisted pipelines through standard APIs and user-space libraries | <br> - Efficient execution of deep-learning and vision inference on-device without cloud dependency <br> - Unified GPU/NPU compute abstraction for developers (OpenVINO backend, media pipelines) <br> - Deterministic frame-rate and latency for multi-stream analytics workloads (e.g., camera ingest) <br> - Seamless integration with containers or pods, including dynamic device discovery and sharing <br> - Stable ABI/API interface across [OS updates](../architecture/emt-updates.md) and driver versions | <br> - [Edge AI packages](https://eci.intel.com/docs/3.3/packages_list.html) <br> - [OpenVino](https://docs.openvino.ai) <br> - [Intel GPU and NPU drivers](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html) <br> - [Intel GPU device plugin](../emt-architecture-overview.md#intel-device-plugins-for-kubernetes) | |
+<!--hide_directive::::{tab-set}
+:::{tab-item}hide_directive--> Real Time & Deterministic
+<!--hide_directive:sync: tab1hide_directive-->
+\
+Run latency-sensitive workloads with guaranteed bounded jitter and repeatable
+execution timelines across one or more hosts, maintainable under steady-state
+and failure-recovery conditions.
+
+*Primary outcomes:*
+
+- Bounded end-to-end latency & jitter
+- Repeatable scheduling windows under load
+- Cross-host timing consistency for distributed stages
+- Fast, predictable recovery without violating SLOs
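
The "bounded end-to-end latency & jitter" outcome added above can be made concrete with a quick measurement loop. The sketch below is not part of the toolkit or its docs; it is a minimal, unprivileged Python illustration of how wake-up jitter of a periodic task can be sampled. The `measure_jitter` helper and its parameters are hypothetical; the idea is that on a PREEMPT_RT kernel with an isolated, SCHED_FIFO-pinned thread this drift stays tightly bounded, while on a stock kernel it can spike under load.

```python
import time

def measure_jitter(period_ms=10, iterations=200):
    """Sample worst-case wake-up lateness of a periodic loop.

    Sleeps until each ideal deadline and records how late the
    loop actually wakes up, returning the worst observed lateness
    in milliseconds.
    """
    period_ns = period_ms * 1_000_000
    deadline = time.monotonic_ns() + period_ns
    worst = 0
    for _ in range(iterations):
        # Sleep until the next ideal deadline (if it hasn't passed).
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
        # Jitter sample: how far past the deadline we woke up.
        late = time.monotonic_ns() - deadline
        worst = max(worst, late)
        deadline += period_ns
    return worst / 1e6  # worst-case lateness in milliseconds

if __name__ == "__main__":
    print(f"worst-case wake-up jitter: {measure_jitter():.3f} ms")
```

Running this under load on an isolated core (e.g. via `isolcpus` and `taskset`) versus a busy shared core is a simple way to see the scheduling-window repeatability the tab describes.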