[OV][ITT][CPU Plugin] Align ITT markers to standard convention and add async support #33312

Merged
aobolensk merged 3 commits into openvinotoolkit:master from tovinkere:itt_markers_cpu
Jan 21, 2026
Conversation

@tovinkere
Contributor

@tovinkere tovinkere commented Dec 18, 2025

Standardizing CPU ITT marker + support for asynchronous execution - Part 2

This PR is the second of a series of PRs to standardize the ITT markers in OpenVINO that will be enabled by default through host-side instrumentation.

  1. The first PR addresses the enhancements required in ITT and the framework to support the creation and propagation of IDs when asynchronous execution is in play (PR#33639).
  2. This second PR standardizes ITT markers in the CPU plugin and extends support to include asynchronous execution.
  3. The third PR will enable default markers for the GPU plugin, giving visibility into inference pass begin/end and operator preparation and submission within each inference.
  4. The final PR will extend the same host-side markers to NPU execution, capturing the inference span and pipeline activity.

Summary of the current PR (PR#2)

  • Uses the same convention standardized in PR#33639
  • Ensures that CPU Plugin activity falls under the following namespaces:
    • ov::phases::inference
    • ov::phases::cpu::inference
    • ov::op::cpu::exec
    • ov::op::cpu::details

Details:
Default-enabled ITT marker support in the CPU plugin was previously limited to synchronous execution. This PR extends the default support to cover asynchronous behavior and ensures a standardized convention is followed in the namespaces used.

@aobolensk Please review as this is an enhancement of your work with synchronous execution.

See CVS-179230 benchmark data for smoke tests that include low-latency models.

+ Use the same convention standardized in PR#33311
+ Ensures the namespace for CPU Plugin activity falls under:
   ov::phases::inference
   ov::phases::cpu::inference
   ov::op::cpu::exec
   ov::op::cpu::details

Signed-off-by: Vasanth Tovinkere <vasanth.tovinkere@intel.com>
@tovinkere tovinkere requested review from a team as code owners December 18, 2025 19:09
@github-actions github-actions bot added the category: CPU OpenVINO CPU plugin label Dec 18, 2025
@sys-openvino-ci sys-openvino-ci added the ExternalIntelPR External contributor from Intel label Dec 18, 2025
+ The value of ov::op::cpu::details won't be understood until
  the graph-model mapping between the input model and the optimized
  execution model is finalized.

Signed-off-by: Vasanth Tovinkere <vasanth.tovinkere@intel.com>
Signed-off-by: Vasanth Tovinkere <vasanth.tovinkere@intel.com>
@maxnick
Contributor

maxnick commented Jan 15, 2026

build_jenkins

@maxnick maxnick added this to the 2026.0 milestone Jan 21, 2026
@aobolensk aobolensk added this pull request to the merge queue Jan 21, 2026
Merged via the queue into openvinotoolkit:master with commit 967e454 Jan 21, 2026
215 of 219 checks passed
Naseer-010 pushed a commit to Naseer-010/openvino that referenced this pull request Feb 18, 2026
…d async support (openvinotoolkit#33312)
github-merge-queue bot pushed a commit that referenced this pull request Feb 19, 2026
… submission (#33313)

- Enables default ITT markers for higher level operations such as
inference pass, op preparation and submission
- Follows the same guidelines to standardize the conventions for
namespaces: ov::phases::gpu::inference ov::op::gpu
- Supports both synchronous and asynchronous operations

Enabling default GPU ITT markers using standard convention - Part 3

This PR is the **third** of a series of PRs to standardize the ITT
markers in OpenVINO that will be enabled by default through host-side
instrumentation.

1. The first PR addresses the enhancements required in ITT and the
framework to support the creation and propagation of IDs when
asynchronous execution is in play
[PR#33639](#33639).
2. The second PR will standardize ITT markers in the CPU and enhance
support to include asynchronous execution
[PR#33312](#33312).
3. This **third** PR enables default markers for the GPU plugin to allow
visibility into inference pass begin/end and operator preparation and
submission within each inference, following the standardized conventions
described in 1 and 2.
4. The final PR will extend the same host-side markers to NPU
execution, capturing the inference span and pipeline activity.

Summary of the current PR (PR#3)

Use the same convention standardized in
[PR#33639](#33639)
Ensures the namespace for GPU Plugin activity falls under:
  ov::phases::gpu::inference
  ov::op::gpu

Details:
GPU support is enabled with default ITT markers that support synchronous
and asynchronous execution. This PR ensures a standardized convention is
followed in the namespaces used.

Tickets:
[CVS-179230](https://jira.devtools.intel.com/browse/CVS-179230)

@isanghao Please review this as you are generally aware of what was
discussed

---------

Signed-off-by: Vasanth Tovinkere <vasanth.tovinkere@intel.com>
github-merge-queue bot pushed a commit that referenced this pull request Feb 20, 2026
… submission (#33313)
