This commit fixes a crash in compile_model() that occurred during constant folding of models whose If operators contain F16 or BF16 constants.

Problem:

  • When compile_model() performs constant folding on If/Loop operators, it evaluates subgraphs to compute constant outputs
  • F16/BF16 tensors were being accessed with F32 pointers, causing: 'Check is_pointer_representable(element_type) failed at src/inference/src/dev/make_tensor.cpp:83'

Root Cause:

  • The function() helper in src/core/reference/src/op/function.cpp created output tensors matching the result type directly
  • F16/BF16 types cannot be safely accessed as F32 pointers
  • This led to a type safety violation during evaluation
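The failure mode can be illustrated with a NumPy analogy (this is a hedged sketch, not the OpenVINO code itself): reinterpreting an f16 buffer through an f32 pointer misreads both the element count and the bit patterns, which is exactly what the is_pointer_representable check guards against.

```python
import numpy as np

# Two f16 values occupy 4 bytes in total.
a = np.array([1.0, 2.0], dtype=np.float16)

# Viewing the same buffer as f32 yields a single bogus element,
# not the original values -- the analogue of the unsafe pointer
# reinterpretation that triggered the check failure.
b = a.view(np.float32)
print(b.shape)  # (1,)
```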

Solution:

  • Detect F16/BF16 result types before creating output tensors
  • Create F32 tensors for evaluation (compatible with pointer access)
  • Evaluate the subgraph with F32 tensors
  • Convert results back to F16/BF16 after evaluation
  • Both function() overloads updated for consistency
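The steps above can be sketched in Python (hypothetical helper names; the actual fix lives in C++ in function.cpp). Note that F16 to F32 widening is exact, and rounding back to F16 recovers the original values, so the round trip does not lose precision for values that started as F16.

```python
import numpy as np

LOW_PRECISION = {np.float16}  # bf16 would be handled the same way

def evaluate_subgraph(eval_fn, result_dtype):
    """Evaluate in f32 when the result type is f16/bf16, then cast back."""
    # Detect low-precision result types and upcast for evaluation.
    eval_dtype = np.float32 if result_dtype in LOW_PRECISION else result_dtype
    out = eval_fn(eval_dtype)  # subgraph writes into f32-safe storage
    # Convert results back to the requested precision afterwards.
    return out.astype(result_dtype)

res = evaluate_subgraph(lambda dt: np.full(4, 0.1, dtype=dt), np.float16)
assert res.dtype == np.float16
```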

Testing:

  • Comprehensive test suite added in test_if_f16_fix.py
  • Tests cover: ONNX model loading, compilation, inference, and programmatic model creation with F16 constants
  • Validates both then/else branches of If operator
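A minimal branch-coverage check in the spirit of those tests might look like this (illustrative only; the real suite builds actual OpenVINO models and runs compile_model):

```python
import numpy as np

THEN_CONST = np.float16(1.5)
ELSE_CONST = np.float16(-2.25)

def folded_if(cond: bool) -> np.float16:
    # Emulate constant folding of If: the selected branch constant
    # is evaluated through f32 and converted back to f16.
    picked = THEN_CONST if cond else ELSE_CONST
    return np.float16(np.float32(picked))

assert folded_if(True) == THEN_CONST
assert folded_if(False) == ELSE_CONST
```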

Fixes: #33425


Signed-off-by: Vishal <[email protected]>
@vishalharkal15 vishalharkal15 requested review from a team as code owners January 6, 2026 17:30
@vishalharkal15 vishalharkal15 requested review from mlukasze and removed request for a team January 6, 2026 17:30
@github-actions github-actions bot added category: Core OpenVINO Core (aka ngraph) no-match-files labels Jan 6, 2026
@sys-openvino-ci sys-openvino-ci added the ExternalPR External contributor label Jan 6, 2026


Closes: [Bug]: Compiler Crash "f16 is not representable as pointer to f32" during compile_model