Describe the issue
When I tried to create an InferenceSession with the WebGL execution provider, I encountered an error.
To reproduce
- Download the YOLOv8n ONNX model here: MODEL
- Run this HTML page from a web server (e.g., Live Server in Visual Studio Code):
```html
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.webgl.min.js"></script>
<script>
  (async () => {
    const modelInputShape = [1, 3, 640, 640]; // YOLOv8n expects a 1x3x640x640 float32 input
    const model = await ort.InferenceSession.create("yolov8n.onnx", { executionProviders: ["webgl"] });
    const tensor = new ort.Tensor("float32", new Float32Array(modelInputShape.reduce((a, b) => a * b)), modelInputShape);
    await model.run({ images: tensor });
  })();
</script>
```
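For triage, it may help to check whether the same model runs with the default wasm execution provider; if it does, the failure is specific to the WebGL EP. Below is a minimal sketch, assuming the same yolov8n.onnx file and the full ort.min.js bundle (which includes the wasm backend):

```html
<script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
<script>
  (async () => {
    const shape = [1, 3, 640, 640]; // same YOLOv8n input shape as above
    const session = await ort.InferenceSession.create("yolov8n.onnx", { executionProviders: ["wasm"] });
    console.log(session.inputNames); // should confirm the expected input name ("images")
    const data = new Float32Array(shape.reduce((a, b) => a * b));
    const output = await session.run({ images: new ort.Tensor("float32", data, shape) });
    console.log(Object.keys(output)); // output names, printed only if the run succeeds
  })();
</script>
```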
Urgency
Yes, I need to resolve this error as soon as possible.
Platform
Windows
OS Version
10
ONNX Runtime Installation
Released Package (loaded from the jsdelivr CDN)
ONNX Runtime Version or Commit ID
1.17.1
ONNX Runtime API
JavaScript
Architecture
X64
Execution Provider
WebGL
Execution Provider Library Version
WebGL
Model File
No response
Is this a quantized model?
No