Labels
ep:CoreML (issues related to CoreML execution provider), stale (issues that have not been addressed in a while; categorized by a bot)
Describe the issue
I'm using kokoro-onnx for TTS generation with the CoreML execution provider on macOS (Apple M1), and it fails with the following error:
2025-01-06 20:04:29.684416 [W:onnxruntime:, helper.cc:88 IsInputSupported] CoreML does not support shapes with dimension values of 0. Input:/Slice_1_output_0, shape: {0}
2025-01-06 20:04:29.684759 [W:onnxruntime:, helper.cc:88 IsInputSupported] CoreML does not support shapes with dimension values of 0. Input:/decoder/generator/m_source/l_sin_gen/Slice_output_0, shape: {0}
2025-01-06 20:04:29.685270 [W:onnxruntime:, helper.cc:82 IsInputSupported] CoreML does not support input dim > 16384. Input:decoder.generator.stft.stft.window_sum, shape: {5000015}
2025-01-06 20:04:29.686710 [W:onnxruntime:, coreml_execution_provider.cc:115 GetCapability] CoreMLExecutionProvider::GetCapability, number of partitions supported by CoreML: 123 number of nodes in the graph: 2361 number of nodes supported by CoreML: 949
Traceback (most recent call last):
File "/Volumes/Internal/audio/kokoro-onnx/examples/with_session.py", line 14, in <module>
session = InferenceSession("kokoro-v0_19.onnx", providers=["CoreMLExecutionProvider"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/Internal/audio/kokoro-onnx/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 465, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/Volumes/Internal/audio/kokoro-onnx/.venv/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 537, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : model_builder.cc:768 RegisterModelInputOutput Unable to get shape for output: /Squeeze_output_0

Related:
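The GetCapability warning above shows that CoreML only takes some partitions of the graph (949 of 2361 nodes), with the rest falling back to another provider. As a sketch (plain Python, not the onnxruntime API itself), one can build a provider list that prefers CoreML but explicitly keeps the CPU provider as the fallback; the `available` list would come from `onnxruntime.get_available_providers()` in real code:

```python
# Sketch: prefer the CoreML EP but always keep CPUExecutionProvider last,
# so graph partitions CoreML cannot handle (zero-dim shapes, dims > 16384)
# still have somewhere to run. `available` would come from
# onnxruntime.get_available_providers() in real code.
def build_providers(available):
    preferred = ["CoreMLExecutionProvider", "CPUExecutionProvider"]
    providers = [p for p in preferred if p in available]
    # Fall back to CPU-only if CoreML is not built into this wheel.
    return providers or ["CPUExecutionProvider"]

print(build_providers(["CoreMLExecutionProvider", "CPUExecutionProvider"]))
# -> ['CoreMLExecutionProvider', 'CPUExecutionProvider']
```

Note that in the traceback above the failure happens during session initialization, while CoreML is compiling its partitions, so listing a CPU fallback may not by itself avoid the RegisterModelInputOutput error.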
To reproduce
"""
pip install kokoro-onnx==0.2.3 soundfile
wget https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files/kokoro-v0_19.onnx
wget https://github.com/thewh1teagle/kokoro-onnx/releases/download/model-files/voices.json
python examples/custom_session.py
"""
import soundfile as sf
from kokoro_onnx import Kokoro
from onnxruntime import InferenceSession
# See list of providers https://github.com/microsoft/onnxruntime/issues/22101#issuecomment-2357667377
session = InferenceSession("kokoro-v0_19.onnx", providers=["CoreMLExecutionProvider"])
kokoro = Kokoro.from_session(session, "voices.json")
samples, sample_rate = kokoro.create(
    "Hello. This audio generated by kokoro!", voice="af_sarah", speed=1.0, lang="en-us"
)
sf.write("audio.wav", samples, sample_rate)
print("Created audio.wav")

Urgency
It's too slow on CPU
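To make "too slow" concrete, a small timing helper could be used to measure CPU-only latency; this is a generic sketch (the `best_time` helper is hypothetical, not part of kokoro-onnx or onnxruntime), and `kokoro.create` is the call from the reproduce script above:

```python
import time

def best_time(fn, *args, repeats=3, **kwargs):
    """Return the best wall-clock time in seconds over `repeats` calls of fn."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical usage against the reproduce script:
#   cpu_seconds = best_time(kokoro.create, "Hello.", voice="af_sarah")
elapsed = best_time(sum, range(100_000))
print(elapsed >= 0.0)
# -> True
```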
Platform
Mac
OS Version
14.5 (23F79)
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
onnxruntime v1.20.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CoreML
Execution Provider Library Version
No response