Describe the issue
I'm trying to use Numpy.NET and the ONNX Runtime C# API to run inference on a model.
It does run on .NET, but the result shape is (1, 3590, 768), while in Python it is (1, 7180, 768).
To reproduce
Here is a code example for the inference:
// Wrap the 16 kHz audio samples in a (1, N) input tensor
Tensor<float> audioTensor = new DenseTensor<float>(audio16k.GetData<float>(), new int[] { 1, audio16k.len });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("feats", audioTensor)
};

// Run the HuBERT model
using IDisposableReadOnlyCollection<DisposableNamedOnnxValue> hubertOutput = this.HubertModle?.Run(inputs);

// Copy the flat output into an NDarray, reshape to (1, frames, 768), then transpose to (1, 768, frames)
NDarray hubertOutputArray = np.array(hubertOutput[0].AsTensor<float>().ToArray(), dtype: np.float32);
hubertOutputArray = hubertOutputArray.reshape(1, hubertOutputArray.shape[0] / 768, 768).transpose(0, 2, 1);

// Repeat each frame twice along the time axis, then transpose back to (1, 2*frames, 768)
hubertOutputArray = np.repeat(hubertOutputArray, [2], axis: 2).transpose(0, 2, 1).astype(np.float32);
int hubertLength = hubertOutputArray.shape[1];
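Since 7180 is exactly twice 3590, the factor of two could come either from the model output itself or from the np.repeat step above. As a rough check (just a debugging sketch, reusing the hubertOutput and hubertOutputArray variables from the code above), the raw dimensions reported by ONNX Runtime can be printed next to the post-processed shape:

// Debugging sketch (my assumption, not part of the original code):
// compare the dimensions ONNX Runtime reports with the shape after Numpy.NET post-processing.
var rawTensor = hubertOutput[0].AsTensor<float>();
Console.WriteLine("raw output dims: " + string.Join(", ", rawTensor.Dimensions.ToArray()));
Console.WriteLine("post-processed shape: " + hubertOutputArray.shape);

If the raw dimensions already differ from the Python run, the discrepancy is on the model/runtime side; if they match and only the post-processed shape differs, it is in the reshape/repeat code.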
Urgency
No response
Platform
Windows
OS Version
Windows 11 24H2
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.0
ONNX Runtime API
C#
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 11.6