Labels: needs-triage, type: bug
Description
Expected behavior
TVM should compile the model correctly.
Actual behavior
For the attached model (11.onnx), TVM crashes during LLVM codegen with the following traceback:
Traceback (most recent call last):
  File "/home/ubuntu/Documents/test1.py", line 67, in <module>
    test(onnx_model)
  File "/home/ubuntu/Documents//test1.py", line 49, in test
    ex = tvm.compile(tvm_model, target="llvm")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/Documents/DLCompilers/tvm/python/tvm/driver/build_module.py", line 104, in compile
    return tvm.relax.build(
           ^^^^^^^^^^^^^^^^
  File "/home/ubuntu/Documents/DLCompilers/tvm/python/tvm/relax/vm_build.py", line 263, in build
    return _vmlink(
           ^^^^^^^^
  File "/home/ubuntu/Documents/DLCompilers/tvm/python/tvm/relax/vm_build.py", line 158, in _vmlink
    lib = tvm.tir.build(tir_mod, target=target, pipeline=tir_pipeline)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/Documents/DLCompilers/tvm/python/tvm/tir/build.py", line 239, in build
    return tir_to_runtime(host_mod, device_mod_dict, target_host)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/Documents/DLCompilers/tvm/python/tvm/tir/build.py", line 149, in tir_to_runtime
    mhost = codegen_build(mhost_all, target_host)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/Documents/DLCompilers/tvm/python/tvm/tir/build.py", line 131, in codegen_build
    return bf(mod, target)
           ^^^^^^^^^^^^^^^
  File "python/tvm_ffi/cython/function.pxi", line 904, in tvm_ffi.core.Function.__call__
  File "<unknown>", line 0, in tvm::codegen::LLVMModuleNode::Init(tvm::IRModule const&, tvm::Target const&)
  File "<unknown>", line 0, in tvm::codegen::CodeGenCPU::Finish()
  File "<unknown>", line 0, in tvm::codegen::CodeGenLLVM::Finish()
  File "<unknown>", line 0, in tvm::codegen::CodeGenLLVM::Verify() const
  File "<unknown>", line 0, in tvm::runtime::detail::LogFatal::Entry::Finalize()
tvm.error.InternalError: LLVM module verification failed with the following errors:
location of #dbg_declare must be a pointer or int
#dbg_declare(float %v_input_red_temp.v1, !192, !DIExpression(), !170)
float %v_input_red_temp.v1
label %if_end3
ptr @argmax_compute_
location of #dbg_declare must be a pointer or int
#dbg_declare(float %v_input_red_temp.v1, !192, !DIExpression(), !170)
float %v_input_red_temp.v1
label %if_end3
ptr @argmax_compute_
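The verifier rejects a #dbg_declare record whose location operand is a plain float SSA value (%v_input_red_temp.v1); LLVM requires it to be a pointer or an integer. The offending debug record sits in the generated argmax_compute_ kernel, so the failure looks like a debug-info emission problem in the LLVM codegen rather than in the ONNX frontend. A smaller reproduction that skips ONNX entirely might look like the sketch below (untested assumption: the legalized argmax kernel alone is enough to trigger the same verifier error; the module is built with TVMScript instead of the frontend):

# Hedged sketch: a Relax module containing only an argmax, on the
# assumption that the failing argmax_compute_ kernel comes from this op.
import tvm
from tvm import relax
from tvm.script import ir as I
from tvm.script import relax as R


@I.ir_module
class ArgmaxOnly:
    @R.function
    def main(x: R.Tensor((1, 16), "float32")) -> R.Tensor((1,), "int64"):
        with R.dataflow():
            y = R.argmax(x, axis=1)  # lowers to an argmax TIR kernel
            R.output(y)
        return y


mod = relax.transform.LegalizeOps()(ArgmaxOnly)
ex = tvm.compile(mod, target="llvm")  # expected to hit the same verifier error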
Environment
- OS: Ubuntu 20.04
- TVM: 0.23.dev0 (f4e28d3)
- onnxruntime: 1.23.2
Steps to reproduce
This bug can be reproduced with the script below and the model in the attachment. As the script shows, the model runs fine under onnxruntime; if the attachment is unavailable, a hypothetical stand-in model can be generated with the sketch that follows.
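A hypothetical stand-in for the attached 11.onnx and inputs.pkl (assumption, based on the argmax_compute_ symbol in the error: a single ArgMax node over a float input; the real attached model may differ):

# Hypothetical stand-in for the attached files (not the original model).
import pickle
import numpy as np
import onnx
from onnx import TensorProto, helper

node = helper.make_node("ArgMax", inputs=["input"], outputs=["output"], axis=1, keepdims=0)
graph = helper.make_graph(
    [node],
    "argmax_repro",
    [helper.make_tensor_value_info("input", TensorProto.FLOAT, [1, 16])],
    [helper.make_tensor_value_info("output", TensorProto.INT64, [1])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 14)])
onnx.checker.check_model(model)
onnx.save(model, "11.onnx")

with open("inputs.pkl", "wb") as fp:
    pickle.dump({"input": np.random.rand(1, 16).astype("float32")}, fp)

The reproduction script itself: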
from typing import Dict, List, Literal, Optional
import sys
import os
import numpy as np
import onnx
import onnxruntime
from onnx import ModelProto, TensorProto, helper
import tvm
import tvm.testing
from tvm import relax
from tvm.relax.frontend.onnx import from_onnx
import argparse
import pickle


def test(
    model: ModelProto,
    inputs: Optional[Dict[str, np.ndarray]] = None,
    ir_version: int = 8,
    opset: int = 14,
) -> None:
    # Configure model format.
    if ir_version is not None:
        model.ir_version = ir_version
    if opset is not None:
        model.opset_import[0].version = opset

    # Load the saved inputs (overrides the `inputs` argument).
    with open("inputs.pkl", "rb") as fp:
        inputs = pickle.load(fp)

    # Run the model through onnxruntime to get the expected result.
    try:
        ort_session = onnxruntime.InferenceSession(
            model.SerializeToString(), providers=["CPUExecutionProvider"]
        )
        ort_output = ort_session.run([], inputs)
    except Exception as e:
        print(e)
        print("This model cannot be executed by onnxruntime!")
        sys.exit(1)

    # Import into Relax, legalize, and compile for LLVM.
    tvm_model = from_onnx(model, opset=opset, keep_params_in_input=True)
    tvm_model = relax.transform.DecomposeOpsForInference()(tvm_model)
    tvm_model = relax.transform.LegalizeOps()(tvm_model)
    tvm_model, params = relax.frontend.detach_params(tvm_model)
    with tvm.transform.PassContext(opt_level=3):
        ex = tvm.compile(tvm_model, target="llvm")


if __name__ == "__main__":
    onnx_model = onnx.load("11.onnx")
    test(onnx_model)

Triage
Please refer to the list of label tags here to find the relevant tags and add them below in a bullet format (example below).
- needs-triage