add int8 quantization support for llm models #1183
Workflow: build-test-linux-x86_64_rtx.yml
Triggered on: pull_request
generate-matrix / generate (5s)
Matrix jobs:
- build
- L0-dynamo-converter-tests (waiting for pending jobs)
- L0-dynamo-core-tests (waiting for pending jobs)
- L0-py-core-tests (waiting for pending jobs)
- L1-dynamo-compile-tests (waiting for pending jobs)
- L1-dynamo-core-tests (waiting for pending jobs)
- L1-torch-compile-tests (waiting for pending jobs)
- L2-dynamo-compile-tests (waiting for pending jobs)
- L2-dynamo-core-tests (waiting for pending jobs)
- L2-dynamo-plugin-tests (waiting for pending jobs)
- L2-torch-compile-tests (waiting for pending jobs)
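The fan-out above follows the common generate-then-build pattern: a `generate-matrix` job emits a JSON matrix that the downstream jobs consume via `strategy.matrix`. A minimal sketch of such a workflow, assuming standard GitHub Actions syntax (the job names, step ids, and matrix entries here are illustrative and not taken from the actual build-test-linux-x86_64_rtx.yml):

```yaml
# Hypothetical sketch only; real job/step names and matrix contents differ.
name: build-test-linux-x86_64_rtx
on: pull_request

jobs:
  generate-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.generate.outputs.matrix }}
    steps:
      - id: generate
        # Emit a JSON matrix consumed by downstream jobs.
        run: echo 'matrix={"include":[{"python":"3.10","cuda":"cu130"}]}' >> "$GITHUB_OUTPUT"

  build:
    needs: generate-matrix
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJSON(needs.generate-matrix.outputs.matrix) }}
    steps:
      - run: echo "Building wheel for Python ${{ matrix.python }} / ${{ matrix.cuda }}"
```

Because every test matrix `needs` the build matrix, all the L0/L1/L2 test jobs sit in "waiting for pending jobs" until the build jobs finish.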
Annotations
1 error
RTX - Build Linux x86_64 torch-tensorrt whl package / build-wheel-3.10-cu130-cuda-x86_64-true-false
Process completed with exit code 1.
Artifacts
Produced during runtime

| Name | Size | Digest |
|---|---|---|
| pytorch_tensorrt__3.10_cu129_x86_64 | 3.15 MB | sha256:c8a5d7a9cd80ca33dca8177217a670ab4d92dd7d76dc0c922c715f796f1eb286 |