yeetypete commented Nov 3, 2025

Description

Ensures that the torchtrtc precision settings do not always include the default fp32 precision when a precision is explicitly passed as an argument. This is particularly important when compiling a model to run on the DLA, which does not support fp32. Before this fix, that was not possible with the torchtrtc CLI.

Bug example:

torchtrtc ssd_traced.jit.pt ssd_trt_dla.ts "(1,3,300,300)@f16%contiguous" -p fp16 --device-type=dla -v
...
INFO: Settings requested for TensorRT engine:
    Enabled Precisions: Float32 Float16
    TF32 Floating Point Computation Enabled: 1
    Truncate Long and Double: 0
    Make Refittable Engine: 0
    Debuggable Engine: 0
    GPU ID: 0
    Allow GPU Fallback (if running on DLA): 0
    Avg Timing Iterations: 1
    Max Workspace Size: 0
    DLA SRAM Size: 1048576
    DLA Local DRAM Size: 1073741824
    DLA Global DRAM Size: 536870912
    Device Type: DLA
    GPU ID: 0
    DLACore: 0
    Engine Capability: standard
    Calibrator Created: 0

This should report Float16 as the only enabled precision.
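
Below is a minimal standalone sketch of the fix pattern (not the actual torchtrtc source; the variable names and the string-based precision set are hypothetical stand-ins): when the user passes precisions explicitly via -p, clear the default fp32 entry before inserting the requested ones.

```cpp
#include <iostream>
#include <set>
#include <string>
#include <vector>

int main() {
  // Mirrors the default behavior: fp32 is pre-enabled in the settings.
  std::set<std::string> enabled_precisions = {"fp32"};

  // Precisions parsed from the command line, e.g. "-p fp16".
  std::vector<std::string> requested = {"fp16"};

  if (!requested.empty()) {
    // Drop the implicit fp32 default so only explicit precisions remain.
    enabled_precisions.clear();
    enabled_precisions.insert(requested.begin(), requested.end());
  }

  // With "-p fp16" this prints only fp16, matching the expected
  // "Enabled Precisions: Float16" in the engine settings log.
  for (const auto& p : enabled_precisions) {
    std::cout << "Enabled precision: " << p << "\n";
  }
  return 0;
}
```

With this pattern, a DLA build requested with -p fp16 no longer carries the unsupported fp32 precision.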

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified


meta-cla bot commented Nov 3, 2025

Hi @yeetypete!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

github-actions bot added the component: api [C++] label Nov 3, 2025

meta-cla bot commented Nov 3, 2025

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

meta-cla bot added the cla signed label Nov 3, 2025
github-actions bot requested a review from narendasan November 3, 2025 19:57