Today, if a symint specialization happens inside a tensor subclass's `__torch_dispatch__`, we never actually report that frame in the call stack. Instead, we see only the user call stack, which can be confusing. Repro: run tlparse on the following, https://gist.github.com/zou3519/476a049ffd8070973dd31a4eba9f2ca7 ; the resulting tlparse output attributes the specialization to the user's `torch.cos` call.
Every time I see this, I think the stack trace is wrong: why would `torch.cos` cause a specialization?? Then I remember that tensor subclasses are a thing. I wish the output told me which tensor subclass is involved (TwoTensor in this case!), or the line of code inside the tensor subclass that performs the specialization.
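For context, here is a minimal sketch of the pattern involved: a TwoTensor-style wrapper subclass (the class name `TwoTensorLike` and its layout are illustrative, not the actual `TwoTensor` from `torch.testing._internal`) whose `__torch_dispatch__` reads a size as a plain Python value. Under `torch.compile` with dynamic shapes, that read inside the subclass is the real specialization site, even though the user-visible frame is just `torch.cos`:

```python
import torch

class TwoTensorLike(torch.Tensor):
    """Illustrative wrapper subclass; not the real TwoTensor."""

    @staticmethod
    def __new__(cls, elem):
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), dtype=elem.dtype, device=elem.device
        )

    def __init__(self, elem):
        self.elem = elem

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Branching on a concrete size here is the kind of thing that,
        # under torch.compile with dynamic shapes, forces a symint
        # specialization -- the site this issue wants surfaced in tlparse.
        if args and isinstance(args[0], TwoTensorLike):
            _ = args[0].elem.shape[0] % 2 == 0

        inner = [a.elem if isinstance(a, TwoTensorLike) else a for a in args]
        out = func(*inner, **kwargs)
        return TwoTensorLike(out) if isinstance(out, torch.Tensor) else out

x = torch.randn(4)
t = TwoTensorLike(x)
out = torch.cos(t)  # from the user's side, this is just torch.cos
```

When tlparse reports the specialization, only the `torch.cos(t)` line shows up; the `shape[0]` access inside `__torch_dispatch__` does not.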
Not sure if this is possible, but it would really help with debugging vLLM (where tensor subclasses are now abundant).
cc @chauhang @penguinwu @ezyang @bobrenjc93