-
FYI @mergennachin
-
Thank you @awgr for the suggestion! As you mentioned, there are multiple options for where we draw the abstraction levels, in particular defining what's considered the input source and what's considered the compiled source. One immediate thing that would be quite valuable is a "PTE file explorer". For example, the input source would be the exported PyTorch program and the output artifact would be the generated .pte file; the explorer would inspect that .pte file, which can help us figure out whether it was lowered correctly (see the sketch below).
cc @iseeyuan, @dbort, @JacobSzwejbka, @tarun292 - runtime folks
cc @avikchaudhuri, @gmagogsfm, @zhxchen17 - pytorch compiler experts
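For concreteness, a minimal sketch of such an input-source/output-artifact pair, assuming the torch.export and executorch.exir flow from the getting-started docs (the module and file name below are just placeholders):

```python
import torch
from torch.export import export
from executorch.exir import to_edge


# Input source: a tiny module we want to inspect after lowering.
class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y


# Export to an ATen-level graph, lower to the Edge dialect, then to ExecuTorch.
exported = export(Add(), (torch.ones(2), torch.ones(2)))
program = to_edge(exported).to_executorch()

# Output artifact: the .pte file the proposed explorer would open.
with open("add.pte", "wb") as f:
    f.write(program.buffer)
```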
-
🚀 The feature, motivation and pitch
https://godbolt.org/ provides a very simple interface for producing a clear representation of output instructions from input source code.
Generally, it's a very convenient playground for writing and exploring the relationship between code written in a high-level language and the compiled representation of that code.
It has implemented many features for viewing these representations, which add value for users. It's free to use and sponsor-supported, so it has no ads or other distractions.
It should be relatively straightforward to collaborate with the project and add support for a PyTorch function on the left and a variety of different compiled outputs via the compiler dropdown selector on the right.
One problem with this approach is that the compiled function may not reflect what the function will be compiled to inside a larger model, since adjacent code may change, but this is completely normal for the compilers this tool already exposes.
Many things could be added here: ATen, ONNX, PyTorch... or it could even be exposed as a separate app hosted by Meta. I'm not picky.
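To make the idea concrete, here is roughly what one of those right-hand panes could display today, using a minimal sketch built on torch.export (the function below is just an example):

```python
import torch
from torch.export import export


class MulAdd(torch.nn.Module):
    def forward(self, x, y):
        return x * y + y


# Printing the exported program shows its ATen-level graph: one candidate
# "compiled representation" a Compiler-Explorer-style pane could render.
ep = export(MulAdd(), (torch.randn(3), torch.randn(3)))
print(ep)
```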
Alternatives
An alternative is for ExecuTorch to provide a tool like this in the repository: effectively a script that consumes a specific Python file and compiles a single function in that file (or a file that consists of a single function) to a specific backend passed as a parameter.
Also, it's already possible to produce any of the intermediate outputs using ExecuTorch, but it requires quite a bit of work compared to filling in a function on godbolt.org.
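For concreteness, a rough sketch of what such a script could look like; the CLI shape, the example_inputs() convention, and the exact executorch.exir calls are assumptions rather than an existing tool, and backend delegation is omitted:

```python
#!/usr/bin/env python3
"""Sketch: export one module from a user file and dump the intermediate IRs."""
import argparse
import importlib.util

from torch.export import export
from executorch.exir import to_edge


def main():
    parser = argparse.ArgumentParser(
        description="Lower one module to ExecuTorch and print the stages"
    )
    parser.add_argument("file", help="Python file defining the module")
    parser.add_argument("--name", default="Model", help="nn.Module class to compile")
    parser.add_argument("--out", default="out.pte", help="where to write the .pte artifact")
    args = parser.parse_args()

    # Import the user's file and instantiate the requested module.
    spec = importlib.util.spec_from_file_location("user_module", args.file)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    model = getattr(mod, args.name)()
    example_inputs = mod.example_inputs()  # assumed convention: the file provides example inputs

    ep = export(model, example_inputs)
    print("== ATen-level exported program ==")
    print(ep)

    edge = to_edge(ep)
    print("== Edge dialect ==")
    print(edge.exported_program())

    et = edge.to_executorch()
    with open(args.out, "wb") as f:
        f.write(et.buffer)
    print("wrote", args.out)


if __name__ == "__main__":
    main()
```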
Additional context
Having a low barrier to viewing compiled instructions is extremely valuable for ideating, prototyping, debugging, and reproducing defects.
RFC (Optional)
Explanation of how godbolt works: https://xania.org/201609/how-compiler-explorer-runs-on-amazon
Github: https://github.com/compiler-explorer/compiler-explorer
Discord: https://discord.gg/B5WacA7
Configuring Compiler Explorer is achieved via configuration files in the etc/config directory. Values are key=value pairs. Options in a {type}.local.properties file (where {type} is c++ or similar) override anything in the {type}.defaults.properties file. There is a .gitignore file that ignores .local. files, so these won't be checked into git and you won't find yourself fighting with updated versions when you git pull. For more information see "Adding a compiler" in the Compiler Explorer docs.
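As a rough illustration of that key=value format, a hypothetical entry registering an ExecuTorch "compiler" could look like the following; the ids, paths, and wrapper script are invented, and only the general compilers / group.* / compiler.* key shapes follow the pattern described in the Compiler Explorer docs:

```
# etc/config/python.local.properties (hypothetical)
compilers=&executorch
group.executorch.compilers=et-main
compiler.et-main.name=ExecuTorch (main)
compiler.et-main.exe=/opt/executorch/bin/export_and_dump.py
```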
Languages can be requested here: https://github.com/compiler-explorer/compiler-explorer/issues/new?assignees=&labels=request%2Cnew-language&projects=&template=language_request.yml&title=%5BLANGUAGE+REQUEST%5D%3A+
Compilers can be requested here: https://github.com/compiler-explorer/compiler-explorer/issues/new?assignees=&labels=request%2Cnew-compilers&projects=&template=compiler_request.yml&title=%5BCOMPILER+REQUEST%5D%3A+
I suspect that requesting, and then collaborating to add, PyTorch + ExecuTorch + the associated IRs will lead to the best result.