A strategy discussed on Discord:
The easiest way to benefit from parallelism is to use function passes. MLIR will then process all functions in parallel. This does not work if your pass makes module-level changes such as deleting unreachable functions, but it is perfect if the pass only does function-local rewrites. One minute of compilation time sounds quite slow, but I guess it depends a lot on what transformations you are actually running.
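For reference, a minimal sketch of what such a function-local pass might look like. The pass name `LocalRewritePass` and its rewrite logic are placeholders, and the exact boilerplate (TypeID macro, registration) varies between MLIR revisions:

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Support/TypeID.h"

namespace {
// A pass scheduled on func::FuncOp: the pass manager is free to run it on
// every function in the module in parallel, provided it only touches IR
// inside the function it was given.
struct LocalRewritePass
    : mlir::PassWrapper<LocalRewritePass,
                        mlir::OperationPass<mlir::func::FuncOp>> {
  MLIR_DEFINE_EXPLICIT_INTERNAL_INLINE_TYPE_ID(LocalRewritePass)

  void runOnOperation() override {
    mlir::func::FuncOp fn = getOperation();
    // Function-local rewrites only: never touch sibling functions or the
    // parent module here, or running in parallel becomes unsafe.
    fn.walk([&](mlir::Operation *op) {
      // ... match and rewrite ops inside `fn` ...
    });
  }
};
} // namespace
```

Scheduling it nested under the module, e.g. with `pm.addNestedPass<mlir::func::FuncOp>(std::make_unique<LocalRewritePass>())`, is what lets the pass manager run it across functions in parallel.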
Is it possible to create the operations for separate functions in parallel? If so, I could create a work queue and have threads build each function, leveraging all available cores. Currently I do this on a single thread.
I guess in theory you could first create the functions sequentially and then fill the function bodies in parallel, yes.
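A rough two-phase sketch of that idea, assuming a recent MLIR: the `FuncSpec` struct and the body-emission logic are placeholders for whatever drives your codegen, and `mlir::parallelForEach` from `mlir/IR/Threading.h` is assumed to be available in your revision (it respects the context's multithreading setting):

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/IR/Threading.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

#include <string>

// Placeholder for whatever front-end data drives codegen for one function.
struct FuncSpec {
  std::string name;
  // ... signature, AST node, etc.
};

void buildModule(mlir::MLIRContext &ctx, mlir::ModuleOp module,
                 llvm::ArrayRef<FuncSpec> specs) {
  // Phase 1: create the empty func.func ops sequentially; inserting into
  // the module body is not thread-safe.
  llvm::SmallVector<mlir::func::FuncOp> funcs;
  mlir::OpBuilder builder(&ctx);
  builder.setInsertionPointToEnd(module.getBody());
  for (const FuncSpec &spec : specs)
    funcs.push_back(builder.create<mlir::func::FuncOp>(
        module.getLoc(), spec.name,
        builder.getFunctionType(mlir::TypeRange(), mlir::TypeRange())));

  // Phase 2: fill the bodies in parallel. Each worker gets its own OpBuilder
  // and only touches the region of its own function.
  mlir::parallelForEach(&ctx, funcs, [&](mlir::func::FuncOp fn) {
    mlir::OpBuilder b(&ctx);
    mlir::Block *entry = fn.addEntryBlock();
    b.setInsertionPointToStart(entry);
    // ... emit this function's body here ...
    b.create<mlir::func::ReturnOp>(fn.getLoc());
  });
}
```

The invariant to keep is that each worker creates IR only inside its own function body; creating types and attributes through the shared context should be safe as long as the context has multithreading enabled.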
Filling the function bodies in parallel works! I'll need to test it with a much larger program to see the benefits, though. [...] Generating 100% of the IR takes 2 seconds now. My reachable-function analysis brings that down to a fraction of a second during generation, and of course the passes have less to work on, so they're faster too.