Shrink Crystal::System.print_error's output size #15490

Conversation

HertzDevil
Contributor

@HertzDevil HertzDevil commented Feb 18, 2025

`Crystal::System` is by far the single largest LLVM module when compiling a blank source file, even though all the module does by itself is define `.print_error` and friends. On my machine, with debug information stripped, `C-rystal5858S-ystem.o0.bc` is 213.4 KiB, compared to `_main.o0.bc`'s 101.7 KiB. Disassembling the bytecode back to LLVM IR produces a monstrosity with 33k lines. This PR brings the numbers down to 48.0 KiB and 6.1k lines, while slightly improving performance, using the following tricks:

  • The type of `.as?(T)` is always `T?` and does not perform intersection, so even simple types like `Int32` are upcast into the whole `Int::Primitive?`, leading to a lot of redundant downcasts later. A simple `is_a?` will suffice as a type filter in `read_arg`; see the first sketch after this list. (I believe this is mentioned somewhere but couldn't find it.)
  • In `.to_int_slice`, the `num` variable is cast into an `Int32 | UInt32 | Int64 | UInt64`, and each subsequent line dispatches over that union. The fix here is to split the rest of the body into a separate method and call it with each variant of the union; see the second sketch below. This form of dispatching is akin to rewriting `.to_int_slice` as an instance method on the integers.
  • `.to_int_slice` is now non-yielding, as the inlining added too much bloat. The caller is responsible for preparing a suitably sized buffer.
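
A minimal sketch of the first bullet's point, using an illustrative `describe` method rather than the actual `read_arg` code (the method name, argument, and union restriction here are made up for the example):

```crystal
def describe(arg : Int32 | String)
  # `.as?(T)` is always typed `T?` with no intersection against `arg`'s type,
  # so `x` carries the whole `Int::Primitive` union even though only Int32 can
  # ever match here; this is what forces the redundant downcasts later on.
  if x = arg.as?(Int::Primitive)
    puts typeof(x) # the full Int8 | Int16 | Int32 | ... | UInt128 union
  end

  # `is_a?` filters by intersection, so the compiler narrows `arg` to Int32.
  if arg.is_a?(Int::Primitive)
    puts typeof(arg) # Int32
  end
end

describe(123)
```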

Additionally, this reduces the time for the bytecode generation phase from an average of 0.35s down to 0.26s.
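
And a hedged sketch of the second bullet's splitting trick, with made-up names (`Num`, `count_digits`) rather than the real `.to_int_slice` body: extracting the loop into a helper without a union restriction means the compiler dispatches once at the call site and instantiates the helper per concrete integer type, so the loop itself is monomorphic.

```crystal
alias Num = Int32 | UInt32 | Int64 | UInt64

# Before: `num` stays union-typed inside the method, so every operation in the
# loop is a multi-dispatch over the whole union.
def count_digits_union(num : Num)
  digits = 1
  while num >= 10
    num //= 10
    digits += 1
  end
  digits
end

# After: the body lives in a helper with no type restriction. Calling it with
# a union-typed argument makes the compiler dispatch once here and instantiate
# the helper separately for Int32, UInt32, Int64 and UInt64, much like an
# instance method defined on each integer type.
def count_digits(num : Num)
  count_digits_impl(num)
end

private def count_digits_impl(num)
  digits = 1
  while num >= 10
    num //= 10
    digits += 1
  end
  digits
end

p count_digits(123_456_u64) # => 6
```

The real change additionally makes the helper write into a caller-provided buffer instead of yielding, per the third bullet.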

@ysbaddaden
Contributor

Terrific 👍

I've been reusing the `to_int_slice` calls in `Crystal.trace` (https://github.com/crystal-lang/crystal/blob/master/src/crystal/tracing.cr), so we'll have to fix the calls there too.

Contributor

@ysbaddaden ysbaddaden left a comment

Thank you 🙇

@ysbaddaden ysbaddaden added this to the 1.16.0 milestone Feb 21, 2025
@straight-shoota straight-shoota merged commit 8d5e093 into crystal-lang:master Feb 22, 2025
33 checks passed
@HertzDevil HertzDevil deleted the refactor/crystal-system-print-error branch February 23, 2025 05:10
kojix2 pushed a commit to kojix2/crystal that referenced this pull request Feb 23, 2025