
move expensive layout sanity check to debug assertions #141039


Open · wants to merge 1 commit into master

Conversation


@lqd (Member) commented May 15, 2025

It is hard to fix the slowness in the uninhabitedness computation for very big types, but we can fix the very specific case of it being invoked during the layout sanity checks, as described in #140944.

So this PR moves this uninhabitedness check alongside the other expensive layout sanity checks that are run under `debug_assertions`.
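For readers unfamiliar with the pattern, here is a minimal sketch of what "moving a check under debug assertions" looks like. All names are illustrative stand-ins, not the actual rustc code:

```rust
/// Stand-in for the cheap, always-on layout invariant checks.
fn cheap_layout_checks_pass() -> bool {
    true
}

/// Stand-in for the costly inhabitedness computation that blew up
/// on very large types.
fn expensive_uninhabited_check_passes() -> bool {
    true
}

fn sanity_check_layout() {
    // Cheap invariants are verified unconditionally.
    assert!(cheap_layout_checks_pass());

    // The expensive check only runs when the compiler itself was built
    // with debug assertions. `cfg!(debug_assertions)` is a compile-time
    // constant, so in release builds the whole branch is optimized away.
    if cfg!(debug_assertions) {
        assert!(expensive_uninhabited_check_passes());
    }
}

fn main() {
    sanity_check_layout();
    println!("layout sanity checks passed");
}
```

The effect is that end users running a release-built compiler skip the expensive check entirely, while compiler developers building with debug assertions keep the full safety net.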

It makes building the `lemmy_api_routes` crate's self-profile `layout_of` query go from

```
+--------------------------------------------------------+-----------+-----------------+----------+------------+---------------------------------+
| Item                                                   | Self time | % of total time | Time     | Item count | Incremental result hashing time |
+--------------------------------------------------------+-----------+-----------------+----------+------------+---------------------------------+
| layout_of                                              | 63.02s    | 41.895          | 244.26s  | 123703     | 50.30ms                         |
+--------------------------------------------------------+-----------+-----------------+----------+------------+---------------------------------+
```

on master (2m17s total), to

```
| layout_of                                              | 330.21ms  | 0.372           | 26.90s   | 123703     | 53.19ms                         |
```

with this PR (1m15s total).

(Note that the perf run results below look a bit better than an earlier run I did in another PR. There may be some positive noise there, or post-merge results could differ a bit.)

Since we discussed this today, r? @compiler-errors — and cc @lcnr and @RalfJung.

@rustbot rustbot added the T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. label May 15, 2025
@lqd (Member Author) commented May 15, 2025

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 15, 2025
@bors (Collaborator) commented May 15, 2025

⌛ Trying commit 102cc2f with merge 194541cc222213675a542b521ddb9ddd912b3bcb...

bors added a commit to rust-lang-ci/rust that referenced this pull request May 15, 2025
move expensive layout sanity check to debug assertions

r? ghost
@bors (Collaborator) commented May 15, 2025

☀️ Try build successful - checks-actions
Build commit: 194541c (194541cc222213675a542b521ddb9ddd912b3bcb)


@rust-timer (Collaborator)

Finished benchmarking commit (194541c): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)   | 0.4%  | [0.1%, 1.0%]   | 3     |
| Regressions ❌ (secondary) | -     | -              | 0     |
| Improvements ✅ (primary)  | -0.2% | [-1.1%, -0.1%] | 79    |
| Improvements ✅ (secondary)| -0.4% | [-2.1%, -0.0%] | 99    |
| All ❌✅ (primary)         | -0.2% | [-1.1%, 1.0%]  | 82    |

Max RSS (memory usage)

Results (primary -0.5%, secondary 4.9%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)   | 1.4%  | [1.4%, 1.4%]   | 3     |
| Regressions ❌ (secondary) | 5.3%  | [2.3%, 9.8%]   | 13    |
| Improvements ✅ (primary)  | -2.5% | [-2.7%, -2.3%] | 3     |
| Improvements ✅ (secondary)| -1.2% | [-1.2%, -1.2%] | 1     |
| All ❌✅ (primary)         | -0.5% | [-2.7%, 1.4%]  | 6     |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 773.204s -> 772.578s (-0.08%)
Artifact size: 365.39 MiB -> 365.28 MiB (-0.03%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels May 15, 2025
@lqd lqd marked this pull request as ready for review May 15, 2025 20:30
@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label May 15, 2025
@lqd (Member Author) commented May 15, 2025

This PR may not fix #140944 enough to actually close the issue, since it only removes the worst offender, so I've removed the "Fixes #140944." from the PR description.

@RalfJung (Member)

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 15, 2025
bors added a commit to rust-lang-ci/rust that referenced this pull request May 15, 2025
move expensive layout sanity check to debug assertions

@bors (Collaborator) commented May 15, 2025

⌛ Trying commit 102cc2f with merge e15359f3b7488212f52ac8595bec0a7cd8e490ce...

@RalfJung (Member)

Oh wait that was already done, I just didn't get the notifications. Odd. I got a notification for the PR description but not all of the other comments...

I don't know a way to cancel the try build or perf run, sorry 🤷

@compiler-errors (Member) left a comment

r=me after try build is done

@lqd (Member Author) commented May 15, 2025

> I got a notification for the PR description but not all of the other comments...

Sorry if that was confusing: I did the perf run before filling in the PR description pinging you :)

@bors (Collaborator) commented May 15, 2025

☀️ Try build successful - checks-actions
Build commit: e15359f (e15359f3b7488212f52ac8595bec0a7cd8e490ce)


@rust-timer (Collaborator)

Finished benchmarking commit (e15359f): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)   | 1.0%  | [1.0%, 1.0%]   | 1     |
| Regressions ❌ (secondary) | 0.5%  | [0.5%, 0.5%]   | 1     |
| Improvements ✅ (primary)  | -0.4% | [-1.1%, -0.1%] | 18    |
| Improvements ✅ (secondary)| -0.8% | [-2.0%, -0.4%] | 11    |
| All ❌✅ (primary)         | -0.3% | [-1.1%, 1.0%]  | 19    |

Max RSS (memory usage)

Results (primary -0.5%, secondary -0.0%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

|                            | mean  | range          | count |
|----------------------------|-------|----------------|-------|
| Regressions ❌ (primary)   | 1.4%  | [1.1%, 1.7%]   | 2     |
| Regressions ❌ (secondary) | 1.6%  | [1.6%, 1.6%]   | 2     |
| Improvements ✅ (primary)  | -2.3% | [-2.4%, -2.3%] | 2     |
| Improvements ✅ (secondary)| -3.2% | [-3.2%, -3.2%] | 1     |
| All ❌✅ (primary)         | -0.5% | [-2.4%, 1.7%]  | 4     |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 773.554s -> 773.241s (-0.04%)
Artifact size: 365.48 MiB -> 365.32 MiB (-0.04%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label May 16, 2025
@lqd (Member Author) commented May 17, 2025

The "regression" on nalgebra opt seems present in other perf runs and is likely noise, like some of the "improvements" in the previous run as well. This PR does improve lemmy as seen in the measurements, so we should be good to go.

@bors r=compiler-errors

@bors (Collaborator) commented May 17, 2025

📌 Commit 102cc2f has been approved by compiler-errors

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels May 17, 2025
Labels
perf-regression Performance regression. S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.

6 participants