src: improve TextEncoder encodeInto performance #58080
base: main
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #58080 +/- ##
==========================================
- Coverage 90.17% 90.16% -0.02%
==========================================
Files 630 630
Lines 186473 186495 +22
Branches 36613 36615 +2
==========================================
- Hits 168160 168157 -3
- Misses 11128 11139 +11
- Partials 7185 7199 +14
@anonrig even if the benchmark is not hitting the fast path, the benchmarks currently show a regression. I am surprised that is the case. Is there a way to prevent that for non-optimized code?
I think that's due to unreliable benchmarks; I'll re-run them to see whether the regression persists. Edit 2: new benchmark run: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1712/
Force-pushed from 635769a to 06de6fa (compare)
It seems this still degrades performance. I opened an issue on v8-dev: https://groups.google.com/g/v8-dev/c/x2Fs6do0jDA
These changes seem to regress performance (see both benchmark runs), not improve it. I could imagine that there is overhead involved in gathering the data during the runs.
cc @erikcorry
It makes sense to me that the "fast API" is slower in this benchmark than regular API calls. "Fast API" is mostly a marketing name; a more telling name would be something like "unboxed API", because primitive values get passed unboxed, i.e. not boxed in a `v8::Local`.

If I understood the benchmark correctly, then you even open a `HandleScope`.

Therefore, there is no reason why the fast API would be faster in this example, and there is a reason why regular API calls would be faster, so in total the result is not surprising.
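A minimal sketch of that distinction on the embedder side (function names are illustrative, not code from this PR; the registration calls are the real ones from `v8.h` / `v8-fast-api-calls.h`): a regular callback receives every argument boxed in a handle, while a fast-API variant registered through `v8::CFunction` receives primitives unboxed.

```cpp
#include "v8.h"
#include "v8-fast-api-calls.h"

// Regular API call: the argument arrives boxed as a v8::Local<v8::Value> and
// has to be unwrapped through the current context.
static void SlowAdd(const v8::FunctionCallbackInfo<v8::Value>& info) {
  v8::Isolate* isolate = info.GetIsolate();
  uint32_t x =
      info[0]->Uint32Value(isolate->GetCurrentContext()).FromMaybe(0);
  info.GetReturnValue().Set(x + 1);
}

// Fast API variant: the primitive is passed unboxed, so no handle is created
// for it and no extra HandleScope is needed as long as the callback itself
// stays away from the handle machinery.
static uint32_t FastAdd(v8::Local<v8::Object> receiver, uint32_t x) {
  return x + 1;
}

// Register both: V8 calls FastAdd only from optimized code, SlowAdd otherwise.
static void Register(v8::Isolate* isolate,
                     v8::Local<v8::ObjectTemplate> target) {
  static const v8::CFunction kFastAdd = v8::CFunction::Make(FastAdd);
  target->Set(isolate, "add",
              v8::FunctionTemplate::New(
                  isolate, SlowAdd, v8::Local<v8::Value>(),
                  v8::Local<v8::Signature>(), /*length=*/1,
                  v8::ConstructorBehavior::kThrow,
                  v8::SideEffectType::kHasSideEffect, &kFastAdd));
}
```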
I wonder: what is the benefit of adding Local as a parameter? Only for DX?
@gahaas Where is it getting called automatically? Would you mind sharing the code?
There are API functions that have parameters of both primitive and reference type, e.g. WebGPU APIs that take a `Local` (reference-type) argument alongside primitive arguments. Opening the `HandleScope` for those happens automatically.
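As a sketch of that point (hypothetical function, and it assumes a V8 version where the fast API accepts `v8::Local` parameters, which is what the question above is about): a fast-API callback can mix a reference-type argument, which still arrives as a handle, with unboxed primitives.

```cpp
#include "v8.h"
#include "v8-fast-api-calls.h"

// Hypothetical fast-API callback mixing a reference-type parameter with
// unboxed primitives. The object still arrives as a v8::Local, which is why
// a HandleScope comes into play for this kind of signature (see the
// discussion above); the offsets are passed unboxed.
static void FastWriteRegion(v8::Local<v8::Object> receiver,
                            v8::Local<v8::Value> buffer,
                            uint32_t offset,
                            uint32_t length) {
  // Inspect `buffer` (e.g. check that it is an ArrayBuffer view) and write
  // `length` bytes starting at `offset`.
}
```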
Improves the performance of the TextEncoder.encodeInto method.
Benchmark CI: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1712/
It seems none of the benchmarks are triggering the fast path.
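For context on what "the fast path" means here, a hedged, hypothetical sketch of the shape a fast-API encodeInto binding could take (names and parameters are illustrative, not the code added by this PR). Note that fast-API calls are only emitted by the optimizing compiler, so an unoptimized or insufficiently warmed-up benchmark keeps going through the regular callback.

```cpp
#include "v8.h"
#include "v8-fast-api-calls.h"

// Hypothetical fast-API encodeInto shape: one-byte string data is handed over
// unboxed via FastOneByteString, while the destination Uint8Array still
// arrives as a handle.
static uint32_t FastEncodeIntoSketch(v8::Local<v8::Value> receiver,
                                     const v8::FastOneByteString& source,
                                     v8::Local<v8::Value> dest) {
  // Copy source.data (source.length bytes of Latin-1 data) into dest's
  // backing store and return the number of bytes written; input that is not
  // one-byte would have to go through the regular (slow) path instead.
  return 0;
}
```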