
Conversation

@dylan-conway
Member

No description provided.

@coderabbitai

coderabbitai bot commented Jan 30, 2026

Walkthrough

Refactored the JIT probe trampoline mechanism from a conditional macro-based approach with AVX variants to a unified inline assembly implementation. Removed the ctiMasmProbeTrampolineAVX declaration and consolidated the probe setup, stack handling, and state restoration logic into a single trampoline path.

Changes

Cohort / File(s) Summary
Probe Trampoline Refactoring
Source/JavaScriptCore/assembler/MacroAssemblerX86_64.cpp
Removed AVX-specific trampoline declaration (ctiMasmProbeTrampolineAVX) and replaced conditional macro-based trampoline selection with a single inline asm sequence. Consolidated stack frame management, parameter saving, probe invocation, and state restoration into unified implementation. Net change: +191/-265 lines.
🚥 Pre-merge checks | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)

Description check — ⚠️ Warning
Explanation: No pull request description was provided; the template requires a bug reference, an explanation, and file change details, all of which are missing.
Resolution: Add a description following the WebKit template with a Bugzilla bug reference, an explanation of the changes, reviewer attribution, and a list of modified files with function/class details.

Title check — ❓ Inconclusive
Explanation: The title "Build for testing baseline builds" is vague and generic; it does not convey what actually changed in the codebase.
Resolution: Replace it with a specific title that describes the main technical change, such as "Remove AVX-specific trampoline and consolidate probe invocation to a single path" or "Refactor the MacroAssemblerX86_64 probe trampoline implementation".

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@Source/JavaScriptCore/assembler/MacroAssemblerX86_64.cpp`:
- Around line 220-235: The trampoline currently uses AVX-only vmovaps
instructions to save/restore XMM registers (the sequence writing to
PROBE_CPU_XMM0...PROBE_CPU_XMM15_VECTOR_OFFSET), which will fault on non-AVX
CPUs; either add a runtime AVX gate back into the trampoline or change the
save/restore sequence to SSE instructions (use movaps instead of vmovaps for the
lines that reference PROBE_CPU_XMM*_VECTOR_OFFSET) so the trampoline is safe on
all x86_64 targets; update both the save block (the shown vmovaps sequence) and
the corresponding restore block (the similar sequence at lines ~297-312)
consistently.

Comment on lines +220 to 235
"vmovaps %xmm0, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm1, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM1_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm2, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM2_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm3, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM3_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm4, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM4_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm5, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM5_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm6, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM6_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm7, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM7_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm8, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM8_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm9, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM9_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm10, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM10_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm11, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM11_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm12, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM12_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm13, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM13_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm14, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM14_VECTOR_OFFSET) "(%rsp)" "\n"
"vmovaps %xmm15, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM15_VECTOR_OFFSET) "(%rsp)" "\n"

⚠️ Potential issue | 🟠 Major

Avoid unconditional AVX-only instructions in the trampoline.

vmovaps requires AVX. With the AVX/non‑AVX dispatch removed, running probes on non‑AVX x86_64 will trap. Either reintroduce the runtime AVX gating or switch these to SSE movaps so the trampoline is safe on all x86_64 targets.
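If the gating route is taken, here is a minimal sketch of what a runtime AVX check involves (an illustration under assumptions, not the dispatch this PR removed; the selection plumbing is hypothetical). A correct gate must confirm both the CPUID AVX bit and, via OSXSAVE/XGETBV, that the OS actually saves YMM state:

    #include <cpuid.h>
    #include <cstdint>

    static bool cpuAndOSSupportAVX()
    {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return false;
        bool osxsave = ecx & (1u << 27); // OS has enabled XSAVE/XGETBV
        bool avx = ecx & (1u << 28);     // CPU implements AVX
        if (!osxsave || !avx)
            return false; // also avoids #UD from xgetbv below
        uint32_t lo, hi;
        __asm__ volatile("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
        return (lo & 0x6) == 0x6; // XCR0 bits 1 (SSE) and 2 (AVX) both OS-enabled
    }

    extern "C" void ctiMasmProbeTrampoline();    // SSE variant (movaps)
    extern "C" void ctiMasmProbeTrampolineAVX(); // AVX variant (vmovaps), as before this PR

    // Resolved once at static-init time; probe setup would jump through this.
    static void (*const probeTrampoline)() = cpuAndOSSupportAVX()
        ? ctiMasmProbeTrampolineAVX
        : ctiMasmProbeTrampoline;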

🛠️ Suggested SSE fallback (use movaps for all XMM saves/restores)
-    "vmovaps %xmm0,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm1,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM1_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm2,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM2_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm3,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM3_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm4,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM4_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm5,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM5_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm6,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM6_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm7,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM7_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm8,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM8_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm9,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM9_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm10, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM10_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm11, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM11_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm12, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM12_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm13, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM13_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm14, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM14_VECTOR_OFFSET) "(%rsp)" "\n"
-    "vmovaps %xmm15, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM15_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm0,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm1,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM1_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm2,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM2_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm3,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM3_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm4,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM4_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm5,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM5_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm6,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM6_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm7,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM7_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm8,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM8_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm9,  " STRINGIZE_VALUE_OF(PROBE_CPU_XMM9_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm10, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM10_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm11, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM11_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm12, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM12_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm13, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM13_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm14, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM14_VECTOR_OFFSET) "(%rsp)" "\n"
+    "movaps %xmm15, " STRINGIZE_VALUE_OF(PROBE_CPU_XMM15_VECTOR_OFFSET) "(%rsp)" "\n"
...
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rbp), %xmm0" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM1_VECTOR_OFFSET) "(%rbp), %xmm1" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM2_VECTOR_OFFSET) "(%rbp), %xmm2" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM3_VECTOR_OFFSET) "(%rbp), %xmm3" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM4_VECTOR_OFFSET) "(%rbp), %xmm4" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM5_VECTOR_OFFSET) "(%rbp), %xmm5" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM6_VECTOR_OFFSET) "(%rbp), %xmm6" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM7_VECTOR_OFFSET) "(%rbp), %xmm7" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM8_VECTOR_OFFSET) "(%rbp), %xmm8" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM9_VECTOR_OFFSET) "(%rbp), %xmm9" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM10_VECTOR_OFFSET) "(%rbp), %xmm10" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM11_VECTOR_OFFSET) "(%rbp), %xmm11" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM12_VECTOR_OFFSET) "(%rbp), %xmm12" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM13_VECTOR_OFFSET) "(%rbp), %xmm13" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM14_VECTOR_OFFSET) "(%rbp), %xmm14" "\n"
-    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM15_VECTOR_OFFSET) "(%rbp), %xmm15" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rbp), %xmm0" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM1_VECTOR_OFFSET) "(%rbp), %xmm1" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM2_VECTOR_OFFSET) "(%rbp), %xmm2" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM3_VECTOR_OFFSET) "(%rbp), %xmm3" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM4_VECTOR_OFFSET) "(%rbp), %xmm4" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM5_VECTOR_OFFSET) "(%rbp), %xmm5" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM6_VECTOR_OFFSET) "(%rbp), %xmm6" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM7_VECTOR_OFFSET) "(%rbp), %xmm7" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM8_VECTOR_OFFSET) "(%rbp), %xmm8" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM9_VECTOR_OFFSET) "(%rbp), %xmm9" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM10_VECTOR_OFFSET) "(%rbp), %xmm10" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM11_VECTOR_OFFSET) "(%rbp), %xmm11" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM12_VECTOR_OFFSET) "(%rbp), %xmm12" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM13_VECTOR_OFFSET) "(%rbp), %xmm13" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM14_VECTOR_OFFSET) "(%rbp), %xmm14" "\n"
+    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM15_VECTOR_OFFSET) "(%rbp), %xmm15" "\n"

Also applies to: 297-312
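
One behavioral difference to keep in mind if the SSE fallback is adopted: the two encodings are not bit-for-bit equivalent in effect. A VEX-encoded vmovaps load into an XMM register zeroes bits 255:128 of the corresponding YMM register, while a legacy movaps load leaves them untouched; mixing legacy SSE with dirty upper YMM state is also a documented AVX-SSE transition-penalty hazard on Intel cores. In the trampoline's own string style:

    // Legacy SSE restore: writes xmm0, leaves ymm0[255:128] as-is.
    "movaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rbp), %xmm0" "\n"
    // VEX-encoded restore: writes xmm0 and zeroes ymm0[255:128].
    "vmovaps " STRINGIZE_VALUE_OF(PROBE_CPU_XMM0_VECTOR_OFFSET) "(%rbp), %xmm0" "\n"

Either behavior may be acceptable for a probe that only snapshots the 128-bit lanes, but the choice should be deliberate.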


@github-actions

Preview Builds

Commit: 8fb3f004
Release: autobuild-preview-pr-151-8fb3f004
Date: 2026-01-30 03:17:40 UTC

