fix libpas assertion on windows #147
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Refactors the build-preview GitHub Actions workflow to run on pull_request events and consolidate PR comments into a single markdown table; adjusts Windows build flags for allocator selection; adds a mimalloc include and replaces a single DiscardVirtualMemory call with a VirtualQuery-driven chunked purge/commit loop.

Changes
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Warning: Review ran into problems.
🔥 Problems
Git: Failed to clone repository.
test
This may have been caused by a newer build being triggered for the same PR.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @.github/workflows/build-preview.yml:
- Around line 106-111: The call to github.rest.issues.listComments only
retrieves a single page (up to 100) so the existing marker may be missed on
large PRs; update the logic that uses github.rest.issues.listComments to
paginate all comment pages (for example using github.paginate or an explicit
loop) to collect every comment for the PR before searching for the marker.
Locate usages of github.rest.issues.listComments and replace them with a
paginated fetch (e.g., github.paginate(github.rest.issues.listComments, { owner,
repo, issue_number: prNumber, per_page: 100 })) or equivalent, then continue
existing marker search/update logic on the complete comment set.
- Around line 105-113: The comment lookup currently uses comments.find(c =>
c.body.includes(marker)) which can match user comments; change the predicate to
also verify the comment author is the bot (e.g., c.user?.login ===
'github-actions[bot]') so the find only selects bot-authored comments that
include the marker; update the existing variable assignment (existing) to use
this combined condition to avoid overwriting user content.
In `@Source/bmalloc/bmalloc/VMAllocate.h`:
- Around line 390-404: The loop advances currentPtr and totalSeen by
memInfo.RegionSize which is wrong when chunkSize < memInfo.RegionSize; update
the advancement to use chunkSize (advance currentPtr by chunkSize and add
chunkSize to totalSeen) so the loop covers exactly vmSize bytes and doesn't skip
or overshoot, and also check the return value of VirtualQuery (it returns 0 on
failure) and assert/handle failure before using memInfo; apply these changes
around the variables totalSeen, currentPtr, vmSize, memInfo, chunkSize and the
VirtualQuery/VirtualFree/VirtualAlloc sequence.
In `@Source/JavaScriptCore/heap/StructureAlignedMemoryAllocator.cpp`:
- Around line 143-145: The code computes offset = max(MarkedBlock::blockSize,
MI_ARENA_SLICE_SIZE) then subtracts it from g_jscConfig.sizeOfStructureHeap
which can underflow if the configured structure heap is smaller than offset; add
a guard before doing the subtraction: compute available =
g_jscConfig.sizeOfStructureHeap and if available <= offset either assert/log and
return/skip the mi_manage_os_memory_ex call or clamp offset to available (e.g.,
set size = 0 and avoid calling mi_manage_os_memory_ex), ensuring
g_jscConfig.startOfStructureHeap and mi_manage_os_memory_ex are only used with a
valid non-underflowing range; reference offset, MarkedBlock::blockSize,
MI_ARENA_SLICE_SIZE, g_jscConfig.sizeOfStructureHeap,
g_jscConfig.startOfStructureHeap and mi_manage_os_memory_ex.
```diff
  // Find existing comment
  const { data: comments } = await github.rest.issues.listComments({
    owner: context.repo.owner,
    repo: context.repo.repo,
-   issue_number: ${{ needs.trigger.outputs.pr_number }},
-   body: body
+   issue_number: prNumber,
+   per_page: 100
  });
  const existing = comments.find(c => c.body.includes(marker));
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n .github/workflows/build-preview.yml | head -130 | tail -40
```

Repository: oven-sh/WebKit
Length of output: 172

🏁 Script executed:

```shell
grep -n "marker" .github/workflows/build-preview.yml
```

Repository: oven-sh/WebKit
Length of output: 172

🏁 Script executed:

```shell
sed -n '95,120p' .github/workflows/build-preview.yml
```

Repository: oven-sh/WebKit
Length of output: 172
🌐 Web query:
GitHub Actions API listComments response user field structure github-actions[bot] username
💡 Result:
For listComments (Issue/PR comments) in the GitHub REST API, each comment includes a user field that is a “simple user” object (e.g., login, id, avatar_url, url, html_url, type, site_admin, etc.). [1]
When the author is the Actions bot, that same user object looks like a normal user object, but with bot identity values:
- login: "github-actions[bot]" [2]
- type: "Bot" [2]
- id: 41898282 [2][3]
- html_url points to the GitHub App page: "https://github.com/apps/github-actions" [2][3]
- url: the API user endpoint URL-encodes [ and ]: "https://api.github.com/users/github-actions%5Bbot%5D" [2][3]
So in your listComments response, check comment.user.login === "github-actions[bot]" (and/or comment.user.type === "Bot") to detect comments made by GitHub Actions. [1][2]
Sources: [1] GitHub Docs – REST “List issue comments …” example response schema includes user object fields; [2] /user/41898282 output showing github-actions[bot] fields; [3] GitHub Marketplace action showing canonical bot ID/URLs.
Filter comments by bot author to prevent overwriting user content.
comments.find(c => c.body.includes(marker)) can match any comment containing the marker string, not just bot comments. This risks overwriting user content if the marker appears in user comments. Filter by the bot author using c.user?.login === 'github-actions[bot]'.
🔧 Proposed fix

```diff
- const existing = comments.find(c => c.body.includes(marker));
+ const existing = comments.find(
+   c => c.body?.includes(marker) && c.user?.login === 'github-actions[bot]'
+ );
```
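The combined marker-plus-author predicate can be exercised in isolation. The snippet below is a minimal sketch; the marker string, logins, and comment bodies are invented for illustration and are not taken from the workflow:

```javascript
// Hypothetical comment objects; only the body and user fields matter here.
const marker = '<!-- preview-build-comment -->';
const comments = [
  // A user quoting the marker must not match.
  { body: `Why does ${marker} appear in the bot comment?`, user: { login: 'octocat', type: 'User' } },
  // A bot comment without the marker must not match either.
  { body: 'Unrelated bot output', user: { login: 'github-actions[bot]', type: 'Bot' } },
  // The bot comment carrying the marker is the one to update.
  { body: `${marker}\n| build | status |`, user: { login: 'github-actions[bot]', type: 'Bot' } },
];

// Same predicate shape as the proposed fix: marker AND bot author.
const existing = comments.find(
  (c) => c.body?.includes(marker) && c.user?.login === 'github-actions[bot]'
);

console.log(comments.indexOf(existing)); // → 2
```

Only the third comment satisfies both conditions, so a user comment that happens to quote the marker is never overwritten.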
```diff
  const { data: comments } = await github.rest.issues.listComments({
    owner: context.repo.owner,
    repo: context.repo.repo,
-   issue_number: ${{ needs.trigger.outputs.pr_number }},
-   body: body
+   issue_number: prNumber,
+   per_page: 100
  });
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat .github/workflows/build-preview.yml | head -150 | tail -60
```

Repository: oven-sh/WebKit
Length of output: 172

🏁 Script executed:

```shell
rg "listComments|paginate" .github/workflows/build-preview.yml -A 3 -B 3
```

Repository: oven-sh/WebKit
Length of output: 172
🌐 Web query:
GitHub REST API issues.listComments pagination per_page default behavior
💡 Result:
For issues.listComments (REST: GET /repos/{owner}/{repo}/issues/{issue_number}/comments), GitHub paginates results:

- per_page default: 30 (maximum 100) if you don't pass per_page. [1]
- page default: 1 if you don't pass page. [1]
- To fetch subsequent pages, read the response Link header (it includes rel="next", rel="last", etc. when more pages exist). [2]
- If you set per_page, GitHub includes it in the Link header URLs for the next/last pages automatically. [2]
Sources: [1] GitHub REST “List issue comments” params (defaults/max) (docs.github.com), [2] GitHub REST pagination guide (docs.github.com)
Citations:
- 1: https://docs.github.com/en/enterprise-server%403.19/rest/issues/comments?utm_source=openai
- 2: https://docs.github.com/v3/guides/traversing-with-pagination?utm_source=openai
🌐 Web query:
actions/github-script github.paginate() usage documentation
💡 Result:
In actions/github-script, github is a pre-authenticated Octokit client and includes github.paginate() (same API as octokit.paginate()). You use it to automatically fetch all pages from a paginated REST endpoint and return one combined array. [1][2]

Common pattern in github-script: build endpoint options, then paginate

```yaml
- uses: actions/github-script@v8
  with:
    script: |
      const opts = github.rest.issues.listForRepo.endpoint.merge({
        owner: context.repo.owner,
        repo: context.repo.repo,
        state: "all",
        per_page: 100,
      })
      const issues = await github.paginate(opts)
      core.info(`Fetched ${issues.length} issues`)
```

This is the pattern shown in the actions/github-script README. [1]

Alternative: paginate by route + params

```javascript
const issues = await github.paginate("GET /repos/{owner}/{repo}/issues", {
  owner: context.repo.owner,
  repo: context.repo.repo,
  per_page: 100,
})
```

paginate() accepts the same options as request(). [2][3]

Optional: map each page (reduce memory / transform results)

```javascript
const titles = await github.paginate(
  "GET /repos/{owner}/{repo}/issues",
  { owner, repo, per_page: 100 },
  (response) => response.data.map(i => i.title)
)
```

The optional map function runs per page; response.data is guaranteed to be an array for pagination endpoints. [2]

References: actions/github-script README (paginate example), Octokit REST.js pagination docs, GitHub Docs on Octokit pagination. [1][2][3]
Paginate PR comments to reliably find the existing marker.
listComments only fetches the first page (100 comments). On PRs with more than 100 comments, the marker may be beyond page 1, causing duplicate comments instead of updates.
🔧 Proposed fix

```diff
- const { data: comments } = await github.rest.issues.listComments({
-   owner: context.repo.owner,
-   repo: context.repo.repo,
-   issue_number: prNumber,
-   per_page: 100
- });
+ const comments = await github.paginate(github.rest.issues.listComments, {
+   owner: context.repo.owner,
+   repo: context.repo.repo,
+   issue_number: prNumber,
+   per_page: 100
+ });
```
```cpp
size_t totalSeen = 0;
void* currentPtr = p;
while (totalSeen < vmSize) {
    MEMORY_BASIC_INFORMATION memInfo;
    VirtualQuery(currentPtr, &memInfo, sizeof(memInfo));
    RELEASE_BASSERT(memInfo.RegionSize > 0);
    size_t chunkSize = std::min(memInfo.RegionSize, vmSize - totalSeen);
    BOOL freeResult = VirtualFree(currentPtr, chunkSize, MEM_DECOMMIT);
    RELEASE_BASSERT(freeResult);
    void* allocResult = VirtualAlloc(currentPtr, chunkSize, MEM_COMMIT, PAGE_READWRITE);
    RELEASE_BASSERT(allocResult == currentPtr);
    currentPtr = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(currentPtr) + memInfo.RegionSize);
    totalSeen += memInfo.RegionSize;
}
```
Bug: Pointer and counter advancement should use chunkSize, not memInfo.RegionSize.

When a region extends beyond the requested range, chunkSize is smaller than memInfo.RegionSize. The code correctly processes only chunkSize bytes, but then advances both currentPtr and totalSeen by memInfo.RegionSize. This causes:

- Skipping memory within the target range if the loop continues
- Potentially exiting early because totalSeen jumps past vmSize
- Memory within the requested range not being zeroed/purged

Additionally, the VirtualQuery return value should be checked: it returns 0 on failure.
🐛 Proposed fix

```diff
 size_t totalSeen = 0;
 void* currentPtr = p;
 while (totalSeen < vmSize) {
     MEMORY_BASIC_INFORMATION memInfo;
-    VirtualQuery(currentPtr, &memInfo, sizeof(memInfo));
+    SIZE_T queryResult = VirtualQuery(currentPtr, &memInfo, sizeof(memInfo));
+    RELEASE_BASSERT(queryResult != 0);
     RELEASE_BASSERT(memInfo.RegionSize > 0);
     size_t chunkSize = std::min(memInfo.RegionSize, vmSize - totalSeen);
     BOOL freeResult = VirtualFree(currentPtr, chunkSize, MEM_DECOMMIT);
     RELEASE_BASSERT(freeResult);
     void* allocResult = VirtualAlloc(currentPtr, chunkSize, MEM_COMMIT, PAGE_READWRITE);
     RELEASE_BASSERT(allocResult == currentPtr);
-    currentPtr = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(currentPtr) + memInfo.RegionSize);
-    totalSeen += memInfo.RegionSize;
+    currentPtr = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(currentPtr) + chunkSize);
+    totalSeen += chunkSize;
 }
```
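The arithmetic difference between the two advancement strategies is easy to see outside the Windows API. The sketch below is a hedged, platform-free model: it pretends VirtualQuery always reports fixed-size regions (the region size and range below are invented for illustration), so the final region can extend past the requested range, which is exactly the case the review flags:

```javascript
// Model of the purge loop: advance either by chunkSize (the fix) or by
// the full region size (the bug) and report how far the loop thinks it got.
function coverage(vmSize, regionSize, advanceByChunk) {
  let totalSeen = 0;
  let purged = 0; // bytes actually decommitted/recommitted
  while (totalSeen < vmSize) {
    const chunkSize = Math.min(regionSize, vmSize - totalSeen);
    purged += chunkSize;
    const step = advanceByChunk ? chunkSize : regionSize; // fix vs bug
    totalSeen += step;
  }
  return { totalSeen, purged };
}

const k = 1024;
// Fixed version: totalSeen lands exactly on vmSize.
console.log(coverage(160 * k, 64 * k, true));  // { totalSeen: 163840, purged: 163840 }
// Buggy version: totalSeen overshoots vmSize past the last chunk.
console.log(coverage(160 * k, 64 * k, false)); // { totalSeen: 196608, purged: 163840 }
```

With a 160 KiB range over 64 KiB regions, the last chunk is only 32 KiB; advancing by the full region size makes the counter claim 192 KiB seen, which is the overshoot the proposed fix eliminates.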
```cpp
constexpr size_t offset = std::max(MarkedBlock::blockSize, static_cast<size_t>(MI_ARENA_SLICE_SIZE));
void* memory = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(g_jscConfig.startOfStructureHeap) + offset);
size_t size = g_jscConfig.sizeOfStructureHeap - offset;
```
Guard against offset exceeding the structure heap size.
If structureHeapSizeInKB is configured smaller than MI_ARENA_SLICE_SIZE (but ≥ MarkedBlock::blockSize), size underflows and mi_manage_os_memory_ex receives an invalid range. Add an assert (or clamp/fallback) before subtracting.
🔧 Proposed fix

```diff
-constexpr size_t offset = std::max(MarkedBlock::blockSize, static_cast<size_t>(MI_ARENA_SLICE_SIZE));
-void* memory = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(g_jscConfig.startOfStructureHeap) + offset);
-size_t size = g_jscConfig.sizeOfStructureHeap - offset;
+constexpr size_t offset = std::max(MarkedBlock::blockSize, static_cast<size_t>(MI_ARENA_SLICE_SIZE));
+RELEASE_ASSERT(g_jscConfig.sizeOfStructureHeap > offset, g_jscConfig.sizeOfStructureHeap, offset);
+void* memory = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(g_jscConfig.startOfStructureHeap) + offset);
+size_t size = g_jscConfig.sizeOfStructureHeap - offset;
```
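The clamp-or-skip variant of the guard can be sketched with plain numbers. This is an illustrative model only: the function name, the null-means-skip convention, and the sample addresses are invented, not part of the JSC code:

```javascript
// Returns the { memory, size } range that would be handed to
// mi_manage_os_memory_ex, or null when the configured heap is too small
// and the call must be skipped to avoid an underflowing size.
function structureArenaRange(startOfStructureHeap, sizeOfStructureHeap, offset) {
  if (sizeOfStructureHeap <= offset)
    return null; // size would underflow: skip the arena registration
  return {
    memory: startOfStructureHeap + offset,
    size: sizeOfStructureHeap - offset,
  };
}

const MB = 1024 * 1024;
console.log(structureArenaRange(0x40000000, 64 * MB, 4 * MB)); // valid range
console.log(structureArenaRange(0x40000000, 2 * MB, 4 * MB)); // → null
```

The second call models a structure heap configured smaller than the offset: instead of producing a huge wrapped-around size, the guard reports that there is no valid range to register.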
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/build-preview.yml:
- Around line 85-89: The notify job currently only runs on successful builds
because of the condition "if: needs.build.result == 'success'"; change this so
PR comments are updated on failures/cancellations too by replacing that
condition with a broader one (e.g., "if: always()") or by removing the if and
adding logic inside the notify job to post different messages based on
needs.build.result (referencing the notify job and the existing "if:
needs.build.result == 'success'" line) so failed or cancelled builds also
produce an update.
♻️ Duplicate comments (3)

Source/JavaScriptCore/heap/StructureAlignedMemoryAllocator.cpp (1)

143-146: Guard against potential underflow in size calculation. The offset is computed as max(MarkedBlock::blockSize, MI_ARENA_SLICE_SIZE), but there's no check ensuring g_jscConfig.sizeOfStructureHeap > offset before the subtraction on line 145. If the configured heap size is smaller than the offset, size will underflow, causing mi_manage_os_memory_ex to receive an invalid range.

Source/bmalloc/bmalloc/VMAllocate.h (1)

390-404: Bug: Loop advancement uses wrong variable, causing memory to be skipped. The loop correctly computes chunkSize as the minimum of memInfo.RegionSize and remaining bytes (line 397), but then advances both currentPtr and totalSeen by memInfo.RegionSize (lines 402-403). When a region extends beyond the requested range, this causes:

- Memory within the target range to be skipped
- totalSeen to overshoot vmSize, potentially exiting early
- Some memory not being zeroed/purged

Additionally, the VirtualQuery return value should be checked (returns 0 on failure).

.github/workflows/build-preview.yml (1)

106-112: Paginate comments and filter by bot author. Two issues persist in the comment lookup:

- listComments only fetches the first 100 comments. On PRs with more comments, the marker may be missed, resulting in duplicate comments instead of updates.
- comments.find(c => c.body.includes(marker)) can match any comment containing the marker string, not just bot comments. This risks overwriting user content.

🔧 Proposed fix

```diff
-const { data: comments } = await github.rest.issues.listComments({
+const comments = await github.paginate(github.rest.issues.listComments, {
   owner: context.repo.owner,
   repo: context.repo.repo,
   issue_number: prNumber,
   per_page: 100
 });
-const existing = comments.find(c => c.body.includes(marker));
+const existing = comments.find(
+  c => c.body?.includes(marker) && c.user?.login === 'github-actions[bot]'
+);
```
```diff
 notify:
-  name: Notify Completion
+  name: Update PR Comment
   needs: [trigger, build]
   runs-on: ubuntu-latest
-  if: always()
+  if: needs.build.result == 'success'
```
🧹 Nitpick | 🔵 Trivial
Consider notifying on build failures.
The notify job only runs on successful builds (needs.build.result == 'success'). Failed or cancelled builds will not update the PR comment, leaving contributors without feedback when builds fail.
If this is intentional (to avoid noise), consider adding a brief failure notification or relying on GitHub's built-in workflow status indicators.
Preview Builds
No description provided.