Conversation

@dylan-conway
Member

No description provided.

@coderabbitai

coderabbitai bot commented Jan 25, 2026

Caution

Review failed

The pull request is closed.

Walkthrough

Refactors the build-preview GitHub Actions workflow to run on pull_request events and consolidate PR comments into a single markdown table; adjusts Windows build flags for allocator selection; adds a mimalloc include and replaces a single DiscardVirtualMemory call with a VirtualQuery-driven chunked purge/commit loop.

Changes

Cohort / File(s) Summary
GitHub Actions workflow
\.github/workflows/build-preview.yml
Switched trigger from issue_comment to pull_request (opened, synchronize, reopened); updated concurrency key to use pull_request.number; removed legacy PR-number extraction; consolidated comment flow to maintain a single table-based PR comment with an idempotent marker; restricted update job to successful builds.
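The single-comment upsert described above can be sketched as plain logic (the marker string and helper names are hypothetical, not taken from the workflow):

```javascript
// Idempotent "upsert" for a bot-maintained PR comment: find the existing
// bot comment carrying a hidden marker, else plan to create a new one.
// MARKER and the function names are illustrative assumptions.
const MARKER = '<!-- build-preview-comment -->';

function findExisting(comments) {
  // Match only bot-authored comments so a user merely quoting the
  // marker cannot be overwritten.
  return comments.find(
    c => c.body?.includes(MARKER) && c.user?.login === 'github-actions[bot]'
  );
}

function planUpsert(comments, table) {
  const body = `${MARKER}\n${table}`;
  const existing = findExisting(comments);
  return existing
    ? { action: 'update', comment_id: existing.id, body }
    : { action: 'create', body };
}
```

In the workflow this logic would run inside actions/github-script, with `comments` fetched via the REST API.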
Windows build flags
build.ts, windows-release.ps1
Removed Windows-specific allocator flags: dropped -DUSE_SYSTEM_MALLOC=ON from common CMake flags and removed -DUSE_MIMALLOC=OFF / -DUSE_LIBPAS=ON from the release script.
Native memory management
Source/JavaScriptCore/heap/StructureAlignedMemoryAllocator.cpp, Source/bmalloc/bmalloc/VMAllocate.h
Added mimalloc/types.h include in StructureAlignedMemoryAllocator; replaced a single DiscardVirtualMemory call with a loop using VirtualQuery and per-region MEM_DECOMMIT/MEM_COMMIT (VirtualFree/VirtualAlloc) to purge/commit memory chunks.

Possibly related PRs

Suggested reviewers

  • Jarred-Sumner
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Description check | ⚠️ Warning | The PR description is missing entirely. The repository template requires a Bugzilla reference, an explanation of the fix, and the affected files/functions. | Provide a description following the template: include the Bugzilla bug ID/URL, an explanation of why this fixes the assertion, and a list of changed files with function/class context.

✅ Passed checks (1 passed)

Check name | Status | Explanation
Title check | ✅ Passed | The title "fix libpas assertion on windows" is clear, concise, and directly related to the main changes in the PR, which address libpas and memory allocator configuration on Windows.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.


Comment @coderabbitai help to get the list of available commands and usage tips.

@dylan-conway
Member Author

test

@github-actions

⚠️ Preview build was cancelled

This may have been caused by a newer build being triggered for the same PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In @.github/workflows/build-preview.yml:
- Around line 106-111: The call to github.rest.issues.listComments only
retrieves a single page (up to 100) so the existing marker may be missed on
large PRs; update the logic that uses github.rest.issues.listComments to
paginate all comment pages (for example using github.paginate or an explicit
loop) to collect every comment for the PR before searching for the marker.
Locate usages of github.rest.issues.listComments and replace them with a
paginated fetch (e.g., github.paginate(github.rest.issues.listComments, { owner,
repo, issue_number: prNumber, per_page: 100 })) or equivalent, then continue
existing marker search/update logic on the complete comment set.
- Around line 105-113: The comment lookup currently uses comments.find(c =>
c.body.includes(marker)) which can match user comments; change the predicate to
also verify the comment author is the bot (e.g., c.user?.login ===
'github-actions[bot]') so the find only selects bot-authored comments that
include the marker; update the existing variable assignment (existing) to use
this combined condition to avoid overwriting user content.

In `@Source/bmalloc/bmalloc/VMAllocate.h`:
- Around line 390-404: The loop advances currentPtr and totalSeen by
memInfo.RegionSize which is wrong when chunkSize < memInfo.RegionSize; update
the advancement to use chunkSize (advance currentPtr by chunkSize and add
chunkSize to totalSeen) so the loop covers exactly vmSize bytes and doesn't skip
or overshoot, and also check the return value of VirtualQuery (it returns 0 on
failure) and assert/handle failure before using memInfo; apply these changes
around the variables totalSeen, currentPtr, vmSize, memInfo, chunkSize and the
VirtualQuery/VirtualFree/VirtualAlloc sequence.

In `@Source/JavaScriptCore/heap/StructureAlignedMemoryAllocator.cpp`:
- Around line 143-145: The code computes offset = max(MarkedBlock::blockSize,
MI_ARENA_SLICE_SIZE) then subtracts it from g_jscConfig.sizeOfStructureHeap
which can underflow if the configured structure heap is smaller than offset; add
a guard before doing the subtraction: compute available =
g_jscConfig.sizeOfStructureHeap and if available <= offset either assert/log and
return/skip the mi_manage_os_memory_ex call or clamp offset to available (e.g.,
set size = 0 and avoid calling mi_manage_os_memory_ex), ensuring
g_jscConfig.startOfStructureHeap and mi_manage_os_memory_ex are only used with a
valid non-underflowing range; reference offset, MarkedBlock::blockSize,
MI_ARENA_SLICE_SIZE, g_jscConfig.sizeOfStructureHeap,
g_jscConfig.startOfStructureHeap and mi_manage_os_memory_ex.

Comment on lines +105 to +113
     // Find existing comment
     const { data: comments } = await github.rest.issues.listComments({
       owner: context.repo.owner,
       repo: context.repo.repo,
-      issue_number: ${{ needs.trigger.outputs.pr_number }},
-      body: body
+      issue_number: prNumber,
+      per_page: 100
     });
     const existing = comments.find(c => c.body.includes(marker));


⚠️ Potential issue | 🟠 Major


Filter comments by bot author to prevent overwriting user content.

comments.find(c => c.body.includes(marker)) can match any comment containing the marker string, not just bot comments. This risks overwriting user content if the marker appears in user comments. Filter by the bot author using c.user?.login === 'github-actions[bot]'.

🔧 Proposed fix
-            const existing = comments.find(c => c.body.includes(marker));
+            const existing = comments.find(
+              c => c.body?.includes(marker) && c.user?.login === 'github-actions[bot]'
+            );

Comment on lines +106 to 111
     const { data: comments } = await github.rest.issues.listComments({
       owner: context.repo.owner,
       repo: context.repo.repo,
-      issue_number: ${{ needs.trigger.outputs.pr_number }},
-      body: body
+      issue_number: prNumber,
+      per_page: 100
     });

⚠️ Potential issue | 🟡 Minor



Paginate PR comments to reliably find the existing marker.

listComments only fetches the first page (100 comments). On PRs with more than 100 comments, the marker may be beyond page 1, causing duplicate comments instead of updates.

🔧 Proposed fix
-            const { data: comments } = await github.rest.issues.listComments({
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-              issue_number: prNumber,
-              per_page: 100
-            });
+            const comments = await github.paginate(github.rest.issues.listComments, {
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              issue_number: prNumber,
+              per_page: 100
+            });

Comment on lines +390 to +404

    size_t totalSeen = 0;
    void* currentPtr = p;
    while (totalSeen < vmSize) {
        MEMORY_BASIC_INFORMATION memInfo;
        VirtualQuery(currentPtr, &memInfo, sizeof(memInfo));
        RELEASE_BASSERT(memInfo.RegionSize > 0);
        size_t chunkSize = std::min(memInfo.RegionSize, vmSize - totalSeen);
        BOOL freeResult = VirtualFree(currentPtr, chunkSize, MEM_DECOMMIT);
        RELEASE_BASSERT(freeResult);
        void* allocResult = VirtualAlloc(currentPtr, chunkSize, MEM_COMMIT, PAGE_READWRITE);
        RELEASE_BASSERT(allocResult == currentPtr);
        currentPtr = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(currentPtr) + memInfo.RegionSize);
        totalSeen += memInfo.RegionSize;
    }

⚠️ Potential issue | 🔴 Critical

Bug: Pointer and counter advancement should use chunkSize, not memInfo.RegionSize.

When a region extends beyond the requested range, chunkSize is smaller than memInfo.RegionSize. The code correctly processes only chunkSize bytes, but then advances both currentPtr and totalSeen by memInfo.RegionSize. This causes:

  1. Skipping memory within the target range if the loop continues
  2. Potentially exiting early because totalSeen jumps past vmSize
  3. Memory within the requested range not being zeroed/purged

Additionally, VirtualQuery return value should be checked—it returns 0 on failure.

🐛 Proposed fix
     size_t totalSeen = 0;
     void* currentPtr = p;
     while (totalSeen < vmSize) {
         MEMORY_BASIC_INFORMATION memInfo;
-        VirtualQuery(currentPtr, &memInfo, sizeof(memInfo));
+        SIZE_T queryResult = VirtualQuery(currentPtr, &memInfo, sizeof(memInfo));
+        RELEASE_BASSERT(queryResult != 0);
         RELEASE_BASSERT(memInfo.RegionSize > 0);
         size_t chunkSize = std::min(memInfo.RegionSize, vmSize - totalSeen);
         BOOL freeResult = VirtualFree(currentPtr, chunkSize, MEM_DECOMMIT);
         RELEASE_BASSERT(freeResult);
         void* allocResult = VirtualAlloc(currentPtr, chunkSize, MEM_COMMIT, PAGE_READWRITE);
         RELEASE_BASSERT(allocResult == currentPtr);
-        currentPtr = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(currentPtr) + memInfo.RegionSize);
-        totalSeen += memInfo.RegionSize;
+        currentPtr = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(currentPtr) + chunkSize);
+        totalSeen += chunkSize;
     }
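The counter overshoot is easy to demonstrate with a portable simulation of the loop's advancement (plain JavaScript standing in for the Windows calls; the region sizes are made up):

```javascript
// Simulates the purge loop over a vmSize-byte range split into OS regions.
// advanceBy = 'region' models the buggy code (advance by RegionSize);
// advanceBy = 'chunk' models the fix (advance by chunkSize).
function walk(regionSizes, vmSize, advanceBy) {
  let totalSeen = 0;
  let covered = 0; // bytes actually decommitted/recommitted
  let i = 0;
  while (totalSeen < vmSize && i < regionSizes.length) {
    const regionSize = regionSizes[i++];
    const chunkSize = Math.min(regionSize, vmSize - totalSeen);
    covered += chunkSize;
    totalSeen += advanceBy === 'chunk' ? chunkSize : regionSize;
  }
  return { totalSeen, covered };
}
```

With two 64-byte regions and vmSize = 96, the buggy variant ends with totalSeen = 128, overshooting the requested range, while the fixed variant lands exactly on 96.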

Comment on lines 143 to 145
constexpr size_t offset = std::max(MarkedBlock::blockSize, static_cast<size_t>(MI_ARENA_SLICE_SIZE));
void* memory = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(g_jscConfig.startOfStructureHeap) + offset);
size_t size = g_jscConfig.sizeOfStructureHeap - offset;

⚠️ Potential issue | 🟠 Major

Guard against offset exceeding the structure heap size.
If structureHeapSizeInKB is configured smaller than MI_ARENA_SLICE_SIZE (but ≥ MarkedBlock::blockSize), size underflows and mi_manage_os_memory_ex receives an invalid range. Add an assert (or clamp/fallback) before subtracting.

🔧 Proposed fix
-        constexpr size_t offset = std::max(MarkedBlock::blockSize, static_cast<size_t>(MI_ARENA_SLICE_SIZE));
-        void* memory = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(g_jscConfig.startOfStructureHeap) + offset);
-        size_t size = g_jscConfig.sizeOfStructureHeap - offset;
+        constexpr size_t offset = std::max(MarkedBlock::blockSize, static_cast<size_t>(MI_ARENA_SLICE_SIZE));
+        RELEASE_ASSERT(g_jscConfig.sizeOfStructureHeap > offset, g_jscConfig.sizeOfStructureHeap, offset);
+        void* memory = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(g_jscConfig.startOfStructureHeap) + offset);
+        size_t size = g_jscConfig.sizeOfStructureHeap - offset;
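The guard reduces to simple range arithmetic; a minimal sketch under assumed names (sizes in bytes; none of these identifiers come from the codebase):

```javascript
// Returns the sub-range of the structure heap handed to the allocator,
// or null when the heap is too small to leave anything after the offset.
function managedRange(heapStart, heapSize, blockSize, sliceSize) {
  const offset = Math.max(blockSize, sliceSize);
  if (heapSize <= offset)
    return null; // skip registration instead of underflowing the size
  return { start: heapStart + offset, size: heapSize - offset };
}
```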


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.github/workflows/build-preview.yml:
- Around line 85-89: The notify job currently only runs on successful builds
because of the condition "if: needs.build.result == 'success'"; change this so
PR comments are updated on failures/cancellations too by replacing that
condition with a broader one (e.g., "if: always()") or by removing the if and
adding logic inside the notify job to post different messages based on
needs.build.result (referencing the notify job and the existing "if:
needs.build.result == 'success'" line) so failed or cancelled builds also
produce an update.
♻️ Duplicate comments (3)
Source/JavaScriptCore/heap/StructureAlignedMemoryAllocator.cpp (1)

143-146: Guard against potential underflow in size calculation.

The offset is computed as max(MarkedBlock::blockSize, MI_ARENA_SLICE_SIZE), but there's no check ensuring g_jscConfig.sizeOfStructureHeap > offset before the subtraction on line 145. If the configured heap size is smaller than the offset, size will underflow, causing mi_manage_os_memory_ex to receive an invalid range.

Source/bmalloc/bmalloc/VMAllocate.h (1)

390-404: Bug: Loop advancement uses wrong variable, causing memory to be skipped.

The loop correctly computes chunkSize as the minimum of memInfo.RegionSize and remaining bytes (line 397), but then advances both currentPtr and totalSeen by memInfo.RegionSize (lines 402-403). When a region extends beyond the requested range, this causes:

  1. Memory within the target range to be skipped
  2. totalSeen to overshoot vmSize, potentially exiting early
  3. Some memory not being zeroed/purged

Additionally, VirtualQuery return value should be checked (returns 0 on failure).

.github/workflows/build-preview.yml (1)

106-112: Paginate comments and filter by bot author.

Two issues persist in the comment lookup:

  1. listComments only fetches the first 100 comments. On PRs with more comments, the marker may be missed, resulting in duplicate comments instead of updates.

  2. comments.find(c => c.body.includes(marker)) can match any comment containing the marker string, not just bot comments. This risks overwriting user content.

🔧 Proposed fix
-            const { data: comments } = await github.rest.issues.listComments({
+            const comments = await github.paginate(github.rest.issues.listComments, {
               owner: context.repo.owner,
               repo: context.repo.repo,
               issue_number: prNumber,
               per_page: 100
             });
-            const existing = comments.find(c => c.body.includes(marker));
+            const existing = comments.find(
+              c => c.body?.includes(marker) && c.user?.login === 'github-actions[bot]'
+            );

Comment on lines 85 to +89
   notify:
-    name: Notify Completion
+    name: Update PR Comment
     needs: [trigger, build]
     runs-on: ubuntu-latest
-    if: always()
+    if: needs.build.result == 'success'

🧹 Nitpick | 🔵 Trivial

Consider notifying on build failures.

The notify job only runs on successful builds (needs.build.result == 'success'). Failed or cancelled builds will not update the PR comment, leaving contributors without feedback when builds fail.

If this is intentional (to avoid noise), consider adding a brief failure notification or relying on GitHub's built-in workflow status indicators.
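One way to notify on every outcome, as the comment suggests, is to gate the job on `always()` and branch on the build result inside the script; a sketch (the statuses follow GitHub's job-result values, the message text is made up):

```javascript
// Maps needs.build.result to the status line for the PR comment,
// letting the notify job run with `if: always()` and still report failures.
function statusLine(buildResult) {
  switch (buildResult) {
    case 'success':
      return '✅ Preview build succeeded';
    case 'failure':
      return '❌ Preview build failed';
    case 'cancelled':
      return '⚠️ Preview build was cancelled';
    default:
      return `⚠️ Preview build finished with status: ${buildResult}`;
  }
}
```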


@github-actions

Preview Builds

Commit | Release | Date
f8e66a68 | autobuild-preview-pr-147-f8e66a68 | 2026-01-25 01:44:05 UTC

@dylan-conway dylan-conway merged commit f151e5c into main Jan 25, 2026
2 of 28 checks passed