
Fix BI metrics misclassifying HTTP 500 errors as successful requests#560

Open
SasinduDilshara wants to merge 5 commits into wso2:main from SasinduDilshara:fix-154

Conversation

@SasinduDilshara SasinduDilshara commented Apr 5, 2026

Summary

  • Root cause: The Fluent Bit Lua pipeline (extract_bal_metrics_data in scripts.lua) defaulted status = "successful" and only changed it to "failed" when http.status_code_group was "4xx" or "5xx". However, the Ballerina observability library emits http.status_code (an integer, e.g. 500) but does not emit http.status_code_group (e.g. "5xx"). Since the group field was always nil for BI runtimes, every request — including HTTP 500 errors — was classified as successful, making the error rate always 0% for BI integrations.
  • Fix: When http.status_code_group is absent or empty, derive it from http.status_code using integer division (math.floor(code / 100) .. "xx"). The derived group is then used for both status classification and stored in record["status_code_group"].
  • No frontend changes needed: Metrics.tsx already correctly uses tags.status === 'failed'; the bug was entirely in the Lua pipeline.
  • MI unaffected: MI metrics use a separate classification path in opensearch_adapter_service.bal that explicitly sets tags["status"].
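
As a quick illustration of the integer-division derivation (standalone Lua, separate from the pipeline code):

```lua
-- Derive an "Nxx" status-code group from a raw integer HTTP status code,
-- the same arithmetic the fix uses: math.floor(code / 100) .. "xx".
local codes = { 200, 301, 404, 500 }
for _, code in ipairs(codes) do
    local group = math.floor(code / 100) .. "xx"
    print(code, group)  -- 200 -> 2xx, 301 -> 3xx, 404 -> 4xx, 500 -> 5xx
end
```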

Files Changed

  • icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts/scripts.lua — fix status derivation fallback
  • icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts/test_scripts.lua — new Lua unit tests covering the fixed and edge-case scenarios

Fixes #154

Summary by CodeRabbit

  • Bug Fixes

    • Fixed observability metrics to properly normalize missing status code groups by deriving them from HTTP status codes.
  • Documentation

    • Added comprehensive project documentation: architecture overview, contributor guidelines, deployment guide, feature reference, and testing procedures.
  • Tests

    • Added unit tests for observability pipeline Lua scripts.

SasinduDilshara and others added 4 commits April 5, 2026 19:04
Derive http.status_code_group from http.status_code in the Fluent Bit Lua
pipeline when the group field is absent. Ballerina emits http.status_code
(integer) but not http.status_code_group (string), causing all BI requests
to be classified as 'successful' regardless of their actual status code.

Fixes wso2#154

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@SasinduDilshara (Author)

Issue Analysis — [Issue #154]: Metrics dashboard not capturing 500 responses as errors

Classification

  • Type: Bug
  • Severity Assessment: High — Metrics dashboard silently misclassifies all BI 500 errors as successful
    requests, making error rate always 0% for BI integrations regardless of actual failures.
  • Affected Component(s): Fluent Bit log processing pipeline (BI metrics path)
  • Affected Feature(s): BI Metrics Dashboard — error/failure rate reporting

Reproducibility

  • Reproducible: Yes

  • Environment:

    • Branch: main
    • Ballerina: 2201.13.1 (Swan Lake Update 13)
    • ICP: 2.0.0-SNAPSHOT
    • OS: Darwin 24.0.0
    • Fluent Bit Lua script: icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts/scripts.lua
  • Steps Executed:

    1. Reviewed the Fluent Bit Lua processing pipeline for BI metrics
      (scripts/scripts.lua → extract_bal_metrics_data)

    2. Identified that status classification relies solely on record["http.status_code_group"]

    3. Reproduced the logic in isolation using Lua:

      -- Simulates a Ballerina 500 response log WITHOUT http.status_code_group (the real scenario)
      local record = {
          logger = "metrics",
          ["response_time_seconds"] = 0.215,
          ["http.method"] = "GET",
          ["http.url"] = "/MusicMood/playlist",
          ["http.status_code"] = 500,
          -- http.status_code_group is ABSENT
      }
      -- After extract_bal_metrics_data:
      -- record["status"] == "successful"   ← BUG: should be "failed"
    4. Verified that when http.status_code_group IS present ("5xx"), status correctly becomes
      "failed".

    5. Built the ICP Ballerina server (bal build) — succeeded, confirming the baseline builds.
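
Step 3 above can be condensed into a standalone script. The sketch below reimplements the buggy conditional locally (it does not load the real scripts.lua; classify is a hypothetical stand-in for the relevant part of extract_bal_metrics_data):

```lua
-- Local reimplementation of the buggy classification: status depends only on
-- http.status_code_group, so a missing group always yields "successful".
local function classify(record)
    local status = "successful"
    if record["http.status_code_group"] == "4xx"
        or record["http.status_code_group"] == "5xx" then
        status = "failed"
    end
    return status
end

-- Scenario 1: Ballerina emits only the integer code (the real BI case).
print(classify({ ["http.status_code"] = 500 }))
-- prints "successful" (the bug: should be "failed")

-- Scenario 2: group explicitly present.
print(classify({ ["http.status_code"] = 500, ["http.status_code_group"] = "5xx" }))
-- prints "failed" (correct)
```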

  • Expected Behavior: A BI service returning HTTP 500 should have its requests counted as
    errors in the metrics dashboard. The error rate chart should reflect the 500 responses.

  • Actual Behavior: All BI requests — including 500 errors — are classified with
    status = "successful" and counted as successful. Error rate always shows 0%.

  • Logs/Evidence:

    Lua test output (lua /tmp/test_bug_154.lua):
    
    === SCENARIO 1 (BUG): 500 error, Ballerina does NOT emit http.status_code_group ===
      Input:  http.status_code=500  http.status_code_group=nil
      Output: status=successful  status_code_group=""
      BUG CONFIRMED: true (500 error classified as 'successful')
    
    === SCENARIO 2 (EXPECTED): 500 error with http.status_code_group present ===
      Input:  http.status_code=500  http.status_code_group=5xx
      Output: status=failed  status_code_group="5xx"
      Correct: true
    
    === SCENARIO 4: Frontend impact (Metrics.tsx logic) ===
      Frontend code: const isFailed = m.tags.status === 'failed';
      With status='successful' for 500 errors:
        isFailed = false → 500 errors counted as successful requests → NOT shown as errors
    

Root Cause Analysis

The bug spans two layers, with the root cause in the Fluent Bit Lua script.

Layer 1: Fluent Bit Lua script (root cause)

File: icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts/scripts.lua
Function: extract_bal_metrics_data (line 42)
Relevant lines (58–62):

local status = "successful"     -- default: always "successful"

if record["http.status_code_group"] == "4xx" or record["http.status_code_group"] == "5xx" then
    status = "failed"
end

The function defaults status = "successful" and only changes it to "failed" when
record["http.status_code_group"] is "4xx" or "5xx". However, the Ballerina observability
library (logfmt metrics output) emits http.status_code (an integer like 500) but does not
emit http.status_code_group
(the string group classification like "5xx"). Since
record["http.status_code_group"] is nil, the condition is never true, and every single
request — including 500 errors — gets status = "successful".

The field http.status_code_group was intended as the group classifier, but the source data
(Ballerina runtime) only provides http.status_code (raw integer). There is no fallback
to derive the group from the raw code.

Layer 2: Frontend (downstream effect)

File: frontend/src/pages/Metrics.tsx:137

const isFailed = m.tags.status === 'failed';

The frontend correctly uses tags.status === 'failed' to classify errors. This logic is sound
for MI (which explicitly sets status = "failed" in the Ballerina backend at
opensearch_adapter_service.bal:634). For BI, the status tag comes directly from the Lua
pipeline — so once the pipeline is fixed, the frontend will work correctly without changes.

Why MI is unaffected

MI metrics use a completely different classification path in the Ballerina backend:
opensearch_adapter_service.bal (line 1041–1051) uses payload.failure and
payload.faultResponse fields from the MI analytics log, and explicitly sets
tags["status"] = "failed" (line 634). This path does not go through the Lua script.

Fix

In scripts.lua, derive http.status_code_group from http.status_code when the group field
is absent:

-- In extract_bal_metrics_data, replace lines 58-62:
local status = "successful"
local status_code_group = record["http.status_code_group"]
if not status_code_group or status_code_group == "" then
    local code = tonumber(record["http.status_code"])
    if code then
        status_code_group = math.floor(code / 100) .. "xx"
    end
end
if status_code_group == "4xx" or status_code_group == "5xx" then
    status = "failed"
end
-- Update status_code_group storage to use the derived value
record["status"] = status
record["status_code_group"] = status_code_group or ""

Test Coverage Assessment

  • Existing tests covering this path:

    • None. The icp_server/tests/ directory contains tests only for auth, GraphQL runtime/
      environment/component/project management, and token handling. There are no tests for:
      • The Fluent Bit Lua pipeline
      • The OpenSearch adapter (opensearch_adapter_service.bal)
      • The observability service (observability_service.bal)
      • BI or MI metrics processing
  • Coverage gaps identified:

    • scripts/scripts.lua → extract_bal_metrics_data: zero coverage
    • opensearch_adapter_service.bal → fetchBIMetrics: zero coverage
    • opensearch_adapter_service.bal → fetchMIMetrics: zero coverage
    • observability_service.bal → metrics endpoint: zero coverage
  • Proposed test plan:

    • Unit test — Lua script (highest priority): A standalone Lua test file (e.g.,
      config/fluent-bit/scripts/test_scripts.lua) covering:
      • extract_bal_metrics_data with only http.status_code=500 present (no group field) → status = "failed"
      • extract_bal_metrics_data with http.status_code=200 → status = "successful"
      • extract_bal_metrics_data with http.status_code_group="4xx" → status = "failed"
      • extract_bal_metrics_data with http.status_code_group="5xx" and http.status_code=500 → status = "failed"
      • extract_bal_metrics_data with non-metrics logger → record unchanged
    • Integration test — OpenSearch adapter: Mock or inject sample ballerina-metrics-logs-*
      documents into OpenSearch (with status=successful for 500 errors and status=failed for
      correctly processed records) and verify fetchBIMetrics returns entries with correct
      tags.status values.
    • Negative/edge cases:
      • http.status_code=0 (malformed) → should not crash, should default to "successful"
      • http.status_code_group="" (empty string) → should fall back to http.status_code derivation
      • Both fields absent → status = "successful", status_code_group = ""
      • http.status_code=404 → status = "failed" (4xx group)
      • http.status_code=301 → status = "successful" (3xx group)
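
These edge cases could be asserted in a small standalone Lua test. The classify helper below is a local reimplementation of the fixed logic for illustration, not the actual test_scripts.lua added in this PR:

```lua
-- Edge-case checks against a local copy of the fixed classification logic.
local function classify(record)
    local group = record["http.status_code_group"]
    if not group or group == "" then
        local code = tonumber(record["http.status_code"])
        if code then
            group = math.floor(code / 100) .. "xx"
        end
    end
    if group == "4xx" or group == "5xx" then
        return "failed"
    end
    return "successful"
end

assert(classify({ ["http.status_code"] = 500 }) == "failed")      -- group absent, derived 5xx
assert(classify({ ["http.status_code"] = 200 }) == "successful")
assert(classify({ ["http.status_code_group"] = "4xx" }) == "failed")
assert(classify({ ["http.status_code"] = 0 }) == "successful")    -- malformed -> "0xx"
assert(classify({ ["http.status_code_group"] = "",
                  ["http.status_code"] = 404 }) == "failed")      -- empty group falls back
assert(classify({}) == "successful")                              -- both fields absent
assert(classify({ ["http.status_code"] = 301 }) == "successful")  -- 3xx
print("all edge cases pass")
```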


coderabbitai Bot commented Apr 5, 2026

Walkthrough

These changes introduce comprehensive project documentation spanning AI agent instructions, system architecture, development contribution standards, deployment procedures, feature enumeration, and testing guidelines. A Lua observability script is enhanced to normalize missing status code groups from HTTP status codes, with corresponding unit tests added.

Changes

  • Documentation & Guidelines (CLAUDE.md, specs/Architecture.md, specs/Contributing.md, specs/Deployement.md, specs/Features.md, specs/Testing.md): New documentation files establishing AI agent instructions, system architecture overview, contribution standards (naming/patterns/testing), deployment procedures, API/feature enumeration, and testing infrastructure guidelines.
  • Observability Lua Scripts (icp_server/resources/observability/.../scripts/scripts.lua, icp_server/resources/observability/.../scripts/test_scripts.lua): Enhanced extract_bal_metrics_data() to normalize a missing status_code_group by deriving it from the HTTP status code (e.g., 4xx/5xx mapping). New unit test suite validates the normalization logic and edge cases, including malformed/missing status fields.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

Poem

🐰 Hops through the files with glee,
Docs and specs, all plain to see!
Scripts now tested, status codes aligned,
Guidelines crafted, best practices defined!
With architecture and features bright—
This project sparkles, pure delight!

🚥 Pre-merge checks | ✅ 1 | ❌ 4

❌ Failed checks (3 warnings, 1 inconclusive)

  • Description check (⚠️ Warning): The PR description is missing several required template sections: Purpose/Goals/Approach, User stories, Release note, Documentation, Training, Certification, Marketing, Automation tests details, Security checks, and other required sections from the template. Resolution: complete the PR description by filling in all required template sections, particularly automation test coverage details, security checks, and documentation/release note information.
  • Linked Issues check (⚠️ Warning): The linked issue #154 is about MI artifact UI support (Carbon apps, APIs, Endpoints), but the PR fixes a BI metrics classification bug in a Lua script; the objectives are unrelated.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Out of Scope Changes check (❓ Inconclusive): The PR includes substantial new documentation files (CLAUDE.md, Architecture.md, Contributing.md, Deployment.md, Features.md, Testing.md) plus the core bug fix; it is unclear whether this documentation scope was pre-approved or belongs in this bug-fix PR.

✅ Passed checks (1 passed)

  • Title check (✅ Passed): The title accurately and concisely describes the main bug fix: BI metrics incorrectly classifying HTTP 500 errors as successful requests.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
specs/Architecture.md (1)

32-32: Clarify the Ballerina initialization sequence.

The arrow notation main.bal → init.bal → config.bal → webserver.bal suggests an explicit call chain, but Ballerina uses implicit module initialization. The actual sequence is:

  1. Ballerina runtime loads config.bal configurables (including Config.toml overrides)
  2. Module-level init() function in init.bal runs automatically before main()
  3. main() in main.bal executes
  4. webserver.bal listeners are started as part of initialization

Consider rephrasing to clarify the implicit initialization behavior.

📝 Suggested clarification
-Init: `main.bal` → `init.bal` (DB, listeners) → `config.bal` (all configurables) → `webserver.bal` (SPA serving, updates `config.json`).
+Init: Ballerina loads `config.bal` configurables → module `init()` in `init.bal` (DB, listeners) → `main()` in `main.bal` → `webserver.bal` listeners start (SPA serving, updates `config.json`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/Architecture.md` at line 32, The current arrow-based line implies an
explicit call chain; rewrite it to describe Ballerina's implicit init order:
state that module configurables in config.bal (and Config.toml overrides) are
loaded first, then the module-level init() function in init.bal runs
automatically before main(), then main() in main.bal executes, and webserver.bal
listeners are started as part of module initialization—use the symbols
config.bal configurables, init() in init.bal, main() in main.bal, and
webserver.bal listeners in the revised sentence to make the implicit behavior
clear.

📥 Commits

Reviewing files that changed from the base of the PR and between 834f36e and 7a8e2fe.

📒 Files selected for processing (8)
  • CLAUDE.md
  • icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts/scripts.lua
  • icp_server/resources/observability/opensearch-observability-dashboard/config/fluent-bit/scripts/test_scripts.lua
  • specs/Architecture.md
  • specs/Contributing.md
  • specs/Deployement.md
  • specs/Features.md
  • specs/Testing.md

Comment thread specs/Contributing.md Outdated
- One service per top-level `.bal` file bound to a port. One `*_repository.bal` per entity in `modules/storage/`.
- Types in `modules/types/` only. Auth checks via `modules/auth/permission_checker.bal`.
- All configurables in `config.bal` with defaults. Use `$secret{alias}` for encrypted values; code outside `config.bal` uses `resolved*` vars.
- Error handling: use `check` keyword, `error` type, `classifySqlError()` from `error_mapper.bal`. Never expose raw SQL errors.

⚠️ Potential issue | 🟠 Major

Clarify classifySqlError() usage scope.

Line 24 states "use classifySqlError() from error_mapper.bal", but the function is defined as isolated function classifySqlError() (module-local) rather than public isolated function. This means it can only be called within the modules/storage/ module, not from other modules as the general guidance might suggest.

Consider clarifying that classifySqlError() is for use within storage repository implementations, or document the error handling pattern more precisely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/Contributing.md` at line 24, Update the Contributing.md guidance to
explicitly state that classifySqlError() in error_mapper.bal is declared as an
isolated (module-local) function and is intended for use only inside the storage
repository implementations under modules/storage (or, if cross-module usage is
desired, change the function signature in error_mapper.bal from isolated
function classifySqlError() to public isolated function classifySqlError());
mention that repository code should use classifySqlError() (or the documented
pattern) and that other modules must implement their own mapping or rely on a
public wrapper if you choose to make it public.

Comment thread specs/Deployement.md Outdated
@@ -0,0 +1,59 @@
# Deployment Guide

⚠️ Potential issue | 🔴 Critical

Critical: Filename is misspelled.

The filename Deployement.md should be Deployment.md (missing 'n').

📝 Proposed fix

Rename the file from specs/Deployement.md to specs/Deployment.md.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/Deployement.md` at line 1, The file name is misspelled as
Deployement.md; rename the file to Deployment.md and update any references or
links that point to "Deployement.md" (e.g., documentation indexes, README links,
or import paths) to use "Deployment.md" so references remain valid.

Comment thread specs/Deployement.md Outdated

## UI

`https://localhost:9460` admin:admin

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify the correct port for UI access by checking config and service bindings.

echo "=== Checking config for UI/SPA port ==="
rg -n "serverPort|9460|9445" icp_server/config.bal icp_server/Config.toml

echo -e "\n=== Checking webserver.bal for SPA serving ==="
rg -n -A5 -B5 "listener.*9445|listener.*9460" icp_server/webserver.bal

echo -e "\n=== Checking auth_service.bal for port binding ==="
rg -n -A3 "listener.*9445|listener.*9460" icp_server/auth_service.bal

Repository: wso2/integration-control-plane

Length of output: 640


🏁 Script executed:

#!/bin/bash
# Check the actual documentation files mentioned in the review

echo "=== specs/Deployement.md around line 56 ==="
sed -n '50,60p' specs/Deployement.md 2>/dev/null || echo "File not found or error reading"

echo -e "\n=== specs/Architecture.md around line 24 ==="
sed -n '20,30p' specs/Architecture.md 2>/dev/null || echo "File not found or error reading"

echo -e "\n=== specs/Features.md around line 17 ==="
sed -n '12,22p' specs/Features.md 2>/dev/null || echo "File not found or error reading"

echo -e "\n=== All files in specs/ directory ==="
ls -la specs/ 2>/dev/null || echo "specs directory not found"

Repository: wso2/integration-control-plane

Length of output: 3544


🏁 Script executed:

#!/bin/bash
# Check if 9460 is used anywhere in the codebase

echo "=== Searching for 9460 in entire codebase ==="
rg "9460" --type-list | head -20
echo "---"
rg "9460" -i 2>/dev/null || echo "No matches found for 9460"

echo -e "\n=== Searching for 9460 in markdown/documentation ==="
rg "9460" -t md 2>/dev/null || echo "No matches in markdown files"

echo -e "\n=== Searching for references to service ports ==="
rg "localhost:[0-9]{4}" specs/ -o 2>/dev/null | sort | uniq

Repository: wso2/integration-control-plane

Length of output: 897


Change port from 9460 to 9445 in Deployement.md

Port 9460 is incorrect. Per specs/Architecture.md and specs/Features.md, the SPA and auth services run on port 9445. This is also confirmed by icp_server/config.bal which sets serverPort = 9445. Update line 56 to reference https://localhost:9445 instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/Deployement.md` at line 56, Update the incorrect port in Deployement.md
from 9460 to 9445 so the URL becomes https://localhost:9445; this aligns
Deployement.md with specs/Architecture.md and specs/Features.md and with the
serverPort setting in icp_server/config.bal (serverPort = 9445).

Comment thread specs/Testing.md Outdated
3. H2 database initialization test
4. Docker Compose integration tests against MySQL

Environment: Node.js 22.19.0, JDK 17, Ballerina 2201.13.1.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check all Node.js version references in the CI workflow.

echo "=== Checking pr-check.yml for Node.js version settings ==="
rg -n "node-version|Node" .github/workflows/pr-check.yml

echo -e "\n=== Checking for Node version in package.json ==="
fd -t f "package.json" -x cat {}

Repository: wso2/integration-control-plane

Length of output: 1570


🏁 Script executed:

#!/bin/bash
# Get the full pr-check.yml workflow to understand what each step does
cat -n .github/workflows/pr-check.yml | head -70

Repository: wso2/integration-control-plane

Length of output: 2303


🏁 Script executed:

#!/bin/bash
# Read specs/Testing.md to see the context around line 37
cat -n specs/Testing.md | head -50

Repository: wso2/integration-control-plane

Length of output: 1514


Update Testing.md line 37 to reflect the correct Node.js version used for the main build.

The documentation states "Environment: Node.js 22.19.0", but the workflow uses Node.js 22.19.0 only for the Prettier formatting check. The main build, pnpm install, and integration tests use Node.js 20. Update line 37 to clarify that the primary build environment uses Node.js 20, or explicitly state the version breakdown if both are intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@specs/Testing.md` at line 37, Update the Environment line in Testing.md to
accurately reflect that the primary build (pnpm install and integration tests)
runs on Node.js 20 while Node.js 22.19.0 is only used for the Prettier
formatting check; edit the Environment entry in Testing.md to either state
"Node.js 20 (primary build: pnpm install, integration tests); Node.js 22.19.0
(Prettier formatting check)" or simply change the single version to "Node.js 20"
if you prefer a simpler statement.


Development

Successfully merging this pull request may close these issues.

Implement UI support to list MI artifacts
