Changes from 2 commits
4 changes: 4 additions & 0 deletions action.yml
@@ -20,6 +20,10 @@ inputs:
required: false
default: ''
type: string
ai_model:
description: 'AI model to use for degradation analysis (e.g. claude-3-5-haiku-latest, gpt-4o-mini). Provider is auto-detected from the model name prefix.'
Copilot AI Mar 23, 2026

ai_model is configurable, but enabling the feature requires setting AI_TOKEN via environment variables; that requirement isn’t discoverable from action.yml. Consider adding an explicit ai_token input (intended for secrets.*) or at least documenting in the input description that an environment variable is required and that bundle diff JSON will be sent to a third-party AI provider when enabled.

Suggested change
description: 'AI model to use for degradation analysis (e.g. claude-3-5-haiku-latest, gpt-4o-mini). Provider is auto-detected from the model name prefix.'
description: 'AI model to use for degradation analysis (e.g. claude-3-5-haiku-latest, gpt-4o-mini). Provider is auto-detected from the model name prefix. Requires an AI_TOKEN environment variable (typically from secrets.*) for the provider API key, and when enabled the bundle diff JSON will be sent to the selected third-party AI provider.'

required: false
default: 'claude-3-5-haiku-latest'

runs:
using: 'node20'
168 changes: 162 additions & 6 deletions dist/index.js
@@ -8945,8 +8945,8 @@ The following characters are not allowed in files that are uploaded due to limit
let headers = {};
let status;
let url;
const fetch = requestOptions.request && requestOptions.request.fetch || lib;
return fetch(requestOptions.url, Object.assign({
const fetch1 = requestOptions.request && requestOptions.request.fetch || lib;
return fetch1(requestOptions.url, Object.assign({
method: requestOptions.method,
body: requestOptions.body,
headers: requestOptions.headers,
@@ -45297,7 +45297,7 @@ The following characters are not allowed in files that are uploaded due to limit
this.emit('terminated', error);
}
}
function fetch(input, init = {}) {
function fetch1(input, init = {}) {
webidl.argumentLengthCheck(arguments, 1, {
header: 'globalThis.fetch'
});
@@ -45966,7 +45966,7 @@ The following characters are not allowed in files that are uploaded due to limit
}
}
module.exports = {
fetch,
fetch: fetch1,
Fetch,
fetching,
finalizeAndReportTiming
@@ -96551,6 +96551,112 @@ var __webpack_exports__ = {};
await core.summary.write();
console.log('✅ Bundle size report card generated successfully');
}
function detectProvider(model) {
return model.toLowerCase().startsWith('claude') ? 'anthropic' : 'openai';
}
async function callAnthropicAPI(prompt, token, model) {
const response = await fetch('https://api.anthropic.com/v1/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': token,
'anthropic-version': '2023-06-01'
},
body: JSON.stringify({
model,
max_tokens: 2048,
messages: [
{
role: 'user',
content: prompt
}
]
})
});
if (!response.ok) {
const error = await response.text();
throw new Error(`Anthropic API error ${response.status}: ${error}`);
}
const data = await response.json();
return data.content[0].text;
}
async function callOpenAIAPI(prompt, token, model) {
const response = await fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`
},
body: JSON.stringify({
model,
max_tokens: 2048,
messages: [
{
role: 'user',
content: prompt
}
]
})
});
if (!response.ok) {
const error = await response.text();
throw new Error(`OpenAI API error ${response.status}: ${error}`);
}
const data = await response.json();
return data.choices[0].message.content;
}
function buildPrompt(diffData) {
const MAX_CHARS = 50000;
let diffStr = JSON.stringify(diffData, null, 2);
if (diffStr.length > MAX_CHARS) diffStr = diffStr.substring(0, MAX_CHARS) + '\n... (truncated due to size)';
return `You are a frontend performance expert analyzing a JavaScript bundle size diff report generated by Rsdoctor (a Webpack/Rspack bundle analyzer).

Please analyze the following bundle diff JSON data and provide a concise report covering:

1. **Size Regression Summary**: Which assets/chunks increased significantly in size
2. **Root Cause Analysis**: Likely causes of size increases based on the diff data
3. **Risk Assessment**: Overall severity — Low / Medium / High — with a brief justification
4. **Optimization Recommendations**: Specific, actionable steps to reduce the regressions

Focus especially on:
- Assets or chunks with >5% or >10 KB size increase
- Newly added large assets or modules
- Changes to initial/entry chunks (highest priority)
- Potential duplicate dependencies

Bundle diff data:
\`\`\`json
${diffStr}
\`\`\`

Respond in concise GitHub-flavored Markdown suitable for a PR comment. If there are no regressions, say so clearly.`;
}
async function analyzeWithAI(diffJsonPath, token, model = 'claude-3-5-haiku-latest') {
if (!token) {
console.log('ℹ️ No AI token provided, skipping AI analysis');
return null;
}
if (!external_fs_.existsSync(diffJsonPath)) {
console.log(`⚠️ Bundle diff JSON not found at ${diffJsonPath}, skipping AI analysis`);
return null;
}
try {
const diffData = JSON.parse(external_fs_.readFileSync(diffJsonPath, 'utf8'));
const prompt = buildPrompt(diffData);
const provider = detectProvider(model);
console.log(`🤖 Running AI analysis with ${provider} (${model})...`);
const analysis = 'anthropic' === provider ? await callAnthropicAPI(prompt, token, model) : await callOpenAIAPI(prompt, token, model);
console.log('✅ AI analysis completed');
return {
analysis,
provider,
model
};
} catch (error) {
console.warn(`⚠️ AI analysis failed: ${error}`);
return null;
}
}
var external_util_ = __webpack_require__("util");
var out = __webpack_require__("./node_modules/.pnpm/fast-glob@3.3.3/node_modules/fast-glob/out/index.js");
var out_default = /*#__PURE__*/ __webpack_require__.n(out);
@@ -96646,7 +96752,7 @@ var __webpack_exports__ = {};
}
return pathParts[0] || 'root';
}
async function processSingleFile(fullPath, currentCommitHash, targetCommitHash, baselineUsedFallback, baselineLatestCommitHash) {
async function processSingleFile(fullPath, currentCommitHash, targetCommitHash, baselineUsedFallback, baselineLatestCommitHash, aiToken, aiModel) {
const fileName = external_path_default().basename(fullPath);
const relativePath = external_path_default().relative(process.cwd(), fullPath);
const pathParts = relativePath.split(external_path_default().sep);
@@ -96747,6 +96853,43 @@ var __webpack_exports__ = {};
} catch (e) {
console.warn(`⚠️ Failed to upload diff html for ${projectName}: ${e}`);
}
if (aiToken) try {
const diffJsonPath = external_path_default().join(tempOutDir, `rsdoctor-diff-${projectName}.json`);
const defaultDiffJsonPath = external_path_default().join(tempOutDir, 'rsdoctor-diff.json');
try {
const cliEntry = require.resolve('@rsdoctor/cli', {
paths: [
process.cwd()
]
});
const binCliEntry = external_path_default().join(external_path_default().dirname(external_path_default().dirname(cliEntry)), 'bin', 'rsdoctor');
runRsdoctorViaNode(binCliEntry, [
'bundle-diff',
'--json',
`--baseline=${baselineJsonPath}`,
`--current=${fullPath}`
]);
} catch (e) {
console.log(`⚠️ rsdoctor CLI (json) not found in node_modules: ${e}`);
try {
const shellCmd = `npx @rsdoctor/cli bundle-diff --json --baseline="${baselineJsonPath}" --current="${fullPath}"`;
console.log(`🛠️ Running rsdoctor --json via npx: ${shellCmd}`);
await execFileAsync('sh', [
'-c',
shellCmd
], {
cwd: tempOutDir
});
} catch (npxError) {
console.log(`⚠️ npx approach (json) also failed: ${npxError}`);
}
}
if (external_fs_.existsSync(defaultDiffJsonPath) && !external_fs_.existsSync(diffJsonPath)) await external_fs_.promises.rename(defaultDiffJsonPath, diffJsonPath);
const resolvedJsonPath = external_fs_.existsSync(diffJsonPath) ? diffJsonPath : defaultDiffJsonPath;
report.aiAnalysis = await analyzeWithAI(resolvedJsonPath, aiToken, aiModel);
} catch (e) {
console.warn(`⚠️ Failed to generate JSON diff for AI analysis: ${e}`);
}
} catch (e) {
console.warn(`⚠️ rsdoctor bundle-diff failed for ${projectName}: ${e}`);
}
@@ -96769,6 +96912,9 @@ var __webpack_exports__ = {};
});
const currentCommitHash = githubService.getCurrentCommitHash();
console.log(`Current commit hash: ${currentCommitHash}`);
const aiToken = process.env.AI_TOKEN || '';
const aiModel = (0, core.getInput)('ai_model') || 'claude-3-5-haiku-latest';
if (aiToken) console.log(`🤖 AI analysis enabled (model: ${aiModel})`);
let targetCommitHash = null;
let baselineUsedFallback = false;
let baselineLatestCommitHash;
@@ -96825,7 +96971,7 @@
if (isDispatch) console.log('🔧 Processing workflow_dispatch event - uploading artifacts and comparing with baseline');
else console.log('📥 Detected pull request event - processing files');
for (const fullPath of matchedFiles){
const report = await processSingleFile(fullPath, currentCommitHash, targetCommitHash, baselineUsedFallback, baselineLatestCommitHash);
const report = await processSingleFile(fullPath, currentCommitHash, targetCommitHash, baselineUsedFallback, baselineLatestCommitHash, aiToken, aiModel);
projectReports.push(report);
if (isDispatch) {
const uploadResponse = await uploadArtifact(fullPath, currentCommitHash);
@@ -96934,6 +97080,16 @@ var __webpack_exports__ = {};
}
if (reportsWithChanges.length > 1) commentBody += '</details>\n\n';
}
const reportsWithAI = projectReports.filter((r)=>r.aiAnalysis);
if (reportsWithAI.length > 0) {
commentBody += '<details>\n<summary><b>🤖 AI Degradation Analysis</b> (Click to expand)</summary>\n\n';
for (const report of reportsWithAI)if (report.aiAnalysis) {
if (reportsWithAI.length > 1) commentBody += `#### 📁 ${report.projectName}\n\n`;
commentBody += report.aiAnalysis.analysis + '\n\n';
commentBody += `<sub>Analysis by ${report.aiAnalysis.model}</sub>\n\n`;
}
commentBody += '</details>\n\n';
}
commentBody += '*Generated by [Rsdoctor GitHub Action](https://rsdoctor.rs/guide/start/action)*';
try {
await githubService.updateOrCreateComment(context.payload.pull_request.number, commentBody);
4 changes: 2 additions & 2 deletions examples/rsbuild-demo/src/App.tsx
@@ -12,12 +12,12 @@ const App = () => {
<p>Start building amazing things with Rsbuild.</p>

<div className="button-container">
<button onClick={handleClick} className="primary-button">
{/* <button onClick={handleClick} className="primary-button">
Click Me!
</button>
<button onClick={() => console.log('Secondary button')} className="secondary-button">
Secondary Action
</button>
</button> */}
Comment on lines +15 to +20
Copilot AI Mar 23, 2026

This change comments out an entire button block, but the file already renders the same buttons in the next <div className="button-container">. If this was only for local debugging, it should be removed/reverted to keep the example clean; if it’s intentional, consider deleting the duplicate section rather than leaving commented JSX in the source.

</div>

<div className="button-container">
131 changes: 131 additions & 0 deletions src/ai-analysis.ts
@@ -0,0 +1,131 @@
import * as fs from 'fs';

export interface AIAnalysisResult {
analysis: string;
provider: string;
model: string;
}

function detectProvider(model: string): 'anthropic' | 'openai' {
return model.toLowerCase().startsWith('claude') ? 'anthropic' : 'openai';
}

async function callAnthropicAPI(prompt: string, token: string, model: string): Promise<string> {
const response = await fetch('https://api.anthropic.com/v1/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': token,
'anthropic-version': '2023-06-01',
},
body: JSON.stringify({
model,
max_tokens: 2048,
messages: [{ role: 'user', content: prompt }],
}),
});

if (!response.ok) {
const error = await response.text();
throw new Error(`Anthropic API error ${response.status}: ${error}`);
}

const data = await response.json() as any;
return data.content[0].text as string;
}
Comment on lines +37 to +39
Copilot AI Mar 23, 2026

callAnthropicAPI assumes data.content[0].text exists. If the API returns an error shape (or content is empty), this will throw a confusing runtime error later. Consider validating the response JSON structure and throwing a clearer error when expected fields are missing (similar for OpenAI response parsing).

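A minimal shape check along the lines this comment suggests could look as follows. This is a sketch, not code from the PR; the `extractAnthropicText` helper name is hypothetical, and it assumes the documented Anthropic success shape `{ content: [{ text: string }] }`:

```typescript
// Hypothetical helper (not part of this PR): validate the Anthropic response
// shape before indexing into it, so a malformed or error-shaped body fails
// with a clear, actionable message instead of a TypeError.
function extractAnthropicText(data: unknown, model: string): string {
  const d = data as { content?: Array<{ text?: unknown }> };
  if (!Array.isArray(d?.content) || typeof d.content[0]?.text !== 'string') {
    const preview = JSON.stringify(data)?.slice(0, 200);
    throw new Error(`Anthropic API (${model}) returned an unexpected response shape: ${preview}`);
  }
  return d.content[0].text;
}
```

`callAnthropicAPI` would then call this helper on the parsed JSON instead of returning `data.content[0].text` directly.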

async function callOpenAIAPI(prompt: string, token: string, model: string): Promise<string> {
const response = await fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${token}`,
},
body: JSON.stringify({
model,
max_tokens: 2048,
messages: [{ role: 'user', content: prompt }],
}),
});

if (!response.ok) {
const error = await response.text();
throw new Error(`OpenAI API error ${response.status}: ${error}`);
}

const data = await response.json() as any;
return data.choices[0].message.content as string;
}
Comment on lines +60 to +62
Copilot AI Mar 23, 2026

callOpenAIAPI assumes data.choices[0].message.content exists. If the response is missing choices (rate limit, validation error, etc.), this will throw a non-actionable exception. Add minimal shape checks (e.g., ensure choices is an array and the content is a string) and include provider/model context in the thrown error to aid debugging.

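The OpenAI path could be guarded the same way. Again a sketch with a hypothetical helper name, assuming the standard chat-completion shape `{ choices: [{ message: { content: string } }] }`:

```typescript
// Hypothetical helper (not part of this PR): check the OpenAI chat-completion
// response shape and include provider/model context in the thrown error so
// rate-limit or validation failures are easy to diagnose from action logs.
function extractOpenAIText(data: unknown, model: string): string {
  const d = data as { choices?: Array<{ message?: { content?: unknown } }> };
  const content = Array.isArray(d?.choices) ? d.choices[0]?.message?.content : undefined;
  if (typeof content !== 'string') {
    const preview = JSON.stringify(data)?.slice(0, 200);
    throw new Error(`OpenAI API (${model}) returned an unexpected response shape: ${preview}`);
  }
  return content;
}
```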

function buildPrompt(diffData: unknown): string {
// Truncate large diff data to avoid token limits (~50k chars)
const MAX_CHARS = 50000;
let diffStr = JSON.stringify(diffData, null, 2);
if (diffStr.length > MAX_CHARS) {
diffStr = diffStr.substring(0, MAX_CHARS) + '\n... (truncated due to size)';
}

return `You are a frontend performance expert analyzing a JavaScript bundle size diff report generated by Rsdoctor (a Webpack/Rspack bundle analyzer).

Please analyze the following bundle diff JSON data and provide a concise report covering:

1. **Size Regression Summary**: Which assets/chunks increased significantly in size
2. **Root Cause Analysis**: Likely causes of size increases based on the diff data
3. **Risk Assessment**: Overall severity — Low / Medium / High — with a brief justification
4. **Optimization Recommendations**: Specific, actionable steps to reduce the regressions

Focus especially on:
- Assets or chunks with >5% or >10 KB size increase
- Newly added large assets or modules
- Changes to initial/entry chunks (highest priority)
- Potential duplicate dependencies

Bundle diff data:
\`\`\`json
${diffStr}
\`\`\`

Respond in concise GitHub-flavored Markdown suitable for a PR comment. If there are no regressions, say so clearly.`;
}

/**
* Run AI degradation analysis on a bundle-diff JSON file.
*
* @param diffJsonPath Path to the JSON file produced by `rsdoctor bundle-diff --json`
* @param token AI API key (Anthropic or OpenAI)
* @param model Model name — auto-detects provider from prefix (default: claude-3-5-haiku-latest)
*/
export async function analyzeWithAI(
diffJsonPath: string,
token: string,
model = 'claude-3-5-haiku-latest',
): Promise<AIAnalysisResult | null> {
if (!token) {
console.log('ℹ️ No AI token provided, skipping AI analysis');
return null;
}
Comment on lines +103 to +112
Copilot AI Mar 23, 2026

New AI analysis functionality is introduced here, but there are no tests covering the new module. The repo already uses @rstest/core with mocks (see src/__tests__). Please add unit tests for analyzeWithAI covering: no token, missing JSON file, provider detection from model name, and successful/failed API calls (mock fetch responses).

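A self-contained sketch of the kind of mocked-fetch test this comment asks for, using plain assertions and a stubbed `globalThis.fetch` so it runs standalone (a real test in this repo would use @rstest/core; `callAnthropic` here is a simplified local stand-in for the PR's `callAnthropicAPI`, not the actual export):

```typescript
// Simplified stand-in mirroring callAnthropicAPI's request/parse logic.
async function callAnthropic(prompt: string, token: string, model: string): Promise<string> {
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': token,
      'anthropic-version': '2023-06-01',
    },
    body: JSON.stringify({ model, max_tokens: 2048, messages: [{ role: 'user', content: prompt }] }),
  });
  if (!response.ok) {
    throw new Error(`Anthropic API error ${response.status}: ${await response.text()}`);
  }
  const data = (await response.json()) as { content: Array<{ text: string }> };
  return data.content[0].text;
}

async function main(): Promise<void> {
  const realFetch = globalThis.fetch;
  try {
    // Success path: a well-formed body resolves to the message text.
    globalThis.fetch = (async () =>
      new Response(JSON.stringify({ content: [{ text: 'No regressions found.' }] }), {
        status: 200,
      })) as typeof fetch;
    const result = await callAnthropic('prompt', 'token', 'claude-3-5-haiku-latest');
    if (result !== 'No regressions found.') throw new Error('unexpected analysis text');

    // Failure path: a non-2xx status surfaces as a thrown, descriptive error.
    globalThis.fetch = (async () => new Response('rate limited', { status: 429 })) as typeof fetch;
    let threw = false;
    try {
      await callAnthropic('prompt', 'token', 'claude-3-5-haiku-latest');
    } catch {
      threw = true;
    }
    if (!threw) throw new Error('expected an error for HTTP 429');
  } finally {
    globalThis.fetch = realFetch; // always restore the real fetch
  }
}

main();
```

The same stubbing pattern covers the remaining cases the comment lists: no token and missing JSON file (both should resolve to `null` without any fetch call), and provider detection from the model prefix.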

if (!fs.existsSync(diffJsonPath)) {
console.log(`⚠️ Bundle diff JSON not found at ${diffJsonPath}, skipping AI analysis`);
return null;
}

try {
const diffData: unknown = JSON.parse(fs.readFileSync(diffJsonPath, 'utf8'));
const prompt = buildPrompt(diffData);
const provider = detectProvider(model);

console.log(`🤖 Running AI analysis with ${provider} (${model})...`);

const analysis =
provider === 'anthropic'
? await callAnthropicAPI(prompt, token, model)
: await callOpenAIAPI(prompt, token, model);

console.log('✅ AI analysis completed');
return { analysis, provider, model };
} catch (error) {
console.warn(`⚠️ AI analysis failed: ${error}`);
return null;
}
}