Integrate AWS Bedrock as a new LLM service provider #6430
base: main
Conversation
Adds support for using models hosted on AWS Bedrock, specifically Anthropic Claude models. Key changes:
- Added the `@aws-sdk/client-bedrock-runtime` dependency.
- Updated constants, server config, and auth logic for Bedrock.
- Implemented a backend API handler (`app/api/bedrock/index.ts`) to communicate with the Bedrock API, handling streaming and non-streaming responses and formatting the output to be OpenAI compatible.
- Updated the dynamic API router (`app/api/[provider]/[...path]/route.ts`) to dispatch requests to the Bedrock handler.
- Created a frontend client (`app/client/platforms/bedrock.ts`) and updated the client factory (`app/client/api.ts`).
- Updated `.env.template` with the necessary Bedrock environment variables (AWS keys, region, enable flag) and an example of using `CUSTOM_MODELS` to alias Bedrock models.
- Added logging to `getClientApi` for better debugging of provider input and standardized provider names. - Updated `ChatActions` to handle lowercase provider IDs, converting them to TitleCase for consistency with the ServiceProvider enum. - Implemented defaulting to OpenAI when provider ID is missing and added relevant logging for session updates. - Enhanced logging in `useChatStore` to track API call preparations and provider configurations.
- Enhanced error messages in `getAwsCredentials` to provide clearer feedback when AWS Bedrock is not configured properly. - Added checks for missing Bedrock Access Key ID and Secret Access Key, with specific error messages for each case.
- Updated logging in the Bedrock API handler to include detailed information about the request model, stream status, and message count for improved debugging and monitoring.
Someone is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough
This pull request integrates AWS Bedrock support into the application.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Client
    participant AP as API Route Handler
    participant BH as Bedrock Handler
    participant AWS as AWS Bedrock Service
    participant CL as BedrockApi (Client)
    C->>AP: Send request to Bedrock endpoint
    AP->>BH: Forward request to bedrockHandler
    BH->>AWS: Invoke model command (streaming or non-streaming)
    AWS-->>BH: Return response (or stream data)
    BH-->>AP: Format and relay response
    AP-->>C: Deliver final response
```
Actionable comments posted: 4
🧹 Nitpick comments (11)
app/store/chat.ts (1)
459-478: Enhanced API call logging for debugging Bedrock integration.
The added logging helps debug provider resolution, particularly checking if the provider is Bedrock. This is valuable during integration, but consider cleaning up excessive logging in production code.
Consider:
- Using structured logging instead of multiple concatenated values
- Removing or conditionalizing verbose logging in production
- Standardizing language (mixing Chinese comments with English logs may confuse non-Chinese speakers)
```diff
 const providerNameFromConfig = modelConfig.providerName;
-console.log(
-  "[onUserInput] Preparing API call. Provider from config:",
-  providerNameFromConfig,
-  "| Type:",
-  typeof providerNameFromConfig,
-  "| Is Enum value (Bedrock)?:",
-  providerNameFromConfig === ServiceProvider.Bedrock, // compare with the enum
-  "| Is 'Bedrock' string?:",
-  providerNameFromConfig === "Bedrock", // compare with the string
-  "| Model:",
-  modelConfig.model,
-);
+if (process.env.NODE_ENV !== "production") {
+  console.log("[onUserInput] API call details:", {
+    provider: providerNameFromConfig,
+    type: typeof providerNameFromConfig,
+    isBedrockEnum: providerNameFromConfig === ServiceProvider.Bedrock,
+    isBedrockString: providerNameFromConfig === "Bedrock",
+    model: modelConfig.model,
+  });
+}
```

app/components/chat.tsx (1)
696-709: Consider configurable log levels.
Lines 697-709 add multiple console logs that may be too verbose for production environments. Consider using a logging framework or adjusting log verbosity levels to avoid clutter.
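As one option, a small level-gated logger could replace the raw `console.log` calls. This is only a sketch; the `NEXT_PUBLIC_LOG_LEVEL` variable and `debugLog` helper are hypothetical names, not part of this PR:

```ts
// Sketch of a level-gated logger (assumed names, not existing project code).
type LogLevel = "debug" | "info" | "warn" | "error";

const LEVELS: Record<LogLevel, number> = { debug: 0, info: 1, warn: 2, error: 3 };

// Next.js exposes NEXT_PUBLIC_* variables to client code; default to "warn".
const currentLevel: LogLevel =
  (process.env.NEXT_PUBLIC_LOG_LEVEL as LogLevel) ?? "warn";

export function debugLog(...args: unknown[]) {
  // Only emit when the configured level includes debug output.
  if (LEVELS[currentLevel] <= LEVELS.debug) {
    console.log("[ChatActions]", ...args);
  }
}

// Usage: debugLog("raw provider id:", providerId);
```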
app/api/auth.ts (1)
117-120: Evaluate sensitive logging.
Displaying the provider name in logs may be helpful for debugging but can be unnecessary in multi-tenant or production setups. Assess whether to remove or downgrade this log to reduce potential noise or confusion.
app/api/bedrock/index.ts (5)
14-32: Add fallback or improved error messages.
Presently, `getAwsCredentials()` throws generic errors if Bedrock is misconfigured. For a more user-friendly experience, consider including instructions or next steps in the error messages to help resolve the missing or incorrect environment variables.
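For instance, each error could name the exact environment variable to set and where. This is a sketch that reuses the config fields the PR adds; the message wording is illustrative:

```ts
// Sketch: actionable error messages for a misconfigured Bedrock setup.
import { getServerSideConfig } from "../../config/server";

function getAwsCredentials() {
  const config = getServerSideConfig();
  if (!config.isBedrock) {
    throw new Error(
      "AWS Bedrock is disabled. Set ENABLE_AWS_BEDROCK=true in your .env and restart the server.",
    );
  }
  if (!config.bedrockAccessKeyId) {
    throw new Error(
      "Missing AWS_ACCESS_KEY_ID. Copy it from .env.template into your .env file.",
    );
  }
  if (!config.bedrockSecretAccessKey) {
    throw new Error(
      "Missing AWS_SECRET_ACCESS_KEY. Copy it from .env.template into your .env file.",
    );
  }
  return {
    accessKeyId: config.bedrockAccessKeyId as string,
    secretAccessKey: config.bedrockSecretAccessKey as string,
  };
}
```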
34-58: Streamline subpath authorization logic.
Currently, only one path from `ALLOWED_PATH` is authorized. If future expansions include multiple Bedrock endpoints, consider using a more scalable approach (e.g., partially matching known subpath patterns) to reduce manual maintenance of the `ALLOWED_PATH` set.
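One scalable shape is prefix matching against a small list of known Bedrock subpaths; the `BEDROCK_PATH_PREFIXES` name and its entries below are illustrative only:

```ts
// Sketch: authorize Bedrock subpaths by prefix instead of exact set membership.
const BEDROCK_PATH_PREFIXES = ["chat"]; // hypothetical; extend as endpoints are added

function isAllowedBedrockPath(subpath: string): boolean {
  return BEDROCK_PATH_PREFIXES.some(
    (prefix) => subpath === prefix || subpath.startsWith(`${prefix}/`),
  );
}

// isAllowedBedrockPath("chat")       -> true
// isAllowedBedrockPath("chat/extra") -> true
// isAllowedBedrockPath("admin")      -> false
```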
106-127: Plan for broader model support.
Currently, only Anthropic Claude-based models are supported. If you plan to integrate other Bedrock models (e.g., Amazon Titan), building a more generic payload construction mechanism here will reduce refactoring later.
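A sketch of one generic mechanism: a registry of payload builders keyed by model-id prefix, so adding a family (e.g., Amazon Titan) does not require touching the request handler. The names and the non-Claude entries are illustrative, not part of the PR:

```ts
// Sketch: per-model-family payload builders for Bedrock (assumed helper names).
type ChatMessage = { role: string; content: unknown };
type PayloadBuilder = (
  messages: ChatMessage[],
  maxTokens: number,
  temperature: number,
) => object;

const payloadBuilders: Array<[prefix: string, build: PayloadBuilder]> = [
  [
    "anthropic.claude",
    (messages, maxTokens, temperature) => ({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: maxTokens,
      temperature,
      messages,
    }),
  ],
  // ["amazon.titan", (messages, maxTokens, temperature) => ({ /* Titan shape */ })],
];

function buildBedrockPayload(
  model: string,
  messages: ChatMessage[],
  maxTokens: number,
  temperature: number,
): object {
  const entry = payloadBuilders.find(([prefix]) => model.includes(prefix));
  if (!entry) throw new Error(`Unsupported Bedrock model: ${model}`);
  return entry[1](messages, maxTokens, temperature);
}
```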
207-248: Return "finish_reason" carefully.
The non-streaming path always uses `"stop"` for `finish_reason`. In some edge cases, the model may reach a max token limit or encounter an error mid-generation. Differentiate these scenarios for a more accurate representation of why the generation ended.
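One way to differentiate is to map the model's reported stop reason onto the OpenAI-style values instead of hard-coding `"stop"`. Whether the non-streaming Bedrock response exposes a `stop_reason` field in exactly this shape is an assumption to verify:

```ts
// Sketch: translate an Anthropic-style stop reason into an OpenAI finish_reason.
function toOpenAIFinishReason(stopReason: string | undefined): string {
  switch (stopReason) {
    case "max_tokens":
      return "length"; // generation hit the configured token limit
    case "end_turn":
    case "stop_sequence":
      return "stop"; // model finished normally
    default:
      return "stop"; // unknown or missing; consider logging these cases
  }
}
```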
250-253: Log details on errors.
While the error is returned to the client, consider logging additional context (e.g., command parameters, partial responses) for troubleshooting, ensuring sensitive data (like AWS keys) is excluded.
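For example, a small redaction helper could log request context while keeping credentials out of the output (a sketch; the helper name and redacted field list are assumptions):

```ts
// Sketch: log Bedrock failure context with obvious secrets stripped out.
function logBedrockError(error: unknown, context: Record<string, unknown>) {
  const REDACTED_KEYS = ["accessKeyId", "secretAccessKey", "sessionToken"];
  const safeContext = Object.fromEntries(
    Object.entries(context).map(([key, value]) =>
      REDACTED_KEYS.includes(key) ? [key, "[redacted]"] : [key, value],
    ),
  );
  console.error("[Bedrock] Request failed:", safeContext, error);
}

// e.g. logBedrockError(err, { model, stream, messageCount: messages.length });
```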
app/config/server.ts (1)
177-177: Consider grouping Bedrock config under a single object.
Currently, you return separate properties for `bedrockRegion`, `bedrockAccessKeyId`, `bedrockSecretAccessKey`, etc. Grouping them into a nested object (e.g., `bedrockConfig`) within the returned config may improve clarity and detect misconfigurations more easily.

```diff
 export const getServerSideConfig = () => {
   ...
+  const bedrockConfig = {
+    isEnabled: isBedrock,
+    region: process.env.AWS_REGION,
+    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
+    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
+    endpoint: process.env.AWS_BEDROCK_ENDPOINT,
+  };
   const config = {
     ...
-    isBedrock,
-    bedrockRegion: process.env.AWS_REGION,
-    bedrockAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
-    bedrockSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
-    bedrockEndpoint: process.env.AWS_BEDROCK_ENDPOINT,
+    bedrockConfig,
     ...
   };
   return config;
 };
```

Also applies to: 248-253
app/client/platforms/bedrock.ts (2)
16-120: Check streaming chunk parsing reliability.
- Splitting lines by “\n” could fail if partial JSON lines span multiple chunks. This might lead to JSON parse errors or data loss.
- Consider buffering partial lines across chunks to handle SSE streaming more robustly.
Everything else, including the calls to `onUpdate`, `onFinish`, and `onError`, looks consistent.

```diff
+// Potential partial line buffer
+let leftover = "";
 while (true) {
   const { done, value } = await reader.read();
   if (done) break;
-  const text = decoder.decode(value, { stream: true });
-  const lines = text.split("\n");
+  const chunk = leftover + decoder.decode(value, { stream: true });
+  const lines = chunk.split("\n");
+  leftover = lines.pop() ?? ""; // store incomplete line
   for (const line of lines) {
     ...
   }
 }
```
136-140: No direct TTS support.
Throwing an error in `speech` is reasonable if AWS Bedrock TTS is not yet implemented. Consider returning a user-friendly error message or a specialized exception if you decide to add partial TTS handling later.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (11)
- `.env.template` (2 hunks)
- `app/api/[provider]/[...path]/route.ts` (2 hunks)
- `app/api/auth.ts` (2 hunks)
- `app/api/bedrock/index.ts` (1 hunks)
- `app/client/api.ts` (4 hunks)
- `app/client/platforms/bedrock.ts` (1 hunks)
- `app/components/chat.tsx` (1 hunks)
- `app/config/server.ts` (3 hunks)
- `app/constant.ts` (4 hunks)
- `app/store/chat.ts` (1 hunks)
- `package.json` (1 hunks)
🧰 Additional context used
🧬 Code Definitions (4)
app/api/bedrock/index.ts (2)
- app/config/server.ts (1): `getServerSideConfig` (128-275)
- app/api/auth.ts (1): `auth` (27-129)

app/store/chat.ts (1)
- app/client/api.ts (2): `ClientApi` (136-229), `getClientApi` (363-436)

app/client/api.ts (1)
- app/client/platforms/bedrock.ts (1): `BedrockApi` (8-140)

app/client/platforms/bedrock.ts (2)
- app/client/api.ts (5): `ChatOptions` (76-86), `getHeaders` (244-361), `LLMUsage` (88-91), `LLMModel` (93-99), `SpeechOptions` (67-74)
- app/constant.ts (1): `Bedrock` (272-274)
🔇 Additional comments (18)
package.json (1)
24-24: Added AWS Bedrock SDK dependency.
The addition of `@aws-sdk/client-bedrock-runtime` is necessary for integrating AWS Bedrock as a new LLM provider. This aligns with the PR objective of supporting AWS Bedrock-hosted models.

app/api/[provider]/[...path]/route.ts (2)
17-17: Added Bedrock handler import.
The import statement for the Bedrock handler follows the same pattern as other provider handlers. This is a clean implementation.

54-55: Added Bedrock API path routing.
The case statement for `ApiPath.Bedrock` correctly routes requests to the Bedrock handler, maintaining consistency with the existing API routing pattern.

app/constant.ts (4)
75-75: Added Bedrock to ApiPath enum.
The addition follows the consistent naming pattern used for other providers.

134-134: Added Bedrock to ServiceProvider enum.
This is necessary for client-side provider identification and follows the established pattern.

161-161: Added Bedrock to ModelProvider enum.
Properly integrates Bedrock into the model provider ecosystem for use in model selection logic.

272-274: Added Bedrock constant with ChatPath.
The Bedrock constant correctly defines the API path for chat completions. Note that unlike some other providers (e.g., OpenAI), this only defines ChatPath without other capabilities like speech or image generation.
Is this intentional or are you planning to add additional capabilities for Bedrock in future updates? Some AWS Bedrock models support image generation and other multimodal capabilities that could be added later.
app/store/chat.ts (1)
475-478: Improved API client instantiation with fallback.
The modification correctly handles cases where `providerNameFromConfig` might be undefined by providing a default value of `ServiceProvider.OpenAI`. This increases robustness.

app/client/api.ts (3)
27-27: Use consistent import grouping.
While this import correctly references the new `BedrockApi`, ensure consistent import grouping or ordering if your coding guidelines require alphabetical or categorized ordering. Otherwise, this is fine.
177-179: Good extension for Bedrock provider.
Adding a `ModelProvider.Bedrock` case is straightforward and consistent with the existing pattern. No issues detected.
363-434: Check for fallback correctness and future expansions.
- The conversion logic for string-based providers to the `ServiceProvider` enum is well-structured. Logging the standardized provider aids debugging.
- The final `default` case falling back to GPT is appropriate, but ensure that your team is aligned on defaulting to GPT if an unknown provider string is received. This behavior might hide configuration errors.

No immediate issues, but consider adding additional known lowercase mappings (e.g., "anthropic", "baidu") to unify your usage across the codebase; a sketch follows below.
Would you like me to scan for other occurrences of string-based providers in the repository to suggest more mappings?
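A possible shape for such a mapping is shown here. Only the OpenAI and Bedrock entries are grounded in this PR; the commented-out ones are illustrative placeholders:

```ts
// Sketch: one shared lowercase-id -> ServiceProvider lookup (assumed location: app/client).
import { ServiceProvider } from "../constant";

const providerAliases: Record<string, ServiceProvider> = {
  openai: ServiceProvider.OpenAI,
  bedrock: ServiceProvider.Bedrock,
  // anthropic: ServiceProvider.Anthropic, // illustrative; add if the enum member exists
  // baidu: ServiceProvider.Baidu,         // illustrative; add if the enum member exists
};

export function normalizeProvider(raw: string | undefined): ServiceProvider {
  if (!raw) return ServiceProvider.OpenAI; // mirrors the PR's existing fallback
  return providerAliases[raw.toLowerCase()] ?? ServiceProvider.OpenAI;
}
```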
app/config/server.ts (2)
167-171: Validate AWS credentials for Bedrock.
The `isBedrock` logic correctly checks environment variables for AWS credentials. Ensure that you handle scenarios where credentials are invalid or missing at runtime, especially if `ENABLE_AWS_BEDROCK` is set to `"true"` but the credentials are incorrect.
Would you like a script to search for usage of these environment variables and confirm they are validated properly?

274-274: Clear separation of concerns.
Wrapping the configuration in a `config` object prior to returning is a neat approach; it decouples environment variable loading from usage. Code looks good.

.env.template (2)
60-60: Verify custom models correctness.
You have removed the `-all` option and added new models, including a `bedrock`-based model named `sonnet`. Ensure these custom tokens are recognized by the back end. Missing or mismatched model references could lead to unexpected errors.
Please confirm that "us.anthropic.claude-3-5-sonnet-20241022-v2:0@bedrock=sonnet" is a valid internal or Bedrock model reference. If needed, I can provide a script to locate all references to "sonnet" in the codebase.

85-103: Follow best practices for secrets in templates.
Defining placeholders for AWS credentials indicates good security hygiene. Be sure to remove or comment these lines out if you do not want new developers to accidentally commit real credentials.

app/client/platforms/bedrock.ts (3)
1-14: Ensure bedrock path constants are validated.
The `path` method concatenates `ApiPath.Bedrock` with a subpath. The final endpoint depends on your backend route definitions. Validate that `ApiPath.Bedrock` is correct and that backend routes match this prefix.

122-128: Usage reporting placeholder.
Returning static usage data is acceptable as a placeholder. Ensure you revisit this method if future usage/billing metrics for Bedrock become relevant.

130-134: Model fetch is intentionally unimplemented.
Acknowledging that dynamic model retrieval from Bedrock is complex, the empty array fallback is fine for now. No issues.
```ts
      case ModelProvider.Bedrock:
        console.log(
          "[Auth] Using AWS credentials for Bedrock, no API key override.",
        );
        return { error: false };
      case ModelProvider.GPT:
```
🛠️ Refactor suggestion
Verify AWS credentials before returning.
This block immediately returns `{ error: false }` for Bedrock, skipping an explicit credential validation step. Consider ensuring that AWS credential checks (e.g., presence of required environment variables) have definitively passed before returning success.
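A minimal sketch of such a pre-check, reusing the config fields this PR introduces (the helper name and placement are illustrative):

```ts
// Sketch: verify Bedrock env configuration before auth() reports success.
import { getServerSideConfig } from "../config/server";

export function checkBedrockAuth(): { error: boolean; msg?: string } {
  const serverConfig = getServerSideConfig();
  if (
    !serverConfig.isBedrock ||
    !serverConfig.bedrockAccessKeyId ||
    !serverConfig.bedrockSecretAccessKey
  ) {
    console.error("[Auth] Bedrock selected but AWS credentials are not configured.");
    return { error: true, msg: "AWS Bedrock is not configured on the server." };
  }
  console.log("[Auth] Using AWS credentials for Bedrock, no API key override.");
  return { error: false };
}
```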
🧰 Tools
🪛 Biome (1.9.4)
[error] 104-104: Useless case clause, because the default clause is present.
Unsafe fix: Remove the useless case. (lint/complexity/noUselessSwitchCase)
```ts
if (stream) {
  const command = new InvokeModelWithResponseStreamCommand({
    modelId: model,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify(payload),
  });
  const response = await client.send(command);

  if (!response.body) {
    throw new Error("Empty response stream from Bedrock");
  }
  const responseBody = response.body;

  const encoder = new TextEncoder();
  const decoder = new TextDecoder();
  const readableStream = new ReadableStream({
    async start(controller) {
      try {
        for await (const event of responseBody) {
          if (event.chunk?.bytes) {
            const chunkData = JSON.parse(decoder.decode(event.chunk.bytes));
            let responseText = "";
            let finishReason: string | null = null;

            if (
              chunkData.type === "content_block_delta" &&
              chunkData.delta.type === "text_delta"
            ) {
              responseText = chunkData.delta.text || "";
            } else if (chunkData.type === "message_stop") {
              finishReason =
                chunkData["amazon-bedrock-invocationMetrics"]
                  ?.outputTokenCount > 0
                  ? "stop"
                  : "length"; // Example logic
            }

            // Format as OpenAI SSE chunk
            const sseData = {
              id: `chatcmpl-${nanoid()}`,
              object: "chat.completion.chunk",
              created: Math.floor(Date.now() / 1000),
              model: model,
              choices: [
                {
                  index: 0,
                  delta: { content: responseText },
                  finish_reason: finishReason,
                },
              ],
            };
            controller.enqueue(
              encoder.encode(`data: ${JSON.stringify(sseData)}\n\n`),
            );

            if (finishReason) {
              controller.enqueue(encoder.encode("data: [DONE]\n\n"));
              break; // Exit loop after stop message
            }
          }
        }
      } catch (error) {
        console.error("[Bedrock] Streaming error:", error);
        controller.error(error);
      } finally {
        controller.close();
      }
    },
  });

  return new NextResponse(readableStream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
} else {
```
🛠️ Refactor suggestion
Handle partial chunk scenarios.
While streaming data, a partial chunk may be returned from Bedrock if the chunk boundaries don't align with JSON structures. Consider buffering incomplete segments before parsing to prevent JSON parse errors.
- Modified the assignment of `providerName` in `ChatActions` to assert non-null, ensuring type safety and preventing potential runtime errors.
Actionable comments posted: 0
♻️ Duplicate comments (1)
app/components/chat.tsx (1)
710-733: 🛠️ Refactor suggestion: Improved provider ID handling with case normalization.
The code now properly handles the conversion from lowercase provider IDs (e.g., "bedrock") to TitleCase format required by the ServiceProvider enum. This is a good improvement that will help maintain consistency when new providers are added.
However, when an unknown provider ID is encountered, the code only logs an error without setting a fallback provider, which could lead to undefined behavior.
Consider adding a fallback to OpenAI for unknown providers, similar to how you handle missing providers:
```diff
 } else {
   console.error(
     `[ChatActions] Unknown provider ID: ${providerId}`,
   );
-  // Handle error or fallback if needed
+  // Fallback to OpenAI for unknown providers
+  targetProvider = ServiceProvider.OpenAI;
+  console.warn(
+    `[ChatActions] Unknown provider ID: ${providerId}, defaulting to OpenAI.`,
+  );
 }
```
🧹 Nitpick comments (1)
app/components/chat.tsx (1)
740-750: Non-null assertion on targetProvider could be improved.
Line 744 uses a non-null assertion (`targetProvider!`) which may cause runtime errors if targetProvider is somehow null despite the enclosing if check.
Consider a safer approach:

```diff
 if (targetProvider) {
   // Only update if we found a valid provider
   chatStore.updateTargetSession(session, (session) => {
     session.mask.modelConfig.model = model as ModelType;
-    session.mask.modelConfig.providerName = targetProvider!; // Use the Enum value (Assert non-null)
+    session.mask.modelConfig.providerName = targetProvider; // Already checked in if condition
     session.mask.syncGlobalConfig = false;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `app/components/chat.tsx` (1 hunks)
🔇 Additional comments (2)
app/components/chat.tsx (2)
697-708: Good addition of detailed logging for debugging provider selection.
Adding logging for both raw and parsed values will help with troubleshooting model selection issues that may arise when implementing a new provider like Bedrock.
751-760: Improved provider-specific toast message display.
The code now correctly displays different toast messages based on the provider type, which improves the user experience by showing more relevant information.
- Removed redundant Bedrock configuration check in the API handler for cleaner code. - Enhanced logging in `ChatActions` to provide clearer feedback when an unknown provider ID is encountered, defaulting to OpenAI when necessary.
Actionable comments posted: 0
♻️ Duplicate comments (2)
app/api/bedrock/index.ts (2)
60-70: 🛠️ Refactor suggestion: Consolidate configuration checks.
The server-side config is retrieved twice: once at line 61 and again inside `getAwsCredentials()` called at line 68. This creates redundant code paths and potential inconsistencies.

```diff
 try {
-  const config = getServerSideConfig();
+  const credentials = getAwsCredentials();
+  const config = getServerSideConfig();
   const bedrockRegion = config.bedrockRegion as string;
   const bedrockEndpoint = config.bedrockEndpoint;
   const client = new BedrockRuntimeClient({
     region: bedrockRegion,
-    credentials: getAwsCredentials(),
+    credentials: credentials,
     endpoint: bedrockEndpoint || undefined,
   });
```

Alternatively, modify `getAwsCredentials()` to return both credentials and the config:

```diff
 function getAwsCredentials() {
   const config = getServerSideConfig();
   if (!config.isBedrock) {
     throw new Error(
       "AWS Bedrock is not configured properly (ENABLE_AWS_BEDROCK is not true)",
     );
   }
   if (!config.bedrockAccessKeyId) {
     throw new Error("AWS Bedrock Access Key ID is missing or empty.");
   }
   if (!config.bedrockSecretAccessKey) {
     throw new Error("AWS Bedrock Secret Access Key is missing or empty.");
   }
   return {
+    config,
     accessKeyId: config.bedrockAccessKeyId as string,
     secretAccessKey: config.bedrockSecretAccessKey as string,
   };
 }
```
140-142: 🛠️ Refactor suggestion: Handle partial chunk scenarios.
When streaming data from Bedrock, a partial chunk may be returned if the chunk boundaries don't align with complete JSON structures. Direct parsing could lead to JSON parse errors.
Implement a buffer for incomplete JSON:
```diff
 try {
+  let buffer = '';
   for await (const event of responseBody) {
     if (event.chunk?.bytes) {
-      const chunkData = JSON.parse(decoder.decode(event.chunk.bytes));
+      buffer += decoder.decode(event.chunk.bytes);
+
+      // Try to parse complete JSON objects from the buffer
+      try {
+        const chunkData = JSON.parse(buffer);
+        // Reset buffer after successful parse
+        buffer = '';
```
🧹 Nitpick comments (5)
app/api/bedrock/index.ts (5)
248-249: Move import statement to the top of the file.
The import for `nanoid` should be moved to the top of the file with other imports to follow standard import organization practices.

```diff
-// Need nanoid for unique IDs
-import { nanoid } from "nanoid";
```

Add this to the imports section at the top:

```diff
 import {
   BedrockRuntimeClient,
   InvokeModelWithResponseStreamCommand,
   InvokeModelCommand,
 } from "@aws-sdk/client-bedrock-runtime";
+import { nanoid } from "nanoid";
```
152-157: Add documentation for finish reason logic.
The determination of the finish reason is complex and would benefit from clearer documentation explaining why different finish reasons are assigned based on token counts.

```diff
 } else if (chunkData.type === "message_stop") {
+  // Determine finish reason based on output token count:
+  // - "stop" when tokens were generated (normal completion)
+  // - "length" when no tokens were generated (likely hit max token limit)
   finishReason =
     chunkData["amazon-bedrock-invocationMetrics"]
       ?.outputTokenCount > 0
       ? "stop"
       : "length"; // Example logic
 }
```
90-97: Add support for non-Claude Bedrock models.
The implementation currently only supports Claude models from Anthropic on Bedrock, rejecting other models. This limits the flexibility of the Bedrock integration.
Consider implementing adapters for other Bedrock models like Llama, Titan, or Cohere to expand the available options:

```diff
 // --- Payload formatting for Claude on Bedrock ---
 const isClaudeModel = model.includes("anthropic.claude");
-if (!isClaudeModel) {
-  return NextResponse.json(
-    { error: true, msg: "Unsupported Bedrock model: " + model },
-    { status: 400 },
-  );
-}
+
+let payload;
+if (isClaudeModel) {
+  // Claude payload formatting
+  const systemPrompts = messages.filter((msg: any) => msg.role === "system");
+  const userAssistantMessages = messages.filter(
+    (msg: any) => msg.role !== "system",
+  );
+
+  payload = {
+    anthropic_version: "bedrock-2023-05-31",
+    max_tokens: max_tokens || 4096,
+    temperature: temperature,
+    messages: userAssistantMessages.map((msg: any) => ({
+      role: msg.role, // 'user' or 'assistant'
+      content:
+        typeof msg.content === "string"
+          ? [{ type: "text", text: msg.content }]
+          : msg.content,
+    })),
+    ...(systemPrompts.length > 0 && {
+      system: systemPrompts.map((msg: any) => msg.content).join("\n"),
+    }),
+  };
+} else if (model.includes("llama")) {
+  // Add Llama model support
+  // Format payload according to Llama requirements
+} else if (model.includes("amazon.titan")) {
+  // Add Titan model support
+  // Format payload according to Titan requirements
+} else {
+  return NextResponse.json(
+    { error: true, msg: "Unsupported Bedrock model: " + model },
+    { status: 400 },
+  );
+}
```
108-118: Improve content type validation for messages.
The current implementation assumes the message content is either a string or already in the expected format, but doesn't validate the structure for non-string content.
Add explicit validation for non-string content to ensure it matches Claude's expected format:

```diff
 messages: userAssistantMessages.map((msg: any) => ({
   role: msg.role, // 'user' or 'assistant'
   content:
     typeof msg.content === "string"
       ? [{ type: "text", text: msg.content }]
-      : msg.content, // Assuming MultimodalContent format is compatible
+      : validateMessageContent(msg.content), // Validate non-string content
 })),
```

```diff
+// Add this validation function elsewhere in the file
+function validateMessageContent(content: any) {
+  // Validate that content is an array
+  if (!Array.isArray(content)) {
+    throw new Error("Non-string message content must be an array");
+  }
+
+  // Validate each content item has required properties
+  for (const item of content) {
+    if (!item.type) {
+      throw new Error("Each content item must have a 'type' property");
+    }
+
+    if (item.type === "text" && typeof item.text !== "string") {
+      throw new Error("Text content items must have a valid 'text' property");
+    }
+    // Add additional validation for other content types as needed
+  }
+
+  return content;
+}
```
227-238: Simplify token usage calculation.
The current token usage calculation logic is unnecessarily complex with multiple nullish coalescing operators and conditional expressions.

```diff
 usage: {
-  prompt_tokens:
-    responseBody["amazon-bedrock-invocationMetrics"]?.inputTokenCount ??
-    -1,
-  completion_tokens:
-    responseBody["amazon-bedrock-invocationMetrics"]
-      ?.outputTokenCount ?? -1,
-  total_tokens:
-    (responseBody["amazon-bedrock-invocationMetrics"]
-      ?.inputTokenCount ?? 0) +
-    (responseBody["amazon-bedrock-invocationMetrics"]
-      ?.outputTokenCount ?? 0) || -1,
+  prompt_tokens: responseBody["amazon-bedrock-invocationMetrics"]?.inputTokenCount ?? -1,
+  completion_tokens: responseBody["amazon-bedrock-invocationMetrics"]?.outputTokenCount ?? -1,
+  total_tokens: calculateTotalTokens(responseBody),
 },
```

```diff
+// Add this helper function
+function calculateTotalTokens(responseBody: any) {
+  const metrics = responseBody["amazon-bedrock-invocationMetrics"];
+  if (!metrics) return -1;
+
+  const inputTokens = metrics.inputTokenCount ?? 0;
+  const outputTokens = metrics.outputTokenCount ?? 0;
+
+  return inputTokens + outputTokens || -1;
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `app/api/bedrock/index.ts` (1 hunks)
- `app/components/chat.tsx` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- app/components/chat.tsx
🧰 Additional context used
🧬 Code Definitions (1)
app/api/bedrock/index.ts (2)
- app/config/server.ts (1): `getServerSideConfig` (128-275)
- app/api/auth.ts (1): `auth` (27-129)
💻 Change Type

🔀 Description of Change
Integrates AWS Bedrock as a new LLM service provider. Users can now use Bedrock-hosted models in NextChat by configuring AWS credentials and a region.

Main changes include:
- Added the `@aws-sdk/client-bedrock-runtime` dependency.
- Added Bedrock-related enums and model definitions in `app/constant.ts`.
- Updated `app/config/server.ts` to read and handle the Bedrock environment variable configuration.
- Added the backend API handler `app/api/bedrock/index.ts`, responsible for communicating with the Bedrock API, handling streaming and non-streaming responses, and formatting the output to be OpenAI compatible.
- Updated `app/api/[provider]/[...path]/route.ts` to dispatch `/api/bedrock/...` requests to the Bedrock handler.
- Updated `app/api/auth.ts` to skip adding an Authorization header for Bedrock requests.
- Added the frontend client `app/client/platforms/bedrock.ts`.
- Updated `app/client/api.ts` (the `ClientApi` constructor and `getClientApi`) to support the Bedrock client.
- Updated `.env.template` with the Bedrock-related configuration items and provided an example of using `CUSTOM_MODELS` to rename Bedrock models.

📝 Additional Information
- Currently targets Anthropic Claude models (e.g., `us.anthropic.claude-3-5-sonnet-20241022-v2:0`), since their API structure is similar to the other Anthropic implementations in this project. Other Bedrock models that expect a similar input/output format should work in theory, but the payload format may need adjustment.
- To enable this feature, provide valid `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` values in the `.env` file and set `ENABLE_AWS_BEDROCK` to `true`.
- Use the `CUSTOM_MODELS` environment variable to add or rename Bedrock models, for example: `+us.anthropic.claude-3-5-sonnet-20241022-v2:0@bedrock=sonnet`.

Summary by CodeRabbit
New Features
Refactor
Chores