genkitx-aws-bedrock is a community plugin for using AWS Bedrock APIs with
Genkit. Built by Xavier Portilla Edo.
This Genkit plugin lets you use AWS Bedrock through its official APIs.
Install the plugin in your project with your favourite package manager:

```bash
npm install genkitx-aws-bedrock
# or
pnpm add genkitx-aws-bedrock
```
If you are using a Genkit version older than v0.9.0, use plugin version v1.9.0. If you are using Genkit v0.9.0 or later, use plugin version v1.10.0 or later.
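For example, a package.json pinning a compatible pair (the version ranges here simply follow the guidance above):

```json
{
  "dependencies": {
    "genkit": "^0.9.0",
    "genkitx-aws-bedrock": "^1.10.0"
  }
}
```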
To use the plugin, you need to configure it with your AWS credentials. There are several approaches depending on your environment.
You can configure the plugin by calling the genkit function with your AWS region and model:

```typescript
import { genkit, z } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from "genkitx-aws-bedrock";

const ai = genkit({
  plugins: [
    awsBedrock({ region: "<my-region>" }),
  ],
  model: amazonNovaProV1,
});
```

If you have set the AWS_ environment variables, you can initialize it like this:
```typescript
import { genkit, z } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from "genkitx-aws-bedrock";

const ai = genkit({
  plugins: [
    awsBedrock(),
  ],
  model: amazonNovaProV1,
});
```

In production environments, it is often necessary to install an additional library to handle authentication. One approach is to use the @aws-sdk/credential-providers package:
```typescript
import { fromEnv } from "@aws-sdk/credential-providers";

const ai = genkit({
  plugins: [
    awsBedrock({
      region: "us-east-1",
      credentials: fromEnv(),
    }),
  ],
});
```

Ensure you have a .env file with the necessary AWS credentials, and remember to add it to your .gitignore so sensitive credentials are not exposed:
```
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
```
For local development, you can supply the credentials directly:

```typescript
const ai = genkit({
  plugins: [
    awsBedrock({
      region: "us-east-1",
      credentials: {
        accessKeyId: awsAccessKeyId.value(),
        secretAccessKey: awsSecretAccessKey.value(),
      },
    }),
  ],
});
```

Each approach lets you manage authentication appropriately for your environment.
If you want to use a model that uses Cross-region Inference Endpoints, you can specify the region in the model configuration. Cross-region inference uses inference profiles to increase throughput and improve resiliency by routing your requests across multiple AWS Regions during peak utilization bursts:
```typescript
import { genkit, z } from 'genkit';
import { awsBedrock, anthropicClaude35SonnetV2 } from "genkitx-aws-bedrock";

const ai = genkit({
  plugins: [
    awsBedrock(),
  ],
  model: anthropicClaude35SonnetV2("us"),
});
```

You can find more information about the available models in the AWS Bedrock plugin documentation.
The simplest way to call the text generation model is by using the helper function generate:
```typescript
import { genkit, z } from 'genkit';
import { awsBedrock, amazonNovaProV1 } from "genkitx-aws-bedrock";

// ...configure Genkit (as shown above)...

// Basic usage of an LLM
const response = await ai.generate({
  prompt: 'Tell me a joke.',
});

console.log(response.text);
```
You can also wrap generation in a flow:

```typescript
// ...configure Genkit (as shown above)...
export const myFlow = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (subject) => {
    const llmResponse = await ai.generate({
      prompt: `Suggest an item for the menu of a ${subject} themed restaurant`,
    });
    return llmResponse.text;
  }
);
```
Tool use is also supported:

```typescript
// ...configure Genkit (as shown above)...
const specialToolInputSchema = z.object({ meal: z.enum(["breakfast", "lunch", "dinner"]) });

const specialTool = ai.defineTool(
  {
    name: "specialTool",
    description: "Retrieves today's special for the given meal",
    inputSchema: specialToolInputSchema,
    outputSchema: z.string(),
  },
  async ({ meal }): Promise<string> => {
    // Retrieve up-to-date information and return it. Here, we just return a
    // fixed value.
    return "Baked beans on toast";
  }
);

const result = await ai.generate({
  tools: [specialTool],
  prompt: "What's for breakfast?",
});

console.log(result.text);
```

For more detailed examples and explanations of other functionality, refer to the official Genkit documentation.
If you want to use a model that is not exported by this plugin, you can register it using the customModels option when initializing the plugin:
```typescript
import { genkit, z } from 'genkit';
import { awsBedrock } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [
    awsBedrock({
      region: 'us-east-1',
      customModels: ['openai.gpt-oss-20b-1:0'], // Register custom models
    }),
  ],
});

// Use the custom model by specifying its name as a string
export const customModelFlow = ai.defineFlow(
  {
    name: 'customModelFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (subject) => {
    const llmResponse = await ai.generate({
      model: 'aws-bedrock/openai.gpt-oss-20b-1:0', // Use any registered custom model
      prompt: `Tell me about ${subject}`,
    });
    return llmResponse.text;
  }
);
```

Alternatively, you can define a custom model outside of the plugin initialization:
```typescript
import { defineAwsBedrockModel } from 'genkitx-aws-bedrock';

const customModel = defineAwsBedrockModel('openai.gpt-oss-20b-1:0', {
  region: 'us-east-1'
});

const response = await ai.generate({
  model: customModel,
  prompt: 'Hello!'
});
```

This plugin includes an onCallGenkit helper function (similar to Firebase Functions' onCallGenkit) that makes it easy to deploy Genkit flows as AWS Lambda functions.
```typescript
import { genkit, z } from 'genkit';
import { awsBedrock, amazonNovaProV1, onCallGenkit } from 'genkitx-aws-bedrock';

const ai = genkit({
  plugins: [awsBedrock()],
  model: amazonNovaProV1(),
});

const myFlow = ai.defineFlow(
  {
    name: 'myFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (input) => {
    const { text } = await ai.generate({ prompt: input });
    return text;
  }
);

// Export as Lambda handler
export const handler = onCallGenkit(myFlow);
```

When streaming: true is set, onCallGenkit returns a streaming Lambda handler directly, enabling real incremental streaming via Lambda Function URLs. This is compatible with streamFlow from genkit/beta/client.
```typescript
const myStreamingFlow = ai.defineFlow(
  {
    name: 'myStreamingFlow',
    inputSchema: z.object({ subject: z.string() }),
    outputSchema: z.object({ joke: z.string() }),
    streamSchema: z.string(),
  },
  async (input, sendChunk) => {
    const { stream, response } = await ai.generateStream({
      prompt: `Tell me a joke about ${input.subject}`,
      output: { schema: z.object({ joke: z.string() }) },
    });
    for await (const chunk of stream) {
      sendChunk(chunk.text);
    }
    const result = await response;
    return result.output || { joke: result.text };
  }
);

// streaming: true returns a StreamifyHandler directly
export const streamingHandler = onCallGenkit(
  { streaming: true, cors: { origin: '*' } },
  myStreamingFlow
);
```

Deploy with a Lambda Function URL in serverless.yml:
```yaml
functions:
  myStreamingFunction:
    handler: src/index.streamingHandler
    url:
      invokeMode: RESPONSE_STREAM
      cors: true
```

Note: API Gateway buffers responses and does not support streaming. You must use a Lambda Function URL with InvokeMode: RESPONSE_STREAM.
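On the client side, such a streaming handler can be consumed with streamFlow from genkit/beta/client. This is a sketch; the Function URL below is a placeholder you would replace with your deployed endpoint:

```typescript
import { streamFlow } from 'genkit/beta/client';

// Placeholder: substitute your Lambda Function URL here.
const url = 'https://<your-function-url>.lambda-url.us-east-1.on.aws/';

const result = streamFlow({
  url,
  input: { subject: 'cats' },
});

// Chunks arrive incrementally as the Lambda streams them.
for await (const chunk of result.stream) {
  process.stdout.write(String(chunk));
}

// The final flow output resolves once the stream completes.
console.log(await result.output);
```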
You can customize CORS, authentication, logging, and error handling:

```typescript
import { onCallGenkit, requireApiKey } from 'genkitx-aws-bedrock';

export const handler = onCallGenkit(
  {
    // CORS configuration
    cors: {
      origin: 'https://myapp.com',
      credentials: true,
    },
    // Context provider for authentication
    contextProvider: requireApiKey('X-API-Key', process.env.API_KEY!),
    // Debug logging
    debug: true,
    // Custom error handling
    onError: async (error) => ({
      statusCode: 500,
      message: error.message,
    }),
  },
  myFlow
);
```

The plugin provides built-in context provider helpers that follow Genkit's ContextProvider pattern (the same as @genkit-ai/express):
```typescript
import {
  allowAll,           // Allow all requests
  requireHeader,      // Require a specific header
  requireApiKey,      // Require API key in header
  requireBearerToken, // Require Bearer token with custom validation
  allOf,              // Combine providers with AND logic
  anyOf,              // Combine providers with OR logic
} from 'genkitx-aws-bedrock';

// Public endpoint
export const publicHandler = onCallGenkit(
  { contextProvider: allowAll() },
  myFlow
);

// API key authentication
export const apiKeyHandler = onCallGenkit(
  { contextProvider: requireApiKey('X-API-Key', 'my-secret-key') },
  myFlow
);

// Bearer token with custom validation
export const tokenHandler = onCallGenkit(
  {
    contextProvider: requireBearerToken(async (token) => {
      const user = await validateJWT(token);
      return { auth: { user } };
    })
  },
  myFlow
);

// Combine multiple providers (all must pass)
export const strictHandler = onCallGenkit(
  {
    contextProvider: allOf(
      requireHeader('X-Client-ID'),
      requireBearerToken(async (token) => {
        return await validateToken(token);
      })
    )
  },
  myFlow
);
```

The handler follows the Genkit callable protocol (the same as @genkit-ai/express).
Request body (callable protocol):

```json
{
  "data": { /* flow input */ }
}
```

Direct input is also supported for convenience:

```json
{ /* flow input directly */ }
```

Successful response:
```json
{
  "result": { /* flow output */ }
}
```

Error response:

```json
{
  "error": {
    "status": "UNAUTHENTICATED",
    "message": "Missing auth token"
  }
}
```

Streaming response (SSE, via streaming: true):
```
data: {"message": "chunk text"}
data: {"message": "more text"}
data: {"result": {"joke": "full result"}}
```
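For illustration, the request and response envelopes above can be handled with a couple of small helpers. This is a sketch, not part of the plugin, and the endpoint URL passed to callFlow is a placeholder:

```typescript
// Shapes of the callable-protocol envelopes described above.
type CallableRequest<T> = { data: T };
type CallableResponse<R> =
  | { result: R }
  | { error: { status: string; message: string } };

// Wrap flow input in the callable envelope.
function wrapInput<T>(input: T): CallableRequest<T> {
  return { data: input };
}

// Unwrap a response, turning protocol errors into thrown exceptions.
function unwrapResponse<R>(body: CallableResponse<R>): R {
  if ('error' in body) {
    throw new Error(`${body.error.status}: ${body.error.message}`);
  }
  return body.result;
}

// Call a deployed (non-streaming) handler with fetch; the URL is a placeholder.
async function callFlow<T, R>(url: string, input: T): Promise<R> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(wrapInput(input)),
  });
  return unwrapResponse(await res.json() as CallableResponse<R>);
}
```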
See the Lambda example for a complete working project with Serverless Framework deployment, and the Client example for calling flows from a TypeScript client.
This plugin supports all currently available chat/completion and embedding models from AWS Bedrock, including image input and multimodal models.
You can find the full API reference in the API Reference Documentation.
Want to contribute to the project? That's awesome! Head over to our Contribution Guidelines.
Note
This repository depends on Google's Genkit. For issues and questions related to Genkit itself, please refer to the instructions in Genkit's repository.
Reach out by opening a discussion on GitHub Discussions.
This project is licensed under the Apache 2.0 License.