4 changes: 2 additions & 2 deletions DEVELOPER_GUIDE.md
@@ -6,7 +6,7 @@ At the moment, there is no dedicated development container, thus you need to con

- [pre-commit](https://pre-commit.com/)
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- - [Python](Python) 3.10. You can install it through uv using `uv python install 3.10`
+ - [Python](https://www.python.org/) 3.10. You can install it through uv using `uv python install 3.10`
- [Git](https://git-scm.com/) (if using code repository)
- (optional) [AWS CLI](https://aws.amazon.com/cli/). Some servers will require to use your AWS credentials to interact with your AWS account. Configure your credentials:

@@ -88,7 +88,7 @@ npx @modelcontextprotocol/inspector \
```
where `<absolute path to your server code>` is the absolute path to the server code, for instance `/Users/myuser/mcp/src/aws-documentation-mcp-server/awslabs/aws_documentation_mcp_server`.

- Inspector will run your server on locahost (for instance: http://127.0.0.1:6274). You can then open your browser and connect to the server. For up to date instructions on how to use Inspector, please refer to the [official documentation](https://modelcontextprotocol.io/docs/tools/inspector).
+ Inspector will run your server on localhost (for instance: http://127.0.0.1:6274). You can then open your browser and connect to the server. For up to date instructions on how to use Inspector, please refer to the [official documentation](https://modelcontextprotocol.io/docs/tools/inspector).

### Tests

2 changes: 1 addition & 1 deletion README.md
@@ -770,7 +770,7 @@ For every new project, always look at your MCP servers and use mcp-core as the s

11. Once the custom prompt is pasted in, click **Done** to return to the chat interface.

- 12. Now you can begin asking questions and testing out the functionality of your installed AWS MCP Servers. The default option in the chat interface is is `Plan` which will provide the output for you to take manual action on (e.g. providing you a sample configuration that you copy and paste into a file). However, you can optionally toggle this to `Act` which will allow Cline to act on your behalf (e.g. searching for content using a web browser, cloning a repository, executing code, etc). You can optionally toggle on the "Auto-approve" section to avoid having to click to approve the suggestions, however we recommend leaving this off during testing, especially if you have the Act toggle selected.
+ 12. Now you can begin asking questions and testing out the functionality of your installed AWS MCP Servers. The default option in the chat interface is `Plan` which will provide the output for you to take manual action on (e.g. providing you a sample configuration that you copy and paste into a file). However, you can optionally toggle this to `Act` which will allow Cline to act on your behalf (e.g. searching for content using a web browser, cloning a repository, executing code, etc). You can optionally toggle on the "Auto-approve" section to avoid having to click to approve the suggestions, however we recommend leaving this off during testing, especially if you have the Act toggle selected.

**Note:** For the best results, please prompt Cline to use the desired AWS MCP Server you wish to use. For example, `Using the Terraform MCP Server, do...`
</details>
2 changes: 1 addition & 1 deletion docusaurus/docs/installation.md
@@ -216,7 +216,7 @@ For Windows:
7. By default, Cline will be set as the API provider, which has limits for the free tier. Next, let's update the API provider to be AWS Bedrock, so we can use the LLMs through Bedrock, which would have billing go through your connected AWS account.
8. Click the settings gear to open up the Cline settings. Then under **API Provider**, switch this from `Cline` to `AWS Bedrock` and select `AWS Profile` for the authentication type. As a note, the `AWS Credentials` option works as well, however it uses a static credentials (Access Key ID and Secret Access Key) instead of temporary credentials that are automatically redistributed when the token expires, so the temporary credentials with an AWS Profile is the more secure and recommended method.
9. Fill out the configuration based on the existing AWS Profile you wish to use, select the desired AWS Region, and enable cross-region inference. Click **Done** to return to the chat interface.
- 10. Now you can begin asking questions and testing out the functionality of your installed AWS MCP Servers. The default option in the chat interface is is `Plan` which will provide the output for you to take manual action on (e.g. providing you a sample configuration that you copy and paste into a file). However, you can optionally toggle this to `Act` which will allow Cline to act on your behalf (e.g. searching for content using a web browser, cloning a repository, executing code, etc). You can optionally toggle on the "Auto-approve" section to avoid having to click to approve the suggestions, however we recommend leaving this off during testing, especially if you have the Act toggle selected.
+ 10. Now you can begin asking questions and testing out the functionality of your installed AWS MCP Servers. The default option in the chat interface is `Plan` which will provide the output for you to take manual action on (e.g. providing you a sample configuration that you copy and paste into a file). However, you can optionally toggle this to `Act` which will allow Cline to act on your behalf (e.g. searching for content using a web browser, cloning a repository, executing code, etc). You can optionally toggle on the "Auto-approve" section to avoid having to click to approve the suggestions, however we recommend leaving this off during testing, especially if you have the Act toggle selected.

**Note:** For the best results, please prompt Cline to use the desired AWS MCP Server you wish to use. For example, `Using the Terraform MCP Server, do...`

@@ -73,9 +73,9 @@ Using `basic.consume` with a long-lived consumer is more efficient than polling

You can use the RabbitMQ pre-fetch value to optimize how your consumers consume messages. RabbitMQ implements the channel pre-fetch mechanism provided by AMQP 0-9-1 by applying the pre-fetch count to consumers as opposed to channels. The pre-fetch value is used to specify how many messages are being sent to the consumer at any given time. By default, RabbitMQ sets an unlimited buffer size for client applications.

- There are a variety of factors to consider when setting a pre-fetch count for your RabbitMQ consumers. First, consider your consumers' environment and configuration. Because consumers need to keep all messages in memory as they are being processed, a high pre-fetch value can have a negative impact on your consumers' performance, and in some cases, can result in a consumer potentially crashing all together. Similarly, the RabbitMQ broker itself keeps all messages that it sends cached in memory until it recieves consumer acknowledgement. A high pre-fetch value can cause your RabbitMQ server to run out of memory quickly if automatic acknowledgement is not configured for consumers, and if consumers take a relatively long time to process messages.
+ There are a variety of factors to consider when setting a pre-fetch count for your RabbitMQ consumers. First, consider your consumers' environment and configuration. Because consumers need to keep all messages in memory as they are being processed, a high pre-fetch value can have a negative impact on your consumers' performance, and in some cases, can result in a consumer potentially crashing all together. Similarly, the RabbitMQ broker itself keeps all messages that it sends cached in memory until it receives consumer acknowledgement. A high pre-fetch value can cause your RabbitMQ server to run out of memory quickly if automatic acknowledgement is not configured for consumers, and if consumers take a relatively long time to process messages.

- With the above considerations in mind, we recommend always setting a pre-fetch value in order to prevent situations where a RabbitMQ broker or its consumers run out of memory due to a large number number of unprocessed, or unacknowledged messages. If you need to optimize your brokers to process large volumes of messages, you can test your brokers and consumers using a range of pre-fetch counts to determine the value at which point network overhead becomes largely insignificant compared to the time it takes a consumer to process messages.
+ With the above considerations in mind, we recommend always setting a pre-fetch value in order to prevent situations where a RabbitMQ broker or its consumers run out of memory due to a large number of unprocessed, or unacknowledged messages. If you need to optimize your brokers to process large volumes of messages, you can test your brokers and consumers using a range of pre-fetch counts to determine the value at which point network overhead becomes largely insignificant compared to the time it takes a consumer to process messages.
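The windowing behaviour this passage describes can be sketched with a small toy model (this is not a RabbitMQ client; `ToyBroker` and its methods are hypothetical names for illustration only). The broker delivers messages only while the count of unacknowledged messages is below the pre-fetch count, and each acknowledgement frees a slot:

```python
from collections import deque

# Toy model (not a real RabbitMQ client): illustrates how a per-consumer
# pre-fetch count bounds the number of unacknowledged messages a broker
# will have in flight to that consumer at any given time.
class ToyBroker:
    def __init__(self, prefetch_count):
        self.prefetch_count = prefetch_count
        self.queue = deque()   # messages waiting to be delivered
        self.unacked = set()   # messages in flight, not yet acknowledged

    def publish(self, msg):
        self.queue.append(msg)

    def deliver(self):
        """Deliver messages until the pre-fetch window is full."""
        delivered = []
        while self.queue and len(self.unacked) < self.prefetch_count:
            msg = self.queue.popleft()
            self.unacked.add(msg)
            delivered.append(msg)
        return delivered

    def ack(self, msg):
        """A consumer acknowledgement frees a slot in the window."""
        self.unacked.discard(msg)

broker = ToyBroker(prefetch_count=3)
for i in range(10):
    broker.publish(f"msg-{i}")

batch = broker.deliver()
print(batch)             # at most 3 messages in flight
broker.ack(batch[0])
print(broker.deliver())  # one ack frees one slot, so one more is delivered
```

With an unlimited pre-fetch value (RabbitMQ's default buffer behaviour, per the text above), the `while` condition never blocks and every queued message piles up in `unacked` at once, which is exactly the memory-pressure scenario the recommendation warns about.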

###### Note

@@ -96,7 +96,7 @@ channel.basicConsume(queueName, autoAck, "a-consumer-tag",

Unacknowledged messages must be cached in memory. You can limit the number of messages that a consumer pre-fetches by configuring [pre-fetch](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/best-practices-performance.html#configure-prefetching) settings for a client application.

- You can configure `consumer_timeout` to detect when consumers do not acknowledge deliveries. If the consumer does not send an acknowledgment within the timeout value, the channel will be closed, and you will recieve a `PRECONDITION_FAILED`. To diagnose the error, use the [UpdateConfiguration](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/configurations-configuration-id.html) API to increase the `consumer_timeout` value.
+ You can configure `consumer_timeout` to detect when consumers do not acknowledge deliveries. If the consumer does not send an acknowledgment within the timeout value, the channel will be closed, and you will receive a `PRECONDITION_FAILED`. To diagnose the error, use the [UpdateConfiguration](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/configurations-configuration-id.html) API to increase the `consumer_timeout` value.
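The timeout behaviour can be sketched as a toy simulation (not a real AMQP client; `ToyChannel` and its methods are illustrative assumptions, and in a real broker the timeout is set via configuration such as the UpdateConfiguration API mentioned above). A delivery that stays unacknowledged past `consumer_timeout` closes the channel with a `PRECONDITION_FAILED` error:

```python
# Toy simulation (not a real AMQP client): if a delivery stays
# unacknowledged past consumer_timeout, the broker closes the channel
# with a PRECONDITION_FAILED error.
class ChannelClosed(Exception):
    pass

class ToyChannel:
    def __init__(self, consumer_timeout):
        self.consumer_timeout = consumer_timeout
        self.pending = {}   # delivery tag -> simulated delivery time
        self.open = True
        self.clock = 0.0    # simulated seconds

    def deliver(self, tag):
        self.pending[tag] = self.clock

    def ack(self, tag):
        self.pending.pop(tag, None)

    def advance(self, seconds):
        """Advance the simulated clock and enforce the timeout."""
        self.clock += seconds
        for tag, delivered_at in self.pending.items():
            if self.clock - delivered_at > self.consumer_timeout:
                self.open = False
                raise ChannelClosed("PRECONDITION_FAILED - consumer timeout")

channel = ToyChannel(consumer_timeout=30)
channel.deliver("tag-1")
channel.advance(10)
channel.ack("tag-1")     # acknowledged in time: channel stays open
channel.deliver("tag-2")
try:
    channel.advance(60)  # 60s with no ack exceeds the 30s timeout
except ChannelClosed as exc:
    print(exc)
print(channel.open)      # False
```

This is why slow consumers combined with manual acknowledgement need either a longer `consumer_timeout` or faster per-message processing: once the deadline passes, every unacknowledged delivery on that channel is lost with the channel.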

## Step 3: Keep queues short

@@ -2563,7 +2563,7 @@
"period": 60,
"statistic": "Sum",
"threshold": {
- "justification": "Set the threshold to the number of failed records reflecting the tolerance of the the application for failed records. You can use historical data as reference for the acceptable failure value. You should also consider retries when setting the threshold because failed records can be retried in subsequent PutRecords calls."
+ "justification": "Set the threshold to the number of failed records reflecting the tolerance of the application for failed records. You can use historical data as reference for the acceptable failure value. You should also consider retries when setting the threshold because failed records can be retried in subsequent PutRecords calls."
},
"treatMissingData": "notBreaching"
}
@@ -5692,7 +5692,7 @@
{
"alarmRecommendations": [
{
- "alarmDescription": "This alarm helps in ensuring that there is available burst credit balance for the file system usage. When there is no available burst credit, applications access to the the file system will be limited due to low throughput. If the metric drops to 0 consistently, consider changing the throughput mode to [Elastic or Provisioned throughput mode](https://docs.aws.amazon.com/efs/latest/ug/performance.html#throughput-modes).",
+ "alarmDescription": "This alarm helps in ensuring that there is available burst credit balance for the file system usage. When there is no available burst credit, applications access to the file system will be limited due to low throughput. If the metric drops to 0 consistently, consider changing the throughput mode to [Elastic or Provisioned throughput mode](https://docs.aws.amazon.com/efs/latest/ug/performance.html#throughput-modes).",
"comparisonOperator": "LessThanOrEqualToThreshold",
"datapointsToAlarm": 15,
"dimensions": [
2 changes: 1 addition & 1 deletion src/terraform-mcp-server/README.md
@@ -44,7 +44,7 @@ MCP server for Terraform on AWS best practices, infrastructure as code patterns,
- **Terragrunt Workflow Execution** - Run Terragrunt commands directly
- Initialize, plan, validate, apply, run-all and destroy operations
- Pass variables and specify AWS regions
- - Configure terragrunt-config and and include/exclude paths flags
+ - Configure terragrunt-config and include/exclude paths flags
- Get formatted command output for analysis

## Tools and Resources