Commit 1345a33

Merge pull request #782 from aws-quickstart/task/1.10.1-release-prep

1.10.1 release - updated helm charts, libs, cdk

2 parents: d878d2b + 671bf35

File tree: 36 files changed, +88 -83 lines

.github/workflows/linkcheck.json (+8 -10)

````diff
@@ -13,15 +13,13 @@
     }
   ],
   "ignorePatterns": [
-    {
-      "pattern": [
-        "localhost"
-      ]
-    },
-    {
-      "pattern": [
-        "127.0.0.1"
-      ]
-    }
+    { "pattern": "localhost" },
+    { "pattern": "127.0.0.1" },
+    { "pattern": "../api" },
+    { "pattern": "https://helm.datadoghq.com" },
+    { "pattern": "https://sqs" },
+    { "pattern": "www.rsa-2048.example.com" },
+    { "pattern": "rsa-2048.example.com" },
+    { "pattern": "https://ingress-red-saas.instana.io/" }
   ]
 }
````

Makefile (+3)

````diff
@@ -31,6 +31,9 @@ list:
 	$(DEPS)
 	$(CDK) list
 
+markdown-link-check:
+	find docs -name "*.md" | xargs -n 1 markdown-link-check -q -c .github/workflows/linkcheck.json
+
 run-test:
 	npm test
 
````

README.md (+2 -2)

````diff
@@ -44,14 +44,14 @@ aws --version
 Install CDK matching the current version of the Blueprints QuickStart (which can be found in package.json).
 
 ```bash
-npm install -g aws-cdk@2.86.0
+npm install -g aws-cdk@2.88.0
 ```
 
 Verify the installation.
 
 ```bash
 cdk --version
-# must output 2.86.0
+# must output 2.88.0
 ```
 
 Create a new CDK project. We use `typescript` for this example.
````
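
For context, the version pin above protects the blueprint entry point used throughout these docs; a minimal sketch (the account/region wiring is illustrative, not part of this commit):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// Minimal blueprint entry point; compiles against the pinned CDK version.
blueprints.EksBlueprint.builder()
  .account(process.env.CDK_DEFAULT_ACCOUNT)
  .region(process.env.CDK_DEFAULT_REGION)
  .build(app, 'my-stack-name');
```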

docs/README.md (+2 -2)

````diff
@@ -44,14 +44,14 @@ aws --version
 Install CDK matching the current version of the Blueprints QuickStart (which can be found in package.json).
 
 ```bash
-npm install -g aws-cdk@2.86.0
+npm install -g aws-cdk@2.88.0
 ```
 
 Verify the installation.
 
 ```bash
 cdk --version
-# must output 2.86.0
+# must output 2.88.0
 ```
 
 Create a new CDK project. We use `typescript` for this example.
````

docs/addons/ack-addon.md (+3 -3)

````diff
@@ -25,7 +25,7 @@ const blueprint = blueprints.EksBlueprint.builder()
   .build(app, 'my-stack-name');
 ```
 
-> Pattern # 2 : This installs AWS Controller for Kubernetes for EC2 ACK controller using service name internally referencing service mapping values for helm options. After Installing this EC2 ACK Controller, the instructions in [Provision ACK Resource](https://preview--eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision EC2 namespaces `SecurityGroup` resources required for creating Amazon RDS database as an example.
+> Pattern # 2 : This installs AWS Controller for Kubernetes for EC2 ACK controller using service name internally referencing service mapping values for helm options. After Installing this EC2 ACK Controller, the instructions in [Provision ACK Resource](https://eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision EC2 namespaces `SecurityGroup` resources required for creating Amazon RDS database as an example.
 
 ```typescript
 import * as cdk from 'aws-cdk-lib';
@@ -44,7 +44,7 @@ const blueprint = blueprints.EksBlueprint.builder()
   .build(app, 'my-stack-name');
 ```
 
-> Pattern # 3 : This installs AWS Controller for Kubernetes for RDS ACK controller with user specified values. After Installing this RDS ACK Controller, the instructions in [Provision ACK Resource](https://preview--eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision Amazon RDS database using the RDS ACK controller as an example.
+> Pattern # 3 : This installs AWS Controller for Kubernetes for RDS ACK controller with user specified values. After Installing this RDS ACK Controller, the instructions in [Provision ACK Resource](https://eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/configureResources) can be used to provision Amazon RDS database using the RDS ACK controller as an example.
 
 ```typescript
 import * as cdk from 'aws-cdk-lib';
@@ -111,7 +111,7 @@ replicaset.apps/rds-chart-5f6f5b8fc7 1 1 1 5m36s
 ## aws-controller-8s references
 
 Please refer to following aws-controller-8s references for more information :
-- [ACK Workshop](https://preview--eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/)
+- [ACK Workshop](https://eksworkshop-v2-next.netlify.app/docs/gitops/controlplanes/ack/)
 - [ECR Gallery for ACK](https://gallery.ecr.aws/aws-controllers-k8s/)
 - [ACK GitHub](https://github.com/aws-controllers-k8s/community)
````
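
For context, Pattern # 2 above typically reduces to a sketch like this; the `AckServiceName` enum value and prop names are assumptions against the 1.10.x API:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// EC2 ACK controller resolved from the internal service-name mapping
// (helm options come from the service mapping values).
const ackAddOn = new blueprints.addons.AckAddOn({
  serviceName: blueprints.AckServiceName.EC2, // assumption: enum name per 1.10.x
});

blueprints.EksBlueprint.builder()
  .addOns(ackAddOn)
  .build(app, 'my-stack-name');
```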

docs/addons/argo-cd.md (+8 -8)

````diff
@@ -1,12 +1,12 @@
 # Argo CD Add-on
 
-[Argo CD](https://argoproj.github.io/argo-cd/) is a declarative, GitOps continuous delivery tool for Kubernetes. The Argo CD add-on provisions [Argo CD](https://argoproj.github.io/argo-cd/) into an EKS cluster, and can optionally bootstrap your workloads from public and private Git repositories.
+[Argo CD](https://argo-cd.readthedocs.io/en/stable/) is a declarative, GitOps continuous delivery tool for Kubernetes. The Argo CD add-on provisions [Argo CD](https://argo-cd.readthedocs.io/en/stable/) into an EKS cluster, and can optionally bootstrap your workloads from public and private Git repositories.
 
 The Argo CD add-on allows platform administrators to combine cluster provisioning and workload bootstrapping in a single step and enables use cases such as replicating an existing running production cluster in a different region in a matter of minutes. This is important for business continuity and disaster recovery cases as well as for cross-regional availability and geographical expansion.
 
-Please see the documentation below for details on automatic boostrapping with ArgoCD add-on. If you prefer manual bootstrapping (once your cluster is deployed with this add-on included), you can find instructions on getting started with Argo CD in our [Getting Started](/getting-started/#deploy-workloads-with-argocd) guide.
+Please see the documentation below for details on automatic boostrapping with ArgoCD add-on. If you prefer manual bootstrapping (once your cluster is deployed with this add-on included), you can find instructions on getting started with Argo CD in our [Getting Started](../getting-started.md#deploy-workloads-with-argocd) guide.
 
-Full Argo CD project documentation [can be found here](https://argoproj.github.io/argo-cd/).
+Full Argo CD project documentation [can be found here](https://argo-cd.readthedocs.io/en/stable/).
 
 ## Usage
 
@@ -26,12 +26,12 @@ const blueprint = blueprints.EksBlueprint.builder()
   .build(app, 'my-stack-name');
 ```
 
-The above will create an `argocd` namespace and install all Argo CD components. In order to bootstrap workloads you will need to change the default ArgoCD admin password and add repositories as specified in the [Getting Started](https://argoproj.github.io/argo-cd/getting_started/#port-forwarding) documentation.
+The above will create an `argocd` namespace and install all Argo CD components. In order to bootstrap workloads you will need to change the default ArgoCD admin password and add repositories as specified in the [Getting Started](https://argo-cd.readthedocs.io/en/stable/getting_started/#port-forwarding) documentation.
 
 ## Functionality
 
 1. Creates the namespace specified in the construction parameter (`argocd` by default).
-2. Deploys the [`argo-cd`](https://argoproj.github.io/argo-helm) Helm chart into the cluster.
+2. Deploys the [`argo-cd`](https://argoproj.github.io/argo-helm/) Helm chart into the cluster.
 3. Allows to specify `ApplicationRepository` selecting the required authentication method as SSH Key, username/password or username/token. Credentials are expected to be set in AWS Secrets Manager and replicated to the desired region. If bootstrap repository is specified, creates the initial bootstrap application which may be leveraged to bootstrap workloads and/or other add-ons through GitOps.
 4. Allows setting the initial admin password through AWS Secrets Manager, replicating to the desired region.
 5. Supports [standard helm configuration options](./index.md#standard-helm-add-on-configuration-options).
@@ -55,7 +55,7 @@ You can change the admin password through the Secrets Manager, but it will requi
 
 ## Bootstrapping
 
-The Blueprints framework provides an approach to bootstrap workloads and/or additional add-ons from a customer GitOps repository. In a general case, the bootstrap GitOps repository may contains an [App of Apps](https://argoproj.github.io/argo-cd/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) that points to all workloads and add-ons.
+The Blueprints framework provides an approach to bootstrap workloads and/or additional add-ons from a customer GitOps repository. In a general case, the bootstrap GitOps repository may contains an [App of Apps](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) that points to all workloads and add-ons.
 
 In order to enable bootstrapping, the add-on allows passing an `ApplicationRepository` at construction time. The following repository types are supported at present:
 
@@ -124,7 +124,7 @@ The application promotion process in the above example is handled entirely throu
 
 By default all AddOns defined in a blueprint are deployed to the cluster via CDK. You can opt-in to deploy them following the GitOps model via ArgoCD. You will need a repository contains all the AddOns you would like to deploy via ArgoCD, such as, [eks-blueprints-add-ons](https://github.com/aws-samples/eks-blueprints-add-ons). You then configure ArgoCD bootstrapping with this repository as shown above.
 
-There are two types of GitOps deployments via ArgoCD depending on whether you would like to adopt the [App of Apps](https://argoproj.github.io/argo-cd/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) strategy:
+There are two types of GitOps deployments via ArgoCD depending on whether you would like to adopt the [App of Apps](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) strategy:
 
 - CDK deploys the `Application` resource for each AddOn enabled, and ArgoCD deploys the actual AddOn via GitOps based on the `Application` resource. Example:
 
@@ -270,7 +270,7 @@ import * as bcrypt from "bcrypt";
 }))
 ```
 
-For more information, please refer to the [ArgoCD official documentation](https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd).
+For more information, please refer to the [ArgoCD official documentation](https://github.com/argoproj/argo-helm/tree/main/charts/argo-cd).
 ## Known Issues
 
 1. Destruction of the cluster with provisioned applications may cause cloud formation to get stuck on deleting ArgoCD namespace. This happens because the server component that handles Application CRD resource is destroyed before it has a chance to clean up applications that were provisioned through GitOps (of which CFN is unaware). To address this issue at the moment, App of Apps application should be destroyed manually before destroying the stack.
````
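
The bootstrapping these doc changes describe looks roughly like this; the repository URL, path and revision are placeholders:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Bootstrap an "App of Apps" repository at provisioning time.
// repoUrl/path/targetRevision are placeholders; private repos would
// also set credentials via AWS Secrets Manager as described above.
const argoCdAddOn = new blueprints.addons.ArgoCDAddOn({
  bootstrapRepo: {
    repoUrl: 'https://github.com/aws-samples/eks-blueprints-add-ons.git',
    path: 'chart',
    targetRevision: 'main',
  },
});
```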

docs/addons/karpenter.md (+3 -3)

````diff
@@ -84,7 +84,7 @@ blueprints-addon-karpenter-54fd978b89-hclmp 2/2 Running 0 99m
 2. Creates `karpenter` namespace.
 3. Creates Kubernetes Service Account, and associate AWS IAM Role with Karpenter Controller Policy attached using [IRSA](https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html).
 4. Deploys Karpenter helm chart in the `karpenter` namespace, configuring cluster name and cluster endpoint on the controller by default.
-5. (Optionally) provisions a default Karpenter Provisioner and AWSNodeTemplate CRD based on user-provided parameters such as [spec.requirements](https://karpenter.sh/docs/concepts/provisioners/#specrequirements), [AMI type](https://karpenter.sh/v0.12.1/aws/provisioning/#amazon-machine-image-ami-family),[weight](https://karpenter.sh/docs/concepts/provisioners/#specweight), [Subnet Selector](https://karpenter.sh/docs/concepts/node-templates/#specsubnetselector), and [Security Group Selector](https://karpenter.sh/docs/concepts/node-templates/#specsecuritygroupselector). If created, the provisioner will discover the EKS VPC subnets and security groups to launch the nodes with.
+5. (Optionally) provisions a default Karpenter Provisioner and AWSNodeTemplate CRD based on user-provided parameters such as [spec.requirements](https://karpenter.sh/docs/concepts/provisioners/#specrequirements), [AMI type](https://karpenter.sh/docs/concepts/instance-types/),[weight](https://karpenter.sh/docs/concepts/provisioners/#specweight), [Subnet Selector](https://karpenter.sh/v0.26/concepts/node-templates/#specsubnetselector), and [Security Group Selector](https://karpenter.sh/v0.28/concepts/node-templates/#specsecuritygroupselector). If created, the provisioner will discover the EKS VPC subnets and security groups to launch the nodes with.
 
 **NOTE:**
 1. The default provisioner is created only if both the subnet tags and the security group tags are provided.
@@ -95,7 +95,7 @@ blueprints-addon-karpenter-54fd978b89-hclmp 2/2 Running 0 99m
 
 ## Using Karpenter
 
-To use Karpenter, you need to provision a Karpenter [provisioner CRD](https://karpenter.sh/docs/provisioner/). A single provisioner is capable of handling many different pod shapes.
+To use Karpenter, you need to provision a Karpenter [provisioner CRD](https://karpenter.sh/docs/concepts/provisioners/). A single provisioner is capable of handling many different pod shapes.
 
 This can be done in 2 ways:
 
@@ -225,7 +225,7 @@ requirements: [
 
 The property is changed to align with the naming convention of the provisioner, and to allow multiple operators (In vs NotIn). The values correspond similarly between the two, with type change being the only difference.
 
-2. Certain upgrades require reapplying the CRDs since Helm does not maintain the lifecycle of CRDs. Please see the [official documentations](https://karpenter.sh/v0.16.0/upgrade-guide/#custom-resource-definition-crd-upgrades) for details.
+2. Certain upgrades require reapplying the CRDs since Helm does not maintain the lifecycle of CRDs. Please see the [official documentations](https://karpenter.sh/v0.28/upgrade-guide/) for details.
 
 3. Starting with v0.17.0, Karpenter's Helm chart package is stored in OCI (Open Container Initiative) registry. With this change, [charts.karpenter.sh](https://charts.karpenter.sh/) is no longer updated to preserve older versions. You have to adjust for the following:
````
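
A sketch of the default-provisioner inputs item 5 refers to; prop names and tag values are assumptions to check against the 1.10.x `KarpenterAddOn` docs:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Both subnet and security-group tags must be supplied for the
// default provisioner to be created (see NOTE above).
const karpenterAddOn = new blueprints.addons.KarpenterAddOn({
  requirements: [
    { key: 'node.kubernetes.io/instance-type', op: 'In', vals: ['m5.large', 'm5.xlarge'] },
    { key: 'kubernetes.io/arch', op: 'In', vals: ['amd64'] },
  ],
  subnetTags: { 'Name': 'my-stack-name/my-stack-name-vpc/PrivateSubnet*' }, // placeholder
  securityGroupTags: { 'kubernetes.io/cluster/my-stack-name': 'owned' },    // placeholder
});
```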

docs/addons/kasten-k10.md (+5 -2)

````diff
@@ -3,7 +3,9 @@
 **Kasten K10 by Veeam Overview**
 
 The K10 data management platform, purpose-built for Kubernetes, provides enterprise operations teams an easy-to-use, scalable, and secure system for backup/restore, disaster recovery, and mobility of Kubernetes applications.
-![Kasten-K10 Overview](/docs/assets/images/kastenk10_image1.png)
+
+## Kasten-K10 Overview
+
 K10’s application-centric approach and deep integrations with relational and NoSQL databases, Amazon EKS and AWS Services provides teams the freedom of infrastructure choice without sacrificing operational simplicity. Policy-driven and extensible, K10 provides a native Kubernetes API and includes features such full-spectrum consistency, database integrations, automatic application discovery, application mobility, and a powerful web-based user interface.
 
 Given K10’s extensive ecosystem support you have the flexibility to choose environments (public/ private/ hybrid cloud/ on-prem) and Kubernetes distributions (cloud vendor managed or self managed) in support of three principal use cases:
@@ -13,7 +15,8 @@ Given K10’s extensive ecosystem support you have the flexibility to choose env
 - [Disaster Recovery](https://www.kasten.io/kubernetes/use-cases/disaster-recovery/)
 
 - [Application Mobility](https://www.kasten.io/kubernetes/use-cases/application-mobility/)
-![Kasten-K10 Use Cases ](/docs/assets/images/kastenk10_image2.png)
+
+## Kasten-K10 Use Cases
 
 The Kasten K10 add-on installs Kasten K10 into your Amazon EKS cluster.
````
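
The partner add-on referenced here needs no required options; a minimal sketch (class name assumed per the blueprints partner add-ons):

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Installs Kasten K10 with chart defaults; class name is an assumption.
const kastenK10AddOn = new blueprints.addons.KastenK10AddOn();
```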

docs/addons/keda.md (+1 -1)

````diff
@@ -135,7 +135,7 @@ done
 7) Purge the SQS queue to test scale in event
 Replace ${AWS_REGION} with your target region
 ```shell
-aws sqs purge-queue --queue-url https://sqs.${AWS_REGION}.amazonaws.com/CCOUNT_NUMBER/sqs-consumer
+aws sqs purge-queue --queue-url "https://sqs.${AWS_REGION}.amazonaws.com/CCOUNT_NUMBER/sqs-consumer"
 ```
 6) Verify if the nginx pod is scaledd in from 2 to 1 after teh cool down perion set (500 in this case)
 ```shell
````
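
For context, the SQS-scaler walkthrough above sits on a KEDA add-on configured roughly like this; the security-context IDs and IRSA policy names are illustrative assumptions:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// IRSA grants the KEDA operator the AWS permissions the SQS scaler
// needs; the values below are illustrative.
const kedaAddOn = new blueprints.addons.KedaAddOn({
  podSecurityContextFsGroup: 1001,
  securityContextRunAsGroup: 1001,
  securityContextRunAsUser: 1001,
  irsaRoles: ['CloudWatchFullAccess', 'AmazonSQSFullAccess'],
});
```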

docs/addons/knative-operator.md (+1 -1)

````diff
@@ -63,4 +63,4 @@ documentation.
 
 ### Applying KNative Functions
 Currently, the Knative Operator does not support the deployment of Knative directly as they're directly run as services.
-For better instructions check (their documentation.)[https://knative.dev/docs/functions/deploying-functions/]
+For better instructions check [their documentation](https://knative.dev/docs/functions/deploying-functions).
````

docs/addons/kubecost.md (+4 -4)

````diff
@@ -48,22 +48,22 @@ Custom values to pass to the chart. Config options: https://github.com/kubecost/
 #### `customPrometheus: string` (optional)
 
 Kubecost comes bundled with a Prometheus installation. However, if you wish to integrate with an external Prometheus deployment, provide your local Prometheus service address with this format `http://..svc`.
-Note: integrating with an existing Prometheus is only officially supported under Kubecost paid plans and requires some extra configurations on your Prometheus: https://docs.kubecost.com/custom-prom.html
+Note: integrating with an existing Prometheus is only officially supported under Kubecost paid plans and requires some extra configurations on your Prometheus: https://docs.kubecost.com/install-and-configure/install/custom-prom
 
 #### `installPrometheusNodeExporter: boolean` (optional)
 
 Set to false to use an existing Node Exporter DaemonSet.
 Note: this requires your existing Node Exporter endpoint to be visible from the namespace where Kubecost is installed.
-https://github.com/kubecost/docs/blob/main/getting-started.md#using-an-existing-node-exporter
+https://docs.kubecost.com/install-and-configure/install/getting-started#using-an-existing-node-exporter
 
 #### `repository: string`, `release: string`, `chart: string` (optional)
 
 Additional options for customers who may need to supply their own private Helm repository.
 
 ## Support
 
-If you have any questions about Kubecost, get in touch with the team [on Slack](https://docs.kubecost.com/support-channels.html).
+If you have any questions about Kubecost, get in touch with the team [on Slack](https://docs.kubecost.com/kubecost-cloud/receiving-kubecost-cloud-support).
 
 ## License
 
-The Kubecost Blueprints AddOn is licensed under the Apache 2.0 license. [Project repository](https://github.com/kubecost/kubecost-blueprints-addon)
+The Kubecost Blueprints AddOn is licensed under the Apache 2.0 license. [Project repository](https://github.com/kubecost/kubecost-eks-blueprints-addon/)
````
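
The options documented above map onto the add-on roughly as follows; the token value and Prometheus address are placeholders:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// customPrometheus points at an existing Prometheus service
// (paid-plan feature, see note above); the token is a placeholder.
const kubecostAddOn = new blueprints.addons.KubecostAddOn({
  kubecostToken: 'my-kubecost-token',
  customPrometheus: 'http://prometheus-server.prometheus.svc',
});
```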

docs/addons/nginx.md (+3 -2)

````diff
@@ -118,8 +118,9 @@ spec:
 
 After the above ingresses applied (ideally through a GitOps engine) you can now navigate to the specified hosts respectively:
 
-[http://riker.dev.my-domain.com](http://riker.dev.my-domain.com)
-[http://troi.dev.my-domain.com](http://troi.dev.my-domain.com)
+`http://riker.dev.my-domain.com`
+`http://troi.dev.my-domain.com`
+
 
 ## TLS Termination and Certificates
````
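
The hosts above resolve via external-dns when the add-on is configured along these lines; the domain is a placeholder:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// internetFacing provisions a public load balancer; externalDnsHostname
// is the parent domain for hosts like riker.dev.my-domain.com (placeholder).
const nginxAddOn = new blueprints.addons.NginxAddOn({
  internetFacing: true,
  externalDnsHostname: 'dev.my-domain.com',
});
```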

docs/addons/pixie.md (+1 -1)

````diff
@@ -103,7 +103,7 @@ Namespace to deploy Pixie to. Default: `pl`
 
 #### `cloudAddr?: string` (optional)
 
-The address of Pixie Cloud. This should only be modified if you have deployed your own self-hosted Pixie Cloud. By default, it will be set to [Community Cloud for Pixie](https://work.withpixie.dev).
+The address of Pixie Cloud. This should only be modified if you have deployed your own self-hosted Pixie Cloud. By default, it will be set to [Community Cloud for Pixie](https://work.withpixie.ai).
 
 #### `devCloudNamespace?: string` (optional)
````
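
A sketch of overriding `cloudAddr` for a self-hosted Pixie Cloud, per the option documented above; both values are placeholders:

```typescript
import * as blueprints from '@aws-quickstart/eks-blueprints';

// cloudAddr is only set for a self-hosted Pixie Cloud; omit it to use
// the default community cloud. deployKey is a placeholder.
const pixieAddOn = new blueprints.addons.PixieAddOn({
  deployKey: 'px-dep-xxxxxxxx',
  cloudAddr: 'my-pixie-cloud.example.com:443',
});
```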

docs/cluster-providers/asg-cluster-provider.md (+1 -1)

````diff
@@ -46,7 +46,7 @@ Configuration can also be supplied via context variables (specify in cdk.json, c
 
 Configuration of the EC2 parameters through context parameters makes sense if you would like to apply default configuration to multiple clusters without the need to explicitly pass `AsgClusterProviderProps` to each cluster blueprint.
 
-You can find more details on the supported configuration options in the API documentation for the [AsgClusterProviderProps](../api/interfaces/AsgClusterProviderProps.html).
+You can find more details on the supported configuration options in the API documentation for the [AsgClusterProviderProps](../api/interfaces/clusters.AsgClusterProviderProps.html).
 
 ## Bottlerocket ASG
````
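
For context, `AsgClusterProviderProps` drives a provider along these lines; treat the prop set as an assumption and defer to the linked API docs:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Sizes, instance type and version are illustrative; the exact
// required props are defined by AsgClusterProviderProps.
const clusterProvider = new blueprints.AsgClusterProvider({
  id: 'asg-node-group',
  version: eks.KubernetesVersion.V1_27,
  minSize: 1,
  maxSize: 3,
  instanceType: new ec2.InstanceType('m5.large'),
});
```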
