
Commit d905fb1

Merge branch 'master' into docs-rel-4-8-0
2 parents aa1cc2f + 9009c85 commit d905fb1

File tree: 5 files changed, +45 −18 lines changed

.github/workflows/clean-up-report.yaml

Lines changed: 7 additions & 7 deletions
```diff
@@ -33,9 +33,6 @@ jobs:
   delete_reports:
     name: Delete Reports
     runs-on: ubuntu-latest
-    env:
-      # Contains all reports for deleted branch
-      BRANCH_REPORTS_DIR: reports/${{ github.event.ref }}
     steps:
       - name: Checkout GitHub Pages Branch
         uses: actions/checkout@v5
```
```diff
@@ -48,9 +45,14 @@ jobs:
           git config --global user.name "github-actions[bot]"
           git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"

+      - name: Sanitize branch name for path
+        run: |
+          BRANCH_SAFE=$(echo "${{ github.event.ref }}" | sed 's/\//-/g')
+          echo "BRANCH_SAFE=$BRANCH_SAFE" >> $GITHUB_ENV
+
       - name: Check for workflow reports
         run: |
-          if [ -z "$(ls -A $BRANCH_REPORTS_DIR)" ]; then
+          if [ -z "$(ls -A reports/${{ env.BRANCH_SAFE }})" ]; then
             echo "BRANCH_REPORTS_EXIST="false"" >> $GITHUB_ENV
           else
             echo "BRANCH_REPORTS_EXIST="true"" >> $GITHUB_ENV
```
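The `Sanitize branch name for path` step added above boils down to a single `sed` substitution. A minimal sketch of the same transform outside of GitHub Actions (the branch names here are hypothetical examples):

```shell
# Replace every "/" in a branch ref with "-" so the result is a single,
# safe path segment -- the same sed expression the workflow step uses.
sanitize_branch() {
  echo "$1" | sed 's/\//-/g'
}

sanitize_branch "feature/login/v2"   # prints: feature-login-v2
sanitize_branch "docs-rel-4-8-0"     # no slashes, printed unchanged
```

This matters because `github.event.ref` can contain slashes (e.g. `feature/login`), which would otherwise be interpreted as nested directories under `reports/`.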
```diff
@@ -60,9 +62,7 @@ jobs:
         if: ${{ env.BRANCH_REPORTS_EXIST == 'true' }}
         timeout-minutes: 3
         run: |
-          cd $BRANCH_REPORTS_DIR/..
-
-          rm -rf ${{ github.event.ref }}
+          rm -rf reports/${{ env.BRANCH_SAFE }}
           git add .
           git commit -m "workflow: remove all reports for branch ${{ github.event.ref }}"
```
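The `Check for workflow reports` step above relies on `ls -A` printing nothing for an empty directory. A small sketch of that emptiness test (the directory and file names are hypothetical stand-ins for `reports/<branch>`):

```shell
# ls -A lists all entries except "." and ".."; an empty result means the
# directory holds no reports, mirroring the workflow's check.
reports_exist() {
  if [ -z "$(ls -A "$1")" ]; then
    echo "false"
  else
    echo "true"
  fi
}

dir=$(mktemp -d)            # fresh, empty stand-in directory
reports_exist "$dir"        # prints: false
touch "$dir/report.html"    # simulate one leftover report
reports_exist "$dir"        # prints: true
```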

.github/workflows/visual-comparison.yaml

Lines changed: 7 additions & 1 deletion
```diff
@@ -46,7 +46,7 @@ jobs:
     steps:
       # If the condition above is not met, aka, the PR is not in draft status, then this step is skipped.
       # Because this step is part of the critical path, omission of this step will result in remaining CI steps not gettinge executed.
-      # As of 8/8/2022 there is now way to enforce this beahvior in GitHub Actions CI.
+      # As of 8/8/2022, there is no way to enforce this behavior in GitHub Actions CI.
       - run: |
           echo "GITHUB_BASE_REF: ${{ github.base_ref }}"
           echo "GITHUB_HEAD_REF: ${{ github.head_ref }}"
```
```diff
@@ -225,6 +225,12 @@ jobs:
           git config --global user.name "github-actions[bot]"
           git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"

+      - name: Sanitize branch name for path
+        run: |
+          BRANCH_SAFE=$(echo "${{ github.head_ref }}" | sed 's/\//-/g')
+          echo "BRANCH_SAFE=$BRANCH_SAFE" >> $GITHUB_ENV
+          echo "HTML_REPORT_URL_PATH=reports/$BRANCH_SAFE/${{ github.run_id }}/${{ github.run_attempt }}" >> $GITHUB_ENV
+
       - name: Download zipped HTML report
         uses: actions/download-artifact@v5
         with:
```
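The step added in this hunk assembles `HTML_REPORT_URL_PATH` from the sanitized branch name, run ID, and run attempt. A sketch of the same assembly in plain shell (all values are hypothetical stand-ins for the `github.*` context):

```shell
HEAD_REF="feature/new-ui"   # stand-in for ${{ github.head_ref }}
RUN_ID="1234567890"         # stand-in for ${{ github.run_id }}
RUN_ATTEMPT="1"             # stand-in for ${{ github.run_attempt }}

# Same two steps as the workflow: sanitize the branch, then join the
# pieces into reports/<sanitized-branch>/<run-id>/<run-attempt>.
BRANCH_SAFE=$(echo "$HEAD_REF" | sed 's/\//-/g')
HTML_REPORT_URL_PATH="reports/$BRANCH_SAFE/$RUN_ID/$RUN_ATTEMPT"

echo "$HTML_REPORT_URL_PATH"   # prints: reports/feature-new-ui/1234567890/1
```

Including the run attempt in the path keeps reports from re-run workflows from overwriting each other.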

docs/docs-content/clusters/data-center/vmware/create-manage-vmware-clusters.md

Lines changed: 19 additions & 9 deletions
```diff
@@ -10,6 +10,16 @@ tags: ["data center", "vmware"]
 You can deploy Kubernetes clusters on VMware vSphere using Palette. Use the following steps to create and manage VMware
 clusters in Palette.

+## Limitations
+
+- Autoscaling is not supported for VMware vSphere clusters deployed using an
+  [IP Address Management (IPAM) node pool](../../pcg/manage-pcg/create-manage-node-pool.md) with
+  [static placement configured](../../pcg/deploy-pcg/vmware.md#static-placement-configuration). To scale your cluster,
+  either use dynamic IP allocation or disable autoscaler and manually adjust your node pool size using your cluster's
+  **Nodes** tab. For more information on scaling clusters, refer to our
+  [Scale, Upgrade, and Secure Clusters](../../../tutorials/getting-started/palette/vmware/scale-secure-cluster.md#scale-a-cluster)
+  tutorial.
+
 ## Prerequisites

 Before you begin, ensure that you have the following prerequisites:
```
```diff
@@ -152,15 +162,15 @@

 ### Worker Plane Pool Configuration

-| Field Name                      | Description |
-| ------------------------------- | ----------- |
-| **Node Pool Name**              | The name of the control plane node pool. |
-| **Enable Autoscaler**           | Scale the pool horizontally based on its per-node workload counts. The **Minimum size** specifies the lower bound of nodes in the pool, and the **Maximum size** specifies the upper bound. Setting both parameters to the same value results in a static node count. Refer to the Cluster API [autoscaler documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md) for more information on autoscaling. |
-| **Node Repave Interval**        | The interval at which the worker nodes are repaved in seconds. Refer to the [Repave Behavior and Configuration](../../cluster-management/node-pool.md#repave-behavior-and-configuration) for additional information about repave behaviors. |
-| **Number of Nodes in the Pool** | Number of nodes to be provisioned for the node pool. This field is hidden if **Enable Autoscaler** is toggled on. |
-| **Rolling Update**              | Choose between **Expand First** and **Contract First** to determine the order in which nodes are added or removed from the worker node pool. Expand first adds new nodes before removing old nodes. Contract first removes old nodes before adding new nodes. |
-| **Additional Labels**           | Additional labels to apply to the control plane nodes. |
-| **Taints**                      | Taints to apply to the control plane nodes. If enabled, an input field is displayed to specify the taint key, value and effect. Check out the [Node Labels and Taints](../../cluster-management/taints.md) page to learn more. |
+| Field Name                      | Description |
+| ------------------------------- | ----------- |
+| **Node Pool Name**              | The name of the control plane node pool. |
+| **Enable Autoscaler**           | Scale the pool horizontally based on its per-node workload counts. The **Minimum size** specifies the lower bound of nodes in the pool, and the **Maximum size** specifies the upper bound. Setting both parameters to the same value results in a static node count. Refer to the Cluster API [autoscaler documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/clusterapi/README.md) for more information on autoscaling. <br /> <br /> **NOTE:** Autoscaler is not supported for VMware vSphere clusters deployed using an [IP Address Management (IPAM) node pool](../../pcg/manage-pcg/create-manage-node-pool.md) with [static placement configured](../../pcg/deploy-pcg/vmware.md#static-placement-configuration). |
+| **Node Repave Interval**        | The interval at which the worker nodes are repaved in seconds. Refer to the [Repave Behavior and Configuration](../../cluster-management/node-pool.md#repave-behavior-and-configuration) for additional information about repave behaviors. |
+| **Number of Nodes in the Pool** | Number of nodes to be provisioned for the node pool. This field is hidden if **Enable Autoscaler** is toggled on. |
+| **Rolling Update**              | Choose between **Expand First** and **Contract First** to determine the order in which nodes are added or removed from the worker node pool. Expand first adds new nodes before removing old nodes. Contract first removes old nodes before adding new nodes. |
+| **Additional Labels**           | Additional labels to apply to the control plane nodes. |
+| **Taints**                      | Taints to apply to the control plane nodes. If enabled, an input field is displayed to specify the taint key, value and effect. Check out the [Node Labels and Taints](../../cluster-management/taints.md) page to learn more. |

 Click on the **Next** button when you are done.
```

docs/docs-content/clusters/pcg/deploy-pcg/vmware.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -207,7 +207,8 @@ The following requirements apply to tags:

 :::warning

-If you select **Static Placement**, you must create a PCG IPAM pool before deploying clusters. Refer to the
+If you select **Static Placement**, you must create a PCG IPAM pool before deploying clusters. Autoscaling is not
+supported for VMware vSphere clusters deployed using IPAM node pools with static placement configured. Refer to the
 [Create and Manage IPAM Node Pools](../manage-pcg/create-manage-node-pool.md) guide for more information.

 :::
```

docs/docs-content/clusters/pcg/manage-pcg/create-manage-node-pool.md

Lines changed: 10 additions & 0 deletions
```diff
@@ -28,6 +28,16 @@ additional IPAM node pools when deploying a VMware vSphere or a MAAS LXD cluster
 provides instructions on how to create an IPAM node pool for a PCG deployed in a VMware vSphere environment or for a
 MAAS LXD deployment.

+## Limitations
+
+- Autoscaling is not supported for [VMware vSphere clusters](../../data-center/vmware/create-manage-vmware-clusters.md)
+  deployed using an IPAM node pool with
+  [static placement configured](../deploy-pcg/vmware.md#static-placement-configuration). To scale your cluster,
+  either use dynamic IP allocation or disable autoscaler and manually adjust your node pool size using your cluster's
+  **Nodes** tab. For more information on scaling clusters, refer to our
+  [Scale, Upgrade, and Secure Clusters](../../../tutorials/getting-started/palette/vmware/scale-secure-cluster.md#scale-a-cluster)
+  tutorial.
+
 ## Prerequisites

 - A PCG is installed, active, and in a healthy state. Refer to [Deploy a PCG](../deploy-pcg/deploy-pcg.md) for
```
