# Rancherd

Rancherd bootstraps a node with Kubernetes (k3s/rke2) and Rancher such
that all future management of Kubernetes and Rancher can be done from
Kubernetes. Rancherd will only run once per node. Once the system has
been fully bootstrapped it will not run again. Rancherd is primarily
intended to be run from cloud-init or a similar system.
## Quick Start

To create a three-node cluster, run the following on servers named `server1`,
`server2`, and `server3`.

On `server1`
```bash
mkdir -p /etc/rancher/rancherd
cat > /etc/rancher/rancherd/config.yaml << EOF
role: cluster-init
token: somethingrandom
EOF
curl -fL https://raw.githubusercontent.com/rancher/rancherd/master/install.sh | sh -
```

On `server2`
```bash
mkdir -p /etc/rancher/rancherd
cat > /etc/rancher/rancherd/config.yaml << EOF
role: server
server: https://server1:8443
token: somethingrandom
EOF
curl -fL https://raw.githubusercontent.com/rancher/rancherd/master/install.sh | sh -
```

On `server3`
```bash
mkdir -p /etc/rancher/rancherd
cat > /etc/rancher/rancherd/config.yaml << EOF
role: server
server: https://server1:8443
token: somethingrandom
EOF
curl -fL https://raw.githubusercontent.com/rancher/rancherd/master/install.sh | sh -
```

## Installation

### cloud-init

The primary way of running Rancherd is intended to be from cloud-init.
Add the following to your cloud-init for a single-node cluster. All
configuration that would be found in the rancherd config.yaml should
be embedded under the `rancherd` key in the cloud-config.

```yaml
#cloud-config
rancherd:
  role: cluster-init
runcmd:
  - curl -fL https://raw.githubusercontent.com/rancher/rancherd/master/install.sh | sh -
```

### Manual

The `rancherd` binary can be downloaded from https://github.com/rancher/rancherd/releases/latest
and run manually.

### Curl script (systemd installation)

The command below downloads the `rancherd` binary, sets up a systemd unit, and runs it.

```bash
curl -sfL https://raw.githubusercontent.com/rancher/rancherd/master/install.sh | sh -
```

## Cluster Initialization

Creating a cluster always starts with one node initializing the cluster by
being assigned the `cluster-init` role; other nodes then join the cluster.
A token will be generated for the new cluster, or you can manually
assign a unique string. The token for an existing cluster can be determined
by running `rancherd get-token`.
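The token can be any sufficiently random string. A minimal sketch of generating one before writing the config — the `openssl rand` call is just one common way to produce a random value, and the temporary directory stands in for `/etc/rancher/rancherd` on a real node:

```shell
# Generate a random 32-character hex token and write a cluster-init config.
# A temporary directory is used here for illustration; on a real node the
# file belongs at /etc/rancher/rancherd/config.yaml.
CONF_DIR=$(mktemp -d)
TOKEN=$(openssl rand -hex 16)
cat > "${CONF_DIR}/config.yaml" << EOF
role: cluster-init
token: ${TOKEN}
EOF
cat "${CONF_DIR}/config.yaml"
```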

## Joining Nodes

Nodes can be joined to the cluster with the role `server` to add more control
plane nodes or with the role `agent` to add more worker nodes. To join a node
you must have the Rancher server URL (which by default runs on port
`8443`) and the token.
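For example, a worker-only node joining the Quick Start cluster above would use a config like this (the server hostname and token must match the cluster's):

```yaml
# /etc/rancher/rancherd/config.yaml on the joining worker node
role: agent
server: https://server1:8443
token: somethingrandom
```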

## Node Roles

Rancherd will bootstrap a node with one of the following roles:

1. __cluster-init__: Every cluster must start with one node that has the
   cluster-init role.
2. __server__: Joins the cluster as a new control-plane, etcd, and worker node.
3. __agent__: Joins the cluster as a worker-only node.

## Server discovery

It can be quite cumbersome to automate bringing up a clustered system
that requires one bootstrap node, and a proper production setup has
additional considerations around load balancing and replacing nodes.
Rancherd supports server discovery based on https://github.com/hashicorp/go-discover.

When using server discovery the `cluster-init` role is not used, only `server`
and `agent`. The `server` URL is also dropped in favor of the `discovery`
key. The `discovery` configuration is used to dynamically determine the
server URL and whether the current node should act as the `cluster-init` node.

Example
```yaml
role: server
discovery:
  params:
    # Corresponds to the go-discover provider name
    provider: "mdns"
    # All other key/values are parameters corresponding to what
    # the go-discover provider is expecting
    service: "rancher-server"
    # If this is a new cluster it will wait until 3 servers are
    # available and they all agree on the same cluster-init node
    expectedServers: 3
    # How long servers are remembered for. This is useful for providers
    # that are not consistent in their responses, like mdns.
    serverCacheDuration: 1m
```
More information on how to use discovery is in the config examples.

## Configuration

Configuration for rancherd goes in `/etc/rancher/rancherd/config.yaml`. A full
example configuration with documentation is available in
[config-example.yaml](./config-example.yaml).

Minimal configuration
```yaml
# /etc/rancher/rancherd/config.yaml

# role: Valid values are cluster-init, server, agent
role: cluster-init

# token: A shared secret known by all nodes in the cluster
token: somethingrandom

# server: The server URL to join a cluster to. By default port 8443.
# Only valid for the roles server and agent, not cluster-init
server: https://example.com:8443
```

### Version Channels

`kubernetesVersion` and `rancherVersion` accept channel names instead of explicit versions.

Valid `kubernetesVersion` channels are as follows:

| Channel Name | Description |
|--------------|-------------|
| stable | k3s stable (default value of kubernetesVersion) |
| latest | k3s latest |
| testing | k3s testing |
| stable:k3s | Same as the stable channel |
| latest:k3s | Same as the latest channel |
| testing:k3s | Same as the testing channel |
| stable:rke2 | rke2 stable |
| latest:rke2 | rke2 latest |
| testing:rke2 | rke2 testing |
| v1.21 | Latest k3s v1.21 release. This applies to any Kubernetes minor version |
| v1.21:rke2 | Latest rke2 v1.21 release. This applies to any Kubernetes minor version |

Valid `rancherVersion` channels are as follows:

| Channel Name | Description |
|--------------|-------------|
| stable | [stable helm repo](https://releases.rancher.com/server-charts/stable/index.yaml) (default value of rancherVersion) |
| latest | [latest helm repo](https://releases.rancher.com/server-charts/latest/index.yaml) |
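Channels are set in the rancherd config.yaml just like explicit versions; for example, a sketch pinning the cluster to the latest rke2 channel while keeping the default stable Rancher channel:

```yaml
# /etc/rancher/rancherd/config.yaml
role: cluster-init
token: somethingrandom
kubernetesVersion: latest:rke2
rancherVersion: stable
```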

### Rancher Config

By default Rancher is installed with the following values.yaml. You can override
any of these settings with the `rancherValues` setting in the rancherd `config.yaml`.
```yaml
# Multi-Cluster Management is disabled by default; change to multi-cluster-management=true to enable
features: multi-cluster-management=false

# The Rancher UI will run on host port 8443 by default. Set to 0 to disable
# and instead use ingress.enabled=true to route traffic through ingress
hostPort: 8443

# Ingress access is disabled by default.
ingress:
  enabled: false

# Don't create a default admin password
noDefaultAdmin: true

# A negative value means Rancher will run up to that many replicas if there are
# at least that many nodes available. For example, if you have 2 nodes and
# `replicas` is `-3` then 2 replicas will run. Once you add a third node,
# 3 replicas will run.
replicas: -3

# External TLS is assumed
tls: external
```
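Overrides go under the `rancherValues` key of the rancherd config.yaml; for example, a sketch that disables the host port and serves Rancher through ingress instead:

```yaml
# /etc/rancher/rancherd/config.yaml
role: cluster-init
rancherValues:
  hostPort: 0
  ingress:
    enabled: true
```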

## Dashboard/UI

The Rancher UI runs by default on port `:8443`. There is no default
`admin` user password set. You must run `rancherd reset-admin` once to
get an `admin` password to log in.

## Multi-Cluster Management

By default Multi-Cluster Management is disabled in Rancher. To enable it, set the
following in the rancherd config.yaml
```yaml
rancherValues:
  features: multi-cluster-management=true
```

## Upgrading

rancherd itself doesn't need to be upgraded. It is only run once per node
and provides no further value after that. What you do need to upgrade
afterwards are Rancher and Kubernetes.

### Rancher
Rancher is installed as a Helm chart and can be upgraded with the standard
procedure documented at
https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/.

### Kubernetes
To upgrade Kubernetes you use Rancher to orchestrate the upgrade. This is a matter of changing
the Kubernetes version on the `fleet-local/local` `Cluster` in the `provisioning.cattle.io/v1`
apiVersion. For example

```shell
kubectl edit clusters.provisioning.cattle.io -n fleet-local local
```
```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: local
  namespace: fleet-local
spec:
  # Change to a new valid k8s version
  kubernetesVersion: v1.21.4+k3s1
```

### Automated

You can also use the `rancherd upgrade` command on a `server` node to perform the
above procedure automatically.