
gen 1 VPC #24

Open · wants to merge 13 commits into base: master
159 changes: 159 additions & 0 deletions templates/icp-ee-vpc/README.md
@@ -0,0 +1,159 @@
# Terraform ICP IBM Cloud

This Terraform example configuration uses the [IBM Cloud provider](https://ibm-cloud.github.io/tf-ibm-docs/index.html) to provision virtual machines on IBM Cloud VPC infrastructure
and [Terraform Module ICP Deploy](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) to prepare VSIs and deploy [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/) version 3.1.0 or later in a Highly Available configuration. This Terraform template automates best practices learned from installing ICP on IBM Cloud Infrastructure.

## Deployment overview
This template creates an environment where:
- The cluster is deployed on an [IBM Virtual Private Cloud (VPC)](https://cloud.ibm.com/docs/vpc-on-classic?topic=vpc-on-classic-about) private network and is accessed through load balancers
- The cluster is deployed in a single region across three zones, each with its own subnet
- Dedicated management node(s)
- Dedicated boot node
- SSH access from public network is enabled on boot node only
- Optimised VM sizes
- Image Manager is disabled due to lack of File Storage (TODO)
- No Vulnerability Advisor node and vulnerability advisor service disabled by default (can be enabled via `terraform.tfvars` settings as described below)
- The images must be pushed to a remote registry and installed over the internet.

## Pre-requisites

* Working copy of [Terraform](https://www.terraform.io/intro/getting-started/install.html)
* As of this writing, the IBM Cloud Terraform provider is not in the main Terraform repository and must be installed manually. See [these steps](https://ibm-cloud.github.io/tf-ibm-docs/index.html#using-terraform-with-the-ibm-cloud-provider). The templates have been tested with Terraform version 0.11.11 and the IBM Cloud provider version 0.17.1 (a short installation sketch follows this list).
* The template is tested on VSIs based on Ubuntu 16.04. RHEL is not supported in this automation.
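
For reference, here is a minimal sketch of checking the Terraform version and installing the provider plugin manually on Linux or macOS; the provider binary name is illustrative and depends on the release you download:

```bash
# confirm the tested Terraform version
terraform version    # expect Terraform v0.11.11

# Terraform 0.11 discovers third-party providers in ~/.terraform.d/plugins
mkdir -p ~/.terraform.d/plugins

# copy the downloaded IBM Cloud provider binary into place (file name is an example)
cp terraform-provider-ibm_v0.17.1 ~/.terraform.d/plugins/
chmod +x ~/.terraform.d/plugins/terraform-provider-ibm_v0.17.1
```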

### Environment preparation

The images must be pushed to a remote registry and installed over the internet. One possibility is to use the IBM Cloud Container Registry. Acquire the binary tarball for IBM Cloud Private, and follow [these instructions](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started) to create a namespace in the IBM Cloud Registry.
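
For example, with the IBM Cloud CLI and its container-registry plugin installed, the registry region and namespace can be prepared as follows; the region and namespace values are placeholders:

```bash
# log in with an IBM Cloud API key
ibmcloud login --apikey <api key>

# select the registry region that corresponds to <region>.icr.io
ibmcloud cr region-set us-south

# create the namespace that the ICP images will be pushed to
ibmcloud cr namespace-add <namespace>
```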

Use `docker login` to authenticate to the registry. The following example commands load the images locally and push them to the IBM Cloud Registry.

```bash
# load all the images locally
tar xf ibm-cloud-private-x86_64-3.2.0.tar.gz -O | docker load

# generate and execute `docker tag` commands to retag the images with the ICR registry URL and namespace
docker images | grep -v "TAG" | grep -v harbor | awk '{a = $1; b = sub(/ibmcom/,"<namespace>",a); print "docker tag " $1 ":" $2 " <region>.icr.io/" a ":" $2 }' | bash

# remove the arch from the image names
images=`docker images | grep <region>.icr.io | grep -v "TAG" | awk '{print $1 ":" $2}' | grep amd64`
for image in $images; do docker tag $image `echo $image | sed -e 's/-amd64//'`; done

# push all the images to ICR
docker images | grep <region>.icr.io | grep -v "TAG" | awk '{print $1 ":" $2}'  | xargs -n1 docker push 
```
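
To verify that the push succeeded, the images in the namespace can be listed with the container-registry plugin:

```bash
# list the ICP images now stored in the IBM Cloud Registry namespace
ibmcloud cr image-list --restrict <namespace>
```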

Once this is complete, you can configure the ICP installation to pull images from the registry by first [creating an API key for read-only access](https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_access), then setting the following variables before running Terraform:

```
registry_server = "<region>.icr.io"
registry_username = "iamapikey"
registry_password = "<apikey>"
icp_inception_image = "<namespace>/icp-inception:3.2.0-ee"
```
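
If you prefer to create the read-only pull credential from the CLI rather than the console, one option is a service ID with a Reader policy on the registry service; the sketch below uses placeholder names:

```bash
# create a service ID to hold the pull credential
ibmcloud iam service-id-create icp-registry-reader -d "read-only image pulls for ICP"

# grant the service ID Reader access to IBM Cloud Container Registry
ibmcloud iam service-policy-create icp-registry-reader --roles Reader --service-name container-registry

# create an API key for the service ID and use its value as registry_password
ibmcloud iam service-api-key-create icp-pull-key icp-registry-reader -d "ICP registry pull key"
```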


### Using the Terraform templates

1. Clone this repository with `git clone`

2. Navigate to the template directory `templates/icp-ee-vpc`

3. Create a `terraform.tfvars` file to reflect your environment. Please see [variables.tf](variables.tf) for variable names and descriptions. Here is an example `terraform.tfvars` file:

```
key_name = ["jkwong-pub"]
deployment = "icp"
icp_inception_image = "ibmcom/icp-inception:3.2.0-ee"
registry_server = "<region>.icr.io"
registry_username = "iamapikey"
registry_password = "<my api key>"

network_cidr = "172.24.0.0/16"
service_network_cidr = "172.25.0.0/16"

master = {
  nodes     = "3"
  cpu_cores = "8"
  memory    = "32768"
}

proxy = {
  nodes = "3"
}

worker = {
  nodes = "3"
}

mgmt = {
  nodes = "3"
}

va = {
  nodes = "0"
}
```

4. Export the IBM Cloud API key to the environment

```bash
export BM_API_KEY=<IBM Cloud API key>
```

5. Run `terraform init` to download dependencies (modules and plugins)

6. Run `terraform plan` to review the deployment plan

7. Run `terraform apply` to start the deployment.
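
When the apply completes, the connection details can be read from the Terraform outputs defined in [icp-deploy.tf](icp-deploy.tf), for example:

```bash
terraform output icp_console_url      # ICP management console
terraform output icp_admin_username   # admin
terraform output icp_admin_password   # generated unless supplied in terraform.tfvars
terraform output kubernetes_api_url
```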


### Automation Notes

#### What does the automation do
1. Create a VPC in a region
2. Create subnets for each zone in the region
3. Create public gateways for each subnet
4. Create security groups and rules for cluster communication as declared in [security_group.tf](security_group.tf)
5. Create load balancers for the Proxy and Control plane
6. Create a boot node and assign it a floating IP
7. Create the virtual machines as defined in `variables.tf` and `terraform.tfvars`
   - Use cloud-init to add a user `icpdeploy` with a randomly generated ssh key
   - Configure a separate hard disk to be used by docker
   - Configure the shared storage on the master nodes

8. Hand over to the [icp-deploy](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) terraform module as declared in the [icp-deploy.tf](icp-deploy.tf) file
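
To inspect the resources created by the steps above, the IBM Cloud CLI with the VPC infrastructure plugin (`ibmcloud is`) can list them; a quick sketch:

```bash
# list the VPC, subnets, instances and load balancers created by the automation
ibmcloud is vpcs
ibmcloud is subnets
ibmcloud is instances
ibmcloud is load-balancers
```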


#### What does the ICP deploy module do
1. It uses the ssh key generated for the `icpdeploy` user to ssh from the terraform controller, via the boot node, to all cluster nodes and install the ICP prerequisites (see the SSH sketch after this list)
2. It generates a new ssh keypair for ICP boot (master) node to ICP cluster communication and distributes the public key to the cluster nodes. This key is used by the ICP Ansible installer.
3. It populates the necessary `/etc/hosts` file on the boot node
4. It generates the ICP cluster hosts file based on information provided in [icp-deploy.tf](icp-deploy.tf)
5. It generates the ICP cluster `config.yaml` file based on information provided in [icp-deploy.tf](icp-deploy.tf)
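
If you need to reach a cluster node manually for troubleshooting, you can reuse the generated `icpdeploy` key and hop through the boot node's floating IP; a sketch, assuming local Terraform state and a running ssh-agent:

```bash
# show the generated key and save the private_key_pem attribute to a file, e.g. installkey.pem
terraform state show tls_private_key.installkey

chmod 600 installkey.pem && ssh-add installkey.pem

# jump through the boot node (public address) to a cluster node (private address)
ssh -J icpdeploy@<boot node floating IP> icpdeploy@<cluster node private IP>
```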

#### Security Groups

The automation leverages Security Groups to lock down public and private access to the cluster.

- Inbound communication to the master and proxy nodes is only permitted on ports from the private subnets that the LBaaS is provisioned on.
- Inbound SSH to the boot node is permitted from all addresses on the internet.
- All outbound communication is allowed.
- All other communication is only permitted between cluster nodes.
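
The rules created by the automation can be reviewed after deployment with the VPC infrastructure plugin, for example:

```bash
# list the security groups, then dump the rules of a specific group
ibmcloud is security-groups
ibmcloud is security-group-rules <security group ID>
```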

#### LBaaS

The automation exposes the Master control plane to the Internet on:
- TCP port 8443 (master console)
- TCP port 8500 (private registry)
- TCP port 8600 (private registry)
- TCP port 8001 (Kubernetes API)
- TCP port 9443 (OIDC authentication endpoint)

The automation exposes the Proxy nodes to the internet on:
- TCP port 443 (https)
- TCP port 80 (http)
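
As a quick post-deployment check, the exposed listeners can be probed from any workstation with internet access; expect an HTTP status line back, not necessarily a 200:

```bash
# master console and Kubernetes API behind the control plane load balancer
curl -k -I "https://$(terraform output icp_console_host):8443"
curl -k -I "https://$(terraform output icp_console_host):8001"

# proxy load balancer (ingress)
curl -k -I "http://$(terraform output icp_proxy_host)"
```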

### Terraform configuration

Please see [variables.tf](variables.tf) for additional parameters.

4 changes: 4 additions & 0 deletions templates/icp-ee-vpc/cfc-certs/README.md
@@ -0,0 +1,4 @@
Add TLS certificates here:

* icp-router.crt
* icp-router.key
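
If you do not have CA-issued certificates, a self-signed pair can be generated as a placeholder; in this sketch the CN should match your `cluster_cname` (or the master load balancer hostname used as the cluster CA domain):

```bash
# generate a self-signed certificate and key for the ICP router
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout icp-router.key -out icp-router.crt \
  -subj "/CN=<cluster CA domain>"
```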
138 changes: 138 additions & 0 deletions templates/icp-ee-vpc/icp-deploy.tf
@@ -0,0 +1,138 @@
##################################
### Deploy ICP to cluster
##################################
module "icpprovision" {
  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy.git?ref=3.1.1"

  # Provide IP addresses for boot, master, mgmt, va, proxy and workers
  boot-node    = "${ibm_is_instance.icp-boot.primary_network_interface.0.primary_ipv4_address}"
  bastion_host = "${ibm_is_floating_ip.icp-boot-pub.address}"
  icp-host-groups = {
    master = ["${ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address}"]
    proxy  = "${slice(concat(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address,
                             ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address),
                      var.proxy["nodes"] > 0 ? 0 : length(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address),
                      var.proxy["nodes"] > 0 ? length(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address) :
                        length(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address) +
                        length(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address))}"

    worker = ["${ibm_is_instance.icp-worker.*.primary_network_interface.0.primary_ipv4_address}"]

    // make the master nodes management nodes if we don't have any specified
    management = "${slice(concat(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address,
                                 ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address),
                          var.mgmt["nodes"] > 0 ? 0 : length(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address),
                          var.mgmt["nodes"] > 0 ? length(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address) :
                            length(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address) +
                            length(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address))}"

    va = ["${ibm_is_instance.icp-va.*.primary_network_interface.0.primary_ipv4_address}"]
  }

  icp-inception = "${local.icp-version}"

  image_location      = "${var.image_location}"
  image_location_user = "${var.image_location_user}"
  image_location_pass = "${var.image_location_password}"

  /* Workaround for terraform issue #10857
     When this is fixed, we can work this out automatically */
  cluster_size = "${1 + var.master["nodes"] + var.worker["nodes"] + var.proxy["nodes"] + var.mgmt["nodes"] + var.va["nodes"]}"

  ###################################################################################################################################
  ## You can feed in arbitrary configuration items in the icp_configuration map.
  ## Available configuration items are documented at https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.0/installing/config_yaml.html
  icp_configuration = {
    "network_cidr"                   = "${var.pod_network_cidr}"
    "service_cluster_ip_range"       = "${var.service_network_cidr}"
    "cluster_lb_address"             = "${ibm_is_lb.master.hostname}"
    "proxy_lb_address"               = "${ibm_is_lb.proxy.hostname}"
    "cluster_CA_domain"              = "${var.cluster_cname != "" ? "${var.cluster_cname}" : "${ibm_is_lb.master.hostname}"}"
    "cluster_name"                   = "${var.deployment}"
    "calico_ip_autodetection_method" = "interface=eth0"

    # An admin password will be generated if not supplied in terraform.tfvars
    "default_admin_password" = "${local.icppassword}"

    # This is the list of disabled management services
    "management_services" = "${local.disabled_management_services}"

    "private_registry_enabled" = "${local.registry_server != "" ? "true" : "false" }"
    "private_registry_server"  = "${local.registry_server}"
    "image_repo"                = "${local.image_repo}"      # Will either be our private repo or external repo
    "docker_username"           = "${local.docker_username}" # Will either be the username generated by us or supplied by the user
    "docker_password"           = "${local.docker_password}" # Will either be the password generated by us or supplied by the user
  }

  # We will let terraform generate a new ssh keypair
  # for the boot master to communicate with worker and proxy nodes
  # during ICP deployment
  generate_key = true

  # SSH user and key for terraform to connect to newly created VMs
  # ssh_key is the private key corresponding to the public key assumed to be included in the template
  ssh_user       = "icpdeploy"
  ssh_key_base64 = "${base64encode(tls_private_key.installkey.private_key_pem)}"
  ssh_agent      = false

  # a hack to wait for the listeners to come up before we start installing
  hooks = {
    "boot-preconfig" = [
      "echo ${ibm_is_lb_listener.master-8001.id} > /dev/null",
      "echo ${ibm_is_lb_listener.master-8443.id} > /dev/null",
      "echo ${ibm_is_lb_listener.master-8500.id} > /dev/null",
      "echo ${ibm_is_lb_listener.master-8600.id} > /dev/null",
      "echo ${ibm_is_lb_listener.master-9443.id} > /dev/null",
      "echo ${join(",", ibm_is_lb_pool_member.master-8001.*.id)} > /dev/null",
      "echo ${join(",", ibm_is_lb_pool_member.master-8443.*.id)} > /dev/null",
      "echo ${join(",", ibm_is_lb_pool_member.master-8500.*.id)} > /dev/null",
      "echo ${join(",", ibm_is_lb_pool_member.master-8600.*.id)} > /dev/null",
      "echo ${join(",", ibm_is_lb_pool_member.master-9443.*.id)} > /dev/null",
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done"
    ]

    # wait for cloud-init to finish on all the nodes before we continue
    "cluster-preconfig" = [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done"
    ]

    "cluster-postconfig" = ["echo No hook"]
    "preinstall"         = ["echo No hook"]
    "postinstall"        = ["echo No hook"]
  }

  # Make sure to wait for image load to complete

  # hooks = {
  #   "boot-preconfig" = [
  #     "while [ ! -f /opt/ibm/.imageload_complete ]; do sleep 5; done"
  #   ]
  # }

}

output "icp_console_host" {
value = "${ibm_is_lb.master.hostname}"
}

output "icp_proxy_host" {
value = "${ibm_is_lb.proxy.hostname}"
}

output "icp_console_url" {
value = "https://${ibm_is_lb.master.hostname}:8443"
}

output "icp_registry_url" {
value = "${ibm_is_lb.master.hostname}:8500"
}

output "kubernetes_api_url" {
value = "https://${ibm_is_lb.master.hostname}:8001"
}

output "icp_admin_username" {
value = "admin"
}

output "icp_admin_password" {
value = "${local.icppassword}"
}
1 change: 1 addition & 0 deletions templates/icp-ee-vpc/icp-install/README.md
@@ -0,0 +1 @@
Add ICP installation binaries here.