diff --git a/templates/icp-ee-vpc/README.md b/templates/icp-ee-vpc/README.md
new file mode 100644
index 0000000..85561dd
--- /dev/null
+++ b/templates/icp-ee-vpc/README.md
@@ -0,0 +1,159 @@
+# Terraform ICP IBM Cloud
+
+This Terraform example configuration uses the [IBM Cloud provider](https://ibm-cloud.github.io/tf-ibm-docs/index.html) to provision virtual machines on IBM Cloud Infrastructure (SoftLayer)
+and the [Terraform Module ICP Deploy](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) to prepare VSIs and deploy [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/) version 3.1.0 or later in a Highly Available configuration. This Terraform template automates best practices learned from installing ICP on IBM Cloud Infrastructure.
+
+## Deployment overview
+This template creates an environment where
+ - The cluster is deployed on an [IBM Virtual Private Cloud (VPC)](https://cloud.ibm.com/docs/vpc-on-classic?topic=vpc-on-classic-about) private network and is accessed through load balancers
+ - The cluster is deployed in a single region across three zones; each zone has its own subnet
+ - Dedicated management node(s)
+ - Dedicated boot node
+ - SSH access from the public network is enabled on the boot node only
+ - Optimised VM sizes
+ - Image Manager is disabled due to lack of File Storage (TODO)
+ - No Vulnerability Advisor node, and the vulnerability advisor service is disabled by default (it can be enabled via `terraform.tfvars` settings as described below)
+ - The images must be pushed to a remote registry and installed over the internet.
+
+## Pre-requisites
+
+* Working copy of [Terraform](https://www.terraform.io/intro/getting-started/install.html)
+  * As of this writing, the IBM Cloud Terraform provider is not in the main Terraform repository and must be installed manually. See [these steps](https://ibm-cloud.github.io/tf-ibm-docs/index.html#using-terraform-with-the-ibm-cloud-provider). The templates have been tested with Terraform version 0.11.11 and the IBM Cloud provider version 0.17.1.
+* The template is tested on VSIs based on Ubuntu 16.04. RHEL is not supported in this automation.
+
+### Environment preparation
+
+The images must be pushed to a remote registry and installed over the internet. One possibility is to use the IBM Cloud Registry. Acquire the binary tarball for IBM Cloud Private, and follow [these instructions](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started) to create a namespace in the IBM Cloud Registry.
+
+Use `docker login` to authenticate to the registry.
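+For example, you can log in with an IAM API key (a minimal sketch; `<region>` and `<api key>` are placeholders for your own registry region and key):
+
+```bash
+# authenticate docker to the IBM Cloud Registry with an IAM API key
+docker login -u iamapikey -p <api key> <region>.icr.io
+```
+
+The following example commands load the images locally and push them to the IBM Cloud Registry.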
+
+```bash
+# load all the images locally
+tar xf ibm-cloud-private-x86_64-3.2.0.tar.gz -O | docker load
+
+# print the docker tag commands that retag the images with the ICR registry URL and namespace (pipe the output to a shell to run them)
+docker images | grep -v "TAG" | grep -v harbor | awk '{a = $1; b = sub(/ibmcom/,"",a); print "docker tag " $1 ":" $2 " <region>.icr.io/<namespace>" a ":" $2 }'
+
+# remove the arch from the image names
+images=`docker images | grep <region>.icr.io | grep -v "TAG" | awk '{print $1 ":" $2}' | grep amd64`
+for image in $images; do docker tag $image `echo $image | sed -e 's/-amd64//'`; done
+
+# push all the images to ICR
+docker images | grep <region>.icr.io | grep -v "TAG" | awk '{print $1 ":" $2}' | xargs -n1 docker push
+```
+
+Once this is complete you can configure the ICP installation to pull images from the repository by first [creating an API key for read-only access](https://cloud.ibm.com/docs/services/Registry?topic=registry-registry_access), then setting the following variables before running terraform:
+
+```
+registry_server = "<region>.icr.io"
+registry_username = "iamapikey"
+registry_password = "<api key>"
+icp_inception_image = "<namespace>/icp-inception:3.2.0-ee"
+```
+
+
+### Using the Terraform templates
+
+1. `git clone` the repository
+
+2. Navigate to the template directory `templates/icp-ee-vpc`
+
+3. Create a `terraform.tfvars` file to reflect your environment. Please see [variables.tf](variables.tf) for variable names and descriptions. Here is an example `terraform.tfvars` file:
+
+```
+key_name = ["jkwong-pub"]
+deployment = "icp"
+icp_inception_image = "ibmcom/icp-inception:3.2.0-ee"
+registry_server = "<region>.icr.io"
+registry_username = "iamapikey"
+registry_password = "<api key>"
+
+pod_network_cidr = "172.24.0.0/16"
+service_network_cidr = "172.25.0.0/16"
+
+master = {
+  nodes = "3"
+  cpu_cores = "8"
+  memory = "32768"
+}
+
+proxy = {
+  nodes = "3"
+}
+
+worker = {
+  nodes = "3"
+}
+
+mgmt = {
+  nodes = "3"
+}
+
+va = {
+  nodes = "0"
+}
+```
+
+4. Export the IBM Cloud API key to the environment
+
+   ```bash
+   export BM_API_KEY=<api key>
+   ```
+
+5. Run `terraform init` to download dependencies (modules and plugins)
+
+6. Run `terraform plan` to review the deployment plan
+
+7. Run `terraform apply` to start the deployment.
+
+
+### Automation Notes
+
+#### What does the automation do
+1. Create a VPC in a region
+2. Create subnets for each zone in the region
+3. Create public gateways for each subnet
+4. Create security groups and rules for cluster communication as declared in [security_group.tf](security_group.tf)
+5. Create load balancers for Proxy and Control plane
+6. Create a boot node and assign it a floating IP
+7. Create the virtual machines as defined in `variables.tf` and `terraform.tfvars`
+   - Use cloud-init to add a user `icpdeploy` with a randomly generated ssh-key
+   - Configure a separate hard disk to be used by docker
+   - Configure the shared storage on master nodes
+
+8. Hand over to the [icp-deploy](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) terraform module as declared in the [icp-deploy.tf](icp-deploy.tf) file
+
+
+#### What does the icp deploy module do
+1. It uses the ssh key generated for the `icpdeploy` user to ssh from the terraform controller to all cluster nodes and install the ICP prerequisites
+2. It generates a new ssh keypair for ICP boot (master) node to cluster communication and distributes the public key to the cluster nodes. This key is used by the ICP Ansible installer.
+3. It populates the necessary `/etc/hosts` file on the boot node
+4.
It generates the ICP cluster hosts file based on information provided in [icp-deploy.tf](icp-deploy.tf)
+5. It generates the ICP cluster `config.yaml` file based on information provided in [icp-deploy.tf](icp-deploy.tf)
+
+#### Security Groups
+
+The automation leverages Security Groups to lock down public and private access to the cluster.
+
+- Inbound communication to the master and proxy nodes is only permitted on ports from the private subnet that the LBaaS is provisioned on.
+- Inbound SSH to the boot node is permitted from all addresses on the internet.
+- All outbound communication is allowed.
+- All other communication is only permitted between cluster nodes.
+
+#### LBaaS
+
+The automation exposes the Master control plane to the Internet on:
+- TCP port 8443 (master console)
+- TCP port 8500 (private registry)
+- TCP port 8600 (private registry)
+- TCP port 8001 (Kubernetes API)
+- TCP port 9443 (OIDC authentication endpoint)
+
+The automation exposes the Proxy nodes to the internet on:
+- TCP port 443 (https)
+- TCP port 80 (http)
+
+### Terraform configuration
+
+Please see [variables.tf](variables.tf) for additional parameters.
+
diff --git a/templates/icp-ee-vpc/cfc-certs/README.md b/templates/icp-ee-vpc/cfc-certs/README.md
new file mode 100644
index 0000000..c64730a
--- /dev/null
+++ b/templates/icp-ee-vpc/cfc-certs/README.md
@@ -0,0 +1,4 @@
+Add TLS certificates here:
+
+* icp-router.crt
+* icp-router.key
diff --git a/templates/icp-ee-vpc/icp-deploy.tf b/templates/icp-ee-vpc/icp-deploy.tf
new file mode 100644
index 0000000..9b2bcee
--- /dev/null
+++ b/templates/icp-ee-vpc/icp-deploy.tf
@@ -0,0 +1,138 @@
+##################################
+### Deploy ICP to cluster
+##################################
+module "icpprovision" {
+  source = "github.com/ibm-cloud-architecture/terraform-module-icp-deploy.git?ref=3.1.1"
+
+  # Provide IP addresses for boot, master, mgmt, va, proxy and workers
+  boot-node = "${ibm_is_instance.icp-boot.primary_network_interface.0.primary_ipv4_address}"
+  bastion_host = "${ibm_is_floating_ip.icp-boot-pub.address}"
+  icp-host-groups = {
+    master = ["${ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address}"]
+    proxy = "${slice(concat(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address,
+                            ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address),
+                     var.proxy["nodes"] > 0 ? 0 : length(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address),
+                     var.proxy["nodes"] > 0 ? length(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address) :
+                        length(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address) +
+                        length(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address))}"
+
+    worker = ["${ibm_is_instance.icp-worker.*.primary_network_interface.0.primary_ipv4_address}"]
+
+    // make the master nodes management nodes if we don't have any specified
+    management = "${slice(concat(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address,
+                                 ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address),
+                          var.mgmt["nodes"] > 0 ? 0 : length(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address),
+                          var.mgmt["nodes"] > 0 ?
length(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address) :
+                            length(ibm_is_instance.icp-mgmt.*.primary_network_interface.0.primary_ipv4_address) +
+                            length(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address))}"
+
+    va = ["${ibm_is_instance.icp-va.*.primary_network_interface.0.primary_ipv4_address}"]
+  }
+
+  icp-inception = "${local.icp-version}"
+
+  image_location = "${var.image_location}"
+  image_location_user = "${var.image_location_user}"
+  image_location_pass = "${var.image_location_password}"
+
+  /* Workaround for terraform issue #10857
+     When this is fixed, we can work this out automatically */
+  cluster_size = "${1 + var.master["nodes"] + var.worker["nodes"] + var.proxy["nodes"] + var.mgmt["nodes"] + var.va["nodes"]}"
+
+  ###################################################################################################################################
+  ## You can feed in arbitrary configuration items in the icp_configuration map.
+  ## Available configuration items are documented at https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.0/installing/config_yaml.html
+  icp_configuration = {
+    "network_cidr"              = "${var.pod_network_cidr}"
+    "service_cluster_ip_range"  = "${var.service_network_cidr}"
+    "cluster_lb_address"        = "${ibm_is_lb.master.hostname}"
+    "proxy_lb_address"          = "${ibm_is_lb.proxy.hostname}"
+    "cluster_CA_domain"         = "${var.cluster_cname != "" ? "${var.cluster_cname}" : "${ibm_is_lb.master.hostname}"}"
+    "cluster_name"              = "${var.deployment}"
+    "calico_ip_autodetection_method" = "interface=eth0"
+
+    # An admin password will be generated if not supplied in terraform.tfvars
+    "default_admin_password"    = "${local.icppassword}"
+
+    # This is the list of disabled management services
+    "management_services"       = "${local.disabled_management_services}"
+
+    "private_registry_enabled"  = "${local.registry_server != "" ? "true" : "false" }"
+    "private_registry_server"   = "${local.registry_server}"
+    "image_repo"                = "${local.image_repo}"      # Will either be our private repo or external repo
+    "docker_username"           = "${local.docker_username}" # Will either be the username generated by us or supplied by the user
+    "docker_password"           = "${local.docker_password}" # Will either be the password generated by us or supplied by the user
+  }
+
+  # We will let terraform generate a new ssh keypair
+  # for boot master to communicate with worker and proxy nodes
+  # during ICP deployment
+  generate_key = true
+
+  # SSH user and key for terraform to connect to newly created VMs
+  # ssh_key is the private key corresponding to the public key assumed to be included in the template
+  ssh_user       = "icpdeploy"
+  ssh_key_base64 = "${base64encode(tls_private_key.installkey.private_key_pem)}"
+  ssh_agent      = false
+
+  # a hack to wait for the listeners to come up before we start installing
+  hooks = {
+    "boot-preconfig" = [
+      "echo ${ibm_is_lb_listener.master-8001.id} > /dev/null",
+      "echo ${ibm_is_lb_listener.master-8443.id} > /dev/null",
+      "echo ${ibm_is_lb_listener.master-8500.id} > /dev/null",
+      "echo ${ibm_is_lb_listener.master-8600.id} > /dev/null",
+      "echo ${ibm_is_lb_listener.master-9443.id} > /dev/null",
+      "echo ${join(",", ibm_is_lb_pool_member.master-8001.*.id)} > /dev/null",
+      "echo ${join(",", ibm_is_lb_pool_member.master-8443.*.id)} > /dev/null",
+      "echo ${join(",", ibm_is_lb_pool_member.master-8500.*.id)} > /dev/null",
+      "echo ${join(",", ibm_is_lb_pool_member.master-8600.*.id)} > /dev/null",
+      "echo ${join(",", ibm_is_lb_pool_member.master-9443.*.id)} > /dev/null",
+      "while [ !
-f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done" + ] + # wait for cloud-init to finish on all the nodes before we continue + "cluster-preconfig" = [ + "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done" + ], + "cluster-postconfig" = ["echo No hook"] + "preinstall" = ["echo No hook"] + "postinstall" = ["echo No hook"] + } + + # Make sure to wait for image load to complete + + # hooks = { + # "boot-preconfig" = [ + # "while [ ! -f /opt/ibm/.imageload_complete ]; do sleep 5; done" + # ] + # } + +} + +output "icp_console_host" { + value = "${ibm_is_lb.master.hostname}" +} + +output "icp_proxy_host" { + value = "${ibm_is_lb.proxy.hostname}" +} + +output "icp_console_url" { + value = "https://${ibm_is_lb.master.hostname}:8443" +} + +output "icp_registry_url" { + value = "${ibm_is_lb.master.hostname}:8500" +} + +output "kubernetes_api_url" { + value = "https://${ibm_is_lb.master.hostname}:8001" +} + +output "icp_admin_username" { + value = "admin" +} + +output "icp_admin_password" { + value = "${local.icppassword}" +} diff --git a/templates/icp-ee-vpc/icp-install/README.md b/templates/icp-ee-vpc/icp-install/README.md new file mode 100644 index 0000000..ae7a72c --- /dev/null +++ b/templates/icp-ee-vpc/icp-install/README.md @@ -0,0 +1 @@ +add ICP installation binaries here diff --git a/templates/icp-ee-vpc/instances.tf b/templates/icp-ee-vpc/instances.tf new file mode 100644 index 0000000..b29a828 --- /dev/null +++ b/templates/icp-ee-vpc/instances.tf @@ -0,0 +1,395 @@ +data "ibm_is_image" "osimage" { + name = "${var.os_image}" +} + +data "ibm_is_instance_profile" "icp-boot-profile" { + name = "${var.boot["profile"]}" +} + +data "ibm_is_instance_profile" "icp-master-profile" { + name = "${var.master["profile"]}" +} + +data "ibm_is_instance_profile" "icp-proxy-profile" { + name = "${var.proxy["profile"]}" +} + +data "ibm_is_instance_profile" "icp-worker-profile" { + name = "${var.worker["profile"]}" +} + +data "ibm_is_instance_profile" "icp-mgmt-profile" { + name = "${var.mgmt["profile"]}" +} + +data "ibm_is_instance_profile" "icp-va-profile" { + name = "${var.va["profile"]}" +} + +resource "ibm_is_floating_ip" "icp-boot-pub" { + name = "${var.deployment}-boot-${random_id.clusterid.hex}-pubip" + target = "${ibm_is_instance.icp-boot.primary_network_interface.0.id}" +} + +############################################## +## Provision boot node +############################################## +resource "ibm_is_volume" "icp-boot-docker-vol" { + name = "${var.deployment}-boot-docker-${random_id.clusterid.hex}" + profile = "general-purpose" + zone = "${element(data.ibm_is_zone.icp_zone.*.name, count.index)}" + capacity = "${var.boot["docker_vol_size"]}" +} + +resource "ibm_is_instance" "icp-boot" { + name = "${var.deployment}-boot-${random_id.clusterid.hex}" + + vpc = "${ibm_is_vpc.icp_vpc.id}" + zone = "${element(data.ibm_is_zone.icp_zone.*.name, 0)}" + + keys = ["${data.ibm_is_ssh_key.public_key.*.id}"] + profile = "${data.ibm_is_instance_profile.icp-boot-profile.name}" + + primary_network_interface = { + subnet = "${element(ibm_is_subnet.icp_subnet.*.id, 0)}" + } + + image = "${data.ibm_is_image.osimage.id}" + volumes = [ + "${ibm_is_volume.icp-boot-docker-vol.id}" + ] + + user_data = < 0 ? var.proxy["nodes"] : var.master["nodes"]}" + lb = "${ibm_is_lb.proxy.id}" + pool = "${element(split("/",ibm_is_lb_pool.proxy-443.id),1)}" + port = "443" + target_address = "${var.proxy["nodes"] > 0 ? 
+ "${element(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address, count.index)}" : + "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" }" +} + +resource "ibm_is_lb_pool" "proxy-80" { + lb = "${ibm_is_lb.proxy.id}" + name = "${var.deployment}-proxy-80-${random_id.clusterid.hex}" + protocol = "tcp" + algorithm = "round_robin" + health_delay = 60 + health_retries = 5 + health_timeout = 30 + health_type = "tcp" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.proxy-443", + //"ibm_is_lb_pool.proxy-443" + ] +} + +resource "ibm_is_lb_listener" "proxy-80" { + lb = "${ibm_is_lb.proxy.id}" + protocol = "tcp" + port = "80" + default_pool = "${element(split("/",ibm_is_lb_pool.proxy-80.id),1)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.proxy-443", + //"ibm_is_lb_pool.proxy-443" + ] +} + +resource "ibm_is_lb_pool_member" "proxy-80" { + count = "${var.proxy["nodes"] > 0 ? var.proxy["nodes"] : var.master["nodes"]}" + lb = "${ibm_is_lb.proxy.id}" + pool = "${element(split("/",ibm_is_lb_pool.proxy-80.id),1)}" + port = "80" + target_address = "${var.proxy["nodes"] > 0 ? + "${element(ibm_is_instance.icp-proxy.*.primary_network_interface.0.primary_ipv4_address, count.index)}" : + "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" }" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_pool_member.proxy-443", + //"ibm_is_lb_pool.proxy-443", + //"ibm_is_lb_listener.proxy-443" + ] +} + +resource "ibm_is_lb" "master" { + name = "${var.deployment}-mastr-${random_id.clusterid.hex}" + subnets = ["${ibm_is_subnet.icp_subnet.*.id}"] +} + +resource "ibm_is_lb_pool" "master-8001" { + lb = "${ibm_is_lb.master.id}" + name = "${var.deployment}-master-8001-${random_id.clusterid.hex}" + protocol = "tcp" + algorithm = "round_robin" + health_delay = 60 + health_retries = 5 + health_timeout = 30 + health_type = "tcp" +} + +resource "ibm_is_lb_listener" "master-8001" { + protocol = "tcp" + lb = "${ibm_is_lb.master.id}" + port = "8001" + default_pool = "${element(split("/",ibm_is_lb_pool.master-8001.id),1)}" +} + +resource "ibm_is_lb_pool_member" "master-8001" { + count = "${var.master["nodes"]}" + lb = "${ibm_is_lb.master.id}" + pool = "${element(split("/",ibm_is_lb_pool.master-8001.id),1)}" + port = "8001" + target_address = "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" +} + +resource "ibm_is_lb_pool" "master-8443" { + lb = "${ibm_is_lb.master.id}" + name = "${var.deployment}-master-8443-${random_id.clusterid.hex}" + protocol = "tcp" + algorithm = "round_robin" + health_delay = 60 + health_retries = 5 + health_timeout = 30 + health_type = "tcp" + +/* + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + */ +} + +resource "ibm_is_lb_listener" "master-8443" { + protocol = "tcp" + lb = "${ibm_is_lb.master.id}" + port = "8443" + default_pool = "${element(split("/",ibm_is_lb_pool.master-8443.id),1)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + +} + +resource "ibm_is_lb_pool_member" "master-8443" { + count = "${var.master["nodes"]}" + lb = "${ibm_is_lb.master.id}" + pool = 
"${element(split("/",ibm_is_lb_pool.master-8443.id),1)}" + port = "8443" + target_address = "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_pool_member.master-8001" + ] + +} + +resource "ibm_is_lb_pool" "master-8500" { + lb = "${ibm_is_lb.master.id}" + name = "${var.deployment}-master-8500-${random_id.clusterid.hex}" + protocol = "tcp" + algorithm = "round_robin" + health_delay = 60 + health_retries = 5 + health_timeout = 30 + health_type = "tcp" + + # ensure these are created serially -- LB limitations + /* + depends_on = [ + "ibm_is_lb_listener.master-8443", + //"ibm_is_lb_pool.master-8443", + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + */ + +} + +resource "ibm_is_lb_listener" "master-8500" { + protocol = "tcp" + lb = "${ibm_is_lb.master.id}" + port = "8500" + default_pool = "${element(split("/",ibm_is_lb_pool.master-8500.id),1)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.master-8443", + //"ibm_is_lb_pool.master-8443", + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + + +} + +resource "ibm_is_lb_pool_member" "master-8500" { + count = "${var.master["nodes"]}" + lb = "${ibm_is_lb.master.id}" + pool = "${element(split("/",ibm_is_lb_pool.master-8500.id),1)}" + port = "8500" + target_address = "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_pool_member.master-8001", + "ibm_is_lb_pool_member.master-8443" + ] + +} + +resource "ibm_is_lb_pool" "master-8600" { + lb = "${ibm_is_lb.master.id}" + name = "${var.deployment}-master-8600-${random_id.clusterid.hex}" + protocol = "tcp" + algorithm = "round_robin" + health_delay = 60 + health_retries = 5 + health_timeout = 30 + health_type = "tcp" + + # ensure these are created serially -- LB limitations + /* + depends_on = [ + "ibm_is_lb_listener.master-8500", + //"ibm_is_lb_pool.master-8500", + "ibm_is_lb_listener.master-8443", + //"ibm_is_lb_pool.master-8443", + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + */ + +} + +resource "ibm_is_lb_listener" "master-8600" { + protocol = "tcp" + lb = "${ibm_is_lb.master.id}" + port = "8600" + default_pool = "${element(split("/",ibm_is_lb_pool.master-8600.id),1)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.master-8500", + //"ibm_is_lb_pool.master-8500", + "ibm_is_lb_listener.master-8443", + //"ibm_is_lb_pool.master-8443", + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + +} + +resource "ibm_is_lb_pool_member" "master-8600" { + count = "${var.master["nodes"]}" + lb = "${ibm_is_lb.master.id}" + pool = "${element(split("/",ibm_is_lb_pool.master-8600.id),1)}" + port = "8600" + target_address = "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" + + depends_on = [ + "ibm_is_lb_pool_member.master-8001", + "ibm_is_lb_pool_member.master-8443", + "ibm_is_lb_pool_member.master-8500" + ] + +} + +resource "ibm_is_lb_pool" "master-9443" { + lb = "${ibm_is_lb.master.id}" + name = "${var.deployment}-master-9443-${random_id.clusterid.hex}" + protocol = "tcp" + algorithm = "round_robin" + health_delay = 60 + health_retries = 5 + health_timeout = 30 + health_type = "tcp" + + /* + # 
ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.master-8600", + //"ibm_is_lb_pool.master-8600", + "ibm_is_lb_listener.master-8500", + //"ibm_is_lb_pool.master-8500", + "ibm_is_lb_listener.master-8443", + //"ibm_is_lb_pool.master-8443", + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] + */ + +} + +resource "ibm_is_lb_listener" "master-9443" { + protocol = "tcp" + lb = "${ibm_is_lb.master.id}" + port = "9443" + default_pool = "${element(split("/",ibm_is_lb_pool.master-9443.id),1)}" + + # ensure these are created serially -- LB limitations + depends_on = [ + "ibm_is_lb_listener.master-8600", + //"ibm_is_lb_pool.master-8600", + "ibm_is_lb_listener.master-8500", + //"ibm_is_lb_pool.master-8500", + "ibm_is_lb_listener.master-8443", + //"ibm_is_lb_pool.master-8443", + "ibm_is_lb_listener.master-8001", + //"ibm_is_lb_pool.master-8001" + ] +} + +resource "ibm_is_lb_pool_member" "master-9443" { + count = "${var.master["nodes"]}" + lb = "${ibm_is_lb.master.id}" + pool = "${element(split("/",ibm_is_lb_pool.master-9443.id),1)}" + port = "9443" + target_address = "${element(ibm_is_instance.icp-master.*.primary_network_interface.0.primary_ipv4_address, count.index)}" + + depends_on = [ + "ibm_is_lb_pool_member.master-8001", + "ibm_is_lb_pool_member.master-8443", + "ibm_is_lb_pool_member.master-8500", + "ibm_is_lb_pool_member.master-8600" + ] + +} + + diff --git a/templates/icp-ee-vpc/main.tf b/templates/icp-ee-vpc/main.tf new file mode 100644 index 0000000..99db679 --- /dev/null +++ b/templates/icp-ee-vpc/main.tf @@ -0,0 +1,71 @@ +provider "ibm" { + generation = "1" +} + +locals { + # Set the local filename of the docker package if we're uploading it + docker_package_uri = "${var.docker_package_location != "" ? "/tmp/${basename(var.docker_package_location)}" : "" }" + + # The storage IDs that will be + master_fs_ids = "${compact( + concat( + ibm_storage_file.fs_audit.*.id, + ibm_storage_file.fs_registry.*.id, + list("")) + )}" + + icppassword = "${var.icppassword != "" ? "${var.icppassword}" : "${random_id.adminpassword.hex}"}" + + + ####### + ## Intermediate interpolations for the private registry + ## Whether we are provided with details of an external, or we create one ourselves + ## the image_repo and docker_username / docker_password will always be available and consistent + ####### + + # If we stand up a image registry what will the registry_server name and namespace be + registry_server = "${var.registry_server != "" ? "${var.registry_server}" : ""}" + namespace = "${dirname(var.icp_inception_image)}" # This will typically return ibmcom + + # The final image repo will be either interpolated from what supplied in icp_inception_image or + image_repo = "${var.registry_server == "" ? "" : "${local.registry_server}/${local.namespace}"}" + icp-version = "${format("%s%s%s", "${local.docker_username != "" ? "${local.docker_username}:${local.docker_password}@" : ""}", + "${var.registry_server != "" ? "${var.registry_server}/" : ""}", + "${var.icp_inception_image}")}" + + # If we're using external registry we need to be supplied registry_username and registry_password + docker_username = "${var.registry_username != "" ? var.registry_username : ""}" + docker_password = "${var.registry_password != "" ? 
var.registry_password : ""}" + + # This is just to have a long list of disabled items to use in icp-deploy.tf + disabled_list = "${list("disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled","disabled")}" + + disabled_management_services = "${zipmap(var.disabled_management_services, slice(local.disabled_list, 0, length(var.disabled_management_services)))}" +} + +# Create a unique random clusterid for this cluster +resource "random_id" "clusterid" { + byte_length = "4" +} + +# Create a SSH key for SSH communication from terraform to VMs +resource "tls_private_key" "installkey" { + algorithm = "RSA" +} + +data "ibm_is_ssh_key" "public_key" { + count = "${length(var.key_name)}" + name = "${element(var.key_name, count.index)}" +} + +resource "ibm_is_ssh_key" "installkey" { + name = "${format("icp-%s", random_id.clusterid.hex)}" + public_key = "${tls_private_key.installkey.public_key_openssh}" +} + +# Generate a random string in case user wants us to generate admin password +resource "random_id" "adminpassword" { + byte_length = "16" +} + + diff --git a/templates/icp-ee-vpc/scripts/bootstrap.sh b/templates/icp-ee-vpc/scripts/bootstrap.sh new file mode 100644 index 0000000..5439122 --- /dev/null +++ b/templates/icp-ee-vpc/scripts/bootstrap.sh @@ -0,0 +1,170 @@ +#!/bin/bash + +ubuntu_install(){ + # attempt to retry apt-get update until cloud-init gives up the apt lock + until apt-get update; do + sleep 2 + done + + until apt-get install -y \ + unzip \ + python \ + python-yaml \ + thin-provisioning-tools \ + nfs-client \ + lvm2; do + sleep 2 + done +} + +crlinux_install() { + yum install -y \ + unzip \ + PyYAML \ + device-mapper \ + libseccomp \ + libtool-ltdl \ + libcgroup \ + iptables \ + device-mapper-persistent-data \ + nfs-util \ + lvm2 +} + +docker_install() { + if docker --version; then + echo "Docker already installed. Exiting" + return 0 + fi + + if [ -z "${package_location}" -a "${OSLEVEL}" == "ubuntu" ]; then + # if we're on ubuntu, we can install docker-ce off of the repo + apt-get install -y \ + apt-transport-https \ + ca-certificates \ + curl \ + software-properties-common + + curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - + + add-apt-repository \ + "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ + $(lsb_release -cs) \ + stable" + + apt-get update && apt-get install -y docker-ce + elif [ ! -z "${package_location}" ]; then + while [ ! -f "${package_location}" ]; do + echo "Waiting for docker package at ${package_location} ... " + sleep 1 + done + + echo "Install docker from ${package_location}" + chmod u+x "${package_location}" + + # loop here until file provisioner is done copying the package + until ${package_location} --install; do + sleep 2 + done + else + return 0 + fi + + partprobe + lsblk + systemctl enable docker + storage_driver=`docker info | grep 'Storage Driver:' | cut -d: -f2 | sed -e 's/\s//g'` + echo "storage driver is ${storage_driver}" + if [ "${storage_driver}" == "devicemapper" ]; then + # check if loop lvm mode is enabled + if [ -z `docker info | grep 'loop file'` ]; then + echo "Direct-lvm mode is configured." + return 0 + fi + + # TODO if docker block device is not provided, make sure we use overlay2 storage driver + if [ -z "${docker_disk}" ]; then + echo "docker loop-lvm mode is configured and a docker block device was not specified! 
This is not recommended for production!" + return 0 + fi + + echo "A docker disk ${docker_disk} is provided, setting up direct-lvm mode ..." + + # docker installer uses devicemapper already + cat > /etc/docker/daemon.json <&2 + exit 1 + fi + + # Download the file using auth if provided + echo "Downloading ${image_url}" >&2 + mkdir -p ${sourcedir} + wget --continue ${username:+--user} ${username} ${password:+--password} ${password} \ + -O ${sourcedir}/${filename} "${image_url}" + + # Set the image file name if we're on the same platform + if [[ ${filename} =~ .*$(uname -m).* ]]; then + echo "Setting image_file to ${sourcedir}/${filename}" + image_file="${sourcedir}/${filename}" + fi +elif [[ "${package_location:0:3}" == "nfs" ]]; then + # Separate out the filename and path + sourcedir="/opt/ibm/cluster/images" + nfs_mount=$(dirname ${package_location:4}) + image_file="${sourcedir}/$(basename ${package_location})" + sudo mkdir -p ${sourcedir} + + # Mount + sudo mount.nfs $nfs_mount $sourcedir + if [ $? -ne 0 ]; then + echo "An error occurred mounting the NFS server. Mount point: $nfs_mount" + exit 1 + fi +else + # This must be uploaded from local file, terraform should have copied it to /tmp + sourcedir="/opt/ibm/cluster/images" + image_file="/tmp/$(basename ${package_location})" + sudo mkdir -p ${sourcedir} + sudo mv ${image_file} ${sourcedir}/ +fi + +echo "Unpacking ${image_file} ..." +pv --interval 10 ${image_file} | tar zxf - -O | sudo docker load + diff --git a/templates/icp-ee-vpc/security_group.tf b/templates/icp-ee-vpc/security_group.tf new file mode 100644 index 0000000..6ffa061 --- /dev/null +++ b/templates/icp-ee-vpc/security_group.tf @@ -0,0 +1,195 @@ +resource "ibm_is_security_group" "cluster_private" { + name = "${var.deployment}-cluster-priv-${random_id.clusterid.hex}" + vpc = "${ibm_is_vpc.icp_vpc.id}" +} + +resource "ibm_is_security_group_rule" "cluster_ingress_from_self" { + direction = "ingress" + remote = "${ibm_is_security_group.cluster_private.id}" + group = "${ibm_is_security_group.cluster_private.id}" +} + +resource "ibm_is_security_group_rule" "cluster_ingress_master" { + direction = "ingress" + remote = "${ibm_is_security_group.master_node.id}" + group = "${ibm_is_security_group.cluster_private.id}" +} + +resource "ibm_is_security_group_rule" "cluster_ingress_ssh_boot" { + direction = "ingress" + remote = "${ibm_is_security_group.boot_node.id}" + group = "${ibm_is_security_group.cluster_private.id}" + tcp { + port_min = 22 + port_max = 22 + } +} + +resource "ibm_is_security_group_rule" "cluster_egress_all" { + direction = "egress" + group = "${ibm_is_security_group.cluster_private.id}" + remote = "0.0.0.0/0" +} + +resource "ibm_is_security_group" "master_node" { + name = "${var.deployment}-master-${random_id.clusterid.hex}" + vpc = "${ibm_is_vpc.icp_vpc.id}" +} + +resource "ibm_is_security_group_rule" "master_ingress_ssh_boot" { + direction = "ingress" + remote = "${ibm_is_security_group.boot_node.id}" + group = "${ibm_is_security_group.master_node.id}" + tcp { + port_min = 22 + port_max = 22 + } +} + +// TODO i am unsure about allowing all traffic to the master from the cluster, but it doesn't seem +// work without it -- particularly in multi-tenant environments i'm uneasy about allowing +// access to etcd, so NetworkPolicy should be used in the cluster to limit access to specific +// ports from specific pods (i.e. 
calico) +resource "ibm_is_security_group_rule" "master_ingress_all_cluster" { + direction = "ingress" + remote = "${ibm_is_security_group.cluster_private.id}" + group = "${ibm_is_security_group.master_node.id}" +} + + +resource "ibm_is_security_group_rule" "master_egress_all" { + direction = "egress" + group = "${ibm_is_security_group.master_node.id}" + remote = "0.0.0.0/0" +} + + +# restrict incoming on ports to LBaaS private subnet +resource "ibm_is_security_group_rule" "master_ingress_port_8443_all" { + direction = "ingress" + tcp { + port_min = 8443 + port_max = 8443 + } + group = "${ibm_is_security_group.master_node.id}" + #remote = "${ibm_compute_vm_instance.icp-master.0.private_subnet}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + +# restrict to LBaaS private subnet +resource "ibm_is_security_group_rule" "master_ingress_port_8500_all" { + direction = "ingress" + tcp { + port_min = 8500 + port_max = 8500 + } + group = "${ibm_is_security_group.master_node.id}" + # remote = "${ibm_compute_vm_instance.icp-master.0.private_subnet}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + +# restrict to LBaaS private subnet +resource "ibm_is_security_group_rule" "master_ingress_port_8600_all" { + direction = "ingress" + tcp { + port_min = 8600 + port_max = 8600 + } + group = "${ibm_is_security_group.master_node.id}" + # remote = "${ibm_compute_vm_instance.icp-master.0.private_subnet}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + +# TODO restrict to LBaaS private subnet +resource "ibm_is_security_group_rule" "master_ingress_port_8001_all" { + direction = "ingress" + tcp { + port_min = 8001 + port_max = 8001 + } + group = "${ibm_is_security_group.master_node.id}" + # remote = "${ibm_compute_vm_instance.icp-master.0.private_subnet}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + + +# TODO do we still need this rule? +resource "ibm_is_security_group_rule" "master_ingress_port_9443_all" { + direction = "ingress" + tcp { + port_min = 9443 + port_max = 9443 + } + group = "${ibm_is_security_group.master_node.id}" + # remote = "${ibm_compute_vm_instance.icp-master.0.private_subnet}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + +# restrict to LBaaS private subnet +resource "ibm_is_security_group_rule" "proxy_ingress_port_80_all" { + direction = "ingress" + tcp { + port_min = 80 + port_max = 80 + } + group = "${ibm_is_security_group.proxy_node.id}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + +# restrict to LBaaS private subnet +resource "ibm_is_security_group_rule" "proxy_ingress_port_443_all" { + direction = "ingress" + tcp { + port_min = 443 + port_max = 443 + } + group = "${ibm_is_security_group.proxy_node.id}" + # Sometimes LBaaS can be placed on a different subnet + remote = "0.0.0.0/0" +} + +resource "ibm_is_security_group" "proxy_node" { + name = "${var.deployment}-proxy-${random_id.clusterid.hex}" + vpc = "${ibm_is_vpc.icp_vpc.id}" +} + +resource "ibm_is_security_group_network_interface_attachment" "proxy" { + count = "${var.proxy["nodes"] > 0 ? var.proxy["nodes"] : var.master["nodes"]}" + security_group = "${ibm_is_security_group.proxy_node.id}" + network_interface = "${var.proxy["nodes"] > 0 ? 
+ element(ibm_is_instance.icp-proxy.*.primary_network_interface.0.id, count.index) : + element(ibm_is_instance.icp-master.*.primary_network_interface.0.id, count.index)}" +} + +resource "ibm_is_security_group" "boot_node" { + name = "${var.deployment}-boot-${random_id.clusterid.hex}" + vpc = "${ibm_is_vpc.icp_vpc.id}" +} + +resource "ibm_is_security_group_network_interface_attachment" "boot" { + security_group = "${ibm_is_security_group.boot_node.id}" + network_interface = "${ibm_is_instance.icp-boot.primary_network_interface.0.id}" +} + +# TODO restrict to allowed CIDR +resource "ibm_is_security_group_rule" "boot_ingress_ssh_all" { + group = "${ibm_is_security_group.boot_node.id}" + direction = "ingress" + remote = "0.0.0.0/0" + tcp { + port_min = 22 + port_max = 22 + } +} + +resource "ibm_is_security_group_rule" "boot_egress_all" { + group = "${ibm_is_security_group.boot_node.id}" + remote = "0.0.0.0/0" + direction = "egress" +} diff --git a/templates/icp-ee-vpc/variables.tf b/templates/icp-ee-vpc/variables.tf new file mode 100644 index 0000000..4168669 --- /dev/null +++ b/templates/icp-ee-vpc/variables.tf @@ -0,0 +1,187 @@ +##### SoftLayer/IBMCloud Access Credentials ###### + +variable "key_name" { + description = "Name or reference of SSH key to provision IBM Cloud instances with" + default = [] +} + +variable "deployment" { + description = "Identifier prefix added to the host names." + default = "icp" +} + +variable "os_image" { + description = "IBM Cloud OS reference code to determine OS, version, word length" + default = "ubuntu-16.04-amd64" +} + +variable "vpc_region" { + default = "us-south" +} + +variable "vpc_address_prefix" { + description = "address prefixes for each zone in the VPC. the VPC subnet CIDRs for each zone must be within the address prefix." + default = [ "10.10.0.0/24", "10.11.0.0/24", "10.12.0.0/24" ] +} + +variable "vpc_subnet_cidr" { + default = [ "10.10.0.0/24", "10.11.0.0/24", "10.12.0.0/24" ] +} + +##### ICP Instance details ###### + +variable "boot" { + type = "map" + + default = { + profile = "cc1-2x4" + + disk_size = "100" // GB + docker_vol_size = "100" // GB + + network_speed = "1000" + } +} + +variable "master" { + type = "map" + + default = { + nodes = "3" + profile = "cc1-8x16" + + disk_size = "100" // GB + docker_vol_size = "100" // GB + + network_speed = "1000" + } +} + +variable "mgmt" { + type = "map" + + default = { + nodes = "3" + profile = "bc1-4x16" + + disk_size = "100" // GB + docker_vol_size = "100" // GB + + network_speed = "1000" + } +} + +variable "proxy" { + type = "map" + + default = { + nodes = "3" + + profile = "cc1-2x4" + + disk_size = "100" // GB + docker_vol_size = "100" // GB + + network_speed= "1000" + } +} + +variable "va" { + type = "map" + + default = { + nodes = "0" + + profile = "bc1-4x16" + + disk_size = "100" // GB + docker_vol_size = "100" // GB + + network_speed = "1000" + } +} + + +variable "worker" { + type = "map" + + default = { + nodes = "3" + + profile = "bc1-4x16" + + disk_size = "100" // GB, 25 or 100 + docker_vol_size = "100" // GB + additional_disk = "0" // GB, if you want an additional block device, set to non-zero + + network_speed= "1000" + } +} + +variable "docker_package_location" { + description = "URI for docker package location, e.g. http:///icp-docker-17.09_x86_64.bin or nfs:/icp-docker-17.09_x86_64.bin" + default = "" +} + +variable "image_location" { + description = "URI for image package location, e.g. 
http:///ibm-cloud-private-x86_64-2.1.0.2.tar.gz or nfs:/ibm-cloud-private-x86_64-2.1.0.2.tar.gz" + default = "" +} + +variable "image_location_user" { + description = "Username if required by image_location i.e. authenticated http source" + default = "" +} + +variable "image_location_password" { + description = "Password if required by image_location i.e. authenticated http source" + default = "" +} + +variable "icppassword" { + description = "Password for the initial admin user in ICP; blank to generate" + default = "" +} + +variable "icp_inception_image" { + description = "ICP image to use for installation" + default = "ibmcom/icp-inception-amd64:3.1.0-ee" +} + +variable "cluster_cname" { + default = "" +} + +variable "registry_server" { + default = "" +} + +variable "registry_username" { + default = "" +} + +variable "registry_password" { + default = "" +} + + +variable "pod_network_cidr" { + description = "Pod network CIDR " + default = "172.20.0.0/16" +} + +variable "service_network_cidr" { + description = "Service network CIDR " + default = "172.21.0.0/16" +} + +# The following services can be disabled for 3.1 +# custom-metrics-adapter, image-security-enforcement, istio, metering, monitoring, service-catalog, storage-minio, storage-glusterfs, and vulnerability-advisor +# TODO: because VPC does not have shared storage, we disabled the image-manager so that installation completes successfully.. Future implementations we may stand up stand-alone gluster, ceph, or nfs. +variable "disabled_management_services" { + description = "List of management services to disable" + type = "list" + default = ["istio", "vulnerability-advisor", "storage-glusterfs", "storage-minio", "image-manager"] +} + + diff --git a/templates/icp-ee-vpc/vpc.tf b/templates/icp-ee-vpc/vpc.tf new file mode 100644 index 0000000..8e79d4c --- /dev/null +++ b/templates/icp-ee-vpc/vpc.tf @@ -0,0 +1,42 @@ +data "ibm_is_region" "icp_region" { + name = "${var.vpc_region}" +} + +data "ibm_is_zone" "icp_zone" { + count = "${length(var.vpc_subnet_cidr)}" + region = "${data.ibm_is_region.icp_region.name}" + name = "${format("%s-%d", data.ibm_is_region.icp_region.name, count.index + 1)}" +} + + +resource "ibm_is_vpc" "icp_vpc" { + name = "${var.deployment}-${random_id.clusterid.hex}" +} + +resource "ibm_is_vpc_address_prefix" "icp_vpc_address_prefix" { + count = "${length(var.vpc_address_prefix)}" + name = "${format("%s-addr-%02d-%s", var.deployment, count.index + 1, random_id.clusterid.hex)}" + vpc = "${ibm_is_vpc.icp_vpc.id}" + zone = "${element(data.ibm_is_zone.icp_zone.*.name, count.index)}" + cidr = "${element(var.vpc_address_prefix, count.index)}" +} + +resource "ibm_is_subnet" "icp_subnet" { + depends_on = [ + "ibm_is_vpc_address_prefix.icp_vpc_address_prefix" + ] + + count = "${length(var.vpc_subnet_cidr)}" + name = "${format("%s-subnet-%02d-%s", var.deployment, count.index + 1, random_id.clusterid.hex)}" + vpc = "${ibm_is_vpc.icp_vpc.id}" + zone = "${element(data.ibm_is_zone.icp_zone.*.name, count.index)}" + ipv4_cidr_block = "${element(var.vpc_subnet_cidr, count.index)}" + public_gateway = "${element(ibm_is_public_gateway.pub_gateway.*.id, count.index)}" +} + +resource "ibm_is_public_gateway" "pub_gateway" { + count = "${length(var.vpc_subnet_cidr)}" + vpc = "${ibm_is_vpc.icp_vpc.id}" + zone = "${element(data.ibm_is_zone.icp_zone.*.name, count.index)}" + name = "${format("%s-pgw-%02d-%s", var.deployment, count.index + 1, random_id.clusterid.hex)}" +} \ No newline at end of file
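
After `terraform apply` completes, the outputs declared in [icp-deploy.tf](icp-deploy.tf) can be used to locate the console and retrieve the generated admin credentials. A minimal sketch, run from the template directory:

```bash
# print the console URL and admin credentials exposed as terraform outputs
terraform output icp_console_url
terraform output icp_admin_username
terraform output icp_admin_password
```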