Histomics AWS deployment scripts

This is the instance of the Histomics AWS deployment Terraform configuration associated with histomics.kitware.com. Its state is managed in Kitware's Terraform Cloud organization.

This repository contains infrastructure-as-code for reproducible Histomics deployments on AWS using highly managed, scalable services, including the components below (a provider sketch follows the list):

  • Elastic Container Service for the web application
  • EC2 instances for Celery worker nodes
  • MongoDB Atlas for the database
  • Amazon MQ as the Celery queue
  • CloudWatch for log persistence
  • Sentry integration (optional)
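Both AWS and MongoDB Atlas resources are managed through their official Terraform providers; a minimal sketch of the provider requirements this implies (the exact version constraints used by the repository are not shown here):

```hcl
terraform {
  required_providers {
    # Official provider sources on the Terraform Registry.
    aws          = { source = "hashicorp/aws" }
    mongodbatlas = { source = "mongodb/mongodbatlas" }
  }
}
```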

Prerequisites

  1. Obtain a domain name via AWS Route 53 and set the domain_name Terraform variable to its value.
  2. Create an SSH keypair and set the public key as the ssh_public_key Terraform variable. This is the key that will be authorized on the worker EC2 instance(s).
  3. Set AWS credentials in your shell environment.
  4. In your target MongoDB Atlas organization, create a new API key and export the public and private keys in your local environment as the MONGODB_ATLAS_PUBLIC_KEY and MONGODB_ATLAS_PRIVATE_KEY variables.
  5. Set the target MongoDB Atlas organization ID as the mongodbatlas_org_id Terraform variable. A sketch of these values appears after this list.
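A minimal sketch of how these values might be supplied in a terraform.tfvars file; the variable names come from the steps above, but all values are placeholders, not real credentials:

```hcl
# terraform.tfvars -- placeholder values; substitute your own.
domain_name         = "histomics.example.com"              # Route 53 domain (step 1)
ssh_public_key      = "ssh-ed25519 AAAAC3... user@example" # public key (step 2)
mongodbatlas_org_id = "0123456789abcdef01234567"           # Atlas org ID (step 5)
```

The AWS credentials and Atlas API keys (steps 3 and 4) are read from the shell environment rather than from Terraform variables.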

Building the worker AMI

  1. cd packer
  2. packer build worker.pkr.hcl
  3. Use the resulting AMI ID as the worker_ami_id Terraform variable (see the sketch below).
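A hedged sketch of how the AMI ID feeds the worker instances; the variable name comes from the step above, while the consuming resource block is illustrative rather than the repository's actual configuration:

```hcl
variable "worker_ami_id" {
  type        = string
  description = "AMI ID produced by `packer build worker.pkr.hcl`"
}

# Illustrative consumer -- the real resource in this repository may differ.
resource "aws_instance" "worker" {
  ami           = var.worker_ami_id
  instance_type = "m5.xlarge" # placeholder size
}
```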

Deploying

  1. DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build -t zachmullen/histomics-load-test -f histomicsui.Dockerfile .
  2. docker push zachmullen/histomics-load-test
  3. Copy the image digest (SHA) printed by the docker push command and paste it into main.tf (an illustrative form follows this list).
  4. Push and merge the resulting change, and ensure the plan and apply succeed in TF Cloud.
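The pushed digest presumably pins the image reference somewhere in main.tf; an illustrative form of that reference (the surrounding locals block is an assumption about the file's layout, not its actual structure):

```hcl
# Assumed shape -- main.tf may organize this differently. Pinning by digest
# ensures the deployed image is exactly the one that was pushed.
locals {
  histomics_image = "zachmullen/histomics-load-test@sha256:REPLACE_WITH_PUSHED_DIGEST"
}
```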

Note: the first time you run the terraform plan on TF Cloud, it will fail with a message like:

count = length(data.aws_subnets.default.ids)

The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.

This is because the length of the set of subnets is unknown until the first apply, which is a limitation of Terraform itself. To work around this, the very first time you run the plan, set the following env var in TF Cloud:

  • Key: TF_CLI_ARGS_plan
  • Value: -target=data.aws_subnets.default

Run and apply the resulting plan, and then delete that env var in TF Cloud. Subsequent runs should then work.
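For context, the pattern that triggers the error is roughly the following; the data source name comes from the error message above, while the filter and the consuming resource are illustrative:

```hcl
# If the data source's query references a resource that does not exist yet,
# reading it is deferred to apply time, so `count` below cannot be computed
# during the first plan.
resource "aws_default_vpc" "default" {}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [aws_default_vpc.default.id] # illustrative dependency
  }
}

resource "aws_instance" "worker" {
  count         = length(data.aws_subnets.default.ids)
  ami           = var.worker_ami_id
  instance_type = "m5.xlarge" # placeholder
  subnet_id     = data.aws_subnets.default.ids[count.index]
}
```

Targeting data.aws_subnets.default on the first plan resolves the subnet IDs so that subsequent full plans can size `count`.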
