In order to access certain AWS resources, the following AWS profiles must be set up in your AWS credentials file:
- cool-dns-route53resourcechange-cyber.dhs.gov
- cool-terraform-readstate
The easiest way to set up those profiles is to use our
aws-profile-sync utility.
Follow the usage instructions in that repository before continuing with the
next steps. Note that you will need to know where your team stores their
remote profile data in order to use
aws-profile-sync.
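If you want to verify that the profiles are in place before continuing, a quick shell check along these lines will do (the credentials path shown is the AWS CLI default; adjust it if your setup differs):

```shell
# Check that the two required profiles are present in your AWS
# credentials file; prints one status line per profile.
creds="${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}"
for profile in cool-dns-route53resourcechange-cyber.dhs.gov cool-terraform-readstate; do
  if grep -q "^\[$profile\]" "$creds" 2>/dev/null; then
    echo "$profile: found"
  else
    echo "$profile: MISSING"
  fi
done
```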
Build Terraform-based infrastructure with:

```sh
ansible-galaxy install --role-file ansible/requirements.yml
cd terraform
terraform workspace select <your_workspace>
terraform init
terraform apply -var-file=<your_workspace>.tfvars
```

Also note that

```sh
ansible-galaxy install --force --role-file ansible/requirements.yml
```

will update the roles that are being pulled from external sources. This
may be required, for example, if a role that is being pulled from a
GitHub repository has been updated and you want the new changes. By
default `ansible-galaxy install` will not upgrade roles.
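For reference, an entry in `ansible/requirements.yml` that pulls a role straight from a GitHub repository typically looks like the following (the role name, organization, and branch below are made-up placeholders):

```yaml
# Hypothetical requirements.yml entry for a role sourced from git;
# all names here are placeholders, not roles used by this project.
- name: example_role
  src: https://github.com/example-org/ansible-role-example
  scm: git
  version: main
```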
Tear down Terraform-based infrastructure with:

```sh
cd terraform
terraform workspace select <your_workspace>
terraform init
terraform destroy -var-file=<your_workspace>.tfvars
```

You can use `ssh` to connect directly to the bastion EC2 instances in the
Cyber Hygiene and BOD VPCs:
```sh
ssh bastion.<your_workspace>.cyhy
ssh bastion.<your_workspace>.bod
```

Other EC2 instances in these two VPCs can only be connected to by
proxying the `ssh` connection via the corresponding bastion host.
This can be done automatically by `ssh` if you add something like the
following to your `~/.ssh/config`:
```
Host *.bod *.cyhy
  User <your_username>

Host bastion.*.bod bastion.*.cyhy
  HostName %h.cyber.dhs.gov

Host !bastion.*.bod *.bod !bastion.*.cyhy *.cyhy
  ProxyCommand ssh -W $(sed "s/^\([^.]*\)\..*$/\1/" <<< %h):22 $(sed s/^[^.]*/bastion/ <<< %h)
```

This `ssh` configuration snippet allows you to `ssh` directly to
`reporter.<your_workspace>.cyhy` or `docker.<your_workspace>.bod`,
for example:

```sh
ssh reporter.<your_workspace>.cyhy
ssh docker.<your_workspace>.bod
```

You may also find it helpful to configure `ssh` to automatically
forward the Nessus UI and MongoDB ports when connecting to the Cyber
Hygiene VPC:
```
Host bastion.*.cyhy
  LocalForward 8834 vulnscan1:8834
  LocalForward 8835 vulnscan2:8834
  LocalForward 0.0.0.0:27017 database1:27017
```

Note that the last `LocalForward` line forwards port 27017 on any
interface to port 27017 on the MongoDB instance. This allows any
local Docker containers to take advantage of the port forwarding.
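The `ProxyCommand` in the earlier `ssh` configuration snippet leans on two `sed` substitutions. A quick shell sketch of what they do to a target hostname (the workspace name `env1` is a hypothetical example):

```shell
# How the two sed expressions in the ProxyCommand rewrite a hostname.
host="reporter.env1.cyhy"
# Keep only the first label: this becomes the destination host name.
target=$(sed "s/^\([^.]*\)\..*$/\1/" <<< "$host")
# Replace the first label with "bastion": this becomes the jump host.
bastion=$(sed "s/^[^.]*/bastion/" <<< "$host")
echo "$target"   # reporter
echo "$bastion"  # bastion.env1.cyhy
```

In other words, `ssh reporter.env1.cyhy` is proxied as a connection to port 22 on `reporter`, tunneled through `bastion.env1.cyhy`.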
To create the management VPC, first modify your Terraform variables file
(<your_workspace>.tfvars) such that:
```hcl
enable_mgmt_vpc = true
```

If you want to include one or more Nessus instances in your management VPC, ensure that the correct license keys are entered in your Terraform variables file:
```hcl
mgmt_nessus_activation_codes = [ "LICENSE-KEY-1", "LICENSE-KEY-2" ]
```

At this point, you are ready to create all of the management VPC infrastructure by running:
```sh
terraform apply -var-file=<your_workspace>.tfvars
```

To destroy the management VPC, first modify your Terraform variables file
(<your_workspace>.tfvars) such that:
```hcl
enable_mgmt_vpc = false
```

At this point, you are ready to destroy all of the management VPC infrastructure by running:
```sh
terraform apply -var-file=<your_workspace>.tfvars
```

| Name | Version |
|---|---|
| terraform | ~> 1.1 |
| aws | ~> 6.7 |
| cloudinit | ~> 2.0 |
| null | ~> 3.2 |
| Name | Version |
|---|---|
| aws | ~> 6.7 |
| aws.public_dns | ~> 6.7 |
| cloudinit | ~> 2.0 |
| null | ~> 3.2 |
| terraform | n/a |
| Name | Source | Version |
|---|---|---|
| bod_bastion_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| bod_docker_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| cyhy_bastion_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| cyhy_dashboard_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| cyhy_mongo_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| cyhy_nessus_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| cyhy_nmap_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| cyhy_reporter_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| mgmt_bastion_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| mgmt_nessus_ansible_provisioner | github.com/cloudposse/terraform-null-ansible | n/a |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| ami_prefixes | An object whose keys are the types of Packer images (defined in the packer/ directory in the root of the repository) and whose values are the prefix to use for the corresponding AMI. The default for all images is "cyhy". | object({ bastion = string, dashboard = string, docker = string, mongo = string, nessus = string, nmap = string, reporter = string, }) | { "bastion": "cyhy", "dashboard": "cyhy", "docker": "cyhy", "mongo": "cyhy", "nessus": "cyhy", "nmap": "cyhy", "reporter": "cyhy" } | no |
| aws_availability_zone | The AWS availability zone to deploy into (e.g. a, b, c, etc.). | string | "a" | no |
| aws_region | The AWS region to deploy into (e.g. us-east-1). | string | "us-east-1" | no |
| bod_lambda_function_bucket | The name of the S3 bucket where the Lambda function zip files reside. Terraform cannot access buckets that are not in the provider's region, so the region name will be appended to the bucket name to obtain the actual bucket where the zips are stored. So if we are working in region us-west-1 and this variable has the value buckethead, then the zips will be looked for in the bucket buckethead-us-west-1. | string | n/a | yes |
| bod_lambda_functions | A map of information for each BOD 18-01 Lambda. The keys are the scan types and the values are objects that contain the Lambda's name and the key (name) for the corresponding deployment package in the BOD Lambda S3 bucket. Example: { pshtt = { lambda_file = "pshtt.zip", lambda_name = "task_pshtt" }} | map(object({ lambda_file = string, lambda_name = string, })) | {} | no |
| bod_nat_gateway_eip | The IP corresponding to the EIP to be used for the BOD 18-01 NAT gateway in production. In a non-production workspace an EIP will be created. | string | "" | no |
| cloudwatch_alarm_emails | A list of the emails to which alerts should be sent if any CloudWatch Alarm is triggered. | list(string) | [ "cisa-cool-group+cyhy@gwe.cisa.dhs.gov" ] | no |
| commander_config | Configuration options for the CyHy commander's configuration file. | object({ jobs_per_nessus_host = number, jobs_per_nmap_host = number, next_scan_limit = number, }) | { "jobs_per_nessus_host": 16, "jobs_per_nmap_host": 8, "next_scan_limit": 8192 } | no |
| create_bod_flow_logs | Whether or not to create flow logs for the BOD 18-01 VPC. | bool | false | no |
| create_cyhy_flow_logs | Whether or not to create flow logs for the CyHy VPC. | bool | false | no |
| create_mgmt_flow_logs | Whether or not to create flow logs for the Management VPC. | bool | false | no |
| cyhy_archive_bucket_lifecycle_rule_name | The name of the lifecycle rule for the cyhy-archive S3 bucket. | string | "cyhy-archive-object-storage-class-transitions" | no |
| cyhy_archive_bucket_name | S3 bucket for storing compressed archive files created by cyhy-archive. | string | "ncats-cyhy-archive" | no |
| cyhy_elastic_ip_cidr_block | The CIDR block of elastic addresses available for use by CyHy scanner instances. | string | "" | no |
| cyhy_portscan_first_elastic_ip_offset | The offset of the address (from the start of the elastic IP CIDR block) to be assigned to the first CyHy portscan instance. For example, if the CIDR block is 192.168.1.0/24 and the offset is set to 10, the first portscan address used will be 192.168.1.10. This is only used in production workspaces. Each additional portscan instance will get the next consecutive address in the block. NOTE: This will only work as intended when a contiguous CIDR block of EIP addresses is available. | number | 0 | no |
| cyhy_user_info | User information for the CyHy user created in our AMIs. Please see packer/ansible/vars/cyhy_user.yml for the configuration used when AMIs are built. | object({ gid = number, home = string, name = string, uid = number, }) | { "gid": 2048, "home": "/var/cyhy", "name": "cyhy", "uid": 2048 } | no |
| cyhy_vulnscan_first_elastic_ip_offset | The offset of the address (from the start of the elastic IP CIDR block) to be assigned to the first CyHy vulnscan instance. For example, if the CIDR block is 192.168.1.0/24 and the offset is set to 10, the first vulnscan address used will be 192.168.1.10. This is only used in production workspaces. Each additional vulnscan instance will get the next consecutive address in the block. NOTE: This will only work as intended when a contiguous CIDR block of EIP addresses is available. | number | 1 | no |
| dmarc_import_aws_region | The AWS region where the dmarc-import Elasticsearch database resides. | string | "us-east-1" | no |
| dmarc_import_es_role_arn | The ARN of the role that must be assumed in order to read the dmarc-import Elasticsearch database. | string | n/a | yes |
| docker_mailer_override_filename | This file is used to add/override any Docker composition settings for cyhy-mailer for the docker EC2 instance. It must already exist in /var/cyhy/cyhy-mailer. | string | "docker-compose.bod.yml" | no |
| enable_mgmt_vpc | Whether or not to enable unfettered access from the vulnerability scanner in the Management VPC to other VPCs (CyHy, BOD). This should only be enabled while running security scans from the Management VPC. | bool | false | no |
| findings_data_field_map | The key for the file storing field name mappings in JSON format. | string | n/a | yes |
| findings_data_import_db_hostname | The hostname that has the database to store the findings data in. | string | "" | no |
| findings_data_import_db_port | The port that the database server is listening on. | string | "" | no |
| findings_data_import_lambda_description | The description to associate with the findings-data-import Lambda function. | string | "Lambda function for importing findings data." | no |
| findings_data_import_lambda_failure_emails | A list of the emails to which alerts should be sent if findings data processing fails. | list(string) | [] | no |
| findings_data_import_lambda_failure_prefix | The object prefix that findings JSONs that have failed to process successfully will have in the findings data bucket. | string | "failed/" | no |
| findings_data_import_lambda_failure_suffix | The object suffix that findings JSONs that have failed to process successfully will have in the findings data bucket. | string | ".json" | no |
| findings_data_import_lambda_handler | The entrypoint for the findings-data-import Lambda. | string | "lambda_handler.handler" | no |
| findings_data_import_lambda_s3_key | The key (name) of the zip file for the findings data import Lambda function inside the S3 bucket. | string | n/a | yes |
| findings_data_import_ssm_db_name | The name of the parameter in AWS SSM that holds the name of the database to store the findings data in. | string | "" | no |
| findings_data_import_ssm_db_password | The name of the parameter in AWS SSM that holds the database password for the user with write permission to the findings database. | string | "" | no |
| findings_data_import_ssm_db_user | The name of the parameter in AWS SSM that holds the database username with write permission to the findings database. | string | "" | no |
| findings_data_input_suffix | The suffix used by files found in the findings_data_s3_bucket that contain findings data. | string | n/a | yes |
| findings_data_s3_bucket | The name of the bucket where the findings data JSON file can be found. Note that in production Terraform workspaces, the string '-production' will be appended to the bucket name. In non-production workspaces, '-<workspace_name>' will be appended to the bucket name. | string | "" | no |
| findings_data_save_failed | Whether or not to save files for imports that have failed. | bool | true | no |
| findings_data_save_succeeded | Whether or not to save files for imports that have succeeded. | bool | false | no |
| kevsync_failure_emails | A list of the emails to which alerts should be sent if KEV synchronization fails. | list(string) | [ "cyberdirectives@cisa.dhs.gov", "vulnerability@cisa.dhs.gov" ] | no |
| lambda_artifacts_bucket | The name of the S3 bucket that stores AWS Lambda deployment artifacts. This bucket should be created with the cisagov/cyhy-lambda-bucket-terraform project. Note that in production Terraform workspaces, the string '-production' will be appended to the bucket name. In non-production workspaces, '-<workspace_name>' will be appended to the bucket name. | string | n/a | yes |
| mgmt_nessus_activation_codes | A list of strings containing Nessus activation codes used in the management VPC. | list(string) | n/a | yes |
| mgmt_nessus_instance_count | The number of Nessus instances to create if a management environment is set to be created. | number | 1 | no |
| mongo_disks | The data volumes for the mongo instance(s). | map(string) | { "data": "/dev/xvdb", "journal": "/dev/xvdc", "log": "/dev/xvdd" } | no |
| mongo_instance_count | The number of Mongo instances to create. | number | 1 | no |
| nessus_activation_codes | A list of strings containing Nessus activation codes. | list(string) | n/a | yes |
| nessus_cyhy_runner_disk | The cyhy-runner data volume for the Nessus instance(s). | string | "/dev/xvdb" | no |
| nessus_instance_count | The number of Nessus instances to create. | number | n/a | yes |
| nmap_cyhy_runner_disk | The cyhy-runner data volume for the Nmap instance(s). | string | "/dev/nvme1n1" | no |
| nmap_instance_count | The number of Nmap instances to create. | number | n/a | yes |
| remote_ssh_user | The username to use when sshing to the EC2 instances. | string | n/a | yes |
| reporter_mailer_override_filename | This file is used to add/override any Docker composition settings for cyhy-mailer for the reporter EC2 instance. It must already exist in /var/cyhy/cyhy-mailer. | string | "docker-compose.cyhy.yml" | no |
| ses_aws_region | The AWS region where SES is configured. | string | "us-east-1" | no |
| ses_role_arn | The ARN of the role that must be assumed in order to send emails. | string | n/a | yes |
| tags | Tags to apply to all AWS resources created. | map(string) | {} | no |
| trusted_ingress_networks_ipv4 | IPv4 CIDR blocks from which to allow ingress to the bastion server. | list(string) | [ "0.0.0.0/0" ] | no |
| trusted_ingress_networks_ipv6 | IPv6 CIDR blocks from which to allow ingress to the bastion server. | list(string) | [ "::/0" ] | no |
No outputs.
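As a worked example of the inputs above, a minimal `<your_workspace>.tfvars` that supplies only the required variables might look like the following. Every value here is a made-up placeholder, not a real bucket, ARN, user, or license key:

```hcl
# Hypothetical minimal tfvars covering the inputs marked "yes" in the
# Required column above; all values are placeholders.
bod_lambda_function_bucket         = "example-bod-lambda-bucket"
dmarc_import_es_role_arn           = "arn:aws:iam::123456789012:role/example-dmarc-read"
findings_data_field_map            = "example/field_map.json"
findings_data_import_lambda_s3_key = "findings-data-import.zip"
findings_data_input_suffix         = ".json"
lambda_artifacts_bucket            = "example-lambda-artifacts"
mgmt_nessus_activation_codes       = ["LICENSE-KEY-1"]
nessus_activation_codes            = ["LICENSE-KEY-1"]
nessus_instance_count              = 1
nmap_instance_count                = 1
remote_ssh_user                    = "example-user"
ses_role_arn                       = "arn:aws:iam::123456789012:role/example-ses-send"
```

All other inputs fall back to the defaults shown in the table.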
This project is in the worldwide public domain.
This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.