feat: add eks auto mode #249
Changes from 17 commits
```diff
@@ -57,3 +57,8 @@ upgrade_policy = {
 zonal_shift_config = {
   enabled = true
 }
+
+cluster_compute_config = {
+  enabled = false
+}
+
```
Comment on lines +61 to +63

Sorry, but for something this complex and impactful, in addition to the specific changes I called out elsewhere, I want to see a separate test of auto mode in the automated testing. It can run in parallel. Because of a quirk in Terratest, vars set in a test do not override vars set in a varfile that the test loads, so my plan would be:
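Given that quirk, the plan presumably centers on a dedicated varfile that only the Auto Mode test loads. A sketch of such a fixture, with a hypothetical file name and invented values:

```hcl
# fixtures.auto-mode.us-east-2.tfvars (hypothetical name)
# Loaded only by the parallel Auto Mode test; per the Terratest quirk above,
# these values could not be reliably overridden from the test code itself.
cluster_compute_config = {
  enabled    = true
  node_pools = ["general-purpose", "system"]
}
```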
```diff
@@ -131,6 +131,22 @@ variable "zonal_shift_config" {
   default = null
 }
 
+variable "cluster_compute_config" {
+  type = object({
+    enabled       = optional(bool, false)
+    node_pools    = optional(list(string), [])
+    node_role_arn = optional(string, null)
+  })
+  description = <<-EOT
+    Configuration block with compute configuration for EKS Auto Mode
+
+    enabled: Request to enable or disable the compute capability on your EKS Auto Mode cluster. If the compute capability is enabled, EKS Auto Mode will create and delete EC2 Managed Instances in your Amazon Web Services account.
+    node_pools: Optional configuration for node pools that defines the compute resources for your EKS Auto Mode cluster. Valid options are general-purpose and system.
```
Comment on the `node_pools` line

What happens if left undefined?
```diff
+    node_role_arn: Optional ARN of the IAM Role EKS will assign to EC2 Managed Instances in your EKS Auto Mode cluster.
```
Comment on the `node_role_arn` line

Are there any gotchas with the default IAM role EKS will use, e.g. edge cases? If so, then we should probably have an option to create and use the role we create when we use the old/existing node pool. In fact, that should probably be an option in any case.
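For context, if the module were to create a dedicated Auto Mode node role, the AWS documentation for Auto Mode describes a role roughly along these lines (a sketch, not part of the PR; resource and role names are invented):

```hcl
resource "aws_iam_role" "auto_mode_node" {
  name = "eks-auto-mode-node" # hypothetical name
  # Auto Mode's EC2 Managed Instances need sts:TagSession in addition to sts:AssumeRole
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = ["sts:AssumeRole", "sts:TagSession"]
    }]
  })
}

resource "aws_iam_role_policy_attachment" "auto_mode_node" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly",
  ])
  role       = aws_iam_role.auto_mode_node.name
  policy_arn = each.value
}
```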
```diff
+
+    EOT
+  default = {}
+}
 
 variable "private_ipv6_enabled" {
   type    = bool
   default = false
```
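To illustrate the intended call site, a hypothetical invocation of the module with the new variable (other required inputs elided; values invented):

```hcl
module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws" # assumed registry source
  # ... region, VPC, subnet, and other required inputs elided ...

  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose"]
    # node_role_arn omitted: per the diff below, the module then falls back
    # to the role it creates itself (aws_iam_role.default)
  }
}
```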
```diff
@@ -14,6 +14,8 @@ locals {
   }
 
   cloudwatch_log_group_name = "/aws/eks/${module.label.id}/cluster"
+
+  auto_mode_enabled = try(var.cluster_compute_config.enabled, false)
 }
 
 module "label" {
```

```diff
@@ -56,19 +58,31 @@ resource "aws_kms_alias" "cluster" {
 resource "aws_eks_cluster" "default" {
   #bridgecrew:skip=BC_AWS_KUBERNETES_1:Allow permissive security group for public access, difficult to restrict without a VPN
   #bridgecrew:skip=BC_AWS_KUBERNETES_4:Let user decide on control plane logging, not necessary in non-production environments
-  count                         = local.enabled ? 1 : 0
-  name                          = module.label.id
-  tags                          = module.label.tags
-  role_arn                      = local.eks_service_role_arn
-  version                       = var.kubernetes_version
-  enabled_cluster_log_types     = var.enabled_cluster_log_types
-  bootstrap_self_managed_addons = var.bootstrap_self_managed_addons_enabled
+  count                     = local.enabled ? 1 : 0
+  name                      = module.label.id
+  tags                      = module.label.tags
+  role_arn                  = local.eks_service_role_arn
+  version                   = var.kubernetes_version
+  enabled_cluster_log_types = var.enabled_cluster_log_types
+  # Enabling EKS Auto Mode also requires that bootstrap_self_managed_addons is set to false
+  bootstrap_self_managed_addons = local.auto_mode_enabled ? false : var.bootstrap_self_managed_addons_enabled
```
Comment on lines +67 to +68

I consider it a bad idea to silently change input configuration like this. Because this is a configuration conflict, and may change in the future, my first preference is to allow AWS to throw an error message rather than try to fix it, because then if AWS relaxes this condition, we don't need to make any changes. An acceptable alternative is to add this check as a precondition and report an error.
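A sketch of the precondition alternative the reviewer describes, added to the resource's existing `lifecycle` block (error message wording is mine):

```hcl
lifecycle {
  precondition {
    # Fail at plan time instead of silently overriding the user's input
    condition     = !(local.auto_mode_enabled && var.bootstrap_self_managed_addons_enabled)
    error_message = "EKS Auto Mode requires bootstrap_self_managed_addons_enabled = false."
  }
}
```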
```diff
   access_config {
     authentication_mode                         = var.access_config.authentication_mode
     bootstrap_cluster_creator_admin_permissions = var.access_config.bootstrap_cluster_creator_admin_permissions
   }
 
+  # EKS Auto Mode
+  dynamic "compute_config" {
+    for_each = local.auto_mode_enabled && length(var.cluster_compute_config) > 0 ? [var.cluster_compute_config] : []
+
+    content {
+      enabled       = local.auto_mode_enabled
+      node_pools    = local.auto_mode_enabled ? try(compute_config.value.node_pools, []) : null
+      node_role_arn = local.auto_mode_enabled && length(try(compute_config.value.node_pools, [])) > 0 ? try(compute_config.value.node_role_arn, aws_iam_role.default[0].arn, null) : null
+    }
+  }
 
   lifecycle {
     # bootstrap_cluster_creator_admin_permissions is documented as only applying
     # to the initial creation of the cluster, and being unreliable afterward,
```
```diff
@@ -98,18 +112,43 @@ resource "aws_eks_cluster" "default" {
 
   dynamic "kubernetes_network_config" {
     for_each = local.use_ipv6 ? [] : compact([var.service_ipv4_cidr])
 
     content {
       service_ipv4_cidr = kubernetes_network_config.value
     }
   }
 
   dynamic "kubernetes_network_config" {
     for_each = local.use_ipv6 ? [true] : []
 
     content {
       ip_family = "ipv6"
     }
   }
 
+  dynamic "kubernetes_network_config" {
+    for_each = local.auto_mode_enabled ? [1] : []
+
+    content {
+      dynamic "elastic_load_balancing" {
+        for_each = local.auto_mode_enabled ? [1] : []
+        content {
+          enabled = local.auto_mode_enabled
+        }
```
Comment on lines +133 to +137

Is it really a requirement that elastic load balancing be enabled for auto mode? If so:

Suggested change

If it is not a requirement, then it should be an option in the auto mode configuration object.
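Assuming it is a requirement, the simplification being suggested is presumably along these lines (a sketch, not the reviewer's literal diff):

```hcl
dynamic "elastic_load_balancing" {
  # The for_each already gates on Auto Mode, so the flag can simply be true
  for_each = local.auto_mode_enabled ? [1] : []
  content {
    enabled = true
  }
}
```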
```diff
+      }
+    }
+  }
```
dynamic "storage_config" { | ||||||||||||||||||
for_each = local.auto_mode_enabled ? [1] : [] | ||||||||||||||||||
|
||||||||||||||||||
content { | ||||||||||||||||||
block_storage { | ||||||||||||||||||
enabled = local.auto_mode_enabled | ||||||||||||||||||
Comment on the `enabled` line

Again, if required, then just use:

Suggested change
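By analogy with the load-balancing comment above, the suggested form is presumably this (again a sketch, under the assumption that block storage is required for Auto Mode):

```hcl
content {
  block_storage {
    enabled = true
  }
}
```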
```diff
+      }
+    }
+  }
 
   dynamic "upgrade_policy" {
     for_each = var.upgrade_policy != null ? [var.upgrade_policy] : []
     content {
```