Description
What happened?
This happens starting with pulumi_eks==4.0.1.
This is the configuration:
EKS_WORKER_DATASTORE = SelfManagedNodeGroupConfig(
    name=f"{ENVIRONMENT}-eks-self-managed-worker-datastore-1",
    cluster=eks_control_plane,
    instance_profile=eks_worker_profile,
    instance_type="t3.xlarge",
    min_size=EKS_WORKER_GROUP_DATASTORE_NODE_AMT,
    max_size=EKS_WORKER_GROUP_DATASTORE_NODE_AMT,
    desired_capacity=EKS_WORKER_GROUP_DATASTORE_NODE_AMT,
    node_security_group=eks_worker_sg,
    cluster_ingress_rule=eks_control_plane.eks_cluster_ingress_rule,
    node_subnet_ids=[
        vpc.private_subnet_ids[0]
    ],
    node_associate_public_ip_address=True,
    enable_detailed_monitoring=True,
    kubelet_extra_args="--max-pods=110",
    labels={
        "node-role/datastore": "true",
    },
    auto_scaling_group_tags={
        "Name": "datastore",
        "Purpose": "application",
        "managed-by": "Pulumi",
        "Environment": ENVIRONMENT,
    },
    taints={
        "node-role/datastore": TaintArgs(
            effect="NoSchedule",
            value=True
        )
    },
)
The parameters are passed through this config class:
class SelfManagedNodeGroupConfig(CustomBaseModel):
    name: str
    cluster: eks.Cluster
    instance_profile: InstanceProfile
    instance_type: str
    min_size: int
    max_size: int
    desired_capacity: int
    node_security_group: aws.ec2.SecurityGroup
    cluster_ingress_rule: Union[Output[aws.ec2.SecurityGroupRule], aws.ec2.SecurityGroupRule]
    node_subnet_ids: List[Union[str, Output[str]]]
    node_associate_public_ip_address: bool = False
    enable_detailed_monitoring: bool = False
    kubelet_extra_args: Optional[str] = None
    labels: Optional[Dict[str, str]] = None
    taints: Optional[Dict[str, TaintArgs]] = None
    auto_scaling_group_tags: Optional[Dict[str, Union[str, Output[str]]]] = None
    launch_template_tag_specifications: Optional[List[LaunchTemplateTagSpecification]] = None
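(CustomBaseModel itself is not shown in this issue; it is assumed to be a Pydantic v2 base model configured to allow arbitrary types such as Pulumi resources and Outputs, roughly like this sketch:)

# Assumed definition of CustomBaseModel (not part of the issue):
# a Pydantic v2 BaseModel that permits arbitrary types like
# eks.Cluster, aws.ec2.SecurityGroup and pulumi.Output.
from pydantic import BaseModel, ConfigDict

class CustomBaseModel(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)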
And the node group is created with this function:
def create_self_managed_node_group(node_group_config: SelfManagedNodeGroupConfig):
    node_group = eks.NodeGroupV2(
        resource_name=node_group_config.name,
        cluster=node_group_config.cluster,
        instance_profile=node_group_config.instance_profile,
        instance_type=node_group_config.instance_type,
        min_size=node_group_config.min_size,
        max_size=node_group_config.max_size,
        desired_capacity=node_group_config.desired_capacity,
        node_security_group=node_group_config.node_security_group,
        cluster_ingress_rule=node_group_config.cluster_ingress_rule,
        node_subnet_ids=node_group_config.node_subnet_ids,
        node_associate_public_ip_address=node_group_config.node_associate_public_ip_address,
        enable_detailed_monitoring=node_group_config.enable_detailed_monitoring,
        kubelet_extra_args=node_group_config.kubelet_extra_args,
        labels=node_group_config.labels,
        auto_scaling_group_tags=node_group_config.auto_scaling_group_tags,
        taints=node_group_config.taints,
        launch_template_tag_specifications=node_group_config.launch_template_tag_specifications
    )
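(The call site is not included in the issue; presumably the config above is simply passed to that function, e.g.:)

# Hypothetical call site (not shown in the issue):
create_self_managed_node_group(EKS_WORKER_DATASTORE)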
This is what is created:

My expectation is that the Name tag should follow the configuration; if it is not provided, the default is fine.
Example
Short explanation:
- I create auto_scaling_group_tags in the config and pass it to pulumi_eks NodeGroupV2.
- The other tags work as expected; only the Name tag is replaced by the cluster name + "worker" (a minimal sketch of the relevant call follows below).
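Stripped of the wrapper class, a minimal sketch of the relevant part of the repro looks like this (resource names and referenced resources are placeholders; only the tag arguments matter here):

import pulumi_eks as eks

# Minimal sketch of the relevant call (cluster and other referenced
# resources are assumed to exist elsewhere in the program).
node_group = eks.NodeGroupV2(
    "datastore-nodes",
    cluster=eks_control_plane,
    instance_type="t3.xlarge",
    min_size=1,
    max_size=1,
    desired_capacity=1,
    auto_scaling_group_tags={
        "Name": "datastore",       # expected on the ASG, but gets overwritten
        "Purpose": "application",  # propagated as expected
    },
)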
Output of pulumi about
Version 3.187.0
Go Version go1.24.5
Go Compiler gc
Plugins
KIND NAME VERSION
resource argocd 1.0.1
resource aws 7.2.0
resource awsx 3.0.0
resource command 1.1.0
resource docker 4.8.0
resource docker-build 0.0.12
resource eks 4.0.1
resource gitlab 9.2.0
resource kubernetes 4.23.0
language python 3.187.0
Host
OS ubuntu
Version 24.04
Arch x86_64
Dependencies:
NAME VERSION
pipreqs 0.5.0
pulumi_argocd 1.0.1
pulumi_awsx 3.0.0
pulumi_command 1.1.0
pulumi_eks 4.0.1
pulumi_gitlab 9.2.0
pydantic 2.11.7
tinycss2 1.4.0
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).