| <aname="input_asg_inservice_timeout_in_mins"></a> [asg\_inservice\_timeout\_in\_mins](#input\_asg\_inservice\_timeout\_in\_mins)| Timeout in mins which will be used by the rolling update script to wait for instances to be InService for an ASG |`number`|`10`| no |
| <aname="input_asg_lifecycle_hook_heartbeat_timeout"></a> [asg\_lifecycle\_hook\_heartbeat\_timeout](#input\_asg\_lifecycle\_hook\_heartbeat\_timeout)| Timeout for ASG initial lifecycle hook. This is used only during ASG creation, subsequent value changes are not handled by terraform (has to be updated manually) |`number`|`3600`| no |
| <aname="input_command_timeout_seconds"></a> [command\_timeout\_seconds](#input\_command\_timeout\_seconds)| The timeout that will be used by the userdata script to retry commands on failure. Keep it higher to allow manual recovery |`number`|`1800`| no |
| <a name="input_data_volume"></a> [data\_volume](#input\_data\_volume) | device\_name = "Device name for additional Data volume, select name as per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html"<br> type = "EBS volume type e.g. gp2, gp3 etc"<br> iops = "Only valid for type gp3"<br> throughput\_mib\_per\_sec = "only valid for type gp3"<br> mount\_path = "path where to mount the data volume"<br> file\_system\_type = "File system to use to format the volume. eg. ext4 or xfs. This is used only initial time. Later changes will be ignored"<br> mount\_params = "Parameters to be used while mounting the volume eg. noatime etc. Optional, empty if not provided"<br> mount\_path\_owner\_user = "OS user that should own volume mount path will be used for chown"<br> mount\_path\_owner\_group = "OS group that should own the volume mount path, will be used for chown" | <pre>object({<br> device_name = optional(string, "/dev/sdf")<br> size_in_gibs = number<br> type = string<br> iops = optional(number)<br> throughput_mib_per_sec = optional(number)<br> mount_path = string<br> file_system_type = string<br> mount_params = optional(list(string), [])<br> mount_path_owner_user = string<br> mount_path_owner_group = string<br> })</pre> | n/a | yes |
| <a name="input_data_volume"></a> [data\_volume](#input\_data\_volume) | device\_name = "Device name for additional Data volume, select name as per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html"<br> type = "EBS volume type e.g. gp2, gp3 etc"<br> iops = "Only valid for type gp3"<br> throughput\_mib\_per\_sec = "only valid for type gp3"<br> mount\_path = "path where to mount the data volume"<br> file\_system\_type = "File system to use to format the volume. eg. ext4 or xfs. This is used only initial time. Later changes will be ignored"<br> mount\_params = "Parameters to be used while mounting the volume eg. noatime etc. Optional, empty if not provided"<br> mount\_path\_owner\_user = "OS user that should own volume mount path will be used for chown"<br> mount\_path\_owner\_group = "OS group that should own the volume mount path, will be used for chown" | <pre>object({<br> device_name = optional(string, "/dev/sdf")<br> size_in_gibs = number<br> type = string<br> iops = optional(number)<br> throughput_mib_per_sec = optional(number)<br> mount_path = string<br> file_system_type = string<br> mount_params = optional(list(string), [])<br> mount_path_owner_user = string<br> mount_path_owner_group = string<br> tags = optional(map(string), {})<br> })</pre> | n/a | yes |
| <aname="input_jq_download_url"></a> [jq\_download\_url](#input\_jq\_download\_url)| n/a |`string`|`"https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64"`| no |
| <aname="input_node_config_script"></a> [node\_config\_script](#input\_node\_config\_script)| Base64 encoded node configuration shell script.<br> Must include configure\_cluster\_node and wait\_for\_healthy\_cluster function. Check documentation for more details about the contract |`string`| n/a | yes |
| <aname="input_nodes"></a> [nodes](#input\_nodes)| node\_ip = IP address of the cluster node. This should be available within the subnet.<br> node\_subnet\_id = Id of the subnet where node should be created.<br> node\_files\_toupload = list of file to be uploaded per node. These can be cluster confi files etc.<br> node\_files\_toupload.contents = Base64 encoded contents of the file to be uploaded on the node.<br> node\_files\_toupload.destination = File destination on the node. This will be the file path and name on the node. The file ownership should be changed by node\_config\_script. | <pre>list(object({<br> node_ip = string<br> node_subnet_id = string<br> node_files_toupload = optional(list(object({<br> contents = string<br> destination = string<br> })), [])<br> }))</pre> | n/a | yes |
182
+
| <aname="input_nodes"></a> [nodes](#input\_nodes)| node\_ip = IP address of the cluster node. This should be available within the subnet.<br> node\_image = image for node of the cluster node.<br> node\_subnet\_id = Id of the subnet where node should be created.<br> node\_files\_toupload = list of file to be uploaded per node. These can be cluster confi files etc.<br> node\_files\_toupload.contents = Base64 encoded contents of the file to be uploaded on the node.<br> node\_files\_toupload.destination = File destination on the node. This will be the file path and name on the node. The file ownership should be changed by node\_config\_script. | <pre>list(object({<br> node_ip = string<br> node_image = optional(string)<br> node_subnet_id = string<br> node_files_toupload = optional(list(object({<br> contents = string<br> destination = string<br> })), [])<br> }))</pre> | n/a | yes |
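
The `node_config_script` contract requires the script to define `configure_cluster_node` and `wait_for_healthy_cluster`. The sketch below keeps an illustrative script inline via `base64encode`; the service name, CLI command, and function bodies are placeholders rather than the module's actual logic, and a real script would implement these functions for the specific cluster software being run.

```hcl
# Sketch of a node_config_script that satisfies the documented contract:
# it must define configure_cluster_node and wait_for_healthy_cluster.
# The function bodies below are placeholders only.
locals {
  node_config_script = base64encode(<<-EOT
    #!/bin/bash
    set -euo pipefail

    # Configure this instance as a cluster member, e.g. apply the files
    # uploaded via node_files_toupload, fix ownership, start the service.
    configure_cluster_node() {
      systemctl enable --now my-cluster-service   # placeholder service name
    }

    # Block until the cluster reports healthy, so rolling updates can proceed.
    wait_for_healthy_cluster() {
      until my-cluster-cli health | grep -q "ok"; do   # placeholder CLI
        sleep 10
      done
    }
  EOT
  )
}
```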