What happened?
I could observe the following:
1. If I run terraform apply and then destroy immediately after the apply finishes, the destroy fails.
2. If I apply and wait some time before destroying, the destroy finishes as I would expect.
3. If I apply and leave the infrastructure untouched for a longer time, the problem also happened as described. It is not 100% reproducible, because this is hard to test accurately.
4. Point 1 is not set in stone, but it is what I could observe. Maybe the cloud-init is a factor here, maybe not, or it is simply random. It could also be happening because the cloud-init config contains a reboot (a hypothetical sketch of such a config follows this list).
5. Whether point 4 is the cause or not, point 3 definitely occurred, which makes it a bug in my eyes.
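The cloud_init_natgateway.yaml used for the NAT gateway (referenced in the configuration below) is not part of this report. As a hypothetical sketch only, the reboot mentioned in point 4 refers to a cloud-config roughly along these lines; the packages and message are made up for illustration, the relevant part is the power_state reboot at the end:

#cloud-config
# Hypothetical sketch, not the actual cloud_init_natgateway.yaml from this report.
# The relevant detail is that cloud-init reboots the server after provisioning,
# so the server may still be rebooting when an immediate terraform destroy starts.
package_update: true
package_upgrade: true

power_state:
  mode: reboot
  message: "Rebooting after initial provisioning"  # assumed message
  condition: true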
However, often enough the following error occurs:
If I destroy again after the error, Terraform/Hetzner does its job as expected and everything is destroyed.
Terraform:
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
local_file.ansible_hosts: Destroying... [id=b3d6233cd5ff6f368f7edc13bd3fe37bfb6b96a9]
local_file.ansible_ssh_config: Destroying... [id=d5f70eb7913675f225e5e2332e143b90b2aa8197]
local_file.ansible_ssh_config: Destruction complete after 0s
local_file.ansible_hosts: Destruction complete after 0s
hcloud_network_route.privNet: Destroying... [id=11306062-0.0.0.0/0]
hcloud_server.node1: Destroying... [id=105903911]
hcloud_server.node1: Still destroying... [id=105903911, 10s elapsed]
hcloud_server.node1: Destruction complete after 16s
╷
│ Error: hcloudutil/WaitForAction: an unknown error occurred (unknown_error, 571428222195368)
│
│
What did you expect to happen?
Terraform:
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
local_file.ansible_hosts: Destroying... [id=c79191720277d871d100a6f15f9c32f76d67fffe]
local_file.ansible_ssh_config: Destroying... [id=d1d45ae77c4d09975fb468a63486321ee0a2db4e]
local_file.ansible_ssh_config: Destruction complete after 0s
local_file.ansible_hosts: Destruction complete after 0s
hcloud_network_route.privNet: Destroying... [id=11306023-0.0.0.0/0]
hcloud_server.node1: Destroying... [id=105902618]
hcloud_network_route.privNet: Destruction complete after 4s
hcloud_server.node1: Still destroying... [id=105902618, 10s elapsed]
hcloud_server.node1: Destruction complete after 16s
hcloud_server_network.gateway: Destroying... [id=105902498-11306023]
hcloud_server_network.gateway: Destruction complete after 5s
hcloud_server.natgateway: Destroying... [id=105902498]
hcloud_server.natgateway: Still destroying... [id=105902498, 10s elapsed]
hcloud_server.natgateway: Destruction complete after 18s
hcloud_network_subnet.network-subnet: Destroying... [id=11306023-10.0.0.0/16]
hcloud_ssh_key.sshkey: Destroying... [id=100658990]
hcloud_primary_ip.natgatewayip: Destroying... [id=97010443]
hcloud_ssh_key.sshkey: Destruction complete after 0s
hcloud_network_subnet.network-subnet: Destruction complete after 0s
hcloud_network.network: Destroying... [id=11306023]
hcloud_primary_ip.natgatewayip: Destruction complete after 0s
hcloud_network.network: Destruction complete after 0s
Destroy complete! Resources: 10 destroyed.
Please provide a minimal working example
provider.tf
# Tell terraform to use the provider and select a version.
terraform {
  required_version = ">= 1.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.45"
    }
  }
}

# Configure the Hetzner Cloud Provider
provider "hcloud" {
  token = var.hcloud_token
}
natgateway.tf
resource "hcloud_primary_ip" "natgatewayip" {
name = "${var.project_name}-natgateway-ip"
datacenter = var.datacenter
type = "ipv4"
assignee_type = "server"
auto_delete = false
labels = {
project = var.project_name
environment = var.environment
component = "natgateway"
}
}
resource "hcloud_network" "network" {
name = "${var.project_name}-private-network"
ip_range = var.private_network_cidr
labels = {
project = var.project_name
environment = var.environment
component = "networking"
}
}
resource "hcloud_network_route" "privNet" {
network_id = hcloud_network.network.id
destination = "0.0.0.0/0"
gateway = hcloud_server_network.gateway.ip
}
resource "hcloud_network_subnet" "network-subnet" {
type = "cloud"
network_id = hcloud_network.network.id
network_zone = var.network_zone
ip_range = var.subnet_cidr
}
resource "hcloud_server" "natgateway" {
name = "${var.project_name}-natgateway"
server_type = var.server_type
image = "ubuntu-24.04"
location = var.location
# **Note**: the depends_on is important when directly attaching the
# server to a network. Otherwise Terraform will attempt to create
# server and sub-network in parallel. This may result in the server
# creation failing randomly.
depends_on = [
hcloud_network_subnet.network-subnet
]
public_net {
ipv4_enabled = true
ipv4 = hcloud_primary_ip.natgatewayip.id
ipv6_enabled = false
}
ssh_keys = [ hcloud_ssh_key.sshkey.id ]
user_data = file("./cloud_init_natgateway.yaml")
labels = {
project = var.project_name
environment = var.environment
component = "natgateway"
role = "gateway"
}
}
resource "hcloud_server_network" "gateway" {
server_id = hcloud_server.natgateway.id
network_id = hcloud_network.network.id
ip = var.nat_gateway_ip
}
resource "hcloud_ssh_key" "sshkey" {
name = "${var.project_name}-ssh-key"
public_key = file(var.ssh_public_key_path)
labels = {
project = var.project_name
environment = var.environment
}
}
kubenodes.tf
### First Kubenode ###
resource "hcloud_server" "node1" {
  name        = "node1"
  server_type = "cx22"
  image       = "ubuntu-24.04"
  location    = "hel1"

  # **Note**: the depends_on is important when directly attaching the
  # server to a network. Otherwise Terraform will attempt to create
  # server and sub-network in parallel. This may result in the server
  # creation failing randomly.
  depends_on = [
    hcloud_network_subnet.network-subnet,
    hcloud_server_network.gateway
  ]

  network {
    network_id = hcloud_network.network.id
  }

  public_net {
    ipv4_enabled = false
    ipv6_enabled = false
  }

  ssh_keys = [hcloud_ssh_key.sshkey.id]
}

output "node1_ip" {
  value = tolist(hcloud_server.node1.network)[0].ip
}
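variables.tf (not part of the original report; this sketch only exists to make the example self-contained. The variable names match the references in the files above, but every default value is an assumption chosen for illustration, except where the destroy log above shows the actual value.)
variable "hcloud_token" {
  type      = string
  sensitive = true
}

variable "project_name" {
  type    = string
  default = "example" # assumed
}

variable "environment" {
  type    = string
  default = "dev" # assumed
}

variable "datacenter" {
  type    = string
  default = "hel1-dc2" # assumed, matches the hel1 location used by node1
}

variable "location" {
  type    = string
  default = "hel1" # node1 uses hel1 explicitly
}

variable "server_type" {
  type    = string
  default = "cx22" # assumed, node1 uses cx22 explicitly
}

variable "network_zone" {
  type    = string
  default = "eu-central" # assumed, zone containing hel1
}

variable "private_network_cidr" {
  type    = string
  default = "10.0.0.0/16" # assumed, must contain the subnet below
}

variable "subnet_cidr" {
  type    = string
  default = "10.0.0.0/16" # taken from the subnet id in the destroy log
}

variable "nat_gateway_ip" {
  type    = string
  default = "10.0.0.2" # assumed
}

variable "ssh_public_key_path" {
  type    = string
  default = "~/.ssh/id_ed25519.pub" # assumed
}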