
Conversation


@hpst3r commented Dec 7, 2025

Update the alpine_cloudinit.tf example so it validates and applies under v0.9.1 (ref issue #1243).

@schober-ch

Not sure if I'm doing it wrong, but I tried to run the updated example with the latest version, 0.9.1, and it fails to boot. I'm new to libvirt and can't currently capture any useful logs beyond what I see in virt-viewer, but once I figure that out I'll provide some details.

@hpst3r (Author) commented Dec 13, 2025

Hi @schober-ch - what's the target system? I should have mentioned that I tested against an AlmaLinux 10 box with libvirt 11.5.0, running Terraform 1.14.

I just (tonight) ran this against another Alma 10 machine (libvirt 11.5) successfully, and against a Fedora 42 machine (libvirt 11.0) with a failure (see below). Unfortunately, those three hosts are all I have accessible at the moment.

(screenshot of the boot failure)

The failure on the Fedora machine seems to be related to the virtio disk driver in pre-11.5 libvirt and this version of Alpine: swapping to SCSI fails the same way (unable to mount /), but swapping to the emulated SATA bus lets the guest boot successfully:

(screenshot of the guest booting with the SATA bus)

If you're seeing the same error, can you try adjusting the devices { disks = [] } block (lines 106 - 135) from:

    disks = [
      {
        source = {
          volume = {
            pool   = libvirt_volume.alpine_disk.pool
            volume = libvirt_volume.alpine_disk.name
          }
        }
        target = {
          dev = "vda"
          bus = "virtio"
        }
        driver = {
          type = "qcow2"
        }
      },
      {
        device = "cdrom"
        source = {
          volume = {
            pool   = libvirt_volume.alpine_seed_volume.pool
            volume = libvirt_volume.alpine_seed_volume.name
          }
        }
        target = {
          dev = "sda"
          bus = "sata"
        }
      }
    ]

To:

    disks = [
      {
        source = {
          volume = {
            pool   = libvirt_volume.alpine_disk.pool
            volume = libvirt_volume.alpine_disk.name
          }
        }
        target = {
          dev = "sda"
          bus = "sata"
        }
        driver = {
          type = "qcow2"
        }
      },
      {
        device = "cdrom"
        source = {
          volume = {
            pool   = libvirt_volume.alpine_seed_volume.pool
            volume = libvirt_volume.alpine_seed_volume.name
          }
        }
        target = {
          dev = "sdb"
          bus = "sata"
        }
      }
    ]
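
If it helps for testing across hosts, here is a rough sketch of parameterizing the bus so the same config can be applied against both libvirt versions without editing the disks block each time. The variable names disk_bus and disk_dev are mine, not part of the example file:

    # Illustrative sketch only - these variables are not in the example.
    variable "disk_bus" {
      type    = string
      default = "virtio" # set to "sata" on hosts where virtio fails to mount /
    }

    variable "disk_dev" {
      type    = string
      default = "vda" # pair with the bus: use "sda" when disk_bus is "sata"
    }

    # ...then in the first disk's target block, reference the variables:
    #     target = {
    #       dev = var.disk_dev
    #       bus = var.disk_bus
    #     }

Note that when the primary disk takes sda, the cdrom's dev would also need to move to sdb (as in the block above), so for a one-off test it may be simpler to just edit the block directly.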

I did notice an "ERROR: cloud-final failed to start" message coming up once the guest does boot, which I didn't see last weekend. I'll see if I have some time to troubleshoot later on.

@schober-ch

As usual, I got distracted from this project by other responsibilities, but I tested on an Arch Linux laptop with libvirt 11.10.0. I'll get back to this after Christmas (I hope) and then try some systematic testing.
