little-vm-helper (lvh) is a VM management tool aimed at testing and developing features that depend on the kernel, such as BPF. It is used in cilium, tetragon, and pwru, and can also be used for kernel development. It is not meant for, and should not be used for, running production VMs. Its main goals are fast booting, fast image building, and storage efficiency.
It uses qemu and libguestfs tools.
Configurations for specific images used in the Cilium project can be found in little-vm-helper-image.
For an example script, see scripts/example.sh.
LVH can be used to:
- build root images for VMs
- build kernels
- boot VMs using above
Build example images:
$ mkdir _data
$ go run cmd/lvh images example-config > _data/images.json
$ go run cmd/lvh images build --dir _data/images # this may require sudo, as it relies on /dev/kvm
The first command creates a configuration file. For each image, the configuration includes:
- a set of packages for the image
- an optional parent image
- a set of actions to be performed after the installation of the packages. Multiple actions are supported; see pkg/images/actions.go.
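For illustration, a configuration along these lines describes two images, with the second building on the first. The field names and package lists here are a sketch, not the authoritative schema; see pkg/images for the actual format:

```json
{
  "images": [
    {
      "name": "base.img",
      "packages": ["less", "vim", "openssh-server"]
    },
    {
      "name": "k8s.img",
      "parent": "base.img",
      "packages": ["docker.io"]
    }
  ]
}
```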
Once the images build command completes, the two images described in the configuration file will be present in the images directory. Note that the images are stored as sparse files, so they take up less space:
$ ls -sh1 _data/images/*.img
856M _data/images/base.img
1.7G _data/images/k8s.img
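As a quick illustration of why sparse files help, the following generic coreutils commands (not part of lvh) create a sparse file and compare its apparent size with its actual disk usage:

```shell
# Create a 1 GiB sparse file: the apparent size is 1 GiB,
# but no data blocks are allocated on disk yet.
truncate -s 1G sparse.img

ls -lh sparse.img   # apparent size: 1.0G
du -h sparse.img    # allocated size: 0 on filesystems with sparse-file support
```

`ls -sh1`, as used above, prints the allocated size rather than the apparent one, which is why the listed image sizes are smaller than the filesystem sizes inside the images.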
Build example kernels:
$ mkdir -p _data/kernels
$ go run cmd/lvh kernels --dir _data init
$ go run cmd/lvh kernels --dir _data add bpf-next git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git --fetch
$ go run cmd/lvh kernels --dir _data build bpf-next
The configuration file keeps the URL for each kernel, together with its configuration options:
$ jq . < _data/kernel.json
{
  "kernels": [
    {
      "name": "bpf-next",
      "url": "git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git"
    }
  ],
  "common_opts": [
    [
      "--enable",
      "CONFIG_LOCALVERSION_AUTO"
    ],
    ... more options ...
  ]
}
There are options that are applied to all kernels (common_opts) as well as kernel-specific options.
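For example, a kernel entry might carry its own options next to the shared common_opts. The opts field name below is hypothetical, following the shape of common_opts above; check pkg/kernels for the actual schema:

```json
{
  "name": "bpf-next",
  "url": "git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git",
  "opts": [
    ["--enable", "CONFIG_DEBUG_INFO_BTF"]
  ]
}
```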
The kernels are kept in worktrees. Specifically, there is a bare git directory (git) that holds all the objects, and one worktree per kernel. This allows efficient fetching while keeping each kernel in its own separate directory.
For example:
$ ls -1 _data/kernels
5.18/
bpf-next/
git/
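The same layout can be reproduced with plain git commands. The sketch below is illustrative (directory names and the empty root commit are stand-ins; lvh instead populates the object store by fetching the kernel remotes):

```shell
mkdir lvh-demo && cd lvh-demo

# One bare repository holds all objects; each kernel is a worktree on top of it.
git init -q --bare git

# Create an empty root commit with plumbing commands (these work in a bare repo).
empty_tree=$(git -C git mktree </dev/null)
root=$(git -C git -c user.email=dev@example.com -c user.name=dev \
         commit-tree "$empty_tree" -m 'root commit')

# Each worktree is an independent checkout backed by the shared object store.
git -C git worktree add -q --detach ../bpf-next "$root"
git -C git worktree add -q --detach ../5.18 "$root"

ls -1   # shows the three entries: 5.18, bpf-next, git
```

Because all worktrees share the object database in git/, fetching new commits once makes them available to every kernel checkout.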
Currently, kernels are built using the bzImage and dir-pkg targets (see pkg/kernels/conf.go).
You can use the run subcommand to start images. For example:
go run cmd/lvh run --image _data/images/base.qcow2 --kernel _data/kernels/bpf-next/arch/x86_64/boot/bzImage
Or, to use the kernel installed in the image:
go run cmd/lvh run --image _data/images/base.qcow2
Note: Building images and kernels is only supported on Linux. On the other hand, images and kernels already built on Linux can be booted on MacOS (both x86 and Arm). The only requirement is qemu-system-x86_64. As MacOS does not support KVM, the command to boot images is:
go run cmd/lvh run --image _data/images/base.qcow2 --qemu-disable-kvm
Existing packer builders (e.g., packer-ci-build/cilium-ubuntu.json, qemu) are meant to manage VMs with lifetimes longer than a single use, and use facilities that introduce unnecessary overhead for our use case.
Also, packer does not seem to offer a way to provision images without booting a machine. There is an outdated chroot package, packer-builder-qemu-chroot, and there are cloud chroot builders (e.g., the AMI Builder (chroot), which uses packer-plugin-sdk/chroot).
That being said, if we need packer functionality we can create a packer plugin (see developing-plugins).
These tools also target production VMs with lifetimes stretching beyond a single use. As a result, they introduce overhead in boot time, provisioning time, and storage.
- development workflow for MacOS X
- images: configuration option for using different deb distros (hardcoded to sid now)
- images: build tetragon images
- unit tests
- e2e tests (kind)
- images: docker image with required binaries (libguestfs, mmdebstrap, etc.) to run the tool - [x] is that possible? libguestfs needs to boot a mini-VM
- kernels: add support for building kernels
- runner: qemu runner wrapper
- images bootable VMs: running qemu with --kernel is convenient for development. If we want to store images externally (e.g., AWS), it might make sense to support bootable VMs.
- improve boot time: minimal init, use qemu microvm (microvm, QEMU MicroVMs)
- images: on a failed run, save everything in a image-failed-$(date) directory
- use guestfish --listen (see prepare-rootfs/run.sh)
- earlier attempt: kkourt/kvm-dev-scripts