- Spawns a headless QEMU virtual machine based on a `vm.nix` NixOS module in the current working directory
- Mounts `$HOME` and the user's nix profile into the virtual machine
- Provides console access in the same terminal window
Example `vm.nix`:

```nix
{ pkgs, ... }: {
  boot.kernelPackages = pkgs.linuxPackages_latest;
}
```

nixos-shell is available in nixpkgs.
To start a VM, use:

```console
$ nixos-shell
```

In this case nixos-shell will read `vm.nix` in the current directory.
Instead of `vm.nix`, nixos-shell also accepts other modules on the command line:

```console
$ nixos-shell some-nix-module.nix
```

You can also start a VM from a flake's `nixosConfigurations` using the `--flake` flag:

```console
$ nixos-shell --flake github:Mic92/nixos-shell#vm-forward
```

This will run the vm-forward example.
Note: system configurations have to be made overridable with `lib.makeOverridable` to use them with nixos-shell:

```nix
{
  nixosConfigurations = let lib = nixpkgs.lib; in {
    vm = lib.makeOverridable lib.nixosSystem {
      # ...
    };
  };
}
```
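Embedded in a complete flake, the overridable configuration might look like the following sketch (the nixpkgs input URL, the `system` value, and the `./vm.nix` module path are illustrative assumptions, not part of the nixos-shell documentation):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations = let lib = nixpkgs.lib; in {
      # wrapped in lib.makeOverridable so nixos-shell can override it
      vm = lib.makeOverridable lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./vm.nix ];
      };
    };
  };
}
```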
Type `Ctrl-a x` to exit the virtual machine.
You can also run the `poweroff` command in the virtual machine console:

```console
$vm> poweroff
```

Or switch to the qemu console with `Ctrl-a c` and type:

```console
(qemu) quit
```

To forward ports from the virtual machine to the host, override the `virtualisation.qemu.networkingOptions` NixOS option.
See examples/vm-forward.nix where the ssh server running on port 22 in the
virtual machine is made accessible through port 2222 on the host.
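The override in that example is roughly of the following shape (a sketch, not a verbatim copy of examples/vm-forward.nix — check the file in the repository for the authoritative version):

```nix
{ lib, ... }: {
  # lib.mkForce replaces the default user-mode networking options
  # instead of appending to them
  virtualisation.qemu.networkingOptions = lib.mkForce [
    "-net nic,netdev=user.0,model=virtio"
    "-netdev user,id=user.0,hostfwd=tcp::2222-:22"
  ];
}
```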
If `virtualisation.qemu.networkingOptions` is not overridden, the same can also be achieved using the `QEMU_NET_OPTS` environment variable:

```console
$ QEMU_NET_OPTS="hostfwd=tcp::2222-:22" nixos-shell
```

Your SSH keys are used to enable passwordless login for the root user.
At the moment only `~/.ssh/id_rsa.pub`, `~/.ssh/id_ecdsa.pub` and `~/.ssh/id_ed25519.pub` are added automatically. Use `users.users.root.openssh.authorizedKeys.keyFiles` to add more.
Note: sshd is not started by default. It can be enabled by setting `services.openssh.enable = true`.
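Putting the pieces together, a `vm.nix` that starts sshd and authorizes an additional key might look like this sketch (the `./extra-key.pub` path is a placeholder, not a file nixos-shell provides):

```nix
{ ... }: {
  # sshd is off by default in nixos-shell VMs
  services.openssh.enable = true;

  # add keys beyond the automatically picked-up ~/.ssh/id_*.pub files;
  # the path below is a placeholder
  users.users.root.openssh.authorizedKeys.keyFiles = [ ./extra-key.pub ];
}
```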
QEMU is started with user-mode networking by default. To use bridge networking instead, set `virtualisation.qemu.networkingOptions` to something like `[ "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper" ]`. `/run/wrappers/bin/qemu-bridge-helper` is a NixOS-specific path; on other Linux distributions the path to qemu-bridge-helper will differ.
QEMU needs to be installed on the host to get qemu-bridge-helper with the setuid bit set — otherwise you will need to start the VM as root. On NixOS this can be achieved by setting:

```nix
virtualisation.libvirtd.enable = true;
```
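Collected into a module, the bridge setup described above might be written as follows (`br0` and the MAC address are the example values from the text — adjust them for your network):

```nix
{ ... }: {
  virtualisation.qemu.networkingOptions = [
    # /run/wrappers/bin/qemu-bridge-helper is the NixOS-specific path;
    # use your distribution's qemu-bridge-helper path elsewhere
    "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper"
  ];
}
```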
By default qemu will allow at most 500 MB of RAM; this can be increased using `virtualisation.memorySize` (size in megabytes):

```nix
{ virtualisation.memorySize = 1024; }
```

To increase the CPU count, use `virtualisation.cores` (defaults to 1):

```nix
{ virtualisation.cores = 2; }
```

To increase the size of the virtual hard drive, e.g. to 20 GB (see the virtualisation options at the bottom; the default is 512 MB):

```nix
{ virtualisation.diskSize = 20 * 1024; }
```

Note that for this option to take effect you may also need to delete the block device file previously created by qemu (`nixos.qcow2`).
Note that changes to the nix store are written to an overlayfs backed by a tmpfs, rather than to the block device configured by `virtualisation.diskSize`. This tmpfs can be disabled with:

```nix
{ virtualisation.writableStoreUseTmpfs = false; }
```

This option is recommended if you plan to use nixos-shell as a remote builder.
To use graphical applications, set the `virtualisation.graphics` NixOS option (see examples/vm-graphics.nix).
By default, for the user's convenience, nixos-shell does not enable a firewall. This can be overridden with:

```nix
{ networking.firewall.enable = true; }
```

There are no explicit options for passing extra qemu flags right now, but one can either use the `$QEMU_OPTS` environment variable or set `virtualisation.qemu.options` to pass the right qemu command line flags:
```nix
{
  # /dev/sdc also needs to be read-writable by the user executing nixos-shell
  virtualisation.qemu.options = [ "-hdc" "/dev/sdc" ];
}
```

For example, to boot with UEFI firmware (OVMF):

```nix
{ virtualisation.qemu.options = [ "-bios" "${pkgs.OVMF.fd}/FV/OVMF.fd" ]; }
```

To mount anywhere inside the virtual machine, use the `nixos-shell.mounts.extraMounts` option:
```nix
{
  nixos-shell.mounts.extraMounts = {
    # simple USB stick sharing
    "/media" = /media;

    # override options for each mount
    "/var/www" = {
      target = ./src;
      cache = "none";
    };
  };
}
```

You can further configure the default mount settings:
```nix
{
  nixos-shell.mounts = {
    mountHome = false;
    mountNixProfile = false;
    cache = "none"; # default is "loose"
  };
}
```

Available cache modes are documented in the 9p kernel module.
In many cloud environments KVM is not available, and nixos-shell will therefore fail with:

```
CPU model 'host' requires KVM
```

In newer versions of nixpkgs this has been fixed by falling back to emulation. In older versions one can set `virtualisation.qemu.options` or the `QEMU_OPTS` environment variable:

```console
export QEMU_OPTS="-cpu max"
nixos-shell
```

A full list of supported qemu CPUs can be obtained by running `qemu-kvm -cpu help`.
By default, VMs have a `NIX_PATH` configured for nix channels, but no channels are downloaded yet. To avoid having to download a nix channel every time the VM is reset, you can use the following NixOS configuration:

```nix
{ pkgs, ... }: {
  nix.nixPath = [
    "nixpkgs=${pkgs.path}"
  ];
}
```

This adds the nixpkgs used for the VM to the `NIX_PATH` of the login shell.
Have a look at the virtualisation options NixOS provides.