| title | List of Integration Test Cases |
|---|---|
| description | Auto generated list of all minikube integration tests and what they do. |
makes sure the --download-only parameter in minikube start caches the appropriate images and tarballs.
makes sure --download-only caches the docker driver images as well.
tests the functionality of the --binary-mirror flag
makes sure minikube works without internet, once the user has cached the necessary images. This test has to run after TestDownloadOnly.
tests addons that require no special environment in parallel
tests the ingress addon by deploying a default nginx pod
tests the registry-creds addon by trying to load its configs
tests the registry addon
tests the metrics server addon by making sure "kubectl top pods" returns a sensible result
tests the OLM addon
tests the csi hostpath driver by creating a persistent volume, snapshotting it and restoring it.
validates that newly created namespaces contain the gcp-auth secret.
tests the GCP Auth addon with either phony or real credentials and makes sure the files are mounted into pods correctly
tests the inspektor-gadget addon by ensuring the pod comes up and the addon disables
tests the cloud-spanner addon by ensuring the deployment and pod come up and the addon disables
tests the Volcano addon, making sure Volcano is installed into the cluster.
tests the functionality of the storage-provisioner-rancher addon
tests enabling an addon on a non-existing cluster
tests disabling an addon on a non-existing cluster
tests the nvidia-device-plugin addon by ensuring the pod comes up and the addon disables
tests the amd-gpu-device-plugin addon by ensuring the pod comes up and the addon disables
makes sure minikube certs respect the --apiserver-ips and --apiserver-names parameters
makes sure minikube can start after its profile certs have expired. It does this by configuring minikube certs to expire after 3 minutes, then waiting 3 minutes, then starting again. It also makes sure minikube prints a cert expiration warning to the user.
makes sure the --docker-env and --docker-opt parameters are respected
tests the --force-systemd flag, as one would expect.
makes sure the --force-systemd flag works with the docker container runtime
makes sure the --force-systemd flag works with the containerd container runtime
makes sure the --force-systemd flag works with the cri-o container runtime
makes sure the MINIKUBE_FORCE_SYSTEMD environment variable works just as well as the --force-systemd flag
makes sure that minikube docker-env command works when the runtime is containerd
makes sure our docker-machine-driver-hyperkit binary can be installed properly
makes sure our docker-machine-driver-hyperkit binary can be installed properly
asserts that there are no unexpected errors displayed in minikube command outputs.
are functionality tests which can safely share a profile in parallel
runs functional tests using NewestKubernetesVersion
checks if the minikube cluster is created with the correct Kubernetes node labels
Steps:
- Get the node labels from the cluster with `kubectl get nodes`
- Check if the node labels match the expected minikube labels: `minikube.k8s.io/*`
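The label check above can be sketched in Python. The JSON sample below is a hand-written assumption of the `kubectl get nodes -o json` shape, not output from a real cluster:

```python
import json

# Illustrative sample of `kubectl get nodes -o json` output (assumed shape).
sample = json.loads("""
{"items": [{"metadata": {"labels": {
    "minikube.k8s.io/name": "minikube",
    "minikube.k8s.io/version": "v1.33.0",
    "kubernetes.io/hostname": "minikube"}}}]}
""")

def minikube_labels(nodes: dict) -> dict:
    """Collect every minikube.k8s.io/* label across all nodes."""
    found = {}
    for node in nodes["items"]:
        for key, value in node["metadata"].get("labels", {}).items():
            if key.startswith("minikube.k8s.io/"):
                found[key] = value
    return found

labels = minikube_labels(sample)
assert "minikube.k8s.io/name" in labels
```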
runs tests on all the minikube image commands, e.g. minikube image load, minikube image list, etc.
Steps:
- Build an image with `minikube image build` using `test/integration/testdata/build/Dockerfile`
- Add `content.txt` from the build context into a `gcr.io/k8s-minikube/busybox` image and tag it as `localhost/my-image:<profile>`
- Start `buildkitd` on demand inside minikube only when the build runs, then verify the new image exists in the cluster runtime
- Make sure image loading from the Docker daemon works by `minikube image load --daemon`
- Try to load an image that is already loaded and make sure `minikube image load --daemon` works
- Make sure a new updated tag works by `minikube image load --daemon`
- Make sure image saving works by `minikube image save`
- Make sure image removal works by `minikube image rm`
- Make sure image loading from file works by `minikube image load`
- Make sure image saving to the Docker daemon works by `minikube image save --daemon`
Skips:
- Skips on the `none` driver as image loading is not supported
- Skips on GitHub Actions / prow environments and macOS as this test case requires a running docker daemon
checks the functionality of minikube after evaluating docker-env
Steps:
- Run `eval $(minikube docker-env)` to configure the current environment to use minikube's Docker daemon
- Run `minikube status` to get the minikube status
- Make sure minikube components have status `Running`
- Make sure `docker-env` has status `in-use`
- Run `eval $(minikube -p profile docker-env)` and check if we are pointing to the Docker daemon inside minikube
- Make sure `docker images` hits minikube's Docker daemon by checking if `gcr.io/k8s-minikube/storage-provisioner` is in the output of `docker images`
Skips:
- Skips on the `none` driver since `docker-env` is not supported
- Skips on non-docker container runtimes
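What `eval $(minikube docker-env)` does can be sketched by parsing the `export KEY="VALUE"` lines the command prints. The sample output below is an assumption of the typical shape, not captured from a real run:

```python
import re

# Assumed sample of `minikube docker-env` output (values are illustrative).
sample_output = '''
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
'''

def parse_env_exports(text: str) -> dict:
    """Turn shell `export KEY="VALUE"` lines into a dict, as eval would set them."""
    env = {}
    for match in re.finditer(r'^export (\w+)="([^"]*)"$', text, re.M):
        env[match.group(1)] = match.group(2)
    return env

env = parse_env_exports(sample_output)
assert env["DOCKER_HOST"].startswith("tcp://")
```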
checks the functionality of minikube after evaluating podman-env
Steps:
- Run `eval $(minikube podman-env)` to configure the current environment to use minikube's Podman daemon, and `minikube status` to get the minikube status
- Make sure minikube components have status `Running`
- Make sure `podman-env` has status `in-use`
- Run `eval $(minikube docker-env)` again and `docker images` to list the docker images using minikube's Docker daemon
- Make sure `docker images` hits minikube's Podman daemon by checking if `gcr.io/k8s-minikube/storage-provisioner` is in the output of `docker images`
Skips:
- Skips on the `none` driver since `podman-env` is not supported
- Skips on non-docker container runtimes
- Skips on non-Linux platforms
makes sure minikube start respects the HTTP_PROXY environment variable
Steps:
- Start a local HTTP proxy
- Start minikube with the environment variable `HTTP_PROXY` set to the local HTTP proxy
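The "start a local HTTP proxy" step can be sketched with a minimal forward proxy built on the standard library. Here a plain urllib request stands in for minikube; the real test would instead export `HTTP_PROXY` before `minikube start`. All servers and ports below are local and ephemeral:

```python
import http.server
import threading
import urllib.request

class ProxyHandler(http.server.BaseHTTPRequestHandler):
    """Minimal HTTP forward proxy: proxy-style requests carry the absolute URI in self.path."""
    def do_GET(self):
        with urllib.request.urlopen(self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

class OriginHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in upstream server the proxied request should reach."""
    def do_GET(self):
        body = b"hello from origin"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

def serve(handler) -> int:
    """Start a server on an ephemeral port and return that port."""
    server = http.server.HTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

origin_port = serve(OriginHandler)
proxy_port = serve(ProxyHandler)

# Route the request through the proxy, as HTTP_PROXY would do for minikube.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": f"http://127.0.0.1:{proxy_port}"}))
body = opener.open(f"http://127.0.0.1:{origin_port}/").read()
assert body == b"hello from origin"
```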
makes sure minikube start respects the HTTPS_PROXY environment variable and works with custom certs. A proxy is started by calling the mitmdump binary in the background, then installing the certs generated by the binary. mitmproxy/dump creates the proxy at localhost on port 8080. This only runs on GitHub Actions for amd64 Linux; otherwise validateStartWithProxy runs instead.
makes sure the audit log contains the correct logging after minikube start
Steps:
- Read the audit log file and make sure it contains the current minikube profile name
validates that after minikube has already started, a subsequent minikube start should not change the configs.
Steps:
- The test `validateStartWithProxy` should have started minikube; make sure the configured node port is `8441`
- Run `minikube start` again as a soft start
- Make sure the configured node port has not changed
asserts that kubectl is properly configured (race-condition prone!)
Steps:
- Run `kubectl config current-context`
- Make sure the current minikube profile name is in the output of the command
asserts that kubectl get pod -A returns non-zero content
Steps:
- Run `kubectl get po -A` to get all pods in the current minikube profile
- Make sure the output is not empty and contains `kube-system` components
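The assertion on the pod listing can be sketched as below; the table is a hand-written sample of `kubectl get po -A` output, not taken from a real cluster:

```python
# Assumed sample of `kubectl get po -A` tabular output.
sample = """\
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-5dd5756b68-abcde           1/1     Running   0          1m
kube-system   etcd-minikube                      1/1     Running   0          1m
default       hello-node-7579565d66-xyz12       1/1     Running   0          30s
"""

# Skip the header row, split each data row into columns.
rows = [line.split() for line in sample.splitlines()[1:] if line.strip()]
assert rows, "expected at least one pod"
assert any(row[0] == "kube-system" for row in rows)
```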
validates that the minikube kubectl command returns content
Steps:
- Run `minikube kubectl -- get pods` to get the pods in the current minikube profile
- Make sure the command doesn't raise any error
validates that calling the minikube binary linked as "kubectl" acts as a kubectl wrapper. This tests the feature where minikube behaves like kubectl when invoked via a binary named "kubectl".
Steps:
- Run `kubectl get pods` by calling minikube's `kubectl` binary file directly
- Make sure the command doesn't raise any error
verifies minikube with --extra-config works as expected
Steps:
- The tests before this already created a profile
- Soft-start minikube with a different `--extra-config` command line option
- Load the profile's config
- Make sure the specified `--extra-config` is correctly returned
asserts that all Kubernetes components are healthy. NOTE: It expects all components to be Ready, so it makes sense to run it close after only those tests that include the '--wait=all' start flag.
Steps:
- Run `kubectl get po -l tier=control-plane -n kube-system -o=json` to get all the Kubernetes components
- For each component, make sure the pod status is `Running`
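The per-component health check can be sketched over the JSON output; the pod list below is an illustrative assumption of the `kubectl ... -o=json` shape:

```python
import json

# Assumed sample of `kubectl get po -l tier=control-plane -n kube-system -o=json`.
pods = json.loads("""
{"items": [
  {"metadata": {"name": "kube-apiserver-minikube"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "etcd-minikube"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "kube-scheduler-minikube"}, "status": {"phase": "Running"}}
]}
""")

# Every control-plane pod must report phase Running.
unhealthy = [p["metadata"]["name"] for p in pods["items"]
             if p["status"]["phase"] != "Running"]
assert not unhealthy, f"components not Running: {unhealthy}"
```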
makes sure minikube status outputs correctly
Steps:
- Run `minikube status` with the custom format `host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}`
- Make sure the `host`, `kublet`, `apiserver` and `kubeconfig` statuses are shown in the output
- Run `minikube status` again with JSON output
- Make sure the `host`, `kublet`, `apiserver` and `kubeconfig` statuses are set in the JSON output
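Checking the custom-format output can be sketched by splitting the line into key/value pairs. The sample line (including the literal `kublet` label from the format string) is an assumed output, not a captured one:

```python
# Assumed output of `minikube status --format=host:{{.Host}},kublet:{{.Kubelet}},...`
line = "host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured"

# Split "key:value" pairs on the first colon only, since values never contain commas here.
status = dict(pair.split(":", 1) for pair in line.split(","))

for key in ("host", "kublet", "apiserver", "kubeconfig"):
    assert key in status
assert status["host"] == "Running"
```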
asserts that the dashboard command works
Steps:
- Run `minikube dashboard --url` to start the minikube dashboard and return its URL
- Send a GET request to the dashboard URL
- Make sure HTTP status OK is returned
asserts that the dry-run mode quickly exits with the right code
Steps:
- Run `minikube start --dry-run --memory 250MB`
- Since 250MB of memory is less than the required 2GB, minikube should exit with the exit code `ExInsufficientMemory`
- Run `minikube start --dry-run`
- Make sure the command doesn't raise any error
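The exit-code assertion pattern can be sketched with a child process standing in for minikube. The numeric value used for `ExInsufficientMemory` below is a placeholder assumption, not taken from minikube's sources:

```python
import subprocess
import sys

# Placeholder for minikube's ExInsufficientMemory exit code (assumed value).
EX_INSUFFICIENT_MEMORY = 78

# Stand-in for `minikube start --dry-run --memory 250MB` failing fast.
failing = subprocess.run([sys.executable, "-c", "import sys; sys.exit(78)"])
assert failing.returncode == EX_INSUFFICIENT_MEMORY

# Stand-in for `minikube start --dry-run` succeeding.
passing = subprocess.run([sys.executable, "-c", "pass"])
assert passing.returncode == 0
```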
asserts that the language used can be changed with environment variables
Steps:
- Set the environment variable `LC_ALL=fr` to enable minikube translation to French
- Start minikube with 250MB of memory, which is too little: `minikube start --dry-run --memory 250MB`
- Make sure the dry-run output message is in French
tests functionality of cache command (cache add, delete, list)
Steps:
- Run `minikube cache add` and make sure we can add a remote image to the cache
- Run `minikube cache add` and make sure we can build and add a local image to the cache
- Run `minikube cache delete` and make sure we can delete an image from the cache
- Run `minikube cache list` and make sure we can list the images in the cache
- Run `minikube ssh sudo crictl images` and make sure we can list the images in the cache with `crictl`
- Delete an image from the minikube node and run `minikube cache reload` to make sure the image is brought back correctly
asserts basic "config" command functionality
Steps:
- Run `minikube config set/get/unset` to make sure configuration is modified correctly
asserts basic "logs" command functionality
Steps:
- Run `minikube logs` and make sure the logs contain some keywords like `apiserver`, `Audit` and `Last Start`
asserts "logs --file" command functionality
Steps:
- Run `minikube logs --file logs.txt` to save the logs to a local file
- Make sure the logs are correctly written
asserts "profile" command functionality
Steps:
- Run `minikube profile lis` and make sure the command doesn't fail for the non-existent profile `lis`
- Run `minikube profile list --output json` to make sure the previous command doesn't create a new profile
- Run `minikube profile list` and make sure the profiles are correctly listed
- Run `minikube profile list -o JSON` and make sure the profiles are correctly listed as JSON output
asserts basic "service" command functionality
Create a new `kicbase/echo-server` deployment
Run `minikube service list` to make sure the newly created service is correctly listed in the output
Run `minikube service list -o JSON` and make sure the services are correctly listed as JSON output
Run `minikube service` with `--https --url` to make sure the HTTPS endpoint URL of the service is printed
Run `minikube service` with `--url --format={{.IP}}` to make sure the IP address of the service is printed
Run `minikube service` with a regular `--url` to make sure the HTTP endpoint URL of the service is printed
Steps:
- Create a new `kicbase/echo-server` deployment
- Run `minikube service` with a regular `--url` to make sure the HTTP endpoint URL of the service is printed
- Make sure we can hit the endpoint URL with an HTTP GET request
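Hitting the printed endpoint URL with a GET, as the last step does, can be sketched with a local HTTP server standing in for the echo-server service:

```python
import http.server
import threading
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the echo-server service endpoint."""
    def do_GET(self):
        body = b"Request served by echo-server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the URL `minikube service --url` would print.
url = f"http://127.0.0.1:{server.server_address[1]}/"

with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
```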
asserts basic "addon" command functionality
Steps:
- Run `minikube addons list` to list the addons in a tabular format
- Make sure `dashboard`, `ingress` and `ingress-dns` are listed as available addons
- Run `minikube addons list -o JSON` to list the addons in JSON format
asserts basic "ssh" command functionality
Steps:
- Run `minikube ssh echo hello` to make sure we can SSH into the minikube container and run a command
- Run `minikube ssh cat /etc/hostname` as well to make sure the command is run inside minikube
asserts basic "cp" command functionality
Steps:
- Run `minikube cp ...` to copy a file to the minikube node
- Run `minikube ssh sudo cat ...` to print out the copied file within minikube
- Make sure the file is correctly copied
Skips:
- Skips on the `none` driver since `cp` is not supported
validates a minimalist MySQL deployment
Steps:
- Run `kubectl replace --force -f testdata/mysql.yaml`
- Wait for the `mysql` pod to be running
- Run `mysql -e "show databases;"` inside the MySQL pod to verify MySQL is up and running
- Retry with exponential backoff if failed, as `mysqld` first comes up without users configured. Scan for pod names in case of a reschedule.
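The retry-with-exponential-backoff step can be sketched as below; `flaky_probe` is a hypothetical stand-in for running `mysql -e "show databases;"` inside the pod while mysqld is still configuring users:

```python
import time

def retry_with_backoff(probe, attempts=5, base_delay=0.01):
    """Call probe() until it succeeds, doubling the delay between attempts."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return probe()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2

calls = {"n": 0}

def flaky_probe():
    # Fails twice, then succeeds -- mimicking mysqld coming up without users.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ERROR 1045: access denied")
    return "information_schema"

assert retry_with_backoff(flaky_probe) == "information_schema"
```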
checks the existence of the test file
Steps:
- Test files have been synced into minikube in the previous step `setupFileSync`
- Check the existence of the test file
- Make sure the file is correctly synced
Skips:
- Skips on the `none` driver since SSH is not supported
checks to make sure a custom cert has been copied into the minikube guest and installed correctly
Steps:
- Check both the installed & reference certs and make sure they are symlinked
asserts that for a given runtime, the other runtimes are disabled; for example, for the containerd runtime, docker and crio must not be running
Steps:
- For each container runtime, run `minikube ssh sudo systemctl is-active ...` and make sure the other container runtimes are not running
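The exclusivity check can be sketched as below; the statuses dict mimics `systemctl is-active` answers and its values are made up for illustration:

```python
def other_runtimes_inactive(active_runtime: str, statuses: dict) -> bool:
    """True when every runtime other than the chosen one reports a non-active state."""
    return all(state != "active"
               for runtime, state in statuses.items()
               if runtime != active_runtime)

# Illustrative `systemctl is-active` results for each runtime.
statuses = {"containerd": "active", "docker": "inactive", "crio": "inactive"}
assert other_runtimes_inactive("containerd", statuses)

statuses["docker"] = "active"  # a second active runtime should fail the check
assert not other_runtimes_inactive("containerd", statuses)
```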
asserts basic "update-context" command functionality
Steps:
- Run `minikube update-context`
- Make sure the context has been correctly updated by checking the command output
asserts minikube version command works fine for both --short and --components
Steps:
- Run `minikube version --short` and make sure the returned version is a valid semver
- Run `minikube version --components` and make sure the component versions are returned
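The "valid semver" check on the `--short` output can be sketched with a regular expression; the version strings below are examples, not real release numbers:

```python
import re

# Accepts an optional leading "v", three numeric parts, and an optional pre-release tag.
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)(?:-[0-9A-Za-z.-]+)?$")

def is_semver(version: str) -> bool:
    return SEMVER.match(version) is not None

assert is_semver("v1.33.1")
assert is_semver("1.2.3-beta.1")
assert not is_semver("not-a-version")
```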
asserts that the minikube license command downloads and untars the licenses
Note: This test will fail on release PRs as the licenses file for the new version won't be uploaded at that point
makes sure minikube will not start a tunnel for an unavailable service that has no running pods
verifies the minikube mount command works properly for the platforms that support it, we're testing:
- a generic 9p mount
- a 9p mount on a specific port
- cleaning-mechanism for profile-specific mounts
makes sure PVCs work properly. Verifies at least one StorageClass exists. Applies a PVC manifest (pvc.yaml) and verifies the PVC named myclaim reaches phase Bound. Creates a test pod (sp-pod) that mounts the claim (via createPVTestPod). Writes a file foo to the mounted volume at /tmp/mount/foo. Deletes the pod, recreates it, and verifies the file foo still exists by listing /tmp/mount, proving data persists across pod restarts.
makes sure the minikube tunnel command works as expected
starts minikube tunnel
ensures only 1 tunnel can run simultaneously
starts an nginx pod and nginx service, and waits for nginx to get a LoadBalancer ingress IP
validates if the test service can be accessed with LoadBalancer IP from host
validates if the DNS forwarding works by doing a DNS lookup with the dig command. NOTE: DNS forwarding is experimental: https://minikube.sigs.k8s.io/docs/handbook/accessing/#dns-resolution-experimental
validates if the DNS forwarding works by doing a DNS lookup with the dscacheutil command. NOTE: DNS forwarding is experimental: https://minikube.sigs.k8s.io/docs/handbook/accessing/#dns-resolution-experimental
validates if the test service can be accessed with DNS forwarding from the host. NOTE: DNS forwarding is experimental: https://minikube.sigs.k8s.io/docs/handbook/accessing/#dns-resolution-experimental
stops minikube tunnel
tests the functionality of the gVisor addon
tests all ha (multi-control plane) cluster functionality
ensures ha (multi-control plane) cluster can start.
deploys an app to ha (multi-control plane) cluster and ensures all nodes can serve traffic.
uses app previously deployed by validateDeployAppToHACluster to verify its pods, located on different nodes, can resolve "host.minikube.internal".
uses the minikube node add command to add a worker node to an existing ha (multi-control plane) cluster.
check if all node labels were configured correctly.
Steps:
- Get the node labels from the cluster with `kubectl get nodes`
- Check if all node labels match the expected minikube labels: `minikube.k8s.io/*`
ensures minikube profile list outputs correctly with ha (multi-control plane) clusters.
ensures minikube cp works with ha (multi-control plane) clusters.
tests ha (multi-control plane) cluster by stopping a secondary control-plane node using minikube node stop command.
ensures minikube profile list outputs correctly with ha (multi-control plane) clusters.
tests the minikube node start command on existing stopped secondary node.
restarts minikube cluster and checks if the reported node list is unchanged.
tests the minikube node delete command on a secondary control-plane. Note: currently, the 'minikube status' subcommand relies on the primary control-plane node, and storage-provisioner only runs on the primary control-plane node.
runs minikube stop on a ha (multi-control plane) cluster.
verifies a soft restart on a ha (multi-control plane) cluster works.
uses the minikube node add command to add a secondary control-plane node to an existing ha (multi-control plane) cluster.
makes sure the 'minikube image build' command works fine
starts a cluster for the image builds
is normal test case for minikube image build, with -t parameter
is normal test case for minikube image build, with -t and -f parameter
is a test case building with --build-opt
is a test case building with --build-env
is a test case building with .dockerignore
verifies files and packages installed inside minikube ISO/Base image
makes sure json output works properly for the start, pause, unpause, and stop commands
makes sure each step has a distinct step number
verifies that for a successful minikube start, 'current step' should be increasing
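The increasing-step check can be sketched over the JSON event stream from `minikube start -o json`. The event lines below are a hand-written approximation of the CloudEvents shape, not captured output:

```python
import json

# Assumed sample of the JSON event lines minikube emits during start.
events = [
    '{"data": {"currentstep": "0", "name": "Initial Minikube Setup"}}',
    '{"data": {"currentstep": "1", "name": "Selecting Driver"}}',
    '{"data": {"currentstep": "3", "name": "Starting Node"}}',
]

steps = [int(json.loads(line)["data"]["currentstep"]) for line in events]
assert steps == sorted(steps), "current step should be increasing"
assert len(steps) == len(set(steps)), "each step number should be distinct"
```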
makes sure json output can print errors properly
verifies the docker driver works with a custom network
verifies the docker driver and run with an existing network
verifies the docker/podman driver works with a custom subnet
starts minikube with the static IP flag
will return true if the integration test is running against a passed --base-image flag
tests using the mount command on start
starts a cluster with mount enabled
checks if the cluster has a folder mounted
stops a cluster
restarts a cluster
tests all multi node cluster functionality
makes sure a 2 node cluster can start
uses the minikube node add command to add a node to an existing cluster
makes sure minikube profile list outputs correctly with multinode clusters
makes sure minikube cp works with multinode clusters.
check if all node labels were configured correctly
Steps:
- Get the node labels from the cluster with `kubectl get nodes`
- Check if all node labels match the expected minikube labels: `minikube.k8s.io/*`
tests the minikube node stop command
tests the minikube node start command on an existing stopped node
restarts minikube cluster and checks if the reported node list is unchanged
runs minikube stop on a multinode cluster
verifies a soft restart on a multinode cluster works
tests the minikube node delete command
tests that the node name verification works as expected
deploys an app to a multinode cluster and makes sure all nodes can serve traffic
uses app previously deployed by validateDeployAppToMultiNode to verify its pods, located on different nodes, can resolve "host.minikube.internal".
tests all supported CNI options Options tested: kubenet, bridge, flannel, kindnet, calico, cilium Flags tested: enable-default-cni (legacy), false (CNI off), auto-detection
checks that minikube returns an error if the container runtime is "containerd" or "crio" and --cni=false
makes sure hairpinning (https://en.wikipedia.org/wiki/Hairpinning) is correctly configured for the given CNI. Trying to access the deployment/netcat pod using its external IP address, obtained from 'netcat' service DNS resolution, should fail if hairpinMode is off.
tests starting minikube without Kubernetes, for use cases where user only needs to use the container runtime (docker, containerd, crio) inside minikube
expects an error when starting a minikube cluster without Kubernetes but with a Kubernetes version.
Steps:
- start minikube with no kubernetes.
starts a minikube cluster with Kubernetes started/configured.
Steps:
- start minikube with Kubernetes.
- return an error if Kubernetes is not running.
starts a minikube cluster while stopping Kubernetes.
Steps:
- start minikube with no Kubernetes.
- return an error if Kubernetes is not stopped.
- delete minikube profile.
starts a minikube cluster without kubernetes started/configured
Steps:
- start minikube with no Kubernetes.
validates that there is no kubernetes running inside minikube
validates that minikube is stopped after a --no-kubernetes start
validates that profile list works with --no-kubernetes
validates that minikube start with no args works.
tests to make sure the CHANGE_MINIKUBE_NONE_USER environment variable is respected and changes the minikube file permissions from root to the correct user.
tests minikube pause functionality
just starts a new minikube cluster
validates that starting a running cluster does not invoke reconfiguration
runs minikube pause
runs minikube unpause
deletes the unpaused cluster
makes sure no leftovers, such as containers or volumes, remain after deleting a profile
makes sure paused clusters show up in minikube status correctly
verifies that disabling the initial preload, pulling a specific image, and restarting the cluster preserves the image across restarts. Also tests that --preload-source works for both github and gcs.
tests the schedule stop functionality on Windows
tests the schedule stop functionality on Unix
makes sure skaffold run can be run with minikube
tests starting, stopping and restarting a minikube cluster with various Kubernetes versions and configurations. The oldest supported, newest supported and default Kubernetes versions are always tested.
runs the initial minikube start
deploys an app to the minikube cluster
makes sure addons can be enabled while cluster is active.
tests minikube stop
makes sure addons can be enabled on a stopped cluster
verifies that starting a stopped cluster works
verifies that a user's app will not vanish after a minikube stop
validates that an addon which was enabled while minikube was stopped will be enabled and working after the cluster is started again.
verifies that a restarted cluster contains all the necessary images
verifies that minikube pause works
makes sure minikube status displays the correct info if there is insufficient disk space on the machine
upgrades a running legacy cluster to minikube at HEAD
starts a legacy minikube, stops it, and then upgrades to minikube at HEAD
upgrades Kubernetes from oldest to newest
tests a Docker upgrade where the underlying container is missing