
invalid option to crictl  #2136

Open
@lbrigman124

Description

What version of KubeKey has the issue?

3.1.0-alpha.7

What is your OS environment?

Rocky 9

KubeKey config file

Using a standard config; here is the critical section:

  kubernetes:
    version: v1.29.0
    clusterName: sample.lab.local
    autoRenewCerts: true
    containerManager: containerd

A clear and concise description of what happened.

During the install, the following messages are emitted by KubeKey (with debug on). A sample is shown below, but every requested image fails with this same message:

sudo -E /bin/bash -c "env PATH=$PATH crictl pull dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.26.1 --platform amd64"
15:51:47 PST stdout: [sample.lab.local]
NAME:
crictl pull - Pull an image from a registry

USAGE:
crictl pull command [command options] NAME[:TAG|@digest]

COMMANDS:
help, h Shows a list of commands or help for one command

OPTIONS:
--annotation value, -a value [ --annotation value, -a value ] Annotation to be set on the pulled image
--auth AUTH_STRING Use AUTH_STRING for accessing the registry. AUTH_STRING is a base64 encoded 'USERNAME[:PASSWORD]' [$CRICTL_AUTH]
--creds USERNAME[:PASSWORD] Use USERNAME[:PASSWORD] for accessing the registry [$CRICTL_CREDS]
--pod-config pod-config.[json|yaml] Use pod-config.[json|yaml] to override the pull context
--username USERNAME, -u USERNAME Use USERNAME for accessing the registry. The password will be requested on the command line
--help, -h show help
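
Note that --platform does not appear in the OPTIONS list above, which is why crictl prints its usage text instead of pulling. A minimal sketch of confirming the installed crictl release and retrying without the unsupported flag (the image name is copied from the failing command above):

  # Show the installed crictl client version; this build does not recognize --platform
  crictl --version
  # Retry the same pull without the unsupported flag
  sudo crictl pull dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.26.1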

Relevant log output

None of the images are downloaded, due to a secondary issue: the containerd service was not restarted after the local registry configuration was added, so crictl/containerd fails to pull images due to this error:

time="2024-02-21T15:52:11-08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphere/pause:3.9\": failed to resolve reference \"dockerhub.kubekey.local/kubesphere/pause:3.9\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphere/pause/manifests/3.9\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
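
For anyone hitting this, a possible workaround is to make containerd trust the local registry's CA and then restart the service so the new registry configuration actually takes effect. A minimal sketch, assuming the certs.d/hosts.toml layout supported by containerd 1.5+ and a CA file named ca.crt (these paths are assumptions, not necessarily what KubeKey writes):

  sudo mkdir -p /etc/containerd/certs.d/dockerhub.kubekey.local
  sudo cp ca.crt /etc/containerd/certs.d/dockerhub.kubekey.local/ca.crt
  # /etc/containerd/certs.d/dockerhub.kubekey.local/hosts.toml (assumed contents):
  #   server = "https://dockerhub.kubekey.local"
  #   [host."https://dockerhub.kubekey.local"]
  #     ca = "/etc/containerd/certs.d/dockerhub.kubekey.local/ca.crt"
  # Restart containerd so the registry configuration is picked up
  sudo systemctl restart containerd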

[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Additional information

The resulting deployment ended with kubeadm unable to initialize the Kubernetes cluster.
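
After restarting containerd with the CA in place, the pulls that failed above should be re-testable directly. A quick verification sketch (the CA path mirrors the assumed layout above):

  # Check that the registry's certificate now verifies against the CA
  openssl s_client -connect dockerhub.kubekey.local:443 -CAfile /etc/containerd/certs.d/dockerhub.kubekey.local/ca.crt </dev/null
  # Re-try the image from the fatal log line above
  sudo crictl pull dockerhub.kubekey.local/kubesphere/pause:3.9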
