panic: storage is (re)initializing #7709

Open
@mcleantom

Description

Observed Behavior:
I am trying to install Karpenter via Helm. My Chart.yaml file looks like this:

apiVersion: v2
name: karpenter
description: Umbrella chart for karpenter
type: application
version: v1.2.1
dependencies:
  - name: karpenter
    version: 1.2.1
    repository: oci://public.ecr.aws/karpenter
    alias: karpenter

And my values.yaml file looks like this:

karpenter:
  controller:
    containerName: controller
    image:
      repository: public.ecr.aws/karpenter/controller
      tag: 1.1.2
      digest: ~

  serviceMonitor:
    enabled: true
  serviceAccount:
    create: true
    name: karpenter
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::442042521319:role/KarpenterControllerRole-eks-sandbox 
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: karpenter.sh/nodepool
                operator: DoesNotExist
  priorityClassName: system-cluster-critical
  logConfig:
    logEncoding: json
  settings:
    clusterName: eks-sandbox

Which I install by running:

helm upgrade --install karpenter . -f values.yaml --namespace karpenter --create-namespace
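As a side note, the umbrella chart can be rendered locally before installing, to confirm that the values nested under the `karpenter:` alias actually reach the dependency chart (a diagnostic sketch; assumes `helm` is installed and the chart directory is the working directory):

```shell
# Pull the aliased karpenter dependency declared in Chart.yaml
# into charts/ so the umbrella chart can be rendered.
helm dependency update .

# Render the manifests locally, without touching the cluster, to
# verify the controller image tag and settings that will be applied.
helm template karpenter . -f values.yaml --namespace karpenter
```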

When I look at the pods, one of the Karpenter pods is in a CrashLoopBackOff state:

mcleantom@ARL-145:/mnt/c/Users/tom.mclean/src/tmp/ARL-AWS/argocd/apps/karpenter$ kubectl get pods -n karpenter
NAME                         READY   STATUS             RESTARTS      AGE
karpenter-766fbcfdb7-cpz6c   0/1     Pending            0             4m34s
karpenter-766fbcfdb7-tjqlm   0/1     CrashLoopBackOff   5 (26s ago)   4m34s

The crashing pod's logs show:

mcleantom@ARL-145:/mnt/c/Users/tom.mclean/src/tmp/ARL-AWS/argocd/apps/karpenter$ kubectl logs -f karpenter-766fbcfdb7-tjqlm -n karpenter
panic: storage is (re)initializing

goroutine 1 [running]:
github.com/samber/lo.must({0x3a43aa0, 0xc000518500}, {0x0, 0x0, 0x0})
        github.com/samber/[email protected]/errors.go:53 +0x1df
github.com/samber/lo.Must0(...)
        github.com/samber/[email protected]/errors.go:72
github.com/aws/karpenter-provider-aws/pkg/operator.NewOperator({0x43c6580, 0xc0006205d0}, 0xc0006b02c0)
        github.com/aws/karpenter-provider-aws/pkg/operator/operator.go:101 +0x147
main.main()
        github.com/aws/karpenter-provider-aws/cmd/controller/main.go:29 +0x2a
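The panic comes from a `lo.Must0` call during operator startup, so the message is the raw error string with no further context. Since the operator needs the Karpenter CRDs available at startup, one thing I checked was whether they exist and are established (a diagnostic sketch; assumes `kubectl` access to the cluster, and `nodepools.karpenter.sh` is one of the standard Karpenter CRD names):

```shell
# List the Karpenter CRDs; the operator reads these at startup.
kubectl get crd | grep karpenter

# Check that a CRD has reached the Established condition.
kubectl get crd nodepools.karpenter.sh \
  -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
```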

I am not sure why it is panicking; any help would be appreciated.

Expected Behavior:
The pod runs, or logs a more descriptive error explaining why it is panicking.

Reproduction Steps (Please include YAML):
Included above

Versions:

  • Chart Version: 1.2.1
  • Kubernetes Version (kubectl version):
    Client Version: v1.32.0
    Kustomize Version: v5.5.0
    Server Version: v1.32.0-eks-5ca49cb

Labels

bug (Something isn't working) · lifecycle/stale · triage/needs-information (Marks that the issue still needs more information to properly triage)
