Description
I am getting a panic in one of the test setups for the lease controller system:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc0 pc=0x18b0c80]
goroutine 1 [running]:
k8s.io/client-go/kubernetes.(*Clientset).CoordinationV1(0xc0005078e0?)
	/go/src/sigs.k8s.io/apiserver-network-proxy/vendor/k8s.io/client-go/kubernetes/clientset.go:315
k8s.io/component-helpers/apimachinery/lease.NewController({0x2076b10, 0x3070b20}, {0x20987d0, 0x0}, {0xc0003c00c0, 0x24}, 0x1e, 0x0, 0x37e11d600, {0xc000186380, ...}, ...)
	/go/src/sigs.k8s.io/apiserver-network-proxy/vendor/k8s.io/component-helpers/apimachinery/lease/controller.go:77 +0x66
sigs.k8s.io/apiserver-network-proxy/pkg/server/leases.NewController({0x20987d0, 0x0}, {0xc0003c00c0, 0x24}, 0x1e, 0x37e11d600, 0x37e11d600, {0xc000186380, 0x3e}, {0x7ffc263f9cef, ...}, ...)
	/go/src/sigs.k8s.io/apiserver-network-proxy/pkg/server/leases/controller.go:41 +0x11e
sigs.k8s.io/apiserver-network-proxy/cmd/server/app.(*Proxy).Run(0xc0000b0420, 0xc00029a000, 0xc0001f2120)
	/go/src/sigs.k8s.io/apiserver-network-proxy/cmd/server/app/server.go:164 +0x953
main.main.NewProxyCommand.func2(0xc0001a4500?, {0x1dab178?, 0x4?, 0x1daafb4?})
	/go/src/sigs.k8s.io/apiserver-network-proxy/cmd/server/app/server.go:68 +0x37
github.com/spf13/cobra.(*Command).execute(0xc0002a8008, {0xc00003e130, 0x11, 0x11})
	/go/src/sigs.k8s.io/apiserver-network-proxy/vendor/github.com/spf13/cobra/command.go:985 +0xaca
github.com/spf13/cobra.(*Command).ExecuteC(0xc0002a8008)
	/go/src/sigs.k8s.io/apiserver-network-proxy/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(0xc0000ea2a0?)
	/go/src/sigs.k8s.io/apiserver-network-proxy/vendor/github.com/spf13/cobra/command.go:1041 +0x13
main.main()
	/go/src/sigs.k8s.io/apiserver-network-proxy/cmd/server/main.go:47 +0x292
```
Line 315 in `/client-go/kubernetes/clientset.go` is the return statement:
```go
// CoordinationV1 retrieves the CoordinationV1Client
func (c *Clientset) CoordinationV1() coordinationv1.CoordinationV1Interface {
	return c.coordinationV1
}
```
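For what it's worth, the client argument in the `lease.NewController` frame reads `{0x20987d0, 0x0}`, which looks like a non-nil interface value wrapping a nil `*Clientset`; calling `CoordinationV1` on that nil receiver then dereferences `c.coordinationV1` and panics. A minimal stdlib-only sketch of that failure mode (the types below are local stand-ins that mimic the real ones, not the actual client-go code):

```go
package main

import "fmt"

// clientset mimics kubernetes.Clientset: a struct whose accessor
// reads a field, so a nil receiver dereferences nil and panics.
type clientset struct{ coordinationV1 any }

func (c *clientset) CoordinationV1() any {
	return c.coordinationV1 // panics when c is nil
}

// iface mimics kubernetes.Interface.
type iface interface{ CoordinationV1() any }

// demo reports whether calling through a typed-nil interface panics.
func demo() (gotPanic bool) {
	defer func() { gotPanic = recover() != nil }()
	var cs *clientset      // nil concrete pointer
	var client iface = cs  // interface is NON-nil: it wraps a typed nil
	if client == nil {     // this guard does NOT catch the typed nil
		return false
	}
	client.CoordinationV1() // nil pointer dereference happens here
	return false
}

func main() {
	fmt.Println("panicked:", demo()) // prints "panicked: true"
}
```

This is the classic Go typed-nil gotcha: a plain `client == nil` check upstream would not have caught it, because the interface carries a non-nil type word even though the data pointer is nil.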
So far, I've checked the RBAC, verified via:
```console
$ kubectl auth can-i list/get/watch/create/update leases --as=system:serviceaccount:<namespace>:<serviceaccountname> -n <namespace>
yes
```
I've also verified that `automountServiceAccountToken: true` is set.
I'm not sure what the problem could be, but regardless, we wouldn't want to panic here; could we return the error gracefully, fall back to the server count with a warning, or perhaps something else?
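One possible shape for that graceful failure, as a sketch only (the constructor, interface, and error messages below are illustrative, not the real apiserver-network-proxy API): validate the client, including the typed-nil case visible in the trace, before building the lease controller, and return an error so the caller can fall back.

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// kubeInterface stands in for kubernetes.Interface.
type kubeInterface interface{ CoordinationV1() any }

// validateClient rejects both a nil interface and an interface
// wrapping a nil pointer (the "typed nil" case), instead of letting
// a later method call panic with a nil pointer dereference.
func validateClient(c kubeInterface) error {
	if c == nil {
		return errors.New("kubernetes client is nil")
	}
	if v := reflect.ValueOf(c); v.Kind() == reflect.Ptr && v.IsNil() {
		return errors.New("kubernetes client wraps a nil pointer")
	}
	return nil
}

// newLeaseController is a hypothetical guarded constructor: it fails
// fast with a wrapped error rather than panicking mid-setup.
func newLeaseController(c kubeInterface) error {
	if err := validateClient(c); err != nil {
		return fmt.Errorf("lease controller unavailable, fall back to server count: %w", err)
	}
	// ... construct the real lease controller here ...
	return nil
}

// fakeClientset is a stand-in client for the demo below.
type fakeClientset struct{}

func (f *fakeClientset) CoordinationV1() any { return nil }

func main() {
	var nilCS *fakeClientset
	fmt.Println(newLeaseController(nilCS) != nil)          // prints true: typed nil rejected
	fmt.Println(newLeaseController(&fakeClientset{}) == nil) // prints true: valid client accepted
}
```

The `reflect` check is what makes this catch the exact situation in the stack trace; a bare `c == nil` comparison alone would pass the typed nil straight through to `CoordinationV1`.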