k0s kubectl connection fails after upgrading to K0s v1.31.2 #5350
Comments
Hi @jsalgado78, the only explanation I can imagine for this is that there are multiple CAs on different control plane nodes. From the node that you gathered these outputs from, please gather the output of:

We also need to know the list of IP addresses of each controller, and from EACH controller run:

Additionally, I'd like the following outputs from each controller node:

Finally, I need to know if you're using any apiserver extraArgs (this is specified in …)
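To check the "multiple CAs" theory, a minimal sketch along these lines could be run on every controller and the outputs compared (`/var/lib/k0s/pki/ca.crt` is the default k0s CA location; the `CA_CRT` override variable is mine, for illustration):

```shell
# Compare the cluster CA across controllers: the hashes and fingerprints
# must match on every control plane node if they share a single CA.
CA_CRT="${CA_CRT:-/var/lib/k0s/pki/ca.crt}"
if [ -f "$CA_CRT" ]; then
  # Raw file hash: quick equality check between nodes.
  sha256sum "$CA_CRT"
  # Subject and certificate fingerprint: an eyeball check of the CA identity.
  openssl x509 -in "$CA_CRT" -noout -subject -fingerprint -sha256
fi
```

If any controller prints a different fingerprint, the control plane nodes are not sharing one CA.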
Controller 1 (hostname s504pre132k0s) IPs: 192.168.60.132, 10.230.222.132, 192.168.80.132
I've attached a log file with the output from each controller server: controller_s504pre134k0s.log. Thanks
Hi @jsalgado78,
This is the /etc/k0s/k0s.yaml file on controller1 (s504pre132k0s), renamed to .txt because of GitHub restrictions.

[root@s504pre132k0s /etc/k0s]# ps aux | grep kube-apiserver

Thanks
Hi @jsalgado78,

1- We want to see how …

2- We believe there might be some object in Kubernetes causing this behavior, for instance a validating webhook; please acquire:

3- We released 1.31.3 on Saturday, which includes a new kube-apiserver. If you're able to update to that, we think it's worth giving it a shot just in case...
This is the admin.conf file. This is the kubectl get output. I've tried k0s 1.31.3 with the same results:
Thanks.
Hi @jsalgado78, would it be possible for you to gather the output of:
These are the output log files for each IP on controller s504pre132k0s: Thanks.
Hi @jsalgado78,
Before creating an issue, make sure you've checked the following:
Platform
Version
v1.31.2+k0s.0
Sysinfo
`k0s sysinfo`
What happened?
I've upgraded an on-premises k0s cluster with three controller nodes and three worker nodes from k0s v1.30.4 to v1.31.2 using k0sctl 0.20.0, and I get this error message on each controller node when I execute k0s kubectl:
E1211 17:33:36.392466 13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
Unable to connect to the server: Forbidden
Steps to reproduce
Expected behavior
k0s kubectl commands should work on each controller node using /var/lib/k0s/pki/admin.conf as kubeconfig file
Actual behavior
The /var/lib/k0s/pki/admin.conf file no longer uses https://localhost:6443 as the API server URL like previous versions did, and the /etc/k0s/k0s.yaml file doesn't include 127.0.0.1 in the sans parameter like previous versions did, so k0s kubectl commands fail with a Forbidden error message.
If I run kubectl with my own previously generated kubeconfig, everything appears to be fine in this k0s cluster.
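Whether localhost/127.0.0.1 actually made it into the serving certificate can be checked directly. A sketch, assuming the apiserver serving cert lives at /var/lib/k0s/pki/server.crt (an assumed path; the `CERT` variable is mine, and `-ext` needs OpenSSL 1.1.1+):

```shell
# Print the Subject Alternative Names of the apiserver serving certificate,
# to see whether localhost / 127.0.0.1 are among them.
CERT="${CERT:-/var/lib/k0s/pki/server.crt}"   # assumed k0s path
if [ -f "$CERT" ]; then
  openssl x509 -in "$CERT" -noout -ext subjectAltName
fi
```

If 127.0.0.1 is missing from the SAN list, connecting via localhost would fail TLS verification, which matches the sans observation above.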
Screenshots and logs
No response
Additional context
[root]# k0s kubectl get nodes
E1211 17:33:36.362183 13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:33:36.368966 13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:33:36.376438 13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:33:36.384324 13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:33:36.392466 13798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
Unable to connect to the server: Forbidden
These controller nodes have three network interfaces each. In this example, the controller node has IPs 192.168.80.132, 192.168.60.132 and 10.230.222.132.
Something is wrong in /var/lib/k0s/pki/admin.conf with k0s 1.31.2, because I've been upgrading k0s from previous versions without issues like this:
[root]# grep server /var/lib/k0s/pki/admin.conf
server: https://192.168.80.132:6443
[root]# sed -i 's/192.168.80.132/localhost/' /var/lib/k0s/pki/admin.conf
[root]# k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
s504pre144k0s Ready 2y150d v1.31.2+k0s
s504pre145k0s Ready 2y150d v1.31.2+k0s
s504pre146k0s Ready 2y150d v1.31.2+k0s
[root]# systemctl restart k0scontroller
[root]# k0s kubectl get nodes
E1211 17:40:19.164057 14737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:40:19.172882 14737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:40:19.181674 14737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:40:19.189815 14737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
E1211 17:40:19.197323 14737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "https://192.168.80.132:6443/api?timeout=32s\": Forbidden"
Unable to connect to the server: Forbidden
[root]# grep server /var/lib/k0s/pki/admin.conf
server: https://192.168.80.132:6443
beforeupgrade-k0s.yaml.log
afterupgrade-k0s.yaml.log
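Since the sed edit above is reverted on every k0scontroller restart, a possible workaround while this is investigated is to declare the extra names in k0s.yaml under spec.api.sans, which k0s includes in the apiserver certificate SANs. A sketch of the relevant fragment (the IP is this cluster's, taken from above; whether this also restores the old admin.conf server URL is an assumption to verify):

```yaml
# Fragment of /etc/k0s/k0s.yaml: extra names/IPs to include in the
# apiserver certificate SANs. Restart k0scontroller after changing it.
spec:
  api:
    sans:
      - 127.0.0.1
      - localhost
      - 192.168.80.132
```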