Hi,

I tried to test the K0smotron control plane deployment on both minikube and a k0s-in-Docker deployment before applying the control plane to a running cluster. On both deployments, once the cluster is created with the following configuration, the cluster controller pod keeps spamming its logs with the errors below (the namespace and version configuration do not seem to matter; I tried several combinations). Is this kind of deployment expected to work, or is there an access-rights issue when reaching the metrics server?
time="2024-10-06 08:53:10" level=info msg="E1006 08:53:10.613845 84 controller.go:113] \"Unhandled Error\" err=\"loading OpenAPI spec for \\\"v1beta1.metrics.k8s.io \\\" failed with: Error, could not get list of group versions for APIService\" logger=\"UnhandledError\"" component=kube-apiserver stream=stderr
time="2024-10-06 08:53:10" level=info msg="E1006 08:53:10.613879 84 controller.go:102] \"Unhandled Error\" err=<" component=kube-apiserver stream=stderr
time="2024-10-06 08:53:10" level=info msg="\tloading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable" component=kube-apiserver stream=stderr
time="2024-10-06 08:53:10" level=info msg="\t, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]" component=kube-apiserver stream=stderr
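For context, the 503 in these logs usually means the aggregation layer has no healthy backend: the apiserver proxies metrics.k8s.io requests to the metrics-server Service, and when there are no ready endpoints behind it the proxy returns 503. A quick way to check (a sketch, not a definitive diagnosis; run it against the cluster whose apiserver is logging the error, and it assumes the default kube-system install with the standard k8s-app=metrics-server label):

```shell
# Check whether the aggregated APIService reports Available.
kubectl get apiservice v1beta1.metrics.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'

# If the Service exists but has no ready pods behind it, the
# apiserver's proxy to it fails with 503 like in the logs above.
kubectl -n kube-system get endpoints metrics-server
kubectl -n kube-system get pods -l k8s-app=metrics-server
```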
The API services are there and the kube-system/metrics-server Service is readable:
kubectl get apiservices
NAME                                      SERVICE                      AVAILABLE   AGE
v1.                                       Local                        True        17h
v1.acme.cert-manager.io                   Local                        True        49m
v1.admissionregistration.k8s.io           Local                        True        17h
v1.apiextensions.k8s.io                   Local                        True        17h
v1.apps                                   Local                        True        17h
v1.authentication.k8s.io                  Local                        True        17h
v1.authorization.k8s.io                   Local                        True        17h
v1.autoscaling                            Local                        True        17h
v1.batch                                  Local                        True        17h
v1.cert-manager.io                        Local                        True        49m
v1.certificates.k8s.io                    Local                        True        17h
v1.coordination.k8s.io                    Local                        True        17h
v1.discovery.k8s.io                       Local                        True        17h
v1.events.k8s.io                          Local                        True        17h
v1.flowcontrol.apiserver.k8s.io           Local                        True        17h
v1.networking.k8s.io                      Local                        True        17h
v1.node.k8s.io                            Local                        True        17h
v1.policy                                 Local                        True        17h
v1.rbac.authorization.k8s.io              Local                        True        17h
v1.scheduling.k8s.io                      Local                        True        17h
v1.storage.k8s.io                         Local                        True        17h
v1alpha1.ipam.cluster.x-k8s.io            Local                        True        29m
v1alpha1.runtime.cluster.x-k8s.io         Local                        True        29m
v1alpha3.clusterctl.cluster.x-k8s.io      Local                        True        49m
v1beta1.addons.cluster.x-k8s.io           Local                        True        29m
v1beta1.bootstrap.cluster.x-k8s.io        Local                        True        29m
v1beta1.cluster.x-k8s.io                  Local                        True        29m
v1beta1.controlplane.cluster.x-k8s.io     Local                        True        29m
v1beta1.etcd.k0sproject.io                Local                        True        17h
v1beta1.helm.k0sproject.io                Local                        True        17h
v1beta1.infrastructure.cluster.x-k8s.io   Local                        True        29m
v1beta1.ipam.cluster.x-k8s.io             Local                        True        29m
v1beta1.k0smotron.io                      Local                        True        29m
v1beta1.metrics.k8s.io                    kube-system/metrics-server   True        17h
v1beta2.autopilot.k0sproject.io           Local                        True        17h
v1beta3.flowcontrol.apiserver.k8s.io      Local                        True        17h
v2.autoscaling                            Local                        True        17h
Did you attach any nodes to the child cluster you created? If not, then this is expected until you have nodes attached and actually get metrics-server running.
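A minimal way to confirm this from the child cluster's side (a sketch under assumptions: the kubeconfig path is illustrative, and metrics-server is assumed to carry the default k8s-app=metrics-server label):

```shell
# Point kubectl at the k0smotron child cluster, not the management
# cluster (path below is hypothetical; use your generated kubeconfig).
export KUBECONFIG=./child-cluster.kubeconfig

# With no worker nodes joined, both listings come back empty: the
# metrics-server pod cannot be scheduled anywhere, so nothing backs
# the v1beta1.metrics.k8s.io APIService and the apiserver logs 503s.
kubectl get nodes
kubectl -n kube-system get pods -l k8s-app=metrics-server
```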
Not yet. At least for me, a component that starts throwing errors during local testing is a blocker for moving it to actual hardware, unless there is a clear reason why the component would behave differently in a virtual deployment versus a hardware deployment.
Thanks for the clarification: the error is caused not by failing to find the metrics server of the cluster running the control plane, but by the missing metrics server of the k0smotron child cluster.