Modifying the installFlags doesn't do anything after initial install #302
Comments
And these are in the correct section of the YAML?

```yaml
spec:
  hosts:
    - role: controller
      installFlags:
        - --disable-components xyz
    - role: worker
      installFlags:
        - --debug
```
Yep, they even work after doing a

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  k0s:
    version: 1.23.1+k0s.1
  hosts:
    - ssh:
        address: 172.20.4.56
        port: 22
        user: root
      role: controller
      installFlags: &installFlagsController
        # disable metrics-server since we use prometheus instead
        - --disable-components metrics-server
        # disable konnectivity-server since we don't have a LB in front of our controller
        # see: https://github.com/k0sproject/k0s/issues/1352
        - --disable-components konnectivity-server
    - ssh:
        address: 172.20.4.57
        port: 22
        user: root
      role: controller
      installFlags: *installFlagsController
    - ssh:
        address: 172.20.4.58
        port: 22
        user: root
      role: controller
      installFlags: *installFlagsController
    - ssh:
        address: 172.20.4.59
        port: 22
        user: root
      role: worker
    - ssh:
        address: 172.20.4.60
        port: 22
        user: root
      role: worker
    - ssh:
        address: 172.20.4.61
        port: 22
        user: root
      role: worker
```

`ps aux` after:
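The flags a node is actually running with can be read from the process list. A small sketch of that check; the `ps` output line below is simulated (an assumption) so the snippet runs anywhere, but on a real controller you would grep the live process list instead:

```shell
# On a real controller you would run:
#   ps aux | grep '[k]0s controller'
# Simulated ps output line (assumption), to show what to extract:
ps_line='root 1234 0.5 2.1 /usr/local/bin/k0s controller --disable-components metrics-server'

# Pull out the --disable-components flags the process was started with
echo "$ps_line" | grep -o -- '--disable-components [a-z-]*'
```

If a flag you added to `k0sctl.yaml` is missing here after an apply, the service was not regenerated.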
Hmm, OK, that's weird 🤔 I'll investigate this tomorrow.
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: cluster
spec:
  hosts:
    - role: single
      installFlags:
        - --debug
        - --disable-components metrics-server
        - --disable-components konnectivity-server
      ssh:
        address: 127.0.0.1
        port: 9022
```

Seems to work fine for me 🤔
Mh... I've added Controller:
Here is the full log using trace enabled:
Ah yes, are we talking about changing the install flags for an existing cluster? That doesn't happen.
Yeah, this was my initial question. I was expecting that k0sctl can change that :( |
K0sctl uses
k0sproject/k0s#1458 is closed! 🥳 Presumably
Yes, but that's not all; it also needs to determine when it is needed and whether the version being installed supports it.
For the time being: What would be the simplest/quickest way to achieve this? Drain the node -> k0s stop -> delete systemd service -> k0sctl apply? |
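The manual sequence proposed above could look roughly like this. A dry-run sketch that only prints the commands instead of executing them; the node name, host address, and the use of `k0s reset` to remove the service are assumptions, not confirmed by the thread:

```shell
set -eu

# Dry-run helper: print each step instead of executing it
run() { echo "+ $*"; }

run kubectl drain worker-1 --ignore-daemonsets   # drain the node (hypothetical name)
run ssh root@172.20.4.59 k0s stop                # stop the k0s service on the host
run ssh root@172.20.4.59 k0s reset               # remove the service and state (assumption)
run k0sctl apply --config k0sctl.yaml            # re-apply with the new installFlags
```

Dropping the `run` wrapper would execute the steps for real; against a production node you would also want `--delete-emptydir-data` considerations on the drain.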
Didn't use it for quite some time, but there's the |
k0sctl won't run |
How's that determined? Would |
It just runs |
This worked out fine for what I wanted to do.
Same issue with the latest version of k0sctl. I have to say, this is very annoying, especially when you are testing a technology stack and need to modify the cluster configuration multiple times.
I think we should recreate the systemd files on every upgrade, just like NixOS does; the k0sctl.yaml file is the user's expectation of the current cluster, and the responsibility of ensuring that the k0s version supports the
Slightly related to #722. Modifying the install flags (which are actually run flags; they don't control how the installation happens, only what is put into ExecStart) requires a restart.
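Since the flags only live in the unit's `ExecStart` line, drift between `k0sctl.yaml` and the installed service can be detected by comparing the two. A sketch that simulates the unit file content locally; the service path, name, and flag values are assumptions:

```shell
# Simulated content of /etc/systemd/system/k0scontroller.service (assumption);
# on a real host you would use: systemctl cat k0scontroller
unit='[Service]
ExecStart=/usr/local/bin/k0s controller --disable-components metrics-server'

# Flags we now want, per the updated k0sctl.yaml (assumption)
want='--disable-components metrics-server --disable-components konnectivity-server'

# Extract the flags actually baked into ExecStart
have=$(printf '%s\n' "$unit" | sed -n 's/^ExecStart=[^ ]* controller //p')

if [ "$have" = "$want" ]; then
  echo "in sync"
else
  echo "drift detected: service has [$have]"
fi
```

A check like this is essentially what k0sctl would have to do before deciding to rewrite the unit and restart the service.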
Maybe what k0sctl could do is to always run |
Btw I think this also applies to any env-variables. |
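Indeed; environment variables end up in the generated unit alongside `ExecStart`, so changing them in `k0sctl.yaml` has the same problem. A hypothetical sketch of what the generated service file might contain (the variable and its value are assumptions for illustration):

```
[Service]
Environment="HTTP_PROXY=http://proxy.example:3128"
ExecStart=/usr/local/bin/k0s controller --disable-components metrics-server
```

Updating the env-variables in the config without regenerating this unit leaves the running service on the old values.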
Can't just run `install --force` always; that phase involves the token generation stuff, so it needs a somewhat more refined approach.
Hi,

just did a clean installation of an HA cluster (3 controllers, 3 nodes) using k0sctl v0.12.3. Everything went smoothly until I noticed that I had to add `--disable-components konnectivity-server` to my `installFlags`.

Changed from

to

Running `k0sctl apply --config k0sctl.yaml` after modifying the `k0sctl.yaml` did nothing. Is this expected? The cluster is still running with only a single argument instead of two: