Remove and reset nodes during apply by setting reset: true #417
Conversation
I think this is a good alternative to #396
Wanted to ask about opinions between |
Maybe connect phase should ignore errors from nodes with |
That is actually a great idea, yes.
Maybe |
Hmm, actually, I think the hosts still need to be drained/removed from kube even though they can't be connected for doing the |
I guess ignoring errors and only warning about them, like it does currently, is good then.
This would also remove some redundant code. But does it make sense to even look at the |
Yeah but you need to ignore that error in a lot of places :( But I think it's pretty much necessary, otherwise you're going to have problems getting rid of hosts that have actually gone missing or something. Not sure if not being able to connect will make the draining + removing impossible, because the hostname isn't known. Getting rid of a dead worker node is addressed in #396
Yeah, not really. Not worth the trouble.
I already ignore errors during draining and all commands. So draining and all SSH-based commands would fail, but etcd leave and also kube delete node would still be executed, which would certainly leave the cluster in a better state than if those didn't run.
So I'll just make the reset phase set the reset flag to true for all nodes, and at the end explicitly delete the leader node by just doing what the reset phase currently does.
Sounds good to me
The connection library also leaves behind an empty dir after service cleanup, but I never got around to fixing that: k0sproject/rig#42
Maybe having a flag like |
Better not add too much complexity. Just decide to clean or not clean. I think leaving behind stuff that affects/breaks a new install is unacceptable, but leaving behind empty dirs and such is just a bit unsanitary but doesn't do any real harm. When you uninstall something like |
I think if this PR is merged this note can be removed from the README: |
Yes, also if you're resetting a host you're probably gonna reinstall or destroy it if necessary, which takes care of leftover files.
Closes #396
Fixes #383
This adds the ability to uninstall single or multiple nodes from the cluster.
This is achieved by adding a `reset` field to the host inside of `k0sctl.yaml`. By default this value is set to `false` and can be omitted.

Controllers will be removed from Kubernetes/etcd and reset one by one. This also applies to Controller+Worker nodes.
Workers will be removed from Kubernetes and reset all at once.
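Based on the description above, marking hosts for removal could look roughly like this in `k0sctl.yaml`. This is a sketch: the `reset: true` field is what this PR adds, while the surrounding host fields (roles, SSH details, addresses) are the usual k0sctl host spec and will differ in a real config.

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  hosts:
    # reset defaults to false and can be omitted on hosts to keep.
    - role: controller
      ssh:
        address: 10.0.0.1
        user: root
    # This worker will be drained, removed from Kubernetes and reset
    # on the next apply.
    - role: worker
      reset: true
      ssh:
        address: 10.0.0.2
        user: root
```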