flux reconcile on helmrelease leaves erroneously modified object in bad state #267
Comments
That's a Kubernetes thing. It has to do with how patches work, which is not very friendly to humans or other non-automated clients: a patch consisting of a partial manifest does not contain enough information to express anything other than "put this here" or "update the value of this key". Rather than requiring client processes (a category that includes the non-automated/human set) to submit complete manifests just to make a much smaller set of changes, the assumption is that manifest keys are left alone unless they are explicitly addressed in the update. If you are one of the rare individuals who can actually hand-generate a functioning patch (or you are a kube/manifest patch-generator routine), I'm sure this is a useful and welcome feature.

If you want to do away with a key (command, in this case), you need to leave it defined but set it to whatever passes for that key/value type's empty value. Note another Kubernetes pointy bit: you can't store an empty container object (list, map, str) in a manifest.

What's even groovier: in cases where I've done exactly what you describe (usually in the name of troubleshooting), I've found it quite important to revert any manual changes to any helm-deployed object before resuming the HelmRelease object, to avoid it becoming wedged. The only semi-reliable method I have found for escape is:
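As a concrete illustration of the "set it to the empty value" dance described above: with a JSON merge patch, a key set to null is deleted, while keys the patch doesn't mention are left untouched. A minimal sketch, where the resource and label names are hypothetical:

```shell
# Hypothetical example: remove a label from a live object with a JSON merge
# patch. Per RFC 7386 merge-patch semantics, a key set to null in the patch
# deletes that key from the stored manifest; keys not mentioned in the patch
# are left exactly as they were.
kubectl patch deployment my-app --type=merge \
  -p '{"metadata":{"labels":{"obsolete-label":null}}}'
```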
I'm still on 0.11. I think I spotted something in a changelog or PR indicating that suspended objects will no longer be affected by reconciliation requests; I'm not sure what the status of that feature is, so in the future, YMMV :)

AND (big breath), it is a Helm thing for a similar reason: Helm keeps its manifests' state in the kube object store, and it becomes wedged when it sees anything that it didn't put in place itself, similar to an OS package manager... except the package manager usually doesn't soil its trousers when it comes across a configuration file a (human) operator may have changed. flux/HelmRelease does have the option of specifying a
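The suspend/revert/resume escape hatch mentioned above might look roughly like this; the release and namespace names are placeholders, and the revert step depends on what was manually changed:

```shell
# Sketch of the escape procedure: suspend first so helm-controller stops
# acting on the release while you clean up.
flux suspend helmrelease my-release -n my-namespace

# Revert any manual edits to helm-deployed objects here, e.g. by
# re-applying the chart-rendered manifest for the object you touched,
# so Helm's stored state matches the cluster again.

# Resume only once the manual drift is gone, to avoid wedging the release.
flux resume helmrelease my-release -n my-namespace
```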
hmm I'm not so sure.
I am seeing this as well on flux 0.20; it seems like diffing with HelmReleases is broken. It also seems that if I delete a resource manually, flux doesn't pick up that it should put it back, and won't until the chart version is modified or the git repo is updated when using reconcileStrategy: Revision.
I experience the same with deleted objects from a HelmRelease, let's say an Ingress.
Same here. |
@zaggash @monotek This is expected behavior. To understand why, you must understand how Helm resolves drift (and what obstacles it has placed in its own way with respect to that story).

Tl;dr: helm-controller doesn't reconcile drift in the cluster except when it is performing an actual upgrade, and drift in the cluster currently can't trigger an upgrade, because to helm-controller it is perfectly indistinguishable from side effects that might have been the intended result of a post-render or lifecycle hook behavior in the chart.

I am not certain these reports are the same as the original report, but the behavior both @zaggash and @monotek describe is already explained by #186, as I'll elaborate further below. (This is a complicated story, so save yourself and look the other way, unless the tl;dr above has made you hungry to understand what the problem is and why this issue can't easily be resolved.)

[snip] I moved this part of the comment to the #186 discussion because it was only distracting from the OP topic here. I am virtually certain it does not have anything to do with the original issue; sorry for contributing to the noise, but this is a hard issue to explain, and I'm sure it comes up all the time. Though I am afraid I have not quite understood the cronjob issue that started this thread, I'm doing my best to steer the discussion back on topic and find out if any fix is possible (or if we might have already fixed it somehow without noticing).
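Since helm-controller won't surface this drift on its own, one way to see it yourself is to diff Helm's stored manifest against the live cluster state. A sketch, assuming the helm and kubectl CLIs are available and using a placeholder release name:

```shell
# Render the manifest Helm believes it installed for this release, then let
# kubectl diff it against the live objects in the cluster. Any non-empty
# diff is exactly the drift that helm-controller does not reconcile.
# (kubectl diff exits non-zero when differences are found.)
helm get manifest my-release -n my-namespace | kubectl diff -f -
```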
Ok, I understand that continuously upgrading the Helm chart would be a mess secrets-wise, but it would be nice if one could force such an upgrade via the "flux reconcile helmrelease" command somehow, so that, for example, deleted parts of a chart would be recreated without the need to delete the chart completely first.
Any change that adjusts the inputs or spec of the chart can trigger an upgrade without deleting anything; however, the point is well taken that it would be useful for the API client to have a direct force-upgrade button, whatever magic happens behind the curtain. Moving my thoughts over to #186, where they are more relevant to the discussion; this issue report is about a different topic.
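One way to exercise the point above, that any change to the chart's inputs triggers an upgrade, is to patch a throwaway value into the HelmRelease spec. This is a sketch, not an official mechanism; the "redeployedAt" key is entirely made up, and any changed value under spec.values would do:

```shell
# Bump a dummy value in spec.values so helm-controller sees changed inputs
# and performs a real upgrade, which re-creates any deleted chart resources.
# "redeployedAt" is a hypothetical key the chart simply ignores.
kubectl patch helmrelease my-release -n my-namespace --type=merge \
  -p "{\"spec\":{\"values\":{\"redeployedAt\":\"$(date +%s)\"}}}"
```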
Describe the bug
A HelmRelease contains a CronJob as part of a release. If we edit that CronJob in Kubernetes and then run

flux reconcile -n <namespace> helmrelease <hr name> --with-source

the edited CronJob is not replaced with the correct CronJob.

To Reproduce
Steps to reproduce the behavior:
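Based on the bug description above, a minimal reproduction might look like the following; the object and namespace names are placeholders:

```shell
# 1. Manually edit a CronJob that a HelmRelease manages,
#    e.g. change its schedule or command.
kubectl edit cronjob my-cronjob -n my-namespace

# 2. Ask flux to reconcile the HelmRelease, including its source.
flux reconcile -n my-namespace helmrelease my-release --with-source

# 3. Inspect the CronJob: the manual edit is still in place rather than
#    having been restored to the chart-rendered manifest.
kubectl get cronjob my-cronjob -n my-namespace -o yaml
```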
Expected behavior
The invalid CronJob should have been replaced with Helm's manifest.
Additional context
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-14T05:14:17Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.12-eks-7684af", GitCommit:"7684af4ac41370dd109ac13817023cb8063e3d45", GitTreeState:"clean", BuildDate:"2020-10-20T22:57:40Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}