Current state:
When resources are deployed via HelmRelease, any change applied outside of the git repository won't be reverted by reconciliation.
Update apps/base/podinfo/release.yaml with the following values:
```yaml
hpa:
  enabled: true
  maxReplicas: 6
```
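For reference, a minimal sketch of where these values sit in the full HelmRelease (field names follow the Flux `helm.toolkit.fluxcd.io/v2beta1` API; the chart and source names are assumed from the example repo layout):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo   # assumed name of the HelmRepository source
  values:
    hpa:
      enabled: true
      maxReplicas: 6
```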
Wait for the reconciliation and ensure there are six pods running in the podinfo namespace.
Execute the following command on the cluster: `kubectl patch hpa podinfo --patch '{"spec":{"maxReplicas":2}}'`. After a while there should be two pods left.
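A quick way to watch the replica count converge (the label selector here is an assumption based on the podinfo chart's standard labels):

```sh
kubectl -n podinfo get hpa podinfo
kubectl -n podinfo get pods -l app.kubernetes.io/name=podinfo
```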
Wait for the reconciliation to happen.
There are still only two pods running.
Expected state:
After reconciliation, the state should be brought back to the state defined by the HelmRelease, as it is when using pure Kustomize with Flux.
Is there any way we can overcome this? Maybe I missed something in the docs. We see a lot of benefits in using Helm charts rather than Kustomize, but the current approach forces us to either ensure nobody touches resources within the clusters (meaning no access at all) or generate manifests for Flux from Helm charts via the `helm template` command, as sketched below. The latter solution makes the deployment process much more complex. What are your solutions here?
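For illustration, a minimal sketch of that `helm template` workaround (the chart reference, values file, and output path are hypothetical):

```sh
# Render the chart to static manifests committed to git, so that
# kustomize-controller (which does revert drift) applies them instead.
helm template podinfo podinfo/podinfo \
  --namespace podinfo \
  --values apps/base/podinfo/values.yaml \
  > apps/base/podinfo/manifests.yaml
```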
We are aware of this issue and it is tricky to resolve. The main difficulty, as I understand it, is that Helm itself is not built for drift detection: it provides features like Helm hooks, which can execute jobs that may alter the manifests after the HelmRelease is first reconciled to the cluster, making it very difficult to account for the "final state." Generators can also create dynamic outputs (like a generated secret) that would always show up as drift if Helm Controller were to periodically check for diffs.
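To make that concrete, two hypothetical chart snippets of the kind described above: a post-install hook Job that mutates live objects after the release is applied, and a secret using Helm's `randAlphaNum` template function, which produces a different value on every render, so a periodic diff would always report drift:

```yaml
# A Helm hook: runs after install and may alter cluster state,
# so the rendered manifests no longer describe the "final state".
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-tweak   # hypothetical name
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tweak
          image: bitnami/kubectl
          command: ["kubectl", "-n", "podinfo", "annotate", "deploy", "podinfo", "tweaked=true"]
---
# A generated secret: re-rendering yields a new token every time.
apiVersion: v1
kind: Secret
metadata:
  name: app-token   # hypothetical name
stringData:
  token: {{ randAlphaNum 32 | quote }}
```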
For this reason, Helm Controller does not reconcile periodically. It watches for changes to its inputs and only upgrades when a change appears in one of the upstream resources (usually the HelmRelease, repository, versioned chart template, Values, ValuesFrom, and so on).
I'm closing this as a duplicate; see the issue referenced below, where much discussion has already taken place.