Helm resource should be resilient to manual changes within a cluster #299

Closed
matrus2 opened this issue Jul 20, 2021 · 1 comment

Comments

@matrus2

matrus2 commented Jul 20, 2021

Current state:
When resources are deployed via a HelmRelease, any change applied outside of the Git repository is not reverted by reconciliation.

Steps to reproduce:

  1. Execute flux bootstrap on a completely new cluster and point it to a fork of flux2-kustomize-helm-example.
  2. Update apps/base/podinfo/release.yaml with the following values:
    hpa:
      enabled: true
      maxReplicas: 6
  3. Wait for the reconciliation and ensure there are six pods running in the podinfo namespace.
  4. Execute kubectl patch hpa podinfo --patch '{"spec":{"maxReplicas":2}}' on the cluster. After a while there should be two pods left.
  5. Wait for the next reconciliation to happen.
  6. There are still only two pods running (a verification sketch follows this list).
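A minimal sketch of steps 4-6 as shell commands, assuming the release and its HPA both live in the podinfo namespace (as in the example repository):

    # simulate an out-of-band change on the cluster
    kubectl -n podinfo patch hpa podinfo --patch '{"spec":{"maxReplicas":2}}'

    # trigger a reconciliation of the release
    flux reconcile helmrelease podinfo -n podinfo

    # the manual change survives: maxReplicas is still 2
    kubectl -n podinfo get hpa podinfo -o jsonpath='{.spec.maxReplicas}'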

Expected state:

After reconciliation the cluster state should be brought back to the state defined in the HelmRelease, as it is when using plain Kustomize with Flux.


Is there any way we can overcome this? Maybe I missed something in the docs. We see a lot of benefits in using Helm charts rather than Kustomize, but the current behaviour forces us to either ensure nobody touches resources within the clusters (meaning no access at all) or generate manifests for Flux from Helm charts via the helm template command. The latter solution makes the deployment process much more complex. What are your solutions here?
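For reference, a rough sketch of the helm template workaround mentioned above: the rendered manifests are committed to Git and applied by kustomize-controller, which does revert drift. The chart reference, values file, and output path below are only placeholders:

    # render the chart to plain manifests and commit them to the Git repository
    helm template podinfo podinfo/podinfo \
      --namespace podinfo \
      --values apps/base/podinfo/values.yaml \
      > apps/base/podinfo/rendered.yaml

    git add apps/base/podinfo/rendered.yaml
    git commit -m "Track rendered podinfo manifests for kustomize-controller"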

@kingdonb
Member

kingdonb commented Jul 20, 2021

We are aware of this issue, and it is tricky to resolve. The main difficulty, as I understand it, is that Helm itself is not built for drift detection: it provides features like Helm hooks, which can execute jobs that alter the manifests after the HelmRelease is first reconciled to the cluster, making it very difficult to account for the "final state." Generators can also create dynamic outputs (like a generated secret) that would always show up as drift if Helm Controller were to check for diffs periodically.

For this reason, Helm Controller does not periodically reconcile. It watches for changes to the inputs and only upgrades when a change appears in one of the upstream resources (usually the HelmRelease itself, the repository, the versioned chart template, Values, ValuesFrom, and so on).
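To illustrate which inputs those are, here is a simplified HelmRelease sketch (field values are only placeholders, modelled on the example repository); an upgrade is triggered when one of these fields or referenced sources changes, not when the resulting cluster objects drift:

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: podinfo
      namespace: podinfo
    spec:
      interval: 5m                 # reconciliation interval; checks inputs, not cluster drift
      chart:
        spec:
          chart: podinfo           # chart name in the referenced HelmRepository
          version: ">=6.0.0"       # a new matching chart version triggers an upgrade
          sourceRef:
            kind: HelmRepository
            name: podinfo
      values:                      # changed values trigger an upgrade
        hpa:
          enabled: true
          maxReplicas: 6
      valuesFrom:                  # referenced ConfigMaps/Secrets are also inputs
        - kind: ConfigMap
          name: podinfo-values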

I'm closing this as a duplicate; see the issue referenced below, where much discussion has already taken place.

Duplicate of #186.
