[BUG] - Updating a pre-existing service with a different port fails #124

Open

ryan109 opened this issue May 30, 2022 · 0 comments

Labels
bug Something isn't working

Comments

ryan109 (Contributor) commented May 30, 2022

Describe the bug
If we create a service and expose port 80, but then change the port to 8080, we fail to update the object.

If we use a base config like this

ports:
  - 80

and update it to this

ports:
  - 8080

it fails, as we use the k8s patch operation. As per the documentation:

Patch: Patch will apply a change to a specific field. How the change is merged is defined per field. Lists may either be replaced or merged. Merging lists will not preserve ordering.
Patches will never cause optimistic locking failures, and the last write will win. Patches are recommended when the full state is not read before an update, or when failing on optimistic locking is undesirable. When patching complex types, arrays and maps, how the patch is applied is defined on a per-field basis and may either replace the field's current value, or merge the contents into the current value.
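Concretely, the Service API defines spec.ports with a merge patch strategy keyed on port, so a patch carrying the new port adds to the list rather than replacing it. A rough sketch of the merge (the service name and selector here are made up):

# Existing object on the server
apiVersion: v1
kind: Service
metadata:
  name: example-svc   # hypothetical
spec:
  selector:
    app: example      # hypothetical
  ports:
    - port: 80

# Strategic merge patch sent for the update
spec:
  ports:
    - port: 8080

# Merged result: entries are matched on the "port" key, so both
# survive, and a multi-port service requires a name on every port
spec:
  ports:
    - port: 80
    - port: 8080

This also explains the 422 further down: after the merge there are multiple unnamed entries in the list, which fails validation.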

We could potentially use the replace operation, but that comes with its own caveats:

Note: The ResourceStatus will be ignored by the system and will not be updated. To update the status, one must invoke the specific status update operation.
Note: Replacing a resource object may not result immediately in changes being propagated to downstream objects. For instance replacing a ConfigMap or Secret resource will not result in all Pods seeing the changes unless the Pods are restarted out of band.
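Replace would avoid the leftover entry because it overwrites the object wholesale, but it needs the full manifest. A sketch, assuming the service lives in a hypothetical example-svc.yaml:

# kubectl replace -f example-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc   # hypothetical
spec:
  # caveat: immutable fields like spec.clusterIP may need to match the live object
  selector:
    app: example
  ports:
    - port: 8080      # replace overwrites spec.ports wholesale, so 80 is dropped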

A "hack" is to change your config to provide a name for your port, then create a second port with a name, as the failure in the deploy comes from not being able to patch the resource as we have multiple in a list with no name?

[{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[0].name"},{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[1].name"},{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[2].name"}]},"code":422}

So we could run the deployment on a config that looks like:

ports:
  - name: http
    port: 80
  - name: proxy
    port: 8080

Then we could remove the second port object and run the deployment again, and it would continue working. However, this uncovers a further bug: the deployment goes through, but both ports are still exposed in the cluster. This comes down to the nature of using a patch operation instead of replace: removing an entry from the config does not remove it from the merged list on the server.
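Under that assumption, the second deploy looks clean but leaves the old entry behind, roughly:

# Config submitted on the second deploy
ports:
  - name: http
    port: 80

# Live spec.ports after the patch: the proxy entry is never removed,
# since a merge-keyed patch only adds or updates matching entries
ports:
  - name: http
    port: 80
  - name: proxy
    port: 8080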


ryan109 added the bug label May 30, 2022