vpa-recommender: Make max allowed recommendation configurable #7147
Comments
/area vertical-pod-autoscaler
Thanks for submitting this! This makes sense and is nice and simple; as you mention, it's very analogous to the existing pod-level minimum flags. One possible alternative that we didn't talk about during the SIG meeting was to derive the maximum dynamically from the nodes in the cluster. Regarding this proposal, as we talked about, the downside is that you need to manually set this flag, and the right value could change over the lifetime of your cluster as different-sized nodes are adopted. If you were to set this "max" value dynamically, you would want to adjust it as the set of nodes changes.
However, doing this statically cluster-wide is a great first step and seems totally reasonable to me!
That would be very helpful. Usually, the configuration of the cluster and of its VPA installation is done by the same (human or technical) operators. It would be more difficult to change all VPA resources, especially if they are controlled by third-party (technical) operators that we cannot influence in all aspects. The alternative, if this doesn't become a global VPA option, is to create a mutating webhook, but then everybody would have to do that. If VPA offered this feature out of the box, that would be great. After all, maxAllowed seems to have predominantly this use case: to make the recommender not recommend anything that wouldn't fit. The definition of the cluster and its worker pools is usually handled by different people than the developers of the workloads and their VPA resources - at least that's a pattern we see.

As for dynamically following the largest available nodes, I am doubtful that too much intelligence will pay off. In our case it wouldn't, because we (a) use worker-pool scale-from-zero, i.e. VPA cannot see the possible available machine sizes at all if it just checks the nodes, and (b) it is challenging to understand which nodes are really suitable (taints/tolerations, affinities/anti-affinities, etc.). The last point could be overcome, the first one never. Therefore, a static configuration for VPA's global maxAllowed sounds simple and reasonable - more intelligence may help, but isn't guaranteed to always help.
Here’s how I see the options for supporting this:
Related issue: #6487
After spending some time on this, I found it is not straightforward to add Pod-level flags for the max allowed cpu and memory recommendations; I had already raised this concern in the initial issue description (see the PS there).
As end users of VPA, it would be great to be able to specify Pod-level maximum allowed values, because scheduling is based on Pod resource requests; that way, end users can make sure that VPA won't recommend values that make Pods unschedulable. Since Pod-level flags are not straightforward to add, however, I would propose to introduce container-level max allowed recommendation flags. Yes, with this approach an end user can theoretically still end up with unschedulable Pods due to VPA recommendations above the Node's allocatable. Example: the cluster Node's allocatable is 7Gi, while a Pod has 2 containers and both containers have a recommendation of 4Gi each; the Pod's memory request will then be 8Gi and exceed the Node's allocatable of 7Gi. However, I think the container-level max allowed proposal is good enough in practice.
Concretely, my proposal is to add such flags at the container level.
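Purely for illustration, mirroring the existing pod-level minimum flags, the new flags might look like the sketch below; the flag names, defaults, and help texts here are placeholders, not the actual API (the real names were settled in the PR referenced later in this thread):

```go
package logic

import "flag"

// Hypothetical container-level equivalents of the existing pod-level
// minimum flags. Names, defaults, and help texts are placeholders
// for illustration only.
var (
	containerMaxAllowedCPU = flag.Float64("container-recommendation-max-allowed-cpu", 0,
		`Maximum CPU recommendation per container, in cores (0 means no cap); placeholder name`)
	containerMaxAllowedMemoryMb = flag.Float64("container-recommendation-max-allowed-memory-mb", 0,
		`Maximum memory recommendation per container, in MB (0 means no cap); placeholder name`)
)
```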
I would even propose to deprecate the existing Pod-level flags for min allowed recommendations in favor of container-level equivalents.
Let me know what you think.
I like the proposal, but I worry about the multi-container case described above, where container-level caps can still produce an unschedulable Pod.
I guess if one were to run a dedicated recommender for various Pods, it could work.
I am wondering if the pod-level resources KEP could actually help out here.
From my understanding, pod-level resource limits are optional, so we can't depend on them being enforced. However, we should account for them to ensure the total of all containers' recommendations stays within the pod's defined limits. Here's an approach we could consider - let me know your thoughts: assume a pod has a memory limit of 1Gi, for example, and contains 3 containers.
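A minimal sketch of that idea, assuming it means scaling each container's recommendation down proportionally so the sum fits the pod-level limit (the function and names here are illustrative, not the VPA recommender's actual API):

```go
package main

import "fmt"

// capToPodLimit scales container memory recommendations down proportionally
// so that their sum does not exceed the pod-level limit. Illustrative only;
// this is not the VPA recommender's actual API.
func capToPodLimit(recommendations []int64, podLimitBytes int64) []int64 {
	var total int64
	for _, r := range recommendations {
		total += r
	}
	if total <= podLimitBytes {
		return recommendations // already fits, nothing to do
	}
	capped := make([]int64, len(recommendations))
	for i, r := range recommendations {
		// Each container keeps a share of the pod limit proportional
		// to its original recommendation.
		capped[i] = r * podLimitBytes / total
	}
	return capped
}

func main() {
	mi := int64(1 << 20)
	// Pod-level memory limit of 1Gi; three containers recommended
	// 512Mi, 512Mi and 256Mi (sum 1.25Gi, which exceeds the limit).
	recs := []int64{512 * mi, 512 * mi, 256 * mi}
	fmt.Println(capToPodLimit(recs, 1024*mi)) // ~410Mi, ~410Mi, ~205Mi (in bytes)
}
```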
@raywainman @adrianmoisey, I created #7560. I would be happy to receive feedback! :)
Which component are you using?:
vertical-pod-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
Currently, it is possible to configure the max allowed recommendation per VPA resource using the field `.spec.resourcePolicy.containerPolicies[].maxAllowed`. Setting this field is useful to make sure that the recommendations won't exceed the largest Node's allocatable. Otherwise, due to a too-high recommendation, the Pod will be unschedulable, which means downtime for the corresponding Pod. It seems that GKE's VPA already offers such a feature on top of the open-source VPA.
We would like to make vpa-recommender aware of the maximum node size and prevent it from providing recommendations that would make the corresponding Pods unschedulable.
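For reference, a minimal VPA manifest using the existing per-resource field might look like this (resource names are illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app          # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        maxAllowed:     # recommendations are capped at these values
          cpu: "4"
          memory: 8Gi
```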
Describe the solution you'd like.:
Does it make sense to provide flags in vpa-recommender for the max recommendation it can make for a Pod? I see that there are already similar flags for the min allowed cpu and memory:
autoscaler/vertical-pod-autoscaler/pkg/recommender/logic/recommender.go, lines 29-30 at 5aaeb83
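For context, the pod-level minimum flags defined there look roughly like this (reconstructed from the VPA source; the exact text at commit 5aaeb83 may differ):

```go
var (
	podMinCPUMillicores = flag.Float64("pod-recommendation-min-cpu-millicores", 25, `Minimum CPU recommendation for a pod`)
	podMinMemoryMb      = flag.Float64("pod-recommendation-min-memory-mb", 250, `Minimum memory recommendation for a pod`)
)
```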
PS: However, for the min allowed recommendation the flags seem to be per Pod, and the minimums are distributed proportionally based on the Pod's container count. See
autoscaler/vertical-pod-autoscaler/pkg/recommender/logic/recommender.go, lines 70-74 at 5aaeb83
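The distribution around those lines looks roughly like this (again reconstructed from the VPA source, so the code at 5aaeb83 may differ slightly):

```go
// The Pod-level minimum is split evenly across the Pod's containers:
// with N containers, each container's floor is 1/N of the Pod minimum.
fraction := 1.0 / float64(len(containerNameToAggregateStateMap))
minResources := model.Resources{
	model.ResourceCPU:    model.ScaleResource(model.CPUAmountFromCores(*podMinCPUMillicores*0.001), fraction),
	model.ResourceMemory: model.ScaleResource(model.MemoryAmountFromBytes(*podMinMemoryMb*1024*1024), fraction),
}
```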
I am not sure the same approach would work well for Pods that have multiple containers under VPA. For example, for a Pod with 2 containers, each container could effectively be capped at 50% of the configured max allowed value.
Describe any alternative solutions you've considered.:
Other folks may share the way they are tackling this issue.
One alternative we are trying to avoid is to have a mutating webhook on VPA that sets maxAllowed per VPA resource.
Additional context.:
This topic was also brought up in the community meeting on 2024-06-17.