VPA: prune stale container aggregates, split recommendations over true number of containers #6745
Conversation
Hi @jkyros. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
// TODO(jkyros): This only removes the container state from the VPA's aggregate states, there
// is still a reference to them in feeder.clusterState.aggregateStateMap, and those get
// garbage collected eventually by the rate limited aggregate garbage collector later.
// Maybe we should clean those up here too since we know which ones are stale?
Is it a lot of extra work to do that? Do you see any risks doing it here?
No, I don't think it's a lot of extra work. It should be reasonably cheap to clean them up here, since it's just deletions from the other maps if the keys exist; I just didn't know all the history.
It seemed possible, at least, that we were intentionally waiting to clean up the aggregates so that, if there was an unexpected hiccup, we didn't immediately blow away all that aggregate history we worked so hard to get? (Like maybe someone oopses, deletes their deployment, then puts it back? Right now we don't have to start over -- the pods come back in, find their container aggregates, and resume? But if I clean them up here, we have to start over...)
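For what it's worth, the cleanup the TODO is talking about is roughly just map deletions. A minimal, self-contained sketch, assuming simplified stand-in types rather than the real model.ClusterState fields:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the recommender's aggregate maps;
// the real ClusterState keys these differently.
type aggregateKey struct {
	Namespace, ContainerName string
}

type clusterState struct {
	aggregateStateMap        map[aggregateKey]struct{}
	initialAggregateStateMap map[aggregateKey]struct{}
}

// pruneFromClusterMaps deletes the stale keys from the cluster-level maps too,
// instead of leaving them for the rate-limited garbage collector. Deleting a
// missing key from a Go map is a no-op, so no existence check is needed.
func (c *clusterState) pruneFromClusterMaps(stale []aggregateKey) {
	for _, key := range stale {
		delete(c.aggregateStateMap, key)
		delete(c.initialAggregateStateMap, key)
	}
}

func main() {
	cs := &clusterState{
		aggregateStateMap: map[aggregateKey]struct{}{
			{Namespace: "default", ContainerName: "old-name"}: {},
		},
		initialAggregateStateMap: map[aggregateKey]struct{}{},
	}
	cs.pruneFromClusterMaps([]aggregateKey{{Namespace: "default", ContainerName: "old-name"}})
	fmt.Println(len(cs.aggregateStateMap)) // 0
}
```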
// the correct number and not just the number of aggregates that have *ever* been present. (We don't want minimum resources
// to erroneously shrink, either)
func (cluster *ClusterState) setVPAContainersPerPod(pod *PodState) {
	for _, vpa := range cluster.Vpas {
I'm wondering if there is already a place where this logic could go so we don't have to loop over all VPAs for every pod again here.
In large clusters with a VPA to Pod ratio that's closer to 1 this could be a little wasteful.
Hmm, yeah, I struggled with finding a less expensive way without making too much of a mess. Unless I'm missing something (and I might be) we don't seem to have a VPA <--> Pod map -- probably because we didn't need one until now? At the very least I think I should gate this to only run if the number of containers in the pod is > 1.
Like, I think our options are:
- update the VPA as the pods roll through (which requires me to find the VPA for each pod like I did here), or
- count the containers as we load the VPAs (but we load the VPAs before we load the pods, so we'd have to go through the pods again, so that doesn't help us), or
- have the VPA actually track the pods it's managing, something like this: jkyros@6ddc208 (could also just be an array of PodIDs and we could look up the state so we could save the memory cost of the PodState pointer, but you know what I mean)

I put it where I did (option 1) because at least LoadPods() was already looping through all the pods, so we could freeload off the "outer" pod loop, and I figured we didn't want to spend the memory on option 3. If we'd entertain option 3 and are okay with the memory usage, I can totally do that?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/ok-to-test
I want to see if I can help get this merged.
Hi everyone, so John has taken a hiatus and has left me with this PR. After catching up, I guess we are still waiting for those conversations to resolve on which way we want to go with those design decisions. The two commits I just put up are an improvement on the existing implementation (assuming we will go with that; we don't have to), and some e2e tests to prove this works.
I think this is a possible scenario prior to this PR merging:
But I think this PR would change that scenario to the following:
This scenario assumes that the CronJob is configured with …
Got it, thanks for the detailed explanation! This does happen, and the recommendations are cleaned up before the next job gets scheduled, assuming the recommendation happens in time and the job schedule is long enough for the recommender to prune the stale recommendations. Thinking about your solution, I think it makes sense to add some sort of timeout feature for recommendations that refreshes when they are used again -- pretty much what you mean by your cleanup feature. Maybe as a per-VPA configuration in the VPA CR spec? This is the current implementation of whether recommendations will get recorded in the object:
So instead, the "matching pods" filtered pods should still include pods that have removed aggregates for a certain period of time, based on a per-VPA configuration, and maybe a global timeout as well (similar to how …). Although thinking about it, there is a small window between the VPA deciding "we should remove this container recommendation, it doesn't exist anymore and we should stop splitting recommendations over it because it was a rename" and "we shouldn't remove this container recommendation because its targetRef's topmost controller is a CronJob, so we should not remove the recommendation as jobs will quickly use it again." I think there would need to be a lot of care in setting this staleness timeout feature, and users would have to know exactly how long they want the recommendation to linger, depending on whether it's a CronJob or a regular workload, for them to scale as they intended.
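To make the timeout idea concrete, the check could boil down to something like this (a sketch only; the names and signature are hypothetical, not the eventual implementation):

```go
package main

import (
	"fmt"
	"time"
)

// isStale reports whether an aggregate whose container no longer exists has
// been unused for longer than the configured grace period; a per-VPA value
// (if set) overrides a global default. Names are illustrative.
func isStale(lastUpdate time.Time, perVPAGracePeriod *time.Duration, globalGracePeriod time.Duration, now time.Time) bool {
	gracePeriod := globalGracePeriod
	if perVPAGracePeriod != nil {
		gracePeriod = *perVPAGracePeriod
	}
	return now.Sub(lastUpdate) > gracePeriod
}

func main() {
	now := time.Now()
	lastUsedByJob := now.Add(-30 * time.Minute)
	hourly := time.Hour
	// A CronJob on an hourly schedule with a one-hour grace period: the
	// recommendation from the previous run is still considered fresh.
	fmt.Println(isStale(lastUsedByJob, &hourly, 10*time.Minute, now)) // false
}
```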
So maybe it is best to just look at whether the topmost controller is a CronJob and decide based on that, to remove this extra complexity?
I was actually wondering about this option too.
In this case, if we rename a container and the deployment still exists, wouldn't there still be stale recommendations for the old container? Maybe the aggregate would get deleted and we wouldn't split resources anymore, but there might just be a stale recommendation in the VPA object that will never get deleted (but maybe that's okay?), if I am understanding this right. That's why I am wondering if we have to specially handle CronJobs, since the containers they create are supposed to be deleted. EDIT: I think the grace period idea might be the way to go, because the VPA would not be able to tell the renaming situation and the CronJob situation apart, and I'm not sure we want to implement the topmost-controller checking in the recommender, since the caches and informers don't exist there (they exist for the updater and admission controller, I think?).
Previously we were dividing the resources per pod by the number of container aggregates, but in a situation where we're doing a rollout and the container names are changing (either a rename, or a removal) we're splitting resources across the wrong number of containers, resulting in smaller values than we should actually have. This collects a count of containers in the model when the pods are loaded, and uses the "high water mark value", so in the event we are doing something like adding a container during a rollout, we favor the pod that has the additional container. There are probably better ways to do this plumbing, but this was my initial attempt, and it does fix the issue.
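The high-water-mark logic this commit describes amounts to roughly the following (a simplified sketch, not the actual diff):

```go
package main

import "fmt"

// Simplified stand-in: each VPA remembers the largest container count seen
// across its matching pods and splits the recommendation over that count,
// rather than over however many aggregates have ever existed.
type vpa struct {
	containersPerPod int
}

// setContainersPerPod keeps a high-water mark, so during a rollout that adds
// a container we favor the pod that has the additional container.
func (v *vpa) setContainersPerPod(containersInPod int) {
	if containersInPod > v.containersPerPod {
		v.containersPerPod = containersInPod
	}
}

func main() {
	v := &vpa{}
	v.setContainersPerPod(1)        // old pod from before the rollout
	v.setContainersPerPod(2)        // new pod with the added container
	fmt.Println(v.containersPerPod) // resources get split over 2, the true container count
}
```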
Previously we were only cleaning checkpoints after something happened to the VPA or the targetRef, and so when a container got renamed the checkpoint would stick around forever. Since we're trying to clean up the aggregates immediately now, we need to force the checkpoint garbage collection to clean up any checkpoints that don't have matching aggregates. If the checkpoints did get loaded back in after a restart, PruneContainers() would take the aggregates back out, but we probably shouldn't leave the checkpoints out there. Signed-off-by: Max Cao <[email protected]>
Previously we were letting the rate limited garbage collector clean up the aggregate states, and that works really well in most cases, but when the list of containers in a pod changes, either due to the removal or rename of a container, the aggregates for the old containers stick around forever and cause problems. To get around this, this marks all existing aggregates/initial aggregates in the list for each VPA as "not under a VPA" every time before we LoadPods(), and then LoadPods() will re-mark the aggregates as "under a VPA" for all the ones that are still there, which lets us easily prune the stale container aggregates that are still marked as "not under a VPA" but are still wrongly in the VPA's list. This does leave the ultimate garbage collection to the rate limited garbage collector, which should be fine, we just needed the stale entries to get removed from the per-VPA lists so they didn't affect VPA behavior.
Signed-off-by: Max Cao <[email protected]>
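The mark-and-sweep flow this commit describes, reduced to a sketch with simplified stand-in types and illustrative names (not the actual model code):

```go
package main

import "fmt"

// Simplified stand-in for a VPA's per-container aggregate state.
type aggregate struct {
	underVPA bool
}

type vpa struct {
	aggregates map[string]*aggregate // keyed by container name
}

// markAllNotUnderVPA runs before LoadPods(): every aggregate is assumed stale
// until a live pod re-marks it.
func (v *vpa) markAllNotUnderVPA() {
	for _, agg := range v.aggregates {
		agg.underVPA = false
	}
}

// markContainerSeen is what LoadPods() effectively does for every container
// that still exists in a matching pod.
func (v *vpa) markContainerSeen(container string) {
	if agg, ok := v.aggregates[container]; ok {
		agg.underVPA = true
	}
}

// pruneStale drops aggregates no live pod re-marked, so they stop influencing
// how recommendations are split; the rate-limited garbage collector still
// does the final cluster-level cleanup later.
func (v *vpa) pruneStale() {
	for name, agg := range v.aggregates {
		if !agg.underVPA {
			delete(v.aggregates, name)
		}
	}
}

func main() {
	v := &vpa{aggregates: map[string]*aggregate{"app": {}, "old-sidecar": {}}}
	v.markAllNotUnderVPA()
	v.markContainerSeen("app") // "old-sidecar" was renamed/removed, so it never gets re-marked
	v.pruneStale()
	fmt.Println(len(v.aggregates)) // 1
}
```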
… containers Signed-off-by: Max Cao <[email protected]>
…pa prunes recommendations from non-existent containers Signed-off-by: Max Cao <[email protected]>
Okay, in b6e435b, I added a pruningGracePeriod field.
The grace period is long enough that aggregates don't get pruned before the next job gets created and refreshes the last update time of the container, so the recommendations still get preserved. If you specify a long grace period or the targetRef is a CronJob, it will be pruned in 24 hours. If you don't specify the field and the targetRef is not a CronJob, then the … IDK how people feel about this, so I didn't write any tests. But using it locally, it seems to work as intended.
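Pulling the defaulting behavior together, it amounts to something like the following sketch (field and constant names are illustrative stand-ins, not the actual API):

```go
package main

import (
	"fmt"
	"time"
)

const (
	// Illustrative defaults, not necessarily the values in the PR.
	defaultCronJobGracePeriod = 24 * time.Hour
	defaultGracePeriod        = 0 // prune as soon as the aggregate goes stale
)

// effectiveGracePeriod picks the per-VPA value when the (hypothetical) spec
// field is set, and otherwise falls back to a longer default for CronJob
// targetRefs so recommendations survive between job runs.
func effectiveGracePeriod(specGracePeriod *time.Duration, targetRefKind string) time.Duration {
	if specGracePeriod != nil {
		return *specGracePeriod
	}
	if targetRefKind == "CronJob" {
		return defaultCronJobGracePeriod
	}
	return defaultGracePeriod
}

func main() {
	fmt.Println(effectiveGracePeriod(nil, "CronJob"))    // 24h0m0s
	fmt.Println(effectiveGracePeriod(nil, "Deployment")) // 0s
	custom := 2 * time.Hour
	fmt.Println(effectiveGracePeriod(&custom, "Deployment")) // 2h0m0s
}
```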
My idea was to check the targetRef to see what it contains, in order to determine if a VPA's recommendation needs cleaning. This will only work on well-known types.
My opinion is that this is an adequate solution. @voelzmo @raywainman @kwiesmueller @omerap12 thoughts on this one?
I'm not sure I understand the use case for allowing users to specify grace periods for stale recommendations. What do you think?
if targetRef != nil && targetRef.Kind == "CronJob" {
	// CronJob is a special case: the containers it creates are usually supposed to be deleted after the job is done.
	// So we set a higher default grace period so that future recommendations for the same workload are not pruned too early.
	// TODO(maxcao13): maybe it makes sense to set the default based on the cron schedule?
IMHO it does make sense.
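If the default were derived from the schedule, it could be as simple as taking the gap between the next two runs. A sketch using the robfig/cron parser; the 2x multiplier is arbitrary, not a proposal for an exact value:

```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

// gracePeriodFromSchedule derives a pruning grace period from a CronJob's
// schedule: a couple of intervals between runs, so the next job comfortably
// refreshes the aggregate before it would be pruned.
func gracePeriodFromSchedule(schedule string, now time.Time) (time.Duration, error) {
	sched, err := cron.ParseStandard(schedule)
	if err != nil {
		return 0, err
	}
	first := sched.Next(now)
	second := sched.Next(first)
	return 2 * second.Sub(first), nil
}

func main() {
	gp, err := gracePeriodFromSchedule("0 * * * *", time.Now()) // hourly job
	if err != nil {
		panic(err)
	}
	fmt.Println(gp) // 2h0m0s
}
```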
I think I figured out another edge-case that needs to be handled. Sometimes in my day-job, when we're firefighting issues, we may scale a deployment down to zero to alleviate pressure on other parts of the system. I think this use-case is possibly common; tools such as https://keda.sh are built to allow users to scale workloads down to zero to save costs. During these times the recommendation would be removed. I think the grace period needs to apply to all resource types.
I think allowing the users to set it will be good in a case like @adrianmoisey said, where you may scale a deployment down to 0 to alleviate pressure on the system but don't want the recommendations to get lost. Then you can specifically set the pruningGracePeriod.
This should be able to work on all types as long as you set the field (unless I am misunderstanding you). Do you mean that we should default the gracePeriod to be very high (never expire) instead of 0? Maybe we also allow a global default gracePeriod set with a command-line flag?
Yeah, I guess it makes sense to have a long default, or keep the feature disabled and allow it to be opt-in.
Could the default be "don't clean them up" to match behavior today? I worry that this is verging on a breaking change. Then a user can override the pruning grace period for the problematic VPAs that are churning containers?
I agree.
…on-breaking opt-in change Signed-off-by: Max Cao <[email protected]>
Okay, I've added a global pruning-grace-period-duration flag. I removed any special handling of CronJob, since the default pruning functionality should now be opt-in and not a breaking change. EDIT: To make it clear for reviewers, this pruning of aggregate collection states/recommendations only applies to the state that is kept by each VPA. The state map that exists for the feeder's clusterState is still cleaned up by the existing rate-limited aggregate garbage collector.
So if the top-most controller still exists, the gc will never prune the cluster-level aggregate states. Eventually, if the top-most controller no longer exists, the per-VPA aggregates will get pruned anyway, as mentioned in an earlier review comment.
@@ -46,6 +47,10 @@ import (
	"k8s.io/autoscaler/vertical-pod-autoscaler/pkg/recommender/util"
)

var (
	globalPruningGracePeriodDuration = flag.Duration("pruning-grace-period-duration", 24*365*100*time.Hour, `The grace period for deleting stale aggregates and recommendations. By default, set to 100 years, effectively disabling it.`)
I'm unsure what others think, but I'm not super excited by having the default be 100 years. My preference would be that setting the global to zero would disable the pruning feature (except for VPAs with pruningGracePeriod set)
Yeah, I agree it's not ideal. I was trying to figure out how I could enable/disable it with a single flag, and without specially handling a zero value for a duration. I guess I could switch to flag.String and specially handle a 0 as off. Or if there's some cleaner way to do it, I'm happy for feedback.
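For what it's worth, the zero value could be handled without switching to flag.String; a sketch of the reviewer's suggestion, keeping the flag a flag.Duration (the zero-handling helper is hypothetical):

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

var pruningGracePeriodDuration = flag.Duration("pruning-grace-period-duration", 0,
	"Global grace period for deleting stale aggregates and recommendations. Zero disables global pruning; per-VPA grace periods still apply.")

// globalGracePeriod returns (period, enabled): a zero flag value means the
// global pruning feature is off, so VPAs without their own grace period are
// never pruned.
func globalGracePeriod() (time.Duration, bool) {
	if *pruningGracePeriodDuration == 0 {
		return 0, false
	}
	return *pruningGracePeriodDuration, true
}

func main() {
	flag.Parse()
	if period, enabled := globalGracePeriod(); enabled {
		fmt.Printf("global pruning enabled with grace period %s\n", period)
	} else {
		fmt.Println("global pruning disabled; only per-VPA pruningGracePeriod applies")
	}
}
```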
What type of PR is this?
/kind bug
What this PR does / why we need it:
Previously we weren't cleaning up "stale" aggregates when container names changed (because of renames or removals), and that was resulting in:
- recommendations being split across the wrong number of containers, producing smaller per-container values than intended, and
- stale aggregates and checkpoints for the old container names sticking around forever.
This PR is an attempt to clean up those stale aggregates without incurring too much overhead, and make sure that the resources get spread across the correct number of containers during a rollout.
Which issue(s) this PR fixes:
Fixes #6744
Special notes for your reviewer:
There are probably a lot of different ways we can do the pruning of stale aggregates for missing containers:
PruneAggregates()
that runs afterLoadPods()
that goes through everything and removes them (or do this work as part ofLoadPods()
but that seems...deceptive?)garbageCollectAggregateCollectionStates
and run it immediately afterLoadPods()
every time but that might be expensive.I'm not super-attached to any particular approach, I'd just like to fix this, so I can retool it if necessary.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: