Prune and clean clash with backups #142
There is no special feature for that. Do you do backups more than once a day? Otherwise you should just schedule your prune job so there is no overlap. And if there are too many hosts using a single repository, it might make sense to split those up into multiple repositories. That said, I would be open to having a retry mechanism as a feature, but I probably won't implement it myself. So open for PRs :)
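Since PRs are welcome, a minimal sketch of such a retry, written here as a standalone shell wrapper rather than as the image's actual backup script, could look like this. Everything in it is an assumption for illustration: the lock detection just greps restic's error output for "locked", and the retry count, sleep interval and retention flags are placeholders.

```sh
#!/bin/sh
# Hypothetical retry wrapper (not part of the image): run forget/prune and,
# if restic fails because another host holds the repository lock, wait and
# try again a limited number of times.
MAX_TRIES=5
try=1
while [ "$try" -le "$MAX_TRIES" ]; do
  if output=$(restic forget --prune --keep-daily 7 2>&1); then
    printf '%s\n' "$output"
    exit 0
  fi
  # Assumption: a held lock shows up in restic's error output as "locked".
  if printf '%s\n' "$output" | grep -qi "locked"; then
    echo "repository locked, retrying in 10 minutes (attempt $try/$MAX_TRIES)" >&2
    sleep 600
    try=$((try + 1))
  else
    # Any other failure is passed through unchanged.
    printf '%s\n' "$output" >&2
    exit 1
  fi
done
echo "gave up waiting for the repository lock" >&2
exit 1
```

A real implementation inside the container would presumably hook into the existing forget/prune step rather than wrapping restic from the outside like this.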
Yeah, I do run multiple backups a day.
As a follow-on: I have managed to work around the clash by letting only one machine run the prunes. But what I have just observed (while trying to see why my POST_COMMANDS_SUCCESS and POST_COMMANDS_FAILURE were not running) was that a forget runs after the backup, and this seems to be crashing the container:
This was from docker logs; the container crashed and was restarted, so I had to disable running a backup on start. I think my setup of multiple machines backing up to one repository is causing this, but should a failure of the restic forget cause the container to drop?
Hmm, further to this, I think it was being killed due to memory restrictions on the container. Depending on how you view the logs, you don't see that it was killed, only that the container was restarted.
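One way to confirm that suspicion, assuming plain Docker (this is not something the image provides), is to ask the daemon directly whether the container was OOM-killed instead of relying on what the log viewer shows:

```sh
# Ask Docker whether its OOM killer terminated the container; an exit code
# of 137 (SIGKILL) is another hint. "restic-backup" is a placeholder name.
docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' restic-backup

# The kill is also visible in the host's kernel log:
dmesg | grep -i "killed process"
```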
Oh right. Memory restrictions. That's possible. With
Ahh, confirmed: yes, memory was hitting the restriction during the forget, and the container was killed as a result, which prevented any of the post commands from running.
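If raising the container's memory limit isn't enough on its own, restic's peak usage during forget/prune can sometimes be lowered by making the Go garbage collector more aggressive via the standard GOGC variable. A sketch, with placeholder values throughout and with the image's other required variables (schedule, credentials) omitted for brevity:

```sh
# Placeholder values: size --memory to the repository, and note that a lower
# GOGC trades CPU time for a smaller memory peak inside restic.
docker run -d --name restic-backup \
  --memory=1g \
  -e GOGC=20 \
  -e RESTIC_REPOSITORY=rest:https://backup.example.com/repo \
  -e RESTIC_PASSWORD_FILE=/run/secrets/restic_password \
  mazzolino/restic
```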
Is there a way, or could there be a feature, to retry when the repository is locked at the time of a prune/clean?
This would be especially helpful when multiple systems share a single repository, since it's not always easy to find a gap.
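Until such a retry exists, one workaround is to create that gap deliberately: stagger the hosts' backup schedules and let only one host run the forget/prune, in a window when no backups are due. A minimal sketch, assuming cron-style schedules via the image's BACKUP_CRON variable; the times are examples only:

```sh
# Illustrative staggering for two hosts sharing one repository.

# host A: backs up on the hour, and is the only host that runs forget/prune
# (scheduled for e.g. 03:15, a window with no backups due on either host)
BACKUP_CRON="0 * * * *"

# host B: backs up on the half hour and never prunes
BACKUP_CRON="30 * * * *"
```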