New feature "Enqueue-Slowdown" (or -Throttling) #69
Conversation
Background: Redis memory is limited, and therefore queue sizes are limited (queue length multiplied by the average queue entry size). So at some point EN-queuing suddenly fails (presuming that DE-queuing is not possible for a while).

For stability reasons we can now gradually build up backpressure towards the EN-queuing process (HTTP or EventBus clients) by delaying the "enqueue success" reply more and more. Presuming that the originating EN-queuing process also works sequentially (i.e. it only requests to enqueue the next message once the previous message has been successfully enqueued), this simple mechanism helps to protect 'our' Redis memory and gives us more time (minutes or even hours) to react.

There are now two additional config options per QueueName-pattern (a sketch of how the delay could be applied follows below):

- enqueueDelayMillisPerSize
- enqueueMaxDelayMillis

Additionally:

- simplified the Config object (i.e. removed the 'Builder' pattern, which seemed useless and enforced duplicate code)
- now using a pre-compiled RegEx Pattern instead of the method String#matches, which compiles the RegEx over and over again (see the second sketch below)
- reduced string-concat operations within RedisQues when building Redis key prefixes over and over again
- harmonized naming: the variable name "queueName" is now used throughout the RedisQues class
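A minimal sketch of how such a size-based reply delay could look. The class, method names, and hard-coded config values below are illustrative assumptions, not the actual RedisQues API:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.Message;
import io.vertx.core.json.JsonObject;

// Illustrative sketch only; names and values are assumptions, not RedisQues code.
public class EnqueueThrottlingSketch extends AbstractVerticle {

    private long enqueueDelayMillisPerSize = 10;  // hypothetical configured value
    private long enqueueMaxDelayMillis = 2000;    // hypothetical configured value

    // Delay grows linearly with the current queue size, capped at the configured maximum.
    long computeEnqueueDelayMillis(long queueSize) {
        return Math.min(queueSize * enqueueDelayMillisPerSize, enqueueMaxDelayMillis);
    }

    void replyEnqueueSuccess(Message<JsonObject> message, long currentQueueSize) {
        long delay = computeEnqueueDelayMillis(currentQueueSize);
        JsonObject okReply = new JsonObject().put("status", "ok");
        if (delay > 0) {
            // Hold back the "enqueue success" reply to throttle a sequential producer.
            vertx.setTimer(delay, timerId -> message.reply(okReply));
        } else {
            message.reply(okReply);
        }
    }
}
```

The delay grows by enqueueDelayMillisPerSize per queued entry and is capped at enqueueMaxDelayMillis, so a sequential producer is slowed down but never blocked indefinitely.

And a small sketch of the String#matches vs. pre-compiled Pattern point; the queue-name regex is made up:

```java
import java.util.regex.Pattern;

public class QueuePatternSketch {

    // Before: String#matches compiles the regex on every call.
    static boolean matchesRecompiling(String queueName) {
        return queueName.matches("my-service-queue-.*");
    }

    // After: compile once, reuse the Pattern instance for every match.
    private static final Pattern QUEUE_PATTERN = Pattern.compile("my-service-queue-.*");

    static boolean matchesPrecompiled(String queueName) {
        return QUEUE_PATTERN.matcher(queueName).matches();
    }
}
```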
src/main/java/org/swisspush/redisques/util/QueueConfiguration.java
Could you also put the first comment describing the feature into an issue? It's easier to link to in the release notes.
PR is OK for me, and I agree with the other comments.
It's just ISA. NEMO likes it fast :-)
Yes - that's exactly the problem: our system runs too fast. An alternative would be to split RedisQues into two verticles: one which only does EN-queuing and another which only does DE-queuing. We could then deploy a different number of verticles (i.e. only one EN-queue verticle but 8 DE-queue verticles), as in the sketch below. Still, a smoothly increasing backpressure towards EN-queuing clients seems to be a valid option.
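A minimal Vert.x deployment sketch of that alternative; the verticle class names are hypothetical, and this is only the idea discussed above, not something this PR implements:

```java
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

// Illustrative only: EnqueueVerticle and DequeueVerticle are hypothetical class names.
public class SplitDeploymentSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // A single instance that handles EN-queuing only ...
        vertx.deployVerticle("org.example.EnqueueVerticle", new DeploymentOptions().setInstances(1));
        // ... and several instances that handle DE-queuing only.
        vertx.deployVerticle("org.example.DequeueVerticle", new DeploymentOptions().setInstances(8));
    }
}
```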
fine for me
PR is fine for me now. Could you please add the description as an issue?
Closes #70