prevent event loop polling from stopping on active redis connections #1734

Open · pawl wants to merge 10 commits into main from fix_redis_reconnect

Conversation

@pawl (Contributor) commented May 26, 2023

Fixes: celery/celery#7276

Caused by: #1476

Currently, the Redis transport always removes the last function added to the event loop (on_tick), regardless of which connection actually disconnected. If you restart Redis enough (and repeatedly cause connection errors), _on_connection_disconnect will eventually remove the event loop function for the only active connection, and the worker will get stuck.

I've been seeing these issues with workers getting stuck after I migrated from the RabbitMQ broker to Redis on Celery 5.2.6 and Kombu 5.2.3.

This PR changes it to track which event loop function belongs to which connection and remove the correct function from on_tick.
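
Roughly, the idea looks like this (a minimal sketch only, not the exact code in this PR; the on_poll_start_by_connection name matches what I describe later in this thread):

```python
class Transport:
    """Sketch: track the on_tick callback registered for each connection."""

    def __init__(self):
        # Map each connection to the event loop callback registered for it.
        self.on_poll_start_by_connection = {}

    def register_with_event_loop(self, connection, loop):
        def on_poll_start():
            ...  # start polling for this particular connection

        self.on_poll_start_by_connection[connection] = on_poll_start
        loop.on_tick.add(on_poll_start)

    def _on_connection_disconnect(self, connection, loop):
        # Remove only the callback that belongs to the dropped connection,
        # instead of blindly discarding whichever callback was added last.
        on_poll_start = self.on_poll_start_by_connection.pop(connection, None)
        if on_poll_start is not None:
            loop.on_tick.discard(on_poll_start)
```

With this bookkeeping, a disconnect only removes the dead connection's callback, and callbacks for healthy connections keep getting called on every tick.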

@pawl (Contributor, Author) commented May 26, 2023

@auvipy @thedrow would either of you guys be able to review this? And do you think this is the right approach?

I was able to reproduce this issue locally using this example repo: https://github.com/pb-dod/celery_pyamqp_memory_leak/tree/test_redis_disconnect

I was able to verify that this fix works there too.

I think this fix is fairly critical, because the bug likely makes Celery on the Redis broker unreliable for everyone who upgraded to Kombu v5.2.3 or later.

@pawl pawl force-pushed the fix_redis_reconnect branch from b584059 to 2e9f60b on May 26, 2023 22:15
@auvipy auvipy requested review from Nusnus and auvipy May 27, 2023 05:46
@auvipy auvipy added this to the 5.3 milestone May 27, 2023
@auvipy auvipy requested a review from thedrow May 27, 2023 05:50
kombu/transport/redis.py: review thread (resolved)
@pawl (Contributor, Author) commented May 30, 2023

@auvipy I tested this change in production today, and I'm still seeing the same behavior where workers stop responding to worker heartbeats after about an hour (the red lines mark a deployment's start and finish):
[graph: worker heartbeat metrics dropping off roughly an hour after deployment]

I have INFO level logging turned on, and I see no indication in the logs about why the workers get stuck (the last log message is often a successful task run). Workers on different queues with very different workloads get stuck too.

I'm not seeing the issue with workers getting stuck when I use RabbitMQ as a broker.

I think this PR fixes an issue where workers get stuck after several Redis disconnections, but now I'm not sure it fixes the primary cause of celery/celery#7276. That would also explain why I was hitting this issue without seeing Redis connection errors in my logs.

pawl added a commit to pawl/kombu that referenced this pull request May 31, 2023
@pawl (Contributor, Author) commented May 31, 2023

@auvipy I added the integration test: 34e366d

It looks like it's revealing a problem. Good call on adding it!

I assumed the two connection objects here were the same:

  • def _on_disconnect(connection):
  • def register_with_event_loop(self, connection, loop):

However:

  • register_with_event_loop's connection is a Transport
  • _on_disconnect's connection is a Redis Connection.

The way I have it currently won't work because I'm adding Transports as keys to on_poll_start_by_connection and those will never match with Redis connection objects.
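
A toy illustration of the mismatch (stand-in classes, not kombu code):

```python
# Stand-ins for the two unrelated object types involved.
class Transport:
    pass

class RedisConnection:  # plays the role of a redis-py Connection
    pass

on_poll_start_by_connection = {}

transport = Transport()
redis_conn = RedisConnection()

# register_with_event_loop stores the callback under the Transport...
on_poll_start_by_connection[transport] = lambda: None

# ...but _on_disconnect is invoked with the redis-py connection object,
# which is never equal to the Transport key, so the lookup always misses.
removed = on_poll_start_by_connection.pop(redis_conn, None)
print(removed)  # None: the dead connection's callback is never removed
```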

I'll need to re-think the solution for this.

What I tested in production today effectively disabled the fix from #1476 and workers were still getting stuck.

@auvipy (Member) left a comment

can we close this in favor of #2007?

@auvipy (Member) commented Jun 16, 2024

can we close this in favor of #2007?

Also, should we try to extract the relevant integration tests from here and add them to the test suite?

@Nusnus Nusnus force-pushed the fix_redis_reconnect branch from bab5ee5 to 8c16509 on June 24, 2024 10:56