
Any thoughts on an async style @lru_cache? #21

Open
goodboy opened this issue Apr 19, 2023 · 1 comment
Comments


goodboy commented Apr 19, 2023

I can't get my head around the complexity of the equivalent for asyncio:
https://github.com/aio-libs/async-lru/blob/master/async_lru/__init__.py

But we have a small, naive implementation (which should be async-framework agnostic) that I could put up here if anyone is interested.
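For reference, a naive version might look something like the following sketch. This is a hypothetical asyncio-based illustration, not the project's actual code; `async_lru_cache` and everything else here are assumed names:

```python
import asyncio
import functools
from collections import OrderedDict


def async_lru_cache(maxsize=128):
    """Naive async LRU cache decorator (illustrative sketch only)."""
    def decorator(fn):
        cache = OrderedDict()

        @functools.wraps(fn)
        async def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)  # mark as most recently used
                return cache[args]
            # NB: concurrent misses with identical args will each run fn
            result = await fn(*args)
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)  # evict least recently used
            return result

        return wrapper
    return decorator


@async_lru_cache(maxsize=2)
async def double(x):
    await asyncio.sleep(0)  # stand-in for real async work
    return x * 2


print(asyncio.run(double(21)))  # prints: 42
```

Nothing here touches an event loop API directly, which is what makes this style framework-agnostic: the decorator only awaits the wrapped coroutine.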

Contributor

belm0 commented Apr 20, 2023

That async_lru doesn't seem to handle parallel cache misses with identical args, which seems like a significant case for async.

In my project, we don't have a need for an async LRU -- though we do have an @async_join decorator for simple side-effect functions (like an action) that will merge parallel calls, without regard to args.
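The described behavior might be sketched like this (hypothetical names and asyncio for the demo; the real decorator may differ). While a call is in flight, later callers join it rather than starting another:

```python
import asyncio
import functools


def async_join(fn):
    """Merge parallel calls to a no-args side-effect function: while a
    call is in flight, later callers await the same invocation rather
    than starting another one."""
    in_flight = None

    @functools.wraps(fn)
    async def wrapper():
        nonlocal in_flight
        if in_flight is not None:
            return await in_flight  # join the call already in progress
        in_flight = asyncio.ensure_future(fn())
        try:
            return await in_flight
        finally:
            in_flight = None  # a call after completion runs fn again

    return wrapper


calls = 0


@async_join
async def action():
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # stand-in for the side effect
    return "done"


async def main():
    print(await asyncio.gather(action(), action(), action()), calls)


asyncio.run(main())  # prints: ['done', 'done', 'done'] 1
```

Because nothing is keyed on args and nothing is retained after completion, this stays much simpler than a cache: it only deduplicates overlapping invocations.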
