Heap size grows when publishing a very large batch of messages #1106
All of those seem to be tasks and …
This is most likely related to the unbounded channel buffering we use for publishing messages (see here and here). Most of the used memory (chiefly the byte arrays) is most likely freed again once the messages are actually published. I'd argue this usage pattern is unlikely to occur in a real-world application, and even if it does, it's less of an issue since most of the memory will be reclaimed. Possible solution:
I have the same problem when I send data at high frequency.
Using a bounded channel would force us to face the classic problem of what to do when the buffer is full. Neither dropping data on the floor nor blocking the publisher seems very appealing to most. In addition, the user can build something similar on top of what we have. Otherwise we would have to make this configurable so that everyone can pick their own poison.
My guess is that the changes already made in …
I noticed a memory leak in production (the application runs in a Docker container) when publishing a large number of messages at once (500k).
I then reproduced this behavior locally on a Mac:
Program output:
3.1) Create a memory dump with the dotnet-dump tool
3.2) Get heap statistics
dumpheap -stat
3.3) Get the instances of a type by its MethodTable
dumpheap -mt 00000001114308a0
3.4) Get the GC roots keeping an instance alive
gcroot -all 00000001c5871ab0
I checked this code with .NET SDK 5 and SDK 6.
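The dump workflow above can be sketched as a shell session. The dump file name, PID placeholder, and addresses are illustrative; substitute the MethodTable and object addresses reported by your own dump:

```shell
# List .NET processes, then capture a dump of the target process.
dotnet-dump ps
dotnet-dump collect -p <PID>

# Open the dump for interactive analysis.
dotnet-dump analyze core_20220101_120000

# Inside the analyze prompt:
#   dumpheap -stat                  # per-type object counts and total sizes
#   dumpheap -mt 00000001114308a0   # all instances of one MethodTable
#   gcroot -all 00000001c5871ab0    # roots keeping a given instance alive
```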