HiveMQTT subscriber dies, never recovers, after receiving large burst of messages #144
Comments
Hello @Aaronontheweb, thanks for contributing to the HiveMQ community! We will respond as soon as possible.
Hi @Aaronontheweb - I'm traveling today but will take a closer look at this tomorrow first thing. I agree on the benchmarks & event processing - that's planned for the next time I loop back to them.
Hey @Aaronontheweb - The client should definitely not die - I'll aim to recreate this here locally and post back soon. A couple of thoughts:

This is unnecessary and probably only slows down/complicates the issue. Considering that you've tried both this library and MQTTnet, have you tried swapping brokers just to identify any difference in either client's behavior? That may provide some useful information.

If event processing seems like the issue, you could try merging the

I'll post back here soon with updates.
Hi @Aaronontheweb - have you made any progress on this issue? I tested locally and haven't been able to reproduce this yet.

I wanted to get a baseline, so I used a tool to blast-publish QoS 1 messages endlessly. With this client and a single subscriber, I got about ~10k msgs/second for processing (HiveMQ v4 broker in the middle, though). I recorded the session here: https://asciinema.org/a/hGOgBiYMDWNgOu5uLse7M1wzf

With QoS 0 we peak out at about ~80k msgs/sec.

In any case, I just wanted to update you. I have a couple more tests planned. Let me know if you've made any progress on your side.
@pglombardo thanks! I'll retry this without the packet size setting |
Hi @Aaronontheweb - have you had a chance to revisit this? With v0.18.1, we've added a bunch of health checks and back-pressure support that should resolve this issue. Let me know.
Hi @pglombardo - I ended up writing my own MQTT library in C# https://github.com/petabridge/TurboMqtt |
Excellent - that looks like a great client! I'll close out this issue, but if you ever need anything else, don't hesitate - I'd be happy to help out.
Thank you!
🐛 Bug Report
I'm working on an MQTT PoC that needs to be able to process 30,000 packets of 3-7 kB per second per node (large-scale network) with QoS=0. I tried doing this with MQTTnet and ran into issues (dotnet/MQTTnet#1962), so I thought I would give HiveMQ a try.
Here is what I was able to produce with the same sample I used for MQTTnet:
Received about ~30k messages and then the client dies - no disconnect message or anything is received. The `OnMessage` event stops firing and, according to EMQX, the client is still alive but it's no longer ACKing any of the published messages.

🔬 How To Reproduce
Steps to reproduce the behavior:
Code sample
Fairly simple client setup - we're just writing any of the messages we receive to a `ChannelWriter<MQTT5PublishMessage>` - those messages will get picked up by Akka.NET's streaming library (Akka.Streams), which will pipe them to RabbitMQ. Akka.Streams hasn't had any trouble keeping up.
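For reference, a minimal sketch of that hand-off is below. It is not the actual PoC code: the HiveMQtt namespaces, option, event, and enum names are assumptions based on the client's public API and may differ between versions, and the broker address, topic filter, and channel capacity are placeholders. A plain channel reader stands in for the Akka.Streams-to-RabbitMQ stage.

```csharp
// Sketch only - HiveMQtt names below are assumed and may vary by version.
using System.Threading.Channels;
using HiveMQtt.Client;
using HiveMQtt.Client.Options;
using HiveMQtt.MQTT5.Types;

// Bounded channel so the receive callback never blocks; DropOldest sheds load
// instead of stalling the client if the consumer falls behind.
var channel = Channel.CreateBounded<MQTT5PublishMessage>(
    new BoundedChannelOptions(100_000)
    {
        SingleReader = true,
        FullMode = BoundedChannelFullMode.DropOldest,
    });

var client = new HiveMQClient(new HiveMQClientOptions { Host = "localhost", Port = 1883 });

// Keep the handler cheap: hand the message to the channel and return immediately.
client.OnMessageReceived += (sender, args) => channel.Writer.TryWrite(args.PublishMessage);

await client.ConnectAsync();
await client.SubscribeAsync("telemetry/#", QualityOfService.AtMostOnceDelivery);

// Stand-in for the Akka.Streams -> RabbitMQ pipeline: drain the channel off the hot path.
await foreach (var msg in channel.Reader.ReadAllAsync())
{
    // forward msg to RabbitMQ, update counters, etc.
}
```

A bounded channel with `DropOldest` keeps the receive callback non-blocking even if the downstream consumer stalls; an unbounded writer, as described in the report, works equally well as long as the consumer keeps up.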
Environment
Windows, .NET 8, using an EMQX 5.5.1 broker running on Ubuntu WSL2
Screenshots
What I can see in the EMQX broker logs is that HiveMQTT fails to keep up not long after the load starts - notice the lack of `PUBACK` responses here:

`PUBACK` events, eventually, do get fired back at the server, but only after the message is no longer retained in the fixed-size buffer.

What I can't figure out is: why doesn't my `OnMessageReceived` handler get fired when this is happening?

📈 Expected behavior
I'd expect the MQTT client to keep up with the events being thrown at it - the project should really add some event processing benchmarks; that's a much more important measure than event publishing throughput IMHO.
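One way to take a rough consume-side measurement is sketched below, against the same assumed HiveMQtt event API as above; `client` refers to the connected client from the earlier sketch, and the one-second reporting interval is arbitrary.

```csharp
// Rough consume-throughput probe (a sketch, not an official benchmark):
// count messages in the handler and print the per-second rate.
// `client` is the connected HiveMQClient from the sketch above; the event name
// is assumed from the HiveMQtt API and may differ between versions.
long received = 0;

client.OnMessageReceived += (sender, args) => Interlocked.Increment(ref received);

_ = Task.Run(async () =>
{
    long last = 0;
    while (true)
    {
        await Task.Delay(TimeSpan.FromSeconds(1));
        long now = Interlocked.Read(ref received);
        Console.WriteLine($"{now - last:N0} msgs/sec");
        last = now;
    }
});
```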