
feat: rumqttc async v5 client request batching #823

Open · wants to merge 13 commits into base: batching

Conversation

@flxo (Contributor) commented Mar 19, 2024

Process multiple client requests before flushing the network in the async v5 client.
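
To make the idea concrete, here is a minimal sketch of request batching (not the actual code in this PR), assuming a tokio mpsc request channel and a generic `AsyncWrite` network handle; `process_batch`, `MAX_BATCH` and `encode` are illustrative names, not rumqttc API:

```rust
use bytes::BytesMut;
use tokio::{
    io::{AsyncWrite, AsyncWriteExt},
    sync::mpsc::Receiver,
};

// Hypothetical cap on how many queued requests are handled per flush.
const MAX_BATCH: usize = 16;

/// Serialize at least one request, opportunistically drain whatever else is
/// already queued (up to MAX_BATCH), then write and flush the network once
/// instead of flushing per request.
async fn process_batch<R, W: AsyncWrite + Unpin>(
    requests: &mut Receiver<R>,
    network: &mut W,
    mut encode: impl FnMut(R, &mut BytesMut),
) -> std::io::Result<()> {
    let mut buf = BytesMut::new();

    if let Some(first) = requests.recv().await {
        encode(first, &mut buf);
        // Drain whatever is already queued without awaiting again.
        for _ in 1..MAX_BATCH {
            match requests.try_recv() {
                Ok(next) => encode(next, &mut buf),
                Err(_) => break, // channel empty (or closed): stop batching
            }
        }
    }

    // A single write + flush covers every request encoded above.
    network.write_all(&buf).await?;
    network.flush().await
}
```

The point is that one flush covers every request that was already queued, rather than flushing the network once per request.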

Type of change

New feature (non-breaking change which adds functionality)

Checklist:

  • Formatted with cargo fmt
  • Make an entry to CHANGELOG.md if it's relevant to the users of the library. If it's not relevant mention why.

Fixes:

@flxo (Contributor, Author) commented Mar 19, 2024

Hello.
There's some CI failure I'm unable to reproduce. Furthermore, the test `idle_connection_triggers_pings_on_time` doesn't use the v5 async client at all. Are the tests known to be flaky?

@coveralls commented Mar 19, 2024

Pull Request Test Coverage Report for Build 8611480676

Details

  • 160 of 242 (66.12%) changed or added relevant lines in 10 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall first build on pr-batch-fan-out at 38.167%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| rumqttc/src/proxy.rs | 0 | 1 | 0.0% |
| rumqttc/src/lib.rs | 9 | 13 | 69.23% |
| rumqttc/src/v5/framed.rs | 3 | 7 | 42.86% |
| rumqttc/src/v5/state.rs | 0 | 6 | 0.0% |
| rumqttc/src/v5/mod.rs | 11 | 19 | 57.89% |
| rumqttc/src/v5/eventloop.rs | 79 | 138 | 57.25% |

Totals Coverage Status

  • Change from base Build 8590593284: 38.2%
  • Covered Lines: 6322
  • Relevant Lines: 16564

💛 - Coveralls

@de-sh (Contributor) commented Mar 20, 2024

It would be really helpful if you could base this PR on #825 so that we can move towards simplifying the codebase significantly!

I am thinking of streamlining all the idiosyncratic code in v4/v5 and making it DRY enough to be maintainer friendly; until then we have to keep both codebases aligned as much as possible (it's already a bit of a wildfire 😅).

@flxo (Contributor, Author) commented Mar 21, 2024

I'm putting this on hold until #825 is fixed and merged.

@flxo flxo marked this pull request as draft March 21, 2024 12:18
@de-sh de-sh mentioned this pull request Mar 25, 2024
2 tasks
@flxo flxo force-pushed the pr-batch-fan-out branch from 03de027 to 86023d5 on March 26, 2024 15:38
@flxo flxo force-pushed the pr-batch-fan-out branch from 86023d5 to 4ea2238 on March 26, 2024 15:40
@flxo (Contributor, Author) commented Mar 26, 2024

Implemented the processing of requests and incoming frames in batches for v5, plus minor updates here and there.
The v4 and v5 clients share a horribly large amount of code. I only updated the v5 version; v4 should be easy to adapt, or better yet the common parts could be refactored.
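
For the incoming side, the same idea can be sketched as "read once, decode everything that is complete" before yielding back to the event loop. Again an illustration under assumed types, not this PR's code: `Packet`, `decode` and `read_batch` are made up, and the 2-byte length prefix stands in for real MQTT framing.

```rust
use bytes::{Buf, BytesMut};
use tokio::io::{AsyncRead, AsyncReadExt};

/// Illustrative stand-in for a decoded packet; not rumqttc's actual type.
struct Packet(Vec<u8>);

/// Split one length-prefixed packet off the front of `buf`, or return None
/// if the buffer does not yet hold a complete packet. (A real implementation
/// would parse the MQTT fixed header instead of a 2-byte length prefix.)
fn decode(buf: &mut BytesMut) -> Option<Packet> {
    if buf.len() < 2 {
        return None;
    }
    let len = u16::from_be_bytes([buf[0], buf[1]]) as usize;
    if buf.len() < 2 + len {
        return None;
    }
    buf.advance(2);
    Some(Packet(buf.split_to(len).to_vec()))
}

/// One network read can complete several packets; decode them all before
/// handing control back to the event loop so they get processed as a batch.
async fn read_batch<R: AsyncRead + Unpin>(
    network: &mut R,
    buf: &mut BytesMut,
) -> std::io::Result<Vec<Packet>> {
    let mut packets = Vec::new();
    loop {
        while let Some(packet) = decode(buf) {
            packets.push(packet);
        }
        if !packets.is_empty() {
            return Ok(packets);
        }
        // Need more bytes before the next complete packet is available.
        if network.read_buf(buf).await? == 0 {
            return Err(std::io::ErrorKind::UnexpectedEof.into());
        }
    }
}
```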

@flxo flxo marked this pull request as ready for review March 26, 2024 15:42
@flxo flxo force-pushed the pr-batch-fan-out branch from 4ea2238 to 5be9be7 on March 26, 2024 15:44
@flxo flxo force-pushed the pr-batch-fan-out branch from 5be9be7 to e389354 on March 26, 2024 15:48
Felix Obenhuber added 4 commits on March 27, 2024 07:51:

  • Creating certain instances that are part of the event loop needs to happen inside the target tokio runtime.
  • Add timeouts on network read and write in the v4 client.
  • Partial cleanup of the options: use references where possible instead of copying the network options. Use std::time::Duration instead of u32 secs (align with the pending throttle).
  • Remove the `Io` variant from `StateError` because this type of error is also present in `ConnectionError` and fits better there. Same for `ConnectionAborted`, which is covered by `ConnectionError::ConnectionClosed`.
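
The timeout commit corresponds to the usual tokio pattern of wrapping the I/O future in `tokio::time::timeout`; a minimal sketch, with `flush_with_timeout` as a hypothetical helper rather than the PR's actual API:

```rust
use std::time::Duration;
use tokio::{
    io::{AsyncWrite, AsyncWriteExt},
    time::timeout,
};

/// Hypothetical helper: flush the network, but give up after `net_timeout`
/// instead of letting a stalled peer block the event loop forever.
/// Note the timeout is carried as a `Duration`, not raw `u32` seconds.
async fn flush_with_timeout<W: AsyncWrite + Unpin>(
    network: &mut W,
    net_timeout: Duration,
) -> std::io::Result<()> {
    match timeout(net_timeout, network.flush()).await {
        Ok(result) => result,
        Err(_elapsed) => Err(std::io::ErrorKind::TimedOut.into()),
    }
}
```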
@flxo (Contributor, Author) commented Apr 2, 2024

Hello. Can I get a comment on this PR? I'd like to incorporate findings before applying the pattern used in v5 to v3. Thanks.

@swanandx (Member) commented Apr 2, 2024

Can I get a comment on this PR?

hey, sorry about the delay but i'm currently caught up with other work, so don't have enough bandwidth for this right now!

thanks for understanding!

@flxo (Contributor, Author) commented May 10, 2024

@swanandx Let me know if this PR is still relevant for the project. I'm off keyboard for 3 weeks and will abandon the PR from my side afterwards if there's no progress.

Thanks @flxo

@de-sh de-sh changed the base branch from main to v5-batching May 10, 2024 17:07
@de-sh (Contributor) commented May 10, 2024

Let me know if this PR is still relevant for the project.

@flxo please note that we value your time and contributions a lot, but we have been facing a landslide of work that has significantly hampered our ability to maintain the repo. We will merge your PR into a feature branch for now, as we need to continue developing test suites and benchmarks to ensure that we are really seeing a good enough impact for the feature to be merged into main without breaking the wrong things.

Hope you understand!

I'm off keyboard for 3 weeks

Happy holiday!
