
perf(stream): add hash join memory benchmarking for cache refill #19712

Merged · 6 commits · Dec 10, 2024

Conversation

kwannoel
Contributor

@kwannoel kwannoel commented Dec 9, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Add hash join memory benchmarking for cache refill. Prerequisite of #19629.

| amplification | peak memory (bytes) | peak blocks allocated | runtime (ms) |
|---------------|---------------------|-----------------------|--------------|
| 10K           | 1,687,682           |                       | 10.188       |
| 20K           | 3,279,096           |                       | 17.146       |
| 30K           | 4,832,972           |                       | 26.360       |
| 40K           | 6,444,552           |                       | 35.388       |
| 100K          | 15,942,456          | 216,915               | 92.481       |
| 200K          | 31,769,735          | 433,580               | 188.06       |
| 400K          | 63,427,368          | 866,914               | 387.23       |

From dhat-heap.json (400K amplification), here's the primary allocation site (64.87% of peak):

  │   │     Total:     41,142,240 bytes (26.75%, 1,501,077.92/s) in 57,142 blocks (1.99%, 2,084.83/s), avg size 720 bytes, avg lifetime 15,948,177.38 µs (58.19% of program duration)
  │   │     Max:       41,142,240 bytes in 57,142 blocks, avg size 720 bytes
  │   │     At t-gmax: 41,142,240 bytes (64.87%) in 57,142 blocks (6.59%), avg size 720 bytes
  │   │     At t-end:  0 bytes (0%) in 0 blocks (0%), avg size 0 bytes
  │   │     Allocated at {
  │   │       ^1: 0x1053b614c: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc/src/alloc.rs:254:9)
  │   │       ^2: 0x1053b614c: alloc::boxed::Box<T,A>::try_new_uninit_in (alloc/src/boxed.rs:524:13)
  │   │       ^3: 0x1053b614c: alloc::boxed::Box<T,A>::new_uninit_in (alloc/src/boxed.rs:486:15)
  │   │       #4: 0x1053ae288: alloc::collections::btree::node::LeafNode<K,V>::new (collections/btree/node.rs:83:28)
  │   │       #5: 0x1053ae288: alloc::collections::btree::node::Handle<alloc::collections::btree::node::NodeRef<alloc::collections::btree::node::marker::Mut,K,V,alloc::collections::btree::node::marker::Leaf>,alloc::collections::btree::node::marker::KV>::split (collections/btree/node.rs:1221:28)
  │   │       #6: 0x1053aeec4: alloc::collections::btree::node::Handle<alloc::collections::btree::node::NodeRef<alloc::collections::btree::node::marker::Mut,K,V,alloc::collections::btree::node::marker::Leaf>,alloc::collections::btree::node::marker::Edge>::insert (collections/btree/node.rs:950:30)
  │   │       #7: 0x1053aeec4: alloc::collections::btree::node::Handle<alloc::collections::btree::node::NodeRef<alloc::collections::btree::node::marker::Mut,K,V,alloc::collections::btree::node::marker::Leaf>,alloc::collections::btree::node::marker::Edge>::insert_recursing (collections/btree/node.rs:1046:41)
  │   │       #8: 0x1053aca1c: alloc::collections::btree::map::entry::VacantEntry<K,V,A>::insert (btree/map/entry.rs:364:21)
  │   │       #9: 0x1053d4850: alloc::collections::btree::map::BTreeMap<K,V,A>::try_insert (collections/btree/map.rs:1027:33)
  │   │       #10: 0x1053cdd44: risingwave_stream::executor::join::join_row_set::JoinRowSet<K,V>::try_insert (executor/join/join_row_set.rs:67:35)
  │   │       #11: 0x1053a37cc: risingwave_stream::executor::join::hash_join::JoinEntryState::insert (executor/join/hash_join.rs:832:19)
  │   │       #12: 0x104cf0690: risingwave_stream::executor::join::hash_join::JoinHashMap<K,S>::fetch_cached_state::{{closure}} (executor/join/hash_join.rs:555:17)
  │   │       #13: 0x104cf2d38: risingwave_stream::executor::join::hash_join::JoinHashMap<K,S>::take_state::{{closure}} (executor/join/hash_join.rs:386:42)
  │   │       #14: 0x104cf2d38: risingwave_stream::executor::hash_join::HashJoinExecutor<K,S,_>::hash_eq_match::{{closure}} (src/executor/hash_join.rs:732:32)
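
A quick arithmetic cross-check of the trace above: every block at this site is one `BTreeMap` leaf node (std's B-tree uses B = 6, so a leaf holds up to 11 entries), and the profile's numbers are internally consistent with ~7 cached join rows per 720-byte leaf:

```rust
// Sanity arithmetic on the dhat figures above (400K amplification).
// Primary site = BTreeMap leaf-node allocation inside JoinRowSet::try_insert.
fn main() {
    let peak_bytes: u64 = 41_142_240; // "At t-gmax" bytes
    let leaf_blocks: u64 = 57_142;    // "At t-gmax" blocks
    let rows: u64 = 400_000;          // amplification level

    // Every block is a single 720-byte leaf node.
    assert_eq!(peak_bytes / leaf_blocks, 720);

    // Each leaf holds ~7 of the 400K cached join rows at peak,
    // i.e. roughly 103 bytes of node memory per cached row.
    let rows_per_leaf = rows as f64 / leaf_blocks as f64;
    assert!((rows_per_leaf - 7.0).abs() < 0.01);
    println!("rows per leaf: {rows_per_leaf:.2}");
}
```

So the leaves are about 64% full (7 of 11 slots), which is typical for B-tree insertion patterns rather than a bug.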

And here's the secondary allocation site (17.03% of peak):

  │   ├── PP 1.1.2/3 {
  │   │     Total:     10,800,000 bytes (7.02%, 394,038.86/s) in 400,000 blocks (13.9%, 14,594.03/s), avg size 27 bytes, avg lifetime 15,948,476.91 µs (58.19% of program duration)
  │   │     Max:       10,800,000 bytes in 400,000 blocks, avg size 27 bytes
  │   │     At t-gmax: 10,800,000 bytes (17.03%) in 400,000 blocks (46.14%), avg size 27 bytes
  │   │     At t-end:  0 bytes (0%) in 0 blocks (0%), avg size 0 bytes
  │   │     Allocated at {
  │   │       ^1: 0x104cc820c: <alloc::alloc::Global as core::alloc::Allocator>::allocate (alloc/src/alloc.rs:254:9)
  │   │       ^2: 0x104cc820c: alloc::raw_vec::RawVecInner<A>::try_allocate_in (alloc/src/raw_vec.rs:465:41)
  │   │       #3: 0x104ce2cac: alloc::raw_vec::RawVecInner<A>::with_capacity_in (alloc/src/raw_vec.rs:412:15)
  │   │       #4: 0x104ce2cac: alloc::raw_vec::RawVec<T,A>::with_capacity_in (alloc/src/raw_vec.rs:189:20)
  │   │       #5: 0x104ce2cac: alloc::vec::Vec<T,A>::with_capacity_in (src/vec/mod.rs:801:20)
  │   │       #6: 0x104ce2cac: alloc::vec::Vec<T>::with_capacity (src/vec/mod.rs:482:9)
  │   │       #7: 0x104ce2cac: bytes::bytes_mut::BytesMut::with_capacity (bytes-1.9.0/src/bytes_mut.rs:149:28)
  │   │       #8: 0x104ce2cac: risingwave_common::row::Row::value_serialize_bytes (src/row/mod.rs:93:23)
  │   │       #9: 0x104ce2cac: <risingwave_common::row::compacted_row::CompactedRow as core::convert::From<R>>::from (src/row/compacted_row.rs:47:18)
  │   │       #10: 0x104d0d89c: <T as core::convert::Into<U>>::into (src/convert/mod.rs:759:9)
  │   │       #11: 0x104d0d89c: risingwave_stream::executor::join::row::JoinRow<R>::encode (executor/join/row.rs:52:28)
  │   │       #12: 0x104cf065c: risingwave_stream::executor::join::hash_join::JoinHashMap<K,S>::fetch_cached_state::{{closure}} (executor/join/hash_join.rs:556:33)
  │   │       #13: 0x104cf2d38: risingwave_stream::executor::join::hash_join::JoinHashMap<K,S>::take_state::{{closure}} (executor/join/hash_join.rs:386:42)
  │   │       #14: 0x104cf2d38: risingwave_stream::executor::hash_join::HashJoinExecutor<K,S,_>::hash_eq_match::{{closure}} (src/executor/hash_join.rs:732:32)
  │   │       #15: 0x104cf37d0: risingwave_stream::executor::hash_join::HashJoinExecutor<K,S,_>::eq_join_oneside::{{closure}} (src/executor/hash_join.rs:829:62)
  │   │       #16: 0x104ceb1b0: <futures_async_stream::try_stream::GenTryStream<G> as futures_core::stream::Stream>::poll_next (futures-async-stream-0.2.11/src/lib.rs:492:33)
  │   │       #17: 0x104ceb1b0: risingwave_stream::executor::hash_join::HashJoinExecutor<K,S,_>::into_stream::{{closure}} (src/executor/hash_join.rs:453:5)
  │   │       #18: 0x104ceb1b0: <futures_async_stream::try_stream::GenTryStream<G> as futures_core::stream::Stream>::poll_next (futures-async-stream-0.2.11/src/lib.rs:492:33)
  │   │       #19: 0x104d59064: stream_hash_join::handle_streams::{{closure}} (stream/benches/stream_hash_join.rs:170:29)
  │   │       #20: 0x104d59064: stream_hash_join::main::{{closure}} (stream/benches/stream_hash_join.rs:208:46)
  │   │       #21: 0x104d59064: tokio::runtime::park::CachedParkThread::block_on::{{closure}} (src/runtime/park.rs:281:63)
  │   │       #22: 0x104d59064: tokio::runtime::coop::with_budget (src/runtime/coop.rs:107:5)
  │   │       #23: 0x104d59064: tokio::runtime::coop::budget (src/runtime/coop.rs:73:5)
  │   │       #24: 0x104d59064: tokio::runtime::park::CachedParkThread::block_on (src/runtime/park.rs:281:31)
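
This secondary site is one fresh buffer per row: during cache refill, `CompactedRow::from` calls `value_serialize_bytes`, which starts from `BytesMut::with_capacity`, so 400,000 rows at an average of 27 bytes each account for the 10,800,000 bytes at t-gmax. A hypothetical std-only sketch of that per-row pattern (`Vec<u8>` standing in for `BytesMut`; not the actual RisingWave code):

```rust
/// One heap block per row, mirroring BytesMut::with_capacity per encode call.
fn encode_row(row: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(row.len());
    buf.extend_from_slice(row);
    buf
}

fn main() {
    // Avg encoded size in the profile is 27 bytes.
    let rows = vec![[0u8; 27]; 1_000];
    let encoded: Vec<Vec<u8>> = rows.iter().map(|r| encode_row(r)).collect();
    assert!(encoded.iter().all(|b| b.len() == 27));

    // Matches the profile: 400,000 rows x 27 B avg = 10,800,000 B at t-gmax.
    assert_eq!(400_000u64 * 27, 10_800_000);
    println!("ok");
}
```

Because each encoded row is retained in the cache, these small blocks dominate the block count (46.14% of blocks at t-gmax) even though they are a minority of the bytes.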

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features Sqlsmith: Sql feature generation #7934).
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.


kwannoel commented Dec 9, 2024

@kwannoel kwannoel changed the title from "refactor hash join create executor utils" to "bench(stream): add hash join memory benchmarking for cache refill" Dec 9, 2024
@kwannoel kwannoel marked this pull request as ready for review December 9, 2024 05:50
@kwannoel kwannoel requested a review from a team as a code owner December 9, 2024 05:50
@kwannoel kwannoel requested a review from lmatz December 9, 2024 05:50
@kwannoel kwannoel marked this pull request as draft December 9, 2024 05:50
Review comment on src/stream/Cargo.toml (outdated, resolved)
@kwannoel kwannoel force-pushed the kwannoel/join-bench branch from efda035 to 5d72e75 on December 9, 2024 07:22
@kwannoel kwannoel marked this pull request as ready for review December 9, 2024 11:25
@kwannoel kwannoel requested a review from chenzl25 December 9, 2024 11:25
@graphite-app graphite-app bot requested a review from a team December 9, 2024 11:54
@kwannoel kwannoel added this pull request to the merge queue Dec 10, 2024
@kwannoel kwannoel removed this pull request from the merge queue due to a manual request Dec 10, 2024
@lmatz lmatz changed the title from "bench(stream): add hash join memory benchmarking for cache refill" to "perf(stream): add hash join memory benchmarking for cache refill" Dec 10, 2024

Merge activity

  • Dec 9, 9:37 PM EST: Graphite couldn't merge this PR because it failed for an unknown reason (This repository has GitHub's merge queue enabled, which is currently incompatible with Graphite).

@kwannoel kwannoel added this pull request to the merge queue Dec 10, 2024
@kwannoel kwannoel removed this pull request from the merge queue due to a manual request Dec 10, 2024
@kwannoel kwannoel added this pull request to the merge queue Dec 10, 2024
Merged via the queue into main with commit bd82fe3 Dec 10, 2024
32 of 34 checks passed
@kwannoel kwannoel deleted the kwannoel/join-bench branch December 10, 2024 06:38