feat(storage): support online cache resize via risectl #19677

Merged
merged 8 commits into main from xx/resize-cache on Dec 9, 2024

Conversation

Contributor

@MrCroxx MrCroxx commented Dec 4, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Usage:

Usage: risingwave ctl hummock resize-cache [OPTIONS]

Options:
      --meta-cache-capacity-mb <META_CACHE_CAPACITY_MB>  
      --data-cache-capacity-mb <DATA_CACHE_CAPACITY_MB>  
  -h, --help                                             Print help
  -V, --version                                          Print version
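
For example, an invocation might look like this (the capacity values below are purely illustrative):

  risingwave ctl hummock resize-cache --meta-cache-capacity-mb 512 --data-cache-capacity-mb 4096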

Related foyer-side PR: foyer-rs/foyer#794 (already merged two weeks ago; not related to the version bump in this PR).

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features. See Sqlsmith: Sql feature generation #7934.)
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.

@MrCroxx MrCroxx self-assigned this Dec 4, 2024
@MrCroxx MrCroxx requested a review from a team as a code owner December 4, 2024 09:06
@MrCroxx MrCroxx requested a review from xxchan December 4, 2024 09:06
@MrCroxx MrCroxx requested review from hzxa21 and Li0k and removed request for xxchan December 4, 2024 09:11
@@ -42,13 +50,53 @@ impl ConfigService for ConfigServiceImpl {
};
Ok(Response::new(show_config_response))
}

async fn resize_cache(
Collaborator

get_meta_cache_memory_usage_ratio and get_block_cache_memory_usage_ratio will be inaccurate after this PR. We should change HummockMemoryCollector to use cache.capacity() instead.
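
A minimal sketch of the intended fix (illustrative only, not the actual HummockMemoryCollector code; it just assumes the collector can read the cache's current usage and capacity in bytes):

  fn cache_memory_usage_ratio(usage_bytes: usize, capacity_bytes: usize) -> f64 {
      // Divide by the cache's live capacity (which changes after an online
      // resize) instead of the statically configured capacity.
      if capacity_bytes == 0 {
          return 0.0;
      }
      usage_bytes as f64 / capacity_bytes as f64
  }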

if let Some(meta_cache) = &self.meta_cache
&& req.meta_cache_capacity > 0
{
match meta_cache.memory().resize(req.meta_cache_capacity as _) {
Collaborator

IIUC, the resize operation is not persistent, which means that if a CN restarts, its cache capacity reverts to the values configured in the toml config. Will this be a problem? For example, suppose we have 2 CNs in the cluster and, after resize_cache, one CN restarts for some reason while the other one doesn't. I think it depends on when we are going to use resize_cache. If it is mainly for testing or perf tuning, and after the tuning we update the configs, this is fine. But if we rely on resize_cache in production, that would not be ideal.

Contributor

I think it depends on how we define resize.
Is this a permanent operation or a temporary one?

Contributor

In fact, I also think that cache config inconsistencies across multiple CNs are a concern, and they are not easy to detect. If we allow resizing the cache online, we need to make sure the operation succeeds on all machines and is persistent; otherwise this is a risk.

Contributor Author

IIUC, the resize operation is not persistent, which means if the CN restarts, the cache capacity will be back to the configured values from the toml config. Will this be a problem?

This feature just aims to resize the in-memory meta/data block cache without downtime. If a restart happens, it is fine to modify the persistent configuration directly. Modifying the per-node configuration can be (and perhaps is better) done in the cloud control panel.

Contributor

@Li0k Li0k left a comment

Rest LGTM, thanks for the effort!

}
};

let futures = worker_nodes.iter().map(|worker| async {
Contributor

Just a question:

I'm wondering about partial success. Do we need to provide a retry capability that retries the RPC for the failed worker nodes, to minimize config inconsistencies across CNs as much as possible? (I believe this is an idempotent operation.)
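
A rough sketch of what such a fan-out with per-worker error collection could look like (hypothetical helper names, not the code in this PR), so that only the failed nodes need to be retried:

  use futures::future::join_all;

  // Hypothetical per-worker RPC; stands in for the real compute-node client call.
  async fn resize_cache_on_worker(worker_id: u32) -> Result<(), String> {
      // ... issue the resize_cache RPC to this worker ...
      Ok(())
  }

  // Fan out to all workers and collect the ones that failed, so the caller
  // (or the risectl user) can retry just those. Since resizing to a fixed
  // capacity is idempotent, re-running it on a worker that already succeeded
  // is harmless.
  async fn resize_cache_on_all(worker_ids: &[u32]) -> Vec<(u32, String)> {
      let results = join_all(worker_ids.iter().map(|&id| async move {
          (id, resize_cache_on_worker(id).await)
      }))
      .await;
      results
          .into_iter()
          .filter_map(|(id, result)| result.err().map(|err| (id, err)))
          .collect()
  }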

Contributor Author

IMO, it is okay to make the user responsible for retrying it.

Collaborator

@hzxa21 hzxa21 left a comment

LGTM

@MrCroxx MrCroxx enabled auto-merge December 9, 2024 05:59
@MrCroxx MrCroxx added this pull request to the merge queue Dec 9, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 9, 2024
@MrCroxx MrCroxx added this pull request to the merge queue Dec 9, 2024
Merged via the queue into main with commit 7005c05 Dec 9, 2024
30 of 31 checks passed
@MrCroxx MrCroxx deleted the xx/resize-cache branch December 9, 2024 08:26
wenym1 pushed a commit that referenced this pull request Dec 10, 2024