Nhse d30 kv1869 #2
Conversation
Rather than wait for the next read (which may never happen) to repair, try to defer the reap, as the reaper will not process its queue until the failure has cleared.
Add config switch for `defer_reap_on_failure`
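As a rough sketch of the intent (hypothetical helper names, not the PR's actual code, and assuming riak_kv's `app_helper` module is on the code path): when a GET fails while a tombstone reap is pending, re-queue the reap rather than dropping it, gated on the new flag. `maybe_defer_reap/2` and `RequeueFun` are stand-ins for whatever the real GET FSM and reaper interfaces are; only the `app_helper:get_env/3` call is taken from the PR itself.

```erlang
-module(defer_reap_sketch).
-export([maybe_defer_reap/2]).

%% Hypothetical sketch of the deferred-reap idea: on a failed GET with a
%% pending tombstone reap, hand the reap back to a queue (RequeueFun)
%% instead of dropping it. The reaper is not expected to work through
%% its queue until the failure has cleared, so the reap eventually runs
%% against a healthy cluster.
maybe_defer_reap(ReapRef, RequeueFun) ->
    case app_helper:get_env(riak_kv, defer_reap_on_failure, true) of
        true ->
            RequeueFun(ReapRef);
        false ->
            %% legacy behaviour: rely on a later read to trigger repair
            ok
    end.
```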
%% further failure may lead to the loss of deferred reaps
{mapping, "defer_reap_on_failure", "riak_kv.defer_reap_on_failure", [
  {datatype, flag},
  {default, on},
Not familiar with defaults for flags, but I assume that if you write `on` here, the Erlang value corresponding to that is `true`.
Because line 644 of riak_kv_get_fsm states:
app_helper:get_env(riak_kv, defer_reap_on_failure, true)
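For what it's worth, cuttlefish's `flag` datatype translates the `on`/`off` value written in riak.conf into an Erlang boolean in the generated application environment, so `{default, on}` in the schema and the `true` fallback in riak_kv_get_fsm agree. A minimal standalone illustration of that translation (`flag_translation:to_bool/1` is just an illustrative name, not a cuttlefish API):

```erlang
-module(flag_translation).
-export([to_bool/1]).

%% Mirrors what cuttlefish's `flag` datatype does with the value an
%% operator writes in riak.conf: `on` becomes `true`, `off` becomes `false`.
to_bool(on)  -> true;
to_bool(off) -> false.
```

So `app_helper:get_env(riak_kv, defer_reap_on_failure, true)` evaluates to `true` unless an operator explicitly sets `defer_reap_on_failure = off`.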
Some others have:
{mapping, "buckets.default.allow_mult", "riak_core.default_bucket_props.allow_mult", [
{datatype, {enum, [true, false]}},
{default, false},
hidden
]}.
There is not complete consistency here, but I think the on/off flag is the most common at present.
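For comparison only (this is not what the PR proposes), the new setting written in the enum style of the `allow_mult` mapping above would look something like:

```erlang
{mapping, "defer_reap_on_failure", "riak_kv.defer_reap_on_failure", [
  {datatype, {enum, [true, false]}},
  {default, true},
  hidden
]}.
```

Keeping the `flag` datatype instead means riak.conf reads `defer_reap_on_failure = on|off`, which matches the majority of existing boolean settings.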
Clarify words
The test was passing due to good fortune in timing. There is a fundamental issue with this PR. When the node comes back up, the primaries will be reinstated and the reaps will occur. However, handoffs may not have occurred. So the reaps can complete and the clusters can appear to be in sync, but once handoffs complete the tombstones re-appear and full-sync must still resolve the issue. There is no obvious way of deferring reaps until after handoffs, so there is no way forward for this solution.
OpenRiak/riak_test#1
basho#1869