It looks like the memory store doesn't honor the guarantee of scheduling a single segment per node per repair.
Instead, only the maxParallelRepair concurrency limit seems to be applied, as two segments of the same repair appear to run at once.
At any time, one replica should run at most one segment per allowed repair, and never multiple segments for the same repair.
In the Cassandra storage implementation, we use LWTs to guarantee that.
Definition of Done
The memory storage implementation should allow no more than one segment per node to be scheduled for a given repair run
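As a sketch of what the Definition of Done implies, the memory store could enforce the guarantee with an atomic in-memory claim per node per repair run, analogous to what the Cassandra backend achieves with LWTs. The class and method names below are illustrative, not Reaper's actual API:

```java
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: before starting a segment, atomically claim every
// replica it touches for the given repair run; refuse the claim if any
// replica already runs a segment of the same repair.
public class SegmentLeaseTable {
    // repairRunId -> set of nodes that currently have a running segment
    private final ConcurrentHashMap<UUID, Set<String>> busyNodes = new ConcurrentHashMap<>();

    /** Returns true iff none of the replicas already runs a segment for this repair. */
    public synchronized boolean tryClaim(UUID repairRunId, Set<String> replicas) {
        Set<String> busy =
            busyNodes.computeIfAbsent(repairRunId, id -> ConcurrentHashMap.newKeySet());
        for (String node : replicas) {
            if (busy.contains(node)) {
                return false; // this repair already runs a segment on that node
            }
        }
        busy.addAll(replicas);
        return true;
    }

    /** Releases the replicas once the segment finishes or fails. */
    public synchronized void release(UUID repairRunId, Set<String> replicas) {
        Set<String> busy = busyNodes.get(repairRunId);
        if (busy != null) {
            busy.removeAll(replicas);
        }
    }
}
```

The `synchronized` check-then-claim makes the whole claim atomic, which is the single-process equivalent of the LWT's compare-and-set in the Cassandra storage implementation.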
Hello, Alexander! Sometimes it would be great to repair more than one segment per node per repair: it would be much faster if the node has enough resources. This would be useful when we need to run a full segmented repair as fast as we can in an emergency (for example, when consistency in a multi-DC setup has just been broken and we know about it). Could we add a parameter to set the maximum number of segments allowed to repair in parallel on a node for the current repair? Of course, we must understand the risk of impacting a node if this value is very large, but a value less than or equal to 4 should not cause any problems for powerful nodes. I mean, why is this limitation a dogma that can't be slightly raised, with the possibility of flexible adjustment when necessary?
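The commenter's request could be sketched as a bounded variant of the same claim table, where a hypothetical `maxSegmentsPerNode` parameter (the name is illustrative, not an existing Reaper setting) caps concurrent segments per node, with 1 preserving today's behavior:

```java
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: allow up to maxSegmentsPerNode concurrent segments
// of one repair on each node. A limit of 1 matches the current guarantee.
public class BoundedSegmentLeaseTable {
    private final int maxSegmentsPerNode;
    // repairRunId -> (node -> number of running segments)
    private final ConcurrentHashMap<UUID, Map<String, Integer>> running = new ConcurrentHashMap<>();

    public BoundedSegmentLeaseTable(int maxSegmentsPerNode) {
        this.maxSegmentsPerNode = maxSegmentsPerNode;
    }

    /** Claims the replicas unless any of them is already at the per-node limit. */
    public synchronized boolean tryClaim(UUID runId, Set<String> replicas) {
        Map<String, Integer> counts = running.computeIfAbsent(runId, id -> new ConcurrentHashMap<>());
        for (String node : replicas) {
            if (counts.getOrDefault(node, 0) >= maxSegmentsPerNode) {
                return false; // node already at its concurrent-segment limit
            }
        }
        for (String node : replicas) {
            counts.merge(node, 1, Integer::sum);
        }
        return true;
    }

    /** Decrements each replica's running-segment count when a segment ends. */
    public synchronized void release(UUID runId, Set<String> replicas) {
        Map<String, Integer> counts = running.get(runId);
        if (counts == null) {
            return;
        }
        for (String node : replicas) {
            counts.merge(node, -1, Integer::sum);
        }
    }
}
```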