This is a note to let you know that I've just added the patch titled

    md/raid6: Fix anomily when recovering a single device in RAID6.

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git%3Ba=su...

The filename of the patch is:
     md-raid6-fix-anomily-when-recovering-a-single-device-in-raid6.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From foo@baz Sun Mar 18 16:55:33 CET 2018
From: NeilBrown <neilb@suse.com>
Date: Mon, 3 Apr 2017 12:11:32 +1000
Subject: md/raid6: Fix anomily when recovering a single device in RAID6.

From: NeilBrown <neilb@suse.com>
[ Upstream commit 7471fb77ce4dc4cb81291189947fcdf621a97987 ]
When recovering a single missing/failed device in a RAID6, those stripes where the Q block is on the missing device are handled a bit differently. In these cases it is easy to check that the P block is correct, so we do. This results in the P block being destroyed. Consequently the P block needs to be read a second time in order to compute Q. This causes lots of seeks and hurts performance.
It shouldn't be necessary to re-read P as it can be computed from the DATA. But we only compute blocks on missing devices, since c337869d9501 ("md: do not compute parity unless it is on a failed drive").
So relax the change made in that commit to allow computing of the P block in a RAID6 for which it is the only missing block.
This makes RAID6 recovery run much faster as the disk just "before" the recovering device is no longer seeking back-and-forth.
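[ Editorial note: the reason re-reading P is avoidable is that the RAID6 P block is simply the XOR of the data blocks in the stripe, so once every data block is up to date, P can be regenerated in memory. The sketch below is only an illustration of that relationship, not the kernel's implementation; the function name and signature are made up for this note, and the real code uses the async raid6/XOR helpers rather than an open-coded loop. ]

#include <stddef.h>
#include <stdint.h>

/* Illustration only: regenerate the RAID6 P block as the byte-wise
 * XOR of all data blocks in the stripe.
 */
static void recompute_p(uint8_t *p, uint8_t *const data[],
			size_t ndata, size_t blocksize)
{
	size_t i, d;

	for (i = 0; i < blocksize; i++) {
		uint8_t x = 0;

		for (d = 0; d < ndata; d++)
			x ^= data[d][i];
		p[i] = x;
	}
}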
Reported-and-tested-by: Brad Campbell <lists2009@fnarfbargle.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/md/raid5.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3391,9 +3391,20 @@ static int fetch_block(struct stripe_hea
 		BUG_ON(test_bit(R5_Wantcompute, &dev->flags));
 		BUG_ON(test_bit(R5_Wantread, &dev->flags));
 		BUG_ON(sh->batch_head);
+
+		/*
+		 * In the raid6 case if the only non-uptodate disk is P
+		 * then we already trusted P to compute the other failed
+		 * drives. It is safe to compute rather than re-read P.
+		 * In other cases we only compute blocks from failed
+		 * devices, otherwise check/repair might fail to detect
+		 * a real inconsistency.
+		 */
+
 		if ((s->uptodate == disks - 1) &&
+		    ((sh->qd_idx >= 0 && sh->pd_idx == disk_idx) ||
 		    (s->failed && (disk_idx == s->failed_num[0] ||
-				   disk_idx == s->failed_num[1]))) {
+				   disk_idx == s->failed_num[1])))) {
 			/* have disk failed, and we're requested to fetch it;
 			 * do compute it
 			 */
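[ Editorial note: read as plain C, the patched condition amounts to the following paraphrase. This is an illustration only, not a further change to the kernel; the variables s, sh, disk_idx and disks come from the fetch_block() context shown above, while should_compute is a made-up local used here for readability. ]

/* Compute the block in memory when every other block is up to date and
 * the block is either the P block of a RAID6 stripe (qd_idx >= 0 only
 * for RAID6) or a block that lives on a failed device.
 */
int should_compute =
	(s->uptodate == disks - 1) &&
	((sh->qd_idx >= 0 && sh->pd_idx == disk_idx) ||
	 (s->failed && (disk_idx == s->failed_num[0] ||
			disk_idx == s->failed_num[1])));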
Patches currently in stable-queue which might be from neilb@suse.com are
queue-4.9/md-raid6-fix-anomily-when-recovering-a-single-device-in-raid6.patch