Commit 368fd89

adam900710 authored and kdave committed
btrfs: scrub: add cancel/pause/removed bg checks for raid56 parity stripes
For raid56, data and parity stripes are handled differently. Data stripes are handled just like regular RAID1/RAID10 stripes, going through the regular scrub_simple_mirror(). But for parity stripes we have to read out all involved data stripes, do any needed verification and repair, and then scrub the parity stripe. This process takes much longer than a regular stripe, but unlike scrub_simple_mirror(), we do not check whether we should cancel/pause or whether the block group has already been removed.

Align the behavior of scrub_raid56_parity_stripe() with scrub_simple_mirror() by adding:

- Cancel check
- Pause check
- Removed block group check

Since those checks are the same as in scrub_simple_mirror(), also update the comments of scrub_simple_mirror() by:

- Removing too-obvious comments
  We do not need extra comments on what we're checking; it's really too obvious.

- Removing a stale comment about pausing
  Now the scrub always queues all involved stripes and submits them in one go, so there is no longer a submission step during pausing.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
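The check sequence the commit describes can be sketched in userspace C. This is a hedged illustration, not kernel code: the struct and field names (scrub_ctx, fs_info, cancel_req, etc.) echo the kernel identifiers but are simplified stand-ins, and the real function takes bg->lock and may sleep in scrub_blocked_if_needed(); neither is modeled here.

```c
/*
 * Hedged sketch (NOT the kernel implementation): a userspace
 * illustration of the cancel/pause/removed-bg check order that this
 * commit adds to scrub_raid56_parity_stripe().
 */
#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>

struct scrub_ctx   { atomic_int cancel_req; };       /* per-scrub cancel   */
struct fs_info     { atomic_int scrub_cancel_req;    /* fs-wide cancel     */
                     atomic_int scrub_pause_req; };  /* fs-wide pause      */
struct block_group { bool removed; };  /* stand-in for BLOCK_GROUP_FLAG_REMOVED */

/*
 * Mirrors the order of checks in the patch: returns -ECANCELED when a
 * cancel was requested, 0 to skip a removed block group, and 1 when
 * scrubbing should proceed.
 */
static int scrub_stripe_checks(struct fs_info *fs_info,
                               struct scrub_ctx *sctx,
                               struct block_group *bg)
{
    /* cancel check: either the fs-wide or the per-scrub request fires */
    if (atomic_load(&fs_info->scrub_cancel_req) ||
        atomic_load(&sctx->cancel_req))
        return -ECANCELED;

    /* pause check: the kernel calls scrub_blocked_if_needed() here */
    if (atomic_load(&fs_info->scrub_pause_req)) {
        /* would block until the pause request is lifted */
    }

    /* removed-bg check: the kernel tests the flag under bg->lock */
    if (bg->removed)
        return 0;

    return 1;
}
```

Note the order matters: cancellation wins over pause, and a removed block group is a clean "nothing to do" (return 0) rather than an error, matching the diff below.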
1 parent 4e3c05b commit 368fd89

File tree

1 file changed (+17 −6 lines)


fs/btrfs/scrub.c

Lines changed: 17 additions & 6 deletions
```diff
@@ -2091,6 +2091,20 @@ static int scrub_raid56_parity_stripe(struct scrub_ctx *sctx,
 
 	ASSERT(sctx->raid56_data_stripes);
 
+	if (atomic_read(&fs_info->scrub_cancel_req) ||
+	    atomic_read(&sctx->cancel_req))
+		return -ECANCELED;
+
+	if (atomic_read(&fs_info->scrub_pause_req))
+		scrub_blocked_if_needed(fs_info);
+
+	spin_lock(&bg->lock);
+	if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags)) {
+		spin_unlock(&bg->lock);
+		return 0;
+	}
+	spin_unlock(&bg->lock);
+
 	/*
 	 * For data stripe search, we cannot reuse the same extent/csum paths,
 	 * as the data stripe bytenr may be smaller than previous extent. Thus
@@ -2261,18 +2275,15 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 		u64 found_logical = U64_MAX;
 		u64 cur_physical = physical + cur_logical - logical_start;
 
-		/* Canceled? */
 		if (atomic_read(&fs_info->scrub_cancel_req) ||
 		    atomic_read(&sctx->cancel_req)) {
 			ret = -ECANCELED;
 			break;
 		}
-		/* Paused? */
-		if (atomic_read(&fs_info->scrub_pause_req)) {
-			/* Push queued extents */
+
+		if (atomic_read(&fs_info->scrub_pause_req))
 			scrub_blocked_if_needed(fs_info);
-		}
-		/* Block group removed? */
+
 		spin_lock(&bg->lock);
 		if (test_bit(BLOCK_GROUP_FLAG_REMOVED, &bg->runtime_flags)) {
 			spin_unlock(&bg->lock);
```
