Btrfs: Improve and cleanup locking done by walk_down_tree

While dropping snapshots, walk_down_tree does most of the work of checking
reference counts and limiting tree traversal to just the blocks that
we are freeing.

It dropped and held the allocation mutex in strange and confusing ways;
this commit changes it to hold the mutex only while actually freeing a block.

The rest of the reference count checks are safe without the lock because
only one process is allowed in btrfs_drop_snapshot at a time.  Other
processes dropping reference counts should not bring a block's count down
to 1, because their tree roots already hold an extra ref on the block.
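
A minimal sketch of the locking pattern described above (not the actual
walk_down_tree(); lookup_refs_sketch() and free_block_sketch() are
illustrative placeholders for the real ref lookup and extent free helpers):

#include <linux/mutex.h>
#include <linux/types.h>

/* Illustrative placeholders, not the real btrfs helpers. */
int lookup_refs_sketch(u64 bytenr, u32 *refs);
int free_block_sketch(u64 bytenr);

static int drop_block_sketch(struct mutex *alloc_mutex, u64 bytenr)
{
	u32 refs;
	int ret;

	/*
	 * Reading the ref count needs no lock: only one task runs
	 * btrfs_drop_snapshot() at a time, and any other task that still
	 * references this block keeps its count from dropping to 1.
	 */
	ret = lookup_refs_sketch(bytenr, &refs);
	if (ret)
		return ret;
	if (refs > 1)
		return 0;	/* shared block, nothing to free here */

	/* Take the allocation mutex only around the actual free. */
	mutex_lock(alloc_mutex);
	ret = free_block_sketch(bytenr);
	mutex_unlock(alloc_mutex);
	return ret;
}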

Signed-off-by: Chris Mason <chris.mason@oracle.com>
Chris Mason, 2008-08-01 11:27:23 -04:00
parent 492bb6deee
commit f87f057b49
3 changed files with 70 additions and 34 deletions

@@ -338,6 +338,13 @@ static int noinline dirty_and_release_pages(struct btrfs_trans_handle *trans,
btrfs_drop_extent_cache(inode, start_pos, aligned_end - 1);
BUG_ON(err);
mutex_unlock(&BTRFS_I(inode)->extent_mutex);
/*
* an ugly way to do all the prop accounting around
* the page bits and mapping tags
*/
set_page_writeback(pages[0]);
end_page_writeback(pages[0]);
did_inline = 1;
}
if (end_pos > isize) {
@@ -833,11 +840,7 @@ again:
start_pos, last_pos - 1, GFP_NOFS);
}
for (i = 0; i < num_pages; i++) {
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,18)
ClearPageDirty(pages[i]);
#else
cancel_dirty_page(pages[i], PAGE_CACHE_SIZE);
#endif
clear_page_dirty_for_io(pages[i]);
set_page_extent_mapped(pages[i]);
WARN_ON(!PageLocked(pages[i]));
}