btrfs: clear defragmented inodes using postorder in btrfs_cleanup_defrag_inodes()

btrfs_cleanup_defrag_inodes() is not called frequently, only on remount
or unmount, but the way it frees the inodes in fs_info->defrag_inodes
is inefficient. Each time it needs to locate the first node, remove it,
and potentially rebalance the tree, until it's done. This was done to
allow a conditional reschedule.

For cleanups the rbtree_postorder_for_each_entry_safe() iterator is
convenient, but we can't reschedule and restart the iteration because
some of the tree nodes would already be freed.

The cleanup operation is kmem_cache_free(), which will likely take the
fast path for most objects, so rescheduling should not be necessary.

Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
David Sterba 2024-08-27 04:05:48 +02:00
parent ffc531652d
commit 276940915f

@@ -212,20 +212,14 @@ out:
 void btrfs_cleanup_defrag_inodes(struct btrfs_fs_info *fs_info)
 {
-	struct inode_defrag *defrag;
-	struct rb_node *node;
+	struct inode_defrag *defrag, *next;
 
 	spin_lock(&fs_info->defrag_inodes_lock);
-	node = rb_first(&fs_info->defrag_inodes);
-	while (node) {
-		rb_erase(node, &fs_info->defrag_inodes);
-		defrag = rb_entry(node, struct inode_defrag, rb_node);
+	rbtree_postorder_for_each_entry_safe(defrag, next,
+					     &fs_info->defrag_inodes, rb_node)
 		kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
-		cond_resched_lock(&fs_info->defrag_inodes_lock);
-		node = rb_first(&fs_info->defrag_inodes);
-	}
+
 	spin_unlock(&fs_info->defrag_inodes_lock);
 }