rhashtable: Fix potential deadlock by moving schedule_work outside lock

Move the hash table growth check and work scheduling outside the
rht lock to prevent a possible circular locking dependency.
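
In the abstract, the fix is just reordering: mutate the table under the
bucket lock, then do the growth check and deferred work scheduling after
the unlock. A minimal sketch of the pattern (hypothetical names, not the
actual rhashtable code):

	spin_lock(&bucket_lock);
	do_insert(obj);		/* mutate the table under the lock */
	spin_unlock(&bucket_lock);

	/* schedule_work() now runs lock-free, so its internal locking
	 * can no longer nest inside the bucket lock.
	 */
	atomic_inc(&nelems);
	if (needs_grow())
		schedule_work(&grow_work);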

The original implementation could trigger a lockdep warning due to a
potential deadlock scenario involving nested locks between the
rhashtable bucket lock, the scheduler's rq lock, and the sched_ext dsq
lock. By relocating the growth check and work scheduling to after the
rht lock is released, we break this potential deadlock chain.
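
Roughly, the cycle lockdep can construct looks like this (a hedged
reconstruction, not the verbatim splat; the nesting shown on the
sched_ext side is one plausible ordering):

	/*
	 * sched_ext path:   rq lock -> dsq lock -> rhashtable bucket lock
	 * old insert path:  rhashtable bucket lock -> schedule_work()
	 *                   -> workqueue locks -> worker wakeup -> rq lock
	 *
	 * The wakeup closes the loop (bucket -> ... -> rq -> ... -> bucket),
	 * which lockdep reports as a circular locking dependency.
	 */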

This change makes rhashtable more flexible by no longer scheduling work
while the bucket lock is held, a restriction that previously limited its
use in scheduler and workqueue contexts.

It is important to note that this now calls rht_grow_above_75(), which
reads from struct rhashtable without holding the lock. If this turns out
to be a problem, we can move the check back under the lock and schedule
the work after the lock is released.
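
For reference, rht_grow_above_75() is a handful of lockless reads,
quoted here from include/linux/rhashtable.h for context (it may differ
slightly across kernel versions):

	static inline bool rht_grow_above_75(const struct rhashtable *ht,
					     const struct bucket_table *tbl)
	{
		/* Expand table when exceeding 75% load */
		return atomic_read(&ht->nelems) > (tbl->size / 4 * 3) &&
		       (!ht->p.max_size || tbl->size < ht->p.max_size);
	}

Since nelems is atomic, a race here should at worst produce a slightly
stale growth decision rather than corruption.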

Fixes: f0e1a0643a ("sched_ext: Implement BPF extensible scheduler class")
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Breno Leitao <leitao@debian.org>

Modified so that atomic_inc() is also moved outside of the bucket lock,
along with the rht_grow_above_75() check.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Breno Leitao, 2024-11-28 04:16:25 -08:00, committed by Herbert Xu
parent f916e44487
commit e1d3422c95

@@ -584,10 +584,6 @@ static struct bucket_table *rhashtable_insert_one(
 	 */
 	rht_assign_locked(bkt, obj);
 
-	atomic_inc(&ht->nelems);
-	if (rht_grow_above_75(ht, tbl))
-		schedule_work(&ht->run_work);
-
 	return NULL;
 }
 
@@ -624,6 +620,12 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
 			data = ERR_CAST(new_tbl);
 
 		rht_unlock(tbl, bkt, flags);
+
+		if (PTR_ERR(data) == -ENOENT && !new_tbl) {
+			atomic_inc(&ht->nelems);
+			if (rht_grow_above_75(ht, tbl))
+				schedule_work(&ht->run_work);
+		}
 	}
 } while (!IS_ERR_OR_NULL(new_tbl));
 