[Devel] [PATCH rh7 11/19] ms/netfilter: conntrack: fix calculation of next bucket number in early_drop
Konstantin Khorenko
khorenko at virtuozzo.com
Fri May 22 11:10:48 MSK 2020
From: Vasily Khoruzhick <vasilykh at arista.com>
If there's no entry to drop in the bucket that corresponds to the hash,
early_drop() should look for one in other buckets. But since it increments
the hash instead of the bucket number, it actually looks in the same bucket 8
times: hsize is 16k by default (14 bits) while the hash is a 32-bit value, so
reciprocal_scale(hash, hsize) returns the same value for hash..hash+7 in
most cases.
Fix it by incrementing the bucket number instead of the hash, and rename
_hash to bucket to avoid future confusion.
Fixes: 3e86638e9a0b ("netfilter: conntrack: consider ct netns in early_drop logic")
Cc: <stable at vger.kernel.org> # v4.7+
Signed-off-by: Vasily Khoruzhick <vasilykh at arista.com>
Signed-off-by: Pablo Neira Ayuso <pablo at netfilter.org>
https://jira.sw.ru/browse/PSBM-103515
(cherry picked from commit f393808dc64149ccd0e5a8427505ba2974a59854)
Signed-off-by: Konstantin Khorenko <khorenko at virtuozzo.com>
---
net/netfilter/nf_conntrack_core.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 91f0940bac57a..aca113963fed9 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -887,13 +887,13 @@ EXPORT_SYMBOL_GPL(nf_conntrack_tuple_taken);
 
 /* There's a small race here where we may free a just-assured
    connection. Too bad: we're in trouble anyway. */
-static noinline int early_drop(struct net *net, unsigned int _hash)
+static noinline int early_drop(struct net *net, unsigned int hash)
 {
 	/* Use oldest entry, which is roughly LRU */
 	struct nf_conntrack_tuple_hash *h;
 	struct nf_conn *tmp;
 	struct hlist_nulls_node *n;
-	unsigned int i, hash, sequence;
+	unsigned int i, bucket, sequence;
 	struct nf_conn *ct = NULL;
 	spinlock_t *lockp;
 	bool ret = false;
@@ -904,14 +904,18 @@ static noinline int early_drop(struct net *net, unsigned int _hash)
 restart:
 	sequence = read_seqcount_begin(&nf_conntrack_generation);
 	for (; i < NF_CT_EVICTION_RANGE; i++) {
-		hash = scale_hash(_hash++);
-		lockp = &nf_conntrack_locks[hash % CONNTRACK_LOCKS];
+		if (!i)
+			bucket = scale_hash(hash++);
+		else
+			bucket = (bucket + 1) % nf_conntrack_htable_size;
+
+		lockp = &nf_conntrack_locks[bucket % CONNTRACK_LOCKS];
 		spin_lock(lockp);
 		if (read_seqcount_retry(&nf_conntrack_generation, sequence)) {
 			spin_unlock(lockp);
 			goto restart;
 		}
-		hlist_nulls_for_each_entry_rcu(h, n, &nf_conntrack_hash[hash],
+		hlist_nulls_for_each_entry_rcu(h, n, &nf_conntrack_hash[bucket],
 					       hnnode) {
 			tmp = nf_ct_tuplehash_to_ctrack(h);
--
2.15.1