[Devel] [PATCH RHEL7 COMMIT] ms/slub: relocate freelist pointer to middle of object
Konstantin Khorenko
khorenko at virtuozzo.com
Tue Jul 2 21:03:52 MSK 2024
The commit is pushed to "branch-rh7-3.10.0-1160.105.1.vz7.220.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1160.105.1.vz7.220.7
------>
commit 6ed614e0dc9326315a9bc3c2e9e99480c9959e27
Author: Kees Cook <keescook at chromium.org>
Date: Mon Jul 1 19:04:41 2024 +0800
ms/slub: relocate freelist pointer to middle of object
In a recent discussion[1] with Vitaly Nikolenko and Silvio Cesare, it
became clear that moving the freelist pointer away from the edge of
allocations would likely improve the overall defensive posture of the
inline freelist pointer. My benchmarks show no meaningful change to
performance (they seem to show it being faster), so this looks like a
reasonable change to make.
Instead of having the freelist pointer at the very beginning of an
allocation (offset 0) or at the very end of an allocation (effectively
offset -sizeof(void *) from the next allocation), move it away from the
edges of the allocation and into the middle. This provides some
protection against small-sized neighboring overflows (or underflows), for
which the freelist pointer is commonly the target. (Large or well
controlled overwrites are much more likely to attack live object contents,
instead of attempting freelist corruption.)
The vaunted kernel build benchmark, across 5 runs. Before:
Mean: 250.05
Std Dev: 1.85
and after, which appears mysteriously faster:
Mean: 247.13
Std Dev: 0.76
Attempts at running "sysbench --test=memory" show the change to be well in
the noise (sysbench seems to be pretty unstable here -- it's not really
measuring allocation).
Hackbench is more allocation-heavy, and while the std dev is above the
difference, it looks like it may manifest as an improvement as well:
20 runs of "hackbench -g 20 -l 1000", before:
Mean: 36.322
Std Dev: 0.577
and after:
Mean: 36.056
Std Dev: 0.598
[1] https://twitter.com/vnik5287/status/1235113523098685440
Signed-off-by: Kees Cook <keescook at chromium.org>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Acked-by: Christoph Lameter <cl at linux.com>
Cc: Vitaly Nikolenko <vnik at duasynt.com>
Cc: Silvio Cesare <silvio.cesare at gmail.com>
Cc: Christoph Lameter <cl at linux.com>
Cc: Pekka Enberg <penberg at kernel.org>
Cc: David Rientjes <rientjes at google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim at lge.com>
Link: http://lkml.kernel.org/r/202003051624.AAAC9AECC@keescook
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Merge fixup patches:
89b83f282d8ba ("slub: avoid redzone when choosing freepointer location")
e41a49fadbc80 ("mm/slub: actually fix freelist pointer vs redzoning")
This might make a double rcu-free less destructive and avoid corrupting
both the rcu-free and SLUB freelist lists, as rcu-free list heads are
frequently placed at the beginning of the object.
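For illustration, a hypothetical 64-byte object freed via call_rcu() could
look like the sketch below (names and sizes are made up for the example);
with the midpoint placement the SLUB freelist pointer lands at offset 32,
away from the rcu head at offset 0, so a double rcu-free corrupts at most
one of the two lists:

    #include <stdio.h>

    /* Stand-in for struct rcu_head on a 64-bit kernel (two pointers). */
    struct fake_rcu_head {
        void *next;
        void (*func)(struct fake_rcu_head *);
    };

    /* Hypothetical 64-byte cache object that is freed via call_rcu(). */
    struct example_obj {
        struct fake_rcu_head rcu;   /* offset 0, used by the RCU-free path */
        char payload[48];
    };

    int main(void)
    {
        size_t size = sizeof(struct example_obj);                 /* 64 */
        size_t off = size / 2 / sizeof(void *) * sizeof(void *);  /* ALIGN_DOWN(32, 8) */

        printf("rcu head: offsets 0..%zu, freelist pointer: offset %zu\n",
               sizeof(struct fake_rcu_head) - 1, off);
        return 0;
    }

Before the patch the inline freelist pointer could sit at offset 0 and
overlap the rcu head; with it, the two no longer share a word.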
https://virtuozzo.atlassian.net/browse/PSBM-155867
(cherry picked from commit 3202fa62fb43087387c65bfa9c100feffac74aa6)
Signed-off-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
---
mm/slub.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 33f85ff063a7..26a008eab3fe 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3400,7 +3400,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 
 	/*
 	 * With that we have determined the number of bytes in actual use
-	 * by the object. This is the potential offset to the free pointer.
+	 * by the object and redzoning.
 	 */
 	s->inuse = size;
 
@@ -3416,6 +3416,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 */
 		s->offset = size;
 		size += sizeof(void *);
+	} else {
+		/*
+		 * Store freelist pointer near middle of object to keep
+		 * it away from the edges of the object to avoid small
+		 * sized over/underflows from neighboring allocations.
+		 */
+		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG