[Devel] [PATCH 25/39] mm: vmalloc: add flag preventing guard hole allocation

Andrey Ryabinin aryabinin at odin.com
Mon Aug 24 03:19:15 PDT 2015


From: Andrey Ryabinin <a.ryabinin at samsung.com>

commit 71394fe50146202f2c8d92cf50f5ebc761acf254 upstream.

For instrumenting global variables, KASan needs shadow memory backing the
memory used by modules.  So on module loading we will need to allocate
memory for the shadow and map it at the shadow address that corresponds to
the address allocated in module_alloc().
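
For reference, the shadow address that corresponds to a given kernel
address is computed by kasan_mem_to_shadow(), which comes from earlier
patches in this series rather than from this one; a minimal sketch of that
mapping, assuming the usual KASAN_SHADOW_SCALE_SHIFT and
KASAN_SHADOW_OFFSET definitions, is:

    /*
     * Sketch of the KASan address-to-shadow mapping; the scale shift
     * (1/8 scaling) and the shadow offset are defined elsewhere in this
     * series, not by this patch.
     */
    static inline void *kasan_mem_to_shadow(const void *addr)
    {
            return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                    + KASAN_SHADOW_OFFSET;
    }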

__vmalloc_node_range() could be used for this purpose, except that it puts
a guard hole after the allocated area.  A guard hole in shadow memory would
be a problem because at some future point we might need shadow memory at
the address occupied by that guard hole, and so we could fail to allocate
shadow for module_alloc().

Add a new vm_struct flag 'VM_NO_GUARD' indicating that the vm area doesn't
have a guard hole.
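
As an illustration only (not part of this patch), a caller could use the
new flag roughly as follows; the helper name no_guard_example() is made up,
while get_vm_area(), free_vm_area() and get_vm_area_size() are the existing
vmalloc interfaces:

    #include <linux/bug.h>
    #include <linux/errno.h>
    #include <linux/vmalloc.h>

    /* Hypothetical example: allocate a vm area without a guard page. */
    static int no_guard_example(void)
    {
            struct vm_struct *area;

            area = get_vm_area(4 * PAGE_SIZE, VM_NO_GUARD);
            if (!area)
                    return -ENOMEM;

            /* With VM_NO_GUARD set, no guard page is subtracted. */
            WARN_ON(get_vm_area_size(area) != 4 * PAGE_SIZE);

            free_vm_area(area);
            return 0;
    }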

Signed-off-by: Andrey Ryabinin <a.ryabinin at samsung.com>
Cc: Dmitry Vyukov <dvyukov at google.com>
Cc: Konstantin Serebryany <kcc at google.com>
Cc: Dmitry Chernenkov <dmitryc at google.com>
Signed-off-by: Andrey Konovalov <adech.fo at gmail.com>
Cc: Yuri Gribov <tetra2005 at gmail.com>
Cc: Konstantin Khlebnikov <koct9i at gmail.com>
Cc: Sasha Levin <sasha.levin at oracle.com>
Cc: Christoph Lameter <cl at linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim at lge.com>
Cc: Dave Hansen <dave.hansen at intel.com>
Cc: Andi Kleen <andi at firstfloor.org>
Cc: Ingo Molnar <mingo at elte.hu>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Pekka Enberg <penberg at kernel.org>
Cc: David Rientjes <rientjes at google.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Andrey Ryabinin <aryabinin at odin.com>
---
 include/linux/vmalloc.h | 9 +++++++--
 mm/vmalloc.c            | 6 ++----
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index dd0a2c8..00b9b15 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -16,6 +16,7 @@ struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
 #define VM_USERMAP	0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_VPAGES	0x00000010	/* buffer for pages was vmalloc'ed */
 #define VM_UNLIST	0x00000020	/* vm_struct is not listed in vmlist */
+#define VM_NO_GUARD	0x00000040      /* don't add guard page */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
@@ -96,8 +97,12 @@ void vmalloc_sync_all(void);
 
 static inline size_t get_vm_area_size(const struct vm_struct *area)
 {
-	/* return actual size without guard page */
-	return area->size - PAGE_SIZE;
+	if (!(area->flags & VM_NO_GUARD))
+		/* return actual size without guard page */
+		return area->size - PAGE_SIZE;
+	else
+		return area->size;
+
 }
 
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0c531e1..7a0addf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1356,10 +1356,8 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 	if (unlikely(!area))
 		return NULL;
 
-	/*
-	 * We always allocate a guard page.
-	 */
-	size += PAGE_SIZE;
+	if (!(flags & VM_NO_GUARD))
+		size += PAGE_SIZE;
 
 	va = alloc_vmap_area(size, align, start, end, node, gfp_mask);
 	if (IS_ERR(va)) {
-- 
2.4.6



