[Devel] [PATCH RH8 05/11] ms/mm: select HAVE_MOVE_PMD on x86 for faster mremap

Andrey Zhadchenko andrey.zhadchenko at virtuozzo.com
Fri Jun 4 17:45:32 MSK 2021


From: "Joel Fernandes (Google)" <joel at joelfernandes.org>

Moving page-tables at the PMD-level on x86 is known to be safe.  Enable
this option so that we can do fast mremap when possible.

Link: http://lkml.kernel.org/r/20181108181201.88826-4-joelaf@google.com
Signed-off-by: Joel Fernandes (Google) <joel at joelfernandes.org>
Suggested-by: Kirill A. Shutemov <kirill at shutemov.name>
Acked-by: Kirill A. Shutemov <kirill at shutemov.name>
Cc: Julia Lawall <Julia.Lawall at lip6.fr>
Cc: Michal Hocko <mhocko at kernel.org>
Cc: William Kucharski <william.kucharski at oracle.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>

(cherry picked from commit 9f132f7e145506efc0744426cb338b18a54afc3b)
Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>

Patchset description:

Patch "mm: speed up mremap by 20x on large regions" introduces an
optimization: when a moved region's source address, destination address,
and size are all multiples of PMD_SIZE, the PTEs are not actually copied.
Instead, the new PMD entry is changed to point to the old PTE table,
while the old PMD entry is cleared.

This may be useful when CRIU remaps large memory areas on restore
(though in practice PMD_SIZE alignment is not very common, it is possible).

(cherry-picked from ms 9f132f7e145506efc0744426cb338b18a54afc3b)
Signed-off-by: Andrey Zhadchenko <andrey.zhadchenko at virtuozzo.com>

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index d90d31e..fb22076 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -186,6 +186,7 @@ config X86
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_MIXED_BREAKPOINTS_REGS
 	select HAVE_MOD_ARCH_SPECIFIC
+	select HAVE_MOVE_PMD
 	select HAVE_NMI
 	select HAVE_OPROFILE
 	select HAVE_OPTPROBES
-- 
1.8.3.1
