[Devel] [PATCH rh7] mm: default to collapsing huge pages if at least 1/4 of ptes are mapped

Vladimir Davydov <vdavydov@virtuozzo.com>
Tue Jul 12 04:09:45 PDT 2016


A huge page may be collapsed by khugepaged if it contains no more than
khugepaged_max_ptes_none unmapped ptes (the limit is configurable via
sysfs). The latter defaults to 511 (HPAGE_PMD_NR - 1), which results in
a noticeable growth in memory footprint if a process has a sparse
address space. Experiments have shown (see the bug referenced below)
that decreasing the threshold to 384 (HPAGE_PMD_NR * 3/4) causes no
performance degradation for VMs and CTs, while at the same time
improving test results for VMs (because qemu has a sparse heap). With
this limit a page is only collapsed if at least a quarter of its ptes
(128 out of 512) are mapped, so let's make 384 the default.
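As a side note, the collapse decision effectively boils down to comparing
the number of unmapped (none) ptes in the pmd range against the threshold.
A minimal userspace sketch of that arithmetic (illustration only, not part
of the patch; the would_collapse() helper is made up):

#include <stdbool.h>
#include <stdio.h>

/* On x86_64 with 4K base pages a pmd maps 512 ptes. */
#define HPAGE_PMD_NR	512

/*
 * Hypothetical helper mirroring the khugepaged check: a range is
 * collapsed iff its unmapped pte count does not exceed
 * khugepaged_max_ptes_none.
 */
static bool would_collapse(unsigned int none_ptes, unsigned int max_ptes_none)
{
	return none_ptes <= max_ptes_none;
}

int main(void)
{
	/* Old default (511): a single mapped pte is enough to collapse. */
	printf("%d\n", would_collapse(511, HPAGE_PMD_NR - 1));		/* 1 */
	/* New default (384): at least 128 ptes (1/4) must be mapped. */
	printf("%d\n", would_collapse(400, HPAGE_PMD_NR * 3 / 4));	/* 0 */
	printf("%d\n", would_collapse(384, HPAGE_PMD_NR * 3 / 4));	/* 1 */
	return 0;
}

The knob remains tunable at runtime via
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none, so the old
behaviour can be restored without rebuilding the kernel.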

https://jira.sw.ru/browse/PSBM-48885

Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
---
 mm/huge_memory.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7543156e8d39..3c23df1d3392 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -58,11 +58,10 @@ static DEFINE_MUTEX(khugepaged_mutex);
 static DEFINE_SPINLOCK(khugepaged_mm_lock);
 static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
 /*
- * default collapse hugepages if there is at least one pte mapped like
- * it would have happened if the vma was large enough during page
- * fault.
+ * default collapse hugepages if there is at least 1/4th ptes mapped
+ * to avoid memory footprint growth due to fragmentation
  */
-static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
+static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR*3/4;
 
 static int khugepaged(void *none);
 static int khugepaged_slab_init(void);
-- 
2.1.4


