[Devel] [PATCH RHEL10 COMMIT] mm/memcontrol: make memory.numa_migrate support Multi-Gen LRU

Konstantin Khorenko khorenko at virtuozzo.com
Fri Sep 5 22:58:13 MSK 2025


The commit is pushed to "branch-rh10-6.12.0-55.13.1.2.x.vz10-ovz" and will appear at git@bitbucket.org:openvz/vzkernel.git
after rh10-6.12.0-55.13.1.2.3.vz10
------>
commit 75d9cb06586109f96b1078d05ba0ec1f58d6cada
Author: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
Date:   Tue Sep 2 18:59:18 2025 +0800

    mm/memcontrol: make memory.numa_migrate support Multi-Gen LRU
    
    Currently, when Multi-Gen LRU is enabled, all lruvec->lists[lru] lists
    are empty, so our memory.numa_migrate feature cannot find any
    pages/folios to migrate and simply does not work.
    
    To fix that, use the lruvec->lrugen.folios[gen][type][zone] lists
    instead when Multi-Gen LRU is enabled. The idea is to keep the code
    universal so that it works with both the old and the new LRU (as they
    can be switched dynamically on a running system).
    
    We also need to change the auto-detection of the maximum limit for
    Multi-Gen LRU: max(READ_ONCE(lruvec->lrugen.nr_pages[gen][type][zone]), 0L)
    should be the right value, since nr_pages is kept up to date in
    lru_gen_update_size().
    
    Include mmzone.h to use the previously exported for_each_gen_type_zone()
    helper for iterating over the Multi-Gen LRU lists.
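    
    For reference, a minimal sketch of the Multi-Gen LRU branch this patch
    adds (identifiers are taken from the diff below; this is an
    illustration, not a standalone implementation):
    
        int gen, type, zone;
    
        for_each_gen_type_zone(gen, type, zone) {
                /* clamp at 0, mirroring the max(..., 0L) used in the patch */
                long nr = max(READ_ONCE(lruvec->lrugen.nr_pages[gen][type][zone]), 0L);
                struct list_head *src = &lruvec->lrugen.folios[gen][type][zone];
    
                /* isolate and migrate up to nr folios from src to the target nodes */
        }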
    
    https://virtuozzo.atlassian.net/browse/VSTOR-114298
    Fixes: c92459bc18307 ("mm: memcontrol: add memory.numa_migrate file")
    Signed-off-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
    
    Feature: mm: interface to migrate memory between NUMA nodes upon userspace request
---
 mm/memcontrol.c | 58 +++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 18 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d4461342c3d5a..a61e2a8326c13 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -63,6 +63,7 @@
 #include <linux/seq_buf.h>
 #include <linux/sched/isolation.h>
 #include <linux/kmemleak.h>
+#include <linux/mmzone.h>
 #include <linux/virtinfo.h>
 #include <linux/migrate.h>
 #include <linux/ve.h>
@@ -3701,28 +3702,49 @@ static int memcg_numa_migrate_pages(struct mem_cgroup *memcg,
 				continue;
 
 			lruvec = mem_cgroup_lruvec(iter, NODE_DATA(nid));
-			/*
-			 * For the sake of simplicity, do not attempt to migrate
-			 * unevictable pages. It should be fine as long as there
-			 * aren't too many of them, which is usually true.
-			 */
-			for_each_evictable_lru(lru) {
+#ifdef CONFIG_LRU_GEN
+			if (!lruvec->lrugen.enabled) {
+#endif
 				/*
-				 * If no limit on the maximal number of
-				 * migrated pages is specified assume the
-				 * caller wants to migrate them all.
+				 * For the sake of simplicity, do not attempt to migrate
+				 * unevictable pages. It should be fine as long as there
+				 * aren't too many of them, which is usually true.
 				 */
-				scan = nr_to_scan > 0 ? SWAP_CLUSTER_MAX :
-				       lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
-				src = &lruvec->lists[lru];
-
-				ret = __memcg_numa_migrate_pages(lruvec, src, target_nodes, scan);
-				if (ret < 0) {
-					mem_cgroup_iter_break(memcg, iter);
-					return ret;
+				for_each_evictable_lru(lru) {
+					/*
+					 * If no limit on the maximal number of
+					 * migrated pages is specified assume the
+					 * caller wants to migrate them all.
+					 */
+					scan = nr_to_scan > 0 ? SWAP_CLUSTER_MAX :
+					       lruvec_page_state_local(lruvec, NR_LRU_BASE + lru);
+					src = &lruvec->lists[lru];
+
+					ret = __memcg_numa_migrate_pages(lruvec, src, target_nodes, scan);
+					if (ret < 0) {
+						mem_cgroup_iter_break(memcg, iter);
+						return ret;
+					}
+					scanned += ret;
+				}
+#ifdef CONFIG_LRU_GEN
+			} else {
+				int gen, type, zone;
+
+				for_each_gen_type_zone(gen, type, zone) {
+					scan = nr_to_scan > 0 ? SWAP_CLUSTER_MAX :
+					       max(READ_ONCE(lruvec->lrugen.nr_pages[gen][type][zone]), 0L);
+					src = &lruvec->lrugen.folios[gen][type][zone];
+
+					ret = __memcg_numa_migrate_pages(lruvec, src, target_nodes, scan);
+					if (ret < 0) {
+						mem_cgroup_iter_break(memcg, iter);
+						return ret;
+					}
+					scanned += ret;
 				}
-				scanned += ret;
 			}
+#endif
 		}
 	}
 

