[Devel] [PATCH RHEL8 COMMIT] ms/mm: vmscan: move inactive_list_is_low() swap check to the caller

Konstantin Khorenko khorenko at virtuozzo.com
Thu Apr 2 16:02:54 MSK 2020


The commit is pushed to "branch-rh8-4.18.0-80.1.2.vz8.3.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh8-4.18.0-80.1.2.vz8.3.4
------>
commit bae3754d3565f170eef9cb59c23077084f02cbdf
Author: Johannes Weiner <hannes at cmpxchg.org>
Date:   Thu Apr 2 16:02:54 2020 +0300

    ms/mm: vmscan: move inactive_list_is_low() swap check to the caller
    
    inactive_list_is_low() should be about one thing: checking the ratio
    between the inactive and active lists.  Kitchen-sink checks like the one
    for swap space make the function hard to use and to modify its callsites.
    Luckily, most callers already have an understanding of the swap situation,
    so it's easy to clean up.
    
    get_scan_count() has its own, memcg-aware swap check, and doesn't even get
    to the inactive_list_is_low() check on the anon list when there is no swap
    space available.
    
    shrink_list() is called on the results of get_scan_count(), so that check
    is redundant too.
    
    age_active_anon() has its own total_swap_pages check right before it
    checks the list proportions.
    
    The shrink_node_memcg() site is the only one that doesn't do its own swap
    check.  Add it there.
    
    Then delete the swap check from inactive_list_is_low().
    
    Link: http://lkml.kernel.org/r/20191022144803.302233-4-hannes@cmpxchg.org
    Signed-off-by: Johannes Weiner <hannes at cmpxchg.org>
    Reviewed-by: Roman Gushchin <guro at fb.com>
    Reviewed-by: Shakeel Butt <shakeelb at google.com>
    Acked-by: Michal Hocko <mhocko at suse.com>
    Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
    
    (cherry picked from commit a108629149cc63cfb6fd446184e3e578e04bcfd1)
    Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
 mm/vmscan.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 97f2d297d4a1..7be29b8b4811 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2032,13 +2032,6 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
 	unsigned long refaults;
 	unsigned long gb;
 
-	/*
-	 * If we don't have swap space, anonymous page deactivation
-	 * is pointless.
-	 */
-	if (!file && !total_swap_pages)
-		return false;
-
 	inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
 	active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
 
@@ -2404,7 +2397,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (inactive_list_is_low(lruvec, false, sc, true))
+	if (total_swap_pages && inactive_list_is_low(lruvec, false, sc, true))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }
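
For illustration only (not part of the patch): a minimal, self-contained C
sketch of the pattern the change follows, i.e. hoisting a precondition out of
a shared helper into the one caller that does not already carry it.  All names
below (has_swap, list_is_low, rebalance_anon) are made up for the example and
only model the shape of the change, not the actual kernel code.

	#include <stdbool.h>
	#include <stdio.h>

	static bool has_swap;	/* stand-in for total_swap_pages */

	/*
	 * The helper answers exactly one question -- "is the inactive list
	 * low relative to the active list?" -- and nothing else, mirroring
	 * what inactive_list_is_low() looks like after this patch.
	 */
	static bool list_is_low(unsigned long inactive, unsigned long active)
	{
		return inactive * 2UL < active;	/* simplified ratio check */
	}

	/*
	 * The caller without its own swap awareness (shrink_node_memcg() in
	 * the patch) now carries the precondition itself.
	 */
	static void rebalance_anon(unsigned long inactive, unsigned long active)
	{
		if (has_swap && list_is_low(inactive, active))
			printf("deactivating active anon pages\n");
	}

	int main(void)
	{
		rebalance_anon(10, 100);	/* no swap: nothing happens */
		has_swap = true;
		rebalance_anon(10, 100);	/* swap present, list low: act */
		return 0;
	}

Keeping the helper a pure ratio check makes each call site's swap assumption
explicit, which is what makes it safe to drop the swap check from
inactive_list_is_low() itself.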

