[Devel] [PATCH RHEL7 COMMIT] ms/mm/compaction: avoid rescanning pageblocks in isolate_freepages

Konstantin Khorenko khorenko at virtuozzo.com
Wed Jan 31 18:22:03 MSK 2018


The commit is pushed to "branch-rh7-3.10.0-693.11.6.vz7.42.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-693.11.6.vz7.42.4
------>
commit 2b6e65e79c5560941f2c65caa77273db45d20316
Author: Vlastimil Babka <vbabka at suse.cz>
Date:   Wed Jan 31 18:22:02 2018 +0300

    ms/mm/compaction: avoid rescanning pageblocks in isolate_freepages
    
    The compaction free scanner in isolate_freepages() currently remembers PFN
    of the highest pageblock where it successfully isolates, to be used as the
    starting pageblock for the next invocation.  The rationale behind this is
    that page migration might return free pages to the allocator when
    migration fails and we don't want to skip them if the compaction
    continues.
    
    Since migration now returns free pages back to compaction code where they
    can be reused, this is no longer a concern.  This patch changes
    isolate_freepages() so that the PFN for restarting is updated with each
    pageblock where isolation is attempted.  Using stress-highalloc from
    mmtests, this resulted in a 10% reduction in the pages scanned by the
    free scanner.
    
    Note that the somewhat similar functionality that records the highest
    successful pageblock in zone->compact_cached_free_pfn remains unchanged.
    This cache is used when the whole compaction is restarted, not for
    multiple invocations of the free scanner during single compaction.
    
    Signed-off-by: Vlastimil Babka <vbabka at suse.cz>
    Cc: Minchan Kim <minchan at kernel.org>
    Cc: Mel Gorman <mgorman at suse.de>
    Cc: Joonsoo Kim <iamjoonsoo.kim at lge.com>
    Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie at samsung.com>
    Acked-by: Michal Nazarewicz <mina86 at mina86.com>
    Reviewed-by: Naoya Horiguchi <n-horiguchi at ah.jp.nec.com>
    Cc: Christoph Lameter <cl at linux.com>
    Cc: Rik van Riel <riel at redhat.com>
    Acked-by: David Rientjes <rientjes at google.com>
    Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
    (cherry picked from commit e9ade569910a82614ff5f2c2cea2b65a8d785da4)
    Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
 mm/compaction.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 316a7b34ce37..d8ee1536819f 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -692,7 +692,6 @@ static void isolate_freepages(struct zone *zone,
 	unsigned long block_start_pfn;	/* start of current pageblock */
 	unsigned long block_end_pfn;	/* end of current pageblock */
 	unsigned long low_pfn;	     /* lowest pfn scanner is able to scan */
-	unsigned long next_free_pfn; /* start pfn for scaning at next round */
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;
 
@@ -712,12 +711,6 @@ static void isolate_freepages(struct zone *zone,
 						zone_end_pfn(zone));
 	low_pfn = ALIGN(cc->migrate_pfn + 1, pageblock_nr_pages);
 
-	/*
-	 * If no pages are isolated, the block_start_pfn < low_pfn check
-	 * will kick in.
-	 */
-	next_free_pfn = 0;
-
 	/*
 	 * Isolate free pages until enough are available to migrate the
 	 * pages on cc->migratepages. We stop searching if the migrate
@@ -758,19 +751,19 @@ static void isolate_freepages(struct zone *zone,
 			continue;
 
 		/* Found a block suitable for isolating free pages from */
+		cc->free_pfn = block_start_pfn;
 		isolated = isolate_freepages_block(cc, block_start_pfn,
 					block_end_pfn, freelist, false);
 		nr_freepages += isolated;
 
 		/*
-		 * Record the highest PFN we isolated pages from. When next
-		 * looking for free pages, the search will restart here as
-		 * page migration may have returned some pages to the allocator
+		 * Set a flag that we successfully isolated in this pageblock.
+		 * In the next loop iteration, zone->compact_cached_free_pfn
+		 * will not be updated and thus it will effectively contain the
+		 * highest pageblock we isolated pages from.
 		 */
-		if (isolated && next_free_pfn == 0) {
+		if (isolated)
 			cc->finished_update_free = true;
-			next_free_pfn = block_start_pfn;
-		}
 	}
 
 	/* split_free_page does not map the pages */
@@ -781,9 +774,8 @@ static void isolate_freepages(struct zone *zone,
 	 * so that compact_finished() may detect this
 	 */
 	if (block_start_pfn < low_pfn)
-		next_free_pfn = cc->migrate_pfn;
+		cc->free_pfn = cc->migrate_pfn;
 
-	cc->free_pfn = next_free_pfn;
 	cc->nr_freepages = nr_freepages;
 }
 
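For illustration only (not part of the patch): below is a minimal, self-contained userspace sketch of the new restart-PFN behaviour. The names demo_compact_control, demo_isolate_block() and DEMO_PAGEBLOCK_NR_PAGES are hypothetical stand-ins for the kernel's compact_control, isolate_freepages_block() and pageblock_nr_pages; the point is only that cc->free_pfn is now recorded for every pageblock the free scanner attempts, not just the highest one where isolation succeeded.

/*
 * Hypothetical, simplified model of the patched free-scanner restart
 * logic.  struct demo_compact_control only mimics the relevant
 * compact_control fields; it is not the real kernel structure.
 */
#include <stdbool.h>
#include <stdio.h>

#define DEMO_PAGEBLOCK_NR_PAGES 512UL	/* assumed pageblock size in pages */

struct demo_compact_control {
	unsigned long free_pfn;		/* where the free scanner restarts */
	unsigned long migrate_pfn;	/* where the migrate scanner stands */
	bool finished_update_free;	/* isolation succeeded at least once */
};

/* Stand-in for isolate_freepages_block(): every other block yields 8 pages. */
static unsigned long demo_isolate_block(unsigned long block_start_pfn)
{
	return (block_start_pfn / DEMO_PAGEBLOCK_NR_PAGES) % 2 ? 8 : 0;
}

/*
 * Behaviour after the patch: cc->free_pfn is updated for every pageblock
 * where isolation is attempted, so the next invocation never rescans
 * pageblocks this pass already looked at.
 */
static void demo_free_scanner(struct demo_compact_control *cc,
			      unsigned long zone_end_pfn)
{
	unsigned long block_start_pfn =
		(zone_end_pfn - 1) & ~(DEMO_PAGEBLOCK_NR_PAGES - 1);
	unsigned long low_pfn = cc->migrate_pfn + DEMO_PAGEBLOCK_NR_PAGES;
	unsigned long nr_freepages = 0;
	unsigned long isolated;

	for (; block_start_pfn >= low_pfn && nr_freepages < 32;
	     block_start_pfn -= DEMO_PAGEBLOCK_NR_PAGES) {
		/* Record the attempt up front, as the patch now does. */
		cc->free_pfn = block_start_pfn;
		isolated = demo_isolate_block(block_start_pfn);
		nr_freepages += isolated;
		if (isolated)
			cc->finished_update_free = true;
	}

	/* Scanners met: let compact_finished() notice it next time. */
	if (block_start_pfn < low_pfn)
		cc->free_pfn = cc->migrate_pfn;
}

int main(void)
{
	struct demo_compact_control cc = {
		.migrate_pfn = 4096,
	};

	demo_free_scanner(&cc, 1UL << 20);
	printf("free scan restarts at pfn %lu, isolated anything: %d\n",
	       cc.free_pfn, cc.finished_update_free);
	return 0;
}

Built as an ordinary userspace program, this prints the PFN the next scan would resume from; in the kernel the equivalent effect is that isolate_freepages() no longer revisits pageblocks where isolation was attempted but found nothing.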


More information about the Devel mailing list