[Devel] [PATCH RHEL7 COMMIT] ms/mm/compaction.c: periodically schedule when freeing pages

Konstantin Khorenko khorenko at virtuozzo.com
Wed Jan 31 18:21:50 MSK 2018


The commit is pushed to "branch-rh7-3.10.0-693.11.6.vz7.42.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-693.11.6.vz7.42.4
------>
commit 9bdc45a8f824e41e5374e08503c806cbd07cb512
Author: David Rientjes <rientjes at google.com>
Date:   Wed Jan 31 18:21:49 2018 +0300

    ms/mm/compaction.c: periodically schedule when freeing pages
    
    Patchset description:
    compaction related stable backports
    
    These are some compaction-related -stable backports that we were missing.
    
    David Rientjes (9):
      ms/mm/compaction.c: periodically schedule when freeing pages
      ms/mm, compaction: avoid isolating pinned pages
      ms/mm, compaction: determine isolation mode only once
      ms/mm, compaction: ignore pageblock skip when manually invoking
        compaction
      ms/mm, migration: add destination page freeing callback
      ms/mm, compaction: return failed migration target pages back to
        freelist
      ms/mm, compaction: add per-zone migration pfn cache for async
        compaction
      ms/mm, compaction: embed migration mode in compact_control
      ms/mm, compaction: terminate async compaction when rescheduling
    
    Heesub Shin (1):
      ms/mm/compaction: clean up unused code lines
    
    Hugh Dickins (1):
      ms/mm: fix direct reclaim writeback regression
    
    Joonsoo Kim (6):
      ms/mm/compaction: disallow high-order page for migration target
      ms/mm/compaction: do not call suitable_migration_target() on every
        page
      ms/mm/compaction: change the timing to check to drop the spinlock
      ms/mm/compaction: check pageblock suitability once per pageblock
      ms/mm/compaction: clean-up code on success of ballon isolation
      ms/mm/compaction: fix wrong order check in compact_finished()
    
    Mel Gorman (1):
      ms/mm: compaction: trace compaction begin and end
    
    Vlastimil Babka (7):
      ms/mm: compaction: encapsulate defer reset logic
      ms/mm: compaction: do not mark unmovable pageblocks as skipped in
        async compaction
      ms/mm: compaction: reset scanner positions immediately when they meet
      ms/mm/compaction: cleanup isolate_freepages()
      ms/mm/compaction: do not count migratepages when unnecessary
      ms/mm/compaction: avoid rescanning pageblocks in isolate_freepages
      ms/mm, compaction: properly signal and act upon lock and need_sched()
        contention
    
    https://jira.sw.ru/browse/PSBM-81070
    
    ===================================================================
    This patch description:
    
    We've been getting warnings about an excessive amount of time spent
    allocating pages for migration during memory compaction without
    scheduling.  isolate_freepages_block() already periodically checks for
    contended locks or the need to schedule, but isolate_freepages() never
    does.
    
    When a zone is massively long and no suitable targets can be found, this
    iteration can be quite expensive without ever doing cond_resched().
    
    Check periodically for the need to reschedule while the compaction free
    scanner iterates.
    
    Signed-off-by: David Rientjes <rientjes at google.com>
    Reviewed-by: Rik van Riel <riel at redhat.com>
    Reviewed-by: Wanpeng Li <liwanp at linux.vnet.ibm.com>
    Acked-by: Mel Gorman <mgorman at suse.de>
    Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
    (cherry picked from commit f6ea3adb70b20ae36277a1b0eaaf4da9f6479a28)
    Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
 mm/compaction.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index 63f5f4627ea7..f693bf3b87e2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -698,6 +698,13 @@ static void isolate_freepages(struct zone *zone,
 		unsigned long isolated;
 		unsigned long end_pfn;
 
+		/*
+		 * This can iterate a massively long zone without finding any
+		 * suitable migration targets, so periodically check if we need
+		 * to schedule.
+		 */
+		cond_resched();
+
 		if (!pfn_valid(pfn))
 			continue;
 


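For illustration, below is a minimal userspace sketch of the pattern the
patch applies inside isolate_freepages(): a long scan loop that voluntarily
yields the CPU at a fixed stride instead of monopolizing it for the whole
walk. scan_one(), NR_ITEMS and YIELD_STRIDE are illustrative stand-ins
rather than kernel symbols; in the kernel the yield point is cond_resched(),
called once per pageblock at the top of the free scanner loop, which is
cheap because it only actually reschedules when the scheduler has asked for
it.

#include <sched.h>
#include <stddef.h>

#define NR_ITEMS	(1UL << 20)	/* length of the scan, illustrative */
#define YIELD_STRIDE	1024UL		/* how often to offer to yield */

/* Stand-in for the per-pageblock work done by the real scanner. */
static void scan_one(size_t i)
{
	(void)i;
}

int main(void)
{
	for (size_t i = 0; i < NR_ITEMS; i++) {
		/*
		 * Periodically give up the CPU, analogous to the
		 * cond_resched() call the patch adds to isolate_freepages().
		 */
		if (i % YIELD_STRIDE == 0)
			sched_yield();

		scan_one(i);
	}
	return 0;
}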