[Devel] [PATCH RHEL7 COMMIT] ms/Revert "mm/page_alloc: fix memmap_init_zone pageblock alignment"

Konstantin Khorenko khorenko at virtuozzo.com
Fri Apr 27 13:00:55 MSK 2018


The commit is pushed to "branch-rh7-3.10.0-693.21.1.vz7.47.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-693.21.1.vz7.47.1
------>
commit 4a6b0a19a994b70944c16a63e744134d2454248c
Author: Ard Biesheuvel <ard.biesheuvel at linaro.org>
Date:   Fri Apr 27 13:00:55 2018 +0300

    ms/Revert "mm/page_alloc: fix memmap_init_zone pageblock alignment"
    
    This reverts commit 864b75f9d6b0100bb24fdd9a20d156e7cda9b5ae.
    
    Commit 864b75f9d6b0 ("mm/page_alloc: fix memmap_init_zone pageblock
    alignment") modified the logic in memmap_init_zone() to initialize
    struct pages associated with invalid PFNs, to appease a VM_BUG_ON()
    in move_freepages(), which is redundant by its own admission, and
    dereferences struct page fields to obtain the zone without checking
    whether the struct pages in question are valid to begin with.
    
    Commit 864b75f9d6b0 only makes it worse, since the rounding it does
    may cause pfn to assume the same value it had in a prior iteration of
    the loop, resulting in an infinite loop and a hang very early in the
    boot. Also, since it doesn't perform the same rounding on start_pfn
    itself but only on intermediate values following an invalid PFN, we
    may still hit the same VM_BUG_ON() as before.
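    
    A purely illustrative trace (hypothetical numbers, assuming
    pageblock_nr_pages == 512 and a memory hole covering pfns
    1024..1199) shows how that rounding can keep producing the same pfn:
    
        pfn = 1024;                               /* invalid, start of the hole  */
        memblock_next_valid_pfn(pfn, end_pfn);    /* -> 1200, next valid pfn     */
        1200 & ~(pageblock_nr_pages - 1);         /* -> 1024, rounded down       */
        pfn = 1024 - 1;                           /* -> 1023, then the loop's    */
                                                  /*    pfn++ yields 1024 again  */
    
    so memmap_init_zone() re-enters the hole on every pass instead of
    ever advancing to 1200.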
    
    So instead, let's fix this at the core, and ensure that the BUG
    check doesn't dereference struct page fields of invalid pages.
    
    Fixes: 864b75f9d6b0 ("mm/page_alloc: fix memmap_init_zone pageblock alignment")
    Tested-by: Jan Glauber <jglauber at cavium.com>
    Tested-by: Shanker Donthineni <shankerd at codeaurora.org>
    Cc: Daniel Vacek <neelx at redhat.com>
    Cc: Mel Gorman <mgorman at techsingularity.net>
    Cc: Michal Hocko <mhocko at suse.com>
    Cc: Paul Burton <paul.burton at imgtec.com>
    Cc: Pavel Tatashin <pasha.tatashin at oracle.com>
    Cc: Vlastimil Babka <vbabka at suse.cz>
    Cc: Andrew Morton <akpm at linux-foundation.org>
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel at linaro.org>
    Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
    
    https://jira.sw.ru/browse/PSBM-83746
    
    (cherry picked from commit 3e04040df6d4613a8af5a80882d5f7f298f49810)
    Signed-off-by: Konstantin Khorenko <khorenko at virtuozzo.com>
---
 mm/page_alloc.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2647fe4d484b..7fa5f026434c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1412,7 +1412,9 @@ int move_freepages(struct zone *zone,
 	 * Remove at a later date when no bug reports exist related to
 	 * grouping pages by mobility
 	 */
-	BUG_ON(page_zone(start_page) != page_zone(end_page));
+	BUG_ON(pfn_valid(page_to_pfn(start_page)) &&
+	       pfn_valid(page_to_pfn(end_page)) &&
+	       page_zone(start_page) != page_zone(end_page));
 #endif
 
 	for (page = start_page; page <= end_page;) {
@@ -4720,14 +4722,9 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 			/*
 			 * Skip to the pfn preceding the next valid one (or
 			 * end_pfn), such that we hit a valid pfn (or end_pfn)
-			 * on our next iteration of the loop. Note that it needs
-			 * to be pageblock aligned even when the region itself
-			 * is not. move_freepages_block() can shift ahead of
-			 * the valid region but still depends on correct page
-			 * metadata.
+			 * on our next iteration of the loop.
 			 */
-			pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
-					~(pageblock_nr_pages-1)) - 1;
+			pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
 #endif
 			continue;
 		}
