[Devel] [PATCH vz8 28/42] mm: thp: fix flags for pmd migration when split
Andrey Ryabinin
aryabinin at virtuozzo.com
Thu Apr 16 13:36:06 MSK 2020
From: Peter Xu <peterx at redhat.com>
When splitting a huge migrating PMD, we'll transfer all the existing PMD
bits and apply them again onto the small PTEs. However, we are fetching
the bits unconditionally via pmd_soft_dirty(), pmd_write() or
pmd_young(), even though they don't make sense at all when the PMD is a
migration entry. Fix them up. While at it, drop the ifdef as well, since
it is no longer needed.
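For clarity, here is a condensed sketch of the corrected flow in
__split_huge_pmd_locked() (mirroring the diff below): the bits are only
read from places that are valid for the given entry type.

	pmd_migration = is_pmd_migration_entry(old_pmd);
	if (unlikely(pmd_migration)) {
		/* Non-present entry: decode the swap entry rather
		 * than reading flag bits that are not there. */
		swp_entry_t entry = pmd_to_swp_entry(old_pmd);

		page = pfn_to_page(swp_offset(entry));
		write = is_write_migration_entry(entry);
		young = false;	/* no hardware access bit while migrating */
		soft_dirty = pmd_swp_soft_dirty(old_pmd);
	} else {
		/* Present pmd: the usual accessors are valid. */
		page = pmd_page(old_pmd);
		if (pmd_dirty(old_pmd))
			SetPageDirty(page);
		write = pmd_write(old_pmd);
		young = pmd_young(old_pmd);
		soft_dirty = pmd_soft_dirty(old_pmd);
	}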
Note that, if my understanding of the problem is correct, then without
the patch there is a chance of losing some of the soft-dirty bits in
migrating PMD pages (on x86_64 we are fetching bit 11, which is part of
the swap offset, instead of bit 2), and it could potentially corrupt the
memory of a userspace program which depends on that bit.
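To make the x86_64 example concrete: for a present pmd the soft-dirty
state lives at bit 11 (_PAGE_SOFT_DIRTY, i.e. _PAGE_BIT_SOFTW3), but in
the non-present swap/migration entry format that bit belongs to the
swap offset, and soft-dirty is instead kept at bit 2
(_PAGE_SWP_SOFT_DIRTY). A minimal illustration, with the bit positions
hard-coded for x86_64 as described above (simplified, not the kernel's
actual accessors):

	/* pmd_soft_dirty() effectively tests bit 11, which is only
	 * meaningful while the pmd is present: */
	#define SOFT_DIRTY_PRESENT	(1ULL << 11)
	/* pmd_swp_soft_dirty() tests bit 2, the slot that does not
	 * collide with the swap type/offset encoding: */
	#define SOFT_DIRTY_SWAP		(1ULL << 2)

	static bool pmd_val_soft_dirty(u64 pmdval, bool is_migration)
	{
		/* On a migration entry, bit 11 holds part of the swap
		 * offset, so testing it returns an arbitrary value
		 * that depends on the page's position in swap space. */
		return pmdval & (is_migration ? SOFT_DIRTY_SWAP
					      : SOFT_DIRTY_PRESENT);
	}

A typical consumer of this bit is a checkpoint/restore tool reading
soft-dirty state via /proc/<pid>/pagemap, which could see stale or
spurious dirtiness after such a split.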
Link: http://lkml.kernel.org/r/20181213051510.20306-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx at redhat.com>
Reviewed-by: Konstantin Khlebnikov <khlebnikov at yandex-team.ru>
Reviewed-by: William Kucharski <william.kucharski at oracle.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov at linux.intel.com>
Cc: Andrea Arcangeli <aarcange at redhat.com>
Cc: Matthew Wilcox <willy at infradead.org>
Cc: Michal Hocko <mhocko at suse.com>
Cc: Dave Jiang <dave.jiang at intel.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar at linux.vnet.ibm.com>
Cc: Souptick Joarder <jrdr.linux at gmail.com>
Cc: Konstantin Khlebnikov <khlebnikov at yandex-team.ru>
Cc: Zi Yan <zi.yan at cs.rutgers.edu>
Cc: <stable at vger.kernel.org> [4.14+]
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
https://jira.sw.ru/browse/PSBM-102938
(cherry picked from commit 2e83ee1d8694a61d0d95a5b694f2e61e8dde8627)
Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
mm/huge_memory.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 77a827a55703..7b652b3a70e6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2124,23 +2124,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
*/
old_pmd = pmdp_invalidate(vma, haddr, pmd);
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
pmd_migration = is_pmd_migration_entry(old_pmd);
- if (pmd_migration) {
+ if (unlikely(pmd_migration)) {
swp_entry_t entry;
entry = pmd_to_swp_entry(old_pmd);
page = pfn_to_page(swp_offset(entry));
- } else
-#endif
+ write = is_write_migration_entry(entry);
+ young = false;
+ soft_dirty = pmd_swp_soft_dirty(old_pmd);
+ } else {
page = pmd_page(old_pmd);
+ if (pmd_dirty(old_pmd))
+ SetPageDirty(page);
+ write = pmd_write(old_pmd);
+ young = pmd_young(old_pmd);
+ soft_dirty = pmd_soft_dirty(old_pmd);
+ }
VM_BUG_ON_PAGE(!page_count(page), page);
page_ref_add(page, HPAGE_PMD_NR - 1);
- if (pmd_dirty(old_pmd))
- SetPageDirty(page);
- write = pmd_write(old_pmd);
- young = pmd_young(old_pmd);
- soft_dirty = pmd_soft_dirty(old_pmd);
/*
* Withdraw the table only after we mark the pmd entry invalid.
--
2.25.3