[Devel] [PATCH RHEL8 COMMIT] coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping

Konstantin Khorenko khorenko at virtuozzo.com
Mon Apr 20 10:34:01 MSK 2020


The commit is pushed to "branch-rh8-4.18.0-80.1.2.vz8.3.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh8-4.18.0-80.1.2.vz8.3.6
------>
commit 69697dcce9f7dcfd07ac1c45135091f33c53cbe4
Author: Andrea Arcangeli <aarcange at redhat.com>
Date:   Mon Apr 20 10:34:01 2020 +0300

    coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping
    
    The core dumping code has always run without holding the mmap_sem for
    writing, even though that is the only way to ensure that the entire
    vma layout will not change from under it.  Relying only on some
    signal serialization of the processes belonging to the mm is not
    nearly enough.  This was pointed out earlier, for example in Hugh's
    post from Jul 2017:
    
      https://lkml.kernel.org/r/alpine.LSU.2.11.1707191716030.2055@eggly.anvils
    
      "Not strictly relevant here, but a related note: I was very surprised
       to discover, only quite recently, how handle_mm_fault() may be called
       without down_read(mmap_sem) - when core dumping. That seems a
       misguided optimization to me, which would also be nice to correct"
    
    In particular, because growsdown and growsup stack expansion can move
    vm_start/vm_end, the various loops the core dump runs over the vmas
    will not be consistent if page faults can happen concurrently.
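    
    To make that concrete, here is a rough sketch (not code from this
    patch) of the kind of two-pass vma walk a coredump performs;
    note_size() and write_pages() are hypothetical placeholders:
    
        /*
         * Pass 1: record the size of each vma for the headers.
         */
        for (vma = mm->mmap; vma; vma = vma->vm_next)
                note_size(vma->vm_end - vma->vm_start);
    
        /*
         * Pass 2: write out the data.  Without mmap_sem held for
         * writing, a concurrent expand_stack() may have moved
         * vm_start (VM_GROWSDOWN) or vm_end (VM_GROWSUP) since
         * pass 1, so the data written here can disagree with the
         * headers emitted above.
         */
        for (vma = mm->mmap; vma; vma = vma->vm_next)
                write_pages(vma->vm_start, vma->vm_end);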
    
    Pretty much all users calling mmget_not_zero()/get_task_mm() and then
    taking the mmap_sem had the potential to introduce unexpected side
    effects in the core dumping code.
    
    Taking the mmap_sem for writing around the ->core_dump invocation is
    a viable long term fix, but it requires removing all copy-user and
    page fault activity and replacing it with get_dump_page() in all
    binary formats, which is not suitable as a short term fix.
    
    For the time being this solution manually covers the places that can
    confuse the core dump, either by altering the vma layout or the vma
    flags while it runs.  Once ->core_dump runs under mmap_sem for
    writing, the function mmget_still_valid() can be dropped.
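    
    As a usage illustration only (example_clear_vma_flag() is a made-up
    caller, not part of this patch), the intended pattern after this
    change looks roughly like the following, mirroring what the
    clear_refs_write() and userfaultfd_release() hunks below do:
    
        #include <linux/mm.h>
        #include <linux/sched/mm.h>
    
        static void example_clear_vma_flag(struct task_struct *task)
        {
                struct mm_struct *mm = get_task_mm(task);
                struct vm_area_struct *vma;
    
                if (!mm)
                        return;
    
                down_write(&mm->mmap_sem);
                /* re-check after taking the lock: bail out if a coredump started */
                if (!mmget_still_valid(mm))
                        goto out_unlock;
    
                /* simplified: the real clear_refs path also calls vma_set_page_prot() */
                for (vma = mm->mmap; vma; vma = vma->vm_next)
                        vma->vm_flags &= ~VM_SOFTDIRTY;
        out_unlock:
                up_write(&mm->mmap_sem);
                mmput(mm);
        }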
    
    Allowing mmap_sem-protected sections to run in parallel with the
    coredump provides some minor parallelism advantage to the swapoff
    code (which seems safe enough, since it never mangles any vma field
    and can keep doing swapins in parallel to the core dumping) and to
    some other corner cases.
    
    In order to facilitate backporting I added "Fixes: 86039bd3b4e6";
    however, the side effect of this same race condition in /proc/pid/mem
    has been reproducible since before 2.6.12-rc2, so I couldn't add any
    other "Fixes:" because there is no hash beyond the git genesis
    commit.
    
    Because find_extend_vma() is the only place outside of process
    context that can modify the "mm" structures under mmap_sem for
    reading, adding the mmget_still_valid() check there means that all
    other cases that take the mmap_sem for reading don't need the new
    check after mmget_not_zero()/get_task_mm().  The expand_stack() call
    in page fault context also doesn't need the new check, because all
    tasks under core dumping are frozen.
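    
    For instance (example_pin_remote_page() is hypothetical, not part of
    this patch, and assumes the 4.18-era get_user_pages_remote()
    prototype), a remote-GUP style reader that only takes the mmap_sem
    for reading keeps working unchanged, because the only way it can
    modify the mm, growing a stack vma via find_extend_vma(), is now
    guarded by the check inside find_extend_vma() itself:
    
        static long example_pin_remote_page(struct task_struct *task,
                                            struct mm_struct *mm,
                                            unsigned long addr,
                                            struct page **page)
        {
                long ret;
    
                /* read-side user: no mmget_still_valid() call needed here */
                down_read(&mm->mmap_sem);
                ret = get_user_pages_remote(task, mm, addr, 1, 0, page,
                                            NULL, NULL);
                up_read(&mm->mmap_sem);
                return ret;
        }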
    
    Link: http://lkml.kernel.org/r/20190325224949.11068-1-aarcange@redhat.com
    Fixes: 86039bd3b4e6 ("userfaultfd: add new syscall to provide memory externalization")
    Signed-off-by: Andrea Arcangeli <aarcange at redhat.com>
    Reported-by: Jann Horn <jannh at google.com>
    Suggested-by: Oleg Nesterov <oleg at redhat.com>
    Acked-by: Peter Xu <peterx at redhat.com>
    Reviewed-by: Mike Rapoport <rppt at linux.ibm.com>
    Reviewed-by: Oleg Nesterov <oleg at redhat.com>
    Reviewed-by: Jann Horn <jannh at google.com>
    Acked-by: Jason Gunthorpe <jgg at mellanox.com>
    Acked-by: Michal Hocko <mhocko at suse.com>
    Cc: <stable at vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
    
    https://jira.sw.ru/browse/PSBM-102938
    (cherry picked from commit 04f5866e41fb70690e28397487d8bd8eea7d712a)
    Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
 fs/proc/task_mmu.c       | 18 ++++++++++++++++++
 fs/userfaultfd.c         |  9 +++++++++
 include/linux/sched/mm.h | 21 +++++++++++++++++++++
 mm/mmap.c                |  7 ++++++-
 4 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dfd73a4616ce..21a3cd4372b4 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1137,6 +1137,24 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 					count = -EINTR;
 					goto out_mm;
 				}
+				/*
+				 * Avoid to modify vma->vm_flags
+				 * without locked ops while the
+				 * coredump reads the vm_flags.
+				 */
+				if (!mmget_still_valid(mm)) {
+					/*
+					 * Silently return "count"
+					 * like if get_task_mm()
+					 * failed. FIXME: should this
+					 * function have returned
+					 * -ESRCH if get_task_mm()
+					 * failed like if
+					 * get_proc_task() fails?
+					 */
+					up_write(&mm->mmap_sem);
+					goto out_mm;
+				}
 				for (vma = mm->mmap; vma; vma = vma->vm_next) {
 					vma->vm_flags &= ~VM_SOFTDIRTY;
 					vma_set_page_prot(vma);
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index a4037ccdaa1d..df1e3d45aeb4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -632,6 +632,8 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 
 		/* the various vma->vm_userfaultfd_ctx still points to it */
 		down_write(&mm->mmap_sem);
+		/* no task can run (and in turn coredump) yet */
+		VM_WARN_ON(!mmget_still_valid(mm));
 		for (vma = mm->mmap; vma; vma = vma->vm_next)
 			if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
 				vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
@@ -886,6 +888,8 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 	 * taking the mmap_sem for writing.
 	 */
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto skip_mm;
 	prev = NULL;
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		cond_resched();
@@ -908,6 +912,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 		vma->vm_flags = new_flags;
 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
 	}
+skip_mm:
 	up_write(&mm->mmap_sem);
 	mmput(mm);
 wakeup:
@@ -1336,6 +1341,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto out_unlock;
 	vma = find_vma_prev(mm, start, &prev);
 	if (!vma)
 		goto out_unlock;
@@ -1523,6 +1530,8 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	down_write(&mm->mmap_sem);
+	if (!mmget_still_valid(mm))
+		goto out_unlock;
 	vma = find_vma_prev(mm, start, &prev);
 	if (!vma)
 		goto out_unlock;
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index 44d356f5e47c..32c40b2d09d9 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -49,6 +49,27 @@ static inline void mmdrop(struct mm_struct *mm)
 		__mmdrop(mm);
 }
 
+/*
+ * This has to be called after a get_task_mm()/mmget_not_zero()
+ * followed by taking the mmap_sem for writing before modifying the
+ * vmas or anything the coredump pretends not to change from under it.
+ *
+ * NOTE: find_extend_vma() called from GUP context is the only place
+ * that can modify the "mm" (notably the vm_start/end) under mmap_sem
+ * for reading and outside the context of the process, so it is also
+ * the only case that holds the mmap_sem for reading that must call
+ * this function. Generally if the mmap_sem is hold for reading
+ * there's no need of this check after get_task_mm()/mmget_not_zero().
+ *
+ * This function can be obsoleted and the check can be removed, after
+ * the coredump code will hold the mmap_sem for writing before
+ * invoking the ->core_dump methods.
+ */
+static inline bool mmget_still_valid(struct mm_struct *mm)
+{
+	return likely(!mm->core_state);
+}
+
 /**
  * mmget() - Pin the address space associated with a &struct mm_struct.
  * @mm: The address space to pin.
diff --git a/mm/mmap.c b/mm/mmap.c
index 4c2c24b3e765..210bf99c5a5c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -45,6 +45,7 @@
 #include <linux/moduleparam.h>
 #include <linux/pkeys.h>
 #include <linux/oom.h>
+#include <linux/sched/mm.h>
 
 #include <linux/uaccess.h>
 #include <asm/cacheflush.h>
@@ -2503,7 +2504,8 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
 	vma = find_vma_prev(mm, addr, &prev);
 	if (vma && (vma->vm_start <= addr))
 		return vma;
-	if (!prev || expand_stack(prev, addr))
+	/* don't alter vm_end if the coredump is running */
+	if (!prev || !mmget_still_valid(mm) || expand_stack(prev, addr))
 		return NULL;
 	if (prev->vm_flags & VM_LOCKED)
 		populate_vma_page_range(prev, addr, prev->vm_end, NULL);
@@ -2529,6 +2531,9 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
 		return vma;
 	if (!(vma->vm_flags & VM_GROWSDOWN))
 		return NULL;
+	/* don't alter vm_start if the coredump is running */
+	if (!mmget_still_valid(mm))
+		return NULL;
 	start = vma->vm_start;
 	if (expand_stack(vma, addr))
 		return NULL;

