[CRIU] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap

Ingo Molnar mingo at kernel.org
Thu Mar 26 02:48:44 PDT 2015


* Benjamin Herrenschmidt <benh at kernel.crashing.org> wrote:

> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > +			      unsigned long old_start, unsigned long old_end,
> > > +			      unsigned long new_start, unsigned long new_end)
> > > +{
> > > +	/*
> > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > +	 * check to old_start == vdso_base.
> > > +	 */
> > > +	if (old_start == mm->context.vdso_base)
> > > +		mm->context.vdso_base = new_start;
> > > +}
> > 
> > mremap() doesn't allow moving multiple vmas, but it allows the 
> > movement of multi-page vmas and it also allows partial mremap()s, 
> > where it will split up a vma.
> > 
> > In particular, what happens if an mremap() is done with 
> > old_start == vdso_base, but a shorter end than the end of the vDSO? 
> > (i.e. a partial mremap() with fewer pages than the vDSO size)
> 
> Is there a way to forbid splitting? Does x86 deal with that case at
> all, or does it not have to for some other reason?

So on x86 we use _install_special_mapping() - maybe PowerPC does that
too? That adds VM_DONTEXPAND, which ought to prevent some - but not
all - of the VM API weirdnesses.
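
For reference, roughly how the x86 side installs the vDSO text (a
simplified sketch along the lines of arch/x86/vdso/vma.c; the helper
name and the addr/size arguments here are illustrative, not the exact
code):

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/mm_types.h>

static const struct vm_special_mapping vdso_text_mapping = {
	.name = "[vdso]",
	/* .pages would point at the vDSO image's page array */
};

/*
 * Sketch only: install the vDSO as a special mapping.  On top of the
 * flags passed in, _install_special_mapping() itself sets
 * VM_DONTEXPAND, which is what keeps the mapping from growing.
 */
static int map_vdso_sketch(struct mm_struct *mm, unsigned long addr,
			   unsigned long size)
{
	struct vm_area_struct *vma;

	vma = _install_special_mapping(mm, addr, size,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
				       &vdso_text_mapping);
	if (IS_ERR(vma))
		return PTR_ERR(vma);

	mm->context.vdso = (void __user *)addr;
	return 0;
}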

On x86 we'll just dump core if someone unmaps the vdso.
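
To illustrate the corner case, something like the user-space snippet
below would move only the first page of a (typically multi-page) vDSO
and force the vma to be split; the destination address is arbitrary
and the whole thing is just a sketch:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <elf.h>
#include <sys/auxv.h>
#include <sys/mman.h>

int main(void)
{
	/* AT_SYSINFO_EHDR is the vDSO base the kernel hands the process. */
	unsigned long vdso = getauxval(AT_SYSINFO_EHDR);
	long page = sysconf(_SC_PAGESIZE);
	void *moved;

	/* Partial mremap(): old_start == vdso_base, but only one page. */
	moved = mremap((void *)vdso, page, page,
		       MREMAP_MAYMOVE | MREMAP_FIXED,
		       (void *)(vdso + 16 * page));
	if (moved == MAP_FAILED)
		perror("mremap");
	else
		printf("first vDSO page moved to %p\n", moved);

	return 0;
}

With the arch_remap() quoted above, vdso_base would still be updated
to the new address even though most of the vDSO stays where it was.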

Thanks,

	Ingo

