[Devel] Re: LXC container, unmounting unneeded mount points (from the container namespace)

Michael H. Warfield mhw at WittsEnd.com
Wed Jan 13 06:15:34 PST 2010


On Wed, 2010-01-13 at 14:53 +0100, Daniel Lezcano wrote: 
> Michael H. Warfield wrote:
> > Added lxc-devel to the cc list since this seems to be more appropriate
> > over there (maybe) and I've just joined that one as well.
> >
> > On Fri, 2010-01-08 at 15:35 +0100, Daniel Lezcano wrote: 
> >> Ciprian Dorin, Craciun wrote:
> >>> On Fri, Jan 8, 2010 at 4:17 PM, Daniel Lezcano <daniel.lezcano at free.fr> wrote:
> >>>> Ciprian Dorin, Craciun wrote:
> >>>>>    Hi all!
> >>>>>
> >>>>>    Today I've started to play with the LXC containers, and I've got a
> >>>>> question: when starting a container (with its own mount point
> >>>>> namespace) and doing a `cat /proc/mounts`, I also see the mount points
> >>>>> from my host system.
> >>>>>
> >>>>>    So the question is: how can I force `lxc-create` to remove any
> >>>>> unneeded mount points (maybe all)?
> >>>>>
> >>>> Committed today :)
> >>>>
> >>>> http://git.lab.toulouse-stg.fr.ibm.com/?p=lxc.git;a=commit;h=ed83715df7666879116d1657b1dd54a8fc6513f6
> >
> >> Oops, this one:
> >>
> >> http://lxc.git.sourceforge.net/git/gitweb.cgi?p=lxc/lxc;a=commit;h=bf601689a9e0cea1ceaf17e4f7f853f5392c2827
> >
> > Nice.  This is EXACTLY what I've been looking for as well.  Only problem
> > is...  It didn't work.  It complains about:
> >
> > lxc-start: could not umount 1 mounts
> >
> > Then it terminates, saying it can't start the container.
> >
> > When I throw in some debugging prints, I find out that it's complaining
> > that it can't umount {pivotdir}/dev.  I commented out the return -1's in
> > that routine so it starts the container anyway, jumped into the running
> > container, and discovered that {pivotdir}/dev/shm is still mounted even
> > though the code thinks it umounted it.  The debugging prints show the
> > umount apparently succeeding (the routine saw the mount, tried the
> > umount, and got a good return from the attempt), yet the filesystem is
> > still mounted.  If I umount that {pivotdir}/dev/shm inside the
> > container, I can then umount {pivotdir}/dev and {pivotdir}.
> >
> > Tried it both by defining lxc.pivotdir and by letting it choose a
> > temporary one.  Tried two different containers.  Same result.
> >
> > This is what I get with the "return -1"'s commented out and the function
> > laced with my own debugging prints showing each step of the umount loop
> > (this is with lxc.pivotdir = /rootfs):
> >
> > ===
> > [root at Berserker-Base ~]# lxc-start -n 1009
> > still mounted '/rootfs/dev'
> > umounted '/rootfs/dev/pts'
> > umounted '/rootfs/dev/shm'
> > still mounted '/rootfs/proc'
> > umounted '/rootfs/proc/bus/usb'
> > umounted '/rootfs/proc/sys/fs/binfmt_misc'
> > umounted '/rootfs/sys'
> > umounted '/rootfs/home'
> > still mounted '/rootfs/var'
> > umounted '/rootfs/var/lib/cgroup'
> > umounted '/rootfs/var/lib/nfs/rpc_pipefs'
> > umounted '/rootfs/usr'
> > umounted '/rootfs/boot'
> > still mounted '/rootfs/srv'
> > umounted '/rootfs/misc'
> > umounted '/rootfs/net'
> > still mounted '/rootfs/srv/lxc/root/1009'
> > umounted '/rootfs/srv/lxc/root/1009/export'
> > umounted '/rootfs/srv/lxc/root/1009/srv/shared'
> > umounted '/rootfs/srv/lxc/root/1009/dev/pts'
> > umounted '/rootfs/srv/lxc/root/1009/proc'
> > umounted '/rootfs/srv/lxc/root/1009/sys'
> > umounted '/rootfs/srv/lxc/root/1009/dev/shm'
> > umounted '/rootfs/srv/lxc/root/1009/dev/console'
> > umounted '/rootfs/srv/lxc/root/1009/dev/tty1'
> > umounted '/rootfs/srv/lxc/root/1009/dev/tty2'
> > umounted '/rootfs/srv/lxc/root/1009/dev/tty3'
> > umounted '/rootfs/srv/lxc/root/1009/dev/tty4'
> > umounted '/rootfs/srv/lxc/root/1009/dev/tty5'
> > umounted '/rootfs/srv/lxc/root/1009/dev/tty6'
> > still mounted '/rootfs/dev'
> > umounted '/rootfs/proc'
> > umounted '/rootfs/var'
> > still mounted '/rootfs/srv'
> > umounted '/rootfs/srv/lxc/root/1009'
> > still mounted '/rootfs/dev'
> > umounted '/rootfs/srv'
> > still mounted '/rootfs/dev'
> > lxc-start: could not umount 1 mounts
> > lxc-start: Device or resource busy - could not unmount old rootfs
> > INIT: version 2.86 booting
> > ===
> >
> > The container now continues to boot, since I commented out the error
> > returns.  Jumping into that container:
> >
> > === 
> > [root at Berserker ~]# df
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > rootfs               288433504 240305660  33476248  88% /
> > /dev/sda6              2063504    662116   1296568  34% /rootfs
> > udev                   1031280       200   1031080   1% /rootfs/dev
> > /dev/shm               1031280         0   1031280   0% /rootfs/dev/shm
> > /dev/mapper/VolGroup00-LogVol00
> >                      288433504 240305660  33476248  88% /
> > /dev/sda6              2063504    662116   1296568  34% /export
> > /dev/sda7            693727244 554704408 103783460  85% /srv/shared
> > none                   1031280         0   1031280   0% /dev/shm
> > [root at Berserker ~]# umount /rootfs/dev/shm
> > [root at Berserker ~]# umount /rootfs/dev    
> > [root at Berserker ~]# umount /rootfs    
> > [root at Berserker ~]# df
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > rootfs               288433504 240305672  33476236  88% /
> > /dev/mapper/VolGroup00-LogVol00
> >                      288433504 240305672  33476236  88% /
> > /dev/sda6              2063504    662116   1296568  34% /export
> > /dev/sda7            693727244 554704408 103783460  85% /srv/shared
> > none                   1031280         0   1031280   0% /dev/shm
> > ===
> >
> > Of course, we also have the strange "rootfs" entry in there.
> >
> > Host system is Fedora 12.
> > Host kernel is: 2.6.31.9-174.fc12.i686
> > Guest system in each case is CentOS 5.4

> I added Michael Holzt to the Cc.

> Thanks for reporting this.
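
For context, the commit referenced above is what removes the mount
points a new container inherits from the host once it has pivoted into
its own root.  A rough sketch of that general idea follows; the
function name umount_oldroot and the "/rootfs" old-root path are
illustrative assumptions rather than the actual lxc code, and the real
code evidently makes more than one pass over the list (hence the
repeated "still mounted" lines in the debug output above).

===
/* Sketch only, not the actual lxc code.  After pivot_root(), every
 * mount inherited from the host is still visible under the old root
 * (assumed here to be "/rootfs").  Scan /proc/mounts, collect the
 * entries living under the old root, and unmount them in reverse
 * order so that children normally go away before their parents. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mntent.h>
#include <sys/mount.h>

static int umount_oldroot(const char *oldroot)
{
    char *mounts[128];
    int i, n = 0, failed = 0;
    struct mntent *m;
    FILE *f;

    f = setmntent("/proc/mounts", "r");
    if (!f)
        return -1;

    while ((m = getmntent(f)) && n < 128)
        if (!strncmp(m->mnt_dir, oldroot, strlen(oldroot)))
            mounts[n++] = strdup(m->mnt_dir);
    endmntent(f);

    for (i = n - 1; i >= 0; i--) {
        if (umount(mounts[i]))
            failed++;    /* a fuller version would retry in a later pass */
        free(mounts[i]);
    }
    return failed ? -1 : 0;
}
===

The reverse order is what should make children such as /rootfs/dev/shm
go away before their parent /rootfs/dev; the puzzling part of the
report above is that the child claims to unmount cleanly and is still
mounted afterwards.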

I should now mention that I've been digging into this further and found
that I cannot reproduce it on a second, similar system I'm testing on at
this point.  It may be a peculiarity of the original system; I'm getting
ready to test on a third system as soon as I finish updating it.

In any case, there should be some way to force the container to start
for diagnostic purposes in a case like this, without modifying the
sources.  Having the old root directory still mounted isn't fatal to
running the container, particularly on a test system, even though it's
highly undesirable in a production VM.
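
One way such a "keep going" mode could look, purely as a sketch and not
an existing lxc option: if an old-root mount point refuses to go away,
fall back to a lazy detach with umount2(MNT_DETACH) and warn rather
than aborting the start.  The function name umount_or_detach below is
made up for illustration.

===
/* Hypothetical fallback, not current lxc behaviour: try a normal
 * umount first; if that fails, lazily detach the mount so that it
 * disappears from the container's namespace and the kernel cleans it
 * up once it stops being busy. */
#include <stdio.h>
#include <sys/mount.h>

static int umount_or_detach(const char *path)
{
    if (!umount(path))
        return 0;

    if (!umount2(path, MNT_DETACH)) {
        fprintf(stderr, "warning: lazily detached '%s'\n", path);
        return 0;
    }

    fprintf(stderr, "error: could not unmount '%s'\n", path);
    return -1;
}
===

That keeps the failure visible in the logs while still letting a test
container come up, which is roughly what commenting out the return -1's
gives today.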

> -- Daniel

Regards,
Mike
-- 
Michael H. Warfield (AI4NB) | (770) 985-6132 |  mhw at WittsEnd.com
   /\/\|=mhw=|\/\/          | (678) 463-0932 |  http://www.wittsend.com/mhw/
   NIC whois: MHW9          | An optimist believes we live in the best of all
 PGP Key: 0x674627FF        | possible worlds.  A pessimist is sure of it!