[Users] mount ploop image from read-only fs

Pavel Odintsov pavel.odintsov at gmail.com
Sat Sep 13 15:15:40 PDT 2014


Thank you for the report, it's very useful for the investigation. The only
difference between v1 and v2 is the ploop disk size in the header (32 vs 64
bit). I use 64-bit numbers everywhere, so everything should work fine; I
suspect alignment issues that are not handled in my tool.
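To show what I mean, here is a rough C sketch of the header handling. The struct layout is my reading of ploop's ploop1_image.h, and get_size_in_sectors() is just an illustrative helper, not code from my tool:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* On-disk ploop header as I understand it from ploop1_image.h; treat the
 * exact field names and offsets as an assumption. Note the union starts at
 * a non-8-byte-aligned offset: without the packed attribute the compiler
 * would pad the 64-bit member and shift every later field -- exactly the
 * kind of alignment bug that could break parsing. */
struct __attribute__((packed)) ploop_pvd_header {
    uint8_t  m_Sig[16];       /* "WithoutFreeSpace" (v1), "WithouFreSpacExt" (v2) */
    uint32_t m_Type;          /* disk type */
    uint32_t m_Heads;         /* heads count */
    uint32_t m_Cylinders;     /* cylinder count */
    uint32_t m_Sectors;       /* sectors per track */
    uint32_t m_Size;          /* size in tracks */
    union {
        struct __attribute__((packed)) {
            uint32_t m_SizeInSectors_v1;  /* v1: 32-bit sector count */
            uint32_t Unused;              /* v1: not part of the size */
        };
        uint64_t m_SizeInSectors_v2;      /* v2: 64-bit sector count */
    };
    uint64_t m_DiskInUse;
    uint32_t m_FirstBlockOffset;          /* first data block, in sectors */
    uint32_t m_Flags;
    uint8_t  m_Reserved[8];
};

/* Pick the field width from the signature instead of always reading 64
 * bits: on a v1 image the upper half of the union is not the size. */
uint64_t get_size_in_sectors(const struct ploop_pvd_header *h) {
    if (memcmp(h->m_Sig, "WithoutFreeSpace", 16) == 0)
        return h->m_SizeInSectors_v1;     /* version 1 */
    return h->m_SizeInSectors_v2;         /* version 2 */
}
```

If the tool reads m_SizeInSectors_v2 unconditionally, a v1 image contributes whatever happens to sit in the unused upper 32 bits, which would match the symptom you see with v1.6 images.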

On Friday, September 12, 2014, Roman Haefeli <reduzent at gmail.com> wrote:

> On Fri, 2014-09-12 at 11:15 +0200, Roman Haefeli wrote:
> > On Fri, 2014-09-12 at 10:56 +0200, Roman Haefeli wrote:
> > > Hi Pavel
> > >
> > > I might have some more information on the issue. It seems that only
> > > 'old' ploop images cannot be mounted by ploop_userspace. I actually
> > > don't quite know the ploop version I used for creating the 'old' ploop
> > > images, but I know it works well with images created with ploop v1.6.
> > >
> > > Does ploop_userspace know about older image formats?
> >
> > No, it's also not the version.
>
> Yes, there are different versions... I must have checked on the wrong
> machine. ploop_userspace works well with images created by ploop v1.11,
> but not with images created by ploop v1.6.
>
> Sorry for the noise.
>
> Roman
>
>
>
> >
> > > On Thu, 2014-08-28 at 22:53 +0400, Pavel Odintsov wrote:
> > > > Hello!
> > > >
> > > > No, it doesn't depend on the kernel version. I created an issue for you
> > > > and will try to investigate:
> > > > https://github.com/FastVPSEestiOu/ploop_userspace/issues/10
> > > > Please track this GitHub issue.
> > > >
> > > > On Thu, Aug 28, 2014 at 6:12 PM, Roman Haefeli <reduzent at gmail.com> wrote:
> > > > > Some more info:
> > > > >
> > > > > It works on our test cluster where we have
> > > > > 2.6.32-openvz-042stab093.4-amd64 installed. The report from below is
> > > > > from a host node running 2.6.32-042stab081.3-amd64.
> > > > >
> > > > > Is ploop_userspace dependent on kernel version?
> > > > >
> > > > > Roman
> > > > >
> > > > >
> > > > > On Thu, 2014-08-28 at 15:59 +0200, Roman Haefeli wrote:
> > > > >> Hi Pavel
> > > > >>
> > > > >> Your tool comes in handy. That is exactly what we'd need. However,
> > > > >> I had troubles using it. I did:
> > > > >>
> > > > >> $ ploop_userspace /virtual/.snapshot/nightly.0/vz/private/2006/root.hdd/root.hdd
> > > > >>
> > > > >>   We process: /virtual/.snapshot/nightly.0/vz/private/2006/root.hdd/root.hdd
> > > > >>   Ploop file size is: 4193255424
> > > > >>   version: 1 disk type: 2 heads count: 16 cylinder count: 81920 sector count: 2048 size in tracks: 20480 size in sectors: 41943040 disk in use: 1953459801 first block offset: 2048 flags: 0
> > > > >>   For storing 21474836480 bytes on disk we need 20480 ploop blocks
> > > > >>   We have 1 BAT blocks
> > > > >>   We have 262128 slots in 1 map
> > > > >>   Number of non zero blocks in map: 3998
> > > > >>   Please be careful because this disk used now! If you need consistent backup please stop VE
> > > > >>   !!!ERROR!!! We can't found GPT table on this disk
> > > > >>   !!!ERROR!!! We can't find ext4 signature
> > > > >>   Set device /dev/nbd0 as read only
> > > > >>   Try to found partitions on ploop device
> > > > >>   First ploop partition was not detected properly, please call partx/partprobe manually
> > > > >>   You could mount ploop filesystem with command: mount -r -o noload /dev/nbd0p1 /mnt
> > > > >>
> > > > >>
> > > > >> Despite the errors, I tried to mount the ploop-partition:
> > > > >>
> > > > >> $ mount -r  -o noload /dev/nbd0p1 /mnt/
> > > > >>
> > > > >> and got:
> > > > >>
> > > > >>   mount: special device /dev/nbd0p1 does not exist
> > > > >>
> > > > >> Apparently, ploop_userspace wasn't able to read the GPT partition table.
> > > > >>
> > > > >> Tell me, if you need further information.
> > > > >>
> > > > >> Thanks,
> > > > >> Roman
> > > > >>
> > > > >> On Tue, 2014-08-19 at 12:48 +0400, Pavel Odintsov wrote:
> > > > >> > Hello!
> > > > >> >
> > > > >> > You can mount ploop from RO root.hdd images with my tool:
> > > > >> > https://github.com/FastVPSEestiOu/ploop_userspace but it's not
> > > > >> > stable now. You can try it and provide feedback.
> > > > >> >
> > > > >> > On Tue, Aug 19, 2014 at 12:24 PM, Roman Haefeli <reduzent at gmail.com> wrote:
> > > > >> > > Hi all
> > > > >> > >
> > > > >> > > At the university I work at, we plan to switch all containers
> > > > >> > > from simfs to ploop images in the long run. Despite the many
> > > > >> > > advantages of using ploop, there is one major drawback that
> > > > >> > > keeps us from switching production already now: we can't mount
> > > > >> > > ploop images from read-only snapshots. In case of a recovery of
> > > > >> > > a single file, we have to copy the ploop image from the
> > > > >> > > read-only snapshot to some read-write storage in order to be
> > > > >> > > able to mount it and extract the file. For CTs with huge ploop
> > > > >> > > images this is a big hurdle.
> > > > >> > >
> > > > >> > > Wouldn't it be possible to add some flag to the 'ploop' utility
> > > > >> > > to allow mounting ploop images from read-only storage (by
> > > > >> > > bypassing some checks or skipping setting the dirty flag or
> > > > >> > > whatever is necessary)?
> > > > >> > >
> > > > >> > > Cheers,
> > > > >> > > Roman
> > > > >> > >
> > > > >> > >
> > > > >> > > _______________________________________________
> > > > >> > > Users mailing list
> > > > >> > > Users at openvz.org
> > > > >> > > https://lists.openvz.org/mailman/listinfo/users


-- 
Sincerely yours, Pavel Odintsov

