[Devel] vzpkg
Robert Nelson
robertn at the-nelsons.org
Fri Aug 29 13:23:29 PDT 2008
Kir Kolyshkin wrote:
> Robert Nelson wrote:
>> Kir Kolyshkin wrote:
>>>
>>> Also see my comments below.
>>>
>>> Robert Nelson wrote:
>>>> Is anyone actively working on vzpkg?
>>>>
>>>> I've been rewriting it to eliminate the dependence on yum and rpm,
>>>> so that it also works for Debian and hopefully some day Gentoo.
>>>> This also eliminates the requirement for vzyum, vzrpm, vzrpm43 and
>>>> vzrpm44. vzpkgadd, vzpkgrm, vzpkgls and vzpkgcache would just do
>>>> the right thing. This would also fix the incompatibilities between
>>>> working with packages from the HN and from within the VE.
>>>
>>> That sounds interesting; do you have a git repo or something I can
>>> take a look at?
>>>
>>
>> I haven't got a repo set up but I could set one up pretty easily.
>>
>>> So, how are you solving the problem of different RPMDB versions? You
>>> know, if you have used rpm-4.2 to create/manage an RPM database, the
>>> moment you use rpm-4.3 on it, it becomes incompatible with rpm-4.2.
>>> The only way to avoid that would be to stick to one specified RPM
>>> version.
>>>
>>> We can definitely use rpm from inside the VE only, but then another
>>> problem arises: duplicate downloads.
>>>
>>
>> This problem was pretty easy to solve once I figured out what was
>> going on. I just remove the __db.* files before and after running
>> commands on the HN, and RPM automatically rebuilds them on the next
>> command.
> Hmm... __db* files are just a temporary cache; removing those is
> safe (and is sometimes required), but it's not going to help.
>
> Here's a simple test:
>
> 1. Create a container from a template cache which uses a different
> RPM version than the one on your host system. For example, CentOS 4
> uses rpm-4.3, CentOS 5 uses rpm-4.4.
>
> 2. Start a container:
> # vzctl start NNN
>
> 3. Check that the container's RPM works fine (it should at this point):
> # vzctl exec NNN rpm -q rpm
>
> 4. Check that the host's RPM works against the container's root:
> # rpm --root /vz/root/NNN -q rpm
>
> 5. Check whether the container's RPM still works:
> # vzctl exec NNN rpm -q rpm
>
> Sure, you can insert removal of the __db.* files at the appropriate
> places and see if it helps.
>
I've already tested this. But I don't use rpm directly on the HN, only
my new vzpkg* functions, which automatically remove the __db.* files
before and after each operation. The new vzpkg* commands also take care
of Debian packages now and will deal with Gentoo portage in the future.
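Roughly, each host-side operation is wrapped like this (the container
ID and paths below are only an example):

    CTROOT=/vz/root/101
    # drop the Berkeley DB environment files so rpm rebuilds them
    rm -f $CTROOT/var/lib/rpm/__db.*
    rpm --root $CTROOT -Uvh <packages>
    # drop them again so the VE's own rpm starts from a clean state
    rm -f $CTROOT/var/lib/rpm/__db.*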
>> For the yum-cache, I mount the /vz/template version of the cache into
>> the VE. I do the same for the apt/archives on Debian.
>
> If you do it read-only, how do you handle the case where yum/apt wants
> to write something to it?
>
> If you do it read-write, how can you make sure that an evil container
> root will not put some home-baked Trojaned packages into that area?
>
Currently I mount it rw, but only while a vzpkg* command is running. If
the VE manages its own packages, it doesn't get to share the cache.
There is still a window while the vzpkg command is running, but I don't
know how to specify different access to a directory for the HN versus
the VE. Is there a way?
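Roughly, the mounting amounts to something like this (the paths and
container ID are only illustrative):

    CACHE=/vz/template/centos/4/i386/yum-cache
    CTCACHE=/vz/root/101/var/cache/yum
    mount --bind $CACHE $CTCACHE
    # ... run the package transaction inside the VE ...
    umount $CTCACHE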
Long term, the best solution is probably to implement something like
Debian's apt-cacher for RPMs and then run apt-cacher and an
"rpm-cacher" on the HN.
>>
>>>> Is this something that you would like to incorporate into the product?
>>>>
>>>> One of the things I noticed was that there was a lot of duplication
>>>> in scripts and data files. This is because everything is stored in
>>>> an OS/Version/Platform/Config directory, even though there may not
>>>> be any difference between the corresponding files across platforms
>>>> or even versions.
>>>>
>>>> I have a backwards-compatible change which allows config
>>>> directories anywhere in the template tree. Files lower in the tree
>>>> override any specified higher up.
>>>>
>>>> For example, instead of this directory structure:
>>>>
>>>> /vz/template
>>>>     centos
>>>>         4
>>>>             i386
>>>>                 config
>>>>                     minimal.list
>>>>                     yum.conf
>>>>                     ...
>>>>             x86_64
>>>>                 config
>>>>                     minimal.list
>>>>                     yum.conf
>>>>                     ...
>>>>
>>>> You would have:
>>>>
>>>> /vz/template
>>>>     centos
>>>>         config
>>>>             minimal.list
>>>>         4
>>>>             i386
>>>>                 config
>>>>                     yum.conf
>>>>                     ...
>>>>
>>>> This eliminates a lot of duplicate work and is less error prone.
>>> Will /vz/template/centos/5/i386/config/minimal.list be an addition
>>> to, or a replacement for, /vz/template/centos/config/minimal.list?
>>>
>> Currently it is a replacement; in all the templates I looked at, the
>> files were exactly the same. The *.list files just list the desired
>> functionality, which doesn't change; the big changes are in the
>> dependencies, which are handled automatically. They definitely don't
>> differ between architectures for the same release.
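To be concrete about the replacement behaviour: the lookup just walks
from the most specific config directory upward and takes the first
file it finds, something like this (only a sketch, paths are an
example):

    for dir in centos/4/i386/config centos/4/config centos/config; do
        if [ -f /vz/template/$dir/minimal.list ]; then
            LIST=/vz/template/$dir/minimal.list
            break
        fi
    done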
>>
>> I handle things a little differently for Debian / Ubuntu, since
>> debootstrap provides the initial package set. Packages listed in the
>> *.list file are added to debootstrap's --include option; if they have
>> a trailing '-' they are added to --exclude instead.
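For illustration, turning a *.list file into debootstrap arguments is
just something like this (the suite, paths and mirror are only an
example, error handling omitted):

    INCLUDE=$(grep -v -e '-$' minimal.list | paste -sd, -)
    EXCLUDE=$(grep -e '-$' minimal.list | sed 's/-$//' | paste -sd, -)
    debootstrap --include="$INCLUDE" --exclude="$EXCLUDE" \
        etch /vz/private/101 http://ftp.debian.org/debian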
>>> In case it's an addition, say you have a package called httpd in
>>> /vz/template/centos/config/minimal.list. What if in CentOS 6 we
>>> don't want a package with that name, but want something called
>>> httpd3 instead? I mean, we can definitely add more packages, but how
>>> can we "remove" packages?
>>>
>>> In case it's a replacement, I doubt that a "generic" file will work --
>>> every major version of a given distro has some changes in the
>>> minimal.list.
>>>>
>>>> I can provide a diff of this change against the current git if you
>>>> are interested.
>>>>
>>>> If there is interest in any of this work, please let me know the
>>>> process for getting the changes reviewed and incorporated into the
>>>> product.
>>>
>>> I put users@ on cc: in order to bring some more attention to the
>>> topic. I am definitely interested, so let's discuss it further (for
>>> now my biggest concern is the rpmdb compatibility problem described
>>> above).
>>
>