[Devel] [Q rh7] ve.legacy_veid

Vladimir Davydov vdavydov at parallels.com
Tue May 5 07:18:02 PDT 2015


On Tue, May 05, 2015 at 05:04:29PM +0300, Vasily Averin wrote:
> On 05.05.2015 15:48, Vladimir Davydov wrote:
> > The problem is that some data looked up by VEID can outlive the
> > corresponding container. An example is /proc/vz/venetstat, whose entries live until
> > container destruction. Since we currently remove all cgroups, including
> > the VE cgroup, on container stop, container restart will result in
> > regenerating ve.legacy_veid and cluttering /proc/vz/venetstat. We should
> > fix this somehow.
> > 
> > After discussion with Igor and Konstantin, we've found three ways of
> > fixing the issue:
> > 
> > 1. Get rid of ve.legacy_veid and switch all kernel interfaces to UUID.
> > This will require massive patching of the kernel.
> > 
> > 2. Convert UUID to VEID in kernel in a stable fashion (e.g. take first
> > 32 bits of UUID) and hope there will be no UUID conflicts. Or allow
> > userspace to write to ve.legacy_veid.
> > 
> > 3. Do not remove cgroups on container stop. Only remove them on
> > container destruction.
> > 
> > Personally, I would vote for #3 as the least intrusive option. Cgroup
> > directories are not so heavyweight that they need to be purged on each CT stop, IMO.
> > 
> > Any thoughts?
> 
> But how about CT migration? Or suspend-resume?
> Will legacy_veid be changed in such cases?
> 

In the case of CT migration, the CT is destroyed on one node and re-created
on another, so legacy_veid will change. That's OK, though, because all
long-standing stats (like venetstat) should be dropped on destruction anyway.

Suspend-resume is the same as stop-start: the CT cgroups won't be
re-created, and therefore legacy_veid will remain the same.
