[Devel] Re: [PATCH 1/2] Adds a read-only "procs" file similar to "tasks" that shows only unique tgids

Andrew Morton akpm at linux-foundation.org
Thu Jul 2 19:08:45 PDT 2009


On Thu, 2 Jul 2009 18:17:56 -0700 Benjamin Blum <bblum at google.com> wrote:

> On Thu, Jul 2, 2009 at 6:08 PM, Paul Menage <menage at google.com> wrote:
> > On Thu, Jul 2, 2009 at 5:53 PM, Andrew Morton <akpm at linux-foundation.org> wrote:
> >>> In the first snippet, count will be at most equal to length. As length
> >>> is determined from cgroup_task_count, it can be no greater than the
> >>> total number of pids on the system.
> >>
> >> Well that's a problem, because there can be tens or hundreds of
> >> thousands of pids, and there's a fairly low maximum size for kmalloc()s
> >> (include/linux/kmalloc_sizes.h).
> >>
> >> And even if this allocation attempt doesn't exceed KMALLOC_MAX_SIZE,
> >> large allocations are less reliable.  There is a large break point at
> >> 8*PAGE_SIZE (PAGE_ALLOC_COSTLY_ORDER): above that order the page
> >> allocator gives up much sooner.
> >
> > This has been a long-standing problem with the tasks file, ever since
> > the cpusets days.
> >
> > There are ways around it - Lai Jiangshan <laijs at cn.fujitsu.com> posted
> > a patch that allocated an array of pages to store pids in, with a
> > custom sorting function that let you specify indirection rather than
> > assuming everything was in one contiguous array. This was technically
> > the right approach in terms of not needing vmalloc and never doing
> > large allocations, but it was very complex; an alternative that was
> > mooted was to use kmalloc for small cgroups and vmalloc for large
> > ones, so the vmalloc penalty wouldn't be paid in the common case (a
> > sketch of that split appears below). The thread fizzled AFAICS.
> 
> As it stands, the kmalloc call will simply fail if there are too
> many pids, correct? Do we prefer not being able to read the file in
> this case, or would we rather use vmalloc?
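
For reference, the mooted small/large split would look something like
the sketch below.  pidlist_allocate() and pidlist_free() are
hypothetical helper names, not taken from any posted patch:

/*
 * Sketch: stay with kmalloc up to the "costly" allocation order and
 * fall back to vmalloc only for genuinely large pid arrays.
 */
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *pidlist_allocate(size_t size)
{
	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
		return kmalloc(size, GFP_KERNEL);
	return vmalloc(size);
}

static void pidlist_free(void *p)
{
	/* is_vmalloc_addr() distinguishes the two allocators' memory */
	if (is_vmalloc_addr(p))
		vfree(p);
	else
		kfree(p);
}

With that split, small cgroups (the common case) never touch vmalloc,
and large ones trade some TLB pressure for a reliable allocation.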

We'd prefer that we not use vmalloc and that the reads not fail!



Why are we doing all this anyway?  To avoid presenting duplicated pids
to userspace?  Nothing else?

If so, why not stop doing that - userspace can remove dupes (if it
cares) more easily than the kernel can?
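
To make the userspace side concrete, dedup there is a few lines; this
sketch assumes the pid list arrives on stdin, one pid per line,
possibly with duplicates:

/* read pids, sort them, print each value once */
#include <stdio.h>
#include <stdlib.h>

#define MAX_PIDS 65536			/* arbitrary for the sketch */

static int cmp_pid(const void *a, const void *b)
{
	return *(const int *)a - *(const int *)b;
}

int main(void)
{
	static int pids[MAX_PIDS];
	int n = 0, i;

	while (n < MAX_PIDS && scanf("%d", &pids[n]) == 1)
		n++;

	qsort(pids, n, sizeof(pids[0]), cmp_pid);

	for (i = 0; i < n; i++)
		if (i == 0 || pids[i] != pids[i - 1])
			printf("%d\n", pids[i]);

	return 0;
}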


Or we could do it the other way: create an initially-empty local IDR
tree or radix tree and, within it, mark off any pids which we've
already emitted.  That has a worst-case memory consumption of
approximately PID_MAX_LIMIT bits (up to 4 * 1024 * 1024 bits on
64-bit, i.e. half a megabyte), spread across small node allocations,
so no large allocation is ever needed.
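
Concretely, the dedup-on-emit pass could look like the sketch below;
every allocation the radix tree makes is a small node, and
emit_tgid_once() is a hypothetical name, not from the patch:

/*
 * Sketch: mark each tgid in a caller-local radix tree the first time
 * it is emitted, so duplicates are filtered without a big array.
 */
#include <linux/radix-tree.h>

static int emit_tgid_once(struct radix_tree_root *seen, pid_t tgid,
			  int (*emit)(pid_t, void *), void *data)
{
	int err;

	if (radix_tree_lookup(seen, tgid))
		return 0;		/* already emitted this tgid */

	/* any non-NULL, word-aligned pointer serves as a marker */
	err = radix_tree_insert(seen, tgid, seen);
	if (err)
		return err;		/* most likely -ENOMEM */

	return emit(tgid, data);
}

The caller would declare the tree with RADIX_TREE(seen, GFP_KERNEL)
and empty it with radix_tree_delete() before returning.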


btw, does pidlist_uniq() actually need to allocate new memory for the
output array?  Could it do the filtering in-place?
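
If the pid array has already been sorted (as the tasks code does
before uniquifying), in-place filtering is a standard two-pointer
pass.  A sketch, assuming exactly that precondition:

/*
 * Dedup a sorted pid array in place; returns the number of unique
 * entries now packed at the front of the array.
 */
static int pidlist_uniq_inplace(pid_t *list, int length)
{
	int src, dest = 1;

	if (length < 2)
		return length;

	for (src = 1; src < length; src++)
		if (list[src] != list[dest - 1])
			list[dest++] = list[src];

	return dest;
}

No second array, no extra allocation; the caller just uses the
returned count as the new length.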
