[Devel] Re: [RFC v10][PATCH 05/13] Dump memory address space
Oren Laadan
orenl at cs.columbia.edu
Mon Dec 1 12:57:09 PST 2008
Dave Hansen wrote:
> On Fri, 2008-11-28 at 10:53 +0000, Al Viro wrote:
>>> +static int cr_ctx_checkpoint(struct cr_ctx *ctx, pid_t pid)
>>> +{
>>> + ctx->root_pid = pid;
>>> +
>>> + /*
>>> + * assume checkpointer is in container's root vfs
>>> + * FIXME: this works for now, but will change with real containers
>>> + */
>>> + ctx->vfsroot = &current->fs->root;
>>> + path_get(ctx->vfsroot);
>> This is going to break as soon as you get another thread doing e.g. chroot(2)
>> while you are in there.
>
> Yeah, we do need at least a read_lock(&current->fs->lock) to keep people
> from chroot()'ing underneath us.
True.
(While adapting older, safer code I dropped these checks for no good reason.)
>
>> And it's a really, _really_ bad idea to take a
>> pointer to shared object, increment refcount on the current *contents* of
>> said object and assume that dropping refcount on the later contents of the
>> same will balance out.
>
> Absolutely. I assume you mean get_fs_struct(current) instead of
> path_get().
True.
Should change the type of ctx->vfsroot so it is not a pointer (an embedded
struct path), and do:

	+ ctx->vfsroot = current->fs->root;
	+ path_get(&ctx->vfsroot);

and adjust accordingly where the refcount is dropped.
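Concretely, the snapshot side could then look roughly like this - a sketch
only, assuming ctx->vfsroot becomes an embedded struct path and using the
rwlock that fs_struct currently has; not necessarily the exact code of the
next revision:

	static int cr_ctx_checkpoint(struct cr_ctx *ctx, pid_t pid)
	{
		ctx->root_pid = pid;

		/*
		 * Snapshot the container root under fs->lock, so a
		 * concurrent chroot(2) cannot change it between the
		 * copy and the path_get().
		 */
		read_lock(&current->fs->lock);
		ctx->vfsroot = current->fs->root;  /* copy by value */
		path_get(&ctx->vfsroot);           /* pin our own copy */
		read_unlock(&current->fs->lock);

		return 0;
	}

Holding the lock across both the copy and the path_get() also addresses Al's
point: the count is taken on the very dentry/vfsmount pair we copied, and the
later path_put() on ctx->vfsroot balances exactly that reference.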
What we need here is a reference point (this will change later when we handle
multiple fs-namespaces), namely the path of the "container root". Assuming the
locking is correct so that current->fs cannot change under us, it is enough to
take a reference to that path now and release it later.
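The release side is then just the mirror image (the function name below is
only a placeholder for wherever the ctx drops its references):

	static void cr_ctx_put_vfsroot(struct cr_ctx *ctx)
	{
		/* drop the count we took on our private copy of the root */
		path_put(&ctx->vfsroot);
	}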
BTW, current->fs is assumed not to change during the checkpoint; if it does,
that is a misuse of the checkpoint interface and the resulting behavior is
undefined - restart is not guaranteed to restore the exact old state even if
the checkpoint succeeds.
Thanks,
Oren.