[Devel] [PATCH RHEL7 v19 09/13] ve/cgroup: Implemented logic that uses 'cgroup->ve_owner' to run release_agent notifications.

Kirill Tkhai ktkhai at virtuozzo.com
Fri Apr 24 12:39:32 MSK 2020


On 23.04.2020 18:44, Valeriy Vdovin wrote:
> release_agent_cgroup work will extract ve_owner information from each
> cgroup in its list and run the user mode helper under its namespaces.
> Some code was also added to detect VE destruction and handle
> release_agent execution in that case.
> 
> https://jira.sw.ru/browse/PSBM-83887
> 
> Signed-off-by: Valeriy Vdovin <valeriy.vdovin at virtuozzo.com>
> ---
>  kernel/cgroup.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index abe3284..0b047b0 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -5438,7 +5438,8 @@ void cgroup_release_agent(struct work_struct *work)
>  		char *argv[3], *envp[3];
>  		int i, err;
>  		char *pathbuf = NULL, *agentbuf = NULL;
> -		struct cgroup *cgrp;
> +		struct cgroup *cgrp, *root_cgrp;
> +		struct task_struct *ve_task;
>  
>  		cgrp = list_entry(ve->release_list.next,
>  				  struct cgroup,
> @@ -5449,8 +5450,24 @@ void cgroup_release_agent(struct work_struct *work)
>  		pathbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
>  		if (!pathbuf)
>  			goto continue_free;
> -		if (cgroup_path(cgrp, pathbuf, PAGE_SIZE) < 0)
> +		if (__cgroup_path(cgrp, pathbuf, PAGE_SIZE, 1) < 0)

s/1/true/

This is a bool argument.
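
With that applied, the call site would read (only the suggested s/1/true/
substitution, nothing else changed):

		if (__cgroup_path(cgrp, pathbuf, PAGE_SIZE, true) < 0)
			goto continue_free;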

> +			goto continue_free;
> +		rcu_read_lock();

What does this rcu_read_lock() protect here? If the only protected variable is ve_owner,
rcu_read_lock() is not needed, since we set ve_owner under cgroup_mutex and we also read it
here under cgroup_mutex. rcu_dereference_protected() may be used instead.

Or maybe we use rcu_read_lock() here for something else?
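
If ve_owner really is the only thing being protected, a minimal sketch of that
suggestion (assuming cgroup_mutex is still held at this point and that ve_owner
is only written under it):

		root_cgrp = cgroup_get_local_root(cgrp);
		/*
		 * ve_owner is published under cgroup_mutex and we hold
		 * cgroup_mutex here, so document the locking for lockdep
		 * instead of opening an RCU read-side section.
		 */
		if (rcu_dereference_protected(root_cgrp->ve_owner,
					      lockdep_is_held(&cgroup_mutex)) != ve)
			goto continue_free;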

> +		root_cgrp = cgroup_get_local_root(cgrp);
> +		/*
> +		 * At VE destruction root cgroup loses VE_ROOT flag.
> +		 * Because of that 'cgroup_get_local_root' will not see
> +		 * VE root and return host's root cgroup instead.
> +		 * We can detect this because we have a pointer to
> +		 * original ve coming from work argument.
> +		 * We do not want to execute VE's notifications on host,
> +		 * so in this case we skip.
> +		 */
> +		if (rcu_dereference(root_cgrp->ve_owner) != ve) {
> +			rcu_read_unlock();
>  			goto continue_free;
> +		}
> +		rcu_read_unlock();
>  		agentbuf = kstrdup(cgrp->root->release_agent_path, GFP_KERNEL);
>  		if (!agentbuf)
>  			goto continue_free;
> @@ -5470,8 +5487,12 @@ void cgroup_release_agent(struct work_struct *work)
>  		 * since the exec could involve hitting disk and hence
>  		 * be a slow process */
>  		mutex_unlock(&cgroup_mutex);
> -		err = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
> -		if (err < 0)
> +
> +		err = call_usermodehelper_fns_ve(ve, argv[0], argv,
> +			envp, UMH_WAIT_EXEC, NULL, NULL, NULL);
> +
> +		ve_task = ve->init_task;
> +		if (err < 0 && (ve == &ve0 || !(ve_task->flags & PF_EXITING)))
>  			pr_warn_ratelimited("cgroup release_agent "
>  					    "%s %s failed: %d\n",
>  					    agentbuf, pathbuf, err);
> 


