[Devel] [RFC] Fix get_exec_env() races
Kirill Tkhai
ktkhai at odin.com
Thu Oct 15 10:17:26 PDT 2015
On 15.10.2015 19:49, Kirill Tkhai wrote:
>
>
> On 15.10.2015 17:44, Vladimir Davydov wrote:
>> On Thu, Oct 15, 2015 at 05:21:04PM +0300, Kirill Tkhai wrote:
>>>
>>>
>>> On 15.10.2015 14:15, Pavel Emelyanov wrote:
>>>>
>>>>> @@ -130,6 +131,34 @@ struct ve_struct {
>>>>>  #endif
>>>>>  };
>>>>>
>>>>> +static inline struct ve_struct *get_exec_env(void)
>>>>> +{
>>>>> +	struct ve_struct *ve;
>>>>> +
>>>>> +	if (++current->ve_attach_lock_depth > 1)
>>>>> +		return current->task_ve;
>>>>> +
>>>>> +	rcu_read_lock();
>>>>> +again:
>>>>> +	ve = current->task_ve;
>>>>> +	read_lock(&ve->attach_lock);
>>>>> +	if (unlikely(current->task_ve != ve)) {
>>>>> +		read_unlock(&ve->attach_lock);
>>>>> +		goto again;
>>>>
>>>> Please, no. The 3.10 kernel has task_works: ask the task you want to
>>>> attach to the ve to execute a work that moves itself into it, and keep
>>>> this routine small and simple.
>>>
>>> cgroup_attach_task() is called under cgroup_mutex and threadgroup_lock(),
>>> so we can't make the attach path wait until the target task completes the
>>> task work (the target may be executing arbitrary code, e.g. trying to take
>>> cgroup_mutex itself).
>>
>> Do we really want to wait until exec_env of the target task has changed?
>>
>> Anyway, I think we can use task_work even if we need to be synchronous.
>> You just need two task works - one for the target and one for the
>> caller. The former will change the target task's exec_env while the
>> latter will wait for it to finish.
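
A rough sketch of that two-task-works scheme (made-up names again, error
handling mostly omitted); the caller-side work runs only when the caller
returns to userspace, i.e. after cgroup_mutex has been dropped:

#include <linux/task_work.h>
#include <linux/completion.h>
#include <linux/slab.h>
#include <linux/sched.h>

struct ve_attach_sync {
	struct callback_head	target_cb;
	struct callback_head	caller_cb;
	struct ve_struct	*new_ve;
	struct completion	done;
};

/* Queued on the target: change its ve and report completion. */
static void ve_attach_target_fn(struct callback_head *cb)
{
	struct ve_attach_sync *s = container_of(cb, struct ve_attach_sync,
						target_cb);

	current->task_ve = s->new_ve;
	complete(&s->done);
}

/* Queued on the caller: wait for the target once cgroup_mutex is gone. */
static void ve_attach_caller_fn(struct callback_head *cb)
{
	struct ve_attach_sync *s = container_of(cb, struct ve_attach_sync,
						caller_cb);

	wait_for_completion(&s->done);
	kfree(s);
}

/* Called from the attach path, while cgroup_mutex is still held. */
static int ve_attach_task_sync(struct task_struct *tsk, struct ve_struct *ve)
{
	struct ve_attach_sync *s = kmalloc(sizeof(*s), GFP_KERNEL);
	int err;

	if (!s)
		return -ENOMEM;

	s->new_ve = ve;
	init_completion(&s->done);
	init_task_work(&s->target_cb, ve_attach_target_fn);
	init_task_work(&s->caller_cb, ve_attach_caller_fn);

	err = task_work_add(tsk, &s->target_cb, true);
	if (err) {
		kfree(s);
		return err;
	}

	/* Cannot fail for current while it is executing a syscall. */
	task_work_add(current, &s->caller_cb, true);
	return 0;
}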
>
> Hm. Maybe it would be a good idea if cgroup_attach_task() could not be
> called several times concurrently. Every attach requires a separate
> task_work, so it needs additional memory, which complicates things.
Though, we could wait on a single counter of the number of all the processes
and threads instead. That looks OK.
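
Roughly something like this, just to illustrate (names are made up): a single
counter of the tasks whose attach work is still pending, plus a waitqueue, so
no per-attach allocation is needed on the waiting side:

#include <linux/atomic.h>
#include <linux/wait.h>

/* Counts tasks whose ve-attach task_work has not run yet. */
static atomic_t ve_attach_pending = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(ve_attach_waitq);

/* Attach path: bump the counter before queueing the target's work. */
static inline void ve_attach_begin(void)
{
	atomic_inc(&ve_attach_pending);
}

/* End of the target's task_work: drop the counter, wake the waiters. */
static inline void ve_attach_end(void)
{
	if (atomic_dec_and_test(&ve_attach_pending))
		wake_up(&ve_attach_waitq);
}

/* Waiter side, called after cgroup_mutex has been dropped. */
static inline void ve_attach_wait(void)
{
	wait_event(ve_attach_waitq, atomic_read(&ve_attach_pending) == 0);
}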
>>>
>>> Should we give userspace a possibility (an interface) to find out that
>>> the task has finished changing its ve?
>>>
>>>
>>>>> +	}
>>>>> +	rcu_read_unlock();
>>>>> +
>>>>> +	return ve;
>>>>> +}
>>>>> +
>>>>
>>>