[Devel] [PATCH] nfs: protect callback execution against per-net callback thread shutdown

Kirill Tkhai ktkhai at virtuozzo.com
Tue Nov 7 13:03:23 MSK 2017


On 07.11.2017 13:01, Stanislav Kinsburskiy wrote:
> Sure, here it is:
> 
> CPU #0                             CPU #1
> 
> cleanup_mnt                        nfs41_callback_svc
> ...                                (get transport from the list)
> nfs_callback_down                  ...
> ...                                ...
> svc_close_net                      ...
> ...                                ...
> svc_xprt_free                      ...
> svc_bc_sock_free                   bc_svc_process
> kfree(xprt)                        svc_process_common
>                                    rqstp->rq_xprt->xpt_ops (use after free)
> 
> 
> 
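For illustration, here is a minimal userspace sketch of the interleaving drawn above. This is only an analogy, not the SUNRPC code: struct svc_xprt / xpt_ops are toy stand-ins, main() plays the role of the umount path that ends in kfree(xprt), and the worker plays nfs41_callback_svc() -> bc_svc_process() -> svc_process_common(), which dereferences rqstp->rq_xprt->xpt_ops with nothing pinning the transport. Building with -fsanitize=address (or running under valgrind) reports the use-after-free.

/* toy stand-ins for the kernel's svc_xprt / svc_xprt_ops */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct xpt_ops  { void (*prep_reply_hdr)(void); };
struct svc_xprt { const struct xpt_ops *xpt_ops; };

static void prep_reply_hdr(void) { puts("building reply header"); }
static const struct xpt_ops ops = { .prep_reply_hdr = prep_reply_hdr };

static struct svc_xprt *xprt;            /* transport shared by both paths */

/* "nfs41_callback_svc": picked a request that still references the transport */
static void *callback_thread(void *arg)
{
	usleep(10000);                          /* lose the race on purpose */
	xprt->xpt_ops->prep_reply_hdr();        /* use-after-free if shutdown ran first */
	return NULL;
}

int main(void)
{
	pthread_t tid;

	xprt = malloc(sizeof(*xprt));
	xprt->xpt_ops = &ops;

	pthread_create(&tid, NULL, callback_thread, NULL);

	/* "cleanup_mnt -> nfs_callback_down -> svc_close_net -> svc_bc_sock_free" */
	free(xprt);                             /* kfree(xprt): nothing waits for the callback thread */

	pthread_join(tid, NULL);
	return 0;
}
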
> On 07.11.2017 10:49, Kirill Tkhai wrote:
>> On 03.11.2017 19:47, Stanislav Kinsburskiy wrote:
>>> From: Stanislav Kinsburskiy <skinsbursky at parallels.com>
>>>
>>> The problem is that per-net SUNRPC transport shutdown is done regardless of
>>> current callback execution. This is a race leading to a transport use-after-free
>>> in the callback handler.
>>
>> Could you please draw the race to show the interaction between functions?
>>
>>> This patch fixes it in a straightforward way: it protects callback
>>> execution with the same mutex used for per-net data creation and destruction.
>>> Hopefully, this won't slow down the NFS client significantly.
>>>
>>> https://jira.sw.ru/browse/PSBM-75751
>>>
>>> Signed-off-by: Stanislav Kinsburskiy <skinsbursky at parallels.com>
>>> ---
>>>  fs/nfs/callback.c |    3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
>>> index 0beb275..82e8ed1 100644
>>> --- a/fs/nfs/callback.c
>>> +++ b/fs/nfs/callback.c
>>> @@ -118,6 +118,7 @@ nfs41_callback_svc(void *vrqstp)
>>>  			continue;
>>>  
>>>  		prepare_to_wait(&serv->sv_cb_waitq, &wq, TASK_INTERRUPTIBLE);
>>> +		mutex_lock(&nfs_callback_mutex);
>>>  		spin_lock_bh(&serv->sv_cb_lock);
>>>  		if (!list_empty(&serv->sv_cb_list)) {
>>>  			req = list_first_entry(&serv->sv_cb_list,
>>> @@ -129,8 +130,10 @@ nfs41_callback_svc(void *vrqstp)
>>>  			error = bc_svc_process(serv, req, rqstp);
>>>  			dprintk("bc_svc_process() returned w/ error code= %d\n",
>>>  				error);
>>> +			mutex_unlock(&nfs_callback_mutex);
>>>  		} else {
>>>  			spin_unlock_bh(&serv->sv_cb_lock);
>>> +			mutex_unlock(&nfs_callback_mutex);
>>>  			schedule();
>>>  			finish_wait(&serv->sv_cb_waitq, &wq);
>>>  		}
>>
>> Couldn't this change introduce a deadlock like the one below?
>>                                                           [thread]
>> nfs_callback_down()                                       nfs41_callback_svc()
>>    mutex_lock(&nfs_callback_mutex);                       
>>    kthread_stop(cb_info->task);                           
>>       wake_up_process();                                  
>>       wait_for_completion(&kthread->exited);              mutex_lock(&nfs_callback_mutex);
>>                                                              (blocks: the mutex is already held, so the
>>                                                               thread never exits and the completion never fires)

And what about the one above?
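
To make the question concrete, below is a minimal userspace sketch of the pattern being asked about. It is only an analogy, not the kernel code: a pthread mutex stands in for nfs_callback_mutex, the stop flag for kthread_should_stop(), and pthread_join() for kthread_stop()/wait_for_completion(). The shutdown path takes the mutex and then waits for the worker to exit, while the worker needs that same mutex before it can reach its stop check, so the join never returns.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t cb_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool stop;                               /* stands in for kthread_should_stop() */

/* worker loop, analogous to nfs41_callback_svc() with the proposed mutex added */
static void *callback_thread(void *arg)
{
	for (;;) {
		pthread_mutex_lock(&cb_mutex);  /* the lock the patch adds around processing */
		if (stop) {
			pthread_mutex_unlock(&cb_mutex);
			break;
		}
		/* ... process one backchannel request ... */
		pthread_mutex_unlock(&cb_mutex);
		usleep(1000);
	}
	return NULL;                            /* the exit the shutdown path waits for */
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, callback_thread, NULL);
	sleep(1);                               /* let the worker start looping */

	/* shutdown path, analogous to nfs_callback_down() */
	pthread_mutex_lock(&cb_mutex);          /* mutex_lock(&nfs_callback_mutex) */
	stop = true;                            /* kthread_stop() asks the thread to exit ... */
	pthread_join(tid, NULL);                /* ... and waits for it, but it is blocked on cb_mutex */
	pthread_mutex_unlock(&cb_mutex);        /* never reached */

	puts("shutdown finished");
	return 0;
}
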

