[Devel] [RFC PATCH vz9 v6 06/62] dm-ploop: reduce the time lock is hold, taking it only to protect data

Alexander Atanasov alexander.atanasov at virtuozzo.com
Mon Jan 13 19:27:37 MSK 2025


On 9.01.25 5:52, Pavel Tikhomirov wrote:
> 
> 
> On 12/6/24 05:55, Alexander Atanasov wrote:
>> Currently the lock is taken once for all submitted pios (code block);
>> move it to protect only the list insertion (data).
>> The goal is to reduce contention on the deferred lock, as
>> it is at the top of the lock stats.
>>
>> https://virtuozzo.atlassian.net/browse/VSTOR-91820
>> Signed-off-by: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
>> ---
>>   drivers/md/dm-ploop-map.c | 8 ++++----
>>   1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/md/dm-ploop-map.c b/drivers/md/dm-ploop-map.c
>> index 4b9e000e0f98..a0fe92a592a1 100644
>> --- a/drivers/md/dm-ploop-map.c
>> +++ b/drivers/md/dm-ploop-map.c
>> @@ -340,8 +340,9 @@ static void ploop_dispatch_pio(struct ploop *ploop, struct pio *pio,
>>                      bool *is_data, bool *is_flush)
>>   {
>>       struct list_head *list = &ploop->pios[pio->queue_list_id];
>> +    unsigned long flags;
>> -    lockdep_assert_held(&ploop->deferred_lock);
>> +    lockdep_assert_not_held(&ploop->deferred_lock);
>>       WARN_ON_ONCE(pio->queue_list_id >= PLOOP_LIST_COUNT);
>>       if (pio->queue_list_id == PLOOP_LIST_FLUSH)
>> @@ -349,23 +350,22 @@ static void ploop_dispatch_pio(struct ploop *ploop, struct pio *pio,
>>       else
>>           *is_data = true;
>> +    spin_lock_irqsave(&ploop->deferred_lock, flags);
>>       list_add_tail(&pio->list, list);
>> +    spin_unlock_irqrestore(&ploop->deferred_lock, flags);
> 
> This patch can be merged to "[RFC PATCH vz9 v6 10/62] dm-ploop: convert 
> the rest of the lists to use llist variant".

I try to be extra careful with list conversions; they tend to bring
hard-to-find issues, so I prefer to do conversions in small steps.

> 
> Also I think that having one lock around the whole loop introduces much 
> less lock contention than doing multiple take/release on each iteration 
> of the loop.


This lock goes away later with the move to llist. Usually there is just
one entry in the list, since most callers dispatch a single pio. Taking
the lock only when it is necessary reduces the time the lock is held -
we lock the data instead of the code.
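
For illustration, a rough sketch of what the later llist-based dispatch
could look like. The field names pio->llist and ploop->pios_llist are
placeholders here, not from this series; the point is that the producer
side needs no deferred_lock at all, and the worker drains each queue
with a single llist_del_all():

/*
 * Rough sketch only: assumes struct pio gains a struct llist_node
 * member ("llist") and struct ploop gains per-queue heads
 * ("pios_llist") - both names are placeholders, not from this series.
 */
#include <linux/llist.h>

static void ploop_dispatch_pio_llist(struct ploop *ploop, struct pio *pio,
				     bool *is_data, bool *is_flush)
{
	if (pio->queue_list_id == PLOOP_LIST_FLUSH)
		*is_flush = true;
	else
		*is_data = true;

	/* Lock-free producer side: no deferred_lock needed. */
	llist_add(&pio->llist, &ploop->pios_llist[pio->queue_list_id]);
}

/* Consumer side (e.g. the worker): detach everything in one atomic op. */
static void ploop_take_pios(struct ploop *ploop, int qid,
			    struct list_head *out)
{
	struct llist_node *node = llist_del_all(&ploop->pios_llist[qid]);
	struct pio *pio, *tmp;

	/* llist_del_all() hands back newest-first; restore FIFO order. */
	node = llist_reverse_order(node);
	llist_for_each_entry_safe(pio, tmp, node, llist)
		list_add_tail(&pio->list, out);
}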

> 
>>   }
>>   void ploop_dispatch_pios(struct ploop *ploop, struct pio *pio,
>>                struct list_head *pio_list)
>>   {
>>       bool is_data = false, is_flush = false;
>> -    unsigned long flags;
>> -    spin_lock_irqsave(&ploop->deferred_lock, flags);
>>       if (pio)
>>           ploop_dispatch_pio(ploop, pio, &is_data, &is_flush);
>>       if (pio_list) {
>>           while ((pio = ploop_pio_list_pop(pio_list)) != NULL)
>>               ploop_dispatch_pio(ploop, pio, &is_data, &is_flush);
>>       }
>> -    spin_unlock_irqrestore(&ploop->deferred_lock, flags);
>>       if (is_data)
>>           queue_work(ploop->wq, &ploop->worker);
> 

-- 
Regards,
Alexander Atanasov


