[Devel] [RFC PATCH vz9 v6 00/62] ploop optimistations and scalling

Pavel Tikhomirov ptikhomirov at virtuozzo.com
Fri Jan 10 10:55:24 MSK 2025



On 1/10/25 15:52, Alexander Atanasov wrote:
> On 10.01.25 9:43, Pavel Tikhomirov wrote:
>> I didn't find a specific patch for this comment, so I'll leave it on 
>> patch 00.
>>
>> What if ploop_dispatch_pios and do_ploop_run_work (do_ploop_work) run 
>> concurrently for one ploop on different CPUs? Wouldn't it be bad if 
>> ploop_dispatch_pios has dispatched only part of the pios while 
>> do_ploop_run_work already starts processing them, or if 
>> do_ploop_run_work does not yet "see" all pios from an already fully 
>> dispatched ploop_dispatch_pios? That can lead to arbitrary pio 
>> reordering.
> 
> This is not a problem if we reorder pios - I had the same worries and 
> even tried to make them execute in sequence, but it is not required. 
> Filesystems receive and handle out-of-order requests, and callers must 
> wait for completion before issuing dependent requests. Inside ploop, 
> completion is organized so that when all split pios are done, the 
> completion is performed.
> 
> Said another way: if a caller issues two writes, or a read and a write, 
> to the same place without waiting for completion, it is not our problem.
> 

If a caller issues a write and then a sync, in that order, should the 
caller expect that, on receiving both completions, the sync was 
performed after the write?

> 
> 
>>
>> For instance imagine:
>>
>> ploop_dispatch_pios () {
>>                            do_ploop_run_work () {
>>                              llcow_pios = llist_del_all(...);
>>                              ...
>>
>>    add data pio #1
>>    add data pio #2
>>    add flush pio #3
>>                              llflush_pios = llist_del_all(...);
>>                              process_ploop_fsync_work(llflush_pios);
>>                            }
>> }
>>
>> This way, flush pio #3 will be processed before data pios #1 and #2.
>>
> 

-- 
Best regards, Tikhomirov Pavel
Senior Software Developer, Virtuozzo.
