[Devel] [PATCH RHEL9 COMMIT] dm-ploop: fsync after all pios are sent

Konstantin Khorenko khorenko at virtuozzo.com
Mon Jan 27 16:12:33 MSK 2025


The commit is pushed to "branch-rh9-5.14.0-427.44.1.vz9.80.x-ovz" and will appear at git at bitbucket.org:openvz/vzkernel.git
after rh9-5.14.0-427.44.1.vz9.80.6
------>
commit e44e7893e18a2505e60dca4edff3481971e75ef3
Author: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
Date:   Fri Jan 24 17:35:37 2025 +0200

    dm-ploop: fsync after all pios are sent
    
    Currently there are two workers: one to handle pios and one to
    handle flushes (via vfs_fsync). These workers are created
    unbound, which means they run whenever a CPU is free. When
    ploop sends pios (via ploop_dispatch_pios) it checks whether
    there are data pios and whether there are flush pios; if both
    are present, both workers are scheduled to run. The result is
    a lot of writes and syncs running in parallel, which is both
    slow and incorrect. Slow, because the underlying fs keeps
    interleaving writes and syncs instead of caching the writes
    and syncing them once. Incorrect, because a REQ_FLUSH that
    ploop handles must complete only after everything before it
    has been sent to disk, which is not guaranteed when the two
    workers run in parallel.
    
    To address this, process flushes only after all pending pios
    have been processed and submitted: the data worker now drains
    its lists first and then handles any pending flush pios.
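    
    A simplified sketch of the reworked flow (paraphrased from the
    diff below; the deferred_lock locking, the list splicing and
    the standby-mode check are trimmed, and the step comments
    stand in for the real submission calls):
    
        void do_ploop_work(struct work_struct *ws)
        {
            struct ploop *ploop = container_of(ws, struct ploop, worker);
            bool do_fsync;
    
            /* Note whether any flush pios are queued before starting. */
            do_fsync = !list_empty(&ploop->pios[PLOOP_LIST_FLUSH]);
    
            /* 1. Process and submit all pending data/discard/cow pios. */
    
            /* 2. Only after everything has been submitted, fsync the
             *    top delta file and complete the waiting flush pios.
             */
            if (do_fsync)
                process_ploop_fsync_work(ploop);
        }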
    
    https://virtuozzo.atlassian.net/browse/VSTOR-93454
    Signed-off-by: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
    
    ======
    Patchset description:
    ploop: optimisations and scaling
    
    Ploop processes requests in different threads in parallel
    where possible, which results in a significant performance
    improvement and makes further optimisations possible.
    
    Known bugs:
      - delayed metadata writeback is not working and is missing error handling
         - patch to disable it until fixed
      - fast path is not working - causes rcu lockups - patch to disable it
    
    Further improvements:
      - optimize md pages lookups
    
    Alexander Atanasov (50):
      dm-ploop: md_pages map all pages at creation time
      dm-ploop: Use READ_ONCE/WRITE_ONCE to access md page data
      dm-ploop: fsync after all pios are sent
      dm-ploop: move md status to use proper bitops
      dm-ploop: convert wait_list and wb_batch_llist to use lockless lists
      dm-ploop: convert enospc handling to use lockless lists
      dm-ploop: convert suspended_pios list to use lockless list
      dm-ploop: convert the rest of the lists to use llist variant
      dm-ploop: combine processing of pios thru prepare list and remove
        fsync worker
      dm-ploop: move from wq to kthread
      dm-ploop: move preparations of pios into the caller from worker
      dm-ploop: fast path execution for reads
      dm-ploop: do not use a wrapper for set_bit to make a page writeback
      dm-ploop: BAT use only one list for writeback
      dm-ploop: make md writeback timeout to be per page
      dm-ploop: add interface to disable bat writeback delay
      dm-ploop: convert wb_batch_list to lockless variant
      dm-ploop: convert high_prio to status
      dm-ploop: split cow processing into two functions
      dm-ploop: convert md page rw lock to spin lock
      dm-ploop: convert bat_rwlock to bat_lock spinlock
      dm-ploop: prepare bat updates under bat_lock
      dm-ploop: make ploop_bat_write_complete ready for parallel pio
        completion
      dm-ploop: make ploop_submit_metadata_writeback return number of
        requests sent
      dm-ploop: introduce pio runner threads
      dm-ploop: add pio list ids to be used when passing pios to runners
      dm-ploop: process pios via runners
      dm-ploop: disable metadata writeback delay
      dm-ploop: disable fast path
      dm-ploop: use lockless lists for chained cow updates list
      dm-ploop: use lockless lists for data ready pios
      dm-ploop: give runner threads better name
      dm-ploop: resize operation - add holes bitmap locking
      dm-ploop: remove unnecessary operations
      dm-ploop: use filp per thread
      dm-ploop: catch if we try to advance pio past bio end
      dm-ploop: support REQ_FUA for data pios
      dm-ploop: proplerly access nr_bat_entries
      dm-ploop: fix locking and improve error handling when submitting pios
      dm-ploop: fix how ENOTBLK is handled
      dm-ploop: sync when suspended or stopping
      dm-ploop: rework bat completion logic
      dm-ploop: rework logic in pio processing
      dm-ploop: end fsync pios in parallel
      dm-ploop: make filespace preallocations async
      dm-ploop: resubmit enospc pios from dispatcher thread
      dm-ploop: dm-ploop: simplify discard completion
      dm-ploop: use GFP_ATOMIC instead of GFP_NOIO
      dm-ploop: fix locks used in mixed context
      dm-ploop: fix how current flags are managed inside threads
    
    Andrey Zhadchenko (13):
      dm-ploop: do not flush after metadata writes
      dm-ploop: set IOCB_DSYNC on all FUA requests
      dm-ploop: remove extra ploop_cluster_is_in_top_delta()
      dm-ploop: introduce per-md page locking
      dm-ploop: reduce BAT accesses on discard completion
      dm-ploop: simplify llseek
      dm-ploop: speed up ploop_prepare_bat_update()
      dm-ploop: make new allocations immediately visible in BAT
      dm-ploop: drop ploop_cluster_is_in_top_delta()
      dm-ploop: do not wait for BAT update for non-FUA requests
      dm-ploop: add delay for metadata writeback
      dm-ploop: submit all postponed metadata on REQ_OP_FLUSH
      dm-ploop: handle REQ_PREFLUSH
    
    Feature: dm-ploop: ploop target driver
---
 drivers/md/dm-ploop-map.c | 52 +++++++++++++++++++++++++++++------------------
 1 file changed, 32 insertions(+), 20 deletions(-)

diff --git a/drivers/md/dm-ploop-map.c b/drivers/md/dm-ploop-map.c
index 221e97ba84a9..010492e96f52 100644
--- a/drivers/md/dm-ploop-map.c
+++ b/drivers/md/dm-ploop-map.c
@@ -368,7 +368,7 @@ void ploop_dispatch_pios(struct ploop *ploop, struct pio *pio,
 
 	if (is_data)
 		queue_work(ploop->wq, &ploop->worker);
-	if (is_flush)
+	else if (is_flush)
 		queue_work(ploop->wq, &ploop->fsync_worker);
 }
 
@@ -1786,6 +1786,30 @@ static void ploop_submit_metadata_writeback(struct ploop *ploop)
 	}
 }
 
+static void process_ploop_fsync_work(struct ploop *ploop)
+{
+	LIST_HEAD(flush_pios);
+	struct file *file;
+	struct pio *pio;
+	int ret;
+
+	spin_lock_irq(&ploop->deferred_lock);
+	list_splice_init(&ploop->pios[PLOOP_LIST_FLUSH], &flush_pios);
+	spin_unlock_irq(&ploop->deferred_lock);
+
+	file = ploop_top_delta(ploop)->file;
+	ret = vfs_fsync(file, 0);
+
+	while ((pio = ploop_pio_list_pop(&flush_pios)) != NULL) {
+		if (unlikely(ret)) {
+			pio->bi_status = errno_to_blk_status(ret);
+			if (static_branch_unlikely(&ploop_standby_check))
+				ploop_check_standby_mode(ploop, ret);
+		}
+		ploop_pio_endio(pio);
+	}
+}
+
 void do_ploop_work(struct work_struct *ws)
 {
 	struct ploop *ploop = container_of(ws, struct ploop, worker);
@@ -1794,6 +1818,7 @@ void do_ploop_work(struct work_struct *ws)
 	LIST_HEAD(discard_pios);
 	LIST_HEAD(cow_pios);
 	LIST_HEAD(resubmit_pios);
+	bool do_fsync = false;
 	unsigned int old_flags = current->flags;
 
 	current->flags |= PF_IO_THREAD|PF_LOCAL_THROTTLE|PF_MEMALLOC_NOIO;
@@ -1804,6 +1829,8 @@ void do_ploop_work(struct work_struct *ws)
 	list_splice_init(&ploop->pios[PLOOP_LIST_DISCARD], &discard_pios);
 	list_splice_init(&ploop->pios[PLOOP_LIST_COW], &cow_pios);
 	list_splice_init(&ploop->resubmit_pios, &resubmit_pios);
+	if (!list_empty(&ploop->pios[PLOOP_LIST_FLUSH]))
+		do_fsync = true;
 	spin_unlock_irq(&ploop->deferred_lock);
 
 	ploop_prepare_embedded_pios(ploop, &embedded_pios, &deferred_pios);
@@ -1816,31 +1843,16 @@ void do_ploop_work(struct work_struct *ws)
 	ploop_submit_metadata_writeback(ploop);
 
 	current->flags = old_flags;
+
+	if (do_fsync)
+		process_ploop_fsync_work(ploop);
 }
 
 void do_ploop_fsync_work(struct work_struct *ws)
 {
 	struct ploop *ploop = container_of(ws, struct ploop, fsync_worker);
-	LIST_HEAD(flush_pios);
-	struct file *file;
-	struct pio *pio;
-	int ret;
 
-	spin_lock_irq(&ploop->deferred_lock);
-	list_splice_init(&ploop->pios[PLOOP_LIST_FLUSH], &flush_pios);
-	spin_unlock_irq(&ploop->deferred_lock);
-
-	file = ploop_top_delta(ploop)->file;
-	ret = vfs_fsync(file, 0);
-
-	while ((pio = ploop_pio_list_pop(&flush_pios)) != NULL) {
-		if (unlikely(ret)) {
-			pio->bi_status = errno_to_blk_status(ret);
-			if (static_branch_unlikely(&ploop_standby_check))
-				ploop_check_standby_mode(ploop, ret);
-		}
-		ploop_pio_endio(pio);
-	}
+	process_ploop_fsync_work(ploop);
 }
 
 static void ploop_submit_embedded_pio(struct ploop *ploop, struct pio *pio)

