[Devel] [PATCH RHEL9 COMMIT] dm-ploop: fix locking and improve error handling when submitting pios

Konstantin Khorenko khorenko at virtuozzo.com
Mon Jan 27 16:12:52 MSK 2025


The commit is pushed to "branch-rh9-5.14.0-427.44.1.vz9.80.x-ovz" and will appear at git at bitbucket.org:openvz/vzkernel.git
after rh9-5.14.0-427.44.1.vz9.80.6
------>
commit 4b922f2e6f8217ed03051e0ab79c530c1c615b3c
Author: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
Date:   Fri Jan 24 17:36:26 2025 +0200

    dm-ploop: fix locking and improve error handling when submitting pios
    
    The difference between hlist_unhashed_lockless and hlist_unhashed
    is that the _lockless version uses READ_ONCE to do the check.
    Since it is used without locks, we must switch to the _lockless variant.
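    
    For reference, this is roughly how the two helpers are defined in
    include/linux/list.h (a sketch for illustration, not part of this patch):
    
        static inline int hlist_unhashed(const struct hlist_node *h)
        {
                return !h->pprev;
        }
    
        /*
         * Lockless variant: READ_ONCE() keeps the compiler from tearing
         * or refetching the pprev load when no lock is held.
         */
        static inline int hlist_unhashed_lockless(const struct hlist_node *h)
        {
                return !READ_ONCE(h->pprev);
        }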
    
    Also make cluster locking and adding to inflight_pios return a result
    so we can track whether and where they failed.
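    
    A minimal sketch of the resulting caller pattern (1 means success,
    0 means failure; the function name is taken from the patch below,
    the error message text is only illustrative):
    
        if (!ploop_add_cluster_lk(ploop, pio, clu))
                PL_ERR("clu %u already locked\n", clu);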
    
    https://virtuozzo.atlassian.net/browse/VSTOR-91821
    Signed-off-by: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
    
    ======
    Patchset description:
    ploop: optimisations and scaling
    
    Ploop processes requests in different threads in parallel
    where possible, which results in a significant improvement in
    performance and makes further optimisations possible.
    
    Known bugs:
      - delayed metadata writeback is not working and is missing error handling
         - a patch disables it until fixed
      - fast path is not working - it causes RCU lockups - a patch disables it
    
    Further improvements:
      - optimize md pages lookups
    
    Alexander Atanasov (50):
      dm-ploop: md_pages map all pages at creation time
      dm-ploop: Use READ_ONCE/WRITE_ONCE to access md page data
      dm-ploop: fsync after all pios are sent
      dm-ploop: move md status to use proper bitops
      dm-ploop: convert wait_list and wb_batch_llist to use lockless lists
      dm-ploop: convert enospc handling to use lockless lists
      dm-ploop: convert suspended_pios list to use lockless list
      dm-ploop: convert the rest of the lists to use llist variant
      dm-ploop: combine processing of pios thru prepare list and remove
        fsync worker
      dm-ploop: move from wq to kthread
      dm-ploop: move preparations of pios into the caller from worker
      dm-ploop: fast path execution for reads
      dm-ploop: do not use a wrapper for set_bit to make a page writeback
      dm-ploop: BAT use only one list for writeback
      dm-ploop: make md writeback timeout to be per page
      dm-ploop: add interface to disable bat writeback delay
      dm-ploop: convert wb_batch_list to lockless variant
      dm-ploop: convert high_prio to status
      dm-ploop: split cow processing into two functions
      dm-ploop: convert md page rw lock to spin lock
      dm-ploop: convert bat_rwlock to bat_lock spinlock
      dm-ploop: prepare bat updates under bat_lock
      dm-ploop: make ploop_bat_write_complete ready for parallel pio
        completion
      dm-ploop: make ploop_submit_metadata_writeback return number of
        requests sent
      dm-ploop: introduce pio runner threads
      dm-ploop: add pio list ids to be used when passing pios to runners
      dm-ploop: process pios via runners
      dm-ploop: disable metadata writeback delay
      dm-ploop: disable fast path
      dm-ploop: use lockless lists for chained cow updates list
      dm-ploop: use lockless lists for data ready pios
      dm-ploop: give runner threads better name
      dm-ploop: resize operation - add holes bitmap locking
      dm-ploop: remove unnecessary operations
      dm-ploop: use filp per thread
      dm-ploop: catch if we try to advance pio past bio end
      dm-ploop: support REQ_FUA for data pios
      dm-ploop: proplerly access nr_bat_entries
      dm-ploop: fix locking and improve error handling when submitting pios
      dm-ploop: fix how ENOTBLK is handled
      dm-ploop: sync when suspended or stopping
      dm-ploop: rework bat completion logic
      dm-ploop: rework logic in pio processing
      dm-ploop: end fsync pios in parallel
      dm-ploop: make filespace preallocations async
      dm-ploop: resubmit enospc pios from dispatcher thread
      dm-ploop: dm-ploop: simplify discard completion
      dm-ploop: use GFP_ATOMIC instead of GFP_NOIO
      dm-ploop: fix locks used in mixed context
      dm-ploop: fix how current flags are managed inside threads
    
    Andrey Zhadchenko (13):
      dm-ploop: do not flush after metadata writes
      dm-ploop: set IOCB_DSYNC on all FUA requests
      dm-ploop: remove extra ploop_cluster_is_in_top_delta()
      dm-ploop: introduce per-md page locking
      dm-ploop: reduce BAT accesses on discard completion
      dm-ploop: simplify llseek
      dm-ploop: speed up ploop_prepare_bat_update()
      dm-ploop: make new allocations immediately visible in BAT
      dm-ploop: drop ploop_cluster_is_in_top_delta()
      dm-ploop: do not wait for BAT update for non-FUA requests
      dm-ploop: add delay for metadata writeback
      dm-ploop: submit all postponed metadata on REQ_OP_FLUSH
      dm-ploop: handle REQ_PREFLUSH
    
    Feature: dm-ploop: ploop target driver
---
 drivers/md/dm-ploop-map.c | 70 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 54 insertions(+), 16 deletions(-)

diff --git a/drivers/md/dm-ploop-map.c b/drivers/md/dm-ploop-map.c
index 3b24778e0894..dc150c8050d9 100644
--- a/drivers/md/dm-ploop-map.c
+++ b/drivers/md/dm-ploop-map.c
@@ -498,17 +498,32 @@ static void ploop_dec_nr_inflight(struct ploop *ploop, struct pio *pio)
 	}
 }
 
-static void ploop_link_pio(struct hlist_head head[], struct pio *pio,
+static int ploop_link_pio(struct hlist_head head[], struct pio *pio,
 		     u32 clu, bool exclusive)
 {
 	struct hlist_head *slot = ploop_htable_slot(head, clu);
+	struct ploop *ploop = pio->ploop;
+
+	if (exclusive) {
+		struct pio *pe = ploop_find_pio(head, clu);
+
+		if (pe == pio)
+			return 1;
 
-	if (exclusive)
-		WARN_ON_ONCE(ploop_find_pio(head, clu) != NULL);
+		WARN_ON_ONCE(pe != NULL);
 
-	BUG_ON(!hlist_unhashed(&pio->hlist_node));
+		if (pe)
+			PL_ERR("clu:%u already exclusively locked\n", clu);
+	}
+
+	if (!hlist_unhashed_lockless(&pio->hlist_node)) {
+		PL_ERR("already hashed clu:%u e:%d\n", clu, exclusive);
+		WARN_ON_ONCE(1);
+		return 0;
+	}
 	hlist_add_head(&pio->hlist_node, slot);
 	pio->clu = clu;
+	return 1;
 }
 
 /*
@@ -516,45 +531,55 @@ static void ploop_link_pio(struct hlist_head head[], struct pio *pio,
  * or from exclusive_bios_rbtree. BIOs from endio_list are requeued
  * to deferred_list.
  */
-static void ploop_unlink_pio(struct ploop *ploop, struct pio *pio,
+static int ploop_unlink_pio(struct ploop *ploop, struct pio *pio,
 		       struct list_head *pio_list)
 {
-	BUG_ON(hlist_unhashed(&pio->hlist_node));
+	if (hlist_unhashed_lockless(&pio->hlist_node)) {
+		WARN_ON_ONCE(1);
+		return 0;
+	}
 
 	hlist_del_init(&pio->hlist_node);
 	list_splice_tail_init(&pio->endio_list, pio_list);
+	return 1;
 }
 
-static void ploop_add_cluster_lk(struct ploop *ploop, struct pio *pio, u32 clu)
+static int ploop_add_cluster_lk(struct ploop *ploop, struct pio *pio, u32 clu)
 {
 	unsigned long flags;
+	int ret;
 
 	spin_lock_irqsave(&ploop->deferred_lock, flags);
-	ploop_link_pio(ploop->exclusive_pios, pio, clu, true);
+	ret = ploop_link_pio(ploop->exclusive_pios, pio, clu, true);
 	spin_unlock_irqrestore(&ploop->deferred_lock, flags);
+	return ret;
 }
 
-static void ploop_del_cluster_lk(struct ploop *ploop, struct pio *pio)
+static int ploop_del_cluster_lk(struct ploop *ploop, struct pio *pio)
 {
 	LIST_HEAD(pio_list);
 	unsigned long flags;
+	int ret;
 
 	spin_lock_irqsave(&ploop->deferred_lock, flags);
-	ploop_unlink_pio(ploop, pio, &pio_list);
+	ret = ploop_unlink_pio(ploop, pio, &pio_list);
 	spin_unlock_irqrestore(&ploop->deferred_lock, flags);
 
 	if (!list_empty(&pio_list))
 		ploop_dispatch_pios(ploop, NULL, &pio_list);
+	return ret;
 }
 
-static void ploop_link_submitting_pio(struct ploop *ploop, struct pio *pio,
+static int ploop_link_submitting_pio(struct ploop *ploop, struct pio *pio,
 				      u32 clu)
 {
 	unsigned long flags;
+	int ret;
 
 	spin_lock_irqsave(&ploop->inflight_lock, flags);
-	ploop_link_pio(ploop->inflight_pios, pio, clu, false);
+	ret = ploop_link_pio(ploop->inflight_pios, pio, clu, false);
 	spin_unlock_irqrestore(&ploop->inflight_lock, flags);
+	return ret;
 }
 
 static void ploop_unlink_completed_pio(struct ploop *ploop, struct pio *pio)
@@ -562,7 +587,7 @@ static void ploop_unlink_completed_pio(struct ploop *ploop, struct pio *pio)
 	LIST_HEAD(pio_list);
 	unsigned long flags;
 
-	if (hlist_unhashed(&pio->hlist_node))
+	if (hlist_unhashed_lockless(&pio->hlist_node))
 		return;
 
 	spin_lock_irqsave(&ploop->inflight_lock, flags);
@@ -706,7 +731,8 @@ static int ploop_handle_discard_pio(struct ploop *ploop, struct pio *pio,
 		return 0;
 	}
 
-	ploop_add_cluster_lk(ploop, pio, clu);
+	if (!ploop_add_cluster_lk(ploop, pio, clu))
+		PL_ERR("dis clu %d already locked\n", clu);
 	pio->wants_discard_index_cleanup = true;
 
 punch_hole:
@@ -1429,7 +1455,8 @@ static int ploop_submit_cluster_cow(struct ploop *ploop, unsigned int level,
 	cow->aux_pio = aux_pio;
 	cow->cow_pio = cow_pio;
 
-	ploop_add_cluster_lk(ploop, cow_pio, clu);
+	if (!ploop_add_cluster_lk(ploop, cow_pio, clu))
+		PL_ERR("cowclu %d already locked\n", clu);
 
 	/* Stage #0: read secondary delta full clu */
 	ploop_map_and_submit_rw(ploop, dst_clu, aux_pio, level);
@@ -1706,8 +1733,19 @@ static int ploop_process_one_deferred_bio(struct ploop *ploop, struct pio *pio)
 						      &dst_clu, pio);
 	if (!ret)
 		goto executed;
+
 queue:		/* pio needs to go to the queue */
-	ploop_link_submitting_pio(ploop, pio, clu);
+	if (!ploop_link_submitting_pio(ploop, pio, clu)) {
+		PL_ERR("link submitting pio failed\n");
+		/* Pio is put into endio_list */
+		if (ploop_postpone_if_cluster_locked(ploop, pio, clu)) {
+			goto handled;
+		} else {
+			PL_ERR("link submitting pio postpone failed\n");
+			WARN_ON_ONCE(1);
+			/* fall through and run it anyway */
+		}
+	}
 	ploop_map_and_submit_rw(ploop, dst_clu, pio, ploop_top_level(ploop));
 	// return 1;
 executed:	/* pio op started  */

