[Devel] [PATCH RHEL9 COMMIT] dm-ploop: properly access nr_bat_entries

Konstantin Khorenko khorenko at virtuozzo.com
Mon Jan 27 16:12:51 MSK 2025


The commit is pushed to "branch-rh9-5.14.0-427.44.1.vz9.80.x-ovz" and will appear at git at bitbucket.org:openvz/vzkernel.git
after rh9-5.14.0-427.44.1.vz9.80.6
------>
commit 6c527ebc75036f82303e4dc20aeff33f4ac44ad0
Author: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
Date:   Fri Jan 24 17:36:25 2025 +0200

    dm-ploop: properly access nr_bat_entries
    
    nr_bat_entries is updated while resizing, and some places
    read it without holding the bat_lock. To ensure a consistent
    value is read, use READ_ONCE in the readers and WRITE_ONCE
    when updating. During grow/resize and shrink, pios are
    suspended and the code waits for all active inflight pios
    to complete.
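
    A minimal sketch of the access pattern this patch relies on; the
    helper names below are hypothetical, and only the READ_ONCE/WRITE_ONCE
    pairing and the bat_lock usage mirror the actual driver code:

        /* Writer side: the resize path publishes the new size under
         * bat_lock, using WRITE_ONCE so lockless readers never see a
         * torn or compiler-cached value.
         */
        static void example_publish_resize(struct ploop *ploop, u32 new_nr)
        {
                unsigned long flags;

                spin_lock_irqsave(&ploop->bat_lock, flags);
                WRITE_ONCE(ploop->nr_bat_entries, new_nr);
                spin_unlock_irqrestore(&ploop->bat_lock, flags);
        }

        /* Reader side: paths that do not take bat_lock pair the write
         * with READ_ONCE, which prevents the compiler from tearing or
         * re-fetching the load.
         */
        static bool example_clu_in_range(struct ploop *ploop, u32 clu)
        {
                return clu < READ_ONCE(ploop->nr_bat_entries);
        }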
    
    https://virtuozzo.atlassian.net/browse/VSTOR-91821
    Signed-off-by: Alexander Atanasov <alexander.atanasov at virtuozzo.com>
    
    ======
    Patchset description:
    ploop: optimisations and scaling
    
    Ploop processes requests in different threads in parallel
    where possible, which results in a significant improvement in
    performance and makes further optimisations possible.
    
    Known bugs:
      - delayed metadata writeback is not working and is missing error handling
         - patch to disable it until fixed
      - fast path is not working - causes rcu lockups - patch to disable it
    
    Further improvements:
      - optimize md pages lookups
    
    Alexander Atanasov (50):
      dm-ploop: md_pages map all pages at creation time
      dm-ploop: Use READ_ONCE/WRITE_ONCE to access md page data
      dm-ploop: fsync after all pios are sent
      dm-ploop: move md status to use proper bitops
      dm-ploop: convert wait_list and wb_batch_llist to use lockless lists
      dm-ploop: convert enospc handling to use lockless lists
      dm-ploop: convert suspended_pios list to use lockless list
      dm-ploop: convert the rest of the lists to use llist variant
      dm-ploop: combine processing of pios thru prepare list and remove
        fsync worker
      dm-ploop: move from wq to kthread
      dm-ploop: move preparations of pios into the caller from worker
      dm-ploop: fast path execution for reads
      dm-ploop: do not use a wrapper for set_bit to make a page writeback
      dm-ploop: BAT use only one list for writeback
      dm-ploop: make md writeback timeout to be per page
      dm-ploop: add interface to disable bat writeback delay
      dm-ploop: convert wb_batch_list to lockless variant
      dm-ploop: convert high_prio to status
      dm-ploop: split cow processing into two functions
      dm-ploop: convert md page rw lock to spin lock
      dm-ploop: convert bat_rwlock to bat_lock spinlock
      dm-ploop: prepare bat updates under bat_lock
      dm-ploop: make ploop_bat_write_complete ready for parallel pio
        completion
      dm-ploop: make ploop_submit_metadata_writeback return number of
        requests sent
      dm-ploop: introduce pio runner threads
      dm-ploop: add pio list ids to be used when passing pios to runners
      dm-ploop: process pios via runners
      dm-ploop: disable metadata writeback delay
      dm-ploop: disable fast path
      dm-ploop: use lockless lists for chained cow updates list
      dm-ploop: use lockless lists for data ready pios
      dm-ploop: give runner threads better name
      dm-ploop: resize operation - add holes bitmap locking
      dm-ploop: remove unnecessary operations
      dm-ploop: use filp per thread
      dm-ploop: catch if we try to advance pio past bio end
      dm-ploop: support REQ_FUA for data pios
      dm-ploop: properly access nr_bat_entries
      dm-ploop: fix locking and improve error handling when submitting pios
      dm-ploop: fix how ENOTBLK is handled
      dm-ploop: sync when suspended or stopping
      dm-ploop: rework bat completion logic
      dm-ploop: rework logic in pio processing
      dm-ploop: end fsync pios in parallel
      dm-ploop: make filespace preallocations async
      dm-ploop: resubmit enospc pios from dispatcher thread
      dm-ploop: simplify discard completion
      dm-ploop: use GFP_ATOMIC instead of GFP_NOIO
      dm-ploop: fix locks used in mixed context
      dm-ploop: fix how current flags are managed inside threads
    
    Andrey Zhadchenko (13):
      dm-ploop: do not flush after metadata writes
      dm-ploop: set IOCB_DSYNC on all FUA requests
      dm-ploop: remove extra ploop_cluster_is_in_top_delta()
      dm-ploop: introduce per-md page locking
      dm-ploop: reduce BAT accesses on discard completion
      dm-ploop: simplify llseek
      dm-ploop: speed up ploop_prepare_bat_update()
      dm-ploop: make new allocations immediately visible in BAT
      dm-ploop: drop ploop_cluster_is_in_top_delta()
      dm-ploop: do not wait for BAT update for non-FUA requests
      dm-ploop: add delay for metadata writeback
      dm-ploop: submit all postponed metadata on REQ_OP_FLUSH
      dm-ploop: handle REQ_PREFLUSH
    
    Feature: dm-ploop: ploop target driver
---
 drivers/md/dm-ploop-cmd.c | 13 +++++++++----
 drivers/md/dm-ploop-map.c | 12 ++++++------
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/md/dm-ploop-cmd.c b/drivers/md/dm-ploop-cmd.c
index 50a23212270d..011d37f8fe3e 100644
--- a/drivers/md/dm-ploop-cmd.c
+++ b/drivers/md/dm-ploop-cmd.c
@@ -465,10 +465,14 @@ static int ploop_process_resize_cmd(struct ploop *ploop, struct ploop_cmd *cmd)
 			ploop_hole_set_bit(dst_clu, ploop);
 			dst_clu--;
 		}
-		swap(ploop->hb_nr, cmd->resize.hb_nr);
+		WRITE_ONCE(ploop->hb_nr, cmd->resize.hb_nr);
 	} else {
 		ploop_add_md_pages(ploop, &cmd->resize.md_pages_root);
-		swap(ploop->nr_bat_entries, cmd->resize.nr_bat_entries);
+		/*
+		 * In some places code reads nr_bat_entries without lock
+		 * it is ok for grow but if we do shrink this may be a problem.
+		 */
+		WRITE_ONCE(ploop->nr_bat_entries, cmd->resize.nr_bat_entries);
 	}
 	spin_unlock_irq(&ploop->bat_lock);
 
@@ -638,7 +642,7 @@ static int ploop_process_merge_latest_snapshot(struct ploop *ploop)
 	int ret = 0;
 	u32 clu;
 
-	for (clu = 0; clu < ploop->nr_bat_entries; clu++) {
+	for (clu = 0; clu < READ_ONCE(ploop->nr_bat_entries); clu++) {
 		if (fatal_signal_pending(current)) {
 			ret = -EINTR;
 			break;
@@ -974,7 +978,8 @@ static int process_flip_upper_deltas(struct ploop *ploop)
 	struct md_page *md;
 	u64 size;
 
-	size = (PLOOP_MAP_OFFSET + ploop->nr_bat_entries) * sizeof(map_index_t);
+	size = (PLOOP_MAP_OFFSET + READ_ONCE(ploop->nr_bat_entries))
+		* sizeof(map_index_t);
         bat_clusters = DIV_ROUND_UP(size, CLU_SIZE(ploop));
 	hb_nr = ploop->hb_nr;
 
diff --git a/drivers/md/dm-ploop-map.c b/drivers/md/dm-ploop-map.c
index a1dcf401409c..3b24778e0894 100644
--- a/drivers/md/dm-ploop-map.c
+++ b/drivers/md/dm-ploop-map.c
@@ -106,14 +106,14 @@ static int ploop_rq_valid(struct ploop *ploop, struct request *rq)
 	end_byte = to_bytes(sector) + blk_rq_bytes(rq) - 1;
 	end_clu = POS_TO_CLU(ploop, end_byte);
 
-	if (unlikely(end_clu >= ploop->nr_bat_entries)) {
+	if (unlikely(end_clu >= READ_ONCE(ploop->nr_bat_entries))) {
 		/*
 		 * This mustn't happen, since we set max_io_len
 		 * via dm_set_target_max_io_len().
 		 */
 		WARN_ONCE(1, "sec=%llu, size=%u, end_clu=%u, nr=%u\n",
 			  sector, blk_rq_bytes(rq),
-			  end_clu, ploop->nr_bat_entries);
+			  end_clu, READ_ONCE(ploop->nr_bat_entries));
 		return -EINVAL;
 	}
 
@@ -824,8 +824,9 @@ static void ploop_advance_local_after_bat_wb(struct ploop *ploop,
 	off = piwb->page_id * PAGE_SIZE / sizeof(map_index_t);
 	off -= PLOOP_MAP_OFFSET;
 
+	spin_lock_irqsave(&ploop->bat_lock, flags);
 	/* Last and first index in copied page */
-	last = ploop->nr_bat_entries - off;
+	last = READ_ONCE(ploop->nr_bat_entries) - off;
 	if (last > PAGE_SIZE / sizeof(map_index_t))
 		last = PAGE_SIZE / sizeof(map_index_t);
 	i = 0;
@@ -835,7 +836,6 @@ static void ploop_advance_local_after_bat_wb(struct ploop *ploop,
 	dst_clu = kmap_local_page(piwb->bat_page);
 
 	/* holes bit map requires bat_lock */
-	spin_lock_irqsave(&ploop->bat_lock, flags);
 	spin_lock(&md->md_lock);
 #ifdef PLOOP_DELAYWB
 	if (piwb->type == PIWB_TYPE_ALLOC)
@@ -1023,7 +1023,7 @@ static int ploop_prepare_bat_update(struct ploop *ploop, struct md_page *md,
 	off -= PLOOP_MAP_OFFSET;
 
 	/* Last and first index in copied page */
-	last = ploop->nr_bat_entries - off;
+	last = READ_ONCE(ploop->nr_bat_entries) - off;
 	if (last > PAGE_SIZE / sizeof(map_index_t)) {
 		last = PAGE_SIZE / sizeof(map_index_t);
 		is_last_page = false;
@@ -2431,7 +2431,7 @@ loff_t ploop_llseek_hole(struct dm_target *ti, loff_t offset, int whence)
 	clu = SEC_TO_CLU(ploop, to_sector(offset) + ploop->skip_off);
 	id = U32_MAX;
 
-	while (clu < ploop->nr_bat_entries) {
+	while (clu < READ_ONCE(ploop->nr_bat_entries)) {
 		if (id != ploop_bat_clu_to_page_nr(clu)) {
 			id = ploop_bat_clu_to_page_nr(clu);
 			md = ploop_md_page_find(ploop, id);

