[Devel] [PATCH RHEL9 COMMIT] dm-ploop: convert the rest of the lists to use llist variant
Konstantin Khorenko
khorenko@virtuozzo.com
Mon Jan 27 16:12:34 MSK 2025
The commit is pushed to "branch-rh9-5.14.0-427.44.1.vz9.80.x-ovz" and will appear at git@bitbucket.org:openvz/vzkernel.git
after rh9-5.14.0-427.44.1.vz9.80.6
------>
commit 28bedecf659cf96899de252977052d0578d0ad78
Author: Alexander Atanasov <alexander.atanasov@virtuozzo.com>
Date: Fri Jan 24 17:35:42 2025 +0200
dm-ploop: convert the rest of the lists to use llist variant
Use llist and remove deferred_lock around pio dispatching.
https://virtuozzo.atlassian.net/browse/VSTOR-91820
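As a rough illustration of the pattern these lists are converted to (a
minimal sketch with a hypothetical "item" type, not the real ploop
structures): producers push entries with llist_add() and need no lock,
while the single consumer drains the whole list with llist_del_all()
and restores FIFO order with llist_reverse_order() before processing.

#include <linux/llist.h>
#include <linux/slab.h>

struct item {                           /* hypothetical queued element */
        struct llist_node lnode;
        int payload;
};

static LLIST_HEAD(pending);             /* shared lock-free list */

/* Producer: may run concurrently from any context, no lock taken. */
static void item_queue(struct item *it)
{
        llist_add(&it->lnode, &pending);
}

/* Consumer: a single worker detaches everything in one atomic op. */
static void items_drain(void)
{
        struct llist_node *list, *pos, *t;

        list = llist_del_all(&pending);
        list = llist_reverse_order(list);  /* llist is LIFO; restore FIFO */
        llist_for_each_safe(pos, t, list) {
                struct item *it = llist_entry(pos, struct item, lnode);

                /* process "it" here, then release it */
                kfree(it);
        }
}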
Signed-off-by: Alexander Atanasov <alexander.atanasov@virtuozzo.com>
======
Patchset description:
ploop: optimizations and scaling
Ploop processes requests in different threads in parallel
where possible, which results in a significant improvement
in performance and makes further optimizations possible.
Known bugs:
- delayed metadata writeback is not working and is missing error
  handling; a patch in the series disables it until fixed
- fast path is not working (it causes RCU lockups); a patch in the
  series disables it
Further improvements:
- optimize md pages lookups
Alexander Atanasov (50):
dm-ploop: md_pages map all pages at creation time
dm-ploop: Use READ_ONCE/WRITE_ONCE to access md page data
dm-ploop: fsync after all pios are sent
dm-ploop: move md status to use proper bitops
dm-ploop: convert wait_list and wb_batch_llist to use lockless lists
dm-ploop: convert enospc handling to use lockless lists
dm-ploop: convert suspended_pios list to use lockless list
dm-ploop: convert the rest of the lists to use llist variant
dm-ploop: combine processing of pios thru prepare list and remove
fsync worker
dm-ploop: move from wq to kthread
dm-ploop: move preparations of pios into the caller from worker
dm-ploop: fast path execution for reads
dm-ploop: do not use a wrapper for set_bit to make a page writeback
dm-ploop: BAT use only one list for writeback
dm-ploop: make md writeback timeout to be per page
dm-ploop: add interface to disable bat writeback delay
dm-ploop: convert wb_batch_list to lockless variant
dm-ploop: convert high_prio to status
dm-ploop: split cow processing into two functions
dm-ploop: convert md page rw lock to spin lock
dm-ploop: convert bat_rwlock to bat_lock spinlock
dm-ploop: prepare bat updates under bat_lock
dm-ploop: make ploop_bat_write_complete ready for parallel pio
completion
dm-ploop: make ploop_submit_metadata_writeback return number of
requests sent
dm-ploop: introduce pio runner threads
dm-ploop: add pio list ids to be used when passing pios to runners
dm-ploop: process pios via runners
dm-ploop: disable metadata writeback delay
dm-ploop: disable fast path
dm-ploop: use lockless lists for chained cow updates list
dm-ploop: use lockless lists for data ready pios
dm-ploop: give runner threads better name
dm-ploop: resize operation - add holes bitmap locking
dm-ploop: remove unnecessary operations
dm-ploop: use filp per thread
dm-ploop: catch if we try to advance pio past bio end
dm-ploop: support REQ_FUA for data pios
dm-ploop: proplerly access nr_bat_entries
dm-ploop: fix locking and improve error handling when submitting pios
dm-ploop: fix how ENOTBLK is handled
dm-ploop: sync when suspended or stopping
dm-ploop: rework bat completion logic
dm-ploop: rework logic in pio processing
dm-ploop: end fsync pios in parallel
dm-ploop: make filespace preallocations async
dm-ploop: resubmit enospc pios from dispatcher thread
dm-ploop: dm-ploop: simplify discard completion
dm-ploop: use GFP_ATOMIC instead of GFP_NOIO
dm-ploop: fix locks used in mixed context
dm-ploop: fix how current flags are managed inside threads
Andrey Zhadchenko (13):
dm-ploop: do not flush after metadata writes
dm-ploop: set IOCB_DSYNC on all FUA requests
dm-ploop: remove extra ploop_cluster_is_in_top_delta()
dm-ploop: introduce per-md page locking
dm-ploop: reduce BAT accesses on discard completion
dm-ploop: simplify llseek
dm-ploop: speed up ploop_prepare_bat_update()
dm-ploop: make new allocations immediately visible in BAT
dm-ploop: drop ploop_cluster_is_in_top_delta()
dm-ploop: do not wait for BAT update for non-FUA requests
dm-ploop: add delay for metadata writeback
dm-ploop: submit all postponed metadata on REQ_OP_FLUSH
dm-ploop: handle REQ_PREFLUSH
Feature: dm-ploop: ploop target driver
---
drivers/md/dm-ploop-map.c | 123 +++++++++++++++++++++++++------------------
drivers/md/dm-ploop-target.c | 6 +--
drivers/md/dm-ploop.h | 4 +-
3 files changed, 78 insertions(+), 55 deletions(-)
diff --git a/drivers/md/dm-ploop-map.c b/drivers/md/dm-ploop-map.c
index bdc2ea214242..28244755f3ce 100644
--- a/drivers/md/dm-ploop-map.c
+++ b/drivers/md/dm-ploop-map.c
@@ -343,9 +343,8 @@ ALLOW_ERROR_INJECTION(ploop_split_pio_to_list, ERRNO);
static void ploop_dispatch_pio(struct ploop *ploop, struct pio *pio,
bool *is_data, bool *is_flush)
{
- struct list_head *list = &ploop->pios[pio->queue_list_id];
+ struct llist_head *list = (struct llist_head *)&ploop->pios[pio->queue_list_id];
- lockdep_assert_held(&ploop->deferred_lock);
WARN_ON_ONCE(pio->queue_list_id >= PLOOP_LIST_COUNT);
if (pio->queue_list_id == PLOOP_LIST_FLUSH)
@@ -353,23 +352,20 @@ static void ploop_dispatch_pio(struct ploop *ploop, struct pio *pio,
else
*is_data = true;
- list_add_tail(&pio->list, list);
+ llist_add((struct llist_node *)(&pio->list), list);
}
void ploop_dispatch_pios(struct ploop *ploop, struct pio *pio,
struct list_head *pio_list)
{
bool is_data = false, is_flush = false;
- unsigned long flags;
- spin_lock_irqsave(&ploop->deferred_lock, flags);
if (pio)
ploop_dispatch_pio(ploop, pio, &is_data, &is_flush);
if (pio_list) {
while ((pio = ploop_pio_list_pop(pio_list)) != NULL)
ploop_dispatch_pio(ploop, pio, &is_data, &is_flush);
}
- spin_unlock_irqrestore(&ploop->deferred_lock, flags);
if (is_data)
queue_work(ploop->wq, &ploop->worker);
@@ -815,7 +811,7 @@ static void ploop_advance_local_after_bat_wb(struct ploop *ploop,
wait_llist_pending = llist_reverse_order(wait_llist_pending);
llist_for_each_safe(pos, t, wait_llist_pending) {
pio = list_entry((struct list_head *)pos, typeof(*pio), list);
- list_add(&pio->list, &list);
+ list_add_tail(&pio->list, &list);
}
}
@@ -1155,13 +1151,10 @@ static void ploop_attach_end_action(struct pio *pio, struct ploop_index_wb *piwb
static void ploop_queue_resubmit(struct pio *pio)
{
struct ploop *ploop = pio->ploop;
- unsigned long flags;
pio->queue_list_id = PLOOP_LIST_INVALID;
- spin_lock_irqsave(&ploop->deferred_lock, flags);
- list_add_tail(&pio->list, &ploop->resubmit_pios);
- spin_unlock_irqrestore(&ploop->deferred_lock, flags);
+ llist_add((struct llist_node *)(&pio->list), &ploop->llresubmit_pios);
queue_work(ploop->wq, &ploop->worker);
}
@@ -1417,20 +1410,21 @@ static void ploop_submit_cow_index_wb(struct ploop_cow *cow)
}
static void ploop_process_delta_cow(struct ploop *ploop,
- struct list_head *cow_list)
+ struct llist_node *cow_llist)
{
struct ploop_cow *cow;
struct pio *aux_pio;
+ struct llist_node *pos, *t;
- if (list_empty(cow_list))
- return;
-
- while ((aux_pio = ploop_pio_list_pop(cow_list)) != NULL) {
+ llist_for_each_safe(pos, t, cow_llist) {
+ aux_pio = list_entry((struct list_head *)pos, typeof(*aux_pio), list);
cow = aux_pio->endio_cb_data;
if (unlikely(aux_pio->bi_status != BLK_STS_OK)) {
ploop_complete_cow(cow, aux_pio->bi_status);
continue;
}
+ /* until type is changed */
+ INIT_LIST_HEAD(&aux_pio->list);
if (cow->dst_clu == BAT_ENTRY_NONE) {
/*
@@ -1686,13 +1680,17 @@ static void ploop_prepare_one_embedded_pio(struct ploop *ploop,
}
static void ploop_prepare_embedded_pios(struct ploop *ploop,
- struct list_head *pios,
+ struct llist_node *pios,
struct list_head *deferred_pios)
{
struct pio *pio;
+ struct llist_node *pos, *t;
- while ((pio = ploop_pio_list_pop(pios)) != NULL)
+ llist_for_each_safe(pos, t, pios) {
+ pio = list_entry((struct list_head *)pos, typeof(*pio), list);
+ INIT_LIST_HEAD(&pio->list); /* until type is changed */
ploop_prepare_one_embedded_pio(ploop, pio, deferred_pios);
+ }
}
static void ploop_process_deferred_pios(struct ploop *ploop,
@@ -1755,21 +1753,28 @@ static void ploop_process_one_discard_pio(struct ploop *ploop, struct pio *pio)
}
static void ploop_process_discard_pios(struct ploop *ploop,
- struct list_head *pios)
+ struct llist_node *discpios)
{
struct pio *pio;
+ struct llist_node *pos, *t;
- while ((pio = ploop_pio_list_pop(pios)) != NULL)
+ llist_for_each_safe(pos, t, discpios) {
+ pio = list_entry((struct list_head *)pos, typeof(*pio), list);
+ INIT_LIST_HEAD(&pio->list);
ploop_process_one_discard_pio(ploop, pio);
+ }
}
static void ploop_process_resubmit_pios(struct ploop *ploop,
- struct list_head *pios)
+ struct llist_node *llpios)
{
struct pio *pio;
+ struct llist_node *pos, *t;
- while ((pio = ploop_pio_list_pop(pios)) != NULL) {
+ llist_for_each_safe(pos, t, llpios) {
+ pio = list_entry((struct list_head *)pos, typeof(*pio), list);
pio->queue_list_id = PLOOP_LIST_INVALID;
+ INIT_LIST_HEAD(&pio->list); /* Until type is changed */
ploop_submit_rw_mapped(ploop, pio);
}
}
@@ -1795,19 +1800,21 @@ static void ploop_submit_metadata_writeback(struct ploop *ploop)
static void process_ploop_fsync_work(struct ploop *ploop)
{
- LIST_HEAD(flush_pios);
struct file *file;
struct pio *pio;
int ret;
+ struct llist_node *llflush_pios;
+ struct llist_node *pos, *t;
- spin_lock_irq(&ploop->deferred_lock);
- list_splice_init(&ploop->pios[PLOOP_LIST_FLUSH], &flush_pios);
- spin_unlock_irq(&ploop->deferred_lock);
+ llflush_pios = llist_del_all(&ploop->pios[PLOOP_LIST_FLUSH]);
file = ploop_top_delta(ploop)->file;
+ /* All flushes are done as one */
ret = vfs_fsync(file, 0);
- while ((pio = ploop_pio_list_pop(&flush_pios)) != NULL) {
+ llist_for_each_safe(pos, t, llflush_pios) {
+ pio = list_entry((struct list_head *)pos, typeof(*pio), list);
+ INIT_LIST_HEAD(&pio->list); /* Until type is changed */
if (unlikely(ret)) {
pio->bi_status = errno_to_blk_status(ret);
if (static_branch_unlikely(&ploop_standby_check))
@@ -1820,32 +1827,54 @@ static void process_ploop_fsync_work(struct ploop *ploop)
void do_ploop_work(struct work_struct *ws)
{
struct ploop *ploop = container_of(ws, struct ploop, worker);
- LIST_HEAD(embedded_pios);
LIST_HEAD(deferred_pios);
- LIST_HEAD(discard_pios);
- LIST_HEAD(cow_pios);
- LIST_HEAD(resubmit_pios);
+ struct llist_node *llembedded_pios;
+ struct llist_node *lldeferred_pios;
+ struct llist_node *lldiscard_pios;
+ struct llist_node *llcow_pios;
+ struct llist_node *llresubmit;
bool do_fsync = false;
unsigned int old_flags = current->flags;
current->flags |= PF_IO_THREAD|PF_LOCAL_THROTTLE|PF_MEMALLOC_NOIO;
spin_lock_irq(&ploop->deferred_lock);
- list_splice_init(&ploop->pios[PLOOP_LIST_PREPARE], &embedded_pios);
- list_splice_init(&ploop->pios[PLOOP_LIST_DEFERRED], &deferred_pios);
- list_splice_init(&ploop->pios[PLOOP_LIST_DISCARD], &discard_pios);
- list_splice_init(&ploop->pios[PLOOP_LIST_COW], &cow_pios);
- list_splice_init(&ploop->resubmit_pios, &resubmit_pios);
- if (!list_empty(&ploop->pios[PLOOP_LIST_FLUSH]))
+ llembedded_pios = llist_del_all(&ploop->pios[PLOOP_LIST_PREPARE]);
+ lldeferred_pios = llist_del_all(&ploop->pios[PLOOP_LIST_DEFERRED]);
+ lldiscard_pios = llist_del_all(&ploop->pios[PLOOP_LIST_DISCARD]);
+ llcow_pios = llist_del_all(&ploop->pios[PLOOP_LIST_COW]);
+ llresubmit = llist_del_all(&ploop->llresubmit_pios);
+
+ if (!llist_empty(&ploop->pios[PLOOP_LIST_FLUSH]))
do_fsync = true;
+
spin_unlock_irq(&ploop->deferred_lock);
- ploop_prepare_embedded_pios(ploop, &embedded_pios, &deferred_pios);
+ /* add old deferred to the list */
+ if (lldeferred_pios) {
+ struct llist_node *pos, *t;
+ struct pio *pio;
+
+ llist_for_each_safe(pos, t, llist_reverse_order(lldeferred_pios)) {
+ pio = list_entry((struct list_head *)pos, typeof(*pio), list);
+ INIT_LIST_HEAD(&pio->list);
+ list_add_tail(&pio->list, &deferred_pios);
+ }
+ }
+
+ ploop_prepare_embedded_pios(ploop, llembedded_pios, &deferred_pios);
+
+ if (llresubmit)
+ ploop_process_resubmit_pios(ploop, llist_reverse_order(llresubmit));
+
- ploop_process_resubmit_pios(ploop, &resubmit_pios);
ploop_process_deferred_pios(ploop, &deferred_pios);
- ploop_process_discard_pios(ploop, &discard_pios);
- ploop_process_delta_cow(ploop, &cow_pios);
+
+ if (lldiscard_pios)
+ ploop_process_discard_pios(ploop, llist_reverse_order(lldiscard_pios));
+
+ if (llcow_pios)
+ ploop_process_delta_cow(ploop, llist_reverse_order(llcow_pios));
ploop_submit_metadata_writeback(ploop);
@@ -1868,7 +1897,6 @@ static void ploop_submit_embedded_pio(struct ploop *ploop, struct pio *pio)
struct request *rq = prq->rq;
struct work_struct *worker;
unsigned long flags;
- bool queue = true;
if (blk_rq_bytes(rq)) {
pio->queue_list_id = PLOOP_LIST_PREPARE;
@@ -1879,20 +1907,15 @@ static void ploop_submit_embedded_pio(struct ploop *ploop, struct pio *pio)
worker = &ploop->fsync_worker;
}
- spin_lock_irqsave(&ploop->deferred_lock, flags);
if (unlikely(ploop->stop_submitting_pios)) {
llist_add((struct llist_node *)(&pio->list), &ploop->llsuspended_pios);
- queue = false;
- goto unlock;
+ return;
}
ploop_inc_nr_inflight(ploop, pio);
- list_add_tail(&pio->list, &ploop->pios[pio->queue_list_id]);
-unlock:
- spin_unlock_irqrestore(&ploop->deferred_lock, flags);
+ llist_add((struct llist_node *)(&pio->list), &ploop->pios[pio->queue_list_id]);
- if (queue)
- queue_work(ploop->wq, worker);
+ queue_work(ploop->wq, worker);
}
void ploop_submit_embedded_pios(struct ploop *ploop, struct list_head *list)
diff --git a/drivers/md/dm-ploop-target.c b/drivers/md/dm-ploop-target.c
index 94d6f661f50c..f12c6912f8d0 100644
--- a/drivers/md/dm-ploop-target.c
+++ b/drivers/md/dm-ploop-target.c
@@ -140,7 +140,7 @@ static bool ploop_has_pending_activity(struct ploop *ploop)
spin_lock_irq(&ploop->deferred_lock);
for (i = 0; i < PLOOP_LIST_COUNT; i++)
- has |= !list_empty(&ploop->pios[i]);
+ has |= !llist_empty(&ploop->pios[i]);
spin_unlock_irq(&ploop->deferred_lock);
return has;
@@ -373,9 +373,9 @@ static int ploop_ctr(struct dm_target *ti, unsigned int argc, char **argv)
init_llist_head(&ploop->llsuspended_pios);
for (i = 0; i < PLOOP_LIST_COUNT; i++)
- INIT_LIST_HEAD(&ploop->pios[i]);
+ init_llist_head(&ploop->pios[i]);
- INIT_LIST_HEAD(&ploop->resubmit_pios);
+ init_llist_head(&ploop->llresubmit_pios);
init_llist_head(&ploop->enospc_pios);
INIT_LIST_HEAD(&ploop->cluster_lk_list);
diff --git a/drivers/md/dm-ploop.h b/drivers/md/dm-ploop.h
index 676476bbf655..0cd18c0c7bfa 100644
--- a/drivers/md/dm-ploop.h
+++ b/drivers/md/dm-ploop.h
@@ -195,9 +195,9 @@ struct ploop {
spinlock_t inflight_lock;
spinlock_t deferred_lock;
- struct list_head pios[PLOOP_LIST_COUNT];
+ struct llist_head pios[PLOOP_LIST_COUNT];
- struct list_head resubmit_pios; /* After partial IO */
+ struct llist_head llresubmit_pios; /* After partial IO */
struct llist_head enospc_pios; /* Delayed after ENOSPC */
atomic_t service_pios;
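A note on the (struct llist_node *)(&pio->list) casts and the "until
type is changed" INIT_LIST_HEAD() calls above: struct pio still embeds
a struct list_head, and the cast relies on ->next being the first
member of both structures, so the llist code only ever touches that
single word while a pio sits on a lockless list; INIT_LIST_HEAD() then
re-initialises both pointers before the pio goes back onto a regular
list. A minimal userspace approximation of that layout assumption
(hypothetical pio_like type, simplified stand-ins for the kernel
structures):

#include <stdio.h>
#include <stddef.h>

struct llist_node { struct llist_node *next; };
struct list_head { struct list_head *next, *prev; };

struct pio_like {                       /* hypothetical, mirrors the embedded list */
        struct list_head list;
        int payload;
};

int main(void)
{
        struct pio_like p = { .payload = 1 };

        /* Viewing the list_head as an llist_node is safe because ->next
         * is the first member of both, so only that word is shared. */
        struct llist_node *n = (struct llist_node *)&p.list;

        printf("offsetof(list_head, next) = %zu\n",
               offsetof(struct list_head, next));
        printf("llist view %p aliases list_head %p\n",
               (void *)n, (void *)&p.list);
        return 0;
}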