[Devel] [PATCH RHEL7 COMMIT] ms/rq-qos: fix missed wake-ups in rq_qos_throttle

Konstantin Khorenko khorenko at virtuozzo.com
Fri Oct 14 19:50:18 MSK 2022


The commit is pushed to "branch-rh7-3.10.0-1160.76.1.vz7.189.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1160.76.1.vz7.189.4
------>
commit 815294bb8cf89219b011febcd0176c1d21a9d5e3
Author: Josef Bacik <josef at toxicpanda.com>
Date:   Thu Sep 29 14:30:09 2022 +0300

    ms/rq-qos: fix missed wake-ups in rq_qos_throttle
    
    We saw a hang in production with WBT where there was only one waiter in
    the throttle path and no outstanding IO.  This is because of the
    has_sleepers optimization that is used to make sure we don't steal an
    inflight counter for new submitters when there are people already on the
    list.
    
    We can race with our check to see if the waitqueue has any waiters (this
    is done locklessly) and the time we actually add ourselves to the
    waitqueue.  If this happens we'll go to sleep and never be woken up
    because nobody is doing IO to wake us up.
    
    Fix this by checking if the waitqueue has a single sleeper on the list
    after we add ourselves; that way we have an up-to-date view of the list.
    
    Reviewed-by: Oleg Nesterov <oleg at redhat.com>
    Signed-off-by: Josef Bacik <josef at toxicpanda.com>
    Signed-off-by: Jens Axboe <axboe at kernel.dk>
    
    Changes when porting to vz7:
    - the original patch modifies block/blk-rq-qos.c:rq_qos_wait, but in vz7
      the similar hunk lives in block/blk-wbt.c:__wbt_wait (see the simplified
      sketch below)
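    
    To make the race easier to see, here is a hand-written sketch of that
    waiter path with the fix applied. Illustration only: rqw_grab_inflight(),
    struct wait_data_sketch and waiter_path_sketch() are placeholders, and
    the type names follow mainline, so the actual vz7 code differs:
    
      #include <linux/wait.h>
      #include <linux/sched.h>
    
      struct wait_data_sketch {
              struct wait_queue_entry wq;     /* wait_queue_t on 3.10-based kernels */
              bool got_token;                 /* set by the custom wake function */
      };
    
      /* Placeholder: try to take an inflight counter, roughly what
       * blk-wbt.c does with atomic_inc_below(). */
      static bool rqw_grab_inflight(struct rq_wait *rqw);
    
      static void waiter_path_sketch(struct rq_wait *rqw,
                                     struct wait_data_sketch *data)
      {
              /* Lockless peek at the waitqueue; can race with other waiters. */
              bool has_sleeper = wq_has_sleeper(&rqw->wait);
    
              /* Fast path: nobody is queued, try to take an inflight counter. */
              if (!has_sleeper && rqw_grab_inflight(rqw))
                      return;
    
              prepare_to_wait_exclusive(&rqw->wait, &data->wq,
                                        TASK_UNINTERRUPTIBLE);
              /*
               * The fix: re-check once we are actually on the list.  If we
               * are the only sleeper, the earlier peek raced and there may
               * be no in-flight IO left to wake us, so fall back to grabbing
               * a counter ourselves instead of sleeping forever.
               */
              has_sleeper = !wq_has_single_sleeper(&rqw->wait);
    
              do {
                      if (data->got_token)
                              break;
                      if (!has_sleeper && rqw_grab_inflight(rqw))
                              break;
                      io_schedule();
                      has_sleeper = false;
              } while (1);
              finish_wait(&rqw->wait, &data->wq);
      }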
    
    https://jira.sw.ru/browse/PSBM-141883
    (cherry picked from commit 545fbd0775bafcefc8f7bc844291bd13c44b7fdc)
    Signed-off-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
    
    =================
    Patchset description:
    blk-wbt: Fix hardlockup in wbt_done()
    
    We have detected a hard lockup with this stack:
    
     #13 [ffff9103fe603af8] __enqueue_entity at ffffffffb8ce64c5
     #14 [ffff9103fe603b00] enqueue_entity at ffffffffb8cee27a
     #15 [ffff9103fe603b50] enqueue_task_fair at ffffffffb8ceea9c
     #16 [ffff9103fe603ba0] activate_task at ffffffffb8cdd029
     #17 [ffff9103fe603bc8] ttwu_do_activate at ffffffffb8cdd491
     #18 [ffff9103fe603bf0] try_to_wake_up at ffffffffb8ce124a
     #19 [ffff9103fe603c40] default_wake_function at ffffffffb8ce1552
     #20 [ffff9103fe603c50] autoremove_wake_function at ffffffffb8ccb178
     #21 [ffff9103fe603c78] __wake_up_common at ffffffffb8cd7752
     #22 [ffff9103fe603cd0] __wake_up_common_lock at ffffffffb8cd7873
     #23 [ffff9103fe603d40] __wake_up at ffffffffb8cd78c3
     #24 [ffff9103fe603d50] __wbt_done at ffffffffb8fb6573
     #25 [ffff9103fe603d60] wbt_done at ffffffffb8fb65f2
     #26 [ffff9103fe603d80] __blk_mq_finish_request at ffffffffb8f8daa1
     #27 [ffff9103fe603db8] blk_mq_finish_request at ffffffffb8f8db6a
     #28 [ffff9103fe603dc8] blk_mq_sched_put_request at ffffffffb8f93ee0
     #29 [ffff9103fe603de8] blk_mq_end_request at ffffffffb8f8d1a4
     #30 [ffff9103fe603e08] nvme_complete_rq at ffffffffc033dcfc [nvme_core]
     #31 [ffff9103fe603e18] nvme_pci_complete_rq at ffffffffc038be70 [nvme]
     #32 [ffff9103fe603e40] __blk_mq_complete_request at ffffffffb8f8d316
     #33 [ffff9103fe603e68] blk_mq_complete_request at ffffffffb8f8d3c7
     #34 [ffff9103fe603e78] nvme_irq at ffffffffc038c0b2 [nvme]
     #35 [ffff9103fe603eb0] __handle_irq_event_percpu at ffffffffb8d66bb4
     #36 [ffff9103fe603ef8] handle_irq_event_percpu at ffffffffb8d66d62
     #37 [ffff9103fe603f28] handle_irq_event at ffffffffb8d66dec
     #38 [ffff9103fe603f50] handle_edge_irq at ffffffffb8d69c0f
     #39 [ffff9103fe603f70] handle_irq at ffffffffb8c30524
     #40 [ffff9103fe603fb8] do_IRQ at ffffffffb93d898d
    
    which is exactly the same as the Ubuntu problem here:
    
    https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1810998
    
    This happens because we have writeback throttling ported, and it does not
    work well in some cases.
    
    In the Launchpad bug it helped to port these patches from mainline:
    
      * CPU hard lockup with rigorous writes to NVMe drive (LP: #1810998)
        - blk-wbt: Avoid lock contention and thundering herd issue in wbt_wait
        - blk-wbt: move disable check into get_limit()
        - blk-wbt: use wq_has_sleeper() for wq active check
        - blk-wbt: fix has-sleeper queueing check
        - blk-wbt: abstract out end IO completion handler
        - blk-wbt: improve waking of tasks
    
    which fix similar lockup issues in wbt.
    
    Moreover, I've found a few more small but useful patches which fix races
    (missed wakeups) in this code, so I've also included them in the patchset.
    
    Anchal Agarwal (1):
      blk-wbt: Avoid lock contention and thundering herd issue in wbt_wait
    
    Herbert Xu (1):
      net: Generalise wq_has_sleeper helper
    
    Jens Axboe (5):
      blk-wbt: move disable check into get_limit()
      blk-wbt: use wq_has_sleeper() for wq active check
      blk-wbt: fix has-sleeper queueing check
      blk-wbt: abstract out end IO completion handler
      blk-wbt: improve waking of tasks
    
    Josef Bacik (5):
      wait: add wq_has_single_sleeper helper
      rq-qos: fix missed wake-ups in rq_qos_throttle
      rq-qos: don't reset has_sleepers on spurious wakeups
      rq-qos: set ourself TASK_UNINTERRUPTIBLE after we schedule
      rq-qos: use a mb for got_token
    
    https://jira.sw.ru/browse/PSBM-141883
    
    Ported-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
---
 block/blk-wbt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 49d11e089c97..5477c3ffe7a7 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -571,6 +571,7 @@ static void __wbt_wait(struct rq_wb *rwb, enum wbt_flags wb_acct,
 		return;
 
 	prepare_to_wait_exclusive(&rqw->wait, &data.wq, TASK_UNINTERRUPTIBLE);
+	has_sleeper = !wq_has_single_sleeper(&rqw->wait);
 	do {
 		if (data.got_token)
 			break;
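
For reference, the wq_has_single_sleeper() call added above comes from the
"wait: add wq_has_single_sleeper helper" patch earlier in this series.  In
mainline the helper is essentially a one-liner along these lines (the vz7
port is expected to differ in details, e.g. the 3.10 wait_queue_head_t keeps
its waiter list in task_list rather than head):

  static inline bool wq_has_single_sleeper(struct wait_queue_head *wq_head)
  {
          return list_is_singular(&wq_head->head);
  }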

