[Devel] [PATCH RHEL7 COMMIT] ms/wait: add wq_has_single_sleeper helper

Konstantin Khorenko khorenko at virtuozzo.com
Fri Oct 14 19:50:18 MSK 2022


The commit is pushed to "branch-rh7-3.10.0-1160.76.1.vz7.189.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1160.76.1.vz7.189.4
------>
commit fe6cdcf174baa72559f3ea81df9bdb15bf10902e
Author: Josef Bacik <josef at toxicpanda.com>
Date:   Thu Sep 29 14:30:08 2022 +0300

    ms/wait: add wq_has_single_sleeper helper
    
    rq-qos sits in the io path so we want to take locks as sparingly as
    possible.  To accomplish this we try not to take the waitqueue head lock
    unless we are sure we need to go to sleep, and we have an optimization
    to make sure that we don't starve out existing waiters.  Since we check
    if there are existing waiters locklessly we need to be able to update
    our view of the waitqueue list after we've added ourselves to the
    waitqueue.  Accomplish this by adding this helper to see if there is
    more than just ourselves on the list.
    
    Reviewed-by: Oleg Nesterov <oleg at redhat.com>
    Signed-off-by: Josef Bacik <josef at toxicpanda.com>
    Signed-off-by: Jens Axboe <axboe at kernel.dk>
    
    Changes porting to vz7:
    - s/wait_queue_head/wait_queue_head_t/
    
    https://jira.sw.ru/browse/PSBM-141883
    (cherry picked from commit a6d81d30d3cd87f85bfd922358eb18b8146c4925)
    Signed-off-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
    
    =================
    Patchset description:
    blk-wbt: Fix hardlockup in wbt_done()
    
    We have a hard lockup detected in this stack:
    
     #13 [ffff9103fe603af8] __enqueue_entity at ffffffffb8ce64c5
     #14 [ffff9103fe603b00] enqueue_entity at ffffffffb8cee27a
     #15 [ffff9103fe603b50] enqueue_task_fair at ffffffffb8ceea9c
     #16 [ffff9103fe603ba0] activate_task at ffffffffb8cdd029
     #17 [ffff9103fe603bc8] ttwu_do_activate at ffffffffb8cdd491
     #18 [ffff9103fe603bf0] try_to_wake_up at ffffffffb8ce124a
     #19 [ffff9103fe603c40] default_wake_function at ffffffffb8ce1552
     #20 [ffff9103fe603c50] autoremove_wake_function at ffffffffb8ccb178
     #21 [ffff9103fe603c78] __wake_up_common at ffffffffb8cd7752
     #22 [ffff9103fe603cd0] __wake_up_common_lock at ffffffffb8cd7873
     #23 [ffff9103fe603d40] __wake_up at ffffffffb8cd78c3
     #24 [ffff9103fe603d50] __wbt_done at ffffffffb8fb6573
     #25 [ffff9103fe603d60] wbt_done at ffffffffb8fb65f2
     #26 [ffff9103fe603d80] __blk_mq_finish_request at ffffffffb8f8daa1
     #27 [ffff9103fe603db8] blk_mq_finish_request at ffffffffb8f8db6a
     #28 [ffff9103fe603dc8] blk_mq_sched_put_request at ffffffffb8f93ee0
     #29 [ffff9103fe603de8] blk_mq_end_request at ffffffffb8f8d1a4
     #30 [ffff9103fe603e08] nvme_complete_rq at ffffffffc033dcfc [nvme_core]
     #31 [ffff9103fe603e18] nvme_pci_complete_rq at ffffffffc038be70 [nvme]
     #32 [ffff9103fe603e40] __blk_mq_complete_request at ffffffffb8f8d316
     #33 [ffff9103fe603e68] blk_mq_complete_request at ffffffffb8f8d3c7
     #34 [ffff9103fe603e78] nvme_irq at ffffffffc038c0b2 [nvme]
     #35 [ffff9103fe603eb0] __handle_irq_event_percpu at ffffffffb8d66bb4
     #36 [ffff9103fe603ef8] handle_irq_event_percpu at ffffffffb8d66d62
     #37 [ffff9103fe603f28] handle_irq_event at ffffffffb8d66dec
     #38 [ffff9103fe603f50] handle_edge_irq at ffffffffb8d69c0f
     #39 [ffff9103fe603f70] handle_irq at ffffffffb8c30524
     #40 [ffff9103fe603fb8] do_IRQ at ffffffffb93d898d
    
    which is exactly the same as the Ubuntu problem reported here:
    
    https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1810998
    
    This happens because we have writeback throttling ported, and it does
    not work well in some cases.
    
    In the Launchpad bug, porting these patches from mainline helped:
    
      * CPU hard lockup with rigorous writes to NVMe drive (LP: #1810998)
        - blk-wbt: Avoid lock contention and thundering herd issue in wbt_wait
        - blk-wbt: move disable check into get_limit()
        - blk-wbt: use wq_has_sleeper() for wq active check
        - blk-wbt: fix has-sleeper queueing check
        - blk-wbt: abstract out end IO completion handler
        - blk-wbt: improve waking of tasks
    
    which fix similar lockup issues in wbt.
    
    Moreover, I've found some more small and useful patches which fix
    races (missed wakeups) in this code, so I've also included them in
    the patchset.
    
    Anchal Agarwal (1):
      blk-wbt: Avoid lock contention and thundering herd issue in wbt_wait
    
    Herbert Xu (1):
      net: Generalise wq_has_sleeper helper
    
    Jens Axboe (5):
      blk-wbt: move disable check into get_limit()
      blk-wbt: use wq_has_sleeper() for wq active check
      blk-wbt: fix has-sleeper queueing check
      blk-wbt: abstract out end IO completion handler
      blk-wbt: improve waking of tasks
    
    Josef Bacik (5):
      wait: add wq_has_single_sleeper helper
      rq-qos: fix missed wake-ups in rq_qos_throttle
      rq-qos: don't reset has_sleepers on spurious wakeups
      rq-qos: set ourself TASK_UNINTERRUPTIBLE after we schedule
      rq-qos: use a mb for got_token
    
    https://jira.sw.ru/browse/PSBM-141883
    
    Ported-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
---
 include/linux/wait.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 2cd2201fc1e4..12075edebfd6 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -123,6 +123,19 @@ static inline int waitqueue_active(wait_queue_head_t *q)
 	return !list_empty(&q->task_list);
 }
 
+/**
+ * wq_has_single_sleeper - check if there is only one sleeper
+ * @q: wait queue head
+ *
+ * Returns true if @q has only one sleeper on the list.
+ *
+ * Please refer to the comment for waitqueue_active.
+ */
+static inline bool wq_has_single_sleeper(wait_queue_head_t *q)
+{
+	return list_is_singular(&q->task_list);
+}
+
 /**
  * wq_has_sleeper - check if there are any waiting processes
  * @wq: wait queue head

