[Devel] [PATCH RHEL9 COMMIT] drivers/vhost: adjust vhost to flush all workers

Konstantin Khorenko khorenko at virtuozzo.com
Wed Nov 16 17:05:27 MSK 2022


The commit is pushed to "branch-rh9-5.14.0-70.22.1.vz9.17.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh9-5.14.0-70.22.1.vz9.17.9
------>
commit c9470f867ce167f48ade4a955d64ea33b6b617a6
Author: Andrey Zhadchenko <andrey.zhadchenko at virtuozzo.com>
Date:   Fri Nov 11 12:55:49 2022 +0300

    drivers/vhost: adjust vhost to flush all workers
    
    Make vhost_work_dev_flush support several workers and flush
    them simultaneously
    
    https://jira.sw.ru/browse/PSBM-139414
    Signed-off-by: Andrey Zhadchenko <andrey.zhadchenko at virtuozzo.com>
    
    ======
    Patchset description:
    vhost-blk: in-kernel accelerator for virtio-blk guests
    
    Although QEMU virtio-blk is quite fast, there is still some room for
    improvement. Disk latency can be reduced if we handle virtio-blk
    requests in the host kernel, so we avoid a lot of syscalls and context
    switches.
    The idea is quite simple - QEMU gives us a block device and we translate
    any incoming virtio requests into bios and push them into the bdev,
    roughly as in the sketch below.
    The biggest disadvantage of this vhost-blk flavor is the raw format
    requirement.
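
    For illustration only, the per-request path can be pictured roughly as
    in the following sketch. The function and parameter names here are
    hypothetical and heavily simplified; this is not the actual vhost_blk
    code:

        #include <linux/bio.h>
        #include <linux/blkdev.h>

        /* Hypothetical sketch: turn one guest data segment into a bio and
         * submit it to the backing block device (bi_end_io setup omitted). */
        static int vhost_blk_submit_seg(struct block_device *bdev, int op,
                                        sector_t sector, struct page *page,
                                        unsigned int len, unsigned int off)
        {
                struct bio *bio;

                bio = bio_alloc(GFP_KERNEL, 1);
                if (!bio)
                        return -ENOMEM;

                bio_set_dev(bio, bdev);
                bio->bi_iter.bi_sector = sector;
                bio->bi_opf = op;       /* REQ_OP_READ or REQ_OP_WRITE */

                if (!bio_add_page(bio, page, len, off)) {
                        bio_put(bio);
                        return -EIO;
                }

                submit_bio(bio);
                return 0;
        }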
    
    Luckily, Kirill Thai has proposed a device mapper driver for the QCOW2
    format to attach files as block devices:
    https://www.spinics.net/lists/kernel/msg4292965.html
    
    Also, by using kernel modules we can bypass the iothread limitation and
    finally scale block requests with the number of CPUs for
    high-performance devices.
    
    There have already been several attempts to write vhost-blk:
    
    Asias'   version:       https://lkml.org/lkml/2012/12/1/174
    Badari's version:       https://lwn.net/Articles/379864/
    Vitaly's version:       https://lwn.net/Articles/770965/
    
    The main difference between them is the API used to access the backend
    file. The fastest one is Asias's version with the bio flavor. It is also
    the most reviewed and has the most features, so the vhost_blk module is
    partially based on it. Multiple virtqueue support was added and some
    places were reworked. Support for several vhost workers was also added.
    
    test setup and results:
      fio --direct=1 --rw=randread  --bs=4k  --ioengine=libaio --iodepth=128
    QEMU drive options: cache=none
    filesystem: xfs
    
    SSD:
                   | randread, IOPS  | randwrite, IOPS |
    Host           |      95.8k      |      85.3k      |
    QEMU virtio    |      57.5k      |      79.4k      |
    QEMU vhost-blk |      95.6k      |      84.3k      |
    
    RAMDISK (vq == vcpu):
                     | randread, IOPS | randwrite, IOPS |
    virtio, 1vcpu    |      123k      |      129k       |
    virtio, 2vcpu    |      253k (??) |      250k (??)  |
    virtio, 4vcpu    |      158k      |      154k       |
    vhost-blk, 1vcpu |      110k      |      113k       |
    vhost-blk, 2vcpu |      247k      |      252k       |
    vhost-blk, 8vcpu |      497k      |      469k       | *single kernel thread
    vhost-blk, 8vcpu |      730k      |      701k       | *two kernel threads
    
    v2:
    
    patch 1/10
     - removed unused VHOST_BLK_VQ
     - reworked bio handling a bit: now add all pages from a single iov into
       a single bio instead of allocating one bio per page
     - changed how to calculate sector incrementation
     - check move_iovec() in vhost_blk_req_handle()
     - removed the snprintf check and check the return value of copy_to_iter
       more carefully for VIRTIO_BLK_ID_BYTES requests
     - discard vq request if vhost_blk_req_handle() returned negative code
     - forbid changing a nonzero backend in vhost_blk_set_backend(). First
       of all, QEMU sets the backend only once. Also, if we want to change
       the backend while requests are already running, we need to be much
       more careful in vhost_blk_handle_guest_kick() as it does not take any
       references. If userspace wants to change the backend that badly, it
       can always reset the device.
     - removed EXPERIMENTAL from Kconfig
    
    patch 3/10
     - don't bother with checking dev->workers[0].worker since dev->nworkers
       will always contain 0 in this case
    
    patch 6/10
     - Make the code do what the docs suggest. Previously the ioctl-supplied
       new number of workers was treated as an amount that should be added.
       Use the new number as a ceiling instead and add workers up to that
       number (see the sketch after this item).
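
    A minimal sketch of the ceiling semantics, reusing the VHOST_MAX_WORKERS
    limit and dev->nworkers counter from this series; the function and
    vhost_worker_create() helper names are hypothetical, and the real ioctl
    handler lives in a later patch:

        #include "vhost.h"      /* drivers/vhost/vhost.h */

        /* Hypothetical sketch: treat the requested count as a total,
         * not an increment, and only ever grow the worker set. */
        static int vhost_set_nworkers_sketch(struct vhost_dev *dev, int requested)
        {
                int i, ret;

                if (requested > VHOST_MAX_WORKERS)
                        return -EINVAL;

                for (i = dev->nworkers; i < requested; i++) {
                        ret = vhost_worker_create(dev, i); /* hypothetical helper */
                        if (ret)
                                return ret;
                        dev->nworkers = i + 1;
                }
                return 0;
        }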
    
    v3:
    patch 1/10
     - reworked bio handling a bit - now create a new bio only if the
       previous one is full
    
    patch 2/10
     - set vq->worker = NULL in vhost_vq_reset()
    
    v4:
    patch 1/10
     - vhost_blk_req_done() now won't hide errors for multi-bio requests
     - vhost_blk_prepare_req() now better estimates bio_len
     - alloc bio for max pages_nr_total pages instead of nr_pages
     - added new ioctl VHOST_BLK_SET_SERIAL to set serial
     - reworked the flush algorithm a bit - now use two bins, "new req" and
       "for flush", and swap them at the start of the flush (see the sketch
       after this list)
     - moved the backing file dereference to vhost_blk_req_submit(), after
       the request has been added to the flush bin, to avoid a race with
       vhost_blk_release(). Now even if we dropped the backend and started a
       flush, the request will either be tracked by the flush or be rolled
       back
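
    A rough sketch of the two-bin flush, with hypothetical structure and
    field names (the actual request tracking in vhost_blk may differ):

        #include <linux/list.h>
        #include <linux/spinlock.h>
        #include <linux/wait.h>

        struct vhost_blk_sketch {
                spinlock_t req_lock;
                struct list_head new_reqs;      /* requests since last flush */
                struct list_head flush_reqs;    /* requests this flush waits on */
                wait_queue_head_t flush_wq;
        };

        /* Hypothetical sketch: new requests are linked on new_reqs; at the
         * start of a flush the bins are swapped, so the flush only waits
         * for requests that were already submitted when it began. */
        static void vhost_blk_flush_sketch(struct vhost_blk_sketch *blk)
        {
                spin_lock(&blk->req_lock);
                list_splice_init(&blk->new_reqs, &blk->flush_reqs);
                spin_unlock(&blk->req_lock);

                /* the completion path empties flush_reqs and wakes flush_wq */
                wait_event(blk->flush_wq, list_empty(&blk->flush_reqs));
        }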
    
    patch 2/10
     - moved vq->worker = NULL to patch #7 where this field is introduced.
    
    patch 7/10
     - Set vq->worker = NULL in vhost_vq_reset. This will fix both
       https://jira.sw.ru/browse/PSBM-142058
       https://jira.sw.ru/browse/PSBM-142852
    
    v5:
    patch 1/10
     - several codestyle/spacing fixes
     - added WARN_ON() for vhost_blk_flush
    
    https://jira.sw.ru/browse/PSBM-139414
    Reviewed-by: Pavel Tikhomirov <ptikhomirov at virtuozzo.com>
    
    Andrey Zhadchenko (10):
      drivers/vhost: vhost-blk accelerator for virtio-blk guests
      drivers/vhost: use array to store workers
      drivers/vhost: adjust vhost to flush all workers
      drivers/vhost: rework attaching cgroups to be worker aware
      drivers/vhost: rework worker creation
      drivers/vhost: add ioctl to increase the number of workers
      drivers/vhost: assign workers to virtqueues
      drivers/vhost: add API to queue work at virtqueue worker
      drivers/vhost: allow polls to be bound to workers via vqs
      drivers/vhost: queue vhost_blk works at vq workers
    
    Feature: vhost-blk: in-kernel accelerator for virtio-blk guests
---
 drivers/vhost/vhost.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 968601325a37..bcdbbec06ddf 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -246,15 +246,19 @@ static void vhost_work_queue_at_worker(struct vhost_worker *w,
 
 void vhost_work_dev_flush(struct vhost_dev *dev)
 {
-	struct vhost_flush_struct flush;
+	struct vhost_flush_struct flush[VHOST_MAX_WORKERS];
+	int i, nworkers;
 
-	if (dev->workers[0].worker) {
-		init_completion(&flush.wait_event);
-		vhost_work_init(&flush.work, vhost_flush_work);
+	nworkers = READ_ONCE(dev->nworkers);
 
-		vhost_work_queue(dev, &flush.work);
-		wait_for_completion(&flush.wait_event);
+	for (i = 0; i < nworkers; i++) {
+		init_completion(&flush[i].wait_event);
+		vhost_work_init(&flush[i].work, vhost_flush_work);
+		vhost_work_queue_at_worker(&dev->workers[i], &flush[i].work);
 	}
+
+	for (i = 0; i < nworkers; i++)
+		wait_for_completion(&flush[i].wait_event);
 }
 EXPORT_SYMBOL_GPL(vhost_work_dev_flush);
 

