[Devel] [PATCH RHEL8 COMMIT] kstat: Make kstat_glob::swap_in percpu

Konstantin Khorenko khorenko at virtuozzo.com
Fri Apr 30 14:01:05 MSK 2021


The commit is pushed to "branch-rh8-4.18.0-240.1.1.vz8.5.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh8-4.18.0-240.1.1.vz8.5.24
------>
commit c30a7bebb0e73add1e4ef77d74165f44aee0f57b
Author: Kirill Tkhai <ktkhai at virtuozzo.com>
Date:   Fri Apr 30 14:01:05 2021 +0300

    kstat: Make kstat_glob::swap_in percpu
    
    Patchset description:
    Make kstat_glob::swap_in percpu and cleanup
    
    This patchset continues moving statistics away from
    kstat_glb_lock and makes swap_in percpu. It also drops
    the newly unused primitives and reduces memory usage by
    using a single global percpu seqcount (instead of a
    separate percpu seqcount for every kstat percpu variable).
    
    Kirill Tkhai (4):
          kstat: Make kstat_glob::swap_in percpu
          kstat: Drop global kstat_lat_struct
          kstat: Drop cpu argument in KSTAT_LAT_PCPU_ADD()
          kstat: Make global percpu kstat_pcpu_seq instead of percpu seq for every
                 variable
    
    ==========================================
    This patch description:
    
    Using a global lock is bad for scalability. Better to make
    swap_in percpu, so that it is updated locklessly like the
    other statistics (e.g., page_in).
    
    Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>
    
    Ported to vz8:
     - Dropped all patches of the set except this one, since the rest is already
       partially included
     - Introduced the "start" variable in do_swap_page() to use it for
       kstat_glob.swap_in
    
    (cherry picked from vz7 commit ed033a381e01 ("kstat: Make kstat_glob::swap_in
    percpu"))
    Signed-off-by: Andrey Zhadchenko <andrey.zhadchenko at virtuozzo.com>
    Reviewed-by: Kirill Tkhai <ktkhai at virtuozzo.com>
---
 mm/memory.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index b64f317ac313..3a483796dea4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3037,7 +3037,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	int locked;
 	int exclusive = 0;
 	vm_fault_t ret = 0;
+	cycles_t start;
 
+	start = get_cycles();
 	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
 		goto out;
 
@@ -3226,6 +3228,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
+	local_irq_disable();
+	KSTAT_LAT_PCPU_ADD(&kstat_glob.swap_in, get_cycles() - start);
+	local_irq_enable();
+
 	return ret;
 out_nomap:
 	mem_cgroup_cancel_charge(page, memcg, false);
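
For readers unfamiliar with the vzkernel kstat helpers, here is a minimal
sketch of the lockless percpu accumulation pattern this patch relies on.
The structure layout and the KSTAT_LAT_PCPU_ADD() body below are assumptions
for illustration only; the actual definitions live in the vzkernel headers
and may differ.

/* Sketch only: assumed layout of a percpu latency counter. */
struct kstat_lat_pcpu_snap_struct {
	u64 count;	/* samples accumulated on this CPU */
	u64 totlat;	/* sum of latencies, in cycles */
	u64 maxlat;	/* worst latency seen on this CPU */
};

struct kstat_lat_pcpu_struct {
	struct kstat_lat_pcpu_snap_struct __percpu *cur;
};

/*
 * Assumed shape of KSTAT_LAT_PCPU_ADD(): the update touches only this
 * CPU's counters, so no global lock (kstat_glb_lock) is taken.  Callers
 * disable interrupts around the call, as do_swap_page() does in the hunk
 * above, so the read-modify-write cannot be interleaved with an interrupt
 * handler updating the same counters on this CPU.
 */
#define KSTAT_LAT_PCPU_ADD(p, dur)					\
	do {								\
		struct kstat_lat_pcpu_snap_struct *cpu_lat;		\
									\
		cpu_lat = this_cpu_ptr((p)->cur);			\
		cpu_lat->count++;					\
		cpu_lat->totlat += (dur);				\
		if ((dur) > cpu_lat->maxlat)				\
			cpu_lat->maxlat = (dur);			\
	} while (0)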

