[Devel] [PATCH RHEL7 COMMIT] ms/vmscan: fix increasing nr_isolated incurred by putback unevictable pages
Konstantin Khorenko
khorenko at virtuozzo.com
Thu Jul 13 18:40:43 MSK 2017
The commit is pushed to "branch-rh7-3.10.0-514.26.1.vz7.33.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-514.26.1.vz7.33.6
------>
commit 7df75aaf78f00e8759ebea070470a44dd4ade20d
Author: Jaewon Kim <jaewon31.kim at samsung.com>
Date: Thu Jul 13 19:40:42 2017 +0400
ms/vmscan: fix increasing nr_isolated incurred by putback unevictable pages
commit c54839a722a02818677bcabe57e957f0ce4f841d upstream.
reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list. But shrink_page_list()
puts mlocked pages back on the LRU directly, without passing them to the
caller and without counting them as nr_reclaimed, so nr_isolated keeps
growing.
To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller. The caller will take care of those pages.
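To illustrate the accounting the fix restores, here is a simplified
sketch (abridged from mainline mm/vmscan.c of that era, not the exact
vzkernel source) of how reclaim_clean_pages_from_list() balances
NR_ISOLATED against the return value of shrink_page_list():

unsigned long reclaim_clean_pages_from_list(struct zone *zone,
                                            struct list_head *page_list)
{
        LIST_HEAD(clean_pages);
        unsigned long ret;

        /* clean file pages are moved to clean_pages (details omitted) */
        ...

        ret = shrink_page_list(&clean_pages, zone, &sc, ...);

        /* pages left on the list are handed back to the caller */
        list_splice(&clean_pages, page_list);

        /*
         * Only reclaimed pages are subtracted here; the spliced-back
         * pages are subtracted later, when they are put back on the
         * LRU.  Before this fix, mlocked pages took neither path:
         * shrink_page_list() put them back itself without counting
         * them, so NR_ISOLATED_FILE stayed elevated.
         */
        mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);
        return ret;
}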
Minchan said:
It fixes two issues.
1. With unevictable pages, cma_alloc() will succeed.
Strictly speaking, cma_alloc() in the current kernel can fail due to
unevictable pages.
2. It fixes the leak of the NR_ISOLATED vmstat counter.
With it, too_many_isolated() works correctly; otherwise the leak could
cause a hang until the process gets SIGKILL (see the sketch after this
quote).
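The hang mentioned in the second point comes from the throttling done in
direct reclaim. A simplified sketch of mainline too_many_isolated()
(the real function also scales "inactive" by the allocation's gfp_mask)
shows why a leaked counter stalls reclaimers:

static int too_many_isolated(struct zone *zone, int file,
                             struct scan_control *sc)
{
        unsigned long inactive, isolated;

        if (current_is_kswapd())
                return 0;

        inactive = zone_page_state(zone, file ? NR_INACTIVE_FILE
                                              : NR_INACTIVE_ANON);
        isolated = zone_page_state(zone, file ? NR_ISOLATED_FILE
                                              : NR_ISOLATED_ANON);

        /*
         * shrink_inactive_list() loops on this check with a
         * congestion_wait() in between.  If NR_ISOLATED_* leaks and
         * never drains, the check stays true and the task spins here
         * until it gets SIGKILL.
         */
        return isolated > inactive;
}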
Signed-off-by: Jaewon Kim <jaewon31.kim at samsung.com>
Acked-by: Minchan Kim <minchan at kernel.org>
Cc: Mel Gorman <mgorman at techsingularity.net>
Acked-by: Vlastimil Babka <vbabka at suse.cz>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Jiri Slaby <jslaby at suse.cz>
Applied in the scope of
https://jira.sw.ru/browse/PSBM-68029
Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1b4471e..7b24537 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1147,7 +1147,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
if (PageSwapCache(page))
try_to_free_swap(page);
unlock_page(page);
- putback_lru_page(page);
+ list_add(&page->lru, &ret_pages);
continue;
activate_locked:
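For context (abridged from the tail of shrink_page_list(), not part of
the diff): everything collected on ret_pages is spliced back onto the
caller's list before returning, which is how unevictable pages now
reach the caller instead of being silently put back:

        /* tail of shrink_page_list(), abridged */
        list_splice(&ret_pages, page_list);
        count_vm_events(PGACTIVATE, pgactivate);
        return nr_reclaimed;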