[Devel] [PATCH rh7 1/3] vmscan: fix slab vs lru balance
Konstantin Khorenko
khorenko at virtuozzo.com
Thu Nov 26 08:07:33 PST 2015
Kirill, please review.
--
Best regards,
Konstantin Khorenko,
Virtuozzo Linux Kernel Team
On 11/25/2015 06:42 PM, Vladimir Davydov wrote:
> The comment to shrink_slab states that the portion of kmem objects
> scanned by it equals the portion of lru pages scanned by shrink_zone
> over shrinker->seeks.
>
> shrinker->seeks is supposed to be equal to the number of disk seeks
> required to recreate an object. It is usually set to DEFAULT_SEEKS (2),
> which is quite logical, because most kmem objects (e.g. dentry or inode)
> require random IO to reread (seek to read and seek back).
>
> Given that, one would expect dcache to be scanned half as intensively
> as page cache, which sounds sane, because dentries are generally more
> costly to recreate than page cache pages.
>
> However, the formula for distributing memory pressure between slab and
> lru actually looks as follows (see do_shrink_slab):
>
>                                 lru_scanned
>  objs_to_scan = objs_total * ----------------- * 4 / shrinker->seeks
>                               lru_reclaimable
>
> That is, dcache, like most other slab caches, is actually scanned
> twice as aggressively as page cache.
>
> Fix this by dropping '4' from the equation above.
>
> Signed-off-by: Vladimir Davydov <vdavydov at virtuozzo.com>
> ---
> mm/vmscan.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9b1c73bb465c..eadc08b7a5d6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -262,7 +262,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
>
> total_scan = nr;
> - delta = (4 * nr_scanned) / shrinker->seeks;
> + delta = nr_scanned / shrinker->seeks;
> delta *= max_pass;
> do_div(delta, nr_eligible + 1);
> total_scan += delta;
>