[Devel] [PATCH 2/3] mm: vmscan: get rid of DEFAULT_SEEKS and document shrink_slab logic
Vladimir Davydov
vdavydov at parallels.com
Thu Feb 6 10:51:21 PST 2014
On 02/06/2014 12:52 AM, Andrew Morton wrote:
> On Wed, 5 Feb 2014 11:16:49 +0400 Vladimir Davydov <vdavydov at parallels.com> wrote:
>
>>> So why did I originally make DEFAULT_SEEKS=2? Because I figured that to
>>> recreate (say) an inode would require a seek to the inode data then a
>>> seek back. Is it legitimate to include the
>>> seek-back-to-what-you-were-doing-before seek in the cost of an inode
>>> reclaim? I guess so...
>> Hmm, that explains this 2. Since we typically don't need to "seek back"
>> when recreating a cache page, as they are usually read in bunches by
>> readahead, the number of seeks to bring back a user page is 1, while the
>> number of seeks to recreate an average inode is 2, right?
> Sounds right to me.
>
>> Then to scan inodes and user pages so that they would generate
>> approximately the same number of seeks, we should calculate the number
>> of objects to scan as follows:
>>
>>   nr_objects_to_scan = nr_pages_scanned / lru_pages *
>>                        nr_freeable_objects / shrinker->seeks
>>
>> where shrinker->seeks = DEFAULT_SEEKS = 2 for inodes.
> hm, I wonder if we should take the size of the object into account.
> Should we be maximizing (memory-reclaimed / seeks-to-reestablish-it)?
I'm not sure I understand you correctly. You mean that if two slab
caches have object sizes of 1k and 2k and both need 2 seeks to recreate
an object, we should scan one of them (the 1k cache? or the 2k one?)
more aggressively than the other? Hmm... I don't know. It depends on
what we want to achieve. But that wouldn't balance the seeks, which is
our goal for now, IIUC.
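
To make the seek-balancing goal a bit more concrete, here is a rough
sketch of how I think the scan target would be computed under that
scheme. The helper and its arguments are made up for illustration only,
this is not the actual vmscan code:

	/*
	 * Sketch only: how many objects of a cache to scan so that the
	 * seeks needed to recreate them roughly match the pressure we put
	 * on the LRU lists.
	 *
	 * nr_pages_scanned / lru_pages is the fraction of user pages we
	 * scanned; apply the same fraction to the freeable objects of the
	 * cache, scaled down by the number of seeks it takes to recreate
	 * one object (DEFAULT_SEEKS == 2 for inodes vs 1 seek for a
	 * readahead-backed page cache page).
	 */
	static unsigned long objects_to_scan(unsigned long nr_pages_scanned,
					     unsigned long lru_pages,
					     unsigned long nr_freeable_objects,
					     int seeks)
	{
		unsigned long long delta;

		delta = (unsigned long long)nr_pages_scanned * nr_freeable_objects;
		do_div(delta, lru_pages + 1);	/* +1 guards against division by zero */
		do_div(delta, seeks);
		return delta;
	}
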
>> But currently we
>> have four times that. I can explain why we should multiply this by 2 -
>> we do not count pages moving from active to inactive lrus in
>> nr_pages_scanned, and 2*nr_pages_scanned can be a good approximation for
>> that - but I have no idea why we multiply it by 4...
> I don't understand this code at all:
>
> total_scan = nr;
> delta = (4 * nr_pages_scanned) / shrinker->seeks;
> delta *= freeable;
> do_div(delta, lru_pages + 1);
> total_scan += delta;
>
> If it actually makes any sense, it sorely sorely needs documentation.
To find its roots I had to checkout the linux history tree:
commit c3f4656118a78c1c294e0b4d338ac946265a822b
Author: Andrew Morton <akpm at osdl.org>
Date:   Mon Dec 29 23:48:44 2003 -0800

    [PATCH] shrink_slab acounts for seeks incorrectly

    wli points out that shrink_slab inverts the sense of shrinker->seeks:
    those caches which require more seeks to reestablish an object are
    shrunk harder.  That's wrong - they should be shrunk less.

    So fix that up, but scaling the result so that the patch is actually
    a no-op at this time, because all caches use DEFAULT_SEEKS (2).
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b8594827bbac..f2da3c9fb346 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -154,7 +154,7 @@ static int shrink_slab(long scanned, unsigned int gfp_mask)
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
 
-		delta = scanned * shrinker->seeks;
+		delta = 4 * (scanned / shrinker->seeks);
 		delta *= (*shrinker->shrinker)(0, gfp_mask);
 		do_div(delta, pages + 1);
 		shrinker->nr += delta;
So the idea was to fix the inverted use of shrinker->seeks without
introducing any functional change: with DEFAULT_SEEKS=2, the old
"scanned * shrinker->seeks" and the new "4 * (scanned / shrinker->seeks)"
both evaluate to 2*scanned, so the "4" is only there to preserve the old
magnitude. We have been living with this "4" ever since, although by
itself it means nothing. Nobody complained though.
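
FWIW, if we keep the current code, here is how I would document it. The
comments below only reflect my reading of the logic (the role of "nr"
in particular), so please take them as a sketch rather than the final
wording:

	/*
	 * total_scan starts from the work deferred by previous invocations
	 * of this shrinker, i.e. objects we were asked to scan earlier but
	 * could not.
	 */
	total_scan = nr;

	/*
	 * The new pressure is proportional to the pressure put on the LRU
	 * lists: scan the same fraction of freeable objects as the fraction
	 * of lru pages that was scanned, divided by the cost in seeks of
	 * recreating one object (shrinker->seeks).
	 *
	 * The "4" has no meaning of its own: it merely cancels out
	 * DEFAULT_SEEKS being 2, so that for a default shrinker this boils
	 * down to delta = 2 * nr_pages_scanned * freeable / lru_pages
	 * (see the 2003 commit above).
	 */
	delta = (4 * nr_pages_scanned) / shrinker->seeks;
	delta *= freeable;
	do_div(delta, lru_pages + 1);
	total_scan += delta;
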
Thanks.
> David, you touched it last. Any hints?