[Devel] [PATCH rh7 v2] mm/memcg: fix cache growth above cache.limit_in_bytes

Evgenii Shatokhin eshatokhin at virtuozzo.com
Thu Jul 30 18:57:24 MSK 2020


On 30.07.2020 17:00, Denis V. Lunev wrote:
> On 7/30/20 4:58 PM, Andrey Ryabinin wrote:
>> Exceeding cache.limit_in_bytes schedules high_work_func(), which tries to
>> reclaim only 32 pages. If the cache is generated fast enough, this allows
>> the cgroup to grow steadily above cache.limit_in_bytes because we don't
>> reclaim enough. Try to reclaim the exceeded amount of cache instead.
>>
>> https://jira.sw.ru/browse/PSBM-106384
>> Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
>> ---
>>
>> Changes since v1: add bug link to changelog
>>
>>   mm/memcontrol.c | 10 +++++++---
>>   1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 3cf200f506c3..e5adb0e81cbb 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -3080,12 +3080,16 @@ static void reclaim_high(struct mem_cgroup *memcg,
>>   {
>>   
>>   	do {
>> +		unsigned long cache_overused;
>> +
>>   		if (page_counter_read(&memcg->memory) > memcg->high)
>>   			try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, 0);
>>   
>> -		if (page_counter_read(&memcg->cache) > memcg->cache.limit)
>> -			try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask,
>> -						MEM_CGROUP_RECLAIM_NOSWAP);
>> +		cache_overused = page_counter_read(&memcg->cache) -
>> +			memcg->cache.limit;
>> +		if ((long)cache_overused > 0)
>> +			try_to_free_mem_cgroup_pages(memcg, cache_overused,
>> +					gfp_mask, MEM_CGROUP_RECLAIM_NOSWAP);
>>   
>>   	} while ((memcg = parent_mem_cgroup(memcg)));
>>   }
> Can we run some testing and, after that, create a custom RK so that it can
> be checked with HostEurope on Monday?

1. Which kernel version(s)?

2. Would it be enough to prepare the live patch as a .ko file and load it
manually ('kpatch load'), or would an RPM package be preferable?

If you plan to use the fix on more than one node, I think an RPM
package is easier to use. For a single node, a *.ko file would be enough.



