[Devel] Re: [RFC][PATCH -mm 0/5] cgroup: block device i/o controller (v9)

Andrea Righi righi.andrea at gmail.com
Wed Sep 17 03:08:29 PDT 2008


Takuya Yoshikawa wrote:
> Hi,
> 
> Andrea Righi wrote:
>> TODO:
>>
>> * Try to push down the throttling and implement it directly in the I/O
>>   schedulers, using bio-cgroup (http://people.valinux.co.jp/~ryov/bio-cgroup/)
>>   to keep track of the right cgroup context. This approach could lead to more
>>   memory consumption and increase the number of dirty pages (hard/slow to
>>   reclaim pages) in the system, since the dirty-page ratio in memory is not
>>   limited. This could even lead to potential OOM conditions, but these problems
>>   can be resolved directly in the memory cgroup subsystem
>>
>> * Handle I/O generated by kswapd: at the moment there's no control on the I/O
>>   generated by kswapd; try to use the page_cgroup functionality of the memory
>>   cgroup controller to track this kind of I/O and charge the right cgroup when
>>   pages are swapped in/out
> 
> Could you explain which cgroup we should charge when swap in or out occurs?
> Are there any difference between the following cases?
> 
> Target page is
> 1. used as page cache and not mapped to any space
> 2. used as page cache and mapped to some space
> 3. not used as page cache and mapped to some space
> 
> I do not think it is fair to charge the process for this kind of I/O, am I wrong?

As a generic implementation, when a read/write request is submitted to the
I/O subsystem (i.e. in submit_bio()), look at the first page in the struct bio
and charge the I/O cost to the owner of that page. If this makes sense, we
just have to keep track of all the pages in this way when they're submitted
to the I/O subsystem; see the sketch below. Unfortunately, this doesn't seem
to work during swap_writepage(), but maybe bio-cgroup is able to handle this
case.
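
Something along these lines, just to illustrate the idea (only a rough
sketch, not code from the patch set: blkio_cgroup_from_page() and
blkio_cgroup_charge() are placeholder names for whatever page-owner
tracking, e.g. bio-cgroup / page_cgroup, ends up being used):

/*
 * Called from submit_bio(): charge the whole request to the cgroup
 * that owns the first page of the bio.
 */
static void charge_bio_to_page_owner(struct bio *bio)
{
	struct page *page;
	struct blkio_cgroup *blkcg;

	if (!bio->bi_vcnt)
		return;

	/* look at the first page of the request */
	page = bio_iovec_idx(bio, 0)->bv_page;

	/* find the cgroup that owns that page (placeholder lookup) */
	blkcg = blkio_cgroup_from_page(page);
	if (!blkcg)
		return;

	/* and account the I/O cost to it (placeholder charge) */
	blkio_cgroup_charge(blkcg, bio->bi_size, bio_data_dir(bio));
}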

-Andrea
_______________________________________________
Containers mailing list
Containers at lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers