[Devel] [PATCH RFC] fsio: filesystem io accounting cgroup

Konstantin Khlebnikov khlebnikov at openvz.org
Tue Jul 9 01:28:15 PDT 2013


Tejun Heo wrote:
> Hello, Vivek.
>
> On Mon, Jul 08, 2013 at 01:52:01PM -0400, Vivek Goyal wrote:
>>> Again, a problem to be fixed in the stack rather than patching up from
>>> up above.  The right thing to do is to propagate pressure through bdi
>>> properly and let whatever is backing the bdi generate appropriate
>>> amount of pressure, be that disk or network.
>>
>> Ok, so use network controller for controlling IO rate on NFS? I had
>> tried it once and it did not work. I think it had problems related
>> to losing the context info as IO propagated through the stack. So
>> we will have to fix that too.
>
> But that's a similar problem we have with blkcg anyway - losing the
> dirtier information by the time writeback comes down through bdi.  It
> might not be exactly the same and might need some impedance matching
> on the network side but I don't see any fundamental differences.
>
> Thanks.
>

Yep, blkio has plenty of problems and flaws, and I don't see how it relates
to the VFS layer, dirty-set control, or filesystems that are not disk-backed
or are network-backed. Any problem can be solved by introducing another
abstraction layer, except the problem of too many abstraction layers.
Cgroup is a pluggable subsystem; blkio has its own plugins and is built on top
of the I/O scheduler plugin. All of that has always worked with block devices
only. Now you suggest handling all filesystems in this stack.
I think binding them to an unrelated cgroup is a rough layering violation.
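To make the layering point concrete, here is a rough user-space model (not the
patch code, no real kernel APIs, all names hypothetical): blkio charges requests
down at the block layer, below the I/O scheduler, while a VFS-level controller
would have to charge bytes where the filesystem operation is issued, before any
block device is involved at all.

/* Illustrative user-space model only; structures and names are hypothetical. */
#include <stdio.h>
#include <stdint.h>

struct io_account {
	uint64_t bytes_written;   /* charged at the VFS layer, per cgroup */
	uint64_t requests_issued; /* charged at the block layer (blkio-style) */
};

/* VFS-level hook: sees every filesystem write, including NFS or tmpfs,
 * regardless of whether a block device ever gets involved. */
static void fsio_charge_write(struct io_account *acc, size_t bytes)
{
	acc->bytes_written += bytes;
}

/* Block-level hook: only sees requests that reach a block device, so
 * network- or memory-backed filesystems are invisible to it. */
static void blkio_charge_request(struct io_account *acc)
{
	acc->requests_issued++;
}

int main(void)
{
	struct io_account acc = {0};

	fsio_charge_write(&acc, 4096);	/* e.g. a write to an NFS file */
	/* no blkio_charge_request(): the data never hits a local disk */
	(void)blkio_charge_request;

	printf("vfs bytes=%llu, block requests=%llu\n",
	       (unsigned long long)acc.bytes_written,
	       (unsigned long long)acc.requests_issued);
	return 0;
}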

NFS cannot be controlled by network throttlers alone, because we cannot slow down
the writeback process once it is happening; we must slow down the tasks that generate
the dirty memory. Plus it's close to impossible to separate several workloads if they
share one NFS superblock.
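A toy single-threaded model of that point (user-space sketch, all names and limits
hypothetical): once memory is dirty, writeback has to run to completion to clean it,
so the only safe place to apply back-pressure is the task doing the dirtying, which
is roughly the idea behind balance_dirty_pages() for local disks.

/* User-space toy model: throttle the dirtier, never writeback.
 * Structures and the limit value are hypothetical. */
#include <stdio.h>

struct dirty_state {
	long dirty_pages;   /* pages dirtied but not yet written back */
	long dirty_limit;   /* per-cgroup cap on outstanding dirty pages */
};

/* Writeback side: must not be slowed down, otherwise dirty memory
 * can never be cleaned and the whole thing stalls. */
static void writeback_one_page(struct dirty_state *st)
{
	if (st->dirty_pages > 0)
		st->dirty_pages--;
}

/* Called by the task that dirties memory: this is where blocking is safe.
 * In the kernel the task would sleep until writeback catches up; in this
 * single-threaded toy we drive writeback directly instead of sleeping. */
static void throttle_dirtier(struct dirty_state *st)
{
	while (st->dirty_pages >= st->dirty_limit)
		writeback_one_page(st);
	st->dirty_pages++;
}

int main(void)
{
	struct dirty_state st = { .dirty_pages = 0, .dirty_limit = 4 };

	for (int i = 0; i < 10; i++)
		throttle_dirtier(&st);	/* dirtier waits once over the limit */

	printf("dirty pages left: %ld\n", st.dirty_pages);
	return 0;
}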


