[Devel] Re: [patch 0/4] [RFC] Another proportional weight IO controller

Vivek Goyal vgoyal at redhat.com
Thu Nov 13 14:13:04 PST 2008


On Thu, Nov 13, 2008 at 10:58:34AM -0500, Vivek Goyal wrote:
> On Thu, Nov 13, 2008 at 06:05:58PM +0900, Ryo Tsuruta wrote:
> > Hi,
> > 
> > From: vgoyal at redhat.com
> > Subject: [patch 0/4] [RFC] Another proportional weight IO controller 
> > Date: Thu, 06 Nov 2008 10:30:22 -0500
> > 
> > > Hi,
> > > 
> > > If you are not already tired of so many IO controller implementations, here
> > > is another one.
> > > 
> > > This is a very early, very crude implementation, thrown out to get early
> > > feedback on whether this approach makes sense or not.
> > > 
> > > This controller is a proportional weight IO controller, primarily based on
> > > and inspired by dm-ioband. One thing I personally found a little odd about
> > > dm-ioband was the need for a dm-ioband device on top of every device we want
> > > to control. I thought we could probably make this control per request queue
> > > and get rid of the device mapper driver. That should make the configuration
> > > aspect easier.
> > > 
> > > I have picked up a fair amount of code from dm-ioband, especially for the
> > > biocgroup implementation.
> > > 
> > > I have done only very basic testing so far: running 2-3 dd commands in
> > > different cgroups on x86_64. I wanted to throw the code out early to get
> > > some feedback.
> > > 
> > > More details about the design and usage are in the documentation patch.
> > > 
> > > Your comments are welcome.
> > 
> > Do you have any benchmark results?
> > I'm especially interested in the following:
> > - Comparison of disk performance with and without the I/O controller patch.
> 
> If I dynamically disable the bio control, I do not observe any impact on
> performance, because in that case it practically boils down to just an
> additional variable check in __make_request().
> 

Oh, I understood your question wrong: you are asking what the performance
penalty is when the IO controller is enabled on a device.

I have not done any extensive benchmarking. If I run two dd commands without
the controller, I get 80 MB/s from the disk (roughly 40 MB/s for each task).
With the bio group enabled (default token=2000), I was getting a total
bandwidth of roughly 68 MB/s.
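
Roughly, the test was along the following lines; treat the mount invocation
and file names as illustrative rather than exact (the bio.shares/tasks layout
matches the example from your mail, quoted below):

     # illustrative sketch only: mount options and paths are assumed,
     # not copied verbatim from the patch
     mount -t cgroup -o bio none /cgroup/bio
     mkdir /cgroup/bio/test1 /cgroup/bio/test2

     echo $$ > /cgroup/bio/test1/tasks
     dd if=/somefile1 of=/dev/null bs=1M &
     echo $$ > /cgroup/bio/test2/tasks
     dd if=/somefile2 of=/dev/null bs=1M &
     wait   # ~40 MB/s per dd without the controller,
            # ~68 MB/s total with bio control enabled (token=2000)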

I have not done any performance analysis or optimization at this point. I
plan to do that once we have some common understanding about a particular
approach. There are so many IO controllers floating around; right now I am
more concerned with whether we can all converge on a common platform.

Thanks
Vivek

> > - Uneven I/O loads: processes belonging to a cgroup with a smaller weight
> >   than another cgroup put a heavier I/O load, like the following.
> > 
> >      echo 1024 > /cgroup/bio/test1/bio.shares
> >      echo 8192 > /cgroup/bio/test2/bio.shares
> > 
> >      echo $$ > /cgroup/bio/test1/tasks
> >      dd if=/somefile1-1 of=/dev/null &
> >      dd if=/somefile1-2 of=/dev/null &
> >      ... 
> >      dd if=/somefile1-100 of=/dev/null &
> >      echo $$ > /cgroup/bio/test2/tasks
> >      dd if=/somefile2-1 of=/dev/null &
> >      dd if=/somefile2-2 of=/dev/null &
> >      ...
> >      dd if=/somefile2-10 of=/dev/null &
> 
> I have not tried this case.
> 
> Ryo, do you still want to stick to two-level scheduling? Given the problem
> of it breaking the underlying scheduler's assumptions, it probably makes
> more sense to do the IO control in each individual IO scheduler.
> 
> I have had a very brief look at BFQ's hierarchical proportional
> weight/priority IO control and it looks good. Maybe we can adopt it for
> the other IO schedulers as well.
> 
> Thanks
> Vivek