[Devel] Re: [Test Result] I/O bandwidth Control by dm-ioband - partition-based environment

Ryo Tsuruta ryov at valinux.co.jp
Sun Aug 30 19:12:33 PDT 2009


Hi Dong-Jae,

Thanks for testing.
Could you do the same test without dm-ioband? I would like to know
the throughput of your disk drive and the difference with and without
dm-ioband.
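
As a rough sketch of such a baseline run, fio could be pointed at the
raw partitions directly, with no ioband device in between. The device
names and fio parameters below are assumptions for illustration, not
taken from your result file:

    # baseline: buffered sequential writes straight to the partitions
    # (no dm-ioband mapping); note this overwrites data on sdb2/sdb3,
    # and bs/size are only placeholders
    fio --name=sdb2 --filename=/dev/sdb2 --rw=write --bs=4k \
        --size=1g --ioengine=sync &
    fio --name=sdb3 --filename=/dev/sdb3 --rw=write --bs=4k \
        --size=1g --ioengine=sync &

    # sample per-partition throughput once per second while fio runs
    iostat -k -p sdb 1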

Thanks,
Ryo Tsuruta

Dong-Jae Kang <baramsori72 at gmail.com> wrote:
> Hi Ryo
> I attached a new file that includes the total I/O bandwidth of the
> evaluation system.
> We measured the total bandwidth of the weight policy for I/O in the Dom0
> and DomU systems, through the iostat tool and the dm-ioband debug patch
> which I gave you several months ago.
> Of course, the result in the prior report was measured with the dm-ioband
> debug patch.
> 
> As a result, the big difference in the prior report derives from the
> location where we measured the I/O bandwidth:
> iostat counts it at the application level, while the dm-ioband debug patch
> does it in the dm-ioband controller.
> I think the difference is related to the buffer cache.
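
One way to confirm the buffer-cache explanation is to repeat the run with
direct I/O, so that both measuring points see the same request stream. A
minimal sketch, where the device name, block size and size are only
assumed placeholders:

    # same workload, but with O_DIRECT so writes bypass the page cache;
    # if the gap between the iostat and dm-ioband numbers disappears,
    # the earlier difference was buffered data still in the cache
    fio --name=direct-sdb2 --filename=/dev/sdb2 --rw=write --bs=4k \
        --size=1g --ioengine=libaio --iodepth=8 --direct=1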
> 
> Thank you.
> Have a nice weekend
> 
> 2009/8/27 Dong-Jae Kang <baramsori72 at gmail.com>
> 
> > Hi Ryo
> >
> > 2009/8/27 Ryo Tsuruta <ryov at valinux.co.jp>
> >
> > Hi Dong-Jae,
> >>
> >> # I've added dm-devel to Cc:.
> >>
> >> Dong-Jae Kang <baramsori72 at gmail.com> wrote:
> >> > Hi Ryo
> >> >
> >> > I attached a new test result file (ioband-partition-based-evaluation.xls)
> >> > to this mail.
> >>
> >> Thanks for your great job.
> >>
> >> > This time it is not a virtualization environment.
> >> > I evaluated partition-based use cases before doing it in a virtualization
> >> > environment, because I think the two cases are similar to each other.
> >> >
> >> > The detailed information about the evaluation can be found in the
> >> > attached file.
> >> >
> >> > If you have any questions or comments after examining it,
> >> > please give me your opinion.
> >>
> >> I would like to know the throughput without dm-ioband in your
> >> environment, because the total throughput of the range-bw policy is
> >> 8000KB/s, which means the device is capable of performing over
> >> 8000KB/s, yet the total throughput of the weight policy is lower than
> >> that of the range-bw policy. In my environment, there is no significant
> >> difference in average throughput with and without dm-ioband.
> >> I ran fio in the way described in your result file. Here are the
> >> results from my environment. The throughputs were calculated from
> >> "iostat -k 1" outputs.
> >>
> >>            buffered write test
> >>           Avg. throughput [KB/s]
> >>        w/o ioband    w/ ioband
> >> sdb2         14485         5788
> >> sdb3         12494        22295
> >> total        26979        28030
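
For reference, the averages above can be derived from the iostat output
roughly as follows; the column position of kB_wrtn/s is an assumption
about the iostat output format, so it may need adjusting:

    # sample for 60 one-second intervals while fio runs, then average
    # (the first sample reports totals since boot and could be dropped)
    iostat -k -p sdb 1 60 > iostat.log
    awk '$1 == "sdb2" { sum += $4; n++ }
         END { if (n) printf "sdb2 avg: %.0f KB/s\n", sum / n }' iostat.log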
> >>
> >
> > OK, good comments.
> > I omitted the total bandwidth of the evaluation system.
> >
> > I will reply to you about it tomorrow after I check and re-test it.
> >
> >>
> >> Thanks,
> >> Ryo Tsuruta
> >>
> >
> > Thank you for comments.
> >
> >
> > --
> > Best Regards,
> > Dong-Jae Kang
> >
> 
> 
> 
> -- 
> Best Regards,
> Dong-Jae Kang
_______________________________________________
Containers mailing list
Containers at lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers



