[Devel] Re: [RFC] CPU hard limits
Avi Kivity
avi at redhat.com
Thu Jun 4 23:01:50 PDT 2009
Bharata B Rao wrote:
> On Fri, Jun 05, 2009 at 01:27:55PM +0800, Balbir Singh wrote:
>
>> * Avi Kivity <avi at redhat.com> [2009-06-05 08:21:43]:
>>
>>
>>> Balbir Singh wrote:
>>>
>>>>> But then there is no other way to make a *guarantee*; guarantees come
>>>>> at the cost of idling resources, no? Can you show me any other
>>>>> combination that provides the guarantee without idling the
>>>>> system for the specified guarantees?
>>>>>
>>>>>
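The idling cost is easiest to see with concrete numbers. Below is a
minimal sketch of the guarantee-to-limit conversion being discussed; the
group names and fractions are hypothetical, not taken from the thread:

    # Guarantees as fractions of total CPU, sum(G_i) <= 1 (hypothetical values).
    guarantees = {"A": 0.5, "B": 0.3}

    # To guarantee G_i using only hard limits, every other group j must be
    # capped so that it cannot squeeze group i below G_i:
    #   limit_j = 1 - sum(G_k for k != j)
    total = sum(guarantees.values())
    limits = {g: 1 - (total - gi) for g, gi in guarantees.items()}
    print(limits)  # limits of 0.7 for A and 0.5 for B (up to float rounding)

    # The cost: if B goes idle, A is still capped at 0.7, so 0.3 of the
    # CPU idles even though A has runnable work -- the guarantee is paid
    # for with potentially idle cycles.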
>>>> OK, I see part of your concern, but I think we could do some
>>>> optimizations during design. For example, if all groups have reached
>>>> their hard limit and the system is idle, we could start a new hard-limit
>>>> interval and restart, so that the idleness is removed. Would
>>>> that be an acceptable design point?
>>>>
>>> I think so. Given guarantees G1..Gn (0 <= Gi <= 1; sum(Gi) <= 1), and a
>>> CPU hog running in each group, how would the algorithm divide resources?
>>>
>>>
>> As per the matrix calculation; but as soon as we reach an idle point,
>> we redistribute the bandwidth and start a new quantum, so to speak, in
>> which all groups are recharged up to their hard limits.
>>
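As a rough illustration of that design point, here is a toy model of
restarting the bandwidth period when an idle point is reached; the names
and data structure are hypothetical, not actual scheduler code:

    # Toy model of "restart the period at an idle point": if every runnable
    # group has exhausted its quota, recharge all groups and begin a new
    # bandwidth period early instead of idling.
    def maybe_restart_period(groups, period_ms):
        runnable = [g for g in groups if g["runnable"]]
        if runnable and all(g["used_ms"] >= g["limit"] * period_ms
                            for g in runnable):
            for g in groups:
                g["used_ms"] = 0.0
            return True   # new period started early
        return False

    groups = [
        {"name": "A", "limit": 0.7, "runnable": True,  "used_ms": 70.0},
        {"name": "B", "limit": 0.5, "runnable": False, "used_ms": 0.0},
    ]
    # A has burned its 70ms of a 100ms period and B is asleep, so the
    # period restarts rather than idling for the remaining 30ms.
    print(maybe_restart_period(groups, 100.0))  # True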
>
> But could there be client models where you are required to adhere
> strictly to the limit within the bandwidth period, and not provide more
> (by advancing the bandwidth period) in the presence of idle cycles?
>
That's the limit part. I'd like to be able to specify limits and
guarantees on the same host and for the same groups; I don't think that
works when you advance the bandwidth period.
I think we need to treat guarantees as first-class goals, not something
derived from limits (in fact, I think guarantees are more useful, as they
can be used to provide SLAs).
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.