[Devel] Re: [RFC] CPU hard limits

Avi Kivity avi at redhat.com
Thu Jun 4 22:16:21 PDT 2009


Balbir Singh wrote:

>>> How? It works out fine in my calculation.
>>>
>>> 50 + 40 for G2 and G3 makes sure that G1 gets 10%, since the others are
>>> limited to 90%.
>>> 50 + 40 for G1 and G3 makes sure that G2 gets 10%, since the others are
>>> limited to 90%.
>>> 50 + 50 for G1 and G2 makes sure that G3 gets 0%, since the others are
>>> limited to 100%.
>>>   
>>>       
>> It's fine in that it satisfies the guarantees, but it is deeply  
>> suboptimal.  If I ran a cpu hog in the first group, while the other two  
>> were idle, it would be limited to 50% cpu.  On the other hand, if it  
>> consumed all 100% cpu it would still satisfy the guarantees (as the  
>> other groups are idle).
>>
>> The result is that in such a situation, wall clock time would double  
>> even though cpu resources are available.
>>     
>
> But then there is no other way to make a *guarantee*; guarantees come
> at the cost of idling resources, no? Can you show me any other
> combination that provides the specified guarantees without idling the
> system?
>   

Suppose, in my example, cgroup 1 consumed 100% of the cpu resources and 
cgroups 2 and 3 were completely idle.  All of the guarantees are met (if 
cgroup 2 is idle, there's no need to give it the 10% cpu time it is 
guaranteed).
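
To make that point concrete, here is a toy sketch of the arithmetic 
(plain C, nothing kernel-specific; the 50/50/40 limits are the ones 
quoted above, and the demand vector is my hypothetical hog-plus-idle 
scenario):

#include <stdio.h>

#define NGROUPS 3

int main(void)
{
	/* hard limits from the quoted example, in percent of total cpu */
	int limit[NGROUPS] = { 50, 50, 40 };
	/* hypothetical demand: group 0 is a cpu hog, groups 1 and 2 idle */
	int demand[NGROUPS] = { 100, 0, 0 };
	int i, j;

	for (i = 0; i < NGROUPS; i++) {
		/* guarantee implied by limiting everyone else: whatever
		 * the others cannot take is left over for this group */
		int others = 0, guarantee, got;

		for (j = 0; j < NGROUPS; j++)
			if (j != i)
				others += limit[j];
		guarantee = 100 - others;

		/* what the group actually gets under pure limits: never
		 * more than its own limit, even if the rest are idle */
		got = demand[i] < limit[i] ? demand[i] : limit[i];

		printf("group %d: guaranteed %d%%, wants %d%%, gets %d%%\n",
		       i, guarantee, demand[i], got);
	}
	return 0;
}

It prints "group 0: guaranteed 10%, wants 100%, gets 50%": the guarantee 
holds, but half the machine sits idle under a runnable task, which is 
the doubled wall clock time I mentioned above.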

If your only tool for achieving the guarantees is a limit system, then 
yes, the equation yields correct limits.  But given that the resulting 
behaviour is so inferior, I think we need to look for a more involved 
solution.

I think the limits method fits cases where it is difficult to evict a 
resource (say, disk quotas -- if you want to guarantee 10% of space to 
cgroup 1, you must limit all others to 90%).  But for processor usage, 
you can evict a cgroup instantly, so nothing prevents a cgroup from 
consuming all available resources as long as others do not contend for them.
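
To sketch the kind of more involved solution I mean -- not a real 
scheduler, just the allocation idea: satisfy the guarantees first, then 
hand any spare cpu to whoever still wants it, since cpu can be taken 
back at the next scheduling decision:

#include <stdio.h>

#define NGROUPS 3

int main(void)
{
	int guarantee[NGROUPS] = { 10, 10, 0 };	/* from the example */
	int demand[NGROUPS] = { 100, 0, 0 };	/* hog plus two idle groups */
	int alloc[NGROUPS] = { 0, 0, 0 };
	int spare = 100;
	int i;

	/* pass 1: every group gets the guaranteed share it actually wants */
	for (i = 0; i < NGROUPS; i++) {
		alloc[i] = demand[i] < guarantee[i] ? demand[i] : guarantee[i];
		spare -= alloc[i];
	}

	/* pass 2: leftover cpu goes to groups that still have demand
	 * (a real implementation would split this by weight; first-fit
	 * keeps the sketch short) */
	for (i = 0; i < NGROUPS && spare > 0; i++) {
		int want = demand[i] - alloc[i];
		int extra = want < spare ? want : spare;

		alloc[i] += extra;
		spare -= extra;
	}

	for (i = 0; i < NGROUPS; i++)
		printf("group %d: gets %d%%\n", i, alloc[i]);
	return 0;
}

Here group 0 gets the full 100%, and the guarantees still hold, because 
the moment group 1 or 2 wakes up its share can be reclaimed immediately.  
A static limit cannot give you both at once.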

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

_______________________________________________
Containers mailing list
Containers at lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers