[Devel] Re: Write Error: No space left on device

Mukesh G mukgbv at gmail.com
Tue Oct 28 22:43:23 PDT 2008


Hi Dhaval,
     Back to the drawing board for me...

I have expressed my example in a diagram to help me understand better.

Parent container / (cpu.shares = 1024; gets the complete CPU bandwidth)
|
|-C2048 (container)
| |-A1  (process)
| |-A2  (process)
|
|-C1024 (container)
| |-B1  (process)
| |-B2  (process)
|
|-T1 (task in /)
|-T2 (task in /)


A1:A2:B1:B2:T1:T2 = 2:2:1:1:1:1 (Is my understanding right? If so, then my
question about hierarchical priority still stands, especially for the parent
container's own processes. Is this by design?)

A1:B1 or A1:T1 = 2:1
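
To make sure I have your earlier explanation straight, I have spelled out the
two-level arithmetic for the /, A (A1, A2), B, t1, t2 example from your mail
quoted below as a tiny C sketch. This is only toy arithmetic, not the kernel's
code, and it assumes that a task at nice 0 counts with the same weight as the
default cpu.shares value of 1024:

/*
 * Toy arithmetic only, not the kernel's code: it just spells out the
 * two-level split for the example in the quoted mail (/ containing
 * group A with tasks A1 and A2, group B, and tasks t1 and t2, all
 * with shares 1024).  Assumption on my side: a task at nice 0 counts
 * with the same weight as the default cpu.shares value of 1024.
 *
 * build: gcc -O2 shares-sketch.c -o shares-sketch
 */
#include <stdio.h>

int main(void)
{
	double A = 1024, B = 1024, t1 = 1024, t2 = 1024;
	double root_total = A + B + t1 + t2;

	/* Level 1: A, B, t1 and t2 compete for the whole machine. */
	double bw_A  = A  / root_total;
	double bw_B  = B  / root_total;
	double bw_t1 = t1 / root_total;
	double bw_t2 = t2 / root_total;

	/* Level 2: A1 and A2 split whatever bandwidth A received. */
	double bw_A1 = bw_A / 2;
	double bw_A2 = bw_A / 2;

	printf("A=%.2f B=%.2f t1=%.2f t2=%.2f -> A1=%.3f A2=%.3f\n",
	       bw_A, bw_B, bw_t1, bw_t2, bw_A1, bw_A2);
	return 0;
}

If that reading is correct, my question above is really whether T1 and T2 in
my diagram are treated the same way as t1 and t2 here.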

In the single-threaded case, it makes sense that a process cannot exceed 100%
CPU utilization. Hence T1 can take up to 100%, and A1 can likewise go up to
100% but not beyond.

In the multi-threaded case, a process can potentially take more than 100% CPU
utilization (in my dual-core case, both cores); in that case, I would expect
the above ordering to apply, with A1 getting more CPU time than B1 or T1.
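
In case it helps to have something concrete to rerun, below is a minimal
stand-in for that workload. My actual test used a multi-threaded Java program,
so this pthread spinner is only an approximation, and the cgroup path in the
comment is just wherever the cpu controller happens to be mounted on your box:

/*
 * Minimal stand-in for the multi-threaded workload (not the Java
 * program I actually ran): it spins NTHREADS busy loops so that a
 * single process can use more than one core.  One copy would go into
 * each container, e.g. with
 *     echo $PID > <cpu-controller-mountpoint>/C2048/tasks
 *
 * build: gcc -O2 -pthread spin.c -o spin
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 2	/* one spinner per core on my dual-core box */

static void *spin(void *unused)
{
	volatile unsigned long x = 0;

	(void)unused;
	for (;;)		/* burn CPU until the process is killed */
		x++;
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++) {
		if (pthread_create(&tid[i], NULL, spin, NULL)) {
			perror("pthread_create");
			exit(1);
		}
	}
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}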

Thanks

Mukesh


On Tue, Oct 28, 2008 at 5:42 PM, Dhaval Giani <dhaval at linux.vnet.ibm.com> wrote:

> On Tue, Oct 28, 2008 at 03:37:52PM +0530, Mukesh G wrote:
> > Hi Dhaval,
> >     Thanks for clarifying some doubts. Further to your mail and my
> > understanding, I can make some conclusions
> >
> > 1) Even though there is a hierarchical order for containers, the shares
> > are not allocated in terms of that order, i.e. if the parent container
> > has a certain number of cpu.shares and several child containers, the
> > parent and child containers have the same priority.
> >
>
> No, the hierarchy is always maintained for the group scheduler. The parent
> group's shares are used to see how long/when it will get scheduled with
> respect to other tasks/groups at that level. As an example, if you have
> a hierarchy like this
>
> /
> |-A
> | |-A1
> | |-A2
> |
> |-B
> |-t1
> |-t2
>
> Let A, B have shares 1024 and t1 and t2 be tasks at nice 0. With this
> setting, A, B, t1 and t2 will get equal CPU bandwidth. The tasks and
> groups in A will split bandwidth available to A in a similar fashion. So
> the cpu shares are not some resource which is split, but a value which
> will decide in what proportion the CPU bandwidth will be shared. So, if
> you have no tasks in /, then setting the shares of A and B to 10 is the
> same as setting the shares of A and B to 1000 (not exactly in terms of
> load balancing, but enough for the purpose of this discussion).
>
> > 2) For only 2 processes running on a dual-core system, if I have one
> > process assigned to the C2048 container and another process assigned to
> > the C1024 container, I would expect the scheduler to allocate in the
> > ratio of 2:1. Behavior 2 indicates that allocation did not happen.
> >
>
> Well, it's not possible to run one thread on more than one core at the
> same time :-). So we do not see the 2:1 usage happen.
>
> > 3) I ran the experiment with 2 instances of a multi-threaded version of
> > the Java workload on the same system and noticed that the 2:1 scenario
> > did not repeat. Hence, multi-threaded code does not benefit from this
> > kind of scheduler.
> >
>
> I am not sure if I have understood your question. But I think the answer
> to the previous question should help.
>
> > 4) You need to have more processes than the number of cores for the cpu
> > shares to work.
> >
>
> Well, the physical limit on how much CPU a thread can hog is 100%, so
> yes, if you have fewer threads than the number of CPUs, you don't expect
> shares to work as expected, and each thread will get 100% CPU time.
>
> thanks,
> --
> regards,
> Dhaval
>
_______________________________________________
Containers mailing list
Containers at lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers



