[Devel] Re: [Bugme-new] [Bug 16417] New: Slow context switches with SMP and CONFIG_FAIR_GROUP_SCHED

Andrew Morton akpm at linux-foundation.org
Thu Jul 22 15:52:22 PDT 2010


(switched to email.  Please respond via emailed reply-to-all, not via the
bugzilla web interface).

sched suckage!  Do we have a linear search in there?


On Mon, 19 Jul 2010 14:38:09 GMT
bugzilla-daemon at bugzilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=16417
> 
>            Summary: Slow context switches with SMP and
>                     CONFIG_FAIR_GROUP_SCHED
>            Product: Process Management
>            Version: 2.5
>     Kernel Version: 2.6.34.1
>           Platform: All
>         OS/Version: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: Scheduler
>         AssignedTo: mingo at elte.hu
>         ReportedBy: pbourdon at excellency.fr
>         Regression: No
> 
> 
> Hello,
> 
> We have been experiencing slow context switches when using a large number of
> cgroups (around 600 groups) and CONFIG_FAIR_GROUP_SCHED. This causes increased
> system time usage for context-switch-heavy processes (measured with pidstat -w)
> and a drop in handled timer interrupts.
> 
> This problem only appears on SMP: when booting with nosmp, the issue does not
> appear. With maxcpus=2 through maxcpus=8 we were able to reproduce it reliably.
> 
> Steps to reproduce:
> - mount the cgroup filesystem in /dev/cgroup
> - cd /dev/cgroup && for i in $(seq 1 5000); do mkdir test_group_$i; done
> - launch lat_ctx from lmbench, for instance ./lat_ctx -N 200 100
> 
> The results from lat_ctx were the following:
> - SMP enabled, no cgroups : 2.65
> - SMP enabled, 1000 cgroups : 3.40
> - SMP enabled, 6000 cgroups : 3957.36
> - SMP disabled, 6000 cgroups : 1.58
> 
> We can see that beyond a certain number of cgroups, context switching starts
> taking far longer. Another way to reproduce this problem:
> - launch cat /dev/zero | pv -L 1G > /dev/null
> - look at the CPU usage (about 40% here)
> - cd /dev/cgroup && for i in $(seq 1 5000); do mkdir test_group_$i; done
> - look at the CPU usage (about 80% here)
> 
> Also note that when a lot of cgroups are present, the system spends a lot of
> time in softirqs, and fewer timer interrupts are handled than normal
> (according to our graphs).
> 


_______________________________________________
Containers mailing list
Containers at lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers
