path: root/kernel/sched_fair.c
author	Peter Zijlstra <a.p.zijlstra@chello.nl>	2008-06-27 13:41:19 +0200
committer	Ingo Molnar <mingo@elte.hu>	2008-06-27 14:31:33 +0200
commit	4d8d595dfa69e1c807bf928f364668a7f30da5dc (patch)
tree	af61c1d6d53aea66fac272e7dad67ae93a832a66 /kernel/sched_fair.c
parent	b6a86c746f5b708012809958462234d19e9c8177 (diff)
sched: update aggregate when holding the RQs
It was observed that in __update_group_shares_cpu(), rq_weight > aggregate()->rq_weight. This is caused by forks/wakeups occurring between the initial aggregate pass and the locking of the RQs for load balancing. To avoid this situation, partially re-do the aggregation once the RQs are locked (which prevents new tasks from appearing).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
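The pattern the commit describes is general: an optimistic, lockless aggregation pass can go stale before the per-CPU runqueue locks are taken, so the aggregate is recomputed once the locks are held. The sketch below is an illustrative userspace analogue of that pattern, not the actual kernel patch; names such as rq_weight, aggregate_weight, NR_CPUS, aggregate_prepass() and aggregate_locked() are stand-ins invented for this example.

```c
/*
 * Illustrative sketch (not the kernel code): re-do an aggregation once
 * all per-CPU locks are held, so the aggregate cannot be smaller than
 * any individual contribution it is later compared against.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

static pthread_mutex_t rq_lock[NR_CPUS];   /* one "runqueue" lock per CPU */
static unsigned long rq_weight[NR_CPUS];   /* per-CPU load, changed by forks/wakeups */
static unsigned long aggregate_weight;     /* cached sum over all CPUs */

/* First pass: cheap, lockless aggregation; may be stale by the time it is used. */
static void aggregate_prepass(void)
{
	unsigned long sum = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += rq_weight[cpu];
	aggregate_weight = sum;
}

/*
 * Second pass: once every rq_lock is held, no fork/wakeup can change
 * rq_weight[], so re-doing the sum guarantees
 * rq_weight[cpu] <= aggregate_weight for every cpu.
 */
static void aggregate_locked(void)
{
	unsigned long sum = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		pthread_mutex_lock(&rq_lock[cpu]);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += rq_weight[cpu];
	aggregate_weight = sum;

	/* ... distribute group shares using the now-consistent aggregate ... */

	for (int cpu = NR_CPUS - 1; cpu >= 0; cpu--)
		pthread_mutex_unlock(&rq_lock[cpu]);
}

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_mutex_init(&rq_lock[cpu], NULL);
		rq_weight[cpu] = 1024;
	}

	aggregate_prepass();   /* optimistic, lockless estimate */
	aggregate_locked();    /* authoritative value under the locks */
	printf("aggregate_weight = %lu\n", aggregate_weight);
	return 0;
}
```

In the kernel the equivalent re-aggregation happens while the runqueue locks taken for load balancing are held, which is why no new tasks can appear between the recomputation and its use.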
Diffstat (limited to 'kernel/sched_fair.c')
0 files changed, 0 insertions, 0 deletions