| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2008-10-17 19:27:02 +0200 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2008-10-20 14:05:02 +0200 |
| commit | ffda12a17a324103e9900fa1035309811eecbfe5 (patch) | |
| tree | 79fe8aae79a41b467f2cdd055036b3017642a9f6 /include | |
| parent | b0aa51b999c449e5e3f9faa1ee406e052d407fe7 (diff) | |
sched: optimize group load balancer
I noticed that tg_shares_up() unconditionally takes rq-locks for all cpus
in the sched_domain. This hurts.
We need the rq-locks whenever we change the weight of the per-cpu group sched
entities. To alleviate this a little, only change the weight when the new
weight is at least shares_thresh away from the old value.
This avoids the rq-lock for the top level entries, since those will never
be re-weighted, and fuzzes the lower level entries a little to gain performance
in semi-stable situations.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
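The diffstat below is limited to include/, so only the new sysctl declaration is visible; the tg_shares_up()/update-path change in kernel/sched.c is not part of this view. As a rough C sketch of the idea described above (assuming kernel-style helpers such as cpu_rq() and spin_lock_irqsave(), and an assumed __set_se_shares() setter; names and signatures are illustrative, not taken from this diff):

```c
/*
 * Illustrative sketch only -- not the code from this commit's diff.
 * Re-weight a group's per-cpu sched entity only when the new share
 * differs from the current weight by more than the threshold, so the
 * rq-lock can be skipped when the weight is (nearly) unchanged.
 */
static void update_group_shares_cpu(struct task_group *tg, int cpu,
				    unsigned long shares)
{
	/*
	 * Top-level entities never get re-weighted, so for them this
	 * test always fails and the rq-lock is never taken.
	 */
	if (abs(shares - tg->se[cpu]->load.weight) >
			sysctl_sched_shares_thresh) {
		struct rq *rq = cpu_rq(cpu);
		unsigned long flags;

		spin_lock_irqsave(&rq->lock, flags);
		__set_se_shares(tg->se[cpu], shares);	/* assumed setter */
		spin_unlock_irqrestore(&rq->lock, flags);
	}
}
```

Lower-level entities still converge toward their exact shares over repeated updates; the threshold only trades a little weight precision for fewer lock acquisitions in semi-stable load situations.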
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/sched.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6eda6ad735d..4f59c8e8597 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1621,6 +1621,7 @@ extern unsigned int sysctl_sched_features;
 extern unsigned int sysctl_sched_migration_cost;
 extern unsigned int sysctl_sched_nr_migrate;
 extern unsigned int sysctl_sched_shares_ratelimit;
+extern unsigned int sysctl_sched_shares_thresh;
 
 int sched_nr_latency_handler(struct ctl_table *table, int write,
 		struct file *file, void __user *buffer, size_t *length,