[PATCH] sched: avoid div in rebalance_tick
Avoid the expensive integer divide 3 times per CPU per tick. A userspace test of this loop went from 26ns down to 19ns on a G5, and from 123ns down to 28ns on a P3. (Also avoid a variable bit shift, as suggested by Alan. The effect of this wasn't noticeable on the CPUs I tested with.)

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
committed by Linus Torvalds
parent 0a9ac38246
commit ff91691bcc
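The commit message cites a userspace timing test of this loop, but the harness itself is not part of the commit. Below is a minimal sketch of what such a test could look like: update_load_div() mirrors the old loop body and update_load_shift() the new one, while cpu_load, raw_weighted_load, measure() and the iteration count are hypothetical stand-ins invented for illustration, not names from the patch or the kernel.

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the per-runqueue state touched by update_load(). */
static volatile unsigned long cpu_load[3];
static unsigned long raw_weighted_load = 2048;

/* Old body: divides by scale on every iteration. */
static void update_load_div(void)
{
        unsigned long this_load = raw_weighted_load;
        int i, scale;

        for (i = 0, scale = 1; i < 3; i++, scale <<= 1) {
                unsigned long old_load = cpu_load[i], new_load = this_load;

                if (new_load > old_load)
                        new_load += scale - 1;
                cpu_load[i] = (old_load * (scale - 1) + new_load) / scale;
        }
}

/* New body: scale stays equal to 1 << i, so ">> i" replaces the divide. */
static void update_load_shift(void)
{
        unsigned long this_load = raw_weighted_load;
        unsigned int i, scale;

        for (i = 0, scale = 1; i < 3; i++, scale += scale) {
                unsigned long old_load = cpu_load[i], new_load = this_load;

                if (new_load > old_load)
                        new_load += scale - 1;
                cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
        }
}

/* Crude timing: average nanoseconds per call over many iterations. */
static double measure(void (*fn)(void), long iters)
{
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long n = 0; n < iters; n++)
                fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters;
}

int main(void)
{
        long iters = 10 * 1000 * 1000;

        printf("div:   %.1f ns/call\n", measure(update_load_div, iters));
        printf("shift: %.1f ns/call\n", measure(update_load_shift, iters));
        return 0;
}

The absolute numbers from such a harness depend heavily on the CPU and compiler flags; the point is only the relative cost of the divide versus the shift, as in the figures quoted above.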
@@ -2897,14 +2897,16 @@ static void active_load_balance(struct rq *busiest_rq, int busiest_cpu)
 static void update_load(struct rq *this_rq)
 {
         unsigned long this_load;
-        int i, scale;
+        unsigned int i, scale;
 
         this_load = this_rq->raw_weighted_load;
 
         /* Update our load: */
-        for (i = 0, scale = 1; i < 3; i++, scale <<= 1) {
+        for (i = 0, scale = 1; i < 3; i++, scale += scale) {
                 unsigned long old_load, new_load;
 
+                /* scale is effectively 1 << i now, and >> i divides by scale */
+
                 old_load = this_rq->cpu_load[i];
                 new_load = this_load;
                 /*
@@ -2914,7 +2916,7 @@ static void update_load(struct rq *this_rq)
                  */
                 if (new_load > old_load)
                         new_load += scale-1;
-                this_rq->cpu_load[i] = (old_load*(scale-1) + new_load) / scale;
+                this_rq->cpu_load[i] = (old_load*(scale-1) + new_load) >> i;
         }
 }
 
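The shift is an exact replacement rather than an approximation: scale equals 1 << i on every pass, and for unsigned operands x / (1 << i) and x >> i produce the same value. The following standalone check is an illustration added here, not part of the commit:

#include <assert.h>
#include <stdio.h>

int main(void)
{
        /* For i = 0..2, scale = 1 << i; verify x / scale == x >> i on a sample range. */
        for (unsigned int i = 0; i < 3; i++) {
                unsigned long scale = 1UL << i;

                for (unsigned long x = 0; x < 100000; x++)
                        assert(x / scale == x >> i);
        }
        printf("division by scale and >> i agree for unsigned values\n");
        return 0;
}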
|