cpu-hotplug: replace lock_cpu_hotplug() with get_online_cpus()
Replace all lock_cpu_hotplug/unlock_cpu_hotplug calls in the kernel with
get_online_cpus and put_online_cpus, as the new names highlight the
refcount semantics of these operations.

The new API guarantees protection against the cpu-hotplug operation, but
it doesn't guarantee serialized access to any of the local data
structures. Hence the changes need to be reviewed.

In the case of pseries_add_processor/pseries_remove_processor, use
cpu_maps_update_begin()/cpu_maps_update_done(), as we're modifying the
cpu_present_map there.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
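The refcount semantics called out above can be sketched as a small userspace model in C. This is an illustrative analogue, not the kernel's implementation: the reader-side names mirror the kernel API, while `cpu_hotplug_begin()`/`cpu_hotplug_done()` and `online_refcount()` are hypothetical stand-ins for the writer side and for inspection.

```c
#include <pthread.h>

/* Userspace sketch (NOT kernel code) of the refcount idea behind
 * get_online_cpus()/put_online_cpus(): each "reader" takes a
 * reference that holds off a hotplug "writer"; the writer waits
 * until every reference has been dropped. */

static pthread_mutex_t cpu_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cpu_hotplug_cond = PTHREAD_COND_INITIALIZER;
static int refcount;            /* active get_online_cpus() holders */
static int hotplug_in_progress; /* a hotplug "writer" is running   */

void get_online_cpus(void)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	while (hotplug_in_progress)
		pthread_cond_wait(&cpu_hotplug_cond, &cpu_hotplug_lock);
	refcount++;			/* refcount semantics: calls may nest */
	pthread_mutex_unlock(&cpu_hotplug_lock);
}

void put_online_cpus(void)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	if (--refcount == 0)		/* last holder wakes a waiting writer */
		pthread_cond_broadcast(&cpu_hotplug_cond);
	pthread_mutex_unlock(&cpu_hotplug_lock);
}

/* Hypothetical writer side: block new readers, drain existing ones. */
void cpu_hotplug_begin(void)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	hotplug_in_progress = 1;
	while (refcount > 0)
		pthread_cond_wait(&cpu_hotplug_cond, &cpu_hotplug_lock);
	pthread_mutex_unlock(&cpu_hotplug_lock);
}

void cpu_hotplug_done(void)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	hotplug_in_progress = 0;
	pthread_cond_broadcast(&cpu_hotplug_cond); /* release waiting readers */
	pthread_mutex_unlock(&cpu_hotplug_lock);
}

int online_refcount(void)	/* inspection helper for this sketch only */
{
	int r;

	pthread_mutex_lock(&cpu_hotplug_lock);
	r = refcount;
	pthread_mutex_unlock(&cpu_hotplug_lock);
	return r;
}
```

Note how this model captures the commit message's caveat: holding a reference only keeps the hotplug writer out; it does not serialize two readers against each other, so any shared local data still needs its own locking.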
committed by: Ingo Molnar
parent: d221938c04
commit: 86ef5c9a8e
@@ -537,10 +537,10 @@ static int cpusets_overlap(struct cpuset *a, struct cpuset *b)
  *
  * Call with cgroup_mutex held.  May take callback_mutex during
  * call due to the kfifo_alloc() and kmalloc() calls.  May nest
- * a call to the lock_cpu_hotplug()/unlock_cpu_hotplug() pair.
+ * a call to the get_online_cpus()/put_online_cpus() pair.
  * Must not be called holding callback_mutex, because we must not
- * call lock_cpu_hotplug() while holding callback_mutex.  Elsewhere
- * the kernel nests callback_mutex inside lock_cpu_hotplug() calls.
+ * call get_online_cpus() while holding callback_mutex.  Elsewhere
+ * the kernel nests callback_mutex inside get_online_cpus() calls.
  * So the reverse nesting would risk an ABBA deadlock.
  *
  * The three key local variables below are:
@@ -691,9 +691,9 @@ restart:
 
 rebuild:
 	/* Have scheduler rebuild sched domains */
-	lock_cpu_hotplug();
+	get_online_cpus();
 	partition_sched_domains(ndoms, doms);
-	unlock_cpu_hotplug();
+	put_online_cpus();
 
 done:
 	if (q && !IS_ERR(q))
@@ -1617,10 +1617,10 @@ static struct cgroup_subsys_state *cpuset_create(
  *
  * If the cpuset being removed has its flag 'sched_load_balance'
  * enabled, then simulate turning sched_load_balance off, which
- * will call rebuild_sched_domains().  The lock_cpu_hotplug()
+ * will call rebuild_sched_domains().  The get_online_cpus()
  * call in rebuild_sched_domains() must not be made while holding
  * callback_mutex.  Elsewhere the kernel nests callback_mutex inside
- * lock_cpu_hotplug() calls.  So the reverse nesting would risk an
+ * get_online_cpus() calls.  So the reverse nesting would risk an
  * ABBA deadlock.
  */
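The ABBA hazard the comments above guard against can be sketched with plain pthread mutexes. In this hypothetical model, `lock_a` plays the role of get_online_cpus() and `lock_b` the role of callback_mutex; the names and the `run_workers()` helper are illustrative only, not kernel code.

```c
#include <pthread.h>

/* Sketch of the lock-ordering rule: elsewhere the kernel nests
 * callback_mutex (lock_b) inside get_online_cpus() (lock_a), so
 * every path must take A before B.  If one thread took B then A,
 * each thread could end up holding the lock the other is waiting
 * for -- the classic ABBA deadlock. */

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "get_online_cpus()" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "callback_mutex"    */
static int work_done;

static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);	/* always A first ...        */
	pthread_mutex_lock(&lock_b);	/* ... then B: one ordering  */
	work_done++;			/* critical section          */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* Run n concurrent workers; with a consistent A-then-B order they
 * all complete and no deadlock is possible. */
int run_workers(int n)
{
	pthread_t tids[16];
	int i;

	if (n > 16)
		n = 16;
	for (i = 0; i < n; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (i = 0; i < n; i++)
		pthread_join(tids[i], NULL);
	return work_done;
}
```

This is why cpuset_destroy() must not hold callback_mutex when it triggers rebuild_sched_domains(): doing so would be the B-then-A path in this sketch.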