x86/mm: Fix pgd_lock deadlock

It's forbidden to take the page_table_lock with irqs disabled: if there
is contention, the IPIs (for TLB flushes) sent while the page_table_lock
is held will never be serviced by a CPU that is spinning on the lock with
irqs off, leading to a deadlock.

Nobody takes the pgd_lock from irq context, so the _irqsave can be
removed.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@kernel.org>
LKML-Reference: <201102162345.p1GNjMjm021738@imap1.linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
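
To make the deadlock described above concrete, here is a minimal sketch of
the interleaving (illustrative only: cpu0_path()/cpu1_path() are made-up
names, the IPI send-and-wait is condensed into a comment rather than the
real flush_tlb_others() call chain, and the fragment is not a standalone
translation unit):

/*
 * CPU 1 holds pgd_lock and spins until every other CPU acknowledges its
 * TLB-flush IPI; CPU 0 spins on pgd_lock with irqs disabled and can
 * therefore never service that IPI.  Neither CPU makes progress.
 */
#include <linux/spinlock.h>

extern spinlock_t pgd_lock;

static void cpu1_path(void)			/* hypothetical lock holder */
{
	spin_lock(&pgd_lock);
	/*
	 * Send TLB-flush IPIs to the other CPUs and spin until all of
	 * them have acknowledged -- blocked forever by cpu0_path() below.
	 */
	spin_unlock(&pgd_lock);
}

static void cpu0_path(void)			/* old xen_mm_pin_all() pattern */
{
	unsigned long flags;

	/*
	 * irqs stay off while spinning on the lock CPU 1 holds, so the
	 * flush IPI from CPU 1 is never handled on this CPU.
	 */
	spin_lock_irqsave(&pgd_lock, flags);
	spin_unlock_irqrestore(&pgd_lock, flags);
}

With the _irqsave/_irqrestore variants dropped (and pgd_lock never taken
from irq context), a CPU spinning on pgd_lock keeps irqs enabled, the
flush IPI is still serviced, and the lock holder makes progress.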
5 changed files with 22 additions and 30 deletions

--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -986,10 +986,9 @@ static void xen_pgd_pin(struct mm_struct *mm)
  */
 void xen_mm_pin_all(void)
 {
-	unsigned long flags;
 	struct page *page;
 
-	spin_lock_irqsave(&pgd_lock, flags);
+	spin_lock(&pgd_lock);
 
 	list_for_each_entry(page, &pgd_list, lru) {
 		if (!PagePinned(page)) {
@@ -998,7 +997,7 @@ void xen_mm_pin_all(void)
 		}
 	}
 
-	spin_unlock_irqrestore(&pgd_lock, flags);
+	spin_unlock(&pgd_lock);
 }
 
 /*
@@ -1099,10 +1098,9 @@ static void xen_pgd_unpin(struct mm_struct *mm)
  */
 void xen_mm_unpin_all(void)
 {
-	unsigned long flags;
 	struct page *page;
 
-	spin_lock_irqsave(&pgd_lock, flags);
+	spin_lock(&pgd_lock);
 
 	list_for_each_entry(page, &pgd_list, lru) {
 		if (PageSavePinned(page)) {
@@ -1112,7 +1110,7 @@ void xen_mm_unpin_all(void)
 		}
 	}
 
-	spin_unlock_irqrestore(&pgd_lock, flags);
+	spin_unlock(&pgd_lock);
 }
 
 void xen_activate_mm(struct mm_struct *prev, struct mm_struct *next)