[PATCH] mm: pagefault_{disable,enable}()

Introduce pagefault_{disable,enable}() and use these where previously we did
manual preempt increments/decrements to make the pagefault handler do the
atomic thing.

Currently they still rely on the increased preempt count, but do not rely on
disabled preemption; this might go away in the future.

(NOTE: the extra barrier() in pagefault_disable might fix some holes on
       machines which have too many registers for their own good)
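
For reference, here is a minimal sketch of how the two helpers can be layered
on the preempt count as described above (the real definitions were added in
include/linux/uaccess.h; treat this as an illustration of the idea, not the
verbatim patch):

	static inline void pagefault_disable(void)
	{
		inc_preempt_count();
		/*
		 * Make sure the count is visible before a fault can hit;
		 * do_page_fault() keys off in_atomic() to avoid sleeping.
		 */
		barrier();
	}

	static inline void pagefault_enable(void)
	{
		/*
		 * Make sure the last user-space accesses have completed
		 * before the fault handler may sleep again.
		 */
		barrier();
		dec_preempt_count();
		/*
		 * Preemption was effectively off for the whole region,
		 * so give the scheduler a chance to run now.
		 */
		barrier();
		preempt_check_resched();
	}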

[heiko.carstens@de.ibm.com: s390 fix]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Nick Piggin <npiggin@suse.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
commit a866374aec
parent 6edaf68a87
Author:  Peter Zijlstra
Date:    2006-12-06 20:32:20 -08:00
Commit:  Linus Torvalds

 17 files changed, 88 insertions(+), 62 deletions(-)

--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -39,7 +39,7 @@ void *__kmap_atomic(struct page *page, enum km_type type)
 	unsigned long vaddr;
 
 	/* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */
-	inc_preempt_count();
+	pagefault_disable();
 	if (!PageHighMem(page))
 		return page_address(page);
 
@@ -62,8 +62,7 @@ void __kunmap_atomic(void *kvaddr, enum km_type type)
 	enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
 
 	if (vaddr < FIXADDR_START) { // FIXME
-		dec_preempt_count();
-		preempt_check_resched();
+		pagefault_enable();
 		return;
 	}
 
@@ -78,8 +77,7 @@ void __kunmap_atomic(void *kvaddr, enum km_type type)
 	local_flush_tlb_one(vaddr);
 #endif
 
-	dec_preempt_count();
-	preempt_check_resched();
+	pagefault_enable();
 }
 
 #ifndef CONFIG_LIMITED_DMA
@@ -92,7 +90,7 @@ void *kmap_atomic_pfn(unsigned long pfn, enum km_type type)
 	enum fixed_addresses idx;
 	unsigned long vaddr;
 
-	inc_preempt_count();
+	pagefault_disable();
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
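
The conversions above enable the usual atomic-usercopy pattern: between
pagefault_disable() and pagefault_enable() a user access may still fault, but
the handler sees in_atomic() and fails the access instead of sleeping, so the
caller can retry via a sleeping path. A sketch of such a caller, assuming the
*_inatomic usercopy API of this era (copy_into_page and the fallback shape are
illustrative, not part of this patch):

	static int copy_into_page(struct page *page, const char __user *buf,
				  unsigned long len)
	{
		char *kaddr;
		unsigned long left;

		kaddr = kmap_atomic(page, KM_USER0);	/* implies pagefault_disable() */
		left = __copy_from_user_inatomic(kaddr, buf, len);
		kunmap_atomic(kaddr, KM_USER0);		/* implies pagefault_enable() */

		if (left) {
			/* Slow path: faulting and sleeping are allowed here. */
			kaddr = kmap(page);
			left = copy_from_user(kaddr, buf, len);
			kunmap(page);
		}
		return left ? -EFAULT : 0;
	}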