[PATCH] freepgt: free_pgtables use vma list
Recent woes with some arches needing their own pgd_addr_end macro; and
4-level clear_page_range regression since 2.6.10's clear_page_tables;
and its long-standing well-known inefficiency in searching throughout
the higher-level page tables for those few entries to clear and free:
all can be blamed on ignoring the list of vmas when we free page tables.

Replace exit_mmap's clear_page_range of the total user address space by
free_pgtables operating on the mm's vma list; unmap_region uses it in
the same way, giving floor and ceiling beyond which it may not free
tables.  This brings lmbench fork/exec/sh numbers back to 2.6.10
(unless preempt is enabled, in which case latency fixes spoil
unmap_vmas throughput).

Beware: the do_mmap_pgoff driver failure case must now use unmap_region
instead of zap_page_range, since a page table might have been
allocated, and can only be freed while it is touched by some vma.

Move free_pgtables from mmap.c to memory.c, where its lower levels are
adapted from the clear_page_range levels.  (Most of free_pgtables' old
code was actually for a non-existent case, prev not properly set up,
dating from before hch gave us split_vma.)

Pass mmu_gather** in the public interfaces, since we might want to add
latency lockdrops later; but no attempt to do so yet, going by vma
should itself reduce latency.

But what if is_hugepage_only_range?  Those ia64 and ppc64 cases need
careful examination: put that off until a later patch of the series.

What of x86_64's 32bit vdso page, which __map_syscall32 maps outside
any vma?  And the range to sparc64's flush_tlb_pgtables?  It's less
clear to me now that we need to do more than is done here - every
PMD_SIZE ever occupied will be flushed, do we really have to flush
every PGDIR_SIZE ever partially occupied?  A shame to complicate it
unnecessarily.

Special thanks to David Miller for time spent repairing my ceilings.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
commit e0da382c92
parent 9f6c6fc505
committed by Linus Torvalds
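To make the floor/ceiling scheme concrete, here is a minimal user-space
sketch of the vma-driven walk the message describes. It is not the
kernel code: the mmu_gather plumbing, hugetlb handling and the real
page-table freeing are omitted, the two-level PGDIR constants are
simplifying assumptions, and free_pgd_range here only reports which
pgd-sized slots could be freed.

#include <stdio.h>

#define PGDIR_SHIFT 22                  /* assumed two-level, 4MB slots */
#define PGDIR_SIZE  (1UL << PGDIR_SHIFT)
#define PGDIR_MASK  (~(PGDIR_SIZE - 1))

struct vma {
	unsigned long vm_start, vm_end;
	struct vma *vm_next;
};

/* Report the pgd slots that [addr, end) alone justifies freeing. */
static void free_pgd_range(unsigned long addr, unsigned long end,
			   unsigned long floor, unsigned long ceiling)
{
	unsigned long first = addr & PGDIR_MASK;
	unsigned long last = (end + PGDIR_SIZE - 1) & PGDIR_MASK;

	if (first < floor)
		first += PGDIR_SIZE;	/* bottom slot shared below floor */
	if (ceiling && last > ceiling)
		last -= PGDIR_SIZE;	/* top slot shared above ceiling */
	if (first < last)
		printf("  free pgd slots [%#lx, %#lx)\n", first, last);
}

/* Walk the vma list; each vma's span is bounded by its successor. */
static void free_pgtables(struct vma *vma,
			  unsigned long floor, unsigned long ceiling)
{
	while (vma) {
		struct vma *next = vma->vm_next;
		free_pgd_range(vma->vm_start, vma->vm_end,
			       floor, next ? next->vm_start : ceiling);
		vma = next;
	}
}

int main(void)
{
	struct vma c = { 0x60000000UL, 0x60400000UL, NULL };
	struct vma b = { 0x10000000UL, 0x10001000UL, &c };
	struct vma a = { 0x00400000UL, 0x00500000UL, &b };

	free_pgtables(&a, 0, 0);	/* exit-style: no floor, no ceiling */
	return 0;
}

Unlike the old clear_page_range sweep of the entire user address space,
this walk visits only the slot ranges the three vmas actually touch,
and a slot is skipped whenever floor or ceiling says a neighbour
outside the permitted range might still be using it.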
@@ -187,45 +187,12 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address, pmd_t *pmd, int wri
 }
 
 /*
- * Same as generic free_pgtables(), except constant PGDIR_* and pgd_offset
- * are hugetlb region specific.
+ * Do nothing, until we've worked out what to do! To allow build, we
+ * must remove reference to clear_page_range since it no longer exists.
  */
 void hugetlb_free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *prev,
 	unsigned long start, unsigned long end)
 {
-	unsigned long first = start & HUGETLB_PGDIR_MASK;
-	unsigned long last = end + HUGETLB_PGDIR_SIZE - 1;
-	struct mm_struct *mm = tlb->mm;
-
-	if (!prev) {
-		prev = mm->mmap;
-		if (!prev)
-			goto no_mmaps;
-		if (prev->vm_end > start) {
-			if (last > prev->vm_start)
-				last = prev->vm_start;
-			goto no_mmaps;
-		}
-	}
-	for (;;) {
-		struct vm_area_struct *next = prev->vm_next;
-
-		if (next) {
-			if (next->vm_start < start) {
-				prev = next;
-				continue;
-			}
-			if (last > next->vm_start)
-				last = next->vm_start;
-		}
-		if (prev->vm_end > first)
-			first = prev->vm_end;
-		break;
-	}
-no_mmaps:
-	if (last < first)	/* for arches with discontiguous pgd indices */
-		return;
-	clear_page_range(tlb, first, last);
 }
 
 void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
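For contrast with the removed clamping loop above, the new scheme
leaves the clamping to the caller: an unmap_region-style caller hands
free_pgtables the neighbouring vmas' boundaries as floor and ceiling.
A sketch reusing the simulation types above (prev/next and the boundary
choice follow the commit message; unmap_region_sim and the use of 0 for
"no limit" are assumptions of the sketch, not the kernel's code):

/* Partial unmap: the vma chain has been detached from the mm.
 * prev sits just below the unmapped range, next just above; their
 * boundaries become floor and ceiling, so free_pgtables cannot free
 * a page table still in use by a live neighbour.
 */
static void unmap_region_sim(struct vma *detached,
			     struct vma *prev, struct vma *next)
{
	unsigned long floor = prev ? prev->vm_end : 0;
	unsigned long ceiling = next ? next->vm_start : 0; /* 0: no limit */

	free_pgtables(detached, floor, ceiling);
}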