thp: remove PG_buddy

PG_buddy can be converted to _mapcount == -2.  With that page flag freed,
PG_compound_lock can be added to page->flags without overflowing it (the
sparse section bits keep increasing) even with CONFIG_X86_PAE=y and
CONFIG_X86_PAT=y.  This change also moves the memory hotplug code from
_mapcount to lru.next, so it cannot clash with the new PageBuddy encoding.
We can't use lru.next for the PG_buddy replacement itself, but memory
hotplug can use lru.next even more easily than it used _mapcount.
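
As an illustration of the lru.next side, the bootmem helpers in
mm/memory_hotplug.c can stash the bootmem section type in lru.next roughly
as below.  This is a sketch of the approach, not the exact hunk from this
commit, and the MEMORY_HOTPLUG_*_BOOTMEM_TYPE bounds are assumed to come
from memory_hotplug.h:

static void get_page_bootmem(unsigned long info, struct page *page,
			     unsigned long type)
{
	/*
	 * Keep the bootmem type in lru.next instead of _mapcount, so it
	 * can never be mistaken for the _mapcount == -2 PageBuddy value.
	 */
	page->lru.next = (struct list_head *) type;
	SetPagePrivate(page);
	set_page_private(page, info);
	atomic_inc(&page->_count);
}

void put_page_bootmem(struct page *page)
{
	unsigned long type;

	/* Read the type back from lru.next rather than _mapcount. */
	type = (unsigned long) page->lru.next;
	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);

	if (atomic_dec_return(&page->_count) == 1) {
		/* Last reference: undo the bootmem bookkeeping and free. */
		ClearPagePrivate(page);
		set_page_private(page, 0);
		INIT_LIST_HEAD(&page->lru);
		__free_pages_bootmem(page, 0);
	}
}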

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author: Andrea Arcangeli
Date: 2011-01-13 15:47:00 -08:00
Committed by: Linus Torvalds
Commit: 5f24ce5fd3 (parent 21ae5b0175)

7 changed files with 52 additions and 29 deletions

@@ -397,6 +397,27 @@ static inline void init_page_count(struct page *page)
 	atomic_set(&page->_count, 1);
 }
 
+/*
+ * PageBuddy() indicates that the page is free and in the buddy system
+ * (see mm/page_alloc.c).
+ */
+static inline int PageBuddy(struct page *page)
+{
+	return atomic_read(&page->_mapcount) == -2;
+}
+
+static inline void __SetPageBuddy(struct page *page)
+{
+	VM_BUG_ON(atomic_read(&page->_mapcount) != -1);
+	atomic_set(&page->_mapcount, -2);
+}
+
+static inline void __ClearPageBuddy(struct page *page)
+{
+	VM_BUG_ON(!PageBuddy(page));
+	atomic_set(&page->_mapcount, -1);
+}
+
 void put_page(struct page *page);
 void put_pages_list(struct list_head *pages);
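
For context, the users of these accessors are the buddy allocator's own
helpers in mm/page_alloc.c; with this change they look roughly like the
sketch below (not part of the hunk shown above).  Encoding the free state
in _mapcount works because a free page has no mappings, so its _mapcount
is otherwise always -1 and -2 can never collide with a real mapping count.

static inline void set_page_order(struct page *page, int order)
{
	/*
	 * Record the buddy order in page->private and flag the page as
	 * free in the buddy allocator via the _mapcount == -2 encoding.
	 */
	set_page_private(page, order);
	__SetPageBuddy(page);
}

static inline void rmv_page_order(struct page *page)
{
	/*
	 * The page is leaving the buddy allocator: put _mapcount back
	 * to -1 and clear the stored order.
	 */
	__ClearPageBuddy(page);
	set_page_private(page, 0);
}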