ARM: 6379/1: Assume new page cache pages have dirty D-cache
There are places in Linux where writes to newly allocated page cache
pages happen without a subsequent call to flush_dcache_page() (several
PIO drivers including USB HCD). This patch changes the meaning of
PG_arch_1 to be PG_dcache_clean and always flushes the D-cache for a
newly mapped page in update_mmu_cache().

The patch also sets the PG_arch_1 bit in the DMA cache maintenance
function to avoid additional cache flushing in update_mmu_cache().

Tested-by: Rabin Vincent <rabin.vincent@stericsson.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
commit c01778001a (parent 0fc73099dd)
committed by Russell King
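The key point in the description above is that PG_arch_1 starts out clear on a freshly allocated page cache page, so inverting its meaning from "dirty" to "clean" makes the safe behaviour (flushing) the default. Below is a minimal userspace C sketch of that inversion, not kernel code: the helper names old_needs_flush() and new_needs_flush() are made up for illustration and stand in for the old and new tests in update_mmu_cache().

/*
 * Sketch only: models the PG_arch_1 meaning change described in the
 * commit message.  PG_DCACHE_DIRTY / PG_DCACHE_CLEAN both stand in for
 * the single PG_arch_1 bit; only its interpretation differs.
 */
#include <stdbool.h>
#include <stdio.h>

#define PG_DCACHE_DIRTY (1u << 0)   /* old interpretation of PG_arch_1 */
#define PG_DCACHE_CLEAN (1u << 0)   /* new interpretation of PG_arch_1 */

/* Old scheme: flush only if someone explicitly marked the page dirty. */
static bool old_needs_flush(unsigned int flags)
{
	return flags & PG_DCACHE_DIRTY;
}

/* New scheme: flush unless the page has been explicitly marked clean. */
static bool new_needs_flush(unsigned int flags)
{
	return !(flags & PG_DCACHE_CLEAN);
}

int main(void)
{
	/* A freshly allocated page cache page: PG_arch_1 starts out clear. */
	unsigned int new_page_flags = 0;

	/*
	 * The old scheme misses the flush when a PIO driver never calls
	 * flush_dcache_page(); the new scheme flushes by default.
	 */
	printf("old scheme flushes new page: %d\n", old_needs_flush(new_page_flags));
	printf("new scheme flushes new page: %d\n", new_needs_flush(new_page_flags));
	return 0;
}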
@@ -141,7 +141,7 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
  * a page table, or changing an existing PTE.  Basically, there are two
  * things that we need to take care of:
  *
- *  1. If PG_dcache_dirty is set for the page, we need to ensure
+ *  1. If PG_dcache_clean is not set for the page, we need to ensure
  *     that any cache entries for the kernels virtual memory
  *     range are written back to the page.
  *  2. If we have multiple shared mappings of the same space in
@@ -169,7 +169,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
 
 	mapping = page_mapping(page);
 #ifndef CONFIG_SMP
-	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
+	if (!test_and_set_bit(PG_dcache_clean, &page->flags))
 		__flush_dcache_page(mapping, page);
 #endif
 	if (mapping) {