memcg: fix shmem's swap accounting
Now, you can see the following even when swap accounting is enabled:

 1. Create groups 01 and 02.
 2. Allocate a "file" on tmpfs by a task under 01.
 3. Swap out the "file" (by memory pressure).
 4. Read the "file" from a task in group 02.
 5. The charge for the "file" is moved to group 02.

This is not ideal behavior. It happens because SwapCache that was loaded
by read-ahead is not taken into account.

This patch fixes shmem's swapcache behavior:
 - Remove mem_cgroup_cache_charge_swapin().
 - Add a SwapCache handler routine to mem_cgroup_cache_charge()
   (a rough sketch of the idea follows below). With this, shmem's
   file cache is charged at add_to_page_cache() with GFP_NOWAIT.
 - Pass the page of the swapcache to shrink_mem_cgroup.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Paul Menage <menage@google.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
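A minimal sketch of the idea behind the SwapCache handling in
mem_cgroup_cache_charge(). The internal helpers used here
(mem_cgroup_charge_common(), mem_cgroup_try_charge_swapin(),
mem_cgroup_commit_charge_swapin()) and the charge-type constant are
assumed names for the memcg machinery of that era, so treat this as an
illustration of the approach rather than the exact patch body:

    /* Sketch only: also charge shmem pages that are still in swapcache. */
    int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
                                gfp_t gfp_mask)
    {
            struct mem_cgroup *mem;
            int ret;

            if (mem_cgroup_disabled())
                    return 0;
            if (unlikely(!mm))
                    mm = &init_mm;

            if (!PageSwapCache(page))
                    /* Ordinary page cache, including shmem pages not in swapcache. */
                    return mem_cgroup_charge_common(page, mm, gfp_mask,
                                    MEM_CGROUP_CHARGE_TYPE_CACHE, NULL);

            /*
             * A shmem page brought back as SwapCache (e.g. by read-ahead):
             * charge it when shmem inserts it into its page cache, so the
             * charge lands in the cgroup that owns the file, not in
             * whichever group merely touched the page first.
             */
            ret = mem_cgroup_try_charge_swapin(mm, page, gfp_mask, &mem);
            if (!ret)
                    mem_cgroup_commit_charge_swapin(page, mem);
            return ret;
    }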
commit b5a84319a4 (parent 544122e5e0)
Committed by: Linus Torvalds
@@ -335,16 +335,8 @@ static inline void disable_swap_token(void)
 }
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
-extern int mem_cgroup_cache_charge_swapin(struct page *page,
-			struct mm_struct *mm, gfp_t mask, bool locked);
 extern void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent);
 #else
-static inline
-int mem_cgroup_cache_charge_swapin(struct page *page,
-			struct mm_struct *mm, gfp_t mask, bool locked)
-{
-	return 0;
-}
 static inline void
 mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
 {
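With mem_cgroup_cache_charge_swapin() removed from this header, the
charge for a shmem swapcache page is taken when the page is added to
shmem's page cache. A rough sketch of that caller side, assuming the
usual add_to_page_cache_locked() entry point and the GFP_RECLAIM_MASK
filter (both assumptions here, not part of this hunk):

    /*
     * Sketch of the caller side: the memcg charge happens inside
     * add_to_page_cache_locked(), which shmem invokes with GFP_NOWAIT
     * for pages brought in from swap.
     */
    int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
                                 pgoff_t offset, gfp_t gfp_mask)
    {
            int error;

            error = mem_cgroup_cache_charge(page, current->mm,
                                            gfp_mask & GFP_RECLAIM_MASK);
            if (error)
                    return error;

            /* ... radix-tree insertion and statistics as before ... */
            return 0;
    }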