[Devel] [PATCH rh7 0/9] Manage LRU lists under per-filesystem lock
Kirill Tkhai
ktkhai at virtuozzo.com
Tue Aug 15 14:23:04 MSK 2017
Currently, linking and unlinking LRU pages occurs under a per-NUMA-node lock.
This scales badly, because the volume of a NUMA node's memory may be huge,
and the single lock becomes a bottleneck. For example, 8GB of memory is
2097152 pages, and we take the lock on putting/getting of every page
from tcache.
This series changes the logic. It introduces a new per-pool*node lock,
which is used to protect the LRU lists. With it, isolation actions may
be executed without tcache_nodeinfo::lock, and they do not interfere
with other pools (i.e., containers).
Note, the only place we still take tcache_nodeinfo::lock is insertion
into the reclaim_tree, and that happens only once per 1024 page
additions/removals.
---
Kirill Tkhai (9):
tcache: Decrement removed from LRU pages out of __tcache_lru_del()
tcache: Add tcache_pool_nodeinfo::lock
tcache: Cleanup unused expression from tcache_lru_isolate()
tcache: Make tcache_lru_isolate() keep ni->lock less
tcache: Move erase-insert logic out of tcache_check_events()
tcache: Move reclaim weight calculations out of __tcache_insert_reclaim_node()
tcache: Make tcache_nodeinfo::nr_pages atomic_long_t
tcache: Use ni->lock only for inserting and erasing from rbtree.
tcache: Move add/sub out of pni->lock
mm/tcache.c | 153 +++++++++++++++++++++++++++++++++++++++--------------------
1 file changed, 101 insertions(+), 52 deletions(-)
--
Signed-off-by: Kirill Tkhai <ktkhai at virtuozzo.com>