[Devel] [PATCH 2/3] port/slub: do not assert not having lock in removing freed partial
Dmitry Safonov
dsafonov at virtuozzo.com
Mon Mar 14 06:23:59 PDT 2016
On 03/14/2016 04:19 PM, Dmitry Safonov wrote:
> From: Steven Rostedt <rostedt at goodmis.org>
Steven, sorry for including you in this backport mail; I forgot to use
--suppress-cc=author. Apologies for the noise.
>
> Vladimir reported the following issue:
>
> Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
> remove_partial() to be called with n->list_lock held, but free_partial()
> called from kmem_cache_close() on cache destruction does not follow this
> rule, leading to a warning:
>
> WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown+0x1b2/0x1f0()
> Modules linked in:
> CPU: 0 PID: 2787 Comm: modprobe Tainted: G W 3.14.0-rc1-mm1+ #1
> Hardware name:
> 0000000000000600 ffff88003ae1dde8 ffffffff816d9583 0000000000000600
> 0000000000000000 ffff88003ae1de28 ffffffff8107c107 0000000000000000
> ffff880037ab2b00 ffff88007c240d30 ffffea0001ee5280 ffffea0001ee52a0
> Call Trace:
> __kmem_cache_shutdown+0x1b2/0x1f0
> kmem_cache_destroy+0x43/0xf0
> xfs_destroy_zones+0x103/0x110 [xfs]
> exit_xfs_fs+0x38/0x4e4 [xfs]
> SyS_delete_module+0x19a/0x1f0
> system_call_fastpath+0x16/0x1b
>
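For context, the warning comes from the lockdep annotation that commit
c65c1877bd68 added. Before this patch, remove_partial() has roughly the
following shape (a minimal sketch of the pre-patch code, matching the
lines removed in the diff below):

    static inline void remove_partial(struct kmem_cache_node *n,
                                      struct page *page)
    {
        /*
         * Fires the WARNING above when the caller does not hold
         * n->list_lock, which is the case for free_partial() on the
         * kmem_cache_destroy() path, where the lock is never taken.
         */
        lockdep_assert_held(&n->list_lock);

        list_del(&page->lru);
        n->nr_partial--;
    }
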
> His solution was to add a spinlock in order to quiet lockdep. Although
> there would be no contention on that lock, taking it also requires
> disabling interrupts, which would have a larger impact on the system.
>
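To make that trade-off concrete, the rejected fix would have looked
roughly like the sketch below. This is an illustration only, not
Vladimir's actual patch, and it assumes the usual irqsave pattern used
for n->list_lock elsewhere in mm/slub.c:

    static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
    {
        struct page *page, *h;
        unsigned long flags;

        /*
         * n->list_lock is taken with interrupts disabled elsewhere, so
         * quieting lockdep here means spin_lock_irqsave(), keeping
         * interrupts off for the whole partial-list walk even though
         * nothing can race with cache destruction.
         */
        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry_safe(page, h, &n->partial, lru) {
            if (!page->inuse) {
                remove_partial(n, page);
                discard_slab(s, page);
            }
        }
        spin_unlock_irqrestore(&n->list_lock, flags);
    }
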
> Instead of adding a spinlock in a place where it is not needed for
> lockdep, make a __remove_partial() function that does not test whether
> the list_lock is held, as nothing else can hold it while the cache is
> being freed.
>
> Also add a __add_partial() function that skips the lock validation as
> well, since it is not needed during creation of the cache.
>
> Signed-off-by: Steven Rostedt <rostedt at goodmis.org>
> Reported-by: Vladimir Davydov <vdavydov at parallels.com>
> Suggested-by: David Rientjes <rientjes at google.com>
> Acked-by: David Rientjes <rientjes at google.com>
> Acked-by: Vladimir Davydov <vdavydov at parallels.com>
> Acked-by: Christoph Lameter <cl at linux.com>
> Cc: Pekka Enberg <penberg at kernel.org>
> Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds at linux-foundation.org>
>
> [ported from mainstream; omitted commit 67b6c900dc6d ("mm: slub: work
> around unneeded lockdep warning") as there is no need for it after this
> patch]
> Signed-off-by: Dmitry Safonov <dsafonov at virtuozzo.com>
> ---
> mm/slub.c | 31 +++++++++++++++++++++++--------
> 1 file changed, 23 insertions(+), 8 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 93068da..7a4ad43 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1508,11 +1508,9 @@ static void discard_slab(struct kmem_cache *s, struct page *page)
>  /*
>   * Management of partially allocated slabs.
>   */
> -static inline void add_partial(struct kmem_cache_node *n,
> -                                struct page *page, int tail)
> +static inline void
> +__add_partial(struct kmem_cache_node *n, struct page *page, int tail)
>  {
> -        lockdep_assert_held(&n->list_lock);
> -
>          n->nr_partial++;
>          if (tail == DEACTIVATE_TO_TAIL)
>                  list_add_tail(&page->lru, &n->partial);
> @@ -1520,15 +1518,27 @@ static inline void add_partial(struct kmem_cache_node *n,
>                  list_add(&page->lru, &n->partial);
>  }
> 
> -static inline void remove_partial(struct kmem_cache_node *n,
> -                                        struct page *page)
> +static inline void add_partial(struct kmem_cache_node *n,
> +                                struct page *page, int tail)
>  {
>          lockdep_assert_held(&n->list_lock);
> +        __add_partial(n, page, tail);
> +}
> 
> +static inline void
> +__remove_partial(struct kmem_cache_node *n, struct page *page)
> +{
>          list_del(&page->lru);
>          n->nr_partial--;
>  }
> 
> +static inline void remove_partial(struct kmem_cache_node *n,
> +                                        struct page *page)
> +{
> +        lockdep_assert_held(&n->list_lock);
> +        __remove_partial(n, page);
> +}
> +
>  /*
>   * Remove slab from the partial list, freeze it and
>   * return the pointer to the freelist.
> @@ -2906,7 +2916,12 @@ static void early_kmem_cache_node_alloc(int node)
>          init_kmem_cache_node(n);
>          inc_slabs_node(kmem_cache_node, node, page->objects);
> 
> -        add_partial(n, page, DEACTIVATE_TO_HEAD);
> +        /*
> +         * No locks need to be taken here as it has just been
> +         * initialized and there is no concurrent access.
> +         */
> +
> +        __add_partial(n, page, DEACTIVATE_TO_HEAD);
>  }
> 
>  static void free_kmem_cache_nodes(struct kmem_cache *s)
> @@ -3192,7 +3207,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> 
>          list_for_each_entry_safe(page, h, &n->partial, lru) {
>                  if (!page->inuse) {
> -                        remove_partial(n, page);
> +                        __remove_partial(n, page);
>                          discard_slab(s, page);
>                  } else {
>                          list_slab_objects(s, page,
--
Regards,
Dmitry Safonov