[Devel] [PATCH rh7] mm/tcache: replace BUG_ON()s with WARN_ON()s

Kirill Tkhai ktkhai at virtuozzo.com
Thu Nov 30 16:24:19 MSK 2017


On 30.11.2017 15:06, Andrey Ryabinin wrote:
> The tcache code is filled with BUG_ON() checks. However, in most cases the
> issues that these BUG_ON()s are supposed to catch are not serious enough to
> kill the machine, so relax them to WARN_ON().
> Also remove the BUG_ON() in tcache_cleancache_init_fs(), because it is
> useless: the function is called from only one place in the kernel, which
> looks like this:
> 		pool_id = cleancache_ops->init_fs(PAGE_SIZE);
> 
> https://jira.sw.ru/browse/PSBM-77154
> Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
> ---
>  mm/tcache.c | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/tcache.c b/mm/tcache.c
> index b5157d9861d0..99c799a9d290 100644
> --- a/mm/tcache.c
> +++ b/mm/tcache.c
> @@ -473,7 +473,7 @@ static void tcache_destroy_pool(int id)
>  	for (i = 0; i < num_node_trees; i++)
>  		tcache_invalidate_node_tree(&pool->node_tree[i]);
>  
> -	BUG_ON(atomic_long_read(&pool->nr_nodes) != 0);
> +	WARN_ON(atomic_long_read(&pool->nr_nodes) != 0);

The patch looks good to me. One small question about the check above: shouldn't
we abort the pool destroy when this WARN_ON() fires, like you did for the node
below? Also, if so, it would be useful to know the exact value of
pool->nr_nodes, since it could be either an overcount or an undercount.
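
Something along these lines is what I have in mind (just a rough, untested
sketch, reusing the names from the surrounding code):

	long nr_nodes = atomic_long_read(&pool->nr_nodes);

	/* Report the stale count and bail out instead of freeing a pool
	 * that nodes may still reference. */
	if (WARN(nr_nodes != 0, "tcache: pool %d destroyed with nr_nodes=%ld\n",
		 id, nr_nodes))
		return;

Leaking the pool in that case looks preferable to freeing memory that live
nodes may still point to, but I have not thought it through.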

>  	kfree(pool->node_tree);
>  	kfree_rcu(pool, rcu);
> @@ -590,9 +590,10 @@ retry:
>  	spin_unlock_irqrestore(&tree->lock, flags);
>  
>  	if (node) {
> -		BUG_ON(node->pool != pool);
>  		if (node != new_node)
>  			kfree(new_node);
> +		if (WARN_ON(node->pool != pool))
> +			node = NULL;
>  		return node;
>  	}
>  
> @@ -696,9 +697,9 @@ tcache_invalidate_node_tree(struct tcache_node_tree *tree)
>  				struct tcache_node, tree_node);
>  
>  		/* Remaining nodes must be held solely by their pages */
> -		BUG_ON(atomic_read(&node->kref.refcount) != 1);
> -		BUG_ON(node->nr_pages == 0);
> -		BUG_ON(node->invalidated);
> +		WARN_ON(atomic_read(&node->kref.refcount) != 1);
> +		WARN_ON(node->nr_pages == 0);
> +		WARN_ON(node->invalidated);
>  
>  		tcache_hold_node(node);
>  		tcache_invalidate_node_pages(node);
> @@ -1182,7 +1183,8 @@ static unsigned long tcache_shrink_scan(struct shrinker *shrink,
>  	struct page **pages = get_cpu_var(tcache_page_vec);
>  	int nr_isolated, nr_reclaimed;
>  
> -	BUG_ON(sc->nr_to_scan > TCACHE_SCAN_BATCH);
> +	if (WARN_ON(sc->nr_to_scan > TCACHE_SCAN_BATCH))
> +		sc->nr_to_scan = TCACHE_SCAN_BATCH;
>  
>  	nr_isolated = tcache_lru_isolate(sc->nid, pages, sc->nr_to_scan);
>  	if (!nr_isolated) {
> @@ -1209,7 +1211,6 @@ struct shrinker tcache_shrinker = {
>  
>  static int tcache_cleancache_init_fs(size_t pagesize)
>  {
> -	BUG_ON(pagesize != PAGE_SIZE);
>  	return tcache_create_pool();
>  }
>  
> 
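
As a side note on the if (WARN_ON(...)) conversions above: unlike BUG_ON(),
WARN_ON() evaluates to the condition it tested, so it can both print a stack
trace and drive a recovery path. Roughly (simplified from the generic
definition in include/asm-generic/bug.h):

	#define WARN_ON(condition) ({				\
		int __ret_warn_on = !!(condition);		\
		if (unlikely(__ret_warn_on))			\
			__WARN();				\
		unlikely(__ret_warn_on);			\
	})

BUG_ON(), by contrast, is a plain statement, so the fallbacks in the hunks
above only become possible with this conversion.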

