[Devel] [PATCH rh7 3/3] net: core: attempt a single high order allocation

Anatoly Stepanov astepanov at cloudlinux.com
Fri Oct 21 04:36:01 PDT 2016


This is a port of the following upstream commit:

commit d9b2938aabf757da2d40153489b251d4fc3fdd18

net: attempt a single high order allocation

In commit ed98df3361f0 ("net: use __GFP_NORETRY for high order
allocations") we tried to address one issue caused by order-3
allocations.

We still observe high latencies and system overhead in situations
where compaction is not successful.

Instead of trying order-3, order-2, and order-1, do a single order-3
best-effort attempt and immediately fall back to plain order-0.
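
The new shape of the refill path, distilled from the patch below into a
minimal sketch (rh7 still uses __GFP_WAIT; later kernels renamed this
to __GFP_DIRECT_RECLAIM):

	/* single best-effort high order attempt ... */
	page = alloc_pages((gfp & ~__GFP_WAIT) | __GFP_COMP |
			   __GFP_NOWARN | __GFP_NORETRY,
			   SKB_FRAG_PAGE_ORDER);
	if (!page)			/* ... then straight to order-0, */
		page = alloc_page(gfp);	/* which may block and retry */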

This mimics the slub strategy of falling back to the slab min order
when the high order allocation used for performance fails.

Order-3 allocations give a performance boost only if they can be done
without a recurring and expensive memory scan.
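
(For scale: net/core/sock.c defines the refill order as

	#define SKB_FRAG_PAGE_ORDER	get_order(32768)

so on 4KB-page systems a successful order-3 allocation hands the socket
a 32KB compound page to carve fragments from, where order-0 would need
eight separate allocations.)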

Quoting David:

The page allocator relies on synchronous (sync light) memory compaction
after direct reclaim for allocations that don't retry, and deferred
compaction doesn't work with this strategy because the allocation order
is always decreasing from the previous failed attempt.

This means sync light compaction will always be encountered if memory
cannot be defragmented or reclaimed several times during the
skb_page_frag_refill() iteration.
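
For reference, the mask used for the single best-effort attempt in the
refill path below spells out that "don't retry" behavior (the variable
name here is illustrative; the patch passes the mask inline):

	/* never block on reclaim/compaction, never retry, don't warn
	 * on failure, and allocate a compound page so the order-3
	 * block is reference-counted as a single unit */
	gfp_t high_order_gfp = (gfp & ~__GFP_WAIT) | __GFP_COMP |
			       __GFP_NOWARN | __GFP_NORETRY;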

Signed-off-by: Eric Dumazet <edumazet at google.com>
Acked-by: David Rientjes <rientjes at google.com>
Signed-off-by: David S. Miller <davem at davemloft.net>

Signed-off-by: Anatoly Stepanov <astepanov at cloudlinux.com>
---
 net/core/sock.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)
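
Note on the first hunk below: the new order = 1 sits just above the
loop's existing order-- decrement, so the very next pass runs at
order-0, and max_page_order = 0 keeps the surrounding loop from trying
high order allocations again. A tiny standalone model of that control
flow (plain userspace C with hypothetical prints, pretending the high
order attempt always fails):

	#include <stdio.h>

	int main(void)
	{
		int max_page_order = 3;
		int order = max_page_order;

		while (order) {
			printf("attempt order-%d\n", order);
			/* Do not retry other high order allocations */
			order = 1;		/* order-- below makes this 0 */
			max_page_order = 0;	/* stay at order-0 from now on */
			order--;
		}
		printf("fall back to order-%d\n", order);
		return 0;
	}

This prints a single order-3 attempt followed directly by the order-0
fallback, rather than the old order-3, order-2, order-1 sequence.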

diff --git a/net/core/sock.c b/net/core/sock.c
index 763bd5d..7730f16 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1823,6 +1823,9 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
 							   order);
 					if (page)
 						goto fill_page;
+					/* Do not retry other high order allocations */
+					order = 1;
+					max_page_order = 0;
 				}
 				order--;
 			}
@@ -1862,7 +1865,7 @@ EXPORT_SYMBOL(sock_alloc_send_skb);
 
 bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
 {
-	int order;
+	gfp_t gfp = sk->sk_allocation;
 
 	if (pfrag->page) {
 		if (atomic_read(&pfrag->page->_count) == 1) {
@@ -1874,22 +1877,21 @@ bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
 		put_page(pfrag->page);
 	}
 
-	order = SKB_FRAG_PAGE_ORDER;
-
-	do {
-		gfp_t gfp = sk->sk_allocation;
-
-		if (order) {
-			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
-			gfp &= ~__GFP_WAIT;
-		}
-		pfrag->page = alloc_pages(gfp, order);
+	pfrag->offset = 0;
+	if (SKB_FRAG_PAGE_ORDER > 0) {
+	pfrag->page = alloc_pages((gfp & ~__GFP_WAIT) | __GFP_COMP |
+					__GFP_NOWARN | __GFP_NORETRY,
+					SKB_FRAG_PAGE_ORDER);
 		if (likely(pfrag->page)) {
-			pfrag->offset = 0;
-			pfrag->size = PAGE_SIZE << order;
+			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
 			return true;
 		}
-	} while (--order >= 0);
+	}
+	pfrag->page = alloc_page(gfp);
+	if (likely(pfrag->page)) {
+		pfrag->size = PAGE_SIZE;
+		return true;
+	}
 
 	sk_enter_memory_pressure(sk);
 	sk_stream_moderate_sndbuf(sk);
-- 
1.8.3.1


