From: Eric Dumazet <edumazet@google.com>
[ Upstream commit 16c610162d1f1c332209de1c91ffb09b659bb65d ]
While stress testing TCP, I saw unexpected retransmits and SACK packets when a single CPU receives data from multiple high-throughput flows.
super_netperf 4 -H srv -T,10 -l 3000 &
Tcpdump extract:
00:00:00.000007 IP6 clnt > srv: Flags [.], seq 26062848:26124288, ack 1, win 66, options [nop,nop,TS val 651460834 ecr 3100749131], length 61440
00:00:00.000006 IP6 clnt > srv: Flags [.], seq 26124288:26185728, ack 1, win 66, options [nop,nop,TS val 651460834 ecr 3100749131], length 61440
00:00:00.000005 IP6 clnt > srv: Flags [P.], seq 26185728:26243072, ack 1, win 66, options [nop,nop,TS val 651460834 ecr 3100749131], length 57344
00:00:00.000006 IP6 clnt > srv: Flags [.], seq 26243072:26304512, ack 1, win 66, options [nop,nop,TS val 651460844 ecr 3100749141], length 61440
00:00:00.000005 IP6 clnt > srv: Flags [.], seq 26304512:26365952, ack 1, win 66, options [nop,nop,TS val 651460844 ecr 3100749141], length 61440
00:00:00.000007 IP6 clnt > srv: Flags [P.], seq 26365952:26423296, ack 1, win 66, options [nop,nop,TS val 651460844 ecr 3100749141], length 57344
00:00:00.000006 IP6 clnt > srv: Flags [.], seq 26423296:26484736, ack 1, win 66, options [nop,nop,TS val 651460853 ecr 3100749150], length 61440
00:00:00.000005 IP6 clnt > srv: Flags [.], seq 26484736:26546176, ack 1, win 66, options [nop,nop,TS val 651460853 ecr 3100749150], length 61440
00:00:00.000005 IP6 clnt > srv: Flags [P.], seq 26546176:26603520, ack 1, win 66, options [nop,nop,TS val 651460853 ecr 3100749150], length 57344
00:00:00.003932 IP6 clnt > srv: Flags [P.], seq 26603520:26619904, ack 1, win 66, options [nop,nop,TS val 651464844 ecr 3100753141], length 16384
00:00:00.006602 IP6 clnt > srv: Flags [.], seq 24862720:24866816, ack 1, win 66, options [nop,nop,TS val 651471419 ecr 3100759716], length 4096
00:00:00.013000 IP6 clnt > srv: Flags [.], seq 24862720:24866816, ack 1, win 66, options [nop,nop,TS val 651484421 ecr 3100772718], length 4096
00:00:00.000416 IP6 srv > clnt: Flags [.], ack 26619904, win 1393, options [nop,nop,TS val 3100773185 ecr 651484421,nop,nop,sack 1 {24862720:24866816}], length 0
After analysis, it appears this is because of the cond_resched() call from __release_sock().
When the current thread yields while still holding the TCP socket lock, it might regain the CPU only after a very long time.
The peer's TLP/RTO timers then fire (multiple times) and packets are retransmitted, while the initial copies are still sitting in the socket backlog or receive queue.
In this patch, I call cond_resched() only once every 16 packets.
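For reference, the once-per-16 gating relies on the usual power-of-two mask idiom: (++nb & 15) is zero on every 16th increment. A minimal userspace sketch of just that idiom (the loop bound and printout are illustrative, not part of the patch):

    #include <stdio.h>

    int main(void)
    {
        int nb = 0;

        /* Simulate draining 40 backlog packets. */
        for (int pkt = 1; pkt <= 40; pkt++) {
            /* ... per-packet processing would happen here ... */

            /* (++nb & 15) == 0 on the 16th, 32nd, ... packet, so the
             * (simulated) cond_resched() runs once per 16 packets
             * instead of after every packet.
             */
            if (!(++nb & 15))
                printf("packet %d: would call cond_resched()\n", pkt);
        }
        return 0;
    }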
The modern TCP stack now spends less time per packet in the backlog, especially because ACKs are no longer sent while processing it (commit 133c4c0d3717 "tcp: defer regular ACK while processing socket backlog").
Before:
clnt:/# nstat -n;sleep 10;nstat|egrep "TcpOutSegs|TcpRetransSegs|TCPFastRetrans|TCPTimeouts|Probes|TCPSpuriousRTOs|DSACK"
TcpOutSegs                      19046186           0.0
TcpRetransSegs                  1471               0.0
TcpExtTCPTimeouts               1397               0.0
TcpExtTCPLossProbes             1356               0.0
TcpExtTCPDSACKRecv              1352               0.0
TcpExtTCPSpuriousRTOs           114                0.0
TcpExtTCPDSACKRecvSegs          1352               0.0
After:
clnt:/# nstat -n;sleep 10;nstat|egrep "TcpOutSegs|TcpRetransSegs|TCPFastRetrans|TCPTimeouts|Probes|TCPSpuriousRTOs|DSACK"
TcpOutSegs                      19218936           0.0
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250903174811.1930820-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
LLM Generated explanations, may be completely bogus:
YES
- What it fixes
  - Addresses real-world retransmits, TLP/RTO timeouts, and SACK/DSACK bursts under load when a single CPU handles multiple high-throughput TCP flows.
  - The root cause is overly frequent voluntary scheduling in the socket backlog flush path while the socket remains "owned by user", delaying backlog/receive-queue draining and provoking peer timeouts.
  - This is a user-visible correctness/performance issue, not a pure optimization.
- Scope and change details
  - Single, small adjustment confined to `__release_sock()` in `net/core/sock.c`.
  - Before: `cond_resched()` runs after every packet in the backlog processing loop, which can cause long delays if the task does not get the CPU back promptly while still holding socket ownership.
    - Reference: `net/core/sock.c:3165-3187` (pre-change), unconditional `cond_resched()` between `sk_backlog_rcv()` and advancing `skb`.
  - After: voluntary rescheduling is throttled to once per 16 packets.
    - Adds a local counter: `int nb = 0;` at `net/core/sock.c:3165`.
    - Replaces the `do { ... } while (skb != NULL);` with a `while (1) { ... if (!skb) break; }` loop (`net/core/sock.c:3170`).
    - Gates rescheduling: `if (!(++nb & 15)) cond_resched();` (`net/core/sock.c:3181`).
  - The `cond_resched()` remains correctly placed outside the spinlock region (the code still calls `spin_unlock_bh()` before the loop and `spin_lock_bh()` after), so there is no locking semantic change.
  - The `sk->sk_backlog.len = 0;` zeroing remains unchanged to ensure no unbounded loops (`net/core/sock.c:3185-3187` vicinity).
- Why it's safe
  - Minimal behavioral change: the loop still voluntarily yields, just less frequently (once per 16 SKBs), avoiding the pathological delays that leave the socket owned and the backlog unprocessed.
  - No API or architectural changes; no protocol semantics touched. The processing order and the lock/unlock pattern around the backlog remain the same (a simplified model of this pattern appears after this explanation).
  - `cond_resched()` has no effect unless a reschedule is needed; reducing its frequency only affects voluntary yield cadence, not correctness. Preemption-enabled kernels are largely unaffected; non-preemptible builds still get periodic relief.
  - Generic applicability: while motivated by TCP, `__release_sock()` is generic and the change is neutral or beneficial for other protocols using the backlog path.
- Stable backport criteria
  - Fixes an important user-visible bug (spurious retransmits/RTOs/DSACKs) under realistic load.
  - Touches a critical path, but with a very small, contained change.
  - No new features or interfaces; no architectural refactor.
  - Low regression risk; preserves fairness via periodic `cond_resched()`.
- Dependencies and interactions
  - The commit references "tcp: defer regular ACK while processing socket backlog" as context; the throttling of `cond_resched()` does not depend on that change for correctness and remains beneficial independently.
  - Call sites like `__sk_flush_backlog()` remain unchanged and continue to call into `__release_sock()` with the same locking protocol (`net/core/sock.c:3189-3199`).
Given the above, this is a focused, low-risk bug fix with clear impact on correctness/performance under load and should be backported to stable trees.
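To make the lock and yield cadence concrete, below is a simplified userspace model of the detach-then-drain pattern described above. The names (backlog_head, queue_lock, process_one, drain_backlog) are illustrative stand-ins, not the kernel API; a pthread mutex and sched_yield() play the roles of the backlog spinlock and cond_resched():

    #include <pthread.h>
    #include <sched.h>
    #include <stddef.h>

    struct pkt { struct pkt *next; /* payload omitted */ };

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct pkt *backlog_head;    /* stand-in for sk->sk_backlog.head */

    static void process_one(struct pkt *p)
    {
        (void)p;                        /* per-packet work elided */
    }

    /* Mirrors the patched __release_sock() flow: detach the whole
     * backlog under the lock, drain it with the lock dropped, and
     * yield only once per 16 packets instead of after every packet.
     */
    static void drain_backlog(void)
    {
        struct pkt *skb, *next;
        int nb = 0;

        pthread_mutex_lock(&queue_lock);
        while ((skb = backlog_head) != NULL) {
            backlog_head = NULL;        /* detach the whole list */
            pthread_mutex_unlock(&queue_lock);

            while (1) {
                next = skb->next;
                process_one(skb);

                skb = next;
                if (!skb)
                    break;

                if (!(++nb & 15))
                    sched_yield();      /* cond_resched() analogue */
            }

            /* Recheck under the lock: the backlog may have refilled. */
            pthread_mutex_lock(&queue_lock);
        }
        pthread_mutex_unlock(&queue_lock);
    }

    int main(void)
    {
        struct pkt pkts[40] = {0};

        /* Build a 40-packet backlog list, then drain it. */
        for (int i = 0; i < 39; i++)
            pkts[i].next = &pkts[i + 1];
        backlog_head = &pkts[0];

        drain_backlog();
        return 0;
    }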
 net/core/sock.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c
index 1382bddcbaff4..bdeea7cc134df 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3162,23 +3162,27 @@ void __release_sock(struct sock *sk)
 	__acquires(&sk->sk_lock.slock)
 {
 	struct sk_buff *skb, *next;
+	int nb = 0;
 
 	while ((skb = sk->sk_backlog.head) != NULL) {
 		sk->sk_backlog.head = sk->sk_backlog.tail = NULL;
 
 		spin_unlock_bh(&sk->sk_lock.slock);
 
-		do {
+		while (1) {
 			next = skb->next;
 			prefetch(next);
 			DEBUG_NET_WARN_ON_ONCE(skb_dst_is_noref(skb));
 			skb_mark_not_on_list(skb);
 			sk_backlog_rcv(sk, skb);
 
-			cond_resched();
-
 			skb = next;
-		} while (skb != NULL);
+			if (!skb)
+				break;
+
+			if (!(++nb & 15))
+				cond_resched();
+		}
 
 		spin_lock_bh(&sk->sk_lock.slock);
 	}