On Fri, Jul 26, 2019 at 04:39:48PM +0200, Eric Dumazet wrote:
On Fri, Jul 26, 2019 at 2:45 PM Greg KH <gregkh@linuxfoundation.org> wrote:
On Fri, Jul 26, 2019 at 02:38:14PM +0200, gregkh@linuxfoundation.org wrote:
The patch below does not apply to the 4.14-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b617158dc096709d8600c53b6052144d12b89fab Mon Sep 17 00:00:00 2001
From: Eric Dumazet <edumazet@google.com>
Date: Fri, 19 Jul 2019 11:52:33 -0700
Subject: [PATCH] tcp: be more careful in tcp_fragment()
Some applications set tiny SO_SNDBUF values and expect TCP to just work. Recent patches to address CVE-2019-11478 broke them in case of losses, since retransmits might be prevented.
We should allow these flows to make progress.
This patch allows the first and last skb in retransmit queue to be split even if memory limits are hit.
It also adds some room, due to the fact that tcp_sendmsg() and tcp_sendpage() might overshoot sk_wmem_queued by about one full TSO skb (64KB size). Note this allowance was already present in stable backports for kernels < 4.15.
Note for < 4.15 backports: tcp_rtx_queue_tail() will probably look like:

    static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
    {
        struct sk_buff *skb = tcp_send_head(sk);

        return skb ? tcp_write_queue_prev(sk, skb) : tcp_write_queue_tail(sk);
    }
Note, I tried the above, but still ran into problems, as 4.14 does not have tcp_rtx_queue_head(), and while I could guess what it would be (tcp_send_head()?), I figured it would be safer to ask for a backport :)
tcp_rtx_queue_head(sk) would be implemented by:

    {
        struct sk_buff *skb = tcp_write_queue_head(sk);

        if (skb == tcp_send_head(sk))
            skb = NULL;
        return skb;
    }
I can provide the backport, of course.
A backport would be great to ensure I didn't mess it up :)
thanks,
greg k-h