On Mon, Apr 12, 2021 at 05:11:40AM -0400, Michael S. Tsirkin wrote:
On Mon, Apr 12, 2021 at 10:39:29AM +0200, Greg Kroah-Hartman wrote:
From: Eric Dumazet <edumazet@google.com>
commit 0f6925b3e8da0dbbb52447ca8a8b42b371aac7db upstream.
Xuan Zhuo reported that commit 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny skbs") brought a ~10% performance drop.
The reason for the performance drop was that GRO was forced to chain sk_buffs (using skb_shinfo(skb)->frag_list), which uses more memory and also causes packet consumers to incur a lot of overhead handling all the tiny skbs.
It turns out that virtio_net's page_to_skb() has a flawed strategy: it allocates skbs with GOOD_COPY_LEN (128) bytes in skb->head, then copies 128 bytes from the page before feeding the packet to the GRO stack.
This was suboptimal before commit 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny skbs") because GRO was using 2 frags per MSS, meaning we were not packing MSS with 100% efficiency.
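As a rough illustration of the old strategy described above, here is a minimal userspace sketch (the struct and function names are illustrative stand-ins, not the actual driver code): a fixed GOOD_COPY_LEN (128) bytes are always copied into the linear area, regardless of the real header length, and the rest of the packet stays behind as a page fragment.

/*
 * Minimal userspace sketch of the old strategy; names are
 * illustrative, not the driver's actual code.
 */
#include <stdio.h>
#include <string.h>

#define GOOD_COPY_LEN 128   /* bytes always copied into the linear area */

struct fake_skb {
	unsigned char head[GOOD_COPY_LEN]; /* stands in for skb->head   */
	size_t        head_len;            /* bytes valid in head[]     */
	const unsigned char *frag;         /* rest stays in the page    */
	size_t        frag_len;
};

/* Old behaviour: copy a fixed 128 bytes no matter how long the real
 * protocol headers are, then hand the result to the GRO stack. */
static void old_page_to_skb(struct fake_skb *skb,
			    const unsigned char *page, size_t len)
{
	size_t copy = len < GOOD_COPY_LEN ? len : GOOD_COPY_LEN;

	memcpy(skb->head, page, copy);
	skb->head_len = copy;
	skb->frag     = page + copy;
	skb->frag_len = len - copy;
}

int main(void)
{
	unsigned char page[1500] = { 0 };
	struct fake_skb skb;

	old_page_to_skb(&skb, page, sizeof(page));
	printf("linear: %zu bytes, frag: %zu bytes\n",
	       skb.head_len, skb.frag_len);
	return 0;
}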
The fix is to pull only the Ethernet header in page_to_skb().
Then, we change virtio_net_hdr_to_skb() to pull the missing headers, instead of assuming they were already pulled by callers.
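A rough userspace sketch of that split of responsibilities, assuming hypothetical helpers (new_page_to_skb() and pull_headers() stand in for page_to_skb() and the header pull now done by virtio_net_hdr_to_skb(); this is not the kernel code): copy only the Ethernet header up front, then pull whatever additional header bytes are actually needed later, instead of relying on a fixed 128-byte copy.

/*
 * Rough userspace sketch of the fix's split of responsibilities;
 * the helpers are illustrative stand-ins, not the kernel functions.
 */
#include <stdio.h>
#include <string.h>

#define ETH_HLEN 14                    /* Ethernet header length */

struct fake_skb {
	unsigned char linear[256];     /* small "skb->head" area  */
	size_t        linear_len;
	const unsigned char *frag;     /* remaining page data     */
	size_t        frag_len;
};

/* New page_to_skb(): copy only the Ethernet header up front. */
static void new_page_to_skb(struct fake_skb *skb,
			    const unsigned char *page, size_t len)
{
	size_t copy = len < ETH_HLEN ? len : ETH_HLEN;

	memcpy(skb->linear, page, copy);
	skb->linear_len = copy;
	skb->frag       = page + copy;
	skb->frag_len   = len - copy;
}

/* Stand-in for the header pull now done on the consumer side: make
 * sure 'needed' bytes of headers sit in the linear area, pulling the
 * missing part out of the fragment instead of assuming the caller
 * already copied it. */
static int pull_headers(struct fake_skb *skb, size_t needed)
{
	size_t missing;

	if (needed <= skb->linear_len)
		return 0;                      /* already there */

	missing = needed - skb->linear_len;
	if (missing > skb->frag_len || needed > sizeof(skb->linear))
		return -1;                     /* packet too short */

	memcpy(skb->linear + skb->linear_len, skb->frag, missing);
	skb->linear_len += missing;
	skb->frag       += missing;
	skb->frag_len   -= missing;
	return 0;
}

int main(void)
{
	unsigned char page[1500] = { 0 };
	struct fake_skb skb;

	new_page_to_skb(&skb, page, sizeof(page));
	/* e.g. pull Ethernet + IPv4 + TCP headers (14 + 20 + 20) */
	if (pull_headers(&skb, 14 + 20 + 20) == 0)
		printf("linear now %zu bytes, frag %zu bytes\n",
		       skb.linear_len, skb.frag_len);
	return 0;
}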
This fixes the performance regression, but could also allow virtio_net to accept packets with more than 128 bytes of headers.
Many thanks to Xuan Zhuo for his report, and his tests/help.
Fixes: 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny skbs")
Reported-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Link: https://www.spinics.net/lists/netdev/msg731397.html
Co-Developed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: virtualization@lists.linux-foundation.org
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Note that an issue related to this patch was recently reported. It's quite possible that the root cause is a bug elsewhere in the kernel, but it probably makes sense to defer the backport until we know more ...
Thanks, I'll go drop it from all 4 queues. If you all find out that all is good, and it should be added back, please let us at stable@vger know about it.
thanks,
greg k-h