On Wed, May 14, 2025 at 03:42:19PM +0000, Michael Kelley wrote:
> From: Simon Horman <horms@kernel.org> Sent: Wednesday, May 14, 2025 2:35 AM
> > On Mon, May 12, 2025 at 05:06:02PM -0700, mhkelley58@gmail.com wrote:
> > > From: Michael Kelley <mhklinux@outlook.com>
> > > ...
> > > 	for (i = 0; i < frags; i++) {
> > > 		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
> > > 		struct hv_page_buffer *cur_pb = &pb[i + 2];
> > Hi Michael,
> >
> > If I got things right, then pb is allocated on the stack in
> > netvsc_xmit() and has MAX_DATA_RANGES elements.
> Correct.
> > If MAX_SKB_FRAGS is large, pb has been limited to MAX_DATA_RANGES
> > elements, and frags is large, is it possible to overrun pb here?
> I don't think it's possible. Near the top of netvsc_xmit() there's a call
> to netvsc_get_slots(), along with code ensuring that all the data in the
> skb (and its frags) exists on no more than MAX_PAGE_BUFFER_COUNT (i.e., 32)
> pages. There can't be more frags than pages, so it should not be possible
> to overrun the pb array even if the frag count is large.
>
> If the kernel is built with CONFIG_MAX_SKB_FRAGS greater than 30, and there
> are more than 30 frags in the skb (allowing for 2 pages for the rndis
> header), netvsc_xmit() tries to linearize the skb to reduce the frag count.
> But if that doesn't work, netvsc_xmit() drops the xmit request, which isn't
> a great outcome. That's a limitation of the existing code, however, and
> this patch set doesn't change it.
Hi Michael,
Thanks for addressing my concern. I do now see that the check in netvsc_xmit() prevents an overrun.
And I agree that, while not ideal, dropping oversized packets is not
strictly related to this patch set (and is indeed much better than an
overrun).
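(For readers following the discussion: the guard being referenced can be
sketched as follows. This is a simplified illustration only -- the
MAX_PAGE_BUFFER_COUNT value of 32 matches the thread, but count_slots() and
slots_ok() are hypothetical stand-ins, not the actual netvsc_get_slots() or
netvsc_xmit() code.)

```c
/* Simplified sketch of the overrun guard discussed above.
 * The constants mirror the thread; the helpers are illustrative
 * assumptions, not the kernel implementation. */
#include <stddef.h>

#define MAX_PAGE_BUFFER_COUNT 32   /* max pages one send can describe */

/* Hypothetical stand-in for netvsc_get_slots(): total page-buffer
 * slots the rndis header pages plus the skb frag pages would occupy. */
static size_t count_slots(size_t rndis_pages, size_t frag_pages)
{
	return rndis_pages + frag_pages;
}

/* Returns 1 if the packet fits in the pb array, 0 if it must first be
 * linearized (or, failing that, dropped) -- mirroring the early check
 * in netvsc_xmit() that prevents the overrun. */
static int slots_ok(size_t rndis_pages, size_t frag_pages)
{
	return count_slots(rndis_pages, frag_pages) <= MAX_PAGE_BUFFER_COUNT;
}
```

With 2 pages reserved for the rndis header, up to 30 frag pages pass the
check, and anything beyond that is rejected before pb could be indexed out
of bounds.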
With the above in mind I'm now happy with this patch.
Reviewed-by: Simon Horman <horms@kernel.org>
...