On Tue, 20 Aug 2024 00:01:02 -0400 Mina Almasry wrote:
> Took a bit of a look here. Forgive me, I'm not that familiar with XDP and virtual interfaces, so I'm a bit unsure what to do here.
> For veth, it seems the device behind the veth is stored in veth_priv->peer, so maybe a dev_get_max_mp_channel() check on veth_priv->peer is the way to disable this for veth? I think we need to do this check both when the veth is created and in veth's ndo_bpf.
veth is a SW device pair; it can't reasonably support netmem. Given all the unreasonable features it grew over time, we can't rule out that someone will try, but that's not our problem now.
> For bonding, it seems we need to add an mp channel check in bond_xdp_set and bond_enslave?
Sort of, I'd factor out that logic into the core first, as some sort of "xdp propagate" helper. Then we can add that check once. I don't see anything bond specific in the logic.
> There are a few other drivers that define ndo_add_slave; it seems a check in br_add_slave is needed as well.
I don't think it's that broad. Not many drivers propagate XDP:
$ git grep -C 200 '.ndo_add_slave' | grep '.ndo_bpf'
drivers/net/bonding/bond_main.c-	.ndo_bpf		= bond_xdp,

$ git grep --files-with-matches 'ops->ndo_bpf' -- drivers/
drivers/net/bonding/bond_main.c
drivers/net/hyperv/netvsc_bpf.c
> This seems like a potentially deep rabbit hole, with a few checks to add all over the place. Is this blocking the series?
Protecting the stack from unreadable memory is *the* challenge in this series. The rest is fairly straightforward.