On 2/22/25 8:15 PM, Mina Almasry wrote: [...]
@@ -119,6 +122,13 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
 	unsigned long xa_idx;
 	unsigned int rxq_idx;

+	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+
+	/* Ensure no tx net_devmem_lookup_dmabuf() are in flight after the
+	 * erase.
+	 */
+	synchronize_net();
Is the above statement always true? Can the dmabuf be stuck in some qdisc, or even in some local socket due to a redirect?
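For reference, a minimal sketch of what the tx-side lookup presumably looks like (reconstructed, not quoted from this patch; net_devmem_dmabuf_binding_get() is assumed to be an inc-not-zero refcount helper). synchronize_net() only waits for RCU read-side sections like this one to finish; it does not wait for skbs that already hold a binding reference and are parked elsewhere:

	/* Sketch: lookup under RCU, taking a binding reference. */
	struct net_devmem_dmabuf_binding *net_devmem_lookup_dmabuf(u32 id)
	{
		struct net_devmem_dmabuf_binding *binding;

		rcu_read_lock();
		binding = xa_load(&net_devmem_dmabuf_bindings, id);
		/* Assumed inc-not-zero helper; fails once the last ref is gone. */
		if (binding && !net_devmem_dmabuf_binding_get(binding))
			binding = NULL;
		rcu_read_unlock();

		return binding;
	}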
@@ -252,13 +261,23 @@ net_devmem_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
 	 * binding can be much more flexible than that. We may be able to
 	 * allocate MTU sized chunks here. Leave that for future work...
 	 */
-	binding->chunk_pool =
-		gen_pool_create(PAGE_SHIFT, dev_to_node(&dev->dev));
+	binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
+					      dev_to_node(&dev->dev));
 	if (!binding->chunk_pool) {
 		err = -ENOMEM;
 		goto err_unmap;
 	}

+	if (direction == DMA_TO_DEVICE) {
+		binding->tx_vec = kvmalloc_array(dmabuf->size / PAGE_SIZE,
+						 sizeof(struct net_iov *),
+						 GFP_KERNEL);
+		if (!binding->tx_vec) {
+			err = -ENOMEM;
+			goto err_free_chunks;
Possibly my comment on v3 has been lost:

"""
It looks like the later error paths (in the for_each_sgtable_dma_sg()
loop) could happen even for 'direction == DMA_TO_DEVICE', so I guess an
additional error label is needed to clean tx_vec on such paths.
"""
[...]
@@ -1071,6 +1072,16 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)

 	flags = msg->msg_flags;

+	sockc = (struct sockcm_cookie){ .tsflags = READ_ONCE(sk->sk_tsflags),
+					.dmabuf_id = 0 };
+
+	if (msg->msg_controllen) {
+		err = sock_cmsg_send(sk, msg, &sockc);
+		if (unlikely(err)) {
+			err = -EINVAL;
+			goto out_err;
+		}
+	}
I'm unsure how much of a problem this would be, but it looks like a non-blocking sendmsg(MSG_FASTOPEN) with a bad msg argument will start to fail on top of this patch, while it would be successful (returning -EINPROGRESS) before.
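To make the ordering concern concrete, a heavily trimmed sketch of tcp_sendmsg_locked() after this patch (reconstructed; the names match the upstream function but the flow is simplified):

	if (msg->msg_controllen) {
		err = sock_cmsg_send(sk, msg, &sockc);
		if (unlikely(err)) {
			err = -EINVAL;	/* new: a bad cmsg fails here... */
			goto out_err;
		}
	}

	/* ...so a non-blocking MSG_FASTOPEN sendmsg() with a bad cmsg no
	 * longer reaches the fastopen path below, which used to return
	 * -EINPROGRESS before the control message was ever parsed.
	 */
	if (unlikely(flags & MSG_FASTOPEN ||
		     inet_test_bit(DEFER_CONNECT, sk)) && !tp->repair)
		err = tcp_sendmsg_fastopen(sk, msg, &copied_syn, size, uarg);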
/P