On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
> > That's not what this series does. It adds the new memory_provider_ops
> > set of hooks, with one implementation for dmabufs, and one for
> > io_uring zero copy.
>
> First, it's not a _new_ abstraction over a buffer as you called it
> before, the abstraction (net_iov) is already merged.
Umm, it is a new ops vector.
> Second, you mention devmem TCP, and it's not just a page pool with
> "dmabufs", it's a user API to use it and other memory agnostic
> allocation logic. And yes, dmabufs there is the least technically
> important part. Just having a dmabuf handle solves absolutely nothing.
It solves a lot, because it provides a proper abstraction.
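For readers who haven't followed the series: the "ops vector" in question is
the page pool's memory provider hook table. A rough sketch of its shape, with
illustrative member names that need not match the posted patches exactly:

/*
 * Illustrative only: the hook names below approximate the idea of a
 * per-page-pool provider ops table, they are not quoted from the series.
 */
struct memory_provider_ops {
        /* provider setup/teardown, tied to page pool creation/destruction */
        int             (*init)(struct page_pool *pool);
        void            (*destroy)(struct page_pool *pool);

        /* hand out and take back provider-owned, net_iov-backed netmem */
        netmem_ref      (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
        bool            (*release_netmem)(struct page_pool *pool,
                                          netmem_ref netmem);
};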
> > So you are precluding zero copy RX into anything but your magic
> > io_uring buffers, and using an odd abstraction for that.
>
> Right, the io_uring zero copy RX API expects transfers to happen into io_uring
> controlled buffers, and that's the entire idea. Buffers that are based
> on an existing network specific abstraction, which are not restricted to
> pages or anything specific in the long run, but the flow of which from
> net stack to user and back is controlled by io_uring. If you worry about
> abuse, io_uring can't even sanely initialise those buffers itself and
> therefore asks the page pool code to do that.
No, I worry about tying this to io_uring for no good reason. This
precludes in-kernel uses, which would be extremely useful for
network storage drivers, and it precludes device memory of all
kinds.
> I'm even more confused how that would help. The user API has to
> be implemented and adding a new dmabuf gives nothing, not even
> mentioning it's not clear what semantics of that beast is
> supposed to be.
>
The dma-buf maintainers already explained to you last time
that there is absolutely no need to use the dmabuf UAPI; you
can use dma-bufs through in-kernel interfaces just fine.
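For reference, the in-kernel path they mean is the ordinary importer flow
around dma_buf_get()/dma_buf_attach()/dma_buf_map_attachment(); a minimal
sketch, with locking and error handling trimmed:

#include <linux/dma-buf.h>

/* Minimal sketch of in-kernel dma-buf import; locking and error handling trimmed. */
static struct sg_table *import_dmabuf(struct device *dev, int fd,
                                      struct dma_buf_attachment **out_attach)
{
        struct dma_buf *dmabuf = dma_buf_get(fd);       /* takes a reference */
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;

        attach = dma_buf_attach(dmabuf, dev);           /* bind importer device */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

        *out_attach = attach;
        return sgt;     /* scatterlist with DMA addresses for @dev */
}

The teardown mirrors it: dma_buf_unmap_attachment(), dma_buf_detach(),
dma_buf_put().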
Hi guys,
It turned out that userspace can also merge dma_fence_chain containers,
which can result in really huge arrays.
Fix those merges to sort the arrays and remove the duplicates.
In addition to that, start to use kvzalloc() for dma_fence_array
containers so that they can handle much larger arrays if necessary.
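As a rough illustration of the sort-and-dedup step described above (my own
sketch, not the actual patch):

#include <linux/dma-fence.h>
#include <linux/sort.h>

/*
 * Illustration only.  The merged fence array itself would be allocated
 * with kvzalloc()/kvfree() so that very large merges don't depend on
 * contiguous kmalloc() memory.
 */
static int fence_cmp(const void *a, const void *b)
{
        struct dma_fence *fa = *(struct dma_fence **)a;
        struct dma_fence *fb = *(struct dma_fence **)b;

        if (fa->context != fb->context)
                return fa->context < fb->context ? -1 : 1;
        /* same context: order by seqno so the latest fence ends up last */
        return dma_fence_is_later(fb, fa) ? -1 : dma_fence_is_later(fa, fb);
}

/* Sorts @fences and drops per-context duplicates, returns the new count. */
static unsigned int dedup_fences(struct dma_fence **fences, unsigned int count)
{
        unsigned int i, j = 0;

        if (!count)
                return 0;

        sort(fences, count, sizeof(*fences), fence_cmp, NULL);

        for (i = 1; i < count; i++) {
                if (fences[i]->context == fences[j]->context) {
                        /* same context: keep only the later fence */
                        dma_fence_put(fences[j]);
                        fences[j] = fences[i];
                } else {
                        fences[++j] = fences[i];
                }
        }
        return j + 1;
}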
Please review and comment,
Christian.
On Wed, Oct 23, 2024 at 03:34:53PM +0100, Pavel Begunkov wrote:
> It doesn't care much what kind of memory it is, nor is it important
> for internals how it's imported, it's user addresses -> pages for
> user convenience sake. All the net_iov setup code is in the page pool
> core code. What it does, however, is implementing the user API, so
That's not what this series does. It adds the new memory_provider_ops
set of hooks, with one implementation for dmabufs, and one for
io_uring zero copy.
So you are precluding zero copy RX into anything but your magic
io_uring buffers, and using an odd abstraction for that.
The right way would be to support zero copy RX into every
designated dmabuf, and make io_uring work with udmabuf or, if
absolutely needed, its own kind of dmabuf. Instead we create
a maze of incompatible abstractions here. The use case of e.g.
doing zero copy receive into an NVMe CMB using PCIe P2P transactions
is anything but made up, so this does create a problem.
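For completeness, wrapping ordinary user memory in a dma-buf is already
possible from userspace through the udmabuf driver; a minimal sketch (error
handling omitted, size must be page aligned):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Wrap @size bytes of anonymous memory in a dma-buf fd via /dev/udmabuf. */
static int create_udmabuf(size_t size)
{
        int memfd = memfd_create("rx-buffer", MFD_ALLOW_SEALING);
        struct udmabuf_create create;
        int devfd, dmabuf_fd;

        ftruncate(memfd, size);
        /* udmabuf requires the backing memfd to be sealed against shrinking */
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        devfd = open("/dev/udmabuf", O_RDWR);
        memset(&create, 0, sizeof(create));
        create.memfd  = memfd;
        create.offset = 0;
        create.size   = size;
        dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

        close(devfd);
        close(memfd);
        return dmabuf_fd;       /* an ordinary dma-buf fd */
}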
On Wed, Oct 16, 2024 at 11:52:39AM -0700, David Wei wrote:
> From: Pavel Begunkov <asml.silence(a)gmail.com>
>
> Currently net_iov stores a pointer to struct dmabuf_genpool_chunk_owner,
> which serves as a useful abstraction to share data and provide a
> context. However, it's too devmem specific, and we want to reuse it for
> other memory providers, and for that we need to decouple net_iov from
> devmem. Make net_iov point to a new base structure called
> net_iov_area, which dmabuf_genpool_chunk_owner extends.
We've been there before. Instead of reinventing your own memory
provider please enhance dmabufs for your use case. We don't really
need to build memory buffer abstraction over memory buffer abstraction.
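For readers without the patch handy, the layering the quoted commit message
describes looks roughly like this (field and helper names follow my reading of
the series and are approximate):

/* Approximate sketch of the split described in the quoted commit message. */

/* Provider-agnostic per-chunk context that each net_iov now points to. */
struct net_iov_area {
        struct net_iov  *niovs;         /* array of net_iovs in this area   */
        size_t          num_niovs;
        unsigned long   base_virtual;   /* offset of the area in the buffer */
};

/* The devmem-specific owner simply embeds the generic area ... */
struct dmabuf_genpool_chunk_owner {
        struct net_iov_area               area;
        struct net_devmem_dmabuf_binding *binding;      /* devmem-only state */
        dma_addr_t                        base_dma_addr;
};

/* ... so devmem code can still reach its owner via container_of()
 * (helper name made up for illustration).
 */
#define net_iov_area_to_devmem_owner(area_ptr) \
        container_of(area_ptr, struct dmabuf_genpool_chunk_owner, area)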
On Tue, Oct 22, 2024 at 1:38 AM Maxime Ripard <mripard(a)redhat.com> wrote:
>
> I wanted to follow-up on the discussion we had at Plumbers with John and
> T.J. about (among other things) adding new heaps to the kernel.
>
> I'm still interested in merging a carve-out driver[1], since it seems to be
> in every vendor BSP, and I got asked about it again last week.
>
> I remember from our discussion that for new heap types to be merged, we
> needed a kernel use-case. Looking back, I'm not entirely sure how one
> can provide that given that heaps are essentially facilities for
> user-space.
>
> Am I misremembering or missing something? What are the requirements for
> you to consider adding a new heap driver?
It's basically the same as the DRM subsystem rules.
https://docs.kernel.org/gpu/drm-uapi.html#open-source-userspace-requirements
i.e. there has to be an open source user for it, and the user has to be
more significant than a "toy" implementation (which can be a bit
subjective and contentious when trying to get out of a chicken and egg
loop).
thanks
-john
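As background for the "facilities for user-space" point in the quoted
question: consuming a heap is a single ioctl on its character device, along
the lines of the sketch below (heap path is only an example, e.g.
/dev/dma_heap/system; error handling trimmed):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Allocate @len bytes from the heap at @heap_path, returns a dma-buf fd. */
static int heap_alloc(const char *heap_path, size_t len)
{
        struct dma_heap_allocation_data data;
        int heap_fd, ret;

        heap_fd = open(heap_path, O_RDONLY | O_CLOEXEC);
        if (heap_fd < 0)
                return -1;

        memset(&data, 0, sizeof(data));
        data.len      = len;
        data.fd_flags = O_RDWR | O_CLOEXEC;

        ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
        close(heap_fd);
        return ret < 0 ? -1 : (int)data.fd;     /* data.fd is the dma-buf */
}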