On 1/4/26 02:42, Ming Lei wrote:
On Thu, Dec 04, 2025 at 02:10:25PM +0100, Christoph Hellwig wrote:
On Thu, Dec 04, 2025 at 12:09:46PM +0100, Christian König wrote:
I find the naming pretty confusing as well. But what this does is tell the file system/driver that it should expect a future read_iter/write_iter operation that takes data from / puts data into the dma-buf passed to this operation.
That explanation makes much more sense.
The remaining question is: why does the underlying file system / driver need to know that it will get addresses from a DMA-buf?
This eventually ends up calling dma_buf_dynamic_attach and provides a way to find the dma_buf_attachment later in the I/O path.
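Roughly, the importer side boils down to something like the sketch below. This is illustration only: the helper name, the per-file context struct and where the attachment is stashed are assumptions, not the actual patch.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/file.h>

/* hypothetical per-file importer state */
struct blk_dmabuf_ctx {
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
};

static int blk_dmabuf_attach(struct file *file, int dmabuf_fd,
			     struct device *dev)
{
	struct blk_dmabuf_ctx *ctx = file->private_data;  /* assumed layout */
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;

	dmabuf = dma_buf_get(dmabuf_fd);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/*
	 * Dynamic attach; a real importer would pass dma_buf_attach_ops
	 * with a move_notify callback instead of NULL.
	 */
	attach = dma_buf_dynamic_attach(dmabuf, dev, NULL, NULL);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	/* stash the attachment so read_iter/write_iter can find it later */
	ctx->dmabuf = dmabuf;
	ctx->attach = attach;
	return 0;
}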
Maybe it can be named ->dma_buf_attach(), for wiring up the dma-buf with the importer side (nvme)?
Yeah, that would make it much cleaner.
Also some higher level documentation would certainly help.
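Purely for illustration, the renamed hook could slot into a driver's file_operations roughly as below; the member does not exist in mainline, and both its signature and the driver names are assumptions.

/*
 * Hypothetical member added to struct file_operations (not in mainline):
 *
 *	int (*dma_buf_attach)(struct file *file, struct dma_buf *dmabuf);
 *
 * A driver would then wire it up next to its existing I/O paths:
 */
static const struct file_operations drv_fops = {
	.owner		= THIS_MODULE,
	.read_iter	= drv_read_iter,	/* placeholder existing paths */
	.write_iter	= drv_write_iter,
	.dma_buf_attach	= drv_dma_buf_attach,	/* the suggested new hook */
};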
But I am wondering why not make it a subsystem-specific interface, such as an nvme ioctl, so the whole implementation can be simplified a lot. That is reasonable because the subsystem is exactly the side that consumes/imports the dma-buf.
Yeah, the thought that it might be better to make this more nvme-specific came to me as well.
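For comparison, a subsystem-local interface as Ming suggests could look like this from userspace. The ioctl number and argument struct below are made up purely to illustrate the shape; no such nvme ioctl exists.

#include <stdint.h>
#include <sys/ioctl.h>

/* made-up ioctl and argument struct, only to illustrate the idea */
struct nvme_dmabuf_attach {
	uint32_t dmabuf_fd;	/* dma-buf to import */
	uint32_t flags;		/* reserved, must be 0 */
};
#define NVME_IOCTL_DMABUF_ATTACH _IOW('N', 0x60, struct nvme_dmabuf_attach)

static int attach_dmabuf(int nvme_fd, int dmabuf_fd)
{
	struct nvme_dmabuf_attach arg = { .dmabuf_fd = dmabuf_fd };

	/* after this, reads/writes on nvme_fd could target the dma-buf */
	return ioctl(nvme_fd, NVME_IOCTL_DMABUF_ATTACH, &arg);
}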
Regards, Christian.
Thanks, Ming
On Wed, Jan 07, 2026 at 04:56:05PM +0100, Christian König wrote:
But I am wondering why not make it a subsystem-specific interface, such as an nvme ioctl, so the whole implementation can be simplified a lot. That is reasonable because the subsystem is exactly the side that consumes/imports the dma-buf.
Yeah, the thought that it might be better to make this more nvme-specific came to me as well.
The feature is in no way nvme-specific; nvme is just the initial underlying driver. It makes total sense to support this for any high-performance block device, and to pass it through file systems.