On Mon, May 13, 2024 at 11:10:00AM -0400, Nicolas Dufresne wrote:
On Mon, May 13, 2024 at 11:34 +0300, Laurent Pinchart wrote:
On Mon, May 13, 2024 at 10:29:22AM +0200, Maxime Ripard wrote:
On Wed, May 08, 2024 at 10:36:08AM +0200, Daniel Vetter wrote:
On Tue, May 07, 2024 at 04:07:39PM -0400, Nicolas Dufresne wrote:
Hi,
On Tue, May 7, 2024 at 21:36 +0300, Laurent Pinchart wrote:
Shorter term, we have a problem to solve, and the best option we have found so far is to rely on dma-buf heaps as a backend for the frame buffer allocator helper in libcamera for the use case described above. This clearly won't work in 100% of the cases; it's a stop-gap measure until we can do better.
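For reference, a minimal sketch of what that backend boils down to, assuming the system heap is exposed at /dev/dma_heap/system (heap name, sizing and error handling simplified):

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Allocate one buffer from the system dma-buf heap and return the
 * dma-buf fd, usable e.g. with V4L2_MEMORY_DMABUF. */
static int alloc_from_heap(size_t size)
{
	struct dma_heap_allocation_data data = {
		.len = size,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);

	if (heap < 0)
		return -1;

	if (ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &data) < 0)
		data.fd = -1;

	close(heap);
	return data.fd;
}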
Considering the security concerns raised in this thread about dma-buf heap allocations not being restricted by quotas, you'd get what you want quickly with memfd + udmabuf instead (which is already accounted).
It was raised that distros don't enable udmabuf, but as Hans stated there, distros need to take action in any case to make the softISP work. This alternative is easy and does not interfere in any way with your future plans or the libcamera API. You could even have both dma-buf heaps (for Raspbian) and the safer memfd + udmabuf for distros with security concerns.
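For illustration, the memfd + udmabuf path would look roughly like this (a sketch assuming /dev/udmabuf is enabled; the memfd name is arbitrary and the size must be page aligned):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Back a dma-buf with anonymous memory: create a memfd, seal it
 * against shrinking (udmabuf requires this), then ask /dev/udmabuf
 * to wrap it in a dma-buf. */
static int memfd_to_dmabuf(size_t size)
{
	struct udmabuf_create create;
	int memfd, dev, buf_fd = -1;

	memfd = memfd_create("libcamera-softisp", MFD_ALLOW_SEALING);
	if (memfd < 0)
		return -1;

	if (ftruncate(memfd, size) < 0 ||
	    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK) < 0) {
		close(memfd);
		return -1;
	}

	dev = open("/dev/udmabuf", O_RDWR | O_CLOEXEC);
	if (dev >= 0) {
		memset(&create, 0, sizeof(create));
		create.memfd = memfd;
		create.flags = UDMABUF_FLAGS_CLOEXEC;
		create.size = size;
		buf_fd = ioctl(dev, UDMABUF_CREATE, &create);
		close(dev);
	}

	close(memfd);	/* udmabuf holds its own references to the pages */
	return buf_fd;
}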
And for the long-term plan, we can certainly get closer by fixing that accounting issue. The same issue also applies to the V4L2 io-ops, so it would be nice to find a common set of helpers to fix these exporters.
Yeah, if this is just for softISP, then memfd + udmabuf is also what I was about to suggest. Not just as a stop-gap, but as the real official thing.
udmabuf does kinda allow you to pin memory, but we can easily fix that by adding the right accounting and then letting either mlock rlimits or cgroup kernel memory limits enforce good behavior.
I think the main drawback with memfd is that it'll be broken for devices without an IOMMU, and while you said that it's uncommon for GPUs, it's definitely not for codecs and display engines.
If the application wants to share buffers between the camera and a display engine or codec, it should arguably not use the libcamera FrameBufferAllocator, but allocate the buffers from the display or the encoder. memfd wouldn't be used in that case.
We need to eat our own dogfood though. If we want to push the responsibility for buffer allocation in the buffer sharing case to the application, we need to modify the cam application to do so when using the KMS backend.
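For the KMS backend that could be something along these lines (a libdrm sketch, assuming a dumb-buffer-capable device; the helper name and parameters are made up for illustration):

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>

/* Allocate a scanout buffer from the KMS device through the dumb
 * buffer API and export it as a dma-buf, so the application can pass
 * the fd to libcamera instead of using FrameBufferAllocator. */
static int alloc_scanout_dmabuf(int drm_fd, uint32_t width, uint32_t height)
{
	struct drm_mode_create_dumb create;
	int buf_fd;

	memset(&create, 0, sizeof(create));
	create.width = width;
	create.height = height;
	create.bpp = 32;	/* e.g. XRGB8888 */

	if (drmIoctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
		return -1;

	if (drmPrimeHandleToFD(drm_fd, create.handle,
			       DRM_CLOEXEC | DRM_RDWR, &buf_fd) < 0)
		return -1;

	return buf_fd;
}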
Agreed, and the new dma-buf feedback protocol on Wayland can also be used on top of this.
You'll hit the same limitation we hit in GStreamer: KMS drivers only offer allocation for render buffers, and most of them are missing allocators for YUV buffers, even though they can import those formats. (KMS allocators, except dumb, which has other issues, are format-aware.)
My experience on Arm platforms is that KMS drivers offer allocation for scanout buffers, not render buffers, mostly through the dumb allocator API. If the KMS device can scan out YUV natively, YUV buffer allocation should be supported. Am I missing something here?
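To make that concrete, this is roughly how I'd expect a YUV buffer to come out of the dumb API (a sketch assuming the driver accepts NV12 scanout with both planes in a single buffer object):

#include <stdint.h>
#include <string.h>
#include <drm_fourcc.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* The dumb allocator isn't format-aware, it just returns bytes: size
 * one allocation for the Y plane plus the half-height CbCr plane, and
 * describe the NV12 layout only at drmModeAddFB2() time. */
static int create_nv12_fb(int drm_fd, uint32_t width, uint32_t height,
			  uint32_t *fb_id)
{
	struct drm_mode_create_dumb create;
	uint32_t handles[4] = { 0 }, pitches[4] = { 0 }, offsets[4] = { 0 };

	memset(&create, 0, sizeof(create));
	create.width = width;
	create.height = height * 3 / 2;	/* Y plane + CbCr plane */
	create.bpp = 8;			/* 8 bits per sample */

	if (drmIoctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
		return -1;

	handles[0] = handles[1] = create.handle;
	pitches[0] = pitches[1] = create.pitch;
	offsets[1] = create.pitch * height;	/* CbCr follows Y */

	return drmModeAddFB2(drm_fd, width, height, DRM_FORMAT_NV12,
			     handles, pitches, offsets, fb_id, 0);
}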