On 1/4/26 02:42, Ming Lei wrote:
> On Thu, Dec 04, 2025 at 02:10:25PM +0100, Christoph Hellwig wrote:
>> On Thu, Dec 04, 2025 at 12:09:46PM +0100, Christian König wrote:
>>>> I find the naming pretty confusing as well. But what this does is to
>>>> tell the file system/driver that it should expect a future
>>>> read_iter/write_iter operation that takes data from / puts data into
>>>> the dmabuf passed to this operation.
>>>
>>> That explanation makes much more sense.
>>>
>>> The remaining question is why does the underlying file system / driver
>>> need to know that it will get addresses from a DMA-buf?
>>
>> This eventually ends up calling dma_buf_dynamic_attach and provides
>> a way to find the dma_buf_attachment later in the I/O path.
>
> Maybe it could be named ->dma_buf_attach()? For wiring up the dma-buf and the
> importer side (nvme).
Yeah, that would make it much cleaner.
Also some higher level documentation would certainly help.
> But I am wondering why not make it a subsystem interface, such as an nvme
> ioctl; then the whole implementation could be simplified a lot. It seems
> reasonable because the subsystem is exactly the side consuming/importing the dma-buf.
Yeah, the thought that it might be better to make this more nvme specific came to me as well.
Regards,
Christian.
>
>
> Thanks,
> Ming
>
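As a minimal sketch of the flow described above (not the proposed uAPI -- names
such as nvme_dmabuf_attach(), struct my_file_priv and priv->dma_dev are made up
for illustration): the new file op resolves the dma-buf fd, calls
dma_buf_dynamic_attach() against the device that will do the DMA mapping, and
stashes the attachment so a later read_iter()/write_iter() can find it in the
I/O path.

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fs.h>

struct my_file_priv {                           /* hypothetical per-open state */
        struct device *dma_dev;                 /* device that does the mapping */
        struct dma_buf_attachment *dmabuf_attach;
};

static int nvme_dmabuf_attach(struct file *file, int dmabuf_fd)
{
        struct my_file_priv *priv = file->private_data;
        struct dma_buf *dmabuf;
        struct dma_buf_attachment *attach;

        dmabuf = dma_buf_get(dmabuf_fd);
        if (IS_ERR(dmabuf))
                return PTR_ERR(dmabuf);

        /* Attach to the importer's device, e.g. the nvme controller. */
        attach = dma_buf_dynamic_attach(dmabuf, priv->dma_dev, NULL, NULL);
        if (IS_ERR(attach)) {
                dma_buf_put(dmabuf);
                return PTR_ERR(attach);
        }

        priv->dmabuf_attach = attach;           /* looked up again in the I/O path */
        return 0;
}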
On 12/19/25 16:58, Maxime Ripard wrote:
> On Fri, Dec 19, 2025 at 02:50:50PM +0100, Christian König wrote:
>> On 12/19/25 11:25, Maxime Ripard wrote:
>>> On Mon, Dec 15, 2025 at 03:53:22PM +0100, Christian König wrote:
>>>> On 12/15/25 14:59, Maxime Ripard wrote:
>> ...
>>>>>>> The shared ownership is indeed broken, but it's not more or less broken
>>>>>>> than, say, memfd + udmabuf, and I'm sure plenty of others.
>>>>>>>
>>>>>>> So we really improve the common case, but only make the "advanced" case
>>>>>>> slightly more broken than it already is.
>>>>>>>
>>>>>>> Would you disagree?
>>>>>>
>>>>>> I strongly disagree. As far as I can see there is a huge chance we
>>>>>> break existing use cases with that.
>>>>>
>>>>> Which ones? And what about the ones that are already broken?
>>>>
>>>> Well everybody that expects that driver resources are *not* accounted to memcg.
>>>
>>> Which is a thing only because these buffers have never been accounted
>>> for in the first place.
>>
>> Yeah, completely agree. By not accounting it for such a long time we
>> ended up with people depending on this behavior.
>>
>> Not nice, but that's what it is.
>>
>>> So I guess the conclusion is that we shouldn't
>>> even try to do memory accounting, because someone somewhere might not
>>> expect that one of their applications would take too much RAM in the
>>> system?
>>
>> Well, we do need some kind of solution to the problem: either some
>> setting where you say "this memcg limit is inclusive/exclusive of
>> device driver allocated memory", or a completely separate limit for
>> device driver allocated memory.
>
> A device driver memory specific limit sounds like a good idea because it
> would make it easier to bridge the gap with dmem.
Completely agree, but that approach was rejected by the cgroups people.
I mean, we can already use udmabuf to allocate memcg-accounted system memory, which can then be imported into device drivers.
So I don't see much reason why we should account dma-buf heaps and driver interfaces to memcg as well; we just need some way to limit them.
Regards,
Christian.
>
> Happy holidays,
> Maxime
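For reference, the memcg-accounted path Christian refers to (udmabuf on top of
a memfd) looks roughly like the userspace sketch below; error handling is
omitted and the buffer name is only illustrative.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Allocate memcg-accounted anonymous memory through memfd and export it as a
 * dma-buf via /dev/udmabuf, which device drivers can then import. */
static int export_memfd_as_dmabuf(size_t size)
{
        struct udmabuf_create create = { 0 };
        int memfd, devfd, dmabuf_fd;

        memfd = memfd_create("buffer", MFD_ALLOW_SEALING);
        ftruncate(memfd, size);
        /* udmabuf requires the backing memfd to be sealed against shrinking. */
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        devfd = open("/dev/udmabuf", O_RDWR);
        create.memfd  = memfd;
        create.offset = 0;
        create.size   = size;           /* must be page aligned */
        dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

        close(devfd);
        close(memfd);
        return dmabuf_fd;               /* shmem pages stay charged to the allocating memcg */
}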
On Tue, Jan 06, 2026 at 07:51:12PM +0000, Pavel Begunkov wrote:
>> But I am wondering why not make it a subsystem interface, such as an nvme
>> ioctl; then the whole implementation could be simplified a lot. It seems
>> reasonable because the subsystem is exactly the side consuming/importing the dma-buf.
>
> It's not an nvme specific interface, and so a file op was much more
> convenient.
It is the much better abstraction. Also the nvme subsystem is not
an actor, and registering things to the subsystem does not work.
The nvme controller is the entity that does the dma mapping, and this
interface works very well for that.
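A rough sketch of the other half, assuming the attachment was stashed in
per-file state as discussed earlier (illustrative code, not the actual series):
the I/O path maps the attachment for the controller's device and gets back an
sg_table to build its DMA descriptors from.

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>

/* Map the previously created attachment for the device doing the DMA.
 * The _unlocked variant takes the dma_resv lock internally, which dynamic
 * attachments require around map/unmap. */
static struct sg_table *map_dmabuf_for_io(struct dma_buf_attachment *attach,
                                          bool write_to_device)
{
        return dma_buf_map_attachment_unlocked(attach,
                        write_to_device ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
}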
On Fri, Dec 19, 2025 at 7:19 PM Maxime Ripard <mripard(a)redhat.com> wrote:
>
> Hi,
>
> On Tue, Dec 16, 2025 at 11:06:59AM +0900, T.J. Mercier wrote:
> > On Mon, Dec 15, 2025 at 7:51 PM Maxime Ripard <mripard(a)redhat.com> wrote:
> > > On Fri, Dec 12, 2025 at 08:25:19AM +0900, T.J. Mercier wrote:
> > > > On Fri, Dec 12, 2025 at 4:31 AM Eric Chanudet <echanude(a)redhat.com> wrote:
> > > > >
> > > > > The system dma-buf heap lets userspace allocate buffers from the page
> > > > > allocator. However, these allocations are not accounted for in memcg,
> > > > > allowing processes to escape limits that may be configured.
> > > > >
> > > > > Pass the __GFP_ACCOUNT for our allocations to account them into memcg.
> > > >
> > > > We had a discussion just last night in the MM track at LPC about how
> > > > shared memory accounting in memcg is pretty broken. Without a way to
> > > > identify (and possibly transfer) ownership of a shared buffer, this
> > > > makes the accounting of shared memory, and the zombie memcg problem,
> > > > worse. :\
> > >
> > > Are there notes or a report from that discussion anywhere?
> >
> > The LPC vids haven't been clipped yet, and actually I can't even find
> > the recorded full live stream from Hall A2 on the first day. So I
> > don't think there's anything to look at, but I bet there's probably
> > nothing there you don't already know.
>
> Ack, thanks for looking at it still :)
>
> > > The way I see it, accounting for the dma-buf heaps' *trivial* case is
> > > non-existent at the moment, and that's definitely broken. Any application
> > > can bypass its cgroups limits trivially, and that's a pretty big hole in
> > > the system.
> >
> > Agree, but if we only charge the first allocator then limits can still
> > easily be bypassed assuming an app can cause an allocation outside of
> > its cgroup tree.
> >
> > I'm not sure using static memcg limits where a significant portion of
> > the memory can be shared is really feasible. Even with just pagecache
> > being charged to memcgs, we're having trouble defining a static memcg
> > limit that is really useful since it has to be high enough to
> > accommodate occasional spikes due to shared memory that might or might
> > not be charged (since it can only be charged to one memcg - it may be
> > spread around or it may all get charged to one memcg). So excessive
> > anonymous use has to get really bad before it gets punished.
> >
> > What I've been hearing lately is that folks are polling memory.stat or
> > PSI or other metrics and using that to take actions (memory.reclaim /
> > killing / adjust memory.high) at runtime rather than relying on
> > memory.high/max behavior with a static limit.
>
> But those are only side effects of a buffer being shared, right? (Which,
> for a buffer sharing mechanism, is still pretty important, but still.)
>
> > > The shared ownership is indeed broken, but it's not more or less broken
> > > than, say, memfd + udmabuf, and I'm sure plenty of others.
> >
> > One thing that's worse about system heap buffers is that unlike memfd
> > the memory isn't reclaimable. So without killing all users there's
> > currently no way to deal with the zombie issue. Harry's proposing
> > reparenting, but I don't think our current interfaces support that
> > because we'd have to mess with the page structs behind system heap
> > dmabufs to change the memcg during reparenting.
> >
> > Ah... but udmabuf pins the memfd pages, so you're right that memfd +
> > udmabuf isn't worse.
> >
> > > So we really improve the common case, but only make the "advanced" case
> > > slightly more broken than it already is.
> > >
> > > Would you disagree?
> >
> > I think memcg limits in this case just wouldn't be usable because of
> > what I mentioned above. In our common case the allocator is in a
> > different cgroup tree than the real users of the buffer.
>
> So, my issue with this is that we want to fix not only dma-buf itself,
> but every device buffer allocation mechanism, so also v4l2, drm, etc.
>
> So we'll need a lot of infrastructure and rework outside of dma-buf to
> get there, and figuring out how to solve the shared buffer accounting is
> indeed one of them, but so far it was considered kind of the thing to do
> last, the last time we discussed this.
>
> What I get from that discussion is that we now consider it a
> prerequisite, and given how that topic has been advancing so far, one
> that would take a couple of years at best to materialize into something
> useful and upstream.
>
> Thus, it blocks all the work around it for years.
>
> Would you be open to merging patches that work on it but are only enabled
> through a kernel parameter, for example (and possibly taint the kernel)?
> That would allow us to work towards that goal while not being blocked by
> the shared buffer accounting, and not affecting the general case either.
>
> Maxime
Hi Maxime,
A kernel param or a CONFIG sounds like a good compromise to allow work
to progress. I'd be happy to add my R-B to that.
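The patch from Eric Chanudet quoted at the top of this mail is conceptually
tiny. As a rough sketch (the real gfp definitions in
drivers/dma-buf/heaps/system_heap.c carry more modifiers), accounting the heap
pages to the allocating task's memcg only requires OR-ing __GFP_ACCOUNT into
the allocation flags:

#include <linux/gfp.h>
#include <linux/mm_types.h>

/* Illustrative flags only.  The point is __GFP_ACCOUNT, which makes
 * alloc_pages() charge the pages to the current task's memcg. */
#define HEAP_GFP        (GFP_HIGHUSER | __GFP_ZERO | __GFP_ACCOUNT)

static struct page *heap_alloc_pages(unsigned int order)
{
        return alloc_pages(HEAP_GFP | __GFP_COMP | __GFP_NOWARN, order);
}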
Hi Alain,
kernel test robot noticed the following build warnings:
[auto build test WARNING on atorgue-stm32/stm32-next]
[also build test WARNING on robh/for-next linus/master v6.19-rc1 next-20251219]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Alain-Volmat/media-stm32-dcm…
base: https://git.kernel.org/pub/scm/linux/kernel/git/atorgue/stm32.git stm32-next
patch link: https://lore.kernel.org/r/20251218-stm32-dcmi-dma-chaining-v1-1-39948ca6cbf…
patch subject: [PATCH 01/12] media: stm32: dcmi: Switch from __maybe_unused to pm_sleep_ptr()
config: arc-allyesconfig (https://download.01.org/0day-ci/archive/20251221/202512210044.xNNW6QJZ-lkp@…)
compiler: arc-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20251221/202512210044.xNNW6QJZ-lkp@…)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202512210044.xNNW6QJZ-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/media/platform/st/stm32/stm32-dcmi.c:2127:12: warning: 'dcmi_resume' defined but not used [-Wunused-function]
2127 | static int dcmi_resume(struct device *dev)
| ^~~~~~~~~~~
>> drivers/media/platform/st/stm32/stm32-dcmi.c:2116:12: warning: 'dcmi_suspend' defined but not used [-Wunused-function]
2116 | static int dcmi_suspend(struct device *dev)
| ^~~~~~~~~~~~
vim +/dcmi_resume +2127 drivers/media/platform/st/stm32/stm32-dcmi.c
2115
> 2116 static int dcmi_suspend(struct device *dev)
2117 {
2118 /* disable clock */
2119 pm_runtime_force_suspend(dev);
2120
2121 /* change pinctrl state */
2122 pinctrl_pm_select_sleep_state(dev);
2123
2124 return 0;
2125 }
2126
> 2127 static int dcmi_resume(struct device *dev)
2128 {
2129 /* restore pinctl default state */
2130 pinctrl_pm_select_default_state(dev);
2131
2132 /* clock enable */
2133 pm_runtime_force_resume(dev);
2134
2135 return 0;
2136 }
2137
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
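For context on the warning: it typically appears when __maybe_unused is
dropped from the callbacks while the pm ops still use the old
SET_SYSTEM_SLEEP_PM_OPS() macro, which compiles to nothing without
CONFIG_PM_SLEEP and leaves the functions unreferenced. Below is a sketch of
the idiomatic pm_sleep_ptr() pattern; the real stm32-dcmi driver also has
runtime PM callbacks, so its pm_ops look different.

#include <linux/platform_device.h>
#include <linux/pm.h>

static int dcmi_suspend(struct device *dev) { return 0; }
static int dcmi_resume(struct device *dev)  { return 0; }

/* SYSTEM_SLEEP_PM_OPS() always references the callbacks, so the compiler
 * sees them as used and discards the dead code itself when CONFIG_PM_SLEEP
 * is off -- no __maybe_unused and no -Wunused-function warning. */
static const struct dev_pm_ops dcmi_pm_ops = {
        SYSTEM_SLEEP_PM_OPS(dcmi_suspend, dcmi_resume)
};

static struct platform_driver dcmi_driver = {
        .driver = {
                .name = "stm32-dcmi",
                /* NULL when CONFIG_PM_SLEEP is off, so the ops drop out too. */
                .pm = pm_sleep_ptr(&dcmi_pm_ops),
        },
};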
On 12/19/25 11:25, Maxime Ripard wrote:
> On Mon, Dec 15, 2025 at 03:53:22PM +0100, Christian König wrote:
>> On 12/15/25 14:59, Maxime Ripard wrote:
...
>>>>> The shared ownership is indeed broken, but it's not more or less broken
>>>>> than, say, memfd + udmabuf, and I'm sure plenty of others.
>>>>>
>>>>> So we really improve the common case, but only make the "advanced" case
>>>>> slightly more broken than it already is.
>>>>>
>>>>> Would you disagree?
>>>>
>>>> I strongly disagree. As far as I can see there is a huge chance we
>>>> break existing use cases with that.
>>>
>>> Which ones? And what about the ones that are already broken?
>>
>> Well everybody that expects that driver resources are *not* accounted to memcg.
>
> Which is a thing only because these buffers have never been accounted
> for in the first place.
Yeah, completely agree. By not accounting it for such a long time we ended up with people depending on this behavior.
Not nice, but that's what it is.
> So I guess the conclusion is that we shouldn't
> even try to do memory accounting, because someone somewhere might not
> expect that one of their applications would take too much RAM in the
> system?
Well, we do need some kind of solution to the problem: either some setting where you say "this memcg limit is inclusive/exclusive of device driver allocated memory", or a completely separate limit for device driver allocated memory.
The key point is that we have both use cases, so we need to support both.
>>>> There has been some work on TTM by Dave but I still haven't found time
>>>> to wrap my head around all possible side effects such a change can
>>>> have.
>>>>
>>>> The fundamental problem is that neither memcg nor the classic resource
>>>> tracking (e.g. the OOM killer) has a good understanding of shared
>>>> resources.
>>>
>>> And yet heap allocations don't necessarily have to be shared. But they
>>> all have to be allocated.
>>>
>>>> For example you can use memfd to basically kill any process in the
>>>> system because the OOM killer can't identify the process which holds
>>>> the reference to the memory in question. And that is a *MUCH* bigger
>>>> problem than just inaccurate memcg accounting.
>>>
>>> When you frame it like that, sure. Also, you can use the system heap to
>>> DoS any process in the system. I'm not saying that what you're concerned
>>> about isn't an issue, but let's not brush off other people's legitimate
>>> issues as well.
>>
>> Completely agree, but we should prioritize.
>>
>> That driver-allocated memory is not memcg accounted is actually uAPI,
>> i.e. that is not something which can easily be changed.
>>
>> While fixing the OOM killer looks perfectly doable and will then most
>> likely also show a better path for how to fix the memcg accounting.
>
> I don't necessarily disagree, but we don't necessarily have the same
> priorities either. Your use-cases are probably quite different from
> mine, and that's ok. But that's precisely why all these discussions
> should be made on the ML when possible, or at least have some notes when
> a discussion has happened at a conference or something.
>
> So far, my whole experience with this topic, despite being the only one
> (afaik) sending patches about this for the last 1.5y, is that every time
> some work on this is done the answer is "oh but you shouldn't have
> worked on it because we completely changed our mind", and that's pretty
> frustrating.
Welcome to the club :)
I already posted patches to start addressing at least the OOM killer issue ~10 years ago.
Those patches were not well received because back then driver memory was negligible and the problem simply didn't hurt much.
But by now we have GPUs and AI accelerators which eat up 90% of your system memory, security researchers stumbling over it, and IIRC even multiple CVE numbers for some of the resulting issues...
I should probably dig it up and re-send my patch set.
Happy holidays,
Christian.
>
> Maxime
Hi,
Here are kernel-doc fixes for the mm subsystem, based on the mm-hotfixes-unstable
branch. This series is split from the previous assorted kernel-doc fixes series
[1], with review trailers applied.
I'm also including a textsearch fix since there's currently no maintainer
for include/linux/textsearch.h (get_maintainer.pl only shows LKML).
Enjoy!
[1]: https://lore.kernel.org/linux-fsdevel/20251215113903.46555-1-bagasdotme@gma…
Bagas Sanjaya (4):
mm: Describe @flags parameter in memalloc_flags_save()
textsearch: Describe @list member in ts_ops search
mm: vmalloc: Fix up vrealloc_node_align() kernel-doc macro name
mm, kfence: Describe @slab parameter in __kfence_obj_info()
include/linux/kfence.h | 1 +
include/linux/sched/mm.h | 1 +
include/linux/textsearch.h | 1 +
mm/vmalloc.c | 2 +-
4 files changed, 4 insertions(+), 1 deletion(-)
base-commit: 980dbceadd50af9437257d8095d4a3606818e8c4
--
An old man doll... just what I always wanted! - Clara
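As an illustration of what these one-line kernel-doc fixes look like (the
wording below is a guess, not the actual patch text), the memalloc_flags_save()
change simply adds the missing @flags line so scripts/kernel-doc stops warning:

/**
 * memalloc_flags_save - set PF_* flags for a memory allocation scope
 * @flags: PF_* flags to add to current->flags for the scope
 *
 * The previously unset bits are returned so that the caller can hand them
 * back to memalloc_flags_restore() when the scope ends.
 */
static inline unsigned int memalloc_flags_save(unsigned int flags)
{
        unsigned int oldflags = ~current->flags & flags;

        current->flags |= flags;
        return oldflags;
}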
On 12/11/25 16:04, Tvrtko Ursulin wrote:
...
>> @@ -90,6 +73,11 @@ static int test_signaling(void *arg)
>> goto err_free;
>> }
>> + if (rcu_dereference_protected(f->ops, true)) {
>> + pr_err("Fence ops not cleared on signal\n");
>> + goto err_free;
>> + }
>
> Bump to after the signaled check just below? Otherwise the signaled state hasn't been ascertained yet.
Done. I've moved it to the end of the test.
>> +
>> if (!dma_fence_is_signaled(f)) {
>> pr_err("Fence not reporting signaled\n");
>> goto err_free;
>> @@ -540,19 +528,7 @@ int dma_fence(void)
>> SUBTEST(test_stub),
>> SUBTEST(race_signal_callback),
>> };
>> - int ret;
>> pr_info("sizeof(dma_fence)=%zu\n", sizeof(struct dma_fence));
>> -
>> - slab_fences = KMEM_CACHE(mock_fence,
>> - SLAB_TYPESAFE_BY_RCU |
>
> Hm.. race_signal_callback looks like it could be depending on SLAB_TYPESAFE_BY_RCU. Do you not think so?
Huh? As far as I can see it doesn't.
The race_signal_callback test just depends on the general RCU functionality of fences.
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>> - SLAB_HWCACHE_ALIGN);
>> - if (!slab_fences)
>> - return -ENOMEM;
>> -
>> - ret = subtests(tests, NULL);
>> -
>> - kmem_cache_destroy(slab_fences);
>> -
>> - return ret;
>> + return subtests(tests, NULL);
>> }
>
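For reference, the reordering Tvrtko asked for amounts to doing the ops check
only once the fence is known to have signaled; a sketch based on the hunks
quoted above (Christian moved it all the way to the end of the test):

        dma_fence_signal(f);

        if (!dma_fence_is_signaled(f)) {
                pr_err("Fence not reporting signaled\n");
                goto err_free;
        }

        /* Only meaningful once the fence is known to have signaled. */
        if (rcu_dereference_protected(f->ops, true)) {
                pr_err("Fence ops not cleared on signal\n");
                goto err_free;
        }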