Hi Jonathan,
On Tuesday, 5 March 2024 at 10:07 +0000, Jonathan Cameron wrote:
> On Mon, 04 Mar 2024 08:59:47 +0100
> Nuno Sá <noname.nuno(a)gmail.com> wrote:
>
> > On Sun, 2024-03-03 at 17:42 +0000, Jonathan Cameron wrote:
> > > On Fri, 23 Feb 2024 13:13:58 +0100
> > > Nuno Sa <nuno.sa(a)analog.com> wrote:
> > >
> > > > Hi Jonathan, likely you're wondering why I'm sending v7. Well, to be
> > > > honest, we're hoping to get this merged for the 6.9 merge window.
> > > > The main reason is that the USB part is already in (so it would be
> > > > nice to get the whole thing in). Moreover, the changes asked for in
> > > > v6 were simple (even though I'm not quite sure about one of them)
> > > > and Paul has no access to his laptop, so he can't send v7 himself.
> > > > So he kind of asked me to do it.
> > >
> > > So, we are cutting this very fine. If Linus hints strongly at an rc8
> > > maybe we can sneak this in. However, I need an Ack from Vinod for the
> > > dmaengine changes first.
> > >
> > > I'd also love a final 'looks ok' comment from the DMABUF folks (an Ack
> > > would be even better!)
> > >
> > > It seems that the other side got resolved in the USB gadget, but the
> > > last we heard from Daniel and Christian looks to have been back on v5.
> > > I'd like them to confirm they are fine with the changes made as a
> > > result.
> > >
> >
> > I can ask Christian or Daniel for some Acks, but my feeling (I still
> > need, at some point, to get really familiar with all of this) is that
> > this should be pretty similar to the USB series (from a DMABUF point of
> > view) as they are both importers.
> >
> > > I've been happy with the IIO parts for a few versions now, but my
> > > ability to review the DMABUF and DMA engine bits is limited.
> > >
> > > A realistic path for getting this in is: an rc8 happens, all Acks are
> > > in place by Wednesday, I apply it and it hits linux-next on Thursday,
> > > the pull request goes to Greg on Saturday, and Greg is feeling
> > > particularly generous and takes one on the day he normally closes his
> > > trees.
> > >
> >
> > Well, it looks like we still have a shot. I'll try to see if Vinod is
> > fine with the DMAENGINE stuff.
> >
>
> Sadly, it looks like rc7 came at the end of a quiet week, so there is
> almost certainly not going to be an rc8 in the end. Let's aim to get this
> in at the start of the next cycle so we can build on it from there.
And it looks like I'll need a V8 for the few things noted by Christian.
Having it in 6.9 would have been great but having it eventually merged
is all that matters - so I'm fine to have it queued for 6.10 instead.
Cheers,
-Paul
On Mon, Mar 4, 2024 at 5:46 AM Maxime Ripard <mripard(a)redhat.com> wrote:
> On Wed, Feb 28, 2024 at 08:17:55PM -0800, John Stultz wrote:
> > On Wed, Feb 28, 2024 at 7:24 AM Maxime Ripard <mripard(a)redhat.com> wrote:
> > >
> > > I'm currently working on a platform that seems to have togglable RAM ECC
> > > support. Enabling ECC reduces the memory capacity and memory bandwidth,
> > > so while it's a good idea to protect most of the system, it's not worth
> > > it for things like framebuffers that won't really be affected by a
> > > bitflip.
> > >
> > > It's currently set up by enabling ECC on the entire memory, and then
> > > having a region of memory where ECC is disabled and where we're
> > > supposed to allocate from for allocations that don't need it.
> > >
> > > My first thought to support this was to create a reserved memory region
> > > for the !ECC memory, and to create a heap to allocate buffers from that
> > > region. That would leave the system protected by ECC, while enabling
> > > userspace to be nicer to the system by allocating buffers from the !ECC
> > > region if it doesn't need it.
> > >
> > > However, this basically creates a new combination on top of the ones
> > > we already have (i.e. physically contiguous vs virtually contiguous),
> > > and we would probably want to throw in cacheable vs non-cacheable too.
> > >
> > > If we had to provide new heaps for each variation, we would have 8
> > > heaps (6 of them new), which could be fine I guess, but would still
> > > increase the number of heaps we have quite a lot.
> > >
> > > Is that something that would be a problem? If it is, do you see
> > > another way to support those kinds of allocations (like providing
> > > hints through the ioctl, maybe)?
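For concreteness, a rough kernel-side sketch of the reserved-region heap idea described above; "system_noecc", noecc_heap_ops and the allocate() stub are hypothetical names, and the actual allocation from the !ECC carve-out is elided:

#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/err.h>

static struct dma_buf *noecc_heap_allocate(struct dma_heap *heap,
					   unsigned long len,
					   u32 fd_flags, u64 heap_flags)
{
	/* Would hand out pages from the !ECC carve-out; elided here. */
	return ERR_PTR(-ENOMEM);
}

static const struct dma_heap_ops noecc_heap_ops = {
	.allocate = noecc_heap_allocate,
};

static int noecc_heap_register(void)
{
	struct dma_heap_export_info exp_info = {
		.name = "system_noecc",
		.ops = &noecc_heap_ops,
	};

	return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}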
> >
> > So, the dma-buf heaps interface uses chardevs so that we can have a
> > lot of flexibility in the types of heaps (and don't have the risk of
> > bitmask exhaustion like ION had). So I don't see adding many
> > differently named heaps as particularly problematic.
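For context, each heap is simply a chardev under /dev/dma_heap/ and allocation is a single ioctl, so adding another named heap does not change the userspace model. A minimal userspace sketch, where the "system_noecc" heap name is again hypothetical:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

static int alloc_from_heap(const char *heap, size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	char path[64];
	int heap_fd, ret;

	snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap);
	heap_fd = open(path, O_RDWR | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);

	return ret < 0 ? -1 : data.fd;	/* dma-buf fd, ready to mmap()/share */
}

/* e.g. fd = alloc_from_heap("system_noecc", 4 << 20); */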
>
> Ok
>
> > That said, if there are truly generic properties (cacheable vs
> > non-cachable is maybe one of those) which apply to most heaps, I'm
> > open to making use of the flags. But I want to avoid having per-heap
> > flags, it really needs to be a generic attribute.
> >
> > And I personally don't mind the idea of having things added as heaps
> > initially, and potentially upgrading them to flags if needed (allowing
> > heap drivers to optionally enumerate the old chardevs behind a config
> > option for backwards compatibility).
> >
> > How common is the hardware that is going to provide this configurable
> > ECC option
>
> In terms of number of SoCs with the feature, it's probably a handful. In
> terms of number of units shipped, we're in the fairly common range :)
>
Sure, I guess I was trying to get a sense of whether this is a feature
we'll likely be seeing commonly across hardware (such that internal kernel
allocators would be considering it as a flag), or whether it is more tied
to a single vendor, such that enabling/isolating it in a driver is the
right place in the abstraction to put it.
> > and will you really want the option on all of the heap types?
>
> Aside from the cacheable/uncacheable discussion, yes. We could probably
> get away with only physically contiguous allocations at the moment
> though, I'll double check.
Ok, that will be useful to know.
> > Will there be any hardware constraint limitations caused by the
> > ECC/!ECC flags? (ie: Devices that can't use !ECC allocated buffers?)
>
> My understanding is that there's no device restriction. It will be a
> carved-out memory region, so we will need to maintain a separate pool and
> it will be limited in size, but that's pretty much the only restriction
> afaik.
Ok.
> > If not, I wonder if it would make sense to have something more along
> > the lines of using fcntl(), like how F_SEAL_* is used with memfds?
> > With some of the discussion around "restricted"/"secure" heaps that can
> > change state, I've liked this idea of just allocating dmabufs from
> > normal heaps and then using fcntl() or something similar to modify
> > properties of the buffer that are separate from the type of memory that
> > needs to be allocated to satisfy device constraints.
>
> Sorry, I'm not super familiar with F_SEAL so I don't really follow what
> you have in mind here. Do you have any additional resources I could read
> to understand better what you're thinking about?
See the File Sealing section: https://man7.org/linux/man-pages/man2/fcntl.2.html
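For reference, this is what the sealing model looks like on a memfd today; nothing equivalent exists for dma-bufs, and the sketch only illustrates the "change a property of an already-allocated fd" idea being discussed:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static int sealed_memfd(size_t size)
{
	int fd = memfd_create("example", MFD_ALLOW_SEALING);

	if (fd < 0)
		return -1;

	if (ftruncate(fd, size) < 0 ||
	    /* From here on the file can no longer grow, shrink or be written. */
	    fcntl(fd, F_ADD_SEALS,
		  F_SEAL_GROW | F_SEAL_SHRINK | F_SEAL_WRITE) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}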
> Also, if we were to modify the ECC attributes after the dma-buf has been
> allocated, and if the !ECC memory is only a carve-out, then wouldn't that
> mean we would need to reallocate the backing buffer for that dma-buf?
So yea, having to work on a larger pool likely makes this not useful
here, so apologies for the tangent.
To explain myself, part of what I'm thinking of is that the dmabuf heaps
(and really ION before them) try to solve how to allocate a buffer type
that can be used across a number of devices that may have different
constraints. So the focus was on "types of memory" to satisfy the
constraints (contiguous, non-contiguous, secure/restricted, etc.), which
come down to what pages actually get used. However, outside of the
"constraint type" the buffer may have, there are other "properties" that
may affect performance (like cacheability, and some variants of
"restricted buffers" which can change over their lifetime). With ION,
vendors would mix these two together in their vendor heaps, and with
out-of-tree dmabuf heaps it is also common to tangle types and properties
together.
So I'm sort of stewing on how to best distinguish between heaps for
"types of memory/pages" (ie: what's *required* to share the buffer
between devices) vs these buffer properties (which affect performance)
that may apply to multiple memory types.
(And I'm also not 100% convinced that distinguishing between these is
necessary, but casually mixing them feels messy to me.)
For buffers where those properties might change over time (like some
variants of restricted buffers), I think the fnctl/F_SEAL_* idea makes
sense to allow the buffer to become restricted.
For cacheability, it seems like an allocation flag would be nicest, but we
don't have upstream users and not a lot of heap types yet; hence the
out-of-tree "system-uncached" heap, which sort of mixes types and
properties.
With ECC I was trying to get a sense of where it would sit between
this "type of memory" vs a "buffer property". If internal allocators
are likely to consider it in a way similar to CMA (and with the pool
granular control, it sounds like it), then yeah, it probably should be
a type of memory, so a new heap name is likely the way to go - but
there is still the question of how to best support the various
combinations of (contiguous, cacheable) along with ECC. But if it
were something that was dynamically controllable at a finer grained
level in the future, maybe it would be something like a buffer
property.
thanks
-john
On Fri, 2024-02-23 at 13:13 +0100, Nuno Sa wrote:
> From: Paul Cercueil <paul(a)crapouillou.net>
>
> This function can be used to initiate a scatter-gather DMA transfer,
> where the address and size of each segment is located in one entry of
> the dma_vec array.
>
> The major difference from dmaengine_prep_slave_sg() is that it supports
> specifying the length of each DMA transfer, as trying to override the
> length of a transfer with dmaengine_prep_slave_sg() is a very tedious
> process. The introduction of a new API function is also justified by the
> fact that scatterlists are on their way out.
>
> Note that dmaengine_prep_interleaved_dma() is not helpful either in this
> case, as it assumes that the address of each segment will be higher than
> that of the previous segment, which we just cannot guarantee for a
> scatter-gather transfer.
>
> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
> Signed-off-by: Nuno Sa <nuno.sa(a)analog.com>
> ---
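For illustration, a hedged sketch of how a consumer might call the new API, based purely on the description above; the exact struct layout, signature and flags in this series may differ:

#include <linux/dmaengine.h>
#include <linux/errno.h>
#include <linux/kernel.h>

static int queue_two_segments(struct dma_chan *chan,
			      dma_addr_t a, size_t a_len,
			      dma_addr_t b, size_t b_len)
{
	struct dma_vec vecs[] = {
		{ .addr = a, .len = a_len },
		{ .addr = b, .len = b_len },	/* may be lower than 'a' */
	};
	struct dma_async_tx_descriptor *desc;

	desc = dmaengine_prep_peripheral_dma_vec(chan, vecs, ARRAY_SIZE(vecs),
						 DMA_DEV_TO_MEM,
						 DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}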
Hi Vinod,
Is this already good for you? I do not want to be pushy, but we're trying
to see if we can have this in the 6.9 cycle, and Jonathan definitely wants
an Ack from you before merging this into his tree. I have more or less
until Wednesday, which is why I'm already asking today, so that I still
have time to re-spin if you want some changes.
- Nuno Sá
On Fri, 23 Feb 2024 13:13:58 +0100
Nuno Sa <nuno.sa(a)analog.com> wrote:
> Hi Jonathan, likely you're wondering why I'm sending v7. Well, to be
> honest, we're hoping to get this merged for the 6.9 merge window.
> The main reason is that the USB part is already in (so it would be nice
> to get the whole thing in). Moreover, the changes asked for in v6 were
> simple (even though I'm not quite sure about one of them) and Paul has no
> access to his laptop, so he can't send v7 himself. So he kind of asked me
> to do it.
So, we are cutting this very fine. If Linus hints strongly at an rc8 maybe we
can sneak this in. However, I need an Ack from Vinod for the dmaengine changes first.
I'd also love a final 'looks ok' comment from the DMABUF folks (an Ack would be even better!)
It seems that the other side got resolved in the USB gadget, but the last we heard from
Daniel and Christian looks to have been back on v5. I'd like them to confirm
they are fine with the changes made as a result.
I've been happy with the IIO parts for a few versions now, but my ability to review
the DMABUF and DMA engine bits is limited.
A realistic path for getting this in is: an rc8 happens, all Acks are in place by Wednesday,
I apply it and it hits linux-next on Thursday, the pull request goes to Greg on Saturday, and
Greg is feeling particularly generous and takes one on the day he normally closes his trees.
Whilst I'll cross my fingers, looks like 6.10 material to me :(
I'd missed the progress on the USB side so wasn't paying enough attention. Sorry!
Jonathan
>
> v6:
> * https://lore.kernel.org/linux-iio/20240129170201.133785-1-paul@crapouillou.…
>
> v7:
> - Patch 1
> * Renamed *device_prep_slave_dma_vec() -> device_prep_peripheral_dma_vec();
> * Added a new flag parameter to the function as agreed between Paul
> and Vinod. I renamed the first parameter to prep_flags as it's supposed to
> be used (I think) with enum dma_ctrl_flags. I'm not really sure how that API
> can grow, but I was thinking of just having a bool cyclic parameter (as the
> first intention of the flags is to support cyclic transfers) but ended up
> "respecting" the previously agreed approach.
> - Patch 2
> * Adapted patch for the changes made in patch 1.
> - Patch 5
> * Adapted patch for the changes made in patch 1.
>
> Patchset based on next-20240223.
>
> ---
> Paul Cercueil (6):
> dmaengine: Add API function dmaengine_prep_peripheral_dma_vec()
> dmaengine: dma-axi-dmac: Implement device_prep_peripheral_dma_vec
> iio: core: Add new DMABUF interface infrastructure
> iio: buffer-dma: Enable support for DMABUFs
> iio: buffer-dmaengine: Support new DMABUF based userspace API
> Documentation: iio: Document high-speed DMABUF based API
>
> Documentation/iio/dmabuf_api.rst | 54 +++
> Documentation/iio/index.rst | 2 +
> drivers/dma/dma-axi-dmac.c | 40 ++
> drivers/iio/buffer/industrialio-buffer-dma.c | 181 +++++++-
> drivers/iio/buffer/industrialio-buffer-dmaengine.c | 59 ++-
> drivers/iio/industrialio-buffer.c | 462 +++++++++++++++++++++
> include/linux/dmaengine.h | 27 ++
> include/linux/iio/buffer-dma.h | 31 ++
> include/linux/iio/buffer_impl.h | 33 ++
> include/uapi/linux/iio/buffer.h | 22 +
> 10 files changed, 894 insertions(+), 17 deletions(-)
> ---
> base-commit: 33e1d31873f87d119e5120b88cd350efa68ef276
> change-id: 20240223-iio-dmabuf-5ee0530195ca
> --
>
> Thanks!
> - Nuno Sá
>
On Fri, Mar 01, 2024 at 04:02:53PM +0100, Julien Panis wrote:
> This patch adds XDP (eXpress Data Path) support to TI AM65 CPSW
> Ethernet driver. The following features are implemented:
> - NETDEV_XDP_ACT_BASIC (XDP_PASS, XDP_TX, XDP_DROP, XDP_ABORTED)
> - NETDEV_XDP_ACT_REDIRECT (XDP_REDIRECT)
> - NETDEV_XDP_ACT_NDO_XMIT (ndo_xdp_xmit callback)
>
> The page pool memory model is used to get better performance.
Do you have any benchmark numbers? It should help with non-XDP
traffic as well. So maybe iperf numbers before and after?
Andrew
From: Jason-jh Lin <jason-jh.lin(a)mediatek.corp-partner.google.com>
Since MT8195 supports the GAMMA 12-bit LUT after the landing of series [1],
we can now add support for MT8188.
[1] MediaTek DDP GAMMA - 12-bit LUT support
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=792516
Changes in v2:
1. Keep MT8195 compatible in the group of MT8183.
2. Move MT8195 compatible group to the end of items list.
Jason-JH.Lin (3):
dt-bindings: display: mediatek: gamma: Change MT8195 to single enum
group
dt-bindings: display: mediatek: gamma: Add support for MT8188
drm/mediatek: Add gamma support for MT8195
.../devicetree/bindings/display/mediatek/mediatek,gamma.yaml | 5 +++++
drivers/gpu/drm/mediatek/mtk_drm_drv.c | 2 ++
2 files changed, 7 insertions(+)
--
2.18.0
On Thu, Feb 29, 2024 at 05:19:44PM +0100, Julien Panis wrote:
> On 2/27/24 00:18, Andrew Lunn wrote:
> > > +static struct sk_buff *am65_cpsw_alloc_skb(struct net_device *ndev, unsigned int len)
> > > +{
> > > + struct page *page;
> > > + struct sk_buff *skb;
> > > +
> > > + page = dev_alloc_pages(0);
> > You are likely to get better performance if you use the page_pool.
> >
> > When FEC added XDP support, the first set of changes was to make use
> > of page_pool. That improved the drivers performance. Then XDP was
> > added on top. Maybe you can follow that pattern.
> >
> > Andrew
>
> Hello Andrew,
>
> Thanks for this suggestion. I've been working on it over the last few days.
> I encountered issues and I'm beginning to wonder if that's a good option for
> this driver. Let me explain...
I'm not a page pool expert, so hopefully those that are will jump in
and help.
> This device has 3 ports:
> - Port 0 is the host port (internal to the subsystem and referred to as
>   CPPI in the driver).
> - Ports 1 and 2 are the external Ethernet ports.
> Each port's RX FIFO has 1 queue.
>
> As mentioned in page_pool documentation:
> https://docs.kernel.org/networking/page_pool.html
> "The number of pools created MUST match the number of hardware
> queues unless hardware restrictions make that impossible. This would
> otherwise beat the purpose of page pool, which is allocate pages fast
> from cache without locking."
My guess is, this last bit is the important part: locking. Do port 1
and port 2 RX and TX run in parallel on different CPUs? Hence, do you
need locking?
> So, for this driver I should use 2 page pools (for port1 and 2) if possible.
Maybe, maybe not. It is not really the number of front-panel interfaces
which matters here. It is the number of entities which need buffers.
> But, if I replace any alloc_skb() with page_pool_alloc() in the original
> driver, I will have to create only 1 page_pool.
> This is visible in am65_cpsw_nuss_common_open(), which starts with:
> "if (common->usage_count) return 0;" to ensure that subsequent code
> will be executed only once even if the 2 interfaces are ndo_open'd.
> IOW, skbs will be allocated for only 1 RX channel. I checked that behavior,
> and that's the way it works.
> This is because the host port (CPPI) has only 1 RX queue. This single
> queue is used to get all the packets from both ports 1 and 2 (the port ID
> is stored in each descriptor).
So you have one entity which needs buffers: CPPI. It seems like port 1
and port 2 do not need buffers? So, to me, you need one page pool.
> So, because of this constraint, I ended up working on the "single
> page pool" option.
>
> Questions:
> 1) Is the behavior described above usual for Ethernet switch devices?
This might be the first time page pool has been used by a switch? I
would check the Mellanox and sparx5 drivers and see if they use page
pool.
> 2) Given that I can't use a page pool for each HW queue, is it worth
> using the page pool memory model?
> 3) Currently I use 2 xdp_rxq (one for port 1 and another one for port 2).
> If an XDP program is attached to port 1, my page pool DMA parameter
> will need to be DMA_BIDIRECTIONAL (because of XDP_TX). As a result,
> even if there is no XDP program attached to port 2, the setting for the
> page pool will need to be DMA_BIDIRECTIONAL instead of DMA_FROM_DEVICE.
> In such a situation, isn't that a problem for port 2?
> 4) Because of 1) and 2), will page pool performance really be better for
> this driver?
You probably need to explain the TX architecture a bit. How are
packets passed to the hardware? Is it again via a single CPPI entity?
Or are there two entities?
DMA_BIDIRECTIONAL and DMA_FROM_DEVICE is about cache flushing and
invalidation. Before you pass a buffer to the hardware for it to
receive into, you need to invalidate the cache. That means when the
hardware gives the buffer back with a packet in it, there is a cache
miss and it fetches the new data from memory. If that packet gets
turned into an XDP_TX, you need to flush the cache to force any
changes out of the cache and into memory. The DMA from memory then
gets the up-to-date packet contents.
My guess would be, always using DMA_BIDIRECTIONAL is fine, so long as
your cache operations are correct. Turn on DMA_API_DEBUG and make sure
it is happy.
Andrew
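Putting the above together, a rough sketch of what a single page pool serving the one CPPI RX channel could look like; the function names below are made up, and the real am65-cpsw conversion may end up looking quite different:

#include <linux/device.h>
#include <net/page_pool/helpers.h>

static struct page_pool *cppi_rx_pool_create(struct device *dev,
					     unsigned int ring_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= ring_size,
		.nid		= dev_to_node(dev),
		.dev		= dev,
		/* XDP_TX on either port may rewrite the buffer, so map both ways. */
		.dma_dir	= DMA_BIDIRECTIONAL,
		.max_len	= PAGE_SIZE,
	};

	return page_pool_create(&pp_params);
}

/* RX descriptor refill then becomes: */
static dma_addr_t cppi_rx_refill_one(struct page_pool *pool, struct page **pagep)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return 0;

	*pagep = page;
	return page_pool_get_dma_addr(page);
}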
From: Jason-jh Lin <jason-jh.lin(a)mediatek.corp-partner.google.com>
Since MT8195 supports the GAMMA 12-bit LUT after the landing of series [1],
we can now add support for MT8188.
[1] MediaTek DDP GAMMA - 12-bit LUT support
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=792516
Jason-JH.Lin (3):
dt-bindings: display: mediatek: gamma: Change MT8195 to single enum
group
dt-bindings: display: mediatek: gamma: Add support for MT8188
drm/mediatek: Add gamma support for MT8195
.../bindings/display/mediatek/mediatek,gamma.yaml | 6 +++++-
drivers/gpu/drm/mediatek/mtk_drm_drv.c | 2 ++
2 files changed, 7 insertions(+), 1 deletion(-)
--
2.18.0