Hi Tomeu,
kernel test robot noticed the following build errors:
[auto build test ERROR on 83a7eefedc9b56fe7bfeff13b6c7356688ffa670]
url: https://github.com/intel-lab-lkp/linux/commits/Tomeu-Vizoso/iommu-rockchip-…
base: 83a7eefedc9b56fe7bfeff13b6c7356688ffa670
patch link: https://lore.kernel.org/r/20240612-6-10-rocket-v1-6-060e48eea250%40tomeuviz…
patch subject: [PATCH 6/9] accel/rocket: Add a new driver for Rockchip's NPU
config: loongarch-allmodconfig (https://download.01.org/0day-ci/archive/20240613/202406130901.oiofrkFe-lkp@…)
compiler: loongarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240613/202406130901.oiofrkFe-lkp@…)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406130901.oiofrkFe-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from arch/loongarch/include/asm/processor.h:17,
from arch/loongarch/include/asm/thread_info.h:15,
from include/linux/thread_info.h:60,
from include/asm-generic/current.h:6,
from ./arch/loongarch/include/generated/asm/current.h:1,
from include/linux/mutex.h:14,
from include/linux/notifier.h:14,
from include/linux/clk.h:14,
from drivers/accel/rocket/rocket_core.c:6:
>> arch/loongarch/include/uapi/asm/ptrace.h:25:25: error: expected identifier before '(' token
25 | #define PC (GPR_END + 2)
| ^
drivers/accel/rocket/rocket_registers.h:53:9: note: in expansion of macro 'PC'
53 | PC = 0x00000100,
| ^~
vim +25 arch/loongarch/include/uapi/asm/ptrace.h
803b0fc5c3f2ba Huacai Chen 2022-05-31 16
803b0fc5c3f2ba Huacai Chen 2022-05-31 17 /*
803b0fc5c3f2ba Huacai Chen 2022-05-31 18 * For PTRACE_{POKE,PEEK}USR. 0 - 31 are GPRs,
803b0fc5c3f2ba Huacai Chen 2022-05-31 19 * 32 is syscall's original ARG0, 33 is PC, 34 is BADVADDR.
803b0fc5c3f2ba Huacai Chen 2022-05-31 20 */
803b0fc5c3f2ba Huacai Chen 2022-05-31 21 #define GPR_BASE 0
803b0fc5c3f2ba Huacai Chen 2022-05-31 22 #define GPR_NUM 32
803b0fc5c3f2ba Huacai Chen 2022-05-31 23 #define GPR_END (GPR_BASE + GPR_NUM - 1)
803b0fc5c3f2ba Huacai Chen 2022-05-31 24 #define ARG0 (GPR_END + 1)
803b0fc5c3f2ba Huacai Chen 2022-05-31 @25 #define PC (GPR_END + 2)
803b0fc5c3f2ba Huacai Chen 2022-05-31 26 #define BADVADDR (GPR_END + 3)
803b0fc5c3f2ba Huacai Chen 2022-05-31 27
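For context, the failure is a plain preprocessor name clash: the uapi header
above turns the token PC into an expression, so the driver's enumerator no
longer parses. A minimal sketch of one conventional fix (the ROCKET_ prefix is
hypothetical, not something the driver currently defines):

/* With arch/loongarch/include/uapi/asm/ptrace.h in scope,
 *   PC = 0x00000100,
 * expands to "(GPR_END + 2) = 0x00000100", which is not an identifier.
 * Namespacing the register names avoids colliding with any arch macro:
 */
enum rocket_register {
	ROCKET_PC = 0x00000100,		/* hypothetical rename of PC */
};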
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Wed, Jun 5, 2024 at 7:02 PM Barry Song <21cnbao(a)gmail.com> wrote:
>
> From: Barry Song <v-songbaohua(a)oppo.com>
>
> dma_heap_allocation_data defines the UAPI as follows:
>
> struct dma_heap_allocation_data {
> 	__u64 len;
> 	__u32 fd;
> 	__u32 fd_flags;
> 	__u64 heap_flags;
> };
>
> But the dma heaps cast both fd_flags and heap_flags to unsigned long.
> This patch gives the dma heaps - the CMA heap and the system heap -
> types consistent with the UAPI.
>
> Signed-off-by: Barry Song <v-songbaohua(a)oppo.com>
Thanks for submitting this additional cleanup!
Acked-by: John Stultz <jstultz(a)google.com>
On Mon, Jun 3, 2024 at 11:30 PM Hailong Liu <hailong.liu(a)oppo.com> wrote:
> On 6/4/2024 2:06 AM, John Stultz wrote:
> > On Mon, Jun 3, 2024 at 10:21 AM Hailong Liu <hailong.liu(a)oppo.com> wrote:
> >> We now aim to improve priority dma-buf allocation. Consider the
> >> Android animation scenario:
> >>
> >> when the device is low on memory, allocating dma-bufs for animation
> >> buffers falls into direct reclaim, and the longer allocation time
> >> results in a laggy UI. But if we know the usage of the dma-buf, we
> >> can use some mechanism to speed it up, e.g. an animation memory pool.
> >
> > Can you generalize this a bit further? When would userland know to use
> > this new flag?
> > If it is aware, would it make sense to just use a separate heap name instead?
> >
> > (Also: These other mechanisms you mention should probably also be
> > submitted upstream, however for upstream there's also the requirement
> > that we have open users and are not just enabling proprietary blob
> > userspace, which makes any changes to dma-buf heaps for out of tree
> > code quite difficult)
> >
> >> However, identifying the usage of a dma-buf becomes a challenge. A
> >> potential solution could be heap_flags, but the use of heap_flags
> >> seems ugly and contrary to the intended design, as you said. How
> >> about extending dma_heap_allocation_data as follows?
> >>
> >> struct dma_heap_allocation_data {
> >> 	__u64 len;
> >> 	__u32 fd;
> >> 	__u32 fd_flags;
> >> 	__u64 heap_flags;
> >> 	__u64 buf_flags; // buf usage
> >> };
> >
> > This would affect the ABI (forcing a new ioctl number). And it's
> > unclear what flags you envision as buffer specific (rather than heap
> > specific as this patch suggested).
> >
> > I think we need more details about the specific problem you're seeing
> > and trying to resolve.
> This patch mainly focuses on optimization for Android scenarios. Let’s
> discuss it on the issue website.
> Bug: 344501512
Ok, we can do that if you need.
But if this is ever going to go upstream (and it's more and more
important that we minimize out of tree technical debt), conversations
about how to generalize this will need to happen on the list.
thanks
-john
On Mon, Jun 3, 2024 at 10:21 AM Hailong Liu <hailong.liu(a)oppo.com> wrote:
> On Mon, 03. Jun 09:01, John Stultz wrote:
> > On Mon, Jun 3, 2024 at 4:40 AM <hailong.liu(a)oppo.com> wrote:
> > >
> > > From: "Hailong.Liu" <hailong.liu(a)oppo.com>
> > >
> > > This helps modules use heap_flags to determine the type of a dma-buf,
> > > so that some mechanisms, such as a memory pool, can be used to speed
> > > up allocation and optimize the allocation time of dma-bufs.
> >
> > This feels like it's trying to introduce heap specific flags, but
> > doesn't introduce any details about what those flags might be?
> >
> > This seems like it would re-allow the old opaque vendor specific heap
> > flags that we saw in the ION days, which was problematic as different
> > userspaces would use the same interface with potentially colliding
> > heap flags with different meanings. Resulting in no way to properly
> > move to an upstream solution.
> >
> > With the dma-heaps interface, we're trying to make sure it is well
> > defined. One can register a number of heaps with different behaviors,
> > and the heap name is used to differentiate the behavior. Any flags
> > introduced will need to be well defined and behaviorally consistent
> > between heaps. That way when an upstream solution lands, if necessary
> > we can provide backwards compatibility via symlinks.
> >
> > So I don't think this is a good direction to go for dma-heaps.
> >
> > It would be better if you were able to clarify what flag requirements
> > you need, so we can better understand how they might apply to other
> > heaps, and see if it was something we would want to define as a flag
> > (see the discussion here for similar thoughts:
> > https://lore.kernel.org/lkml/CANDhNCoOKwtpstFE2VDcUvzdXUWkZ-Zx+fz6xrdPWTyci…
> > )
> >
> > But if your vendor heap really needs some sort of flags argument that
> > you can't generalize, you can always implement your own dmabuf
> > exporter driver with whatever ioctl interface you'd prefer.
>
> Thanks for your reply. Let’s continue our discussion here instead
> of on android-review. We aim to enhance memory allocation across all
> heaps. Your pointer towards heap_flags used in /dev/ion for heap
> identification was helpful.
>
> We now aim to improve priority dma-buf allocation. Consider the
> Android animation scenario:
>
> when the device is low on memory, allocating dma-bufs for animation
> buffers falls into direct reclaim, and the longer allocation time
> results in a laggy UI. But if we know the usage of the dma-buf, we can
> use some mechanism to speed it up, e.g. an animation memory pool.
Can you generalize this a bit further? When would userland know to use
this new flag?
If it is aware, would it make sense to just use a separate heap name instead?
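To make that concrete, here is a rough userspace sketch of the heap-name
alternative, assuming a hypothetical "system-animation" heap: behaviour is
selected by which /dev/dma_heap node you open, not by a flag.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

/* "system-animation" is a hypothetical heap name, purely for illustration. */
static int alloc_animation_buffer(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd = open("/dev/dma_heap/system-animation", O_RDWR | O_CLOEXEC);
	int ret;

	if (heap_fd < 0)
		return -1;
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	return ret < 0 ? -1 : (int)data.fd;	/* dma-buf fd on success */
}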
(Also: These other mechanisms you mention should probably also be
submitted upstream, however for upstream there's also the requirement
that we have open users and are not just enabling proprietary blob
userspace, which makes any changes to dma-buf heaps for out of tree
code quite difficult)
> However, identifying the usage of a dma-buf becomes a challenge. A
> potential solution could be heap_flags, but the use of heap_flags seems
> ugly and contrary to the intended design, as you said. How about
> extending dma_heap_allocation_data as follows?
>
> struct dma_heap_allocation_data {
> 	__u64 len;
> 	__u32 fd;
> 	__u32 fd_flags;
> 	__u64 heap_flags;
> 	__u64 buf_flags; // buf usage
> };
This would affect the ABI (forcing a new ioctl number). And it's
unclear what flags you envision as buffer specific (rather than heap
specific as this patch suggested).
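(For reference on the ioctl-number point: the allocation ioctl encodes the
size of its argument struct, so growing the struct silently defines a
different request value. Reproduced here from memory of
include/uapi/linux/dma-heap.h, so double-check against the tree.)

#define DMA_HEAP_IOC_MAGIC	'H'
#define DMA_HEAP_IOCTL_ALLOC	_IOWR(DMA_HEAP_IOC_MAGIC, 0x0, \
				      struct dma_heap_allocation_data)
/* _IOWR() folds sizeof(struct dma_heap_allocation_data) into the request
 * number, so adding a buf_flags field would change DMA_HEAP_IOCTL_ALLOC
 * itself and existing binaries would no longer match it.
 */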
I think we need more details about the specific problem you're seeing
and trying to resolve.
thanks
-john
On Mon, Jun 3, 2024 at 4:40 AM <hailong.liu(a)oppo.com> wrote:
>
> From: "Hailong.Liu" <hailong.liu(a)oppo.com>
>
> This helps modules use heap_flags to determine the type of a dma-buf,
> so that some mechanisms, such as a memory pool, can be used to speed up
> allocation and optimize the allocation time of dma-bufs.
This feels like it's trying to introduce heap specific flags, but
doesn't introduce any details about what those flags might be?
This seems like it would re-allow the old opaque vendor specific heap
flags that we saw in the ION days, which was problematic as different
userspaces would use the same interface with potentially colliding
heap flags with different meanings. Resulting in no way to properly
move to an upstream solution.
With the dma-heaps interface, we're trying to make sure it is well
defined. One can register a number of heaps with different behaviors,
and the heap name is used to differentiate the behavior. Any flags
introduced will need to be well defined and behaviorally consistent
between heaps. That way when an upstream solution lands, if necessary
we can provide backwards compatibility via symlinks.
So I don't think this is a good direction to go for dma-heaps.
It would be better if you were able to clarify what flag requirements
you need, so we can better understand how they might apply to other
heaps, and see if it was something we would want to define as a flag
(see the discussion here for similar thoughts:
https://lore.kernel.org/lkml/CANDhNCoOKwtpstFE2VDcUvzdXUWkZ-Zx+fz6xrdPWTyci…
)
But if your vendor heap really needs some sort of flags argument that
you can't generalize, you can always implement your own dmabuf
exporter driver with whatever ioctl interface you'd prefer.
thanks
-john
On Tue, May 28, 2024 at 07:15:34AM GMT, Jason-JH Lin (林睿祥) wrote:
> Hi Maxime,
>
> On Mon, 2024-05-27 at 16:06 +0200, Maxime Ripard wrote:
> > Hi,
> >
> > On Sun, May 26, 2024 at 07:29:21AM GMT, Jason-JH.Lin wrote:
> > > From: Jason-jh Lin <jason-jh.lin(a)mediatek.corp-partner.google.com>
> > >
> > > Memory Definitions:
> > > secure memory - Memory allocated in the TEE (Trusted Execution
> > > Environment) which is inaccessible in the REE (Rich Execution
> > > Environment, i.e. linux kernel/userspace).
> > > secure handle - Integer value which acts as reference to 'secure
> > > memory'. Used in communication between TEE and REE to reference
> > > 'secure memory'.
> > > secure buffer - 'secure memory' that is used to store decrypted,
> > > compressed video or for other general purposes in the TEE.
> > > secure surface - 'secure memory' that is used to store graphic
> > > buffers.
> > >
> > > Memory Usage in SVP:
> > > The overall flow of SVP starts with encrypted video coming in from
> > > an
> > > outside source into the REE. The REE will then allocate a 'secure
> > > buffer' and send the corresponding 'secure handle' along with the
> > > encrypted, compressed video data to the TEE. The TEE will then
> > > decrypt
> > > the video and store the result in the 'secure buffer'. The REE will
> > > then allocate a 'secure surface'. The REE will pass the 'secure
> > > handles' for both the 'secure buffer' and 'secure surface' into the
> > > TEE for video decoding. The video decoder HW will then decode the
> > > contents of the 'secure buffer' and place the result in the 'secure
> > > surface'. The REE will then attach the 'secure surface' to the
> > > overlay
> > > plane for rendering of the video.
> > >
> > > Everything relating to ensuring security of the actual contents of
> > > the
> > > 'secure buffer' and 'secure surface' is out of scope for the REE
> > > and
> > > is the responsibility of the TEE.
> > >
> > > DRM driver handles allocation of gem objects that are backed by a
> > > 'secure
> > > surface' and for displaying a 'secure surface' on the overlay
> > > plane.
> > > This introduces a new flag for object creation called
> > > DRM_MTK_GEM_CREATE_RESTRICTED which indicates it should be a
> > > 'secure
> > > surface'. All changes here are in MediaTek specific code.
> > > ---
> > > TODO:
> > > 1) Drop MTK_DRM_IOCTL_GEM_CREATE and use DMA_HEAP_IOCTL_ALLOC in
> > > userspace
> > > 2) DRM driver use secure mailbox channel to handle normal and
> > > secure flow
> > > 3) Implement setting mmsys routing table in the secure world series
> >
> > I'm not sure what you mean here. Why are you trying to upstream
> > something that still needs to be removed from your patch series?
> >
> Because there are too many patches that need to be fixed in this
> series, I listed the remaining TODO items and sent the other patches
> out for review.
>
> Sorry for the bother, I'll drop this in the next version.
If you don't intend to use it, we just shouldn't add it. Removing the
TODO item doesn't make sense, even more so if heaps should be the way
you handle this.
> > Also, I made some comments on the previous version that have been
> > entirely ignored and still apply on this version:
> >
> https://lore.kernel.org/dri-devel/20240415-guppy-of-perpetual-current-3a797…
> >
>
> I lost that mail in my mailbox, so I didn't reply at that time.
> I have imported that mail and replied to you. Hope you don't mind :)
I haven't received that answer
Maxime
On Wed, May 22, 2024 at 2:02 AM Barry Song <21cnbao(a)gmail.com> wrote:
>
> From: Barry Song <v-songbaohua(a)oppo.com>
>
> dma_heap_allocation_data defines the UAPI as follows:
>
> struct dma_heap_allocation_data {
> 	__u64 len;
> 	__u32 fd;
> 	__u32 fd_flags;
> 	__u64 heap_flags;
> };
>
> However, dma_heap_buffer_alloc() casts them into unsigned int. It's unclear
> whether this is intentional or what the purpose is, but it can be quite
> confusing for users.
>
> Adding to the confusion, dma_heap_ops.allocate defines both of these as
> unsigned long. Fortunately, since dma_heap_ops is not part of the UAPI,
> it is less of a concern.
>
> struct dma_heap_ops {
> 	struct dma_buf *(*allocate)(struct dma_heap *heap,
> 				    unsigned long len,
> 				    unsigned long fd_flags,
> 				    unsigned long heap_flags);
> };
>
> I am sending this RFC in hopes of clarifying these confusions.
>
> If the goal is to constrain both flags to 32 bits while ensuring the struct
> is aligned to 64 bits, it would have been more suitable to define
> dma_heap_allocation_data accordingly from the beginning, like so:
>
> struct dma_heap_allocation_data {
> 	__u64 len;
> 	__u32 fd;
> 	__u32 fd_flags;
> 	__u32 heap_flags;
> 	__u32 padding;
> };
So here, if I recall, the intent was to keep 64 bits for potential
future heap_flags.
But your point above that we're inconsistent with the types in the
non-UAPI arguments is valid.
So I think your patch makes sense.
Thanks for raising this issue!
Acked-by: John Stultz <jstultz(a)google.com>
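For completeness, the in-kernel signature doing the narrowing the RFC
describes, reproduced from memory of drivers/dma-buf/dma-heap.c (approximate,
not authoritative):

/* Both flags arrive here as unsigned int and are then handed on to
 * dma_heap_ops.allocate(), which takes them as unsigned long.
 */
static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
				 unsigned int fd_flags,
				 unsigned int heap_flags);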
When dma_resv_reserve_fences() is called with num_fences=0 it usually
means that a driver or other component messed up its calculation of how
many fences are needed. Warn in that situation.
When no fences are needed the function shouldn't be called in the first
place.
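For illustration, a minimal sketch of the calling pattern this check assumes
(obj, fence and ret are hypothetical driver-side variables):

/* Reserve exactly as many slots as fences you are about to add; never zero. */
dma_resv_lock(obj, NULL);
ret = dma_resv_reserve_fences(obj, 1);
if (!ret)
	dma_resv_add_fence(obj, fence, DMA_RESV_USAGE_WRITE);
dma_resv_unlock(obj);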
Signed-off-by: Christian König <christian.koenig(a)amd.com>
---
drivers/dma-buf/dma-resv.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index e2869fb31140..5f8d010516f0 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -186,6 +186,13 @@ int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences)
 	dma_resv_assert_held(obj);
 
+	/* Driver and component code should never call this function with
+	 * num_fences=0. If they do it usually points to bugs when calculating
+	 * the number of needed fences dynamically.
+	 */
+	if (WARN_ON(!num_fences))
+		return -EINVAL;
+
 	old = dma_resv_fences_list(obj);
 	if (old && old->max_fences) {
 		if ((old->num_fences + num_fences) <= old->max_fences)
--
2.34.1