On Mon, Oct 30, 2023 at 11:58:20AM +0100, Marco Pagani wrote:
> On 2023-10-25 10:43, Maxime Ripard wrote:
> > On Tue, Oct 24, 2023 at 07:14:25PM +0200, Marco Pagani wrote:
> >>>> +static void drm_gem_shmem_test_obj_create_private(struct kunit *test)
> >>>> +{
> >>>> + struct fake_dev *fdev = test->priv;
> >>>> + struct drm_gem_shmem_object *shmem;
> >>>> + struct drm_gem_object *gem_obj;
> >>>> + struct dma_buf buf_mock;
> >>>> + struct dma_buf_attachment attach_mock;
> >>>> + struct sg_table *sgt;
> >>>> + char *buf;
> >>>> + int ret;
> >>>> +
> >>>> + /* Create a mock scatter/gather table */
> >>>> + buf = kunit_kzalloc(test, TEST_SIZE, GFP_KERNEL);
> >>>> + KUNIT_ASSERT_NOT_NULL(test, buf);
> >>>> +
> >>>> + sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> >>>> + KUNIT_ASSERT_NOT_NULL(test, sgt);
> >>>> +
> >>>> + ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> >>>> + KUNIT_ASSERT_EQ(test, ret, 0);
> >>>> + sg_init_one(sgt->sgl, buf, TEST_SIZE);
> >>>> +
> >>>> + /* Init a mock DMA-BUF */
> >>>> + buf_mock.size = TEST_SIZE;
> >>>> + attach_mock.dmabuf = &buf_mock;
> >>>> +
> >>>> + gem_obj = drm_gem_shmem_prime_import_sg_table(&fdev->drm_dev, &attach_mock, sgt);
> >>>> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
> >>>> + KUNIT_ASSERT_EQ(test, gem_obj->size, TEST_SIZE);
> >>>> + KUNIT_ASSERT_NULL(test, gem_obj->filp);
> >>>> + KUNIT_ASSERT_NOT_NULL(test, gem_obj->funcs);
> >>>> +
> >>>> + shmem = to_drm_gem_shmem_obj(gem_obj);
> >>>> + KUNIT_ASSERT_PTR_EQ(test, shmem->sgt, sgt);
> >>>> +
> >>>> + /* The scatter/gather table is freed by drm_gem_shmem_free */
> >>>> + drm_gem_shmem_free(shmem);
> >>>> +}
> >>>
> >>> KUNIT_ASSERT_* will stop the execution of the test on failure, you
> >>> should probably use a bit more of KUNIT_EXPECT_* calls otherwise you'll
> >>> leak resources.
> >>>
> >>> You also probably want to use a kunit_action to clean up and avoid that
> >>> whole discussion
> >>>
> >>
> >> You are right. I slightly prefer using KUnit expectations (unless actions
> >> are strictly necessary) since I feel using actions makes test cases a bit
> >> less straightforward to understand. Is this okay for you?
> >
> > I disagree. Actions make it easier to reason about, even when comparing
> > assertion vs expectation
> >
> > Like, for the call to sg_alloc_table and
> > drm_gem_shmem_prime_import_sg_table(), the reasonable use of assert vs
> > expect would be something like:
> >
> > sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> > KUNIT_ASSERT_NOT_NULL(test, sgt);
> >
> > ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > KUNIT_ASSERT_EQ(test, ret, 0);
> >
> > /*
> > * Here, it's already not super clear whether you want to expect vs
> > * assert. expect will make you handle the failure case later, assert will
> > * force you to call kfree on sgt. Both kind of suck in their own ways.
> > */
> >
> > sg_init_one(sgt->sgl, buf, TEST_SIZE);
> >
> > gem_obj = drm_gem_shmem_prime_import_sg_table(&fdev->drm_dev, &attach_mock, sgt);
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
> >
> > /*
> > * If the assert fails, we forgot to call sg_free_table(sgt) and kfree(sgt).
> > */
> >
> > KUNIT_EXPECT_EQ(test, gem_obj->size, TEST_SIZE);
> > KUNIT_EXPECT_NULL(test, gem_obj->filp);
> > KUNIT_EXPECT_NOT_NULL(test, gem_obj->funcs);
> >
> > /*
> > * And here we have to handle the case where the expectation was wrong,
> > * but the test still continued.
> > */
> >
> > But if you're not using an action, you still have to call kfree(sgt),
> > which means that you might still leak it if a later assertion fails.
> >
> > shmem = to_drm_gem_shmem_obj(gem_obj);
> > KUNIT_ASSERT_PTR_EQ(test, shmem->sgt, sgt);
> >
> > /*
> > * If the assertion fails, we now have to call drm_gem_shmem_free(shmem)
> > */
> >
> > /* The scatter/gather table is freed by drm_gem_shmem_free */
> > drm_gem_shmem_free(shmem);
> >
> > /* everything's fine now */
> >
> > The semantics around drm_gem_shmem_free make it a bit convoluted, but
> > doing it using goto/labels, plus handling the assertions and error
> > reporting would be difficult.
> >
> > Using actions, we have:
> >
> > sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> > KUNIT_ASSERT_NOT_NULL(test, sgt);
> >
> > ret = kunit_add_action_or_reset(test, kfree_wrapper, sgt);
> > KUNIT_ASSERT_EQ(test, ret, 0);
> >
> > ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > KUNIT_ASSERT_EQ(test, ret, 0);
> >
> > ret = kunit_add_action_or_reset(test, sg_free_table_wrapper, sgt);
> > KUNIT_ASSERT_EQ(test, ret, 0);
> >
> > sg_init_one(sgt->sgl, buf, TEST_SIZE);
> >
> > gem_obj = drm_gem_shmem_prime_import_sg_table(&fdev->drm_dev, &attach_mock, sgt);
> > KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
> > KUNIT_EXPECT_EQ(test, gem_obj->size, TEST_SIZE);
> > KUNIT_EXPECT_NULL(test, gem_obj->filp);
> > KUNIT_EXPECT_NOT_NULL(test, gem_obj->funcs);
> >
> > /* drm_gem_shmem_free will free the struct sg_table itself */
> > kunit_remove_action(test, sg_free_table_wrapper, sgt);
> > kunit_remove_action(test, kfree_wrapper, sgt);
>
> I agree that using actions makes error handling cleaner. However, I still
> have some concerns about the additional complexity that actions introduce.
> For instance, I feel these two lines make the testing harness more complex
> without asserting any additional property of the component under test.
If anything, the API makes it more difficult to deal with. It would
actually be harder/messier to handle without an action.
> In some sense, I wonder if it is worth worrying about memory leaks when
> a test case fails. At that point, the system is already in an inconsistent
> state due to a bug in the component under test, so it is unsafe to continue
> anyway.
I guess the larger issue is: once that code is merged, we're going to get
patches converting it to actions, because they make things nicer and fix a
couple of issues anyway.
So, if that's the state we're going to end up in, why not do it right from
the beginning?
Maxime
On Wed, 2023-11-01 at 11:20 +0530, Jaskaran Singh wrote:
>
> On 10/20/2023 3:20 PM, Yong Wu (吴勇) wrote:
> > On Thu, 2023-10-19 at 10:16 +0530, Vijayanand Jitta wrote:
> >>
> >> Instead of having a vendor specific binding for the cma area, how about
> >> retrieving
> >> https://lore.kernel.org/lkml/1594948208-4739-1-git-send-email-hayashi.kunih… ?
> >> dma_heap_add_cma can just associate a cma region and create a heap. So,
> >> we can reuse the cma heap code for allocation instead of replicating that
> >> code here.
> >>
> >
> > Thanks for the reference. I guess we can't use it. There are two
> > reasons:
> >
> > a) The secure heap driver is a pure software driver and we have no
> > device for it, therefore we cannot call dma_heap_add_cma.
> >
>
> Hi Yong,
>
> We're considering using struct cma as the function argument to
> dma_heap_add_cma() rather than struct device. Would this help
> resolve the problem of usage with dma_heap_add_cma()?
Yes. If we use "struct cma", I guess it works.
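For illustration only, here is a rough sketch of that proposal; the
dma_heap_add_cma() signature below is the variant under discussion, not an
upstream API, and the dev_get_cma_area(NULL) lookup is just a placeholder:

#include <linux/dma-map-ops.h>
#include <linux/module.h>

int dma_heap_add_cma(struct cma *cma);	/* assumed signature under discussion */

static int __init mtk_svp_cma_heap_init(void)
{
	/* A software-only heap driver has no struct device, so it would look
	 * up its CMA area directly (the default area is used here purely for
	 * illustration) and hand it over.
	 */
	struct cma *cma = dev_get_cma_area(NULL);

	if (!cma)
		return -ENODEV;

	return dma_heap_add_cma(cma);
}
module_init(mtk_svp_cma_heap_init);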
>
> > b) The CMA area here is dynamic for SVP. Normally this CMA can be used
> > in the kernel. In the SVP case we use cma_alloc to get it and pass the
> > entire CMA physical start address and size into TEE to protect the CMA
> > region. The original CMA heap cannot help with the TEE part.
> >
>
> Referring to the conversation at
> https://lore.kernel.org/lkml/7a2995de23c24ef22c071c6976c02b97e9b50126.camel…
> since we're considering abstracting the secure mem ops, would it make sense
> to use the default CMA heap ops (cma_heap_ops), allocate buffers from it,
> and secure each allocated buffer?
Then it looks like you also need TEE operations like tee_client_open_session
and tee_client_invoke_func, right?
It seems we would also need to change "cma_heap_allocate" a bit to allow the
cma heap to support operations from the secure heap.
I will send a v2 to move the discussion forward. The v2 is based on our
case; it won't include the cma part.
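For context, securing a CMA region through the TEE would roughly follow the
standard TEE client pattern sketched below; the TA UUID, command ID and
parameter layout are placeholders (assumptions), not MediaTek's actual ABI:

#include <linux/tee_drv.h>
#include <linux/uuid.h>

#define MTK_SECMEM_CMD_PROTECT	0x1	/* placeholder command ID */

static const uuid_t mtk_secmem_ta_uuid =	/* placeholder TA UUID */
	UUID_INIT(0x00000000, 0x0000, 0x0000,
		  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00);

static int optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
{
	return ver->impl_id == TEE_IMPL_ID_OPTEE;
}

/* Ask the TEE to protect [base, base + size) so the REE can no longer touch it */
static int secmem_tee_protect(phys_addr_t base, size_t size)
{
	struct tee_ioctl_open_session_arg sess_arg = { };
	struct tee_ioctl_invoke_arg inv_arg = { };
	struct tee_param param[4] = { };
	struct tee_context *ctx;
	int ret;

	ctx = tee_client_open_context(NULL, optee_ctx_match, NULL, NULL);
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);

	export_uuid(sess_arg.uuid, &mtk_secmem_ta_uuid);
	sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
	ret = tee_client_open_session(ctx, &sess_arg, NULL);
	if (ret)
		goto out_ctx;
	if (sess_arg.ret) {
		ret = -EINVAL;
		goto out_ctx;
	}

	inv_arg.func = MTK_SECMEM_CMD_PROTECT;
	inv_arg.session = sess_arg.session;
	inv_arg.num_params = 4;
	param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
	param[0].u.value.a = base;
	param[0].u.value.b = size;

	ret = tee_client_invoke_func(ctx, &inv_arg, param);
	if (!ret && inv_arg.ret)
		ret = -EIO;

	tee_client_close_session(ctx, sess_arg.session);
out_ctx:
	tee_client_close_context(ctx);
	return ret;
}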
>
> Thanks,
> Jaskaran.
>
> > Thanks.
> >
> >> Thanks,
> >> Vijay
> >>
> >>
> >>
>
This patchset consists of two parts. The first is from John and TJ: it adds
some heap interfaces so that in-kernel users can allocate buffers from a
specific heap. The second part adds the MediaTek secure heaps for SVP
(Secure Video Path). Two heaps are added in total: mtk_svp and mtk_svp_cma.
The mtk_svp buffer is reserved for the secure world at boot and is used for
the ES/working buffer, while the mtk_svp_cma buffer is reserved for the
secure world dynamically, when we start playing secure video; this heap is
used for the frame buffer. Once secure video playback is complete, the CMA
area is released.
For easier viewing, I've split the new heap file into several patches.
The consumers of the new heap and the new interfaces are our codec and drm
drivers, which will be sent upstream soon, probably this week.
Based on v6.6-rc1.
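As a rough sketch of how an in-kernel consumer might use the new interfaces
(the function names are taken from the accessor patches in this series and
should be treated as assumptions until they land):

#include <linux/dma-buf.h>
#include <linux/dma-heap.h>
#include <linux/err.h>
#include <linux/fcntl.h>

/* Allocate a secure buffer from the "mtk_svp" heap via the in-kernel
 * accessors added by this series.
 */
static struct dma_buf *mtk_codec_alloc_secure_buf(size_t len)
{
	struct dma_heap *heap;
	struct dma_buf *dmabuf;

	heap = dma_heap_find("mtk_svp");
	if (!heap)
		return ERR_PTR(-ENODEV);

	dmabuf = dma_heap_buffer_alloc(heap, len, O_RDWR, 0);

	dma_heap_put(heap);
	return dmabuf;
}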
John Stultz (2):
dma-heap: Add proper kref handling on dma-buf heaps
dma-heap: Provide accessors so that in-kernel drivers can allocate
dmabufs from specific heaps
T.J. Mercier (1):
dma-buf: heaps: Deduplicate docs and adopt common format
Yong Wu (6):
dma-buf: heaps: Initialise MediaTek secure heap
dma-buf: heaps: mtk_sec_heap: Initialise tee session
dma-buf: heaps: mtk_sec_heap: Add tee service call for buffer
allocating/freeing
dma-buf: heaps: mtk_sec_heap: Add dma_ops
dt-bindings: reserved-memory: MediaTek: Add reserved memory for SVP
dma_buf: heaps: mtk_sec_heap: Add a new CMA heap
.../mediatek,secure_cma_chunkmem.yaml | 42 ++
drivers/dma-buf/dma-heap.c | 127 +++--
drivers/dma-buf/heaps/Kconfig | 8 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/mtk_secure_heap.c | 458 ++++++++++++++++++
include/linux/dma-heap.h | 42 +-
6 files changed, 630 insertions(+), 48 deletions(-)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/mediatek,secure_cma_chunkmem.yaml
create mode 100644 drivers/dma-buf/heaps/mtk_secure_heap.c
--
2.18.0
Hi,
On Tue, Oct 24, 2023 at 07:14:25PM +0200, Marco Pagani wrote:
> >> +static void drm_gem_shmem_test_obj_create_private(struct kunit *test)
> >> +{
> >> + struct fake_dev *fdev = test->priv;
> >> + struct drm_gem_shmem_object *shmem;
> >> + struct drm_gem_object *gem_obj;
> >> + struct dma_buf buf_mock;
> >> + struct dma_buf_attachment attach_mock;
> >> + struct sg_table *sgt;
> >> + char *buf;
> >> + int ret;
> >> +
> >> + /* Create a mock scatter/gather table */
> >> + buf = kunit_kzalloc(test, TEST_SIZE, GFP_KERNEL);
> >> + KUNIT_ASSERT_NOT_NULL(test, buf);
> >> +
> >> + sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> >> + KUNIT_ASSERT_NOT_NULL(test, sgt);
> >> +
> >> + ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> >> + KUNIT_ASSERT_EQ(test, ret, 0);
> >> + sg_init_one(sgt->sgl, buf, TEST_SIZE);
> >> +
> >> + /* Init a mock DMA-BUF */
> >> + buf_mock.size = TEST_SIZE;
> >> + attach_mock.dmabuf = &buf_mock;
> >> +
> >> + gem_obj = drm_gem_shmem_prime_import_sg_table(&fdev->drm_dev, &attach_mock, sgt);
> >> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
> >> + KUNIT_ASSERT_EQ(test, gem_obj->size, TEST_SIZE);
> >> + KUNIT_ASSERT_NULL(test, gem_obj->filp);
> >> + KUNIT_ASSERT_NOT_NULL(test, gem_obj->funcs);
> >> +
> >> + shmem = to_drm_gem_shmem_obj(gem_obj);
> >> + KUNIT_ASSERT_PTR_EQ(test, shmem->sgt, sgt);
> >> +
> >> + /* The scatter/gather table is freed by drm_gem_shmem_free */
> >> + drm_gem_shmem_free(shmem);
> >> +}
> >
> > KUNIT_ASSERT_* will stop the execution of the test on failure, you
> > should probably use a bit more of KUNIT_EXPECT_* calls otherwise you'll
> > leak resources.
> >
> > You also probably want to use a kunit_action to clean up and avoid that
> > whole discussion
> >
>
> You are right. I slightly prefer using KUnit expectations (unless actions
> are strictly necessary) since I feel using actions makes test cases a bit
> less straightforward to understand. Is this okay for you?
I disagree. Actions make it easier to reason about, even when comparing
assertion vs expectation
Like, for the call to sg_alloc_table and
drm_gem_shmem_prime_import_sg_table(), the reasonable use of assert vs
expect would be something like:
sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
KUNIT_ASSERT_NOT_NULL(test, sgt);
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
KUNIT_ASSERT_EQ(test, ret, 0);
/*
* Here, it's already not super clear whether you want to expect vs
* assert. expect will make you handle the failure case later, assert will
* force you to call kfree on sgt. Both kind of suck in their own ways.
*/
sg_init_one(sgt->sgl, buf, TEST_SIZE);
gem_obj = drm_gem_shmem_prime_import_sg_table(&fdev->drm_dev, &attach_mock, sgt);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
/*
* If the assert fails, we forgot to call sg_free_table(sgt) and kfree(sgt).
*/
KUNIT_EXPECT_EQ(test, gem_obj->size, TEST_SIZE);
KUNIT_EXPECT_NULL(test, gem_obj->filp);
KUNIT_EXPECT_NOT_NULL(test, gem_obj->funcs);
/*
* And here we have to handle the case where the expectation was wrong,
* but the test still continued.
*/
But if you're not using an action, you still have to call kfree(sgt),
which means that you might still leak it if a later assertion fails.
shmem = to_drm_gem_shmem_obj(gem_obj);
KUNIT_ASSERT_PTR_EQ(test, shmem->sgt, sgt);
/*
* If the assertion fails, we now have to call drm_gem_shmem_free(shmem)
*/
/* The scatter/gather table is freed by drm_gem_shmem_free */
drm_gem_shmem_free(shmem);
/* everything's fine now */
The semantics around drm_gem_shmem_free make it a bit convoluted, but
doing it using goto/labels, plus handling the assertions and error
reporting would be difficult.
Using actions, we have:
sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
KUNIT_ASSERT_NOT_NULL(test, sgt);
ret = kunit_add_action_or_reset(test, kfree_wrapper, sgt);
KUNIT_ASSERT_EQ(test, ret, 0);
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
KUNIT_ASSERT_EQ(test, ret, 0);
ret = kunit_add_action_or_reset(test, sg_free_table_wrapper, sgt);
KUNIT_ASSERT_EQ(test, ret, 0);
sg_init_one(sgt->sgl, buf, TEST_SIZE);
gem_obj = drm_gem_shmem_prime_import_sg_table(&fdev->drm_dev, &attach_mock, sgt);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
KUNIT_EXPECT_EQ(test, gem_obj->size, TEST_SIZE);
KUNIT_EXPECT_NULL(test, gem_obj->filp);
KUNIT_EXPECT_NOT_NULL(test, gem_obj->funcs);
/* drm_gem_shmem_free will free the struct sg_table itself */
kunit_remove_action(test, sg_free_table_wrapper, sgt);
kunit_remove_action(test, kfree_wrapper, sgt);
shmem = to_drm_gem_shmem_obj(gem_obj);
KUNIT_ASSERT_PTR_EQ(test, shmem->sgt, sgt);
ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
KUNIT_ASSERT_EQ(test, ret, 0);
The last one is arguable, but for the previous ones it makes error
handling much more convenient and easy to reason about.
The wrappers are also a bit inconvenient to use, but they're mostly there
to avoid a compiler warning at the moment.
This patch will help hopefully:
https://lore.kernel.org/linux-kselftest/20230915050125.3609689-1-davidgow@g…
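For reference, the wrappers used above can be minimal sketches like these;
kunit deferred actions take a void * context, so each one just forwards the
pointer to the real release function:

static void kfree_wrapper(void *ptr)
{
	kfree(ptr);
}

static void sg_free_table_wrapper(void *ptr)
{
	sg_free_table(ptr);
}

static void drm_gem_shmem_free_wrapper(void *ptr)
{
	drm_gem_shmem_free(ptr);
}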
Maxime
Hi,
On Mon, Oct 23, 2023 at 10:25:50AM -0700, Doug Anderson wrote:
> On Mon, Oct 23, 2023 at 9:31 AM Yuran Pereira <yuran.pereira(a)hotmail.com> wrote:
> >
> > Since "Clean up checks for already prepared/enabled in panels" has
> > already been done and merged [1], I think there is no longer a need
> > for this item to be in the gpu TODO.
> >
> > [1] https://patchwork.freedesktop.org/patch/551421/
> >
> > Signed-off-by: Yuran Pereira <yuran.pereira(a)hotmail.com>
> > ---
> > Documentation/gpu/todo.rst | 25 -------------------------
> > 1 file changed, 25 deletions(-)
>
> It's not actually all done. It's in a bit of a limbo state right now,
> unfortunately. I landed all of the "simple" cases where panels were
> needlessly tracking prepare/enable, but the less simple cases are
> still outstanding.
>
> Specifically the issue is that many panels have code to properly power
> cycle themselves off at shutdown time and in order to do that they
> need to keep track of the prepare/enable state. After a big, long
> discussion [1] it was decided that we could get rid of all the panel
> code handling shutdown if only all relevant DRM KMS drivers would
> properly call drm_atomic_helper_shutdown().
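(For reference, the conversion being asked of the remaining KMS drivers is
roughly the pattern below; "foo" is a hypothetical driver and the hook is
wired up as the .shutdown callback of its platform_driver.)

#include <drm/drm_atomic_helper.h>
#include <linux/platform_device.h>

static void foo_drm_shutdown(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	/* Tear down the display pipeline so panels get powered off cleanly */
	if (drm)
		drm_atomic_helper_shutdown(drm);
}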
>
> I made an attempt to get DRM KMS drivers to call
> drm_atomic_helper_shutdown() [2] [3] [4]. I was able to land the
> patches that went through drm-misc, but currently many of the
> non-drm-misc ones are blocked waiting for attention.
>
> ...so things that could be done to help out:
>
> a) Could review patches that haven't landed in [4]. Maybe adding a
> Reviewed-by tag would help wake up maintainers?
>
> b) Could see if you can identify panels that are exclusively used w/
> DRM drivers that have already been converted and then we could post
> patches for just those panels. I have no idea how easy this task would
> be. Is it enough to look at upstream dts files by "compatible" string?
I think it is, yes.
Maxime
This patch series provides DRM driver support for enabling secure video
path (SVP) playback on MediaTek hardware in the Linux kernel.
Memory Definitions:
secure memory - Memory allocated in the TEE (Trusted Execution
Environment) which is inaccessible in the REE (Rich Execution
Environment, i.e. linux kernel/userspace).
secure handle - Integer value which acts as reference to 'secure
memory'. Used in communication between TEE and REE to reference
'secure memory'.
secure buffer - 'secure memory' that is used to store decrypted,
compressed video or for other general purposes in the TEE.
secure surface - 'secure memory' that is used to store graphic buffers.
Memory Usage in SVP:
The overall flow of SVP starts with encrypted video coming in from an
outside source into the REE. The REE will then allocate a 'secure
buffer' and send the corresponding 'secure handle' along with the
encrypted, compressed video data to the TEE. The TEE will then decrypt
the video and store the result in the 'secure buffer'. The REE will
then allocate a 'secure surface'. The REE will pass the 'secure
handles' for both the 'secure buffer' and 'secure surface' into the
TEE for video decoding. The video decoder HW will then decode the
contents of the 'secure buffer' and place the result in the 'secure
surface'. The REE will then attach the 'secure surface' to the overlay
plane for rendering of the video.
Everything relating to ensuring security of the actual contents of the
'secure buffer' and 'secure surface' is out of scope for the REE and
is the responsibility of the TEE.
The DRM driver handles allocation of GEM objects that are backed by a
'secure surface' and displaying a 'secure surface' on the overlay plane.
This introduces a new flag for object creation called
DRM_MTK_GEM_CREATE_ENCRYPTED, which indicates that the object should be a
'secure surface'. All changes here are in MediaTek-specific code.
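A sketch of the intended userspace usage follows; the flag name comes from
this series, while the struct layout and ioctl command index are assumptions
for illustration, not the final uapi:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

#define DRM_MTK_GEM_CREATE		0x00		/* assumed command index */
#define DRM_MTK_GEM_CREATE_ENCRYPTED	(1 << 0)	/* flag described above */

struct drm_mtk_gem_create {
	uint64_t size;
	uint32_t flags;		/* DRM_MTK_GEM_CREATE_ENCRYPTED for a 'secure surface' */
	uint32_t handle;	/* returned GEM handle */
};

#define DRM_IOCTL_MTK_GEM_CREATE \
	DRM_IOWR(DRM_COMMAND_BASE + DRM_MTK_GEM_CREATE, struct drm_mtk_gem_create)

static int mtk_gem_create_secure(int drm_fd, uint64_t size, uint32_t *handle)
{
	struct drm_mtk_gem_create arg = {
		.size = size,
		.flags = DRM_MTK_GEM_CREATE_ENCRYPTED,
	};

	if (ioctl(drm_fd, DRM_IOCTL_MTK_GEM_CREATE, &arg))
		return -1;

	*handle = arg.handle;
	return 0;
}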
---
Based on 3 series and 1 patch:
[1] dma-buf: heaps: Add MediaTek secure heap
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=782776
[2] add driver to support secure video decoder
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=782922
[3] soc: mediatek: Add register definitions for GCE
- https://patchwork.kernel.org/project/linux-mediatek/patch/20231017064717.21…
[4] Add CMDQ secure driver for SVP
- https://patchwork.kernel.org/project/linux-mediatek/list/?series=795502
---
Changes in v2:
1. Remove the DRIVER_RENDER flag for mtk_drm_ioctl
2. Move cmdq_insert_backup_cookie into the client driver
3. Move the secure gce node definition from mt8195-cherry.dtsi to mt8195.dtsi
---
CK Hu (1):
drm/mediatek: Add interface to allocate MediaTek GEM buffer.
Jason-JH.Lin (10):
drm/mediatek/uapi: Add DRM_MTK_GEM_CREATE_ENCRYPTED flag
drm/mediatek: Add secure buffer control flow to mtk_drm_gem
drm/mediatek: Add secure identify flag and function to mtk_drm_plane
drm/mediatek: Add mtk_ddp_sec_write to config secure buffer info
drm/mediatek: Add get_sec_port interface to mtk_ddp_comp
drm/mediatek: Add secure layer config support for ovl
drm/mediatek: Add secure layer config support for ovl_adaptor
drm/mediatek: Add secure flow support to mediatek-drm
drm/mediatek: Add cmdq_insert_backup_cookie before secure pkt finalize
arm64: dts: mt8195: Add secure mbox settings for vdosys
arch/arm64/boot/dts/mediatek/mt8195.dtsi | 6 +-
drivers/gpu/drm/mediatek/mtk_disp_drv.h | 3 +
drivers/gpu/drm/mediatek/mtk_disp_ovl.c | 31 +-
.../gpu/drm/mediatek/mtk_disp_ovl_adaptor.c | 15 +
drivers/gpu/drm/mediatek/mtk_drm_crtc.c | 274 +++++++++++++++++-
drivers/gpu/drm/mediatek/mtk_drm_crtc.h | 1 +
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c | 14 +
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.h | 13 +
drivers/gpu/drm/mediatek/mtk_drm_drv.c | 13 +
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 121 ++++++++
drivers/gpu/drm/mediatek/mtk_drm_gem.h | 16 +
drivers/gpu/drm/mediatek/mtk_drm_plane.c | 7 +
drivers/gpu/drm/mediatek/mtk_drm_plane.h | 2 +
drivers/gpu/drm/mediatek/mtk_mdp_rdma.c | 11 +-
drivers/gpu/drm/mediatek/mtk_mdp_rdma.h | 2 +
include/uapi/drm/mediatek_drm.h | 59 ++++
16 files changed, 570 insertions(+), 18 deletions(-)
create mode 100644 include/uapi/drm/mediatek_drm.h
--
2.18.0
Currently, a NULL pointer dereference can happen in
`dma_fence_enable_sw_signaling()` (at line 615) in case `chain` is not
allocated in `mock_chain()` and that function returns `NULL` (at line 86).
See below:
drivers/dma-buf/st-dma-fence-chain.c:

 86         chain = mock_chain(NULL, f, 1);
 87         if (!chain)
 88                 err = -ENOMEM;
 89
 90         dma_fence_enable_sw_signaling(chain);

drivers/dma-buf/dma-fence.c:

611 void dma_fence_enable_sw_signaling(struct dma_fence *fence)
612 {
613         unsigned long flags;
614
615         spin_lock_irqsave(fence->lock, flags);
                              ^^^^^^^^^^^
                              |
                              NULL pointer dereference
                              if fence == NULL
616         __dma_fence_enable_signaling(fence);
617         spin_unlock_irqrestore(fence->lock, flags);
618 }
Fix this by adding a NULL check before dereferencing `fence` in
`dma_fence_enable_sw_signaling()`. This will prevent any other NULL
pointer dereference when the `fence` passed as an argument is `NULL`.
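With the new return value, a caller such as the selftest above could also
check the result explicitly; a minimal sketch (the cleanup label is
hypothetical):

	chain = mock_chain(NULL, f, 1);
	if (!chain) {
		err = -ENOMEM;
		goto err_cleanup;
	}

	err = dma_fence_enable_sw_signaling(chain);
	if (err)
		goto err_cleanup;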
Addresses-Coverity: ("Dereference after null check")
Fixes: d62c43a953ce ("dma-buf: Enable signaling on fence for selftests")
Cc: stable(a)vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavoars(a)kernel.org>
---
drivers/dma-buf/dma-fence.c | 9 ++++++++-
include/linux/dma-fence.h | 2 +-
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 8aa8f8cb7071..4d2f13560d0f 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -607,14 +607,21 @@ static bool __dma_fence_enable_signaling(struct dma_fence *fence)
  * This will request for sw signaling to be enabled, to make the fence
  * complete as soon as possible. This calls &dma_fence_ops.enable_signaling
  * internally.
+ *
+ * Returns 0 on success and a negative error value when @fence is NULL.
  */
-void dma_fence_enable_sw_signaling(struct dma_fence *fence)
+int dma_fence_enable_sw_signaling(struct dma_fence *fence)
 {
 	unsigned long flags;
 
+	if (!fence)
+		return -EINVAL;
+
 	spin_lock_irqsave(fence->lock, flags);
 	__dma_fence_enable_signaling(fence);
 	spin_unlock_irqrestore(fence->lock, flags);
+
+	return 0;
 }
 EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
 
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index ebe78bd3d121..1e4025e925e6 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -399,7 +399,7 @@ int dma_fence_add_callback(struct dma_fence *fence,
 			   dma_fence_func_t func);
 bool dma_fence_remove_callback(struct dma_fence *fence,
 			       struct dma_fence_cb *cb);
-void dma_fence_enable_sw_signaling(struct dma_fence *fence);
+int dma_fence_enable_sw_signaling(struct dma_fence *fence);
 
 /**
  * dma_fence_is_signaled_locked - Return an indication if the fence
--
2.34.1