On Tue, Jan 04, 2022 at 03:35:45PM +0800, Weizhao Ouyang wrote:
> Fix cma_heap_buffer mutex locking critical section to protect vmap_cnt
> and vaddr.
>
> Fixes: a5d2d29e24be ("dma-buf: heaps: Move heap-helper logic into the cma_heap implementation")
> Signed-off-by: Weizhao Ouyang <o451686892@gmail.com>
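
For context on why the check has to be inside the critical section: the
vmap/vunmap paths update vmap_cnt and vaddr under the same buffer->lock,
so testing vmap_cnt outside of it races against a concurrent vmap/vunmap.
The vunmap side is roughly the following (paraphrased from memory, not a
verbatim quote of cma_heap.c):

static void cma_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
{
        struct cma_heap_buffer *buffer = dmabuf->priv;

        mutex_lock(&buffer->lock);
        if (!--buffer->vmap_cnt) {              /* vmap_cnt changes under buffer->lock */
                vunmap(buffer->vaddr);
                buffer->vaddr = NULL;           /* and so does vaddr */
        }
        mutex_unlock(&buffer->lock);
}
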
> ---
>  drivers/dma-buf/heaps/cma_heap.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index 0c05b79870f9..83f02bd51dda 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -124,10 +124,11 @@ static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>  	struct cma_heap_buffer *buffer = dmabuf->priv;
>  	struct dma_heap_attachment *a;
>
> +	mutex_lock(&buffer->lock);
> +
>  	if (buffer->vmap_cnt)
>  		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
>

Since this creates lock nesting with mm/, I think it'd be good, though
optional, to prime lockdep so it knows about this. See e.g.
dma_resv_lockdep() in dma-resv.c, except I don't know offhand what the
right lock for invalidate_kernel_vmap_range() is. A rough, purely
illustrative sketch of the idea is below.

-Daniel

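Nothing here is something the patch has to do; the function name
cma_heap_lockdep(), the static mutex standing in for buffer->lock and
the choice of mmap_lock as the mm-side lock are all invented for the
sketch, modelled loosely on dma_resv_lockdep():

#include <linux/init.h>
#include <linux/mmap_lock.h>
#include <linux/mutex.h>
#include <linux/sched/mm.h>

/*
 * Hypothetical sketch only: record the "buffer->lock held while calling
 * into mm/ code" ordering once at boot.  A real version would have to
 * take a lock of the actual cma_heap buffer->lock class (e.g. by locking
 * a throwaway buffer), the way dma_resv_lockdep() locks a real dma_resv
 * object; this static mutex is only a stand-in.
 */
static struct mutex cma_heap_prime_lock;

static int __init cma_heap_lockdep(void)
{
        struct mm_struct *mm = mm_alloc();

        if (!mm)
                return -ENOMEM;

        mutex_init(&cma_heap_prime_lock);

        mutex_lock(&cma_heap_prime_lock);       /* as in begin/end_cpu_access */
        mmap_read_lock(mm);                     /* placeholder for the mm-side lock */
        mmap_read_unlock(mm);
        mutex_unlock(&cma_heap_prime_lock);

        mmput(mm);
        return 0;
}
subsys_initcall(cma_heap_lockdep);
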
> -	mutex_lock(&buffer->lock);
>  	list_for_each_entry(a, &buffer->attachments, list) {
>  		if (!a->mapped)
>  			continue;
> @@ -144,10 +145,11 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>  	struct cma_heap_buffer *buffer = dmabuf->priv;
>  	struct dma_heap_attachment *a;
>
> +	mutex_lock(&buffer->lock);
> +
>  	if (buffer->vmap_cnt)
>  		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
>
> -	mutex_lock(&buffer->lock);
>  	list_for_each_entry(a, &buffer->attachments, list) {
>  		if (!a->mapped)
>  			continue;
>
> --
> 2.32.0