Add a new DMA attribute that platform drivers can use to avoid
creating IOMMU mappings. In some cases buffers are allocated by the
display controller driver using the dma alloc APIs but are never used
for scanout: the display controller allocates them only so they can
be shared among different devices.
With this attribute, platform drivers can choose not to create an
IOMMU mapping at buffer allocation time, and instead create the
mapping only when they actually access the buffer.
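A minimal usage sketch (assuming the dma_attrs API in this tree; dev,
size and the gfp flags are illustrative):

	struct dma_attrs attrs;
	dma_addr_t iova;
	void *cpu_addr;

	init_dma_attrs(&attrs);
	dma_set_attr(DMA_ATTR_NO_IOMMU_MAPPING, &attrs);

	/*
	 * Pages are allocated, but no iommu mapping is created: iova is
	 * not valid until the driver maps the buffer itself later.
	 */
	cpu_addr = dma_alloc_attrs(dev, size, &iova, GFP_KERNEL, &attrs);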
Change-Id: I2178b3756170982d814e085ca62474d07b616a21
Signed-off-by: Abhinav Kochhar <abhinav.k(a)samsung.com>
---
arch/arm/mm/dma-mapping.c | 8 +++++---
include/linux/dma-attrs.h | 1 +
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c0f0f43..e73003c 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1279,9 +1279,11 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
if (!pages)
return NULL;
- *handle = __iommu_create_mapping(dev, pages, size);
- if (*handle == DMA_ERROR_CODE)
- goto err_buffer;
+ if (!dma_get_attr(DMA_ATTR_NO_IOMMU_MAPPING, attrs)) {
+ *handle = __iommu_create_mapping(dev, pages, size);
+ if (*handle == DMA_ERROR_CODE)
+ goto err_buffer;
+ }
if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
return pages;
diff --git a/include/linux/dma-attrs.h b/include/linux/dma-attrs.h
index c8e1831..1f04419 100644
--- a/include/linux/dma-attrs.h
+++ b/include/linux/dma-attrs.h
@@ -15,6 +15,7 @@ enum dma_attr {
DMA_ATTR_WEAK_ORDERING,
DMA_ATTR_WRITE_COMBINE,
DMA_ATTR_NON_CONSISTENT,
+ DMA_ATTR_NO_IOMMU_MAPPING,
DMA_ATTR_NO_KERNEL_MAPPING,
DMA_ATTR_SKIP_CPU_SYNC,
DMA_ATTR_FORCE_CONTIGUOUS,
--
1.7.8.6
Hi all,
With all of the attention that the Common Display Framework is getting, I
was wondering if it was worth having a BoF discussion at ELC next month in
San Francisco. This will be only a couple of weeks after FOSDEM, but given
the pace that things seem to be moving, that could be a great opportunity
either to have a follow-on discussion, or simply to involve a slightly
different cross-section of the community in a face-to-face discussion. If
folks could let me know that they'll be at ELC and are interested in a BoF
there, I'll look into getting it set up.
cheers,
Jesse
This patch adds EXPORT_SYMBOL_GPL calls to three arm iommu
functions - arm_iommu_create_mapping, arm_iommu_release_mapping
and arm_iommu_attach_device. These are arm-specific wrapper
functions for creating, freeing and using an iommu mapping, and
they are called by various drivers. If any of those drivers is to
be built as a dynamic module, these functions need to be exported.
Changelog v2: use EXPORT_SYMBOL_GPL as suggested by Marek.
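For reference, a modular driver would typically use the exported
functions like this (a sketch; the bus, base address, size and order
values are illustrative, not taken from this patch):

	struct dma_iommu_mapping *mapping;
	int ret;

	mapping = arm_iommu_create_mapping(&platform_bus_type,
					   0x80000000, SZ_128M, 0);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	ret = arm_iommu_attach_device(dev, mapping);
	if (ret) {
		arm_iommu_release_mapping(mapping);
		return ret;
	}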
Signed-off-by: Prathyush K <prathyush.k(a)samsung.com>
---
arch/arm/mm/dma-mapping.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 6b2fb87..226ebcf 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1797,6 +1797,7 @@ err2:
err:
return ERR_PTR(err);
}
+EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
static void release_iommu_mapping(struct kref *kref)
{
@@ -1813,6 +1814,7 @@ void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
if (mapping)
kref_put(&mapping->kref, release_iommu_mapping);
}
+EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);
/**
* arm_iommu_attach_device
@@ -1841,5 +1843,6 @@ int arm_iommu_attach_device(struct device *dev,
pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
return 0;
}
+EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
#endif
--
1.8.0
From: Inki Dae <inki.dae(a)samsung.com>
This patch adds a new attribute, DMA_ATTR_SKIP_BUFFER_CLEAR,
to skip buffer clearing. The buffer clearing also flushes the CPU
cache, so the operation costs a little performance. With this patch
the allocated buffer region is still cleared by default; if you want
to skip the clearing, just set this attribute. The flag should be
used carefully, however, because skipping the clear could expose
vulnerable content such as security data, so users of this attribute
must make sure that all pages are cleared by some other means before
being exposed to userspace.
For example, suppose security data had been stored in some memory
region and the region was freed without being cleared. A malicious
process could then allocate that region through a buffer allocator
such as gem or ion (again without clearing it) and request a blit
operation into another, cleared buffer through the gpu or another
driver. At that point the malicious process could access the
security data.
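A usage sketch (assuming the dma_attrs API; the caller takes over
responsibility for clearing the pages):

	struct dma_attrs attrs;
	dma_addr_t dma_handle;
	void *buf;

	init_dma_attrs(&attrs);
	/* skip the memset and cache flush; we clear the pages ourselves */
	dma_set_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, &attrs);
	buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);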
Signed-off-by: Inki Dae <inki.dae(a)samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park(a)samsung.com>
---
arch/arm/mm/dma-mapping.c | 6 ++++--
include/linux/dma-attrs.h | 1 +
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 6b2fb87..fbe9dff 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1058,7 +1058,8 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
if (!page)
goto error;
- __dma_clear_buffer(page, size);
+ if (!dma_get_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, attrs))
+ __dma_clear_buffer(page, size);
for (i = 0; i < count; i++)
pages[i] = page + i;
@@ -1082,7 +1083,8 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
pages[i + j] = pages[i] + j;
}
- __dma_clear_buffer(pages[i], PAGE_SIZE << order);
+ if (!dma_get_attr(DMA_ATTR_SKIP_BUFFER_CLEAR, attrs))
+ __dma_clear_buffer(pages[i], PAGE_SIZE << order);
i += 1 << order;
count -= 1 << order;
}
diff --git a/include/linux/dma-attrs.h b/include/linux/dma-attrs.h
index c8e1831..2592c05 100644
--- a/include/linux/dma-attrs.h
+++ b/include/linux/dma-attrs.h
@@ -18,6 +18,7 @@ enum dma_attr {
DMA_ATTR_NO_KERNEL_MAPPING,
DMA_ATTR_SKIP_CPU_SYNC,
DMA_ATTR_FORCE_CONTIGUOUS,
+ DMA_ATTR_SKIP_BUFFER_CLEAR,
DMA_ATTR_MAX,
};
--
1.7.4.1
All drivers which implement this need to have some sort of refcount to
allow concurrent vmap usage. Hence implement this in the dma-buf core.
To protect against concurrent calls we need a lock, which potentially
causes new funny locking inversions. But this shouldn't be a problem
for exporters with statically allocated backing storage, and more
dynamic drivers have decent issues already anyway.
Inspired by some refactoring patches from Aaron Plattner, who
implemented the same idea, but only for drm/prime drivers.
v2: Check in dma_buf_release that no dangling vmaps are left, as
suggested by Aaron Plattner. We might want to do similar checks for
attachments, but that's for another patch. Also fix up the ERR_PTR
return for vmap.
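From an importer's point of view the calls stay the same; the core
now just refcounts the mapping underneath (a sketch):

	void *vaddr;

	/* first call goes down into the exporter's vmap */
	vaddr = dma_buf_vmap(dmabuf);
	if (!vaddr)
		return -ENOMEM;	/* or fall back to dma_buf_kmap() */

	/* ... cpu access ... */

	/* exporter's vunmap is called only by the last dma_buf_vunmap */
	dma_buf_vunmap(dmabuf, vaddr);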
Cc: Aaron Plattner <aplattner(a)nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
---
Compile-tested only - Aaron has been bugging me a bit too often
about this on irc.
Cheers, Daniel
---
Documentation/dma-buf-sharing.txt | 6 +++++-
drivers/base/dma-buf.c | 42 ++++++++++++++++++++++++++++++++++-----
include/linux/dma-buf.h | 4 +++-
3 files changed, 45 insertions(+), 7 deletions(-)
diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
index 0188903..4966b1b 100644
--- a/Documentation/dma-buf-sharing.txt
+++ b/Documentation/dma-buf-sharing.txt
@@ -302,7 +302,11 @@ Access to a dma_buf from the kernel context involves three steps:
void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
The vmap call can fail if there is no vmap support in the exporter, or if it
- runs out of vmalloc space. Fallback to kmap should be implemented.
+ runs out of vmalloc space. Fallback to kmap should be implemented. Note that
+ the dma-buf layer keeps a reference count for all vmap access and calls down
+ into the exporter's vmap function only when no vmapping exists, and only
+ unmaps it once. Protection against concurrent vmap/vunmap calls is provided
+ by taking the dma_buf->lock mutex.
3. Finish access
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index a3f79c4..36af5de 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -39,6 +39,8 @@ static int dma_buf_release(struct inode *inode, struct file *file)
dmabuf = file->private_data;
+ BUG_ON(dmabuf->vmapping_counter);
+
dmabuf->ops->release(dmabuf);
kfree(dmabuf);
return 0;
@@ -482,12 +484,34 @@ EXPORT_SYMBOL_GPL(dma_buf_mmap);
*/
void *dma_buf_vmap(struct dma_buf *dmabuf)
{
+ void *ptr;
+
if (WARN_ON(!dmabuf))
return NULL;
- if (dmabuf->ops->vmap)
- return dmabuf->ops->vmap(dmabuf);
- return NULL;
+ if (!dmabuf->ops->vmap)
+ return NULL;
+
+ mutex_lock(&dmabuf->lock);
+ if (dmabuf->vmapping_counter) {
+ dmabuf->vmapping_counter++;
+ BUG_ON(!dmabuf->vmap_ptr);
+ ptr = dmabuf->vmap_ptr;
+ goto out_unlock;
+ }
+
+ BUG_ON(dmabuf->vmap_ptr);
+
+ ptr = dmabuf->ops->vmap(dmabuf);
+ if (IS_ERR_OR_NULL(ptr))
+ goto out_unlock;
+
+ dmabuf->vmap_ptr = ptr;
+ dmabuf->vmapping_counter = 1;
+
+out_unlock:
+ mutex_unlock(&dmabuf->lock);
+ return ptr;
}
EXPORT_SYMBOL_GPL(dma_buf_vmap);
@@ -501,7 +525,15 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
if (WARN_ON(!dmabuf))
return;
- if (dmabuf->ops->vunmap)
- dmabuf->ops->vunmap(dmabuf, vaddr);
+ BUG_ON(!dmabuf->vmap_ptr);
+ BUG_ON(dmabuf->vmapping_counter == 0);
+
+ mutex_lock(&dmabuf->lock);
+ if (--dmabuf->vmapping_counter == 0) {
+ if (dmabuf->ops->vunmap)
+ dmabuf->ops->vunmap(dmabuf, vaddr);
+ dmabuf->vmap_ptr = NULL;
+ }
+ mutex_unlock(&dmabuf->lock);
}
EXPORT_SYMBOL_GPL(dma_buf_vunmap);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index bd2e52c..e3bf2f6 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -119,8 +119,10 @@ struct dma_buf {
struct file *file;
struct list_head attachments;
const struct dma_buf_ops *ops;
- /* mutex to serialize list manipulation and attach/detach */
+ /* mutex to serialize list manipulation, attach/detach and vmap/unmap */
struct mutex lock;
+ unsigned vmapping_counter;
+ void *vmap_ptr;
void *priv;
};
--
1.7.11.7
Hi Linus,
A fairly small dma-buf pull request for 3.8 - only 2 patches. Could
you please pull?
Thanks!
~Sumit.
The following changes since commit f01af9f85855e38fbd601e033a8eac204cc4cc1c:
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc (2012-12-19 20:31:02 -0800)
are available in the git repository at:
git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
tags/tag-for-linus-3.8
for you to fetch changes up to ada65c74059f8c104f1b467c126205471634c435:
dma-buf: remove fallback for !CONFIG_DMA_SHARED_BUFFER (2012-12-20 12:05:06 +0530)
----------------------------------------------------------------
3.8: dma-buf minor updates
----------------------------------------------------------------
Maarten Lankhorst (1):
dma-buf: remove fallback for !CONFIG_DMA_SHARED_BUFFER
Rob Clark (1):
dma-buf: might_sleep() in dma_buf_unmap_attachment()
drivers/base/dma-buf.c | 2 +
include/linux/dma-buf.h | 99 -----------------------------------------------
2 files changed, 2 insertions(+), 99 deletions(-)
So I've gotten back to playing with prime for a day, and found some
old intel/radeon tests I had failing. I tracked it down to a lifetime
issue with the current code, and can think of a couple of fixes.
The problem scenario is:
1. i915: create gem object
2. i915: export gem object to prime
3. radeon: import gem object
4. close prime fd
5. radeon: unref object
6. i915: unref object
So we end up at this point with an imported-buffer record for the
dma_buf on the i915 file private. Now if a subsequent test (without
closing the drm fd) reallocates a dma_buf at the same address, we can
end up finding that stale record.
So why doesn't that reference get cleaned up? The reference gets
added at step 2 above, and when radeon unrefs the object, i915 gets
the dma-buf release callback; however, at that stage we don't
actually have the file priv to remove the pointer from, so it dangles
there.
Possible fixes:
a) take a reference on the dma_buf attached to the gem handle when we
export it, and keep it until the gem handle goes away. I'm unsure if
this could create zombie objects, since the dma buf holds a reference
on the gem object, but since the gem handle is separate from the
object it might work.
b) don't keep track of dma_bufs, keep track of gem objects: when we
get a lookup, check inside the gem object, since currently we NULL
out export_dma_buf when we hit the release path - apart from the fact
I'm sure the locking is foobar.
c) scan all the file privs for all the devices - no.
Anyone else any better plans?
Dave.
The debugfs show functions for client and heap were showing info for
the heap type instead of info for the individual heap.
Change-Id: Id5afe7963c8ddfafae1f959ce48dd5c2a5fcca07
Signed-off-by: Nishanth Peethambaran <nishanth(a)broadcom.com>
---
drivers/gpu/ion/ion.c | 18 +++++++++---------
include/linux/ion.h | 4 ++--
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
index 6aa817a..48cda5d 100644
--- a/drivers/gpu/ion/ion.c
+++ b/drivers/gpu/ion/ion.c
@@ -413,7 +413,7 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
/* if the client doesn't support this heap type */
if (!((1 << heap->type) & client->heap_mask))
continue;
- /* if the caller didn't specify this heap type */
+ /* if the caller didn't specify this heap id */
if (!((1 << heap->id) & heap_mask))
continue;
buffer = ion_buffer_create(heap, dev, len, align, flags);
@@ -597,11 +597,11 @@ static int ion_debug_client_show(struct seq_file *s, void *unused)
for (n = rb_first(&client->handles); n; n = rb_next(n)) {
struct ion_handle *handle = rb_entry(n, struct ion_handle,
node);
- enum ion_heap_type type = handle->buffer->heap->type;
+ int id = handle->buffer->heap->id;
- if (!names[type])
- names[type] = handle->buffer->heap->name;
- sizes[type] += handle->buffer->size;
+ if (!names[id])
+ names[id] = handle->buffer->heap->name;
+ sizes[id] += handle->buffer->size;
}
mutex_unlock(&client->lock);
@@ -1176,7 +1176,7 @@ static const struct file_operations ion_fops = {
};
static size_t ion_debug_heap_total(struct ion_client *client,
- enum ion_heap_type type)
+ int id)
{
size_t size = 0;
struct rb_node *n;
@@ -1186,7 +1186,7 @@ static size_t ion_debug_heap_total(struct ion_client *client,
struct ion_handle *handle = rb_entry(n,
struct ion_handle,
node);
- if (handle->buffer->heap->type == type)
+ if (handle->buffer->heap->id == id)
size += handle->buffer->size;
}
mutex_unlock(&client->lock);
@@ -1207,7 +1207,7 @@ static int ion_debug_heap_show(struct seq_file *s, void *unused)
for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
struct ion_client *client = rb_entry(n, struct ion_client,
node);
- size_t size = ion_debug_heap_total(client, heap->type);
+ size_t size = ion_debug_heap_total(client, heap->id);
if (!size)
continue;
if (client->task) {
@@ -1228,7 +1228,7 @@ static int ion_debug_heap_show(struct seq_file *s, void *unused)
for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
struct ion_buffer *buffer = rb_entry(n, struct ion_buffer,
node);
- if (buffer->heap->type != heap->type)
+ if (buffer->heap->id != heap->id)
continue;
total_size += buffer->size;
if (!buffer->handle_count) {
diff --git a/include/linux/ion.h b/include/linux/ion.h
index a7d399c..d8168fb 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -135,7 +135,7 @@ void ion_client_destroy(struct ion_client *client);
* @len: size of the allocation
* @align: requested allocation alignment, lots of hardware blocks have
* alignment requirements of some kind
- * @heap_mask: mask of heaps to allocate from, if multiple bits are set
+ * @heap_mask: mask of heap ids to allocate from, if multiple bits are set
* heaps will be tried in order from lowest to highest order bit
* @flags: heap flags, the low 16 bits are consumed by ion, the high 16
* bits are passed on to the respective heap and can be heap
@@ -236,7 +236,7 @@ struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd);
* struct ion_allocation_data - metadata passed from userspace for allocations
* @len: size of the allocation
* @align: required alignment of the allocation
- * @heap_mask: mask of heaps to allocate from
+ * @heap_mask: mask of heap ids to allocate from
* @flags: flags passed to heap
* @handle: pointer that will be populated with a cookie to use to refer
* to this allocation
--
1.7.0.4
The heap id is compared against the heap mask in ion_alloc, which is
incorrect; the heap type should be used.
Signed-off-by: Johan Mossberg <johan.mossberg(a)stericsson.com>
---
drivers/gpu/ion/ion.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
index fc152b9..c811380f 100644
--- a/drivers/gpu/ion/ion.c
+++ b/drivers/gpu/ion/ion.c
@@ -414,7 +414,7 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
if (!((1 << heap->type) & client->heap_mask))
continue;
/* if the caller didn't specify this heap type */
- if (!((1 << heap->id) & heap_mask))
+ if (!((1 << heap->type) & heap_mask))
continue;
buffer = ion_buffer_create(heap, dev, len, align, flags);
if (!IS_ERR_OR_NULL(buffer))
--
1.8.0
The heap mask used during client create was a bitmask of heap types,
while the heap mask used during buffer allocation was a bitmask of
heap ids. Now both of them take a bitmask of heap ids.
The debugfs show functions for client and heap were also showing
info for the heap type instead of info for the individual heap.
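For illustration, with both masks now expressed in heap ids, a client
restricted to a single heap would be set up roughly as follows (a
sketch against the ion API in this tree; MY_HEAP_ID, idev, len and
align are hypothetical values):

	struct ion_client *client;
	struct ion_handle *handle;

	/* client may only allocate from the heap with id MY_HEAP_ID */
	client = ion_client_create(idev, 1 << MY_HEAP_ID, "example-client");

	/* the allocation mask is likewise a mask of heap ids now */
	handle = ion_alloc(client, len, align, 1 << MY_HEAP_ID, 0);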
Change-Id: I9c2c142a63865f3d4250cd681dc5cde9638ca09f
Signed-off-by: Nishanth Peethambaran <nishanth(a)broadcom.com>
---
drivers/gpu/ion/ion.c | 26 +++++++++++++-------------
include/linux/ion.h | 8 ++++----
2 files changed, 17 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
index 6aa817a..b808972 100644
--- a/drivers/gpu/ion/ion.c
+++ b/drivers/gpu/ion/ion.c
@@ -62,7 +62,7 @@ struct ion_device {
* @dev: backpointer to ion device
* @handles: an rb tree of all the handles in this client
* @lock: lock protecting the tree of handles
- * @heap_mask: mask of all supported heaps
+ * @heap_mask: mask of all supported heap ids
* @name: used for debugging
* @task: used for debugging
*
@@ -398,7 +398,7 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
align, heap_mask, flags);
/*
* traverse the list of heaps available in this system in priority
- * order. If the heap type is supported by the client, and matches the
+ * order. If the heap id is supported by the client, and matches the
* request of the caller allocate from it. Repeat until allocate has
* succeeded or all heaps have been tried
*/
@@ -410,10 +410,10 @@ struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
down_read(&dev->lock);
for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
- /* if the client doesn't support this heap type */
- if (!((1 << heap->type) & client->heap_mask))
+ /* if the client doesn't support this heap id */
+ if (!((1 << heap->id) & client->heap_mask))
continue;
- /* if the caller didn't specify this heap type */
+ /* if the caller didn't specify this heap id */
if (!((1 << heap->id) & heap_mask))
continue;
buffer = ion_buffer_create(heap, dev, len, align, flags);
@@ -597,11 +597,11 @@ static int ion_debug_client_show(struct seq_file *s, void *unused)
for (n = rb_first(&client->handles); n; n = rb_next(n)) {
struct ion_handle *handle = rb_entry(n, struct ion_handle,
node);
- enum ion_heap_type type = handle->buffer->heap->type;
+ int id = handle->buffer->heap->id;
- if (!names[type])
- names[type] = handle->buffer->heap->name;
- sizes[type] += handle->buffer->size;
+ if (!names[id])
+ names[id] = handle->buffer->heap->name;
+ sizes[id] += handle->buffer->size;
}
mutex_unlock(&client->lock);
@@ -1176,7 +1176,7 @@ static const struct file_operations ion_fops = {
};
static size_t ion_debug_heap_total(struct ion_client *client,
- enum ion_heap_type type)
+ int id)
{
size_t size = 0;
struct rb_node *n;
@@ -1186,7 +1186,7 @@ static size_t ion_debug_heap_total(struct ion_client *client,
struct ion_handle *handle = rb_entry(n,
struct ion_handle,
node);
- if (handle->buffer->heap->type == type)
+ if (handle->buffer->heap->id == id)
size += handle->buffer->size;
}
mutex_unlock(&client->lock);
@@ -1207,7 +1207,7 @@ static int ion_debug_heap_show(struct seq_file *s, void *unused)
for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
struct ion_client *client = rb_entry(n, struct ion_client,
node);
- size_t size = ion_debug_heap_total(client, heap->type);
+ size_t size = ion_debug_heap_total(client, heap->id);
if (!size)
continue;
if (client->task) {
@@ -1228,7 +1228,7 @@ static int ion_debug_heap_show(struct seq_file *s, void *unused)
for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
struct ion_buffer *buffer = rb_entry(n, struct ion_buffer,
node);
- if (buffer->heap->type != heap->type)
+ if (buffer->heap->id != heap->id)
continue;
total_size += buffer->size;
if (!buffer->handle_count) {
diff --git a/include/linux/ion.h b/include/linux/ion.h
index a7d399c..f0399ae 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -72,7 +72,7 @@ struct ion_buffer;
/**
* struct ion_platform_heap - defines a heap in the given platform
* @type: type of the heap from ion_heap_type enum
- * @id: unique identifier for heap. When allocating (lower numbers
+ * @id: unique identifier for heap. When allocating (lower numbers
* will be allocated from first)
* @name: used for debug purposes
* @base: base address of heap in physical memory if applicable
@@ -114,7 +114,7 @@ void ion_reserve(struct ion_platform_data *data);
/**
* ion_client_create() - allocate a client and returns it
* @dev: the global ion device
- * @heap_mask: mask of heaps this client can allocate from
+ * @heap_mask: mask of heap ids this client can allocate from
* @name: used for debugging
*/
struct ion_client *ion_client_create(struct ion_device *dev,
@@ -135,7 +135,7 @@ void ion_client_destroy(struct ion_client *client);
* @len: size of the allocation
* @align: requested allocation alignment, lots of hardware blocks have
* alignment requirements of some kind
- * @heap_mask: mask of heaps to allocate from, if multiple bits are set
+ * @heap_mask: mask of heap ids to allocate from, if multiple bits are set
* heaps will be tried in order from lowest to highest order bit
* @flags: heap flags, the low 16 bits are consumed by ion, the high 16
* bits are passed on to the respective heap and can be heap
@@ -236,7 +236,7 @@ struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd);
* struct ion_allocation_data - metadata passed from userspace for allocations
* @len: size of the allocation
* @align: required alignment of the allocation
- * @heap_mask: mask of heaps to allocate from
+ * @heap_mask: mask of heap ids to allocate from
* @flags: flags passed to heap
* @handle: pointer that will be populated with a cookie to use to refer
* to this allocation
--
1.7.0.4
Hi Linus,
I would like to ask you to pull another set of Contiguous Memory
Allocator and DMA-mapping framework updates for v3.8.
The following changes since commit 29594404d7fe73cd80eaa4ee8c43dcc53970c60e:
Linux 3.7 (2012-12-10 19:30:57 -0800)
are available in the git repository at:
git://git.linaro.org/people/mszyprowski/linux-dma-mapping.git for-v3.8
for you to fetch changes up to 4009793e15d44469da1547a46ab129cc08ffa503:
drivers: cma: represent physical addresses as phys_addr_t (2012-12-11 09:28:09 +0100)
----------------------------------------------------------------
This pull request consists of only two patches. The first fixes a
long-standing issue with dmapools (the code predates current git
history) which forced all allocations to use the GFP_ATOMIC flag,
ignoring the flags passed by the caller. The second patch changes the
CMA code to use the phys_addr_t type correctly, which enables support
for LPAE systems.
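For example, after the dmapool fix an allocation like the following
actually honors GFP_KERNEL (and may sleep) instead of being silently
forced to GFP_ATOMIC (a sketch; the pool parameters are illustrative):

	struct dma_pool *pool;
	dma_addr_t handle;
	void *buf;

	pool = dma_pool_create("example-pool", dev, 512, 4, 0);
	buf = dma_pool_alloc(pool, GFP_KERNEL, &handle);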
Thanks!
Best regards
Marek Szyprowski
Samsung Poland R&D Center
Patch summary:
Marek Szyprowski (1):
mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls
Vitaly Andrianov (1):
drivers: cma: represent physical addresses as phys_addr_t
drivers/base/dma-contiguous.c | 24 ++++++++++--------------
include/linux/dma-contiguous.h | 4 ++--
mm/dmapool.c | 31 +++++++------------------------
3 files changed, 19 insertions(+), 40 deletions(-)
The goal of these patches is to allow ION clients (drivers or userland
applications) to use the Contiguous Memory Allocator (CMA).
To get more info about CMA:
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-February/001328.html
patches version 8:
- fix a memory leak when releasing the sg_table
- remove virt_to_phys from ion_cma_phys
patches version 7:
- rebased on Android kernel
- fix ion Makefile
- add ion_cma_map_kernel function
- remove CONFIG_CMA compilation flags from ion_heap.c
patches version 6:
- add private field in ion_platform_heap to pass the device
linked with CMA.
- rework CMA heap to use private field.
- prepare CMA heap for incoming dma_common_get_sgtable function
http://lists.linaro.org/pipermail/linaro-mm-sig/2012-June/002109.html
- simplify ion-ux500 driver.
patches version 5:
- port patches on android kernel 3.4 where ION use dmabuf
- add ion_cma_heap_map_dma and ion_cma_heap_unmap_dma functions
patches version 4:
- add ION_HEAP_TYPE_DMA heap type in ion_heap_type enum.
- CMA heap is now a "native" ION heap.
- add ion_heap_create_full function to keep backward compatibility.
- clean up included files in the CMA heap
- ux500-ion uses ion_heap_create_full instead of ion_heap_create
patches version 3:
- add a private field in the ion_heap structure instead of exposing
the ion_device structure to all heaps
- ion_cma_heap is no longer a platform driver
- ion_cma_heap uses the ion_heap private field to store the device
pointer and make the link with reserved CMA regions
- provide the ux500-ion driver and a configuration file for the
snowball board as an example of how to use CMA heaps
patches version 2:
- address review comments from Andy Green
Benjamin Gaignard (3):
gpu: ion: fix ion_platform_data definition
gpu: ion: add private field in ion_heap and ion_platform_heap
structure
gpu: ion: add CMA heap
drivers/gpu/ion/Kconfig | 5 ++
drivers/gpu/ion/Makefile | 1 +
drivers/gpu/ion/ion_cma_heap.c | 192 ++++++++++++++++++++++++++++++++++++++++
drivers/gpu/ion/ion_heap.c | 7 ++
drivers/gpu/ion/ion_priv.h | 16 ++++
include/linux/ion.h | 7 +-
6 files changed, 227 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/ion/ion_cma_heap.c
--
1.7.10