Welcome everyone again!
This is yet another quick update on Contiguous Memory Allocator patches.
This version includes another set of code cleanups requested by Mel
Gorman and a few minor bug fixes. I really hope that this version will
be accepted for merging and future development will be handled by
incremental patches.
ARM integration code has not been changed since v16. It provides an
implementation of the ideas that were discussed during the Linaro Sprint
meeting in Cambourne, August 2011. Here are the details:
This version provides a solution for complete integration of CMA into the
DMA mapping subsystem on the ARM architecture. The issue caused by double
mapping of DMA pages and possible aliasing in the coherent memory mapping
has finally been resolved, both for the GFP_ATOMIC case (allocations come
from the coherent memory pool) and the non-GFP_ATOMIC case (allocations
come from CMA-managed areas).
For coherent, nommu, ARMv4 and ARMv5 systems the current DMA-mapping
implementation has been kept.
For ARMv6+ systems, CMA has been enabled and a special pool of coherent
memory for atomic allocations has been created. The size of this pool
defaults to DEFAULT_CONSISTENT_DMA_SIZE/8, but can be changed with the
coherent_pool kernel parameter (if really required).
All atomic allocations are served from this pool. I've made a small
simplification here: there is no separate pool for writecombine
memory - such requests are also served from the coherent pool. I don't
think this simplification is a problem, as I found no driver
that uses dma_alloc_writecombine with the GFP_ATOMIC flag.
All non-atomic allocations are served from the CMA area. Kernel mappings
are updated to reflect the required memory attribute changes. This is
possible because during early boot, all CMA areas are remapped with 4KiB
pages in kernel low memory.
This version has been tested on a Samsung S5PC110 based Goni machine and
an Exynos4 UniversalC210 board with various V4L2 multimedia drivers.
Coherent atomic allocations have been tested by manually enabling DMA
bounce for the s3c-sdhci device.
All patches are prepared on top of Linux Kernel v3.3-rc2.
A few words for those who see CMA for the first time:
The Contiguous Memory Allocator (CMA) makes it possible for device
drivers to allocate big contiguous chunks of memory after the system
has booted.
The main difference from similar frameworks is that CMA allows the
memory region reserved for big chunk allocations to be transparently
reused as system memory, so no memory is wasted when no big chunk is
allocated. Once an allocation request is issued, the framework migrates
system pages away to create the required big chunk of physically
contiguous memory.
For more information you can refer to these LWN articles:
http://lwn.net/Articles/447405/ and http://lwn.net/Articles/450286/
as well as the links to previous versions of the CMA framework below.
The CMA framework was initially developed by Michal Nazarewicz
at Samsung Poland R&D Center. Since version 9, I have taken over the
development, as Michal left the company. Since version v17,
Michal has been working on the CMA patches again, and the current
version is the result of our joint open-source effort.
Best regards
Marek Szyprowski
Samsung Poland R&D Center
Links to previous versions of the patchset:
v19: <http://www.spinics.net/lists/linux-mm/msg29145.html>
v18: <http://www.spinics.net/lists/linux-mm/msg28125.html>
v17: <http://www.spinics.net/lists/arm-kernel/msg148499.html>
v16: <http://www.spinics.net/lists/linux-mm/msg25066.html>
v15: <http://www.spinics.net/lists/linux-mm/msg23365.html>
v14: <http://www.spinics.net/lists/linux-media/msg36536.html>
v13: (internal, intentionally not released)
v12: <http://www.spinics.net/lists/linux-media/msg35674.html>
v11: <http://www.spinics.net/lists/linux-mm/msg21868.html>
v10: <http://www.spinics.net/lists/linux-mm/msg20761.html>
v9: <http://article.gmane.org/gmane.linux.kernel.mm/60787>
v8: <http://article.gmane.org/gmane.linux.kernel.mm/56855>
v7: <http://article.gmane.org/gmane.linux.kernel.mm/55626>
v6: <http://article.gmane.org/gmane.linux.kernel.mm/55626>
v5: (intentionally left out as CMA v5 was identical to CMA v4)
v4: <http://article.gmane.org/gmane.linux.kernel.mm/52010>
v3: <http://article.gmane.org/gmane.linux.kernel.mm/51573>
v2: <http://article.gmane.org/gmane.linux.kernel.mm/50986>
v1: <http://article.gmane.org/gmane.linux.kernel.mm/50669>
Changelog:
v20:
1. Addressed even more comments from Mel Gorman and added his Acked-by tag
on most of the core memory management patches.
2. Squashed a few minor fixes here and there (corrected alignment calculation
for region limit, added adjusting low watermark level on reclaim, fixed
return value of __alloc_contig_migrate_range function)
3. Removed problematic "mm: page_alloc: update migrate type of pages on pcp
when isolating" patch and slightly altered MIGRATE_CMA type handling, which
solved the problem
4. Rebased onto v3.3-rc2
v19:
1. Addressed another set of comments and suggestions from Mel Gorman, mainly
related to breaking patches into smaller, single-feature related chunks
and rewriting already existing functions in memory compaction code.
2. Completely reworked the page reclaim code: removed it from split_free_page()
and introduced a direct call from alloc_contig_range().
3. Merged a fix from Mans Rullgard for correct cma area limit alignment.
4. Replaced broken "mm: page_alloc: set_migratetype_isolate: drain PCP prior
to isolating" patch with "mm: page_alloc: update migrate type of pages on
pcp when isolating" which is another attempt to solve this issue without
touching free_pcppages_bulk().
5. Rebased onto v3.3-rc1
v18:
1. Addressed comments and suggestions from Mel Gorman related to changes
in memory compaction code, most important points:
- removed "mm: page_alloc: handle MIGRATE_ISOLATE in free_pcppages_bulk()"
and moved all the logic to set_migratetype_isolate - see
"mm: page_alloc: set_migratetype_isolate: drain PCP prior to isolating"
patch
- code in the "mm: compaction: introduce isolate_{free,migrate}pages_range()"
patch has been simplified and improved
- removed "mm: mmzone: introduce zone_pfn_same_memmap()" patch
2. Fixed crash on initialization if HIGHMEM is available on ARM platforms
3. Fixed problems with allocation of contiguous memory if all free pages
are occupied by page cache and reclaim is required.
4. Added a workaround for temporary migration failures (now CMA tries
to allocate a different memory block in such a case), which greatly
increased the reliability of CMA.
5. Minor cleanup here and there.
6. Rebased onto v3.2-rc7 kernel tree.
v17:
1. Replaced the whole CMA core memory migration code with new code kindly
provided by Michal Nazarewicz. The new code is based on the memory
compaction framework instead of memory hotplug, as it was before. This
change was suggested by Mel Gorman.
2. Addressed most of the comments from Andrew Morton and Mel Gorman in
the rest of the CMA code.
3. Fixed broken initialization on ARM systems with DMA zone enabled.
4. Rebased onto v3.2-rc2 kernel.
v16:
1. merged a fixup from Michal Nazarewicz to address comments from Dave
Hansen about checking if pfns belong to the same memory zone
2. merged a fix from Michal Nazarewicz for incorrect handling of pages
which belong to a page block in MIGRATE_ISOLATE state; in very
rare cases the migrate type of a page block might have been changed
from MIGRATE_CMA to MIGRATE_MOVABLE because of this bug
3. moved some common code to include/asm-generic
4. added support for the x86 DMA-mapping framework for pci-dma hardware;
CMA can now be even more widely tested on KVM/QEMU and a lot of common
x86 boxes
5. rebased onto next-20111005 kernel tree, which includes changes in ARM
DMA-mapping subsystem (CONSISTENT_DMA_SIZE removal)
6. removed patch for CMA s5p-fimc device private regions (served only as
example) and provided the one that matches real life case - s5p-mfc
device
v15:
1. fixed calculation of the total memory after activating a CMA area (broken
since v12)
2. more code cleanup in drivers/base/dma-contiguous.c
3. added address limit for default CMA area
4. rewrote ARM DMA integration:
- removed "ARM: DMA: steal memory for DMA coherent mappings" patch
- kept current DMA mapping implementation for coherent, nommu and
ARMv4/ARMv5 systems
- enabled CMA for all ARMv6+ systems
- added separate, small pool for coherent atomic allocations, defaults
to CONSISTENT_DMA_SIZE/8, but can be changed with kernel parameter
coherent_pool=[size]
v14:
1. Merged with "ARM: DMA: steal memory for DMA coherent mappings"
patch, added support for GFP_ATOMIC allocations.
2. Added checks for NULL device pointer
v13: (internal, intentionally not released)
v12:
1. Fixed 2 nasty bugs in dma-contiguous allocator:
- alignment argument was not passed correctly
- range for dma_release_from_contiguous was not checked correctly
2. Added support for an architecture-specific dma_contiguous_early_fixup()
function
3. CMA and DMA-mapping integration for the ARM architecture has been
rewritten to take care of the memory aliasing issue that might
happen on newer ARM CPUs (mapping the same pages with different
cache attributes is forbidden). TODO: add support for GFP_ATOMIC
allocations based on the "ARM: DMA: steal memory for DMA coherent
mappings" patch and implement support for contiguous memory areas
that are placed in the HIGHMEM zone
v11:
1. Removed genalloc usage and replaced it with direct calls to
bitmap_* functions, dropped patches that are not needed
anymore (genalloc extensions)
2. Moved all contiguous area management code from mm/cma.c
to drivers/base/dma-contiguous.c
3. Renamed cm_alloc/free to dma_alloc/release_from_contiguous
4. Introduced global, system wide (default) contiguous area
configured with kernel config and kernel cmdline parameters
5. Simplified initialization to just one function:
dma_declare_contiguous()
6. Added example of device private memory contiguous area
v10:
1. Rebased onto 3.0-rc2 and resolved all conflicts
2. Simplified CMA to be just a pure memory allocator, for use
with platform/bus specific subsystems, like dma-mapping.
Removed all device-specific functions and calls.
3. Integrated with ARM DMA-mapping subsystem.
4. Code cleanup here and there.
5. Removed private context support.
v9: 1. Rebased onto 2.6.39-rc1 and resolved all conflicts
2. Fixed a bunch of nasty bugs that happened when the allocation
failed (mainly kernel oops due to NULL ptr dereference).
3. Introduced testing code: cma-regions compatibility layer and
videobuf2-cma memory allocator module.
v8: 1. The alloc_contig_range() function has now been separated from
CMA and put in page_allocator.c. This function tries to
migrate all LRU pages in specified range and then allocate the
range using alloc_contig_freed_pages().
2. Support for MIGRATE_CMA has been separated from the CMA code.
I have not tested if CMA works with ZONE_MOVABLE but I see no
reasons why it shouldn't.
3. I have added a @private argument when creating CMA contexts so
that one can reserve memory and not share it with the rest of
the system. This way, CMA acts only as an allocation algorithm.
v7: 1. A lot of functionality that handled driver->allocator_context
mapping has been removed from the patchset. This is not to say
that this code is not needed, it's just not worth posting
everything in one patchset.
Currently, CMA is "just" an allocator. It uses its own
migrate type (MIGRATE_CMA) for defining ranges of pageblocks
which behave just like ZONE_MOVABLE but, unlike the latter, can
be put in arbitrary places.
2. The migration code that was introduced in the previous version
actually started working.
v6: 1. Most importantly, v6 introduces support for memory migration.
The implementation is not yet complete though.
Migration support means that when CMA is not using memory
reserved for it, page allocator can allocate pages from it.
When CMA wants to use the memory, the pages have to be moved
and/or evicted so as to make room for CMA.
To make this possible, it must be guaranteed that only movable and
reclaimable pages are allocated in CMA-controlled regions.
This is done by introducing a MIGRATE_CMA migrate type that
guarantees exactly that.
Some of the migration code is "borrowed" from Kamezawa
Hiroyuki's alloc_contig_pages() implementation. The main
difference is that, thanks to the MIGRATE_CMA migrate type, CMA
assumes that memory it controls is always movable or
reclaimable, so it makes allocation decisions regardless of
whether some pages are actually allocated, and migrates them
if needed.
The most interesting patches from the patchset that implement
the functionality are:
09/13: mm: alloc_contig_free_pages() added
10/13: mm: MIGRATE_CMA migration type added
11/13: mm: MIGRATE_CMA isolation functions added
12/13: mm: cma: Migration support added [wip]
Currently, kernel panics in some situations which I am trying
to investigate.
2. cma_pin() and cma_unpin() functions have been added (after
a conversation with Johan Mossberg). The idea is that whenever
hardware does not use the memory (no transaction is in progress) the
chunk can be moved around. This would allow defragmentation to
be implemented if desired. No defragmentation algorithm is
provided at this time.
3. Sysfs support has been replaced with debugfs. I always felt
unsure about the sysfs interface and when Greg KH pointed it
out I finally got to rewrite it to debugfs.
v5: (intentionally left out as CMA v5 was identical to CMA v4)
v4: 1. The "asterisk" flag has been removed in favour of requiring
that platform will provide a "*=<regions>" rule in the map
attribute.
2. The terminology has been changed slightly renaming "kind" to
"type" of memory. In the previous revisions, the documentation
indicated that device drivers define memory kinds and now,
v3: 1. The command line parameters have been removed (and moved to
a separate patch, the fourth one). As a consequence, the
cma_set_defaults() function has been changed -- it no longer
accepts a string with list of regions but an array of regions.
2. The "asterisk" attribute has been removed. Now, each region
has an "asterisk" flag which lets one specify whether this
region should be considered an "asterisk" region.
3. SysFS support has been moved to a separate patch (the third one
in the series) and now also includes list of regions.
v2: 1. The "cma_map" command line parameter has been removed. In exchange,
a SysFS entry has been created under kernel/mm/contiguous.
The intended way of specifying the attributes is
a cma_set_defaults() function called by platform initialisation
code. "regions" attribute (the string specified by "cma"
command line parameter) can be overwritten with command line
parameter; the other attributes can be changed during run-time
using the SysFS entries.
2. The behaviour of the "map" attribute has been modified
slightly. Currently, if no rule matches given device it is
assigned regions specified by the "asterisk" attribute. It is
by default built from the region names given in "regions"
attribute.
3. Devices can register private regions as well as regions that
can be shared but are not reserved using standard CMA
mechanisms. A private region has no name and can be accessed
only by devices that have the pointer to it.
4. The way allocators are registered has changed. Currently,
a cma_allocator_register() function is used for that purpose.
Moreover, allocators are attached to regions the first time
memory is registered from the region or when allocator is
registered which means that allocators can be dynamic modules
that are loaded after the kernel booted (of course, it won't be
possible to allocate a chunk of memory from a region if its
allocator is not loaded).
5. Index of new functions:
+static inline dma_addr_t __must_check
+cma_alloc_from(const char *regions, size_t size,
+ dma_addr_t alignment)
+static inline int
+cma_info_about(struct cma_info *info, const char *regions)
+int __must_check cma_region_register(struct cma_region *reg);
+dma_addr_t __must_check
+cma_alloc_from_region(struct cma_region *reg,
+ size_t size, dma_addr_t alignment);
+static inline dma_addr_t __must_check
+cma_alloc_from(const char *regions,
+ size_t size, dma_addr_t alignment);
+int cma_allocator_register(struct cma_allocator *alloc);
Patches in this patchset:
Marek Szyprowski (6):
mm: extract reclaim code from __alloc_pages_direct_reclaim()
mm: trigger page reclaim in alloc_contig_range() to stabilize
watermarks
drivers: add Contiguous Memory Allocator
X86: integrate CMA with DMA-mapping subsystem
ARM: integrate CMA with DMA-mapping subsystem
ARM: Samsung: use CMA for 2 memory banks for s5p-mfc device
Michal Nazarewicz (9):
mm: page_alloc: remove trailing whitespace
mm: compaction: introduce isolate_migratepages_range().
mm: compaction: introduce map_pages()
mm: compaction: introduce isolate_freepages_range()
mm: compaction: export some of the functions
mm: page_alloc: introduce alloc_contig_range()
mm: page_alloc: change fallbacks array handling
mm: mmzone: MIGRATE_CMA migration type added
mm: page_isolation: MIGRATE_CMA isolation functions added
Documentation/kernel-parameters.txt | 9 +
arch/Kconfig | 3 +
arch/arm/Kconfig | 2 +
arch/arm/include/asm/dma-contiguous.h | 16 ++
arch/arm/include/asm/mach/map.h | 1 +
arch/arm/kernel/setup.c | 9 +-
arch/arm/mm/dma-mapping.c | 368 ++++++++++++++++++++++++------
arch/arm/mm/init.c | 22 ++-
arch/arm/mm/mm.h | 3 +
arch/arm/mm/mmu.c | 31 ++-
arch/arm/plat-s5p/dev-mfc.c | 51 +----
arch/x86/Kconfig | 1 +
arch/x86/include/asm/dma-contiguous.h | 13 +
arch/x86/include/asm/dma-mapping.h | 4 +
arch/x86/kernel/pci-dma.c | 18 ++-
arch/x86/kernel/pci-nommu.c | 8 +-
arch/x86/kernel/setup.c | 2 +
drivers/base/Kconfig | 89 +++++++
drivers/base/Makefile | 1 +
drivers/base/dma-contiguous.c | 405 ++++++++++++++++++++++++++++++++
include/asm-generic/dma-contiguous.h | 27 ++
include/linux/device.h | 4 +
include/linux/dma-contiguous.h | 110 +++++++++
include/linux/gfp.h | 12 +
include/linux/mmzone.h | 38 +++-
include/linux/page-isolation.h | 18 +-
mm/Kconfig | 2 +-
mm/Makefile | 3 +-
mm/compaction.c | 418 +++++++++++++++++++++------------
mm/internal.h | 33 +++
mm/memory-failure.c | 2 +-
mm/memory_hotplug.c | 6 +-
mm/page_alloc.c | 377 ++++++++++++++++++++++++++---
mm/page_isolation.c | 15 +-
mm/vmstat.c | 3 +
35 files changed, 1757 insertions(+), 367 deletions(-)
create mode 100644 arch/arm/include/asm/dma-contiguous.h
create mode 100644 arch/x86/include/asm/dma-contiguous.h
create mode 100644 drivers/base/dma-contiguous.c
create mode 100644 include/asm-generic/dma-contiguous.h
create mode 100644 include/linux/dma-contiguous.h
--
1.7.1.569.g6f426
Hi,
Recently we've implemented an IOMMU heap, attached as one of the ION
memory manager(*1) heaps/backends. This implementation is completely
independent of any SoC, so it can be used for other SoCs as well. If
our implementation is not totally wrong, it would be nice to share some
experience/code here, since Ion is still not entirely clear to me yet.
I found that Linaro also seems to have started some ION work(*2). I
think that some of the Ion features could be supported/replaced with
Linaro UMM. For example, "ion_iommu_heap" is presently implemented with
the standard IOMMU API, but could it also be implemented with the coming
DMA API? Also, DMABUF could be used in the Ion core part as well, I guess.
Currently there's no Ion memmgr code in the upstream
"drivers/staging/android"(*3). Is there any plan to support this? Or
is this considered a completely _temporary_ solution that is never
going to be added?
It would be nice if we could share some of our effort here, since quite
a few Android users need Ion, even temporarily.
Any comment would be really appreciated.
Hiroshi DOYU
*1: https://android.googlesource.com/kernel/common.git
$ git clone https://android.googlesource.com/kernel/common.git
$ cd common
$ git checkout -b android origin/android-3.0
$ git grep -e "<linux/ion.h>" drivers/
drivers/gpu/ion/ion.c:#include <linux/ion.h>
drivers/gpu/ion/ion_carveout_heap.c:#include <linux/ion.h>
drivers/gpu/ion/ion_heap.c:#include <linux/ion.h>
drivers/gpu/ion/ion_priv.h:#include <linux/ion.h>
drivers/gpu/ion/ion_system_heap.c:#include <linux/ion.h>
drivers/gpu/ion/ion_system_mapper.c:#include <linux/ion.h>
drivers/gpu/ion/tegra/tegra_ion.c:#include <linux/ion.h>
*2: https://blueprints.launchpad.net/linaro-mm-sig/+spec/linaro-mmwg-cma-ion
*3: http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git;a=tree;f=driv…
Hello Everyone,
A very happy new year 2012! :)
This patchset is an RFC on the way videobuf2 can be adapted to add support for
the DMA buffer sharing framework[1].
The original patch-set for the idea, and PoC of buffer sharing, was by
Tomasz Stanislawski <t.stanislaws(a)samsung.com>, who demonstrated buffer sharing
between two v4l2 devices[2]. This RFC is needed to adapt these patches to the
changes that have happened in the DMA buffer sharing framework over the past
few months.
To begin with, I have tried to adapt only the dma-contig allocator, and only as
a user of dma-buf buffers. I am currently working on the v4l2-as-an-exporter
changes, and will share them as soon as I get them into some shape.
As with the PoC [2], the handle for sharing buffers is a file-descriptor (fd).
The usage documentation is also a part of [1].
So, the current RFC has the following limitations:
- Only buffer sharing as a buffer user,
- doesn't handle cases where even for a contiguous buffer, the sg_table can have
more than one scatterlist entry.
Thanks and best regards,
~Sumit.
[1]: dma-buf patchset at: https://lkml.org/lkml/2011/12/26/29
[2]: http://lwn.net/Articles/454389
Sumit Semwal (4):
v4l: Add DMABUF as a memory type
v4l:vb2: add support for shared buffer (dma_buf)
v4l:vb: remove warnings about MEMORY_DMABUF
v4l:vb2: Add dma-contig allocator as dma_buf user
drivers/media/video/videobuf-core.c | 4 +
drivers/media/video/videobuf2-core.c | 186 +++++++++++++++++++++++++++-
drivers/media/video/videobuf2-dma-contig.c | 125 +++++++++++++++++++
include/linux/videodev2.h | 8 ++
include/media/videobuf2-core.h | 30 +++++
5 files changed, 352 insertions(+), 1 deletions(-)
--
1.7.5.4
The PASR framework brings support for the Partial Array Self-Refresh DDR power
management feature. PASR was introduced in LP-DDR2, and is also present
in DDR3.
PASR provides 4 modes:
* Single-Ended: Only 1/1, 1/2, 1/4 or 1/8 are refreshed, masking starting at
the end of the DDR die.
* Double-Ended: Same as Single-Ended, but refresh-masking does not
necessarily start at the end of the DDR die.
* Bank-Selective: Refresh of each bank of a die can be masked or unmasked via
a dedicated DDR register (MR16). This mode is convenient for DDR configured
in BRC (Bank-Row-Column) mode.
* Segment-Selective: Refresh of each segment of a die can be masked or unmasked
via a dedicated DDR register (MR17). This mode is convenient for DDR configured
in RBC (Row-Bank-Column) mode.
The role of this framework is to stop the refresh of unused memory to reduce
DDR power consumption.
It supports the Bank-Selective and Segment-Selective modes, as these are the
best suited to modern OSes.
At early boot stage, a representation of the physical DDR layout is built:
Die 0
_______________________________
| I--------------------------I |
| I Bank or Segment 0 I |
| I--------------------------I |
| I--------------------------I |
| I Bank or Segment 1 I |
| I--------------------------I |
| I--------------------------I |
| I Bank or Segment ... I |
| I--------------------------I |
| I--------------------------I |
| I Bank or Segment n I |
| I--------------------------I |
|______________________________|
...
Die n
_______________________________
| I--------------------------I |
| I Bank or Segment 0 I |
| I--------------------------I |
| I--------------------------I |
| I Bank or Segment 1 I |
| I--------------------------I |
| I--------------------------I |
| I Bank or Segment ... I |
| I--------------------------I |
| I--------------------------I |
| I Bank or Segment n I |
| I--------------------------I |
|______________________________|
The first level is a table where elements represent a die:
* Base address,
* Number of segments,
* Table representing banks/segments,
* MR16/MR17 refresh mask,
* DDR Controller callback to update MR16/MR17 refresh mask.
The second level is the section tables representing the banks or segments,
depending on hardware configuration:
* Base address,
* Unused memory size counter,
* Possible pointer to another section it depends on (E.g. Interleaving)
When some memory becomes unused, the allocator owning this memory calls the
PASR Framework's pasr_put(phys_addr, size) function. The framework finds the
sections impacted and updates their counters accordingly.
If a section counter reaches the section size, the refresh of the section is
masked. If the corresponding section has a dependency on another section
(e.g. because of DDR interleaving, see the figure above), it checks that the
"paired" section is also unused before updating the refresh mask.
When some unused memory is requested by the allocator, the allocator owning
this memory calls the PASR Framework's pasr_get(phys_addr, size) function. The
framework finds the sections impacted and updates their counters accordingly.
If, before the update, the section counter was equal to the section size, the
refresh of the section is unmasked. If the corresponding section has a
dependency on another section, it also unmasks the refresh of the other
section.
Patch 3/6 contains modifications for the Buddy allocator. The induced overhead
is very low because the PASR framework is notified only on "MAX_ORDER"
pageblocks.
Support for other allocators (PMEM, HWMEM...) and Memory Hotplug will be added
in future patchset revisions.
Maxime Coquelin (6):
PASR: Initialize DDR layout
PASR: Add core Framework
PASR: mm: Integrate PASR in Buddy allocator
PASR: Call PASR initialization
PASR: Add Documentation
PASR: Ux500: Add PASR support
Documentation/pasr.txt | 183 ++++++++++++
arch/arm/Kconfig | 1 +
arch/arm/kernel/setup.c | 1 +
arch/arm/mach-ux500/include/mach/hardware.h | 11 +
arch/arm/mach-ux500/include/mach/memory.h | 8 +
drivers/mfd/db8500-prcmu.c | 67 +++++
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 1 +
drivers/staging/pasr/Kconfig | 19 ++
drivers/staging/pasr/Makefile | 6 +
drivers/staging/pasr/core.c | 168 +++++++++++
drivers/staging/pasr/helper.c | 84 ++++++
drivers/staging/pasr/helper.h | 16 +
drivers/staging/pasr/init.c | 403 +++++++++++++++++++++++++++
drivers/staging/pasr/ux500.c | 58 ++++
include/linux/pasr.h | 143 ++++++++++
include/linux/ux500-pasr.h | 11 +
init/main.c | 8 +
mm/page_alloc.c | 9 +
19 files changed, 1199 insertions(+), 0 deletions(-)
create mode 100644 Documentation/pasr.txt
create mode 100644 drivers/staging/pasr/Kconfig
create mode 100644 drivers/staging/pasr/Makefile
create mode 100644 drivers/staging/pasr/core.c
create mode 100644 drivers/staging/pasr/helper.c
create mode 100644 drivers/staging/pasr/helper.h
create mode 100644 drivers/staging/pasr/init.c
create mode 100644 drivers/staging/pasr/ux500.c
create mode 100644 include/linux/pasr.h
create mode 100644 include/linux/ux500-pasr.h
--
1.7.8
When using DMABUF streaming in non-planar mode, the v4l2_buffer::length
field holds the length of the buffer as required by userspace. Copy it
to the length of the first plane at QBUF time, as the plane length is
later checked against the dma-buf size.
Signed-off-by: Laurent Pinchart <laurent.pinchart(a)ideasonboard.com>
---
drivers/media/video/videobuf2-core.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/drivers/media/video/videobuf2-core.c b/drivers/media/video/videobuf2-core.c
index 29cf6ed..8eb4d08 100644
--- a/drivers/media/video/videobuf2-core.c
+++ b/drivers/media/video/videobuf2-core.c
@@ -927,6 +927,7 @@ static int __fill_vb2_buffer(struct vb2_buffer *vb, const struct v4l2_buffer *b,
}
if (b->memory == V4L2_MEMORY_DMABUF) {
v4l2_planes[0].m.fd = b->m.fd;
+ v4l2_planes[0].length = b->length;
}
}
--
1.7.3.4
From: "joro(a)8bytes.org" <joro(a)8bytes.org>
Subject: Re: [PATCH 2/2] ARM: IOMMU: Tegra30: Add iommu_ops for SMMU driver
Date: Tue, 24 Jan 2012 15:25:21 +0100
Message-ID: <20120124142521.GE6269(a)8bytes.org>
> On Tue, Jan 24, 2012 at 03:46:01PM +0200, Felipe Balbi wrote:
> > On Tue, Jan 24, 2012 at 02:41:21PM +0100, Hiroshi Doyu wrote:
> > > Actually I really like the concept of this "domain" now, which hides
> > > the H/W hierarchy from users.
> > >
> > > But in Tegra SMMU/GART case, there's a single one IOMMU device in the
> > > system. Keeping a iommu device list in a domain and iterating iommu
> > > device list in each iommu_ops seem to be so nice, but I'm afraid that
> > > this may be a bit too much when one already knows that there's only
> > > one IOMMU device in the system.
> > >
> > > If there's no actual problem for 1-1 mapping between IOMMU H/Ws and
> > > domains, I think that it may not so bad to keep the original code(1-1)
> > > for GART and SMMU. What do you think?
> >
> > I think it boils down to "extensability". If you can truly/fully
> > guarantee that there will *always* be a single IOMMU on all upcoming
> > Tegras, then it's really overkill.
> >
> > But if there's even a remote possibility of the HW being changed and you
> > end up with more IOMMUs, things start to feel necessary for the sake of
> > making it easy to extend.
>
> Right. But I am fine with the logic as-is when there is only one SMMU in
> the system. But please also change the IOMMU driver so that it really
> only initializes a single SMMU. When boards pop up with more than one
> you we notice that assumption in the code again and are reminded to
> change it.
Fixed.
I'll revisit 4MB page size support and the above multiple-IOMMU-devices-
per-domain support later.
Attached is the updated patch.
Hello Everyone,
Following some discussion as an RFC, here is the patchset introducing the
DMA buffer sharing mechanism - the change history is in the changelog below.
Various subsystems - V4L2, GPU-accessors, DRI to name a few - have felt the
need to have a common mechanism to share memory buffers across different
devices - ARM, video hardware, GPU.
This need comes forth from a variety of use cases including cameras, image
processing, video recorders, sound processing, DMA engines, GPU and display
buffers, amongst others.
This patch attempts to define such a buffer sharing mechanism - it is the
result of discussions from a couple of memory-management mini-summits held by
Linaro to understand and address common needs around memory management. [1]
A new dma_buf buffer object is added, with operations and API to allow easy
sharing of this buffer object across devices.
The framework allows:
- a new buffer object to be created with fixed size, associated with a file
pointer and allocator-defined operations for this buffer object. This
operation is called the 'export' operation.
- different devices to 'attach' themselves to this buffer object, to facilitate
backing storage negotiation, using dma_buf_attach() API.
- this exported buffer object to be shared with the other entity by asking for
its 'file-descriptor (fd)', and sharing the fd across.
- a received fd to get the buffer object back, where it can be accessed using
the associated exporter-defined operations.
- the exporter and user to share the buffer object's scatterlist using
map_dma_buf and unmap_dma_buf operations.
Documentation present in the patch-set gives more details.
For the 1st version, dma-buf is marked as an EXPERIMENTAL driver; the mark
can be removed in later versions with additional usage and testing.
*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
For this first version, a buffer shared using the dma_buf sharing API:
- *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
this framework.
- may be used *ONLY* by importers that do not need CPU access to the buffer.
This is based on design suggestions from many people at the mini-summits,
most notably from Arnd Bergmann <arnd(a)arndb.de>, Rob Clark <rob(a)ti.com> and
Daniel Vetter <daniel(a)ffwll.ch>.
The implementation is inspired from proof-of-concept patch-set from
Tomasz Stanislawski <t.stanislaws(a)samsung.com>, who demonstrated buffer sharing
between two v4l2 devices. [2]
Some sample implementations and WIP for dma-buf users and exporters are
available at [3] and [4]. [These are not being submitted for discussion /
inclusion right now, but are for reference only]
References:
[1]: https://wiki.linaro.org/OfficeofCTO/MemoryManagement
[2]: http://lwn.net/Articles/454389
[3]: Dave Airlie's prime support:
http://cgit.freedesktop.org/~airlied/linux/log/?h=drm-prime-dmabuf
[4]: Rob Clark's sharing between DRM and V4L2:
https://github.com/robclark/kernel-omap4/commits/drmplane-dmabuf
Patchset based on top of 3.2-rc7, the current version can be found at
http://git.linaro.org/gitweb?p=people/sumitsemwal/linux-3.x.git
Branch: dmabuf-patch-v1
Earlier versions:
RFC:
v3 at: https://lkml.org/lkml/2011/12/19/50
v2 at: https://lkml.org/lkml/2011/12/2/53
v1 at: https://lkml.org/lkml/2011/10/11/92
Wish you all happy vacations and a very happy, joyous and prosperous new year
2012 :)
Best regards,
~Sumit Semwal
History:
v4:
- Review comments incorporated:
- from Konrad Rzeszutek Wilk [https://lkml.org/lkml/2011/12/20/209]
- corrected language in some comments
- re-ordered struct definitions for readability
- added might_sleep() call in dma_buf_map_attachment() wrapper
- from Rob Clark [https://lkml.org/lkml/2011/12/23/196]
- Made dma-buf EXPERIMENTAL for 1st version.
v3:
- Review comments incorporated:
- from Konrad Rzeszutek Wilk [https://lkml.org/lkml/2011/12/3/45]
- replaced BUG_ON with WARN_ON - various places
- added some error-checks
- replaced EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
- some cosmetic / documentation comments
- from Arnd Bergmann, Daniel Vetter, Rob Clark
[https://lkml.org/lkml/2011/12/5/321]
- removed mmap() fop and dma_buf_op, also the sg_sync* operations, and
documented that mmap is not allowed for exported buffer
- updated documentation to clearly state when migration is allowed
- changed kconfig
- some error code checks
- from Rob Clark [https://lkml.org/lkml/2011/12/5/572]
- update documentation to allow map_dma_buf to return -EINTR
v2:
- Review comments incorporated:
- from Tomasz Stanislawski [https://lkml.org/lkml/2011/10/14/136]
- kzalloc moved out of critical section
- corrected some in-code comments
- from Dave Airlie [https://lkml.org/lkml/2011/11/25/123]
- from Daniel Vetter and Rob Clark [https://lkml.org/lkml/2011/11/26/53]
- use struct sg_table in place of struct scatterlist
- rename {get,put}_scatterlist to {map,unmap}_dma_buf
- add new wrapper APIs dma_buf_{map,unmap}_attachment for ease of users
- documentation updates as per review comments from Randy Dunlap
[https://lkml.org/lkml/2011/10/12/439]
v1: original
Sumit Semwal (3):
dma-buf: Introduce dma buffer sharing mechanism
dma-buf: Documentation for buffer sharing framework
dma-buf: mark EXPERIMENTAL for 1st release.
Documentation/dma-buf-sharing.txt | 224 ++++++++++++++++++++++++++++
drivers/base/Kconfig | 11 ++
drivers/base/Makefile | 1 +
drivers/base/dma-buf.c | 291 +++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 176 ++++++++++++++++++++++
5 files changed, 703 insertions(+), 0 deletions(-)
create mode 100644 Documentation/dma-buf-sharing.txt
create mode 100644 drivers/base/dma-buf.c
create mode 100644 include/linux/dma-buf.h
--
1.7.5.4