On Thu, Aug 23, 2012 at 07:49:42AM -0700, Laura Abbott wrote:
> The problem is more fundamental than that. In setting up the sg_list, Ion calls phys_to_page() on the carved-out memory. There's no guarantee in general that the page returned is valid at all, so if addr is a physical address of memory carved out with memblock_remove(), page_to_phys(phys_to_page(addr)) != addr.
>
> Internally, we've gotten around this by using the dma_address field in the sg_list to indicate carved-out memory. Obviously most APIs rely on sg_lists having a page, so this isn't very generic.
>
> Really, this problem boils down to the question: should the DMA APIs support memory removed with memblock_remove(), and if so, how?
They should not, plain and simple. They're designed to be used with bits of memory which have valid struct page structures associated with them, and that's pretty fundamental to them.
However, this raises one question. You say you're using memblock_remove(). How? Show the code, I want to know.
Now, the thing is, using memblock_remove, you are removing the memory from the system memory map, taking it away from the system. You're taking full responsibility for managing that memory. The system won't map it for you in any way.
Now, since you have full control over that memory, why would you want the overhead of using the DMA API with it?
Moreover, you have one bloody big problem with this if you want to map the memory into userspace _and_ have it cacheable there, and do DMA off it. There is _no_ _way_ for userspace to issue the cache maintenance instructions (not that userspace _should_ be doing that anyway.) The cacheflush system call is also _NOT_ _DESIGNED_ for DMA purposes (it's there to support self-modifying code ONLY.)
Now, next question, what's this large buffer for?