On 4/2/26 18:13, Bence Csókás wrote:
>
> Hi,
>
> I just came across this commit while researching something else.
> The original patch had too few context lines, so here's the diff with `-U10`.
>
> On 3/18/25 20:22, Daniel Almeida wrote:
>> From: Asahi Lina <lina(a)asahilina.net>
>>
>> Since commit 21aa27ddc582 ("drm/shmem-helper: Switch to reservation
>> lock"), the drm_gem_shmem_vmap and drm_gem_shmem_vunmap functions
>> require that the caller holds the DMA reservation lock for the object.
>> Add lockdep assertions to help validate this.
>
> There were already lockdep assertions...
Good point, I completely missed that.
>
>> Signed-off-by: Asahi Lina <lina(a)asahilina.net>
>> Signed-off-by: Daniel Almeida <daniel.almeida(a)collabora.com>
>> Reviewed-by: Christian König <christian.koenig(a)amd.com>
>> Signed-off-by: Lyude Paul <lyude(a)redhat.com>
>> Link: https://lore.kernel.org/r/20250318-drm-gem-shmem-v1-1-64b96511a84f@collabor…
>> ---
>>  drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> index aa43265f4f4f..0b41f0346bad 100644
>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> @@ -341,20 +341,22 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
>>   *
>>   * Returns:
>>   * 0 on success or a negative error code on failure.
>>   */
>>  int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>>                                struct iosys_map *map)
>>  {
>>          struct drm_gem_object *obj = &shmem->base;
>>          int ret = 0;
>>
>> +        dma_resv_assert_held(obj->resv);
>> +
>>          if (drm_gem_is_imported(obj)) {
>>                  ret = dma_buf_vmap(obj->dma_buf, map);
>>          } else {
>>                  pgprot_t prot = PAGE_KERNEL;
>>
>>                  dma_resv_assert_held(shmem->base.resv);
>
> ... right here, and
>
>>          if (refcount_inc_not_zero(&shmem->vmap_use_count)) {
>>                  iosys_map_set_vaddr(map, shmem->vaddr);
>>                  return 0;
>> @@ -401,20 +403,22 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
>>   * drops to zero.
>>   *
>>   * This function hides the differences between dma-buf imported and natively
>>   * allocated objects.
>>   */
>>  void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>>                                   struct iosys_map *map)
>>  {
>>          struct drm_gem_object *obj = &shmem->base;
>>
>> +        dma_resv_assert_held(obj->resv);
>> +
>>          if (drm_gem_is_imported(obj)) {
>>                  dma_buf_vunmap(obj->dma_buf, map);
>>          } else {
>>                  dma_resv_assert_held(shmem->base.resv);
>
> ...here.
>
>>          if (refcount_dec_and_test(&shmem->vmap_use_count)) {
>>                  vunmap(shmem->vaddr);
>>                  shmem->vaddr = NULL;
>>
>>                  drm_gem_shmem_unpin_locked(shmem);
>
> Or were those insufficient for some reason? If so, should we keep both
> of them, or should the older ones have been removed?
The dma_buf_vmap()/dma_buf_vunmap() functions require the caller to be holding the reservation lock as well.
So it kind of makes sense to move the assertions to the beginning of the functions.
Regards,
Christian.
>
> Bence
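For readers following the thread: the locking contract discussed above means the `_locked` variants must only be called with the object's reservation lock held. A minimal caller-side sketch (names taken from the patch above; error handling elided, so this is an illustration of the contract rather than complete driver code):

```c
/* Sketch: the pattern the dma_resv_assert_held() calls enforce.
 * drm_gem_shmem_vmap_locked() asserts the caller holds obj->resv,
 * for both the imported (dma_buf_vmap) and native branches. */
struct drm_gem_object *obj = &shmem->base;
struct iosys_map map;
int ret;

dma_resv_lock(obj->resv, NULL);
ret = drm_gem_shmem_vmap_locked(shmem, &map);
/* ... use the mapping ... */
drm_gem_shmem_vunmap_locked(shmem, &map);
dma_resv_unlock(obj->resv);
```

Moving the assertion to the top of the function, as the patch does, covers the imported path too, since dma_buf_vmap()/dma_buf_vunmap() also require the reservation lock.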
Hi,
The recent introduction of heaps in the optee driver [1] made it possible
to create heaps as modules.
It's generally a good idea to do this where possible, including for the
already existing system and CMA heaps.
The system one is pretty trivial, the CMA one is a bit more involved,
especially since we have a call from kernel/dma/contiguous.c to the CMA
heap code. This was solved by turning the logic around and making the
CMA heap call into the contiguous DMA code.
Let me know what you think,
Maxime
1: https://lore.kernel.org/dri-devel/20250911135007.1275833-4-jens.wiklander@l…
Signed-off-by: Maxime Ripard <mripard(a)kernel.org>
---
Changes in v4:
- Fix compilation failure
- Rework to take into account OF_RESERVED_MEM
- Fix regression making the default CMA area disappear if not created
through the DT
- Added some documentation and comments
- Link to v3: https://lore.kernel.org/r/20260303-dma-buf-heaps-as-modules-v3-0-24344812c7…
Changes in v3:
- Squashed cma_get_name and cma_alloc/release patches
- Fixed typo in Export dev_get_cma_area commit title
- Fixed compilation failure with DMA_CMA but not OF_RESERVED_MEM
- Link to v2: https://lore.kernel.org/r/20260227-dma-buf-heaps-as-modules-v2-0-454aee7e06…
Changes in v2:
- Collect tags
- Don't export dma_contiguous_default_area anymore, but export
dev_get_cma_area instead
- Mentioned that heap modules can't be removed
- Link to v1: https://lore.kernel.org/r/20260225-dma-buf-heaps-as-modules-v1-0-2109225a09…
---
Maxime Ripard (8):
dma: contiguous: Turn heap registration logic around
dma: contiguous: Make dev_get_cma_area() a proper function
dma: contiguous: Make dma_contiguous_default_area static
dma: contiguous: Export dev_get_cma_area()
mm: cma: Export cma_alloc(), cma_release() and cma_get_name()
dma-buf: heaps: Export mem_accounting parameter
dma-buf: heaps: cma: Turn the heap into a module
dma-buf: heaps: system: Turn the heap into a module
drivers/dma-buf/dma-heap.c | 1 +
drivers/dma-buf/heaps/Kconfig | 4 +--
drivers/dma-buf/heaps/cma_heap.c | 22 +++----------
drivers/dma-buf/heaps/system_heap.c | 5 +++
include/linux/dma-buf/heaps/cma.h | 16 ---------
include/linux/dma-map-ops.h | 14 ++++----
kernel/dma/contiguous.c | 66 +++++++++++++++++++++++++++++++++----
mm/cma.c | 3 ++
8 files changed, 82 insertions(+), 49 deletions(-)
---
base-commit: c081b71f11732ad2c443f170ab19c3ebe8a1a422
change-id: 20260225-dma-buf-heaps-as-modules-1034b3ec9f2a
Best regards,
--
Maxime Ripard <mripard(a)kernel.org>
Hi,
I know I'm late to the party here...
Like John, I'm also not very close to this stuff any more, but I agree
with the other discussions: makes sense for this to be a separate
heap, and cc_shared makes sense too.
I'm not clear why the heap depends on !CONFIG_HIGHMEM, but I also
don't know anything about SEV/TDX.
-Brian
On Wed, Mar 25, 2026 at 08:23:50PM +0000, Jiri Pirko wrote:
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> Confidential computing (CoCo) VMs/guests, such as AMD SEV and Intel TDX,
> run with private/encrypted memory which creates a challenge
> for devices that do not support DMA to it (no TDISP support).
>
> For kernel-only DMA operations, swiotlb bounce buffering provides a
> transparent solution by copying data through shared memory.
> However, the only way to get this memory into userspace is via the DMA
> API's dma_alloc_pages()/dma_mmap_pages() type interfaces, which limit
> the use of the memory to a single DMA device and are incompatible with
> pin_user_pages().
>
> These limitations are particularly problematic for the RDMA subsystem
> which makes heavy use of pin_user_pages() and expects flexible memory
> usage between many different DMA devices.
>
> This patch series enables userspace to explicitly request shared
> (decrypted) memory allocations from new dma-buf system_cc_shared heap.
> Userspace can mmap this memory and pass the dma-buf fd to other
> existing importers such as RDMA or DRM devices to access the
> memory. The DMA API is improved to allow the dma heap exporter to DMA
> map the shared memory to each importing device.
>
> Based on dma-mapping-for-next e7442a68cd1ee797b585f045d348781e9c0dde0d
>
> Jiri Pirko (2):
> dma-mapping: introduce DMA_ATTR_CC_SHARED for shared memory
> dma-buf: heaps: system: add system_cc_shared heap for explicitly
> shared memory
>
> drivers/dma-buf/heaps/system_heap.c | 103 ++++++++++++++++++++++++++--
> include/linux/dma-mapping.h | 10 +++
> include/trace/events/dma.h | 3 +-
> kernel/dma/direct.h | 14 +++-
> kernel/dma/mapping.c | 13 +++-
> 5 files changed, 132 insertions(+), 11 deletions(-)
>
> --
> 2.51.1
>
On 4/2/26 10:36, Ekansh Gupta wrote:
> On 3/9/2026 12:29 PM, Ekansh Gupta wrote:
>>
>> On 2/24/2026 2:42 PM, Christian König wrote:
>>> On 2/23/26 20:09, Ekansh Gupta wrote:
>>>>
>>>> Add PRIME dma-buf import support for QDA GEM buffer objects and integrate
>>>> it with the existing per-process memory manager and IOMMU device model.
>>>>
>>>> The implementation extends qda_gem_obj to represent imported dma-bufs,
>>>> including dma_buf references, attachment state, scatter-gather tables
>>>> and an imported DMA address used for DSP-facing book-keeping. The
>>>> qda_gem_prime_import() path handles reimports of buffers originally
>>>> exported by QDA as well as imports of external dma-bufs, attaching them
>>>> to the assigned IOMMU device
>>> That is usually an absolutely clear NO-GO for DMA-bufs. Where exactly in the code is that?
>> Calling dma_buf_attach* on the compute-cb iommu devices is critical for DSPs to access the buffer.
>> This is needed if the buffer is exported by anyone other than QDA (say, the system heap). If this is not
>> the correct way, what would the right way be here? In the current fastrpc driver as well,
>> the DMABUF is getting attached to an iommu device [1] due to the same requirement.
>>
>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/dr…
>
> Hi Christian,
>
> Do you have any suggestions for the shared requirements?
Well, I don't fully understand what you are trying to do with the iommu. Usually it is the job of the exporter to provide the importer with DMA addresses which are valid for its device structure, and that includes IOMMU mapping.
Can you explain what exactly this iommu group is, why you have to attach the imported buffers to it, how that attachment works, and how lifetime is managed?
Regards,
Christian.
>
> I'm reworking on the next version and currently I don't see any other way
> to handle dma_buf_attach* cases.
>
> //Ekansh
>
>>>> and mapping them through the memory manager
>>>> for DSP access. The GEM free path is updated to unmap and detach
>>>> imported buffers while preserving the existing behaviour for locally
>>>> allocated memory.
>>>>
>>>> The PRIME fd-to-handle path is implemented in qda_prime_fd_to_handle(),
>>>> which records the calling drm_file in a driver-private import context
>>>> before invoking the core DRM helpers. The GEM import callback retrieves
>>>> this context to ensure that an IOMMU device is assigned to the process
>>>> and that imported buffers follow the same per-process IOMMU selection
>>>> rules as natively allocated GEM objects.
>>>>
>>>> This patch prepares the driver for interoperable buffer sharing between
>>>> QDA and other dma-buf capable subsystems while keeping IOMMU mapping and
>>>> lifetime handling consistent with the existing GEM allocation flow.
>>>>
>>>> Signed-off-by: Ekansh Gupta <ekansh.gupta(a)oss.qualcomm.com>
>>> ...
>>>
>>>> @@ -15,23 +16,29 @@ static int validate_gem_obj_for_mmap(struct qda_gem_obj *qda_gem_obj)
>>>> qda_err(NULL, "Invalid GEM object size\n");
>>>> return -EINVAL;
>>>> }
>>>> - if (!qda_gem_obj->iommu_dev || !qda_gem_obj->iommu_dev->dev) {
>>>> - qda_err(NULL, "Allocated buffer missing IOMMU device\n");
>>>> - return -EINVAL;
>>>> - }
>>>> - if (!qda_gem_obj->iommu_dev->dev) {
>>>> - qda_err(NULL, "Allocated buffer missing IOMMU device\n");
>>>> - return -EINVAL;
>>>> - }
>>>> - if (!qda_gem_obj->virt) {
>>>> - qda_err(NULL, "Allocated buffer missing virtual address\n");
>>>> - return -EINVAL;
>>>> - }
>>>> - if (qda_gem_obj->dma_addr == 0) {
>>>> - qda_err(NULL, "Allocated buffer missing DMA address\n");
>>>> - return -EINVAL;
>>>> + if (qda_gem_obj->is_imported) {
>>> Absolutely clear NAK to that. Imported buffers *can't* be mmaped through the importer!
>>>
>>> Userspace needs to mmap() them through the exporter.
>>>
>>> If you absolutely have to map them through the importer for uAPI backward compatibility then there is dma_buf_mmap() for that, but this is clearly not the case here.
>>>
>>> ...
>> Okay, the requirement is slightly different here. Any buffer which is not allocated using the
>> QDA GEM interface needs to be attached to the iommu device for that particular process to
>> enable DSP access. I should not call it `mmap`; instead it should be called importing the
>> buffer into a particular iommu context bank. With this definition, is it fine to keep it this way? Or
>> should the dma_buf_attach* calls be moved to some other place?
>>>> +static int qda_memory_manager_map_imported(struct qda_memory_manager *mem_mgr,
>>>> + struct qda_gem_obj *gem_obj,
>>>> + struct qda_iommu_device *iommu_dev)
>>>> +{
>>>> + struct scatterlist *sg;
>>>> + dma_addr_t dma_addr;
>>>> + int ret = 0;
>>>> +
>>>> + if (!gem_obj->is_imported || !gem_obj->sgt || !iommu_dev) {
>>>> + qda_err(NULL, "Invalid parameters for imported buffer mapping\n");
>>>> + return -EINVAL;
>>>> + }
>>>> +
>>>> + gem_obj->iommu_dev = iommu_dev;
>>>> +
>>>> + sg = gem_obj->sgt->sgl;
>>>> + if (sg) {
>>>> + dma_addr = sg_dma_address(sg);
>>>> + dma_addr += ((u64)iommu_dev->sid << 32);
>>>> +
>>>> + gem_obj->imported_dma_addr = dma_addr;
>>> Well that looks like you are only using the first DMA address from the imported sgt. What about the others?
>> I might have a proper approach for this now; I will update in the next spin.
>>> Regards,
>>> Christian.
>
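As background for the attach discussion above: in the standard dma-buf model, the exporter's map hook hands the importer DMA addresses that are already valid for the attaching device, including any IOMMU mapping. A hedged sketch of the usual importer flow (upstream API names; error handling elided, so this illustrates the contract rather than being copy-paste driver code):

```c
/* Standard dma-buf importer flow. The exporter is responsible for
 * returning an sg_table whose DMA addresses are valid for 'dev',
 * so the importer does not program the IOMMU itself. */
struct dma_buf *dmabuf = dma_buf_get(fd);
struct dma_buf_attachment *attach = dma_buf_attach(dmabuf, dev);
struct sg_table *sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

/* Walk every entry with sg_dma_address()/sg_dma_len() -- using only
 * the first entry, as questioned above, drops the rest of the buffer. */

dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
dma_buf_detach(dmabuf, attach);
dma_buf_put(dmabuf);
```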
On 16.03.2026 13:08, Maxime Ripard wrote:
> On Wed, Mar 11, 2026 at 08:18:28AM -0500, Andrew Davis wrote:
>> On 3/11/26 5:19 AM, Albert Esteve wrote:
>>> On Tue, Mar 10, 2026 at 4:34 PM Andrew Davis <afd(a)ti.com> wrote:
>>>> On 3/6/26 4:36 AM, Albert Esteve wrote:
>>>>> Expose DT coherent reserved-memory pools ("shared-dma-pool"
>>>>> without "reusable") as dma-buf heaps, creating one heap per
>>>>> region so userspace can allocate from the exact device-local
>>>>> pool intended for coherent DMA.
>>>>>
>>>>> This is a missing backend in the long-term effort to steer
>>>>> userspace buffer allocations (DRM, v4l2, dma-buf heaps)
>>>>> through heaps for clearer cgroup accounting. CMA and system
>>>>> heaps already exist; non-reusable coherent reserved memory
>>>>> did not.
>>>>>
>>>>> The heap binds the heap device to each memory region so
>>>>> coherent allocations use the correct dev->dma_mem, and
>>>>> it defers registration until module_init when normal
>>>>> allocators are available.
>>>>>
>>>>> Signed-off-by: Albert Esteve <aesteve(a)redhat.com>
>>>>> ---
>>>>> drivers/dma-buf/heaps/Kconfig | 9 +
>>>>> drivers/dma-buf/heaps/Makefile | 1 +
>>>>> drivers/dma-buf/heaps/coherent_heap.c | 414 ++++++++++++++++++++++++++++++++++
>>>>> 3 files changed, 424 insertions(+)
>>>>>
>>>>> (...)
>>>> You are doing this DMA allocation using a non-DMA pseudo-device (heap_dev).
>>>> This is why you need to do that dma_coerce_mask_and_coherent(64) nonsense, you
>>>> are doing a DMA alloc for the CPU itself. This might still work, but only if
>>>> dma_map_sgtable() can handle swiotlb/iommu for all attaching devices at map
>>>> time.
>>> The concern is valid. We're allocating via a synthetic device, which
>>> ties the allocation to that device's DMA domain. I looked deeper into
>>> this trying to address the concern.
>>>
>>> The approach works because dma_map_sgtable() handles both
>>> dma_map_direct and use_dma_iommu cases in __dma_map_sg_attrs(). For
>>> each physical address in the sg_table (extracted via sg_phys()), it
>>> creates device-specific DMA mappings:
>>> - For direct mapping: it checks if the address is directly accessible
>>> (dma_capable()), and if not, it falls back to swiotlb.
>>> - For IOMMU: it creates mappings that allow the device to access
>>> physical addresses.
>>>
>>> This means every attached device gets its own device-specific DMA
>>> mapping, properly handling cases where the physical addresses are
>>> inaccessible or have DMA constraints.
>>>
>> While this means it might still "work" it won't always be ideal. Take
>> the case where the consuming device(s) have a 32bit address restriction,
>> if the allocation was done using the real devices then the backing buffer
>> itself would be allocated in <32bit mem. Whereas here the allocation
>> could end up in >32bit mem, as the CPU/synthetic device supports that.
>> Then each mapping device would instead get a bounce buffer.
>>
>> (this example might not be great as we usually know the address of
>> carveout/reserved memory regions, but substitute in whatever restriction
>> makes more sense)
>>
>> These non-reusable carveouts tend to be made for some specific device, and
>> they are made specifically because that device has some memory restriction.
>> So we might run into the situation above more than one would expect.
>>
>> Not a blocker here, but just something worth thinking on.
> As I detailed in the previous version [1] the main idea behind that work
> is to allow to get rid of dma_alloc_attrs for framework and drivers to
> allocate from the heaps instead.
>
> Robin was saying he wasn't comfortable with exposing this heap to
> userspace, and we're saying here that maybe this might not always work
> anyway (or at least that we couldn't test it fully).
>
> Maybe the best thing is to defer this series until we are at a point
> where we can start enabling the "heap allocations" in frameworks then?
> Hopefully we will have hardware to test it with by then, and we might
> not even need to expose it to userspace at all but only to the kernel.
>
> What do you think?
IMHO a good idea. Maybe an in-kernel heap for the coherent allocations will
be just enough.
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland