Hi,
I just came across this commit while researching something else.
The original patch had too few context lines, so here's the diff regenerated with `-U10`.
On 3/18/25 20:22, Daniel Almeida wrote:
> From: Asahi Lina <lina(a)asahilina.net>
>
> Since commit 21aa27ddc582 ("drm/shmem-helper: Switch to reservation
> lock"), the drm_gem_shmem_vmap and drm_gem_shmem_vunmap functions
> require that the caller holds the DMA reservation lock for the object.
> Add lockdep assertions to help validate this.
There were already lockdep assertions...
> Signed-off-by: Asahi Lina <lina(a)asahilina.net>
> Signed-off-by: Daniel Almeida <daniel.almeida(a)collabora.com>
> Reviewed-by: Christian König <christian.koenig(a)amd.com>
> Signed-off-by: Lyude Paul <lyude(a)redhat.com>
> Link: https://lore.kernel.org/r/20250318-drm-gem-shmem-v1-1-64b96511a84f@collabor…
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index aa43265f4f4f..0b41f0346bad 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -341,20 +341,22 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
> *
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> int ret = 0;
>
> + dma_resv_assert_held(obj->resv);
> +
> if (drm_gem_is_imported(obj)) {
> ret = dma_buf_vmap(obj->dma_buf, map);
> } else {
> pgprot_t prot = PAGE_KERNEL;
>
> dma_resv_assert_held(shmem->base.resv);
... right here, and
> if (refcount_inc_not_zero(&shmem->vmap_use_count)) {
> iosys_map_set_vaddr(map, shmem->vaddr);
> return 0;
> @@ -401,20 +403,22 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
> * drops to zero.
> *
> * This function hides the differences between dma-buf imported and natively
> * allocated objects.
> */
> void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> + dma_resv_assert_held(obj->resv);
> +
> if (drm_gem_is_imported(obj)) {
> dma_buf_vunmap(obj->dma_buf, map);
> } else {
> dma_resv_assert_held(shmem->base.resv);
...here.
> if (refcount_dec_and_test(&shmem->vmap_use_count)) {
> vunmap(shmem->vaddr);
> shmem->vaddr = NULL;
>
> drm_gem_shmem_unpin_locked(shmem);
Or were those insufficient for some reason? If so, should we keep both
of them, or should the older ones have been removed?
Bence
Hi,
The recent introduction of heaps in the optee driver [1] made it possible
to build heaps as modules.
That's generally a good idea where possible, including for the already
existing system and CMA heaps.
The system one is pretty trivial, the CMA one is a bit more involved,
especially since we have a call from kernel/dma/contiguous.c to the CMA
heap code. This was solved by turning the logic around and making the
CMA heap call into the contiguous DMA code.
Let me know what you think,
Maxime
1: https://lore.kernel.org/dri-devel/20250911135007.1275833-4-jens.wiklander@l…
Signed-off-by: Maxime Ripard <mripard(a)kernel.org>
---
Changes in v4:
- Fix compilation failure
- Rework to take into account OF_RESERVED_MEM
- Fix regression making the default CMA area disappear if not created
through the DT
- Added some documentation and comments
- Link to v3: https://lore.kernel.org/r/20260303-dma-buf-heaps-as-modules-v3-0-24344812c7…
Changes in v3:
- Squashed cma_get_name and cma_alloc/release patches
- Fixed typo in Export dev_get_cma_area commit title
- Fixed compilation failure with DMA_CMA but not OF_RESERVED_MEM
- Link to v2: https://lore.kernel.org/r/20260227-dma-buf-heaps-as-modules-v2-0-454aee7e06…
Changes in v2:
- Collect tags
- Don't export dma_contiguous_default_area anymore, but export
dev_get_cma_area instead
- Mentioned that heap modules can't be removed
- Link to v1: https://lore.kernel.org/r/20260225-dma-buf-heaps-as-modules-v1-0-2109225a09…
---
Maxime Ripard (8):
dma: contiguous: Turn heap registration logic around
dma: contiguous: Make dev_get_cma_area() a proper function
dma: contiguous: Make dma_contiguous_default_area static
dma: contiguous: Export dev_get_cma_area()
mm: cma: Export cma_alloc(), cma_release() and cma_get_name()
dma-buf: heaps: Export mem_accounting parameter
dma-buf: heaps: cma: Turn the heap into a module
dma-buf: heaps: system: Turn the heap into a module
drivers/dma-buf/dma-heap.c | 1 +
drivers/dma-buf/heaps/Kconfig | 4 +--
drivers/dma-buf/heaps/cma_heap.c | 22 +++----------
drivers/dma-buf/heaps/system_heap.c | 5 +++
include/linux/dma-buf/heaps/cma.h | 16 ---------
include/linux/dma-map-ops.h | 14 ++++----
kernel/dma/contiguous.c | 66 +++++++++++++++++++++++++++++++++----
mm/cma.c | 3 ++
8 files changed, 82 insertions(+), 49 deletions(-)
---
base-commit: c081b71f11732ad2c443f170ab19c3ebe8a1a422
change-id: 20260225-dma-buf-heaps-as-modules-1034b3ec9f2a
Best regards,
--
Maxime Ripard <mripard(a)kernel.org>
This patch series introduces the Qualcomm DSP Accelerator (QDA) driver,
a modern DRM-based accelerator implementation for Qualcomm Hexagon DSPs.
The driver provides a standardized interface for offloading computational
tasks to DSPs found on Qualcomm SoCs, supporting all DSP domains (ADSP,
CDSP, SDSP, GDSP).
The QDA driver is designed as an alternative to the FastRPC driver
in drivers/misc/, offering improved resource management, better integration
with standard kernel subsystems, and alignment with the Linux kernel's
Compute Accelerators framework.
User-space staging branch
=========================
https://github.com/qualcomm/fastrpc/tree/accel/staging
Key Features
============
* Standard DRM accelerator interface via /dev/accel/accelN
* GEM-based buffer management with DMA-BUF import/export support
* IOMMU-based memory isolation using per-process context banks
* FastRPC protocol implementation for DSP communication
* RPMsg transport layer for reliable message passing
* Support for all DSP domains (ADSP, CDSP, SDSP, GDSP)
* Comprehensive IOCTL interface for DSP operations
High-Level Architecture Differences with Existing FastRPC Driver
=================================================================
The QDA driver represents a significant architectural departure from the
existing FastRPC driver (drivers/misc/fastrpc.c), addressing several key
limitations while maintaining protocol compatibility:
1. DRM Accelerator Framework Integration
- FastRPC: Custom character device (/dev/fastrpc-*)
- QDA: Standard DRM accel device (/dev/accel/accelN)
- Benefit: Leverages established DRM infrastructure for device
management.
2. Memory Management
- FastRPC: Custom memory allocator with ION/DMA-BUF integration
- QDA: Native GEM objects with full PRIME support
- Benefit: Seamless buffer sharing using standard DRM mechanisms
3. IOMMU Context Bank Management
- FastRPC: Direct IOMMU domain manipulation, limited isolation
- QDA: Custom compute bus (qda_cb_bus_type) with proper device model
- Benefit: Each CB device is a proper struct device with IOMMU group
support, enabling better isolation and resource tracking.
- https://lore.kernel.org/all/245d602f-3037-4ae3-9af9-d98f37258aae@oss.qualco…
4. Memory Manager Architecture
- FastRPC: Monolithic allocator
- QDA: Pluggable memory manager with backend abstraction
- Benefit: Currently uses DMA-coherent backend, easily extensible for
future memory types (e.g., carveout, CMA)
5. Transport Layer
- FastRPC: Direct RPMsg integration in core driver
- QDA: Abstracted transport layer (qda_rpmsg.c)
- Benefit: Clean separation of concerns, easier to add alternative
transports if needed
6. Code Organization
- FastRPC: ~3000 lines in single file
- QDA: Modular design across multiple files (~4600 lines total)
* qda_drv.c: Core driver and DRM integration
* qda_gem.c: GEM object management
* qda_memory_manager.c: Memory and IOMMU management
* qda_fastrpc.c: FastRPC protocol implementation
* qda_rpmsg.c: Transport layer
* qda_cb.c: Context bank device management
- Benefit: Better maintainability, clearer separation of concerns
7. UAPI Design
- FastRPC: Custom IOCTL interface
- QDA: DRM-style IOCTLs with proper versioning support
- Benefit: Follows DRM conventions, easier userspace integration
8. Documentation
- FastRPC: Minimal in-tree documentation
- QDA: Comprehensive documentation in Documentation/accel/qda/
- Benefit: Better developer experience, clearer API contracts
9. Buffer Reference Mechanism
- FastRPC: Uses buffer file descriptors (FDs) for all book-keeping
in both kernel and DSP
- QDA: Uses GEM handles for kernel-side management, providing better
integration with DRM subsystem
- Benefit: Leverages DRM GEM infrastructure for reference counting,
lifetime management, and integration with other DRM components
Key Technical Improvements
===========================
* Proper device model: CB devices are real struct device instances on a
custom bus, enabling proper IOMMU group management and power management
integration
* Reference-counted IOMMU devices: Multiple file descriptors from the same
process share a single IOMMU device, reducing overhead
* GEM-based buffer lifecycle: Automatic cleanup via DRM GEM reference
counting, eliminating many resource leak scenarios
* Modular memory backends: The memory manager supports pluggable backends,
currently implementing DMA-coherent allocations with SID-prefixed
addresses for DSP firmware
* Context-based invocation tracking: XArray-based context management with
proper synchronization and cleanup
Patch Series Organization
==========================
Patches 1-2: Driver skeleton and documentation
Patches 3-6: RPMsg transport and IOMMU/CB infrastructure
Patches 7-9: DRM device registration and basic IOCTL
Patches 10-12: GEM buffer management and PRIME support
Patches 13-17: FastRPC protocol implementation (attach, invoke, create,
map/unmap)
Patch 18: MAINTAINERS entry
Open Items
===========
The following items remain open:
1. Privilege Level Management
- Currently, daemon processes and user processes have the same access
level as both use the same accel device node. This needs to be
addressed as daemons attach to privileged DSP PDs and require
higher privilege levels for system-level operations
- Seeking guidance on the best approach: separate device nodes,
capability-based checks, or DRM master/authentication mechanisms
2. UAPI Compatibility Layer
- Add UAPI compat layer to facilitate migration of client applications
from existing FastRPC UAPI to the new QDA accel driver UAPI,
ensuring smooth transition for existing userspace code
- Seeking guidance on implementation approach: in-kernel translation
layer, userspace wrapper library, or hybrid solution
3. Documentation Improvements
- Add detailed IOCTL usage examples
- Document DSP firmware interface requirements
- Create migration guide from existing FastRPC
4. Per-Domain Memory Allocation
- Develop new userspace API to support memory allocation on a per
domain basis, enabling domain-specific memory management and
optimization
5. Audio and Sensors PD Support
- The current patch series does not handle Audio PD and Sensors PD
functionalities. These specialized protection domains require
additional support for real-time constraints and power management
Interface Compatibility
========================
The QDA driver maintains compatibility with existing FastRPC infrastructure:
* Device Tree Bindings: The driver uses the same device tree bindings as
the existing FastRPC driver, ensuring no changes are required to device
tree sources. The "qcom,fastrpc" compatible string and child node
structure remain unchanged.
* Userspace Interface: While the driver provides a new DRM-based UAPI,
the underlying FastRPC protocol and DSP firmware interface remain
compatible. This ensures that DSP firmware and libraries continue to
work without modification.
* Migration Path: The modular design allows for gradual migration, where
both drivers can coexist during the transition period. Applications can
be migrated incrementally to the new UAPI with the help of the planned
compatibility layer.
References
==========
Previous discussions on this migration:
- https://lkml.org/lkml/2024/6/24/479
- https://lkml.org/lkml/2024/6/21/1252
Testing
=======
The driver has been tested on Qualcomm platforms with:
- Basic FastRPC attach/release operations
- DSP process creation and initialization
- Memory mapping/unmapping operations
- Dynamic invocation with various buffer types
- GEM buffer allocation and mmap
- PRIME buffer import from other subsystems
Signed-off-by: Ekansh Gupta <ekansh.gupta(a)oss.qualcomm.com>
---
Ekansh Gupta (18):
accel/qda: Add Qualcomm QDA DSP accelerator driver docs
accel/qda: Add Qualcomm DSP accelerator driver skeleton
accel/qda: Add RPMsg transport for Qualcomm DSP accelerator
accel/qda: Add built-in compute CB bus for QDA and integrate with IOMMU
accel/qda: Create compute CB devices on QDA compute bus
accel/qda: Add memory manager for CB devices
accel/qda: Add DRM accel device registration for QDA driver
accel/qda: Add per-file DRM context and open/close handling
accel/qda: Add QUERY IOCTL and basic QDA UAPI header
accel/qda: Add DMA-backed GEM objects and memory manager integration
accel/qda: Add GEM_CREATE and GEM_MMAP_OFFSET IOCTLs
accel/qda: Add PRIME dma-buf import support
accel/qda: Add initial FastRPC attach and release support
accel/qda: Add FastRPC dynamic invocation support
accel/qda: Add FastRPC DSP process creation support
accel/qda: Add FastRPC-based DSP memory mapping support
accel/qda: Add FastRPC-based DSP memory unmapping support
MAINTAINERS: Add MAINTAINERS entry for QDA driver
Documentation/accel/index.rst | 1 +
Documentation/accel/qda/index.rst | 14 +
Documentation/accel/qda/qda.rst | 129 ++++
MAINTAINERS | 9 +
arch/arm64/configs/defconfig | 2 +
drivers/accel/Kconfig | 1 +
drivers/accel/Makefile | 2 +
drivers/accel/qda/Kconfig | 35 ++
drivers/accel/qda/Makefile | 19 +
drivers/accel/qda/qda_cb.c | 182 ++++++
drivers/accel/qda/qda_cb.h | 26 +
drivers/accel/qda/qda_compute_bus.c | 23 +
drivers/accel/qda/qda_drv.c | 375 ++++++++++++
drivers/accel/qda/qda_drv.h | 171 ++++++
drivers/accel/qda/qda_fastrpc.c | 1002 ++++++++++++++++++++++++++++++++
drivers/accel/qda/qda_fastrpc.h | 433 ++++++++++++++
drivers/accel/qda/qda_gem.c | 211 +++++++
drivers/accel/qda/qda_gem.h | 103 ++++
drivers/accel/qda/qda_ioctl.c | 271 +++++++++
drivers/accel/qda/qda_ioctl.h | 118 ++++
drivers/accel/qda/qda_memory_dma.c | 91 +++
drivers/accel/qda/qda_memory_dma.h | 46 ++
drivers/accel/qda/qda_memory_manager.c | 382 ++++++++++++
drivers/accel/qda/qda_memory_manager.h | 148 +++++
drivers/accel/qda/qda_prime.c | 194 +++++++
drivers/accel/qda/qda_prime.h | 43 ++
drivers/accel/qda/qda_rpmsg.c | 327 +++++++++++
drivers/accel/qda/qda_rpmsg.h | 57 ++
drivers/iommu/iommu.c | 4 +
include/linux/qda_compute_bus.h | 22 +
include/uapi/drm/qda_accel.h | 224 +++++++
31 files changed, 4665 insertions(+)
---
base-commit: d4906ae14a5f136ceb671bb14cedbf13fa560da6
change-id: 20260223-qda-firstpost-4ab05249e2cc
Best regards,
--
Ekansh Gupta <ekansh.gupta(a)oss.qualcomm.com>
This patch series adds a new dma-buf heap driver that exposes coherent,
non‑reusable reserved-memory regions as named heaps, so userspace can
explicitly allocate buffers from those device‑specific pools.
Motivation: we want cgroup accounting for all userspace‑visible buffer
allocations (DRM, v4l2, dma‑buf heaps, etc.). That’s hard to do when
drivers call dma_alloc_attrs() directly because the accounting controller
(memcg vs dmem) is ambiguous. The long‑term plan is to steer those paths
toward dma‑buf heaps, where each heap can unambiguously charge a single
controller. To reach that goal, we need a heap backend for each
dma_alloc_attrs() memory type. CMA and system heaps already exist;
coherent reserved‑memory was the missing piece, since many SoCs define
dedicated, device‑local coherent pools in DT under /reserved-memory using
"shared-dma-pool" with non‑reusable regions (i.e., not CMA) that are
carved out exclusively for coherent DMA and are currently only usable by
in‑kernel drivers.
Because these regions are device-dependent, each heap instance binds a
heap device to its reserved-mem region via a newly introduced helper,
of_reserved_mem_device_init_with_mem(), so coherent allocations use the
correct dev->dma_mem.
Charging to cgroups for these buffers is intentionally left out to keep
review focused on the new heap; I plan to follow up based on Eric’s [1]
and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA
heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2…
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.…
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812…
Signed-off-by: Albert Esteve <aesteve(a)redhat.com>
---
Changes in v3:
- Reorganized changesets among patches to ensure bisectability
- Removed unused dma_heap_coherent_register() leftover
- Removed fallback when setting mask in coherent heap dev, since
dma_set_mask() already truncates to supported masks
- Moved struct rmem_assigned_device (rd) logic to
of_reserved_mem_device_init_with_mem() to allow listing the device
- Link to v2: https://lore.kernel.org/r/20260303-b4-dmabuf-heap-coherent-rmem-v2-0-65a465…
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to
access the stored heap device before registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef4…
---
Albert Esteve (5):
dma-buf: dma-heap: split dma_heap_add
of_reserved_mem: add a helper for rmem device_init op
dma: coherent: store reserved memory coherent regions
dma-buf: heaps: Add Coherent heap to dmabuf heaps
dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1):
dma-buf: dma-heap: Keep track of the heap device struct
drivers/dma-buf/dma-heap.c | 138 +++++++++--
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 417 ++++++++++++++++++++++++++++++++++
drivers/of/of_reserved_mem.c | 68 ++++--
include/linux/dma-heap.h | 5 +
include/linux/dma-map-ops.h | 7 +
include/linux/of_reserved_mem.h | 8 +
kernel/dma/coherent.c | 34 +++
9 files changed, 640 insertions(+), 47 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve(a)redhat.com>
begin_cpu_udmabuf() maps the sg_table with the caller-provided direction
(e.g., DMA_TO_DEVICE for a write-only sync), and caches it in ubuf->sg
for reuse. However, release_udmabuf() always unmaps this sg_table with
a hardcoded DMA_BIDIRECTIONAL, regardless of the direction that was
originally used for the mapping.
With CONFIG_DMA_API_DEBUG=y this produces:
DMA-API: misc udmabuf: device driver frees DMA memory with different
direction [device address=0x000000044a123000] [size=4096 bytes]
[mapped with DMA_TO_DEVICE] [unmapped with DMA_BIDIRECTIONAL]
The issue was found during video playback when GStreamer performed a
write-only DMA_BUF_IOCTL_SYNC on a udmabuf. It can be reproduced
with CONFIG_DMA_API_DEBUG=y by creating a udmabuf from a memfd,
performing a write-only sync (DMA_BUF_SYNC_WRITE without
DMA_BUF_SYNC_READ), and closing the file descriptor.
Fix this by storing the DMA direction used when the sg_table is first
created in begin_cpu_udmabuf(), and passing that same direction to
put_sg_table() in release_udmabuf().
Fixes: 284562e1f348 ("udmabuf: implement begin_cpu_access/end_cpu_access hooks")
Cc: stable(a)vger.kernel.org
Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov(a)gmail.com>
---
drivers/dma-buf/udmabuf.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 94b8ecb892bb..d0836febefdd 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -40,6 +40,7 @@ struct udmabuf {
struct folio **pinned_folios;
struct sg_table *sg;
+ enum dma_data_direction sg_dir;
struct miscdevice *device;
pgoff_t *offsets;
};
@@ -235,7 +236,7 @@ static void release_udmabuf(struct dma_buf *buf)
struct device *dev = ubuf->device->this_device;
if (ubuf->sg)
- put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
+ put_sg_table(dev, ubuf->sg, ubuf->sg_dir);
deinit_udmabuf(ubuf);
kfree(ubuf);
@@ -253,6 +254,8 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
if (IS_ERR(ubuf->sg)) {
ret = PTR_ERR(ubuf->sg);
ubuf->sg = NULL;
+ } else {
+ ubuf->sg_dir = direction;
}
} else {
dma_sync_sgtable_for_cpu(dev, ubuf->sg, direction);
--
2.53.0