From: Srinivasan Shanmugam <srinivasan.shanmugam(a)amd.com>
[ Upstream commit bd8150a1b3370a9f7761c5814202a3fe5a79f44f ]
This commit simplifies the amdgpu_gem_va_ioctl() function. Key updates
include:
- Moved the logic for managing the last update fence directly into
amdgpu_gem_va_update_vm.
- Introduced checks for the timeline point to enable conditional
replacement or addition of fences.
v2: Addressed review comments from Christian.
v3: Updated comments (Christian).
v4: The previous version selected the fence too early and did not manage its
reference correctly, which could lead to stale or freed fences being used.
This resulted in refcount underflows and could crash when updating GPU
timelines.
The fence is now chosen only after the VA mapping work is completed, and its
reference is taken safely. After exporting it to the VM timeline syncobj, the
driver always drops its local fence reference, ensuring balanced refcounting
and avoiding use-after-free on dma_fence.
Crash signature:
[ 205.828135] refcount_t: underflow; use-after-free.
[ 205.832963] WARNING: CPU: 30 PID: 7274 at lib/refcount.c:28 refcount_warn_saturate+0xbe/0x110
...
[ 206.074014] Call Trace:
[ 206.076488] <TASK>
[ 206.078608] amdgpu_gem_va_ioctl+0x6ea/0x740 [amdgpu]
[ 206.084040] ? __pfx_amdgpu_gem_va_ioctl+0x10/0x10 [amdgpu]
[ 206.089994] drm_ioctl_kernel+0x86/0xe0 [drm]
[ 206.094415] drm_ioctl+0x26e/0x520 [drm]
[ 206.098424] ? __pfx_amdgpu_gem_va_ioctl+0x10/0x10 [amdgpu]
[ 206.104402] amdgpu_drm_ioctl+0x4b/0x80 [amdgpu]
[ 206.109387] __x64_sys_ioctl+0x96/0xe0
[ 206.113156] do_syscall_64+0x66/0x2d0
...
[ 206.553351] BUG: unable to handle page fault for address: ffffffffc0dfde90
...
[ 206.553378] RIP: 0010:dma_fence_signal_timestamp_locked+0x39/0xe0
...
[ 206.553405] Call Trace:
[ 206.553409] <IRQ>
[ 206.553415] ? __pfx_drm_sched_fence_free_rcu+0x10/0x10 [gpu_sched]
[ 206.553424] dma_fence_signal+0x30/0x60
[ 206.553427] drm_sched_job_done.isra.0+0x123/0x150 [gpu_sched]
[ 206.553434] dma_fence_signal_timestamp_locked+0x6e/0xe0
[ 206.553437] dma_fence_signal+0x30/0x60
[ 206.553441] amdgpu_fence_process+0xd8/0x150 [amdgpu]
[ 206.553854] sdma_v4_0_process_trap_irq+0x97/0xb0 [amdgpu]
[ 206.554270] amdgpu_irq_dispatch+0x150/0x230 [amdgpu]
[ 206.554702] amdgpu_ih_process+0x6a/0x180 [amdgpu]
[ 206.555101] amdgpu_irq_handler+0x23/0x60 [amdgpu]
[ 206.555500] __handle_irq_event_percpu+0x4a/0x1c0
[ 206.555506] handle_irq_event+0x38/0x80
[ 206.555509] handle_edge_irq+0x92/0x1e0
[ 206.555513] __common_interrupt+0x3e/0xb0
[ 206.555519] common_interrupt+0x80/0xa0
[ 206.555525] </IRQ>
[ 206.555527] <TASK>
...
[ 206.555650] RIP: 0010:dma_fence_signal_timestamp_locked+0x39/0xe0
...
[ 206.555667] Kernel panic - not syncing: Fatal exception in interrupt
Link: https://patchwork.freedesktop.org/patch/654669/
Cc: Alex Deucher <alexander.deucher(a)amd.com>
Cc: Christian König <christian.koenig(a)amd.com>
Suggested-by: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Srinivasan Shanmugam <srinivasan.shanmugam(a)amd.com>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM Generated explanations, may be completely bogus:
---
## Complete Analysis
### 1. COMMIT MESSAGE ANALYSIS
The commit title says "Refactor" but the v4 description reveals this is
a **critical bug fix**:
> "The previous version selected the fence too early and did not manage
its reference correctly, which could lead to stale or freed fences being
used. This resulted in refcount underflows and could crash when updating
GPU timelines."
The commit includes a **concrete crash signature** showing:
- `refcount_t: underflow; use-after-free` in `amdgpu_gem_va_ioctl`
- `BUG: unable to handle page fault` in
`dma_fence_signal_timestamp_locked` from IRQ context
- `Kernel panic - not syncing: Fatal exception in interrupt`
This is a **kernel panic** triggered via a userspace ioctl path
(`amdgpu_gem_va_ioctl`).
### 2. CODE CHANGE ANALYSIS - THE BUG
The bug is in `amdgpu_gem_update_bo_mapping()`, which was introduced by
commit 70773bef4e091 ("drm/amdgpu: update userqueue BOs and PDs") in
v6.16-rc1.
**Bug mechanism** in the old code at lines 115-154 of the current file:
```132:154:drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
	/* Find the last update fence */
	switch (operation) {
	case AMDGPU_VA_OP_MAP:
	case AMDGPU_VA_OP_REPLACE:
		if (bo && (bo->tbo.base.resv == vm->root.bo->tbo.base.resv))
			last_update = vm->last_update;
		else
			last_update = bo_va->last_pt_update;
		break;
	case AMDGPU_VA_OP_UNMAP:
	case AMDGPU_VA_OP_CLEAR:
		last_update = fence;
		break;
	default:
		return;
	}

	/* Add fence to timeline */
	if (!point)
		drm_syncobj_replace_fence(syncobj, last_update);
	else
		drm_syncobj_add_point(syncobj, chain, last_update, point);
```
**Problem 1 - Missing dma_fence references**: `last_update =
vm->last_update` and `last_update = bo_va->last_pt_update` read fence
pointers **without taking a reference** via `dma_fence_get()`. These
fences can be replaced and their references dropped by concurrent VM
operations or fence signaling. The bare pointer is then passed to
`drm_syncobj_replace_fence()` or `drm_syncobj_add_point()`, which
attempt `dma_fence_get()` on a potentially freed fence -
**use-after-free**.
I verified that `vm->last_update` is replaced via `swap()` in
`amdgpu_vm_sdma_commit()` (line 106-146 of `amdgpu_vm_sdma.c`), which
puts the old fence. If the fence was already signaled and no other
holders exist, it's freed.
**Problem 2 - Fence leak**: In the calling code (lines 971-983 of
6.16.y), when `timeline_syncobj` is set and the operation is
MAP/REPLACE, the `fence` (clear_freed fence) returned from
`amdgpu_gem_va_update_vm()` is **never put** - it's passed to
`amdgpu_gem_update_bo_mapping()` which ignores it for MAP/REPLACE
operations.
**The fix** properly addresses both issues:
- Moves fence selection into `amdgpu_gem_va_update_vm()`, which runs
immediately after the VM operations complete
- Takes explicit references with `dma_fence_get()` on the selected fence
- Returns the properly referenced fence to the caller
- The caller **always** calls `dma_fence_put(fence)` regardless of
whether a timeline syncobj is used
### 3. CLASSIFICATION
This is a **use-after-free / refcount underflow bug fix** disguised as a
"refactor." The crash is a kernel panic from interrupt context - one of
the most severe possible outcomes. It's reachable from userspace via the
`amdgpu_gem_va_ioctl` ioctl.
### 4. AFFECTED VERSIONS
- Feature introduced: commit 70773bef4e091, first in **v6.16-rc1** (not
in v6.15 or earlier)
- Partial memleak fix: commit ad6c120f68880, also in v6.16-rc1
- **Stable trees affected**: 6.16.y, 6.17.y, 6.18.y (all confirmed to
have the buggy code)
- **NOT affected**: 6.12.y LTS, 6.6.y LTS, 6.1.y LTS, 5.15.y LTS (no
timeline syncobj code)
### 5. SCOPE AND RISK
- **Lines changed**: +82/-53 (net +29 lines, mostly restructuring)
- **Files changed**: 1 (`amdgpu_gem.c`)
- **Risk**: Moderate - it's a structural change, but well-contained and
well-reviewed
- **Review quality**: Reviewed-by Christian König (the top DRM
fence/amdgpu expert), Suggested-by Christian König
- **Dependency**: Uses `amdgpu_vm_is_bo_always_valid()` which exists
since v6.11 - available in all affected trees
- **Backport concern**: The diff may need minor adaptation for
6.16.y/6.17.y due to `va_flags` changes (6716a823d18d4 in v6.18-rc1)
### 6. USER IMPACT
- **Severity**: Kernel panic (fatal exception in interrupt handler)
- **Trigger**: Userspace GPU applications using the VM timeline syncobj
feature (usermode queues)
- **Who is affected**: AMD GPU users on 6.16+ kernels using MESA/AMDVLK
with timeline synchronization
### 7. VERDICT
Despite being titled "Refactor," this commit fixes:
1. A **use-after-free** on `dma_fence` objects (accessing freed memory)
2. A **refcount underflow** leading to premature fence destruction
3. A **kernel panic** in interrupt context
(`dma_fence_signal_timestamp_locked`)
4. A **fence memory leak** (clear_freed fence never put when timeline is
used)
The fix is well-reviewed by the subsystem expert, self-contained to one
file, and addresses a real crash that users hit. The affected code
exists in 6.16.y, 6.17.y, and 6.18.y stable trees.
**YES**
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 135 ++++++++++++++----------
1 file changed, 82 insertions(+), 53 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 3e38c5db29871..c5570a9698ed2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -112,47 +112,6 @@ amdgpu_gem_update_timeline_node(struct drm_file *filp,
return 0;
}
-static void
-amdgpu_gem_update_bo_mapping(struct drm_file *filp,
- struct amdgpu_bo_va *bo_va,
- uint32_t operation,
- uint64_t point,
- struct dma_fence *fence,
- struct drm_syncobj *syncobj,
- struct dma_fence_chain *chain)
-{
- struct amdgpu_bo *bo = bo_va ? bo_va->base.bo : NULL;
- struct amdgpu_fpriv *fpriv = filp->driver_priv;
- struct amdgpu_vm *vm = &fpriv->vm;
- struct dma_fence *last_update;
-
- if (!syncobj)
- return;
-
- /* Find the last update fence */
- switch (operation) {
- case AMDGPU_VA_OP_MAP:
- case AMDGPU_VA_OP_REPLACE:
- if (bo && (bo->tbo.base.resv == vm->root.bo->tbo.base.resv))
- last_update = vm->last_update;
- else
- last_update = bo_va->last_pt_update;
- break;
- case AMDGPU_VA_OP_UNMAP:
- case AMDGPU_VA_OP_CLEAR:
- last_update = fence;
- break;
- default:
- return;
- }
-
- /* Add fence to timeline */
- if (!point)
- drm_syncobj_replace_fence(syncobj, last_update);
- else
- drm_syncobj_add_point(syncobj, chain, last_update, point);
-}
-
static vm_fault_t amdgpu_gem_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *bo = vmf->vma->vm_private_data;
@@ -764,16 +723,19 @@ amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va,
uint32_t operation)
{
- struct dma_fence *fence = dma_fence_get_stub();
+ struct dma_fence *clear_fence = dma_fence_get_stub();
+ struct dma_fence *last_update = NULL;
int r;
if (!amdgpu_vm_ready(vm))
- return fence;
+ return clear_fence;
- r = amdgpu_vm_clear_freed(adev, vm, &fence);
+ /* First clear freed BOs and get a fence for that work, if any. */
+ r = amdgpu_vm_clear_freed(adev, vm, &clear_fence);
if (r)
goto error;
+ /* For MAP/REPLACE we also need to update the BO mappings. */
if (operation == AMDGPU_VA_OP_MAP ||
operation == AMDGPU_VA_OP_REPLACE) {
r = amdgpu_vm_bo_update(adev, bo_va, false);
@@ -781,13 +743,59 @@ amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
goto error;
}
+ /* Always update PDEs after we touched the mappings. */
r = amdgpu_vm_update_pdes(adev, vm, false);
+ if (r)
+ goto error;
+
+ /*
+ * Decide which fence represents the "last update" for this VM/BO:
+ *
+ * - For MAP/REPLACE we want the PT update fence, which is tracked as
+ * either vm->last_update (for always-valid BOs) or bo_va->last_pt_update
+ * (for per-BO updates).
+ *
+ * - For UNMAP/CLEAR we rely on the fence returned by
+ * amdgpu_vm_clear_freed(), which already covers the page table work
+ * for the removed mappings.
+ */
+ switch (operation) {
+ case AMDGPU_VA_OP_MAP:
+ case AMDGPU_VA_OP_REPLACE:
+ if (bo_va && bo_va->base.bo) {
+ if (amdgpu_vm_is_bo_always_valid(vm, bo_va->base.bo)) {
+ if (vm->last_update)
+ last_update = dma_fence_get(vm->last_update);
+ } else {
+ if (bo_va->last_pt_update)
+ last_update = dma_fence_get(bo_va->last_pt_update);
+ }
+ }
+ break;
+ case AMDGPU_VA_OP_UNMAP:
+ case AMDGPU_VA_OP_CLEAR:
+ if (clear_fence)
+ last_update = dma_fence_get(clear_fence);
+ break;
+ default:
+ break;
+ }
error:
if (r && r != -ERESTARTSYS)
DRM_ERROR("Couldn't update BO_VA (%d)\n", r);
- return fence;
+ /*
+ * If we managed to pick a more specific last-update fence, prefer it
+ * over the generic clear_fence and drop the extra reference to the
+ * latter.
+ */
+ if (last_update) {
+ dma_fence_put(clear_fence);
+ return last_update;
+ }
+
+ return clear_fence;
}
int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
@@ -813,6 +821,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
uint64_t vm_size;
int r = 0;
+ /* Validate virtual address range against reserved regions. */
if (args->va_address < AMDGPU_VA_RESERVED_BOTTOM) {
dev_dbg(dev->dev,
"va_address 0x%llx is in reserved area 0x%llx\n",
@@ -846,6 +855,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}
+ /* Validate operation type. */
switch (args->operation) {
case AMDGPU_VA_OP_MAP:
case AMDGPU_VA_OP_UNMAP:
@@ -869,6 +879,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
abo = NULL;
}
+ /* Add input syncobj fences (if any) for synchronization. */
r = amdgpu_gem_add_input_fence(filp,
args->input_fence_syncobj_handles,
args->num_syncobj_handles);
@@ -891,6 +902,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
goto error;
}
+ /* Resolve the BO-VA mapping for this VM/BO combination. */
if (abo) {
bo_va = amdgpu_vm_bo_find(&fpriv->vm, abo);
if (!bo_va) {
@@ -903,6 +915,11 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
bo_va = NULL;
}
+ /*
+ * Prepare the timeline syncobj node if the user requested a VM
+ * timeline update. This only allocates/looks up the syncobj and
+ * chain node; the actual fence is attached later.
+ */
r = amdgpu_gem_update_timeline_node(filp,
args->vm_timeline_syncobj_out,
args->vm_timeline_point,
@@ -934,18 +951,30 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
default:
break;
}
+
+ /*
+ * Once the VA operation is done, update the VM and obtain the fence
+ * that represents the last relevant update for this mapping. This
+ * fence can then be exported to the user-visible VM timeline.
+ */
if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE) && !adev->debug_vm) {
fence = amdgpu_gem_va_update_vm(adev, &fpriv->vm, bo_va,
args->operation);
- if (timeline_syncobj)
- amdgpu_gem_update_bo_mapping(filp, bo_va,
- args->operation,
- args->vm_timeline_point,
- fence, timeline_syncobj,
- timeline_chain);
- else
- dma_fence_put(fence);
+ if (timeline_syncobj && fence) {
+ if (!args->vm_timeline_point) {
+ /* Replace the existing fence when no point is given. */
+ drm_syncobj_replace_fence(timeline_syncobj,
+ fence);
+ } else {
+ /* Attach the last-update fence at a specific point. */
+ drm_syncobj_add_point(timeline_syncobj,
+ timeline_chain,
+ fence,
+ args->vm_timeline_point);
+ }
+ }
+ dma_fence_put(fence);
}
--
2.51.0
On 09.02.2026 16:38, Jiri Pirko wrote:
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> dma_addr is uninitialized in dma_direct_map_phys() when swiotlb is forced
> and DMA_ATTR_MMIO is set which leads to random value print out in
> warning. Fix that by just returning DMA_MAPPING_ERROR.
>
> Fixes: e53d29f957b3 ("dma-mapping: convert dma_direct_*map_page to be phys_addr_t based")
> Signed-off-by: Jiri Pirko <jiri(a)nvidia.com>
I will take this patch when v7.0-rc1 is out, as this fix definitely has
to be applied regardless of the discussion about the remaining patches.
> ---
> kernel/dma/direct.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index da2fadf45bcd..62f0d9d0ba02 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -88,7 +88,7 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
>
> if (is_swiotlb_force_bounce(dev)) {
> if (attrs & DMA_ATTR_MMIO)
> - goto err_overflow;
> + return DMA_MAPPING_ERROR;
>
> return swiotlb_map(dev, phys, size, dir, attrs);
> }
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
On 2/11/26 11:06, Philipp Stanner wrote:
> On Tue, 2026-02-10 at 11:01 +0100, Christian König wrote:
>> At first glance it is counter intuitive to protect a constant function
>> pointer table by RCU, but this allows modules providing the function
>> table to unload by waiting for an RCU grace period.
>
> I think that someone who does not already have a deep understanding
> about dma-buf and fences will have much trouble understanding *why*
> this patch is in the log and *what it achieves*.
>
> Good commit messages are at least as important as good code. In
> drm/sched for example I've been trying so many times to figure out why
> certain hacks and changes were implemented, but all that git-blame ever
> gave me was one liners, often hinting at some driver internal work
> around ._.
How about something like this:
The fence ops of a dma_fence currently need to live as long as the dma_fence itself.
This means that the module that originally issued a dma_fence can't unload until all of its fences are freed up.
As a first step towards solving this, protect the fence ops by RCU.
While it is counter-intuitive to protect a constant function pointer table by RCU, it allows modules to wait for an RCU grace period to make sure that nobody is executing their functions any more.
>
>>
>> v2: make one the now duplicated lockdep warnings a comment instead.
>> v3: Add more documentation to ->wait and ->release callback.
>> v4: fix typo in documentation
>> v5: rebased on drm-tip
>>
>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>> ---
>> drivers/dma-buf/dma-fence.c | 69 +++++++++++++++++++++++++------------
>> include/linux/dma-fence.h | 29 ++++++++++++++--
>> 2 files changed, 73 insertions(+), 25 deletions(-)
>>
>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>> index e05beae6e407..de9bf18be3d4 100644
>> --- a/drivers/dma-buf/dma-fence.c
>> +++ b/drivers/dma-buf/dma-fence.c
>> @@ -522,6 +522,7 @@ EXPORT_SYMBOL(dma_fence_signal);
>> signed long
>> dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
>> {
>> + const struct dma_fence_ops *ops;
>> signed long ret;
>>
>> if (WARN_ON(timeout < 0))
>> @@ -533,15 +534,21 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
>>
>> dma_fence_enable_sw_signaling(fence);
>>
>> - if (trace_dma_fence_wait_start_enabled()) {
>
> Why can wait_start_enabled() be removed? Is that related to the life
> time decoupling or is it a separate topic?
It isn't removed, I've just removed the "if (trace_dma_fence_wait_start_enabled())" optimization which is used by the tracing subsystem as self-patching code (longer story).
The trace_dma_fence_wait_start() trace point function is still called a few lines below.
>> - rcu_read_lock();
>> - trace_dma_fence_wait_start(fence);
>> + rcu_read_lock();
>> + ops = rcu_dereference(fence->ops);
>> + trace_dma_fence_wait_start(fence);
>> + if (ops->wait) {
>> + /*
>> + * Implementing the wait ops is deprecated and not supported for
>> + * issuer independent fences, so it is ok to use the ops outside
>
> s/issuer/issuers of
Fixed.
> And how do we know that this here is an independent fence?
> What even is an "independent fence" – one with internal spinlock?
I rephrased the sentence a bit to make that clearer:
/*
 * Implementing the wait ops is deprecated and not supported for
 * issuers of fences who want them to be independent of their
 * module after they signal, so it is ok to use the ops outside
 * the RCU protected section.
 */
>
>> + * the RCU protected section.
>> + */
>> + rcu_read_unlock();
>> + ret = ops->wait(fence, intr, timeout);
>> + } else {
>> rcu_read_unlock();
>> - }
>> - if (fence->ops->wait)
>> - ret = fence->ops->wait(fence, intr, timeout);
>> - else
>> ret = dma_fence_default_wait(fence, intr, timeout);
>> + }
>
> The git diff here looks awkward. Do you use git format-patch --
> histogram?
Nope, what's the matter?
>> if (trace_dma_fence_wait_end_enabled()) {
>> rcu_read_lock();
>> trace_dma_fence_wait_end(fence);
>>
>> @@ -1049,7 +1067,12 @@ __dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>> BUG_ON(!ops || !ops->get_driver_name || !ops->get_timeline_name);
>>
>> kref_init(&fence->refcount);
>> - fence->ops = ops;
>> + /*
>> + * At first glance it is counter intuitive to protect a constant
>> + * function pointer table by RCU, but this allows modules providing the
>> + * function table to unload by waiting for an RCU grace period.
>
> Maybe add a sentence like "Fences can live longer than the module which
> issued them."
Going to use the same as the commit message here as soon as we synced up on that.
>
>> + */
>> + RCU_INIT_POINTER(fence->ops, ops);
>> INIT_LIST_HEAD(&fence->cb_list);
>> fence->lock = lock;
>> fence->context = context;
>> @@ -1129,11 +1152,12 @@ EXPORT_SYMBOL(dma_fence_init64);
>> */
>> const char __rcu *dma_fence_driver_name(struct dma_fence *fence)
>> {
>> - RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
>> - "RCU protection is required for safe access to returned string");
>> + const struct dma_fence_ops *ops;
>>
>> + /* RCU protection is required for safe access to returned string */
>> + ops = rcu_dereference(fence->ops);
>> if (!dma_fence_test_signaled_flag(fence))
>> - return (const char __rcu *)fence->ops->get_driver_name(fence);
>> + return (const char __rcu *)ops->get_driver_name(fence);
>> else
>> return (const char __rcu *)"detached-driver";
>> }
>> @@ -1161,11 +1185,12 @@ EXPORT_SYMBOL(dma_fence_driver_name);
>> */
>> const char __rcu *dma_fence_timeline_name(struct dma_fence *fence)
>> {
>> - RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
>> - "RCU protection is required for safe access to returned string");
>> + const struct dma_fence_ops *ops;
>>
>> + /* RCU protection is required for safe access to returned string */
>> + ops = rcu_dereference(fence->ops);
>> if (!dma_fence_test_signaled_flag(fence))
>> - return (const char __rcu *)fence->ops->get_timeline_name(fence);
>> + return (const char __rcu *)ops->get_timeline_name(fence);
>> else
>> return (const char __rcu *)"signaled-timeline";
>> }
>
> Did we make any progress in our conversation about removing those two
> functions and callbacks? They're only used by i915.
Actually they are mostly used by the trace points and debugfs, so we certainly can't remove them.
But I'm really wondering why the heck i915 is using them?
>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>> index 9c4d25289239..6bf4feb0e01f 100644
>> --- a/include/linux/dma-fence.h
>> +++ b/include/linux/dma-fence.h
>> @@ -67,7 +67,7 @@ struct seq_file;
>> */
>> struct dma_fence {
>> spinlock_t *lock;
>> - const struct dma_fence_ops *ops;
>> + const struct dma_fence_ops __rcu *ops;
>> /*
>> * We clear the callback list on kref_put so that by the time we
>> * release the fence it is unused. No one should be adding to the
>> @@ -220,6 +220,10 @@ struct dma_fence_ops {
>> * timed out. Can also return other error values on custom implementations,
>> * which should be treated as if the fence is signaled. For example a hardware
>> * lockup could be reported like that.
>> + *
>> + * Implementing this callback prevents the fence from detaching after
>> + * signaling and so it is mandatory for the module providing the
>
> s/mandatory/necessary ?
Fixed.
>
>> + * dma_fence_ops to stay loaded as long as the dma_fence exists.
>> */
>> signed long (*wait)(struct dma_fence *fence,
>> bool intr, signed long timeout);
>> @@ -231,6 +235,13 @@ struct dma_fence_ops {
>> * Can be called from irq context. This callback is optional. If it is
>> * NULL, then dma_fence_free() is instead called as the default
>> * implementation.
>> + *
>> + * Implementing this callback prevents the fence from detaching after
>> + * signaling and so it is mandatory for the module providing the
>
> same
Fixed.
>
>> + * dma_fence_ops to stay loaded as long as the dma_fence exists.
>> + *
>> + * If the callback is implemented the memory backing the dma_fence
>> + * object must be freed RCU safe.
>> */
>> void (*release)(struct dma_fence *fence);
>>
>> @@ -454,13 +465,19 @@ dma_fence_test_signaled_flag(struct dma_fence *fence)
>> static inline bool
>> dma_fence_is_signaled_locked(struct dma_fence *fence)
>> {
>> + const struct dma_fence_ops *ops;
>> +
>> if (dma_fence_test_signaled_flag(fence))
>> return true;
>>
>> - if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> + rcu_read_lock();
>> + ops = rcu_dereference(fence->ops);
>> + if (ops->signaled && ops->signaled(fence)) {
>
> Maybe you can educate me a bit about RCU here – couldn't this still
> race? If the ops were unloaded before you take rcu_read_lock(),
> rcu_dereference() would give you an invalid pointer here since you
> don't check for !ops, no?
Perfectly correct thinking, yes.
But the check for !ops is added in patch #2, when we actually start to set ops = NULL when the fence signals.
I intentionally separated that because it is basically the second step in making the RCU-based detachment of the fence ops from the module work.
We could merge the two patches, but I think the separation actually makes sense should anybody start to complain about the additional RCU overhead.
Thanks,
Christian.
>
>
>> + rcu_read_unlock();
>> dma_fence_signal_locked(fence);
>> return true;
>> }
>> + rcu_read_unlock();
>>
>> return false;
>> }
>> @@ -484,13 +501,19 @@ dma_fence_is_signaled_locked(struct dma_fence *fence)
>> static inline bool
>> dma_fence_is_signaled(struct dma_fence *fence)
>> {
>> + const struct dma_fence_ops *ops;
>> +
>> if (dma_fence_test_signaled_flag(fence))
>> return true;
>>
>> - if (fence->ops->signaled && fence->ops->signaled(fence)) {
>> + rcu_read_lock();
>> + ops = rcu_dereference(fence->ops);
>> + if (ops->signaled && ops->signaled(fence)) {
>
> same
>
>
> Danke,
> P.
>
>> + rcu_read_unlock();
>> dma_fence_signal(fence);
>> return true;
>> }
>> + rcu_read_unlock();
>>
>> return false;
>> }
>
On 2/11/26 10:50, Philipp Stanner wrote:
> On Tue, 2026-02-10 at 11:01 +0100, Christian König wrote:
...
>> Using a per-fence spinlock allows completely decoupling spinlock producer
>> and consumer life times, simplifying the handling in most use cases.
>
> That's a good commit message btw, detailing what the motivation is.
> Would be great to see messages like that more frequently :]
Yeah, but they are not so easy to write.
>> trace_dma_fence_init(fence);
>> @@ -1091,7 +1094,7 @@ __dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>> * dma_fence_init - Initialize a custom fence.
>> * @fence: the fence to initialize
>> * @ops: the dma_fence_ops for operations on this fence
>> - * @lock: the irqsafe spinlock to use for locking this fence
>> + * @lock: optional irqsafe spinlock to use for locking this fence
>> * @context: the execution context this fence is run on
>> * @seqno: a linear increasing sequence number for this context
>> *
>> @@ -1101,6 +1104,10 @@ __dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
>> *
>> * context and seqno are used for easy comparison between fences, allowing
>> * to check which fence is later by simply using dma_fence_later().
>> + *
>> + * It is strongly discouraged to provide an external lock. This is only allowed
>
> "strongly discouraged […] because this does not decouple lock and fence
> life times." ?
Good point, added some more text.
>> + * for legacy use cases when multiple fences need to be prevented from
>> + * signaling out of order.
>
> I think our previous discussions revealed that the external lock does
> not even help with that, does it?
Well only when you provide a ->signaled() callback in the dma_fence_ops.
The reason we have so many different approaches in the dma_fence handling is that it is basically the unification of multiple driver implementations which all targeted more or less different use cases.
>> + * for legacy use cases when multiple fences need to be prevented from
>> + * signaling out of order.
>> */
>> void
>> dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
>> diff --git a/drivers/dma-buf/sync_debug.h b/drivers/dma-buf/sync_debug.h
>> index 02af347293d0..c49324505b20 100644
>> --- a/drivers/dma-buf/sync_debug.h
>> +++ b/drivers/dma-buf/sync_debug.h
>> @@ -47,7 +47,7 @@ struct sync_timeline {
>>
>> static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
>> {
>> - return container_of(fence->lock, struct sync_timeline, lock);
>> + return container_of(fence->extern_lock, struct sync_timeline, lock);
>
> You're sure that this will never have to check for the flag?
Yes, the code would have crashed before if anything other than a sync_pt created by sync_pt_create was encountered here.
We could drop the wrapper, move the cast to the only place where it matters, and document the why and what with a code comment... but this is all dead code which breaks some of the fundamental dma-fence rules, and it is only left here because we can't break the UAPI.
>> static const char *xe_hw_fence_get_driver_name(struct dma_fence *dma_fence)
>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
>> index 88c842fc35d5..6eabbb1c471c 100644
>> --- a/include/linux/dma-fence.h
>> +++ b/include/linux/dma-fence.h
>> @@ -34,7 +34,8 @@ struct seq_file;
>> * @ops: dma_fence_ops associated with this fence
>> * @rcu: used for releasing fence with kfree_rcu
>> * @cb_list: list of all callbacks to call
>> - * @lock: spin_lock_irqsave used for locking
>> + * @extern_lock: external spin_lock_irqsave used for locking
>
> Add a "(deprecated)" ?
Done.
>
>> + * @inline_lock: alternative internal spin_lock_irqsave used for locking
>> * @context: execution context this fence belongs to, returned by
>> * dma_fence_context_alloc()
>> * @seqno: the sequence number of this fence inside the execution context,
>> @@ -49,6 +50,7 @@ struct seq_file;
>> * of the time.
>> *
>> * DMA_FENCE_FLAG_INITIALIZED_BIT - fence was initialized
>> + * DMA_FENCE_FLAG_INLINE_LOCK_BIT - use inline spinlock instead of external one
>> * DMA_FENCE_FLAG_SIGNALED_BIT - fence is already signaled
>> * DMA_FENCE_FLAG_TIMESTAMP_BIT - timestamp recorded for fence signaling
>> * DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called
>> @@ -66,7 +68,10 @@ struct seq_file;
>> * been completed, or never called at all.
>> */
>> struct dma_fence {
>> - spinlock_t *lock;
>> + union {
>> + spinlock_t *extern_lock;
>> + spinlock_t inline_lock;
>> + };
>> const struct dma_fence_ops __rcu *ops;
>> /*
>> * We clear the callback list on kref_put so that by the time we
>> @@ -100,6 +105,7 @@ struct dma_fence {
>>
>> enum dma_fence_flag_bits {
>> DMA_FENCE_FLAG_INITIALIZED_BIT,
>> + DMA_FENCE_FLAG_INLINE_LOCK_BIT,
>
> Just asking about a nit: what's the order here, always alphabetically?
The order in which the flags are used in the code flow.
>> DMA_FENCE_FLAG_SEQNO64_BIT,
>> DMA_FENCE_FLAG_SIGNALED_BIT,
>> DMA_FENCE_FLAG_TIMESTAMP_BIT,
>> @@ -381,11 +387,12 @@ dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
>> * dma_fence_spinlock - return pointer to the spinlock protecting the fence
>> * @fence: the fence to get the lock from
>> *
>> - * Return the pointer to the extern lock.
>> + * Return either the pointer to the embedded or the external spin lock.
>> */
>> static inline spinlock_t *dma_fence_spinlock(struct dma_fence *fence)
>> {
>> - return fence->lock;
>> + return test_bit(DMA_FENCE_FLAG_INLINE_LOCK_BIT, &fence->flags) ?
>> + &fence->inline_lock : fence->extern_lock;
>
> I personally am not a fan of using '?' for anything longer than 1 line
> and think that
>
> if (condition)
> return a;
>
> return b;
>
> is much better readable.
Mhm, I disagree in this particular case. Especially having both possibilities side by side makes it more readable, I think.
Regards,
Christian.
On Tue, Feb 10, 2026 at 10:05:14AM +0100, Jiri Pirko wrote:
> Mon, Feb 09, 2026 at 09:08:03PM +0100, jstultz(a)google.com wrote:
> >On Mon, Feb 9, 2026 at 7:38 AM Jiri Pirko <jiri(a)resnulli.us> wrote:
> >>
> >> From: Jiri Pirko <jiri(a)nvidia.com>
> >>
> >> Currently the flags, which are unused, are validated for all heaps.
> >> Since the follow-up patch introduces a flag valid for only one of the
> >> heaps, allow to specify the valid flags per-heap.
> >
> >I'm not really in this space anymore, so take my feedback with a grain of salt.
> >
> >While the heap allocate flags argument is unused, it was intended to
> >be used for generic allocation flags that would apply to all or at
> >least a wide majority of heaps.
> >
> >It was definitely not added to allow for per-heap or heap specific
> >flags (as this patch tries to utilize it). That was the mess we had
> >with ION driver that we were trying to avoid.
> >
> >The intent of dma-buf heaps is to try to abstract all the different
> >device memory constraints so there only needs to be a [usage] ->
> >[heap] mapping, and otherwise userland can be generalized so that it
> >doesn't need to be re-written to work with different devices/memory
> >types. Adding heap-specific allocation flags prevents that
> >generalization.
> >
> >So instead of adding heap specific flags, the general advice has been
> >to add a separate heap name for the flag property.
>
> Right, my original idea was to add a separate heap. Then I spotted the
> flags and seemed like a great fit. Was not aware or the history or
> original intention. Would be probably good to document it for
> future generations.
>
> So instead of flag, I will add heap named something
> like "system_cc_decrypted" to implement this.
It is problematic to expose a user-visible API that depends on a name.
Such a design limits our ability to extend the functionality in the
future, should new use cases arise.
Thanks
>
> Thanks!
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on drm-misc/drm-misc-next]
[also build test ERROR on drm-tip/drm-tip trace/for-next linus/master v6.19]
[cannot apply to next-20260209]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Pirko/dma-mapping-avoid…
base: https://gitlab.freedesktop.org/drm/misc/kernel.git drm-misc-next
patch link: https://lore.kernel.org/r/20260209153809.250835-6-jiri%40resnulli.us
patch subject: [PATCH 5/5] dma-buf: heaps: system: add an option to allocate explicitly decrypted memory
config: s390-allyesconfig (https://download.01.org/0day-ci/archive/20260211/202602110149.tBUPP0bh-lkp@…)
compiler: s390-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260211/202602110149.tBUPP0bh-lkp@…)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602110149.tBUPP0bh-lkp@intel.com/
All errors (new ones prefixed by >>):
drivers/dma-buf/heaps/system_heap.c: In function 'system_heap_set_page_decrypted':
>> drivers/dma-buf/heaps/system_heap.c:66:15: error: implicit declaration of function 'set_memory_decrypted' [-Wimplicit-function-declaration]
66 | ret = set_memory_decrypted(addr, nr_pages);
| ^~~~~~~~~~~~~~~~~~~~
drivers/dma-buf/heaps/system_heap.c: In function 'system_heap_set_page_encrypted':
>> drivers/dma-buf/heaps/system_heap.c:80:15: error: implicit declaration of function 'set_memory_encrypted' [-Wimplicit-function-declaration]
80 | ret = set_memory_encrypted(addr, nr_pages);
| ^~~~~~~~~~~~~~~~~~~~
vim +/set_memory_decrypted +66 drivers/dma-buf/heaps/system_heap.c
59
60 static int system_heap_set_page_decrypted(struct page *page)
61 {
62 unsigned long addr = (unsigned long)page_address(page);
63 unsigned int nr_pages = 1 << compound_order(page);
64 int ret;
65
> 66 ret = set_memory_decrypted(addr, nr_pages);
67 if (ret)
68 pr_warn_ratelimited("dma-buf system heap: failed to decrypt page at %p\n",
69 page_address(page));
70
71 return ret;
72 }
73
74 static int system_heap_set_page_encrypted(struct page *page)
75 {
76 unsigned long addr = (unsigned long)page_address(page);
77 unsigned int nr_pages = 1 << compound_order(page);
78 int ret;
79
> 80 ret = set_memory_encrypted(addr, nr_pages);
81 if (ret)
82 pr_warn_ratelimited("dma-buf system heap: failed to re-encrypt page at %p, leaking memory\n",
83 page_address(page));
84
85 return ret;
86 }
87
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Tue, Feb 10, 2026 at 03:49:02PM +0100, Jiri Pirko wrote:
> Tue, Feb 10, 2026 at 01:43:57PM +0100, jgg(a)ziepe.ca wrote:
> >On Tue, Feb 10, 2026 at 10:14:08AM +0100, Jiri Pirko wrote:
> >
> >> >I'd advocate that the right design is for userspace to positively
> >> >signal via this flag that it wants/accepts shared memory and without
> >> >the flag shared memory should never be returned.
> >>
> >> We can have the same behaviour with the separate heap, can't we?
> >> Userpace positively signals it wants/accepts the shared memory by
> >> choosing "system_cc_decrypted" heap name.
> >
> >So what do the other heap names do? Always private? Do you ever get
> >heaps that are unknowably private or shared (eg MMIO backed?)
>
> If I understand the code correctly, you may get something like this:
> $ ls /dev/dma_heap/
> default_cma_region
> protected,secure-video
> protected,secure-video-record
> protected,trusted-ui
> system
>
> The "protected*" ones are created by tee. I believe they handle
> memory that is inaccessible to the CPU.
If that is the only list of options then maybe just the name will work OK.
I *think* CMA and system should be reliably CC private.
The protected ones seem to have their own internal definition, and
probably can't exist on CC VM systems..
Meaning we don't have any shared things leaking through which would be
the point.
Jason
On Tue, Feb 10, 2026 at 10:14:08AM +0100, Jiri Pirko wrote:
> >I'd advocate that the right design is for userspace to positively
> >signal via this flag that it wants/accepts shared memory and without
> >the flag shared memory should never be returned.
>
> We can have the same behaviour with the separate heap, can't we?
> Userpace positively signals it wants/accepts the shared memory by
> choosing "system_cc_decrypted" heap name.
So what do the other heap names do? Always private? Do you ever get
heaps that are unknowably private or shared (eg MMIO backed?)
Jason
Hi Jiri,
kernel test robot noticed the following build errors:
[auto build test ERROR on drm-misc/drm-misc-next]
[also build test ERROR on drm-tip/drm-tip trace/for-next linus/master v6.19]
[cannot apply to next-20260209]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Jiri-Pirko/dma-mapping-avoid…
base: https://gitlab.freedesktop.org/drm/misc/kernel.git drm-misc-next
patch link: https://lore.kernel.org/r/20260209153809.250835-6-jiri%40resnulli.us
patch subject: [PATCH 5/5] dma-buf: heaps: system: add an option to allocate explicitly decrypted memory
config: s390-allmodconfig (https://download.01.org/0day-ci/archive/20260210/202602101926.lsquJdb1-lkp@…)
compiler: clang version 18.1.8 (https://github.com/llvm/llvm-project 3b5b5c1ec4a3095ab096dd780e84d7ab81f3d7ff)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260210/202602101926.lsquJdb1-lkp@…)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp(a)intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602101926.lsquJdb1-lkp@intel.com/
All errors (new ones prefixed by >>):
>> drivers/dma-buf/heaps/system_heap.c:66:8: error: call to undeclared function 'set_memory_decrypted'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
66 | ret = set_memory_decrypted(addr, nr_pages);
| ^
>> drivers/dma-buf/heaps/system_heap.c:80:8: error: call to undeclared function 'set_memory_encrypted'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
80 | ret = set_memory_encrypted(addr, nr_pages);
| ^
2 errors generated.
vim +/set_memory_decrypted +66 drivers/dma-buf/heaps/system_heap.c
59
60 static int system_heap_set_page_decrypted(struct page *page)
61 {
62 unsigned long addr = (unsigned long)page_address(page);
63 unsigned int nr_pages = 1 << compound_order(page);
64 int ret;
65
> 66 ret = set_memory_decrypted(addr, nr_pages);
67 if (ret)
68 pr_warn_ratelimited("dma-buf system heap: failed to decrypt page at %p\n",
69 page_address(page));
70
71 return ret;
72 }
73
74 static int system_heap_set_page_encrypted(struct page *page)
75 {
76 unsigned long addr = (unsigned long)page_address(page);
77 unsigned int nr_pages = 1 << compound_order(page);
78 int ret;
79
> 80 ret = set_memory_encrypted(addr, nr_pages);
81 if (ret)
82 pr_warn_ratelimited("dma-buf system heap: failed to re-encrypt page at %p, leaking memory\n",
83 page_address(page));
84
85 return ret;
86 }
87
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Hi everyone,
dma_fences have always lived under the tyranny dictated by the module
lifetime of their issuer, leading to crashes should anybody still hold
a reference to a dma_fence when the module of the issuer was unloaded.
The basic problem is that when buffers are shared between drivers,
dma_fence objects can leak into external drivers and stay there even
after they are signaled. The dma_resv object, for example, only lazily
releases dma_fences.
So what happens is that when the module which originally created the
dma_fence unloads, the dma_fence_ops function table becomes unavailable
as well, and so any attempt to release the fence crashes the system.
Previously, various approaches have been discussed, including changing
the locking semantics of the dma_fence callbacks (by me) as well as
using the drm scheduler as an intermediate layer (by Sima) to disconnect
dma_fences from their actual users, but none of them actually solves all
problems.
Tvrtko did some really nice prerequisite work by protecting the strings
returned by the dma_fence_ops with RCU. This way dma_fence creators were
able to just wait for an RCU grace period after fence signaling before
it was safe to free those data structures.
Now this patch set goes a step further and protects the whole
dma_fence_ops structure by RCU, so that after the fence signals the
pointer to the dma_fence_ops is set to NULL when neither a wait nor a
release callback is given. All functionality which uses the
dma_fence_ops reference is put inside an RCU critical section, except
for the deprecated issuer-specific wait and of course the optional
release callback.
In addition to the RCU changes: the lock protecting the dma_fence state
previously had to be allocated externally. This set now makes that
external lock optional and allows dma_fences to use an inline lock and
be self-contained.
v4:
Rebased the whole set on upstream changes, especially the cleanup
from Philip in patch "drm/amdgpu: independence for the amdkfd_fence!".
Added two patches which bring the DMA-fence self tests up to date.
The first selftest change removes the mock_wait and so actually starts
testing the default behavior instead of some hacky implementation in the
test. This one got upstreamed independently of this set.
The second drops the mock_fence as well and tests the new RCU and inline
spinlock functionality.
v5:
Rebased on top of drm-misc-next instead of drm-tip, leaving out all
driver changes for now since those should go through the driver-specific
paths anyway.
Addressed a few more review comments, especially some rebase mess and
typos. And finally fixed one more bug found by AMD's CI system.
v6:
Minor style changes, re-ordered patch #1, dropped the scheduler fence
change for now
v7:
The patch adding the dma_fence_was_initialized() function was pushed
upstream individually since that is really an independent cleanup.
Fixed some missing i915 bits in patch "dma-buf: abstract fence locking".
Please review and comment,
Christian.
On Mon, Feb 9, 2026 at 7:38 AM Jiri Pirko <jiri(a)resnulli.us> wrote:
>
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> Currently the flags, which are unused, are validated for all heaps.
> Since the follow-up patch introduces a flag valid for only one of the
> heaps, allow to specify the valid flags per-heap.
I'm not really in this space anymore, so take my feedback with a grain of salt.
While the heap allocate flags argument is unused, it was intended to
be used for generic allocation flags that would apply to all or at
least a wide majority of heaps.
It was definitely not added to allow for per-heap or heap specific
flags (as this patch tries to utilize it). That was the mess we had
with ION driver that we were trying to avoid.
The intent of dma-buf heaps is to try to abstract all the different
device memory constraints so there only needs to be a [usage] ->
[heap] mapping, and otherwise userland can be generalized so that it
doesn't need to be re-written to work with different devices/memory
types. Adding heap-specific allocation flags prevents that
generalization.
So instead of adding heap specific flags, the general advice has been
to add a separate heap name for the flag property.
Now, there have been many discussions around "protected buffers" (which
don't seem to map exactly to this confidential computing primitive, but
sound like they might be related), which have bounced between being an
allocation flag or a device-specific heap without much resolution. I
appreciate that in this patch series you've pushed your concept down
into a DMA_ATTR_, as I do feel the kernel should have a deeper sense of
protected buffers (or any general property like this) as a concept if it
is going to be a generic allocation flag, instead of it being a somewhat
thin creation of the outer heap-driver layer. But it seems like the use
case here is still far too narrow for a top-level allocation flag.
So I'd advocate against introducing heap-specific flags like this.
thanks
-john
On Thu, Feb 05, 2026 at 07:41:11AM -0700, Alex Williamson wrote:
> >> From https://anongit.freedesktop.org/git/drm/drm-misc
> >> * branch drm-misc-next -> FETCH_HEAD
> >>
> >> $ git show FETCH_HEAD
> >> commit 779ec12c85c9e4547519e3903a371a3b26a289de
> >> Author: Alexander Konyukhov <Alexander.Konyukhov(a)kaspersky.com>
> >> Date: Tue Feb 3 16:48:46 2026 +0300
> >>
> >> drm/komeda: fix integer overflow in AFBC framebuffer size check
> >>
> >> $ git merge-base FETCH_HEAD 61ceaf236115f20f4fdd7cf60f883ada1063349a
> >> 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
> >> $ git describe --contains 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
> >> v6.19-rc6^0
> >>
> >> $ git log --oneline 61ceaf236115f20f4fdd7cf60f883ada1063349a ^FETCH_HEAD
> >> 61ceaf236115f2 vfio: Prevent from pinned DMABUF importers to attach to VFIO DMABUF
> >>
> >> Just pull Alex's tree, the drm-misc-next tree already has v6.19-rc6,
> >> so all they will see is one extra patch from Alex in your PR.
> >>
> >> No need to backmerge, this is normal git stuff and there won't be
> >> conflicts when they merge a later Linus tag.
> >
> > Correct, but that would merge the same patch through two different
> > trees. That is usually a pretty big no-go.
>
> Applying the patch through two different trees is a no-go, but
> merging the same commit from a shared branch or tag is very common
> and acceptable. It's the same commit after all, there is no
> conflict, no duplicate commit. When the trees are merged, the
> commit will exist once in the log. Thanks,
+1
This is how shared branches work. There is no issue here.
Jason