From: Xiang Gao <gaoxiang17(a)xiaomi.com>
The kernel-doc comments for vmapping_counter and vmap_ptr in struct
dma_buf reference "@lock" as the protecting lock, but struct dma_buf
no longer has a "lock" member. The mutex was removed in favor of using
the dma_resv lock exclusively. The implementation correctly uses
dma_resv_assert_held(dmabuf->resv) in dma_buf_vmap() and
dma_buf_vunmap(), so update the documentation to reference @resv
instead.
Signed-off-by: Xiang Gao <gaoxiang17(a)xiaomi.com>
---
include/linux/dma-buf.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 133b9e637b55..ef6d93fd7a2c 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -322,13 +322,13 @@ struct dma_buf {
* @vmapping_counter:
*
* Used internally to refcnt the vmaps returned by dma_buf_vmap().
- * Protected by @lock.
+ * Protected by @resv.
*/
unsigned vmapping_counter;
/**
* @vmap_ptr:
- * The current vmap ptr if @vmapping_counter > 0. Protected by @lock.
+ * The current vmap ptr if @vmapping_counter > 0. Protected by @resv.
*/
struct iosys_map vmap_ptr;
--
2.34.1
From: Barry Song <v-songbaohua(a)oppo.com>
In many cases, the pages passed to vmap() may include high-order
pages allocated with __GFP_COMP flags. For example, the system heap
often allocates pages in descending order: order 8, then 4, then 0.
Currently, vmap() iterates over every page individually; even pages
inside a high-order block are handled one by one.
This patch detects high-order pages and maps them as a single
contiguous block whenever possible.
An alternative would be to implement a new API, vmap_sg(), but that
change seems to be large in scope.
When vmapping a 128MB dma-buf using the system heap, this patch
makes system_heap_do_vmap() roughly 17× faster.
W/ patch:
[ 10.404769] system_heap_do_vmap took 2494000 ns
[ 12.525921] system_heap_do_vmap took 2467008 ns
[ 14.517348] system_heap_do_vmap took 2471008 ns
[ 16.593406] system_heap_do_vmap took 2444000 ns
[ 19.501341] system_heap_do_vmap took 2489008 ns
W/o patch:
[ 7.413756] system_heap_do_vmap took 42626000 ns
[ 9.425610] system_heap_do_vmap took 42500992 ns
[ 11.810898] system_heap_do_vmap took 42215008 ns
[ 14.336790] system_heap_do_vmap took 42134992 ns
[ 16.373890] system_heap_do_vmap took 42750000 ns
Cc: David Hildenbrand <david(a)kernel.org>
Cc: Uladzislau Rezki <urezki(a)gmail.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: John Stultz <jstultz(a)google.com>
Cc: Maxime Ripard <mripard(a)kernel.org>
Tested-by: Tangquan Zheng <zhengtangquan(a)oppo.com>
Signed-off-by: Barry Song <v-songbaohua(a)oppo.com>
---
* diff with rfc:
Many code refinements based on David's suggestions, thanks!
Refine comment and changelog according to Uladzislau, thanks!
rfc link:
https://lore.kernel.org/linux-mm/20251122090343.81243-1-21cnbao@gmail.com/
mm/vmalloc.c | 45 +++++++++++++++++++++++++++++++++++++++------
1 file changed, 39 insertions(+), 6 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 41dd01e8430c..8d577767a9e5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -642,6 +642,29 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
return err;
}
+static inline int get_vmap_batch_order(struct page **pages,
+ unsigned int stride, unsigned int max_steps, unsigned int idx)
+{
+ int nr_pages = 1;
+
+ /*
+ * Currently, batching is only supported in vmap_pages_range
+ * when page_shift == PAGE_SHIFT.
+ */
+ if (stride != 1)
+ return 0;
+
+ nr_pages = compound_nr(pages[idx]);
+ if (nr_pages == 1)
+ return 0;
+ if (max_steps < nr_pages)
+ return 0;
+
+ if (num_pages_contiguous(&pages[idx], nr_pages) == nr_pages)
+ return compound_order(pages[idx]);
+ return 0;
+}
+
/*
* vmap_pages_range_noflush is similar to vmap_pages_range, but does not
* flush caches.
@@ -655,23 +678,33 @@ int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
pgprot_t prot, struct page **pages, unsigned int page_shift)
{
unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
+ unsigned int stride;
WARN_ON(page_shift < PAGE_SHIFT);
+ /*
+ * For vmap(), users may allocate pages from high orders down to
+ * order 0, while always using PAGE_SHIFT as the page_shift.
+ * We first check whether the initial page is a compound page. If so,
+ * there may be an opportunity to batch multiple pages together.
+ */
if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
- page_shift == PAGE_SHIFT)
+ (page_shift == PAGE_SHIFT && !PageCompound(pages[0])))
return vmap_small_pages_range_noflush(addr, end, prot, pages);
- for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
- int err;
+ stride = 1U << (page_shift - PAGE_SHIFT);
+ for (i = 0; i < nr; ) {
+ int err, order;
- err = vmap_range_noflush(addr, addr + (1UL << page_shift),
+ order = get_vmap_batch_order(pages, stride, nr - i, i);
+ err = vmap_range_noflush(addr, addr + (1UL << (page_shift + order)),
page_to_phys(pages[i]), prot,
- page_shift);
+ page_shift + order);
if (err)
return err;
- addr += 1UL << page_shift;
+ addr += 1UL << (page_shift + order);
+ i += 1U << (order + page_shift - PAGE_SHIFT);
}
return 0;
--
2.39.3 (Apple Git-146)
The kernel-doc comments on dma_fence_init() and dma_fence_init64()
describe the legacy reason to pass an external lock as a need to prevent
multiple
fences "from signaling out of order". However, this wording is a bit
misleading: a shared spinlock does not (and cannot) prevent the signaler
from signaling out of order. Signaling order is the driver's responsibility
regardless of whether the lock is shared or per-fence.
What a shared lock actually provides is serialization of signaling and
observation across fences in a given context, so that observers never
see a later fence signaled while an earlier one is not.
Reword both comments to describe this more accurately.
Signed-off-by: Maíra Canal <mcanal(a)igalia.com>
---
Hi,
While reading the documentation, I found this particular paragraph quite
hard to understand. As I understand it, locks don't enforce order, only
serialization, but the paragraph seems to communicate the other way around.
Due to that, I had the impression that the current wording can be
misleading for driver developers.
I'm proposing a new wording to better describe the use case of the
external lock based on my understanding, but it would be great to hear
the feedback and suggestions from more experienced developers who might
have more insight about these legacy use cases.
Best regards,
- Maíra
drivers/dma-buf/dma-fence.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 1826ba73094c..bdc29d1c1b5c 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -1102,8 +1102,10 @@ __dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
* to check which fence is later by simply using dma_fence_later().
*
* It is strongly discouraged to provide an external lock because this couples
- * lock and fence life time. This is only allowed for legacy use cases when
- * multiple fences need to be prevented from signaling out of order.
+ * lock and fence lifetime. This is only allowed for legacy use cases that need
+ * a shared lock to serialize signaling and observation of fences within a
+ * context, so that observers never see a later fence signaled while an earlier
+ * one isn't.
*/
void
dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
@@ -1129,8 +1131,10 @@ EXPORT_SYMBOL(dma_fence_init);
* to check which fence is later by simply using dma_fence_later().
*
* It is strongly discouraged to provide an external lock because this couples
- * lock and fence life time. This is only allowed for legacy use cases when
- * multiple fences need to be prevented from signaling out of order.
+ * lock and fence lifetime. This is only allowed for legacy use cases that need
+ * a shared lock to serialize signaling and observation of fences within a
+ * context, so that observers never see a later fence signaled while an earlier
+ * one isn't.
*/
void
dma_fence_init64(struct dma_fence *fence, const struct dma_fence_ops *ops,
--
2.53.0