Feeling the need for speed and a bit of winter fun, even when the weather outside is frightful? Then maybe it's time to check out Snow Rider 3D. This simple but surprisingly addictive game offers the thrill of downhill skiing and snowboarding right from your browser, no downloads required. Let's break down how to jump in and start enjoying this engaging title.
https://snowriderfree.com/
Gameplay: Simple Controls, Endless Possibilities
The core gameplay of Snow Rider 3D is deceptively straightforward. You control your character's direction using the left and right arrow keys (or A and D). Your objective? Navigate through a series of procedurally generated slopes littered with obstacles. These obstacles range from simple ramps and rails to more challenging hazards like trees, snowdrifts, and even abandoned shacks.
The beauty of Snow Rider 3D lies in its physics. While simple, they feel surprisingly realistic. You'll need to anticipate turns, adjust your speed, and time your jumps to successfully navigate the terrain. A crash will reset you to the beginning of the course, so precision and patience are key.
The game offers different levels, each presenting a unique challenge. Some focus on speed and long jumps, while others demand skillful maneuvering through tight spaces. As you progress, you unlock new skins and sleds, adding a touch of customization to your experience. Think of it as a casual time-killer that can quickly turn into an hour-long obsession!
Tips for Mastering the Mountain:
Alright, so you're ready to hit the slopes. Here are a few tips to help you improve your runs and avoid those frustrating wipeouts:
Practice Makes Perfect: Don't get discouraged by early crashes. The more you play, the better you'll understand the physics and learn to anticipate the terrain.
Master the Turns: Smooth, controlled turns are essential for maintaining speed and avoiding obstacles. Practice feathering the arrow keys to make subtle adjustments.
Timing is Everything: When approaching jumps and ramps, pay close attention to your speed and angle. A well-timed jump can make all the difference.
Don't Be Afraid to Slow Down: Sometimes, the fastest route isn't the safest. Don't be afraid to ease off the gas and navigate tricky sections with caution. If you're really struggling, consider looking up guides for specific levels.
Experiment with Sleds and Skins: Different sleds may offer slight variations in handling. Try out different options to find one that suits your playstyle.
Conclusion: A Fun and Accessible Winter Escape
Snow Rider 3D is a surprisingly addictive and accessible game that's perfect for a quick dose of winter fun. Its simple controls and challenging gameplay make it easy to pick up and play, while its procedural generation ensures that each run is a unique experience. So, whether you're looking for a casual time-killer or a challenging skill-based game, Snow Rider 3D is definitely worth checking out.
This is the next version of the shmem-backed GEM objects series
originally from Asahi, previously posted by Daniel Almeida.
The previous version of the patch series can be found here:
https://patchwork.freedesktop.org/series/156093/
This patch series may be applied on top of the
driver-core/driver-core-testing branch:
https://git.kernel.org/pub/scm/linux/kernel/git/driver-core/driver-core.git…
Changelogs are per-patch
Asahi Lina (2):
rust: helpers: Add bindings/wrappers for dma_resv_lock
rust: drm: gem: shmem: Add DRM shmem helper abstraction
Lyude Paul (5):
rust: drm: Add gem::impl_aref_for_gem_obj!
rust: drm: gem: Add raw_dma_resv() function
rust: gem: Introduce DriverObject::Args
rust: drm: gem: Introduce shmem::SGTable
rust: drm/gem: Add vmap functions to shmem bindings
drivers/gpu/drm/nova/gem.rs | 5 +-
drivers/gpu/drm/tyr/gem.rs | 3 +-
rust/bindings/bindings_helper.h | 3 +
rust/helpers/dma-resv.c | 13 +
rust/helpers/drm.c | 56 ++-
rust/helpers/helpers.c | 1 +
rust/kernel/drm/gem/mod.rs | 79 +++-
rust/kernel/drm/gem/shmem.rs | 654 ++++++++++++++++++++++++++++++++
8 files changed, 792 insertions(+), 22 deletions(-)
create mode 100644 rust/helpers/dma-resv.c
create mode 100644 rust/kernel/drm/gem/shmem.rs
base-commit: dc33ae50d32b509af5ae61030912fa20c79ef112
prerequisite-patch-id: c631986f96e2073263e97e82a65b96fc5ada6924
prerequisite-patch-id: ae853e8eb8d58c77881371960be4ae92755e83c6
prerequisite-patch-id: 0ab78b50648c7d8f66b83c32ed2af0ec3ede42a3
prerequisite-patch-id: 636ec7f913f4047e5e1a1788f3e835b7259698c2
prerequisite-patch-id: d75e4d7140eadeeed8017af8cd093bfd2766ee8e
prerequisite-patch-id: 67a8010c1bc95bca1d2cf6b246c67bc79d24e766
--
2.53.0
On 2026-03-16 12:58 pm, Jiri Pirko wrote:
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> Current CC designs don't place a vIOMMU in front of untrusted devices.
> Instead, the DMA API forces all untrusted device DMA through swiotlb
> bounce buffers (is_swiotlb_force_bounce()) which copies data into
> decrypted memory on behalf of the device.
>
> When a caller has already arranged for the memory to be decrypted
> via set_memory_decrypted(), the DMA API needs to know so it can map
> directly using the unencrypted physical address rather than bounce
> buffering. Following the pattern of DMA_ATTR_MMIO, add
> DMA_ATTR_CC_DECRYPTED for this purpose. Like the MMIO case, only the
> caller knows what kind of memory it has and must inform the DMA API
> for it to work correctly.
Echoing Jason's point, if the intent of this is to indicate shared
memory, please call it DMA_ATTR_CC_SHARED. Yes, some of the existing
APIs are badly named because they conflated intent with implementation
details; that is no reason to keep wilfully making the same mistake.
At least with Arm CCA, the architecture enforces *confidentiality*
pretty much orthogonally to encryption - if your threat model excludes
physical attacks against DRAM, you can still have Realms isolated from
each other (and of course other execution states) without even
implementing the memory encryption feature; conversely if you do have
it, then even all the shared/host memory may still be physically
encrypted, it just has its own context (key) distinct from the Realm
ones. Similarly, while it's not a "true" CoCo environment, pKVM has a
similar notion of shared vs. private which can benefit from
piggy-backing off much of the CoCo infrastructure in places like the DMA
layer, but has nothing whatsoever to do with actual encryption.
Furthermore, "shared" is just shorter and more readable, even before I
invoke the previous discussion of why it should be "unencrypted" rather
than "decrypted" anyway ;)
> Signed-off-by: Jiri Pirko <jiri(a)nvidia.com>
> ---
> v3->v4:
> - added some sanity checks to dma_map_phys and dma_unmap_phys
> - enhanced documentation of DMA_ATTR_CC_DECRYPTED attr
> v1->v2:
> - rebased on top of recent dma-mapping-fixes
> ---
> include/linux/dma-mapping.h | 10 ++++++++++
> include/trace/events/dma.h | 3 ++-
> kernel/dma/direct.h | 14 +++++++++++---
> kernel/dma/mapping.c | 13 +++++++++++--
> 4 files changed, 34 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 29973baa0581..476964d2b22f 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -85,6 +85,16 @@
> * a cacheline must have this attribute for this to be considered safe.
> */
> #define DMA_ATTR_CPU_CACHE_CLEAN (1UL << 11)
> +/*
> + * DMA_ATTR_CC_DECRYPTED: Indicates the DMA mapping is decrypted (shared) for
> + * confidential computing guests. For normal system memory the caller must have
> + * called set_memory_decrypted(), and pgprot_decrypted must be used when
> + * creating CPU PTEs for the mapping. The same decrypted semantic may be passed
> + * to the vIOMMU when it sets up the IOPTE. For MMIO use together with
That being "the vIOMMU" that you said doesn't exist, and which is
explicitly not supported?...
> + * DMA_ATTR_MMIO to indicate decrypted MMIO. Unless DMA_ATTR_MMIO is provided
> + * a struct page is required.
> + */
> +#define DMA_ATTR_CC_DECRYPTED (1UL << 12)
>
> /*
> * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
> diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
> index 33e99e792f1a..b8082d5177c4 100644
> --- a/include/trace/events/dma.h
> +++ b/include/trace/events/dma.h
> @@ -32,7 +32,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
> { DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \
> { DMA_ATTR_NO_WARN, "NO_WARN" }, \
> { DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
> - { DMA_ATTR_MMIO, "MMIO" })
> + { DMA_ATTR_MMIO, "MMIO" }, \
> + { DMA_ATTR_CC_DECRYPTED, "CC_DECRYPTED" })
>
> DECLARE_EVENT_CLASS(dma_map,
> TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index e89f175e9c2d..c047a9d0fda3 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -84,16 +84,24 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
> dma_addr_t dma_addr;
>
> if (is_swiotlb_force_bounce(dev)) {
> - if (attrs & DMA_ATTR_MMIO)
> - return DMA_MAPPING_ERROR;
> + if (!(attrs & DMA_ATTR_CC_DECRYPTED)) {
> + if (attrs & DMA_ATTR_MMIO)
> + return DMA_MAPPING_ERROR;
>
> - return swiotlb_map(dev, phys, size, dir, attrs);
> + return swiotlb_map(dev, phys, size, dir, attrs);
> + }
> + } else if (attrs & DMA_ATTR_CC_DECRYPTED) {
> + return DMA_MAPPING_ERROR;
> }
>
> if (attrs & DMA_ATTR_MMIO) {
> dma_addr = phys;
> if (unlikely(!dma_capable(dev, dma_addr, size, false)))
> goto err_overflow;
> + } else if (attrs & DMA_ATTR_CC_DECRYPTED) {
> + dma_addr = phys_to_dma_unencrypted(dev, phys);
> + if (unlikely(!dma_capable(dev, dma_addr, size, false)))
> + goto err_overflow;
> } else {
> dma_addr = phys_to_dma(dev, phys);
> if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 3928a509c44c..abb0c88b188b 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -157,6 +157,7 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> {
> const struct dma_map_ops *ops = get_dma_ops(dev);
> bool is_mmio = attrs & DMA_ATTR_MMIO;
> + bool is_cc_decrypted = attrs & DMA_ATTR_CC_DECRYPTED;
> dma_addr_t addr = DMA_MAPPING_ERROR;
>
> BUG_ON(!valid_dma_direction(dir));
> @@ -165,8 +166,11 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> return DMA_MAPPING_ERROR;
>
> if (dma_map_direct(dev, ops) ||
> - (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
> + (!is_mmio && !is_cc_decrypted &&
> + arch_dma_map_phys_direct(dev, phys + size)))
> addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
> + else if (is_cc_decrypted)
> + return DMA_MAPPING_ERROR;
> else if (use_dma_iommu(dev))
...although, why *shouldn't* this be allowed with a vIOMMU? (Especially
given that a vIOMMU for untrusted devices can be emulated by the host
VMM without the CoCo hypervisor having to care at all - again, at least
on Arm and other architectures where IOMMUs are regular driver model
devices)
> addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
> else if (ops->map_phys)
Or indeed any other non-direct ops? Obviously all the legacy
architectures like Alpha are never going to see this or care, but I
could imagine Xen and possibly PowerPC might.
Thanks,
Robin.
> @@ -203,11 +207,16 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
> {
> const struct dma_map_ops *ops = get_dma_ops(dev);
> bool is_mmio = attrs & DMA_ATTR_MMIO;
> + bool is_cc_decrypted = attrs & DMA_ATTR_CC_DECRYPTED;
>
> BUG_ON(!valid_dma_direction(dir));
> +
> if (dma_map_direct(dev, ops) ||
> - (!is_mmio && arch_dma_unmap_phys_direct(dev, addr + size)))
> + (!is_mmio && !is_cc_decrypted &&
> + arch_dma_unmap_phys_direct(dev, addr + size)))
> dma_direct_unmap_phys(dev, addr, size, dir, attrs);
> + else if (is_cc_decrypted)
> + return;
> else if (use_dma_iommu(dev))
> iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
> else if (ops->unmap_phys)
This patch series adds a new dma-buf heap driver that exposes coherent,
non‑reusable reserved-memory regions as named heaps, so userspace can
explicitly allocate buffers from those device‑specific pools.
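For context on the userspace side: once such a heap is registered, a client allocates from it through the standard dma-heap UAPI (DMA_HEAP_IOCTL_ALLOC on the heap's /dev/dma_heap node). A minimal sketch; the heap name a caller would pass is hypothetical and presumably derived from the reserved-memory region:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>	/* struct dma_heap_allocation_data, DMA_HEAP_IOCTL_ALLOC */

/* Allocate a dma-buf of @len bytes from the named heap; returns the
 * dma-buf fd on success, -1 on failure. */
static int heap_alloc(const char *heap, size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	char path[128];
	int hfd, ret;

	snprintf(path, sizeof(path), "/dev/dma_heap/%s", heap);
	hfd = open(path, O_RDONLY | O_CLOEXEC);
	if (hfd < 0)
		return -1;
	ret = ioctl(hfd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(hfd);
	return ret < 0 ? -1 : (int)data.fd;
}
```

The point of the series is that this same allocation path then works for device-local coherent pools, not just the existing system/CMA heaps.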
Motivation: we want cgroup accounting for all userspace‑visible buffer
allocations (DRM, v4l2, dma‑buf heaps, etc.). That’s hard to do when
drivers call dma_alloc_attrs() directly because the accounting controller
(memcg vs dmem) is ambiguous. The long‑term plan is to steer those paths
toward dma‑buf heaps, where each heap can unambiguously charge a single
controller. To reach that goal, we need a heap backend for each
dma_alloc_attrs() memory type. CMA and system heaps already exist;
coherent reserved‑memory was the missing piece, since many SoCs define
dedicated, device‑local coherent pools in DT under /reserved-memory using
"shared-dma-pool" with non‑reusable regions (i.e., not CMA) that are
carved out exclusively for coherent DMA and are currently only usable by
in‑kernel drivers.
Because these regions are device-dependent, each heap instance binds a
heap device to its reserved-mem region via a newly introduced helper,
of_reserved_mem_device_init_with_mem(), so coherent allocations use the
correct dev->dma_mem.
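For reference, such a region typically looks like the fragment below in DT. Names, addresses and sizes here are made up, and the exact properties (e.g. no-map) depend on the platform:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* A dedicated, non-reusable coherent pool: note the absence of
	 * the "reusable" property, which is what distinguishes it from
	 * a CMA region. */
	video_pool: video-pool@90000000 {
		compatible = "shared-dma-pool";
		reg = <0x0 0x90000000 0x0 0x4000000>; /* 64 MiB */
		no-map;
	};
};
```

A consuming device would reference it with memory-region = <&video_pool>;.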
Charging to cgroups for these buffers is intentionally left out to keep
review focused on the new heap; I plan to follow up based on Eric’s [1]
and Maxime’s [2] work on dmem charging from userspace.
This series also makes the new heap driver modular, in line with the CMA
heap change in [3].
[1] https://lore.kernel.org/all/20260218-dmabuf-heap-cma-dmem-v2-0-b249886fb7b2…
[2] https://lore.kernel.org/all/20250310-dmem-cgroups-v1-0-2984c1bc9312@kernel.…
[3] https://lore.kernel.org/all/20260303-dma-buf-heaps-as-modules-v3-0-24344812…
Signed-off-by: Albert Esteve <aesteve(a)redhat.com>
---
Changes in v3:
- Reorganized changesets among patches to ensure bisectability
- Removed unused dma_heap_coherent_register() leftover
- Removed fallback when setting mask in coherent heap dev, since
dma_set_mask() already truncates to supported masks
- Moved struct rmem_assigned_device (rd) logic to
of_reserved_mem_device_init_with_mem() to allow listing the device
- Link to v2: https://lore.kernel.org/r/20260303-b4-dmabuf-heap-coherent-rmem-v2-0-65a465…
Changes in v2:
- Removed dmem charging parts
- Moved coherent heap registering logic to coherent.c
- Made heap device a member of struct dma_heap
- Split dma_heap_add logic into create/register, to be able to
  access the stored heap device before it is registered.
- Avoid platform device in favour of heap device
- Added a wrapper to rmem device_init() op
- Switched from late_initcall() to module_init()
- Made the coherent heap driver modular
- Link to v1: https://lore.kernel.org/r/20260224-b4-dmabuf-heap-coherent-rmem-v1-1-dffef4…
---
Albert Esteve (5):
dma-buf: dma-heap: split dma_heap_add
of_reserved_mem: add a helper for rmem device_init op
dma: coherent: store reserved memory coherent regions
dma-buf: heaps: Add Coherent heap to dmabuf heaps
dma-buf: heaps: coherent: Turn heap into a module
John Stultz (1):
dma-buf: dma-heap: Keep track of the heap device struct
drivers/dma-buf/dma-heap.c | 138 +++++++++--
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/coherent_heap.c | 417 ++++++++++++++++++++++++++++++++++
drivers/of/of_reserved_mem.c | 68 ++++--
include/linux/dma-heap.h | 5 +
include/linux/dma-map-ops.h | 7 +
include/linux/of_reserved_mem.h | 8 +
kernel/dma/coherent.c | 34 +++
9 files changed, 640 insertions(+), 47 deletions(-)
---
base-commit: 6de23f81a5e08be8fbf5e8d7e9febc72a5b5f27f
change-id: 20260223-b4-dmabuf-heap-coherent-rmem-91fd3926afe9
Best regards,
--
Albert Esteve <aesteve(a)redhat.com>
On Thu, Mar 05, 2026 at 01:36:40PM +0100, Jiri Pirko wrote:
> From: Jiri Pirko <jiri(a)nvidia.com>
>
> Current CC designs don't place a vIOMMU in front of untrusted devices.
> Instead, the DMA API forces all untrusted device DMA through swiotlb
> bounce buffers (is_swiotlb_force_bounce()) which copies data into
> decrypted memory on behalf of the device.
>
> When a caller has already arranged for the memory to be decrypted
> via set_memory_decrypted(), the DMA API needs to know so it can map
> directly using the unencrypted physical address rather than bounce
> buffering. Following the pattern of DMA_ATTR_MMIO, add
> DMA_ATTR_CC_DECRYPTED for this purpose. Like the MMIO case, only the
> caller knows what kind of memory it has and must inform the DMA API
> for it to work correctly.
>
> Signed-off-by: Jiri Pirko <jiri(a)nvidia.com>
> ---
> v1->v2:
> - rebased on top of recent dma-mapping-fixes
> ---
> include/linux/dma-mapping.h | 6 ++++++
> include/trace/events/dma.h | 3 ++-
> kernel/dma/direct.h | 14 +++++++++++---
> 3 files changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 29973baa0581..ae3d85e494ec 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -85,6 +85,12 @@
> * a cacheline must have this attribute for this to be considered safe.
> */
> #define DMA_ATTR_CPU_CACHE_CLEAN (1UL << 11)
> +/*
> + * DMA_ATTR_CC_DECRYPTED: Indicates memory that has been explicitly decrypted
> + * (shared) for confidential computing guests. The caller must have
> + * called set_memory_decrypted(). A struct page is required.
> + */
> +#define DMA_ATTR_CC_DECRYPTED (1UL << 12)
While adding the new attribute is fine, I would expect additional checks in
dma_map_phys() to ensure the attribute cannot be misused. For example,
WARN_ON((attrs & DMA_ATTR_CC_DECRYPTED) && (attrs & DMA_ATTR_MMIO)), along
with a check that we are taking the direct path only.
Thanks
On 3/12/26 19:45, Matt Evans wrote:
> Hi all,
>
>
> There were various suggestions in the September 2025 thread "[TECH
> TOPIC] vfio, iommufd: Enabling user space drivers to vend more
> granular access to client processes" [0], and LPC discussions, around
> improving the situation for multi-process userspace driver designs.
> This RFC series implements some of these ideas.
>
> (Thanks for feedback on v1! Revised series, with changes noted
> inline.)
>
> Background: Multi-process USDs
> ==============================
>
> The userspace driver scenario discussed in that thread involves a
> primary process driving a PCIe function through VFIO/iommufd, which
> manages the function-wide ownership/lifecycle. The function is
> designed to provide multiple distinct programming interfaces (for
> example, several independent MMIO register frames in one function),
> and the primary process delegates control of these interfaces to
> multiple independent client processes (which do the actual work).
> This scenario clearly relies on a HW design that provides appropriate
> isolation between the programming interfaces.
>
> The two key needs are:
>
> 1. Mechanisms to safely delegate a subset of the device MMIO
> resources to a client process without over-sharing wider access
> (or influence over whole-device activities, such as reset).
>
> 2. Mechanisms to allow a client process to do its own iommufd
> management w.r.t. its address space, in a way that's isolated
> from DMA relating to other clients.
>
>
> mmap() of VFIO DMABUFs
> ======================
>
> This RFC addresses #1 in "vfio/pci: Support mmap() of a VFIO DMABUF",
> implementing the proposals in [0] to add mmap() support to the
> existing VFIO DMABUF exporter.
>
> This enables a userspace driver to define DMABUF ranges corresponding
> to sub-ranges of a BAR, and grant a given client (via a shared fd)
> the capability to access (only) those sub-ranges. The VFIO device fds
> would be kept private to the primary process. All the client can do
> with that fd is map (or iomap via iommufd) that specific subset of
> resources, and the impact of bugs/malice is contained.
>
> (We'll follow up on #2 separately, as a related-but-distinct problem.
> PASIDs are one way to achieve per-client isolation of DMA; another
> could be sharing of a single IOVA space via 'constrained' iommufds.)
>
>
> New in v2: To achieve this, the existing VFIO BAR mmap() path is
> converted to use DMABUFs behind the scenes, in "vfio/pci: Convert BAR
> mmap() to use a DMABUF" plus new helper functions, as Jason/Christian
> suggested in the v1 discussion [3].
>
> This means:
>
> - Both regular and new DMABUF BAR mappings share the same vm_ops,
> i.e. mmap()ing DMABUFs is a smaller change on top of the existing
> mmap().
>
> - The zapping of mappings occurs via vfio_pci_dma_buf_move(), and the
> vfio_pci_zap_bars() originally paired with the _move()s can go
> away. Each DMABUF has a unique address_space.
>
> - It's a step towards future iommufd VFIO Type1 emulation
> implementing P2P, since iommufd can now get a DMABUF from a VA that
> it's mapping for IO; the VMAs' vm_file is that of the backing
> DMABUF.
>
>
> Revocation/reclaim
> ==================
>
> Mapping a BAR subset is useful, but the lifetime of access granted to
> a client needs to be managed well. For example, a protocol between
> the primary process and the client can indicate when the client is
> done, and when it's safe to reuse the resources elsewhere, but cleanup
> can't practically be cooperative.
>
> For robustness, we enable the driver to make the resources
> guaranteed-inaccessible when it chooses, so that it can re-assign them
> to other uses in future.
>
> "vfio/pci: Permanently revoke a DMABUF on request" adds a new VFIO
> device fd ioctl, VFIO_DEVICE_PCI_DMABUF_REVOKE. This takes a DMABUF
> fd parameter previously exported (from that device!) and permanently
> revokes the DMABUF. This notifies/detaches importers, zaps PTEs for
> any mappings, and guarantees no future attachment/import/map/access is
> possible by any means.
>
> A primary driver process would use this operation when the client's
> tenure ends to reclaim "loaned-out" MMIO interfaces, at which point
> the interfaces could be safely re-used.
>
> New in v2: ioctl() on VFIO driver fd, rather than DMABUF fd. A DMABUF
> is revoked using code common to vfio_pci_dma_buf_move(), selectively
> zapping mappings (after waiting for completion on the
> dma_buf_invalidate_mappings() request).
>
>
> BAR mapping access attributes
> =============================
>
> Inspired by Alex [Mastro] and Jason's comments in [0] and Mahmoud's
> work in [1] with the goal of controlling CPU access attributes for
> VFIO BAR mappings (e.g. WC), we can decorate DMABUFs with access
> attributes that are then used by a mapping's PTEs.
>
> I've proposed reserving a field in struct
> vfio_device_feature_dma_buf's flags to specify an attribute for its
> ranges. Although that keeps the (UAPI) struct unchanged, it means all
> ranges in a DMABUF share the same attribute. I feel a single
> attribute-to-mmap() relation is logical/reasonable. An application
> can also create multiple DMABUFs to describe any BAR layout and mix of
> attributes.
>
>
> Tests
> =====
>
> (Still sharing the [RFC ONLY] userspace test/demo program for context,
> not for merge.)
>
> It illustrates & tests various map/revoke cases, but doesn't use the
> existing VFIO selftests and relies on a (tweaked) QEMU EDU function.
> I'm (still) working on integrating the scenarios into the existing
> VFIO selftests.
>
> This code has been tested in mapping DMABUFs of single/multiple
> ranges, aliasing mmap()s, aliasing ranges across DMABUFs, vm_pgoff >
> 0, revocation, shutdown/cleanup scenarios, and hugepage mappings seem
> to work correctly. I've lightly tested WC mappings also (by observing
> resulting PTEs as having the correct attributes...).
>
>
> Fin
> ===
>
> v2 is based on next-20260310 (to build on Leon's recent series
> "vfio: Wait for dma-buf invalidation to complete" [2]).
>
>
> Please share your thoughts! I'd like to de-RFC if we feel this
> approach is now fair.
I only skimmed over it, but at least off hand I couldn't find anything fundamentally wrong.
The locking order seems to change in patch #6. In general I strongly recommend enabling lockdep while testing anyway, but especially when I see such changes.
In addition to that it might also be a good idea to have a lockdep initcall function which defines the locking order that all the VFIO code should follow.
See function dma_resv_lockdep() for an example on how to do that. Especially with mmap support and all the locks involved with that it has proven to be a good practice to have something like that.
Regards,
Christian.
>
>
> Many thanks,
>
>
> Matt
>
>
>
> References:
>
> [0]: https://lore.kernel.org/linux-iommu/20250918214425.2677057-1-amastro@fb.com/
> [1]: https://lore.kernel.org/all/20250804104012.87915-1-mngyadam@amazon.de/
> [2]: https://lore.kernel.org/linux-iommu/20260205-nocturnal-poetic-chamois-f566a…
> [3]: https://lore.kernel.org/all/20260226202211.929005-1-mattev@meta.com/
>
> --------------------------------------------------------------------------------
> Changelog:
>
> v2: Respin based on the feedback/suggestions:
>
> - Transform the existing VFIO BAR mmap path to also use DMABUFs behind
> the scenes, and then simply share that code for explicitly-mapped
> DMABUFs.
>
> - Refactors the export itself out of vfio_pci_core_feature_dma_buf,
> and shared by a new vfio_pci_core_mmap_prep_dmabuf helper used by
> the regular VFIO mmap to create a DMABUF.
>
> - Revoke buffers using a VFIO device fd ioctl
>
> v1: https://lore.kernel.org/all/20260226202211.929005-1-mattev@meta.com/
>
>
> Matt Evans (10):
> vfio/pci: Set up VFIO barmap before creating a DMABUF
> vfio/pci: Clean up DMABUFs before disabling function
> vfio/pci: Add helper to look up PFNs for DMABUFs
> vfio/pci: Add a helper to create a DMABUF for a BAR-map VMA
> vfio/pci: Convert BAR mmap() to use a DMABUF
> vfio/pci: Remove vfio_pci_zap_bars()
> vfio/pci: Support mmap() of a VFIO DMABUF
> vfio/pci: Permanently revoke a DMABUF on request
> vfio/pci: Add mmap() attributes to DMABUF feature
> [RFC ONLY] selftests: vfio: Add standalone vfio_dmabuf_mmap_test
>
> drivers/vfio/pci/Kconfig | 3 +-
> drivers/vfio/pci/Makefile | 3 +-
> drivers/vfio/pci/vfio_pci_config.c | 18 +-
> drivers/vfio/pci/vfio_pci_core.c | 123 +--
> drivers/vfio/pci/vfio_pci_dmabuf.c | 425 +++++++--
> drivers/vfio/pci/vfio_pci_priv.h | 46 +-
> include/uapi/linux/vfio.h | 42 +-
> tools/testing/selftests/vfio/Makefile | 1 +
> .../vfio/standalone/vfio_dmabuf_mmap_test.c | 837 ++++++++++++++++++
> 9 files changed, 1339 insertions(+), 159 deletions(-)
> create mode 100644 tools/testing/selftests/vfio/standalone/vfio_dmabuf_mmap_test.c
>