On Wed Apr 22, 2026 at 8:52 AM JST, Lyude Paul wrote:
One of the more obvious use cases for GEM shmem objects is the ability to create mappings into their contents, so let's hook this up in our Rust bindings.
Similar to how we handle SGTables, we provide two different types of mappings: owned mappings (kernel::drm::gem::shmem::VMap) and borrowed mappings (kernel::drm::gem::shmem::VMapRef).
Signed-off-by: Lyude Paul <lyude@redhat.com>
V7:
- Switch over to the new iosys map bindings that use the Io trait
V8:
- Get rid of iosys_map bindings for now, only support non-iomem types
- s/as_shmem()/as_raw_shmem()
V9:
- Get rid of some outdated comments I missed
- Add missing SIZE check to raw_vmap()
- Add a proper unit test that ensures that we actually validate SIZE at compile-time. Turns out it takes only 34 lines to make a boilerplate DRM driver for a kunit test :)
- Add unit tests
- Add some missing #[inline]s
V10:
- Correct issue with the iomem error path. We previously called raw_vunmap() if we got an iomem allocation, but raw_vunmap() was written such that it assumed all allocations were sysmem allocations. Fix this by making raw_vunmap() accept an iosys_map.
V11:
- Use Alexandre's clever solution to remove the macros we were using for maintaining two different VMap types.
- Change the order of items in Object<T> to ensure that sgt_res is always dropped before obj.
I am not sure this is enough to solve the double-free issue, although my GEM knowledge is lacking, so please take this with a grain of salt.
Take an object where we called `sg_table`, so the SGT has been created.
When the object is dropped by DRM, `free_callback` is first called. It calls `drm_gem_shmem_release`, which frees the SGT.
Then, the `Drop` implementation of `Object` is called, and `sgt_res` is dropped... which attempts to free the SGT again.
I haven't tested this and am relying only on a quick look at the code and my partial understanding, but the correct fix appears to be to clear `sgt_res` in `free_callback`.
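The ordering point above can be demonstrated in plain Rust: struct fields are dropped in declaration order, which is why moving `sgt_res` ahead of `obj` in `Object<T>` makes it drop first. A minimal userspace sketch (the `Object`, `SgtRes`, and `GemObj` types here are stand-ins for illustration, not the kernel types):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Shared log that records the order in which destructors run.
type Log = Rc<RefCell<Vec<&'static str>>>;

struct SgtRes(Log);
impl Drop for SgtRes {
    fn drop(&mut self) {
        self.0.borrow_mut().push("sgt_res");
    }
}

struct GemObj(Log);
impl Drop for GemObj {
    fn drop(&mut self) {
        self.0.borrow_mut().push("obj");
    }
}

// Fields are dropped in declaration order, so declaring `sgt_res`
// first guarantees it is torn down before `obj`.
struct Object {
    sgt_res: SgtRes,
    obj: GemObj,
}

fn drop_order() -> Vec<&'static str> {
    let log: Log = Rc::new(RefCell::new(Vec::new()));
    let o = Object {
        sgt_res: SgtRes(log.clone()),
        obj: GemObj(log.clone()),
    };
    drop(o);
    let order = log.borrow().clone();
    order
}

fn main() {
    // Declaration order controls teardown order.
    println!("{:?}", drop_order());
}
```

Note this only controls the relative order of the two fields' destructors; it says nothing about what `free_callback` does first, which is why clearing `sgt_res` there may still be needed.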
- Fix typo in Object.raw_vmap()
 rust/kernel/drm/gem/shmem.rs | 355 +++++++++++++++++++++++++++++++++++
 1 file changed, 355 insertions(+)
diff --git a/rust/kernel/drm/gem/shmem.rs b/rust/kernel/drm/gem/shmem.rs
index a477312c8a09b..b96de8d33141d 100644
--- a/rust/kernel/drm/gem/shmem.rs
+++ b/rust/kernel/drm/gem/shmem.rs
@@ -26,6 +26,11 @@
         from_err_ptr,
         to_result,
     },
+    io::{
+        Io,
+        IoCapable,
+        IoKnownSize,
+    },
     prelude::*,
     scatterlist,
     types::{
@@ -35,6 +40,11 @@
 };
 use core::{
     cell::UnsafeCell,
+    ffi::c_void,
+    mem::{
+        self,
+        MaybeUninit,
+    },
     ops::{
         Deref,
         DerefMut,
@@ -45,6 +55,7 @@
     },
 };
 use gem::{
     BaseObject,
     BaseObjectPrivate,
+    DriverObject,
     IntoGEMObject,
@@ -289,6 +300,84 @@ pub fn owned_sg_table(&self, dev: &device::Device<Bound>) -> Result<SGTable<T>>
         // `Some(Devres<SGTableMap<T>>)`.
         Ok(SGTable(self.into()))
     }
 
+    /// Attempt to create a vmap from the gem object, and confirm the size of said vmap.
+    fn raw_vmap(&self, min_size: usize) -> Result<*mut c_void> {
+        if self.size() < min_size {
+            return Err(ENOSPC);
+        }
+
+        let mut map: MaybeUninit<bindings::iosys_map> = MaybeUninit::uninit();
+
+        // SAFETY: drm_gem_shmem_vmap can be called with the DMA reservation lock held
+        to_result(unsafe {
+            // TODO: see top of file
+            bindings::dma_resv_lock(self.raw_dma_resv(), ptr::null_mut());
+            let ret = bindings::drm_gem_shmem_vmap_locked(self.as_raw_shmem(), map.as_mut_ptr());
+            bindings::dma_resv_unlock(self.raw_dma_resv());
+            ret
+        })?;
+
+        // SAFETY: The call to drm_gem_shmem_vmap_locked succeeded above, so we are guaranteed
+        // that map is properly initialized.
+        let map = unsafe { map.assume_init() };
With the DMA reservation guard I suggested in the previous patch, this can become
    let map = {
        let mut map: MaybeUninit<bindings::iosys_map> = MaybeUninit::uninit();
        let _dma_resv = DmaResvGuard::new(self);

        // SAFETY: drm_gem_shmem_vmap can be called with the DMA reservation lock held
        to_result(unsafe {
            bindings::drm_gem_shmem_vmap_locked(self.as_raw_shmem(), map.as_mut_ptr())
        })?;

        // SAFETY: The call to drm_gem_shmem_vmap_locked succeeded above, so we are guaranteed
        // that map is properly initialized.
        unsafe { map.assume_init() }
    };
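The reason the guard simplifies things: it is a standard RAII scope guard, so the unlock runs in `Drop` on every exit path, including the early `?` return in the iomem error branch below. A minimal userspace sketch of the pattern (the `Lock`/`Guard` types here are illustrative stand-ins, not the kernel's dma_resv API):

```rust
use std::cell::Cell;

// Stand-in for a lockable resource; `held` just records the lock state.
struct Lock {
    held: Cell<bool>,
}

// RAII guard: constructing it takes the lock, dropping it releases it.
struct Guard<'a>(&'a Lock);

impl Lock {
    fn new() -> Self {
        Lock { held: Cell::new(false) }
    }

    fn lock(&self) -> Guard<'_> {
        self.held.set(true);
        Guard(self)
    }
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        // Runs on every exit path: normal return, early `?`, or unwind.
        self.0.held.set(false);
    }
}

// Returns (lock state while the guard is alive, lock state after the scope).
fn demo() -> (bool, bool) {
    let resv = Lock::new();
    let during = {
        let _guard = resv.lock();
        resv.held.get()
    }; // `_guard` dropped here, releasing the lock
    (during, resv.held.get())
}

fn main() {
    println!("{:?}", demo());
}
```

With the guard in place, no explicit unlock call is needed before `Err(ENOTSUPP)` or `?`, which is exactly the class of bug the V10 iomem error-path fix addressed.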
+        // XXX: We don't currently support iomem allocations
+        if map.is_iomem {
+            // SAFETY:
+            // - The vmap operation above succeeded, guaranteeing that `map` points to a valid
+            //   memory mapping.
+            // - We checked that this is an iomem allocation, making it safe to read vaddr_iomem
+            unsafe { self.raw_vunmap(map) };
+            Err(ENOTSUPP)
+        } else {
+            // SAFETY: We checked that this is not an iomem allocation, making it safe to read vaddr
+            Ok(unsafe { map.__bindgen_anon_1.vaddr })
+        }
+    }
+
+    /// Unmap a vmap from the gem object.
+    ///
+    /// # Safety
+    ///
+    /// - The caller promises that `map` is a valid vmap on this gem object.
+    /// - The caller promises that the memory pointed to by map will no longer be accessed
+    ///   through this instance.
+    unsafe fn raw_vunmap(&self, mut map: bindings::iosys_map) {
+        let resv = self.raw_dma_resv();
+
+        // SAFETY:
+        // - This function is safe to call with the DMA reservation lock held
+        // - Our `ARef` is proof that the underlying gem object here is initialized and thus
+        //   safe to dereference.
+        unsafe {
+            // TODO: see top of file
+            bindings::dma_resv_lock(resv, ptr::null_mut());
+            bindings::drm_gem_shmem_vunmap_locked(self.as_raw_shmem(), &mut map);
+            bindings::dma_resv_unlock(resv);
+        }
+    }
+
+    /// Creates and returns a virtual kernel memory mapping for this object.
+    #[inline]
+    pub fn vmap<const SIZE: usize>(&self) -> Result<VMapRef<'_, T, SIZE>> {
+        Ok(VMap {
+            // INVARIANT: `raw_vmap()` checks that the gem object is at least as large as `SIZE`.
+            addr: self.raw_vmap(SIZE)?,
+            owner: self,
+        })
+    }
+
+    /// Creates and returns an owned reference to a virtual kernel memory mapping for this object.
+    #[inline]
+    pub fn owned_vmap<const SIZE: usize>(&self) -> Result<VMapOwned<T, SIZE>> {
+        Ok(VMap {
+            // INVARIANT: `raw_vmap()` checks that the gem object is at least as large as `SIZE`.
+            addr: self.raw_vmap(SIZE)?,
+            owner: self.into(),
+        })
+    }
You can factor out some of the code into a generic private method:
    fn make_vmap<R, const SIZE: usize>(&self, owner: R) -> Result<VMap<T, R, SIZE>>
    where
        R: Deref<Target = Object<T>>,
    {
        Ok(VMap {
            // INVARIANT: `raw_vmap()` checks that the gem object is at least as large as `SIZE`.
            addr: self.raw_vmap(SIZE)?,
            owner,
        })
    }

    /// Creates and returns a virtual kernel memory mapping for this object.
    #[inline]
    pub fn vmap<const SIZE: usize>(&self) -> Result<VMapRef<'_, T, SIZE>> {
        self.make_vmap(self)
    }

    /// Creates and returns an owned reference to a virtual kernel memory mapping for this object.
    #[inline]
    pub fn owned_vmap<const SIZE: usize>(&self) -> Result<VMapOwned<T, SIZE>> {
        self.make_vmap(self.into())
    }
It doesn't result in fewer lines of code, but it reduces duplication and removes one `INVARIANT` statement.
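The underlying idea, in plain Rust terms: a single wrapper type generic over `R: Deref<Target = Object>` covers both the borrowed and owned cases, so the two mapping types collapse into type aliases. A minimal userspace sketch (deliberately simplified: no `T`, no `SIZE`, and `Object`/`VMap` here are stand-ins for the patch's types):

```rust
use std::ops::Deref;
use std::sync::Arc;

// Stand-in for the gem object; not the kernel type.
struct Object {
    size: usize,
}

// One mapping type, generic over how the object is owned: `R` can be a
// borrowed `&Object` or an owning `Arc<Object>`, since both deref to `Object`.
struct VMap<R: Deref<Target = Object>> {
    owner: R,
}

// Aliases mirroring the borrowed/owned split from the patch.
type VMapRef<'a> = VMap<&'a Object>;
type VMapOwned = VMap<Arc<Object>>;

// One constructor serves both aliases, so the construction logic
// (and its invariant comment) lives in exactly one place.
fn make_vmap<R: Deref<Target = Object>>(owner: R) -> VMap<R> {
    VMap { owner }
}

fn main() {
    let obj = Object { size: 4096 };
    let borrowed: VMapRef<'_> = make_vmap(&obj);
    assert_eq!(borrowed.owner.size, 4096);

    let owned: VMapOwned = make_vmap(Arc::new(Object { size: 8192 }));
    assert_eq!(owned.owner.size, 8192);
    println!("one VMap type covers both owners");
}
```

Callers never see the generic parameter directly; they use the aliases, which is what keeps the public API identical to the two-type version.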