A few selftest harness changes were merged into v6.16, which exposed some bugs and vulnerabilities in the iommufd selftest code. Fix them properly.
Note that the patch fixing the build warnings around mfd is not ideal, as it has possibly hit some corner case in gcc: https://lore.kernel.org/all/aEi8DV+ReF3v3Rlf@nvidia.com/
This is on github: https://github.com/nicolinc/iommufd/commits/iommufd_selftest_fixes-v6.16
Thanks Nicolin
Nicolin Chen (4):
  iommufd/selftest: Fix iommufd_dirty_tracking with large hugepage sizes
  iommufd/selftest: Add missing close(mfd) in memfd_mmap()
  iommufd/selftest: Add asserts testing global mfd
  iommufd/selftest: Fix build warnings due to uninitialized mfd
 tools/testing/selftests/iommu/iommufd_utils.h |  9 ++++-
 tools/testing/selftests/iommu/iommufd.c       | 38 +++++++++++++++----
 2 files changed, 38 insertions(+), 9 deletions(-)
The hugepage test cases of iommufd_dirty_tracking cover 64MB and 128MB. Both are smaller than the default hugepage size of 512MB when CONFIG_PAGE_SIZE_64KB=y. However, these test cases have a variant using huge pages, which would mmap(MAP_HUGETLB) with these sizes smaller than the system hugepage size. This results in the kernel aligning the smaller size up to 512MB. If memory was located between the upper 64/128MB size boundary and the 512MB hugepage boundary, it would get wiped out: https://lore.kernel.org/all/aEoUhPYIAizTLADq@nvidia.com/
Given that this align-up behavior is well documented, we have no choice but to allocate a hugepage-aligned size to avoid this unintended wipe-out. Instead of relying on the kernel's internal force alignment, pass the same aligned size to posix_memalign() and mmap().
On the other hand, the munmap() handler in the kernel doesn't align up, so we have to manually fix the munmap() length to prevent a size mismatch.
Also, fix the FIXTURE_TEARDOWN(), as it misuses munmap() for the bitmap allocated via posix_memalign() and forgets to free self->buffer.
Fixes: a9af47e382a4 ("iommufd/selftest: Test IOMMU_HWPT_GET_DIRTY_BITMAP")
Cc: stable@vger.kernel.org
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 tools/testing/selftests/iommu/iommufd.c | 28 ++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 1a8e85afe9aa..602f8540242b 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -7,6 +7,7 @@
 #include <sys/eventfd.h>

 #define __EXPORTED_HEADERS__
+#include <linux/const.h>
 #include <linux/vfio.h>

 #include "iommufd_utils.h"
@@ -2022,7 +2023,19 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
 	self->fd = open("/dev/iommu", O_RDWR);
 	ASSERT_NE(-1, self->fd);

-	rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, variant->buffer_size);
+	if (variant->hugepages) {
+		/*
+		 * Allocation must be aligned to the HUGEPAGE_SIZE, because the
+		 * following mmap() will automatically align the length to be a
+		 * multiple of the underlying huge page size. Failing to do the
+		 * same at this allocation will result in a memory overwrite by
+		 * the mmap().
+		 */
+		size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
+	} else {
+		size = variant->buffer_size;
+	}
+	rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, size);
 	if (rc || !self->buffer) {
 		SKIP(return, "Skipping buffer_size=%lu due to errno=%d",
 		     variant->buffer_size, rc);
@@ -2037,8 +2050,8 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
 		mmap_flags |= MAP_HUGETLB | MAP_POPULATE;
 	}
 	assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0);
-	vrc = mmap(self->buffer, variant->buffer_size, PROT_READ | PROT_WRITE,
-		   mmap_flags, -1, 0);
+	vrc = mmap(self->buffer, size, PROT_READ | PROT_WRITE, mmap_flags, -1,
+		   0);
 	assert(vrc == self->buffer);

 	self->page_size = MOCK_PAGE_SIZE;
@@ -2066,8 +2079,13 @@ FIXTURE_SETUP(iommufd_dirty_tracking)

 FIXTURE_TEARDOWN(iommufd_dirty_tracking)
 {
-	munmap(self->buffer, variant->buffer_size);
-	munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
+	unsigned long size = variant->buffer_size;
+
+	if (variant->hugepages)
+		size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
+	munmap(self->buffer, size);
+	free(self->buffer);
+	free(self->bitmap);
 	teardown_iommufd(self->fd, _metadata);
 }
On Sun, Jun 15, 2025 at 10:02:03PM -0700, Nicolin Chen wrote:
FIXTURE_TEARDOWN(iommufd_dirty_tracking)
{
-	munmap(self->buffer, variant->buffer_size);
-	munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
+	unsigned long size = variant->buffer_size;
+
+	if (variant->hugepages)
+		size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
+	munmap(self->buffer, size);
+	free(self->buffer);
+	free(self->bitmap);
 	teardown_iommufd(self->fd, _metadata);
 }
munmap followed by free isn't right..
This code is using the glibc allocator to get a bunch of pages mmap'd at an aligned location, then replacing the pages with MAP_SHARED and maybe MAP_HUGETLB versions.
The glibc allocator is fine to unmap them via free.
I think something like this will do the job:
diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 1a8e85afe9aa51..e0f4f1541a1f4a 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -2008,6 +2008,7 @@ FIXTURE_VARIANT(iommufd_dirty_tracking)

 FIXTURE_SETUP(iommufd_dirty_tracking)
 {
+	size_t mmap_buffer_size;
 	unsigned long size;
 	int mmap_flags;
 	void *vrc;
@@ -2022,12 +2023,7 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
 	self->fd = open("/dev/iommu", O_RDWR);
 	ASSERT_NE(-1, self->fd);

-	rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, variant->buffer_size);
-	if (rc || !self->buffer) {
-		SKIP(return, "Skipping buffer_size=%lu due to errno=%d",
-		     variant->buffer_size, rc);
-	}
-
+	mmap_buffer_size = variant->buffer_size;
 	mmap_flags = MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED;
 	if (variant->hugepages) {
 		/*
@@ -2035,9 +2031,25 @@ FIXTURE_SETUP(iommufd_dirty_tracking)
 		 * not available.
 		 */
 		mmap_flags |= MAP_HUGETLB | MAP_POPULATE;
+
+		/*
+		 * Allocation must be aligned to the HUGEPAGE_SIZE, because the
+		 * following mmap() will automatically align the length to be a
+		 * multiple of the underlying huge page size. Failing to do the
+		 * same at this allocation will result in a memory overwrite by
+		 * the mmap().
+		 */
+		if (mmap_buffer_size < HUGEPAGE_SIZE)
+			mmap_buffer_size = HUGEPAGE_SIZE;
+	}
+
+	rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, mmap_buffer_size);
+	if (rc || !self->buffer) {
+		SKIP(return, "Skipping buffer_size=%lu due to errno=%d",
+		     mmap_buffer_size, rc);
 	}
 	assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0);
-	vrc = mmap(self->buffer, variant->buffer_size, PROT_READ | PROT_WRITE,
+	vrc = mmap(self->buffer, mmap_buffer_size, PROT_READ | PROT_WRITE,
 		   mmap_flags, -1, 0);
 	assert(vrc == self->buffer);

@@ -2066,8 +2078,8 @@ FIXTURE_SETUP(iommufd_dirty_tracking)

 FIXTURE_TEARDOWN(iommufd_dirty_tracking)
 {
-	munmap(self->buffer, variant->buffer_size);
-	munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
+	free(self->buffer);
+	free(self->bitmap);
 	teardown_iommufd(self->fd, _metadata);
 }
On Mon, Jun 16, 2025 at 01:25:01PM -0300, Jason Gunthorpe wrote:
On Sun, Jun 15, 2025 at 10:02:03PM -0700, Nicolin Chen wrote:
FIXTURE_TEARDOWN(iommufd_dirty_tracking) {
- munmap(self->buffer, variant->buffer_size);
- munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
- unsigned long size = variant->buffer_size;
- if (variant->hugepages)
size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
- munmap(self->buffer, size);
- free(self->buffer);
- free(self->bitmap); teardown_iommufd(self->fd, _metadata);
munmap followed by free isn't right..
You are right. I re-checked with Copilot. It says the same thing. I think the whole posix_memalign() + mmap() confuses me..
Yet, should the bitmap pair with free() since it's allocated by a posix_memalign() call?
This code is using the glibc allocator to get a bunch of pages mmap'd at an aligned location, then replacing the pages with MAP_SHARED and maybe MAP_HUGETLB versions.
And I studied some use cases from Copilot. It says that, to use the combination of posix_memalign() + mmap(), we should do:

aligned_ptr = posix_memalign(pagesize, pagesize);
munmap(aligned_ptr, pagesize);
mapped = mmap(aligned_ptr, pagesize, PROT_READ | PROT_WRITE,
	      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
munmap(mapped, pagesize);
// No free() after munmap().
---breakdown---
Before `posix_memalign()`:
[ heap memory unused ]

After `posix_memalign()`:
[ posix_memalign() memory ]  ← managed by malloc/free
  ↑ aligned_ptr

After `munmap(aligned_ptr)`:
[ unmapped memory ]  ← allocator no longer owns it

After `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← fully remapped, under your control
  ↑ mapped
---end---
It points out that the heap bookkeeping will be silently clobbered without the munmap() in between (like we are doing):

---breakdown---
After `posix_memalign()`:
[ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← malloc still thinks it owns this (!)
  ↑ mapped
---end---
It also gives a simpler solution for memory that is not huge page backed but huge page aligned (our !variant->hugepages case):

---code---
void *ptr;
size_t alignment = 2 * 1024 * 1024; // or whatever HUGEPAGE_SIZE was
size_t size = variant->buffer_size;

// Step 1: Use posix_memalign to get an aligned pointer
if (posix_memalign(&ptr, alignment, size) != 0) {
	perror("posix_memalign");
	return -1;
}

// Use the memory directly
self->buffer = ptr;

// Access/manipulate the memory as needed...

// Step 2: Clean up when done
free(self->buffer);
---end---
Also, for a huge page case, there is no need of posix_memalign(): "Hugepages are not part of the standard heap, so allocator functions like posix_memalign() or malloc() don't help and can even get in the way."
Instead, it suggests a cleaner version without posix_memalign():

---code---
void *addr = mmap(NULL, variant->buffer_size, PROT_READ | PROT_WRITE,
		  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE,
		  -1, 0);
if (addr == MAP_FAILED) {
	perror("mmap");
	return -1;
}
---end---
Should we follow?
Thanks Nicolin
On Mon, Jun 16, 2025 at 07:02:08PM -0700, Nicolin Chen wrote:
On Mon, Jun 16, 2025 at 01:25:01PM -0300, Jason Gunthorpe wrote:
On Sun, Jun 15, 2025 at 10:02:03PM -0700, Nicolin Chen wrote:
FIXTURE_TEARDOWN(iommufd_dirty_tracking)
{
-	munmap(self->buffer, variant->buffer_size);
-	munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));
+	unsigned long size = variant->buffer_size;
+
+	if (variant->hugepages)
+		size = __ALIGN_KERNEL(variant->buffer_size, HUGEPAGE_SIZE);
+	munmap(self->buffer, size);
+	free(self->buffer);
+	free(self->bitmap);
 	teardown_iommufd(self->fd, _metadata);
 }
munmap followed by free isn't right..
You are right. I re-checked with Copilot. It says the same thing. I think the whole posix_memalign() + mmap() confuses me..
Yet, should the bitmap pair with free() since it's allocated by a posix_memalign() call?
This code is using the glibc allocator to get a bunch of pages mmap'd at an aligned location, then replacing the pages with MAP_SHARED and maybe MAP_HUGETLB versions.
And I studied some use cases from Copilot. It says that, to use the combination of posix_memalign() + mmap(), we should do:

aligned_ptr = posix_memalign(pagesize, pagesize);
munmap(aligned_ptr, pagesize);
mapped = mmap(aligned_ptr, pagesize, PROT_READ | PROT_WRITE,
	      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
munmap(mapped, pagesize);
// No free() after munmap().
---breakdown---
Before `posix_memalign()`:
[ heap memory unused ]

After `posix_memalign()`:
[ posix_memalign() memory ]  ← managed by malloc/free
  ↑ aligned_ptr

After `munmap(aligned_ptr)`:
[ unmapped memory ]  ← allocator no longer owns it
Incorrect. The allocator has no idea about the munmap and munmap doesn't disturb any of the allocator tracking structures.
After `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← fully remapped, under your control
  ↑ mapped
---end---
No, this is wrong.
It points out that the heap bookkeeping will be silently clobbered without the munmap() in-between (like we are doing):
Nope, doesn't work like that.
---breakdown---
After `posix_memalign()`:
[ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← malloc still thinks it owns this (!)
  ↑ mapped
---end---
Yes, this is correct and what we are doing here. The allocator always owns it and we are just replacing the memory with a different mmap.
It also gives a simpler solution for memory that is not huge page backed but huge page aligned (our !variant->hugepages case):

---code---
void *ptr;
size_t alignment = 2 * 1024 * 1024; // or whatever HUGEPAGE_SIZE was
size_t size = variant->buffer_size;

// Step 1: Use posix_memalign to get an aligned pointer
if (posix_memalign(&ptr, alignment, size) != 0) {
	perror("posix_memalign");
	return -1;
}
Also no, the main point of this is to inject MAP_SHARED, which posix_memalign() cannot do.
Also, for a huge page case, there is no need of posix_memalign(): "Hugepages are not part of the standard heap, so allocator functions like posix_memalign() or malloc() don't help and can even get in the way."
Instead, it suggests a cleaner version without posix_memalign():

---code---
void *addr = mmap(NULL, variant->buffer_size, PROT_READ | PROT_WRITE,
		  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE,
		  -1, 0);
if (addr == MAP_FAILED) {
	perror("mmap");
	return -1;
}
---end---
Yes, we could do this only for MAP_HUGETLB, but it doesn't help the normal case with MAP_SHARED.
So I would leave it alone, use the version I showed.
Jason
On Tue, Jun 17, 2025 at 08:59:48AM -0300, Jason Gunthorpe wrote:
On Mon, Jun 16, 2025 at 07:02:08PM -0700, Nicolin Chen wrote:
---breakdown---
After `posix_memalign()`:
[ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← malloc still thinks it owns this (!)
  ↑ mapped
---end---
Yes, this is correct and what we are doing here. The allocator always owns it and we are just replacing the memory with a different mmap.
Hmm, if the allocator always owns it, does that mean the munmap() [3] will release what [1] and [2] do (allocating and replacing)?

[1] posix_memalign()
[2] mmap()
[3] munmap()
// Step 1: Use posix_memalign to get an aligned pointer
if (posix_memalign(&ptr, alignment, size) != 0) {
	perror("posix_memalign");
	return -1;
}
Also no, the main point of this is to inject MAP_SHARED, which posix_memalign() cannot do.
I see.
Instead, it suggests a cleaner version without posix_memalign():

---code---
void *addr = mmap(NULL, variant->buffer_size, PROT_READ | PROT_WRITE,
		  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE,
		  -1, 0);
if (addr == MAP_FAILED) {
	perror("mmap");
	return -1;
}
---end---
Yes, we could do this only for MAP_HUGETLB, but it doesn't help the normal case with MAP_SHARED.
So I would leave it alone, use the version I showed.
OK. Will respin a v2 with that.
Thanks Nicolin
On Tue, Jun 17, 2025 at 02:23:41PM -0700, Nicolin Chen wrote:
On Tue, Jun 17, 2025 at 08:59:48AM -0300, Jason Gunthorpe wrote:
On Mon, Jun 16, 2025 at 07:02:08PM -0700, Nicolin Chen wrote:
---breakdown---
After `posix_memalign()`:
[ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← malloc still thinks it owns this (!)
  ↑ mapped
---end---
Yes, this is correct and what we are doing here. The allocator always owns it and we are just replacing the memory with a different mmap.
Hmm, if the allocator always owns it, does that mean the munmap() [3] will release what [1] and [2] do (allocating and replacing)?

No, munmap doesn't destroy the allocator metadata.
Jason
On Tue, Jun 17, 2025 at 08:01:36PM -0300, Jason Gunthorpe wrote:
On Tue, Jun 17, 2025 at 02:23:41PM -0700, Nicolin Chen wrote:
On Tue, Jun 17, 2025 at 08:59:48AM -0300, Jason Gunthorpe wrote:
On Mon, Jun 16, 2025 at 07:02:08PM -0700, Nicolin Chen wrote:
---breakdown---
After `posix_memalign()`:
[ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← malloc still thinks it owns this (!)
  ↑ mapped
---end---
Yes, this is correct and what we are doing here. The allocator always owns it and we are just replacing the memory with a different mmap.
Hmm, if the allocator always owns it, does that mean the munmap() [3] will release what [1] and [2] do (allocating and replacing)?

No, munmap doesn't destroy the allocator metadata.

Should we do something about that metadata?
On Tue, Jun 17, 2025 at 04:46:57PM -0700, Nicolin Chen wrote:
On Tue, Jun 17, 2025 at 08:01:36PM -0300, Jason Gunthorpe wrote:
On Tue, Jun 17, 2025 at 02:23:41PM -0700, Nicolin Chen wrote:
On Tue, Jun 17, 2025 at 08:59:48AM -0300, Jason Gunthorpe wrote:
On Mon, Jun 16, 2025 at 07:02:08PM -0700, Nicolin Chen wrote:
---breakdown---
After `posix_memalign()`:
[ posix_memalign() memory ]  ← malloc thinks it owns this

Then `mmap(aligned_ptr, ..., MAP_FIXED)`:
[ anonymous mmap region ]  ← malloc still thinks it owns this (!)
  ↑ mapped
---end---
Yes, this is correct and what we are doing here. The allocator always owns it and we are just replacing the memory with a different mmap.
Hmm, if the allocator always owns it, does that mean the munmap() [3] will release what [1] and [2] do (allocating and replacing)?

No, munmap doesn't destroy the allocator metadata.

Should we do something about that metadata?
My patch calls free() on it, which removes the metadata and might munmap the range (or re-use it, but that doesn't matter).
Jason
Do not forget to close mfd in the error paths, since none of the callers would close it when ASSERT_NE(MAP_FAILED, buf) fails.
Fixes: 0bcceb1f51c7 ("iommufd: Selftest coverage for IOMMU_IOAS_MAP_FILE")
Cc: stable@vger.kernel.org
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 tools/testing/selftests/iommu/iommufd_utils.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/iommu/iommufd_utils.h b/tools/testing/selftests/iommu/iommufd_utils.h
index 72f6636e5d90..6e967b58acfd 100644
--- a/tools/testing/selftests/iommu/iommufd_utils.h
+++ b/tools/testing/selftests/iommu/iommufd_utils.h
@@ -60,13 +60,18 @@ static inline void *memfd_mmap(size_t length, int prot, int flags, int *mfd_p)
 {
 	int mfd_flags = (flags & MAP_HUGETLB) ? MFD_HUGETLB : 0;
 	int mfd = memfd_create("buffer", mfd_flags);
+	void *buf = MAP_FAILED;

 	if (mfd <= 0)
 		return MAP_FAILED;
 	if (ftruncate(mfd, length))
-		return MAP_FAILED;
+		goto out;
 	*mfd_p = mfd;
-	return mmap(0, length, prot, flags, mfd, 0);
+	buf = mmap(0, length, prot, flags, mfd, 0);
+out:
+	if (buf == MAP_FAILED)
+		close(mfd);
+	return buf;
 }

 /*
On Sun, Jun 15, 2025 at 10:02:04PM -0700, Nicolin Chen wrote:
Do not forget to close mfd in the error paths, since none of the callers would close it when ASSERT_NE(MAP_FAILED, buf) fails.
Fixes: 0bcceb1f51c7 ("iommufd: Selftest coverage for IOMMU_IOAS_MAP_FILE")
Cc: stable@vger.kernel.org
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>

 tools/testing/selftests/iommu/iommufd_utils.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
The mfd and mfd_buffer will be used in the tests directly without an extra check. Test them in setup_sizes() to ensure they are safe to use.
Fixes: 0bcceb1f51c7 ("iommufd: Selftest coverage for IOMMU_IOAS_MAP_FILE")
Cc: stable@vger.kernel.org
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 tools/testing/selftests/iommu/iommufd.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 602f8540242b..393d95f88ad4 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -55,6 +55,8 @@ static __attribute__((constructor)) void setup_sizes(void)

 	mfd_buffer = memfd_mmap(BUFFER_SIZE, PROT_READ | PROT_WRITE,
 				MAP_SHARED, &mfd);
+	assert(mfd_buffer != MAP_FAILED);
+	assert(mfd > 0);
 }
FIXTURE(iommufd)
On Sun, Jun 15, 2025 at 10:02:05PM -0700, Nicolin Chen wrote:
The mfd and mfd_buffer will be used in the tests directly without an extra check. Test them in setup_sizes() to ensure they are safe to use.
Fixes: 0bcceb1f51c7 ("iommufd: Selftest coverage for IOMMU_IOAS_MAP_FILE")
Cc: stable@vger.kernel.org
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>

 tools/testing/selftests/iommu/iommufd.c | 2 ++
 1 file changed, 2 insertions(+)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
Commit 869c788909b9 ("selftests: harness: Stop using setjmp()/longjmp()") changed the harness structure. For some unknown reason, two build warnings occur in the iommufd selftest:
iommufd.c: In function ‘wrapper_iommufd_mock_domain_all_aligns’:
iommufd.c:1807:17: warning: ‘mfd’ may be used uninitialized in this function
 1807 |                 close(mfd);
      |                 ^~~~~~~~~~
iommufd.c:1767:13: note: ‘mfd’ was declared here
 1767 |         int mfd;
      |             ^~~
iommufd.c: In function ‘wrapper_iommufd_mock_domain_all_aligns_copy’:
iommufd.c:1870:17: warning: ‘mfd’ may be used uninitialized in this function
 1870 |                 close(mfd);
      |                 ^~~~~~~~~~
iommufd.c:1819:13: note: ‘mfd’ was declared here
 1819 |         int mfd;
      |             ^~~
All the mfd uses are in the variant->file path only, so it's likely a false alarm.
FWIW, the commit mentioned above does not cause this, yet it might affect gcc in a certain way that results in the warnings. It was also found that adding a dummy setjmp() (which doesn't make sense) could mute the warnings: https://lore.kernel.org/all/aEi8DV+ReF3v3Rlf@nvidia.com/
The job of this selftest is to catch kernel bugs, and such warnings are unlikely to disrupt that role. Mute the warnings by force-initializing the mfd and adding an ASSERT_GT().
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
 tools/testing/selftests/iommu/iommufd.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 393d95f88ad4..ca611ae5925f 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -1749,13 +1749,15 @@ TEST_F(iommufd_mock_domain, all_aligns)
 	unsigned int end;
 	uint8_t *buf;
 	int prot = PROT_READ | PROT_WRITE;
-	int mfd;
+	int mfd = -1;

 	if (variant->file)
 		buf = memfd_mmap(buf_size, prot, MAP_SHARED, &mfd);
 	else
 		buf = mmap(0, buf_size, prot, self->mmap_flags, -1, 0);
 	ASSERT_NE(MAP_FAILED, buf);
+	if (variant->file)
+		ASSERT_GT(mfd, 0);
 	check_refs(buf, buf_size, 0);

 	/*
@@ -1801,13 +1803,15 @@ TEST_F(iommufd_mock_domain, all_aligns_copy)
 	unsigned int end;
 	uint8_t *buf;
 	int prot = PROT_READ | PROT_WRITE;
-	int mfd;
+	int mfd = -1;

 	if (variant->file)
 		buf = memfd_mmap(buf_size, prot, MAP_SHARED, &mfd);
 	else
 		buf = mmap(0, buf_size, prot, self->mmap_flags, -1, 0);
 	ASSERT_NE(MAP_FAILED, buf);
+	if (variant->file)
+		ASSERT_GT(mfd, 0);
 	check_refs(buf, buf_size, 0);

 	/*
On Sun, Jun 15, 2025 at 10:02:06PM -0700, Nicolin Chen wrote:
Commit 869c788909b9 ("selftests: harness: Stop using setjmp()/longjmp()") changed the harness structure. For some unknown reason, two build warnings occur in the iommufd selftest:
iommufd.c: In function ‘wrapper_iommufd_mock_domain_all_aligns’:
iommufd.c:1807:17: warning: ‘mfd’ may be used uninitialized in this function
 1807 |                 close(mfd);
      |                 ^~~~~~~~~~
iommufd.c:1767:13: note: ‘mfd’ was declared here
 1767 |         int mfd;
      |             ^~~
iommufd.c: In function ‘wrapper_iommufd_mock_domain_all_aligns_copy’:
iommufd.c:1870:17: warning: ‘mfd’ may be used uninitialized in this function
 1870 |                 close(mfd);
      |                 ^~~~~~~~~~
iommufd.c:1819:13: note: ‘mfd’ was declared here
 1819 |         int mfd;
      |             ^~~
All the mfd uses are in the variant->file path only, so it's likely a false alarm.
FWIW, the commit mentioned above does not cause this, yet it might affect gcc in a certain way that results in the warnings. It was also found that adding a dummy setjmp() (which doesn't make sense) could mute the warnings: https://lore.kernel.org/all/aEi8DV+ReF3v3Rlf@nvidia.com/
The job of this selftest is to catch kernel bugs, and such warnings are unlikely to disrupt that role. Mute the warnings by force-initializing the mfd and adding an ASSERT_GT().
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>

 tools/testing/selftests/iommu/iommufd.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason