On Tue, Jul 01, 2025 at 10:51:20PM +0000, Pranjal Shrivastava wrote:
On Tue, Jul 01, 2025 at 03:07:57PM -0700, Nicolin Chen wrote:
On Tue, Jul 01, 2025 at 08:43:30PM +0000, Pranjal Shrivastava wrote:
On Tue, Jul 01, 2025 at 01:23:17PM -0700, Nicolin Chen wrote:
Or perhaps calling them "non-accelerated commands" would be nicer.
Uhh okay, so there'll be a separate driver in the VM issuing invalidation commands directly to the CMDQV, so we don't see any of its parts here?
That's how it works. The VM must run a guest-level VCMDQ driver that separates accelerated and non-accelerated commands, as it already does:
accelerated commands => VCMDQ (HW)
non-accelerated commands => SMMU CMDQ (SW) =iommufd=> SMMU CMDQ (HW)
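(Just to illustrate the split, here is a minimal sketch of what the guest-side decision could look like; the helper names, the struct my_vcmdq wrapper, and the exact opcode list are my assumptions for illustration, not the actual tegra241-cmdqv.c code. It leans on the arm-smmu-v3 driver's existing arm_smmu_cmdq_ent/arm_smmu_cmdq types:)

	/* Hypothetical sketch: only invalidation opcodes that the HW VCMDQ
	 * accepts count as "accelerated"; everything else does not.
	 */
	static bool guest_vcmdq_supports_cmd(struct arm_smmu_cmdq_ent *ent)
	{
		switch (ent->opcode) {
		case CMDQ_OP_TLBI_NH_ASID:
		case CMDQ_OP_TLBI_NH_VA:
			return true;
		default:
			return false;
		}
	}

	/* Accelerated commands go straight to the HW VCMDQ; the rest are
	 * issued to smmu->cmdq, which the VMM traps and forwards via iommufd.
	 */
	static struct arm_smmu_cmdq *guest_pick_cmdq(struct my_vcmdq *vcmdq,
						     struct arm_smmu_cmdq_ent *ent)
	{
		return guest_vcmdq_supports_cmd(ent) ? &vcmdq->cmdq
						     : &vcmdq->smmu->cmdq;
	}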
Right, that's exactly what got me confused. I was assuming the same CMDQV driver would run in the guest kernel, but it seems like there's another driver for the guest that's not in tree yet, or maybe it is a purely user-space thing?
It's the same tegra241-cmdqv.c in the kernel, which is already part of mainline Linux. Both host and guest run the same copy of the software. The host kernel just has the user VINTF part (via iommufd) in addition to what the guest already has.
And the weird part was that "invalidation" commands are accelerated, yet we use the .cache_invalidate viommu op for `non-invalidation` commands. But I guess what you meant there could be non-accelerated invalidation commands (maybe some stage-2 TLBIs?) that would go through the .cache_invalidate op, right?
I am talking about this: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/driv...
The commands for which that returns "false" will be issued to smmu->cmdq in a guest VM, trapped by the VMM as standard SMMU nesting, and then forwarded via iommufd to the host kernel, which will invoke this cache_invalidate op in the arm-smmu-v3 driver.
The commands for which it returns "true" will be issued to vcmdq->cmdq, which is the HW-accelerated queue (set up by the VMM via iommufd's hw_queue/mmap).
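(For reference, a rough user-space sketch of that forwarding path on the VMM side, assuming the iommufd IOMMU_HWPT_INVALIDATE uAPI with the SMMUv3 raw-command data type; the helper name is made up, and the field/constant names reflect my reading of the uAPI headers, so treat them as an assumption rather than an authoritative reference:)

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/iommufd.h>

	/* Hypothetical VMM helper: forward one trapped guest CMDQ entry
	 * (two 64-bit words) to the host kernel, which ends up invoking the
	 * arm-smmu-v3 driver's cache_invalidate op.
	 */
	static int forward_guest_cmd(int iommufd, uint32_t viommu_id,
				     uint64_t cmd_lo, uint64_t cmd_hi)
	{
		struct iommu_viommu_arm_smmuv3_invalidate ent = {
			.cmd = { cmd_lo, cmd_hi },
		};
		struct iommu_hwpt_invalidate inv = {
			.size = sizeof(inv),
			.hwpt_id = viommu_id,	/* vIOMMU object (or nested hwpt) ID */
			.data_uptr = (uintptr_t)&ent,
			.data_type = IOMMU_VIOMMU_INVALIDATE_DATA_ARM_SMMUV3,
			.entry_len = sizeof(ent),
			.entry_num = 1,
		};

		return ioctl(iommufd, IOMMU_HWPT_INVALIDATE, &inv);
	}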
Nicolin