On 7/14/25 22:50, Mike Rapoport wrote:
On Mon, Jul 14, 2025 at 03:19:17PM +0200, Uladzislau Rezki wrote:
On Mon, Jul 14, 2025 at 01:39:20PM +0100, David Laight wrote:
On Wed, 9 Jul 2025 11:22:34 -0700, Dave Hansen <dave.hansen@intel.com> wrote:
On 7/9/25 11:15, Jacob Pan wrote:
> Is there a use case where an SVA user can access kernel memory in the
> first place?

No. It should be fully blocked.
Then I don't understand what the "vulnerability condition" being addressed here is. We are talking about the KVA range here.
SVA users can't access kernel memory, but they can compel walks of kernel page tables, and the IOMMU caches the intermediate entries from those walks. The trouble starts if the kernel frees one of those page-table pages while the IOMMU is still using the stale cached entry after the page is freed and possibly reused.
That was covered in the changelog, but I guess it could be made a bit more succinct.
But does this really mean that every flush_tlb_kernel_range() should flush the IOMMU paging caches as well? AFAIU, the set_memory_*() functions flush the TLB even when only protection bits in a PTE change, and that seems like overkill...
As far as I can see, only the next-level page-table pointer in a middle-level entry matters. SVA is not allowed to access kernel addresses, which is already ensured by the U/S bit in the leaf PTEs, so changes to the other bits don't matter here.
Thanks,
baolu