On 8/11/2025 8:57 PM, Jason Gunthorpe wrote:
> On Fri, Aug 08, 2025 at 01:15:12PM +0800, Baolu Lu wrote:
>> +static void kernel_pte_work_func(struct work_struct *work)
>> +{
>> +	struct ptdesc *ptdesc, *next;
>> +
>> +	iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);
>> +
>> +	guard(spinlock)(&kernel_pte_work.lock);
>> +	list_for_each_entry_safe(ptdesc, next, &kernel_pte_work.list, pt_list) {
>> +		list_del_init(&ptdesc->pt_list);
>> +		pagetable_dtor_free(ptdesc);
>> +	}
>> +}
> Do a list_move from kernel_pte_work.list to an on-stack list head and
> then immediately release the lock. There is no reason to hold the
> spinlock while doing the frees, and also no reason to do
> list_del_init(); that memory probably gets zeroed in
> pagetable_dtor_free() anyway.
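If I follow, something like the minimal sketch below, assuming
list_splice_init() is the right helper for detaching the whole pending
list onto an on-stack head (the lock is then held only for the splice):

static void kernel_pte_work_func(struct work_struct *work)
{
	struct ptdesc *ptdesc, *next;
	LIST_HEAD(free_list);

	iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);

	/* Hold the lock only long enough to detach the pending list. */
	spin_lock(&kernel_pte_work.lock);
	list_splice_init(&kernel_pte_work.list, &free_list);
	spin_unlock(&kernel_pte_work.lock);

	/*
	 * Free outside the lock; no list_del_init() needed since
	 * pagetable_dtor_free() releases the ptdesc anyway.
	 */
	list_for_each_entry_safe(ptdesc, next, &free_list, pt_list)
		pagetable_dtor_free(ptdesc);
}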
Yep, using guard(spinlock)() for scope-bound lock management sacrifices
fine-grained control over the protected region. It offers convenience at
the cost of precision. Out of my own bias, calling it sluggard(spinlock)()
might be more fitting.
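That said, the scoped form from <linux/cleanup.h> keeps the convenience
while bounding the critical section explicitly. A sketch, assuming the
same lock and the on-stack free_list from above:

	/* Critical section ends at the statement, not at function exit. */
	scoped_guard(spinlock, &kernel_pte_work.lock)
		list_splice_init(&kernel_pte_work.list, &free_list);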
Thanks,
Ethan
> Jason