On Sun, Jan 19, 2025 at 06:40:57PM +0800, Yi Liu wrote:
On 2025/1/19 04:32, Nicolin Chen wrote:
On Sat, Jan 18, 2025 at 04:23:22PM +0800, Yi Liu wrote:
On 2025/1/11 11:32, Nicolin Chen wrote:
+static int iommufd_hwpt_attach_device(struct iommufd_hw_pagetable *hwpt,
+				       struct iommufd_device *idev)
+{
+	struct iommufd_attach_handle *handle;
+	int rc;
+
+	if (hwpt->fault) {
+		rc = iommufd_fault_domain_attach_dev(hwpt, idev, true);
+		if (rc)
+			return rc;
+	}
+
+	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
+	if (!handle) {
+		rc = -ENOMEM;
+		goto out_fault_detach;
+	}
+
+	handle->idev = idev;
+	rc = iommu_attach_group_handle(hwpt->domain, idev->igroup->group,
+				       &handle->handle);
+	if (rc)
+		goto out_free_handle;
+
+	return 0;
+
+out_free_handle:
+	kfree(handle);
+	handle = NULL;
+out_fault_detach:
+	if (hwpt->fault)
+		iommufd_fault_domain_detach_dev(hwpt, idev, handle, true);
+	return rc;
+}
Here the revert path passes in handle=NULL (the kzalloc failure case)..
aha, got it. Perhaps we can allocate the handle first. Per the thread below, a failed domain may still have pending PRIs, which would require the caller to do the auto response. Although we are likely to swap the order, it is still nice for the caller to be able to do it.
https://lore.kernel.org/linux-iommu/f685daca-081a-4ede-b1e1-559009fa9ebc@int...
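
To make that concrete, a rough sketch of the allocate-the-handle-first ordering could look like the below (an illustration only, written against the hunk above and assuming the helper signatures stay unchanged; not a tested patch):

static int iommufd_hwpt_attach_device(struct iommufd_hw_pagetable *hwpt,
				      struct iommufd_device *idev)
{
	struct iommufd_attach_handle *handle;
	int rc;

	/* Allocate up front so the revert path never sees handle == NULL */
	handle = kzalloc(sizeof(*handle), GFP_KERNEL);
	if (!handle)
		return -ENOMEM;

	if (hwpt->fault) {
		rc = iommufd_fault_domain_attach_dev(hwpt, idev, true);
		if (rc)
			goto out_free_handle;
	}

	handle->idev = idev;
	rc = iommu_attach_group_handle(hwpt->domain, idev->igroup->group,
				       &handle->handle);
	if (rc)
		goto out_fault_detach;

	return 0;

out_fault_detach:
	if (hwpt->fault)
		iommufd_fault_domain_detach_dev(hwpt, idev, handle, true);
out_free_handle:
	kfree(handle);
	return rc;
}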
Hmm, I don't really see the point in letting the detach flow scan the two lists in hwpt->fault against a zeroed handle... that feels like a waste of CPU cycles?
And I am not sure how that xa_insert part is related?
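
For illustration, what I mean is something like the below (a hypothetical sketch, not the actual fault code; the deliver-list walk and the field names are assumptions): a handle that was just kzalloc'ed and never attached cannot be referenced by any pending iopf_group, so the whole walk is guaranteed to match nothing:

/*
 * Hypothetical sketch only: auto-respond to faults pending on @handle.
 * If @handle was freshly allocated and never attached, no group can
 * point at it, so this loop does nothing but burn cycles.
 */
static void auto_respond_faults_for_handle(struct iommufd_fault *fault,
					   struct iommufd_attach_handle *handle)
{
	struct iopf_group *group, *next;

	mutex_lock(&fault->mutex);
	list_for_each_entry_safe(group, next, &fault->deliver, node) {
		if (group->attach_handle != &handle->handle)
			continue;	/* always taken for a zeroed handle */
		list_del(&group->node);
		iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
		iopf_free_group(group);
	}
	mutex_unlock(&fault->mutex);
}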
Thanks
Nic