I could bet some money that this does not bring any significant performance gain.
On Sun, Jan 24, 2021 at 02:29:05PM +0800, Tianjia Zhang wrote:
> `section->free_cnt` represents the number of free pages in an
> sgx_epc_section and is assigned only once, after initialization. In
> fact, just after initialization completes, the pages sit on the
> `init_laundry_list` list and cannot be allocated. They first need to be
> recovered with EREMOVE in sgx_sanitize_section() before they can be
> handed out as allocatable pages. sgx_sanitize_section() is called from
> the kernel thread ksgxd.
>
> This patch moves the initialization of `section->free_cnt` from the
> initialization function `sgx_setup_epc_section()` to
> `sgx_sanitize_section()`, and then accumulates the count after the
Use single quotes instead of hyphens.
> successful execution of EREMOVE. This seems more reasonable, and
> free_cnt will then truly reflect the number of allocatable free pages
> in the EPC.
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> Reviewed-by: Sean Christopherson <seanjc@google.com>
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 4465912174fd..e455ec7b3449 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -48,6 +48,7 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
>  		if (!ret) {
>  			spin_lock(&section->lock);
>  			list_move(&page->list, &section->page_list);
> +			section->free_cnt++;
>  			spin_unlock(&section->lock);
Someone can try to allocate a page while the sanitization process is
still in progress.

I think it is better to keep critical sections in a form where, when you
leave one, the global state is legit.
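
To make that point concrete, here is a rough userspace sketch of the
invariant, with made-up names (epc_section_demo, sanitize_one, alloc_one)
and a pthread mutex standing in for the section spinlock; it is not the
kernel code, only an illustration of counting the page inside the same
critical section that publishes it:

/*
 * Sketch only: free_cnt is updated inside the same critical section that
 * updates the free list, so whenever the lock is dropped the counter
 * matches the list and a concurrent allocator never observes an
 * inconsistent state.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct epc_section_demo {
	pthread_mutex_t lock;
	size_t laundry;    /* pages still waiting for EREMOVE */
	size_t free_pages; /* stand-in for the real page_list */
	size_t free_cnt;   /* must equal free_pages whenever the lock is free */
};

/* ksgxd side: publish one sanitized page. */
static void sanitize_one(struct epc_section_demo *s)
{
	pthread_mutex_lock(&s->lock);
	if (s->laundry) {
		s->laundry--;
		s->free_pages++;
		s->free_cnt++;  /* updated together with the list */
	}
	pthread_mutex_unlock(&s->lock); /* state is consistent on exit */
}

/* allocator side: may run while sanitization is still in progress. */
static bool alloc_one(struct epc_section_demo *s)
{
	bool ok = false;

	pthread_mutex_lock(&s->lock);
	if (s->free_cnt) {      /* never larger than the actual list */
		s->free_pages--;
		s->free_cnt--;
		ok = true;
	}
	pthread_mutex_unlock(&s->lock);
	return ok;
}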
>  		} else
>  			list_move_tail(&page->list, &dirty);
> @@ -643,7 +644,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
>  		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
>  	}
>  
> -	section->free_cnt = nr_pages;
>  	return true;
>  }
> -- 
> 2.19.1.3.ge56e4f7
/Jarkko