__set_clr_pte_enc() miscalculates the physical address to operate on. The pfn is in units of PG_LEVEL_4K, not PG_LEVEL_{2M, 1G}. The shift to get the physical address should therefore be PAGE_SHIFT, not page_level_shift().
Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/mm/mem_encrypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 4b01f7dbaf30..ae78cef79980 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -262,7 +262,7 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	if (pgprot_val(old_prot) == pgprot_val(new_prot))
 		return;
 
-	pa = pfn << page_level_shift(level);
+	pa = pfn << PAGE_SHIFT;
 	size = page_level_size(level);
 
 	/*
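A minimal standalone sketch (not kernel code) of the arithmetic behind the fix, assuming the stock x86 values PAGE_SHIFT = 12 and a 2M level shift of 21, with an arbitrary example address; for a 2M mapping the old shift overstates the physical address by a factor of 2^9:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12	/* 4K pages (PG_LEVEL_4K) */
#define PMD_SHIFT	21	/* 2M pages (PG_LEVEL_2M) */

int main(void)
{
	/* pfn as the pXX_pfn() helpers return it: physical address >> PAGE_SHIFT, i.e. 4K units */
	uint64_t pfn = 0x100000000ULL >> PAGE_SHIFT;	/* example address at 4G */

	uint64_t wrong_pa = pfn << PMD_SHIFT;	/* old code for a 2M page: 512x too high */
	uint64_t right_pa = pfn << PAGE_SHIFT;	/* fixed code: recovers 0x100000000 */

	printf("wrong pa: %#llx\n", (unsigned long long)wrong_pa);
	printf("right pa: %#llx\n", (unsigned long long)right_pa);
	return 0;
}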
On Thu, Mar 18, 2021 at 01:26:57PM -0700, Isaku Yamahata wrote:
__set_clr_pte_enc() miscalculates the physical address to operate on. The pfn is in units of PG_LEVEL_4K, not PG_LEVEL_{2M, 1G}. The shift to get the physical address should therefore be PAGE_SHIFT, not page_level_shift().
Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>

 arch/x86/mm/mem_encrypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
On 3/18/21 3:26 PM, Isaku Yamahata wrote:
__set_clr_pte_enc() miscalculates the physical address to operate on. The pfn is in units of PG_LEVEL_4K, not PG_LEVEL_{2M, 1G}. The shift to get the physical address should therefore be PAGE_SHIFT, not page_level_shift().
Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
 arch/x86/mm/mem_encrypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 4b01f7dbaf30..ae78cef79980 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -262,7 +262,7 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	if (pgprot_val(old_prot) == pgprot_val(new_prot))
 		return;
 
-	pa = pfn << page_level_shift(level);
+	pa = pfn << PAGE_SHIFT;
 	size = page_level_size(level);
 
 	/*
On Mon, Mar 22, 2021 at 04:02:11PM -0500, Tom Lendacky wrote:
On 3/18/21 3:26 PM, Isaku Yamahata wrote:
__set_clr_pte_enc() miscalculates the physical address to operate on. The pfn is in units of PG_LEVEL_4K, not PG_LEVEL_{2M, 1G}. The shift to get the physical address should therefore be PAGE_SHIFT, not page_level_shift().
Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID:     8249d17d3194eac064a8ca5bc5ca0abc86feecde
Gitweb:        https://git.kernel.org/tip/8249d17d3194eac064a8ca5bc5ca0abc86feecde
Author:        Isaku Yamahata <isaku.yamahata@intel.com>
AuthorDate:    Thu, 18 Mar 2021 13:26:57 -07:00
Committer:     Borislav Petkov <bp@suse.de>
CommitterDate: Tue, 23 Mar 2021 11:59:45 +01:00
x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()
The pfn variable contains the page frame number as returned by the pXX_pfn() functions, shifted to the right by PAGE_SHIFT to remove the page bits. After page protection computations are done to it, it gets shifted back to the physical address using page_level_shift().
That is wrong, of course, because page_level_shift() picks the shift length based on the level of the page in the page table, while the pfn had been shifted by PAGE_SHIFT in all cases.
Therefore, shift it back using PAGE_SHIFT to get the correct physical address.
[ bp: Rewrite commit message. ]
Fixes: dfaaec9033b8 ("x86: Add support for changing memory encryption attribute in early boot")
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/81abbae1657053eccc535c16151f63cd049dcb97.161609829...
---
 arch/x86/mm/mem_encrypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 4b01f7d..ae78cef 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -262,7 +262,7 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
 	if (pgprot_val(old_prot) == pgprot_val(new_prot))
 		return;
 
-	pa = pfn << page_level_shift(level);
+	pa = pfn << PAGE_SHIFT;
 	size = page_level_size(level);
 
 	/*
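A hedged userspace sketch of the point made in the commit message above, namely that the pfn is always in 4K units regardless of the mapping level: sketch_pmd_pfn(), its mask and the example PMD value below are simplified stand-ins modeled on the kernel's pmd_pfn(), not the real x86 definitions.

#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12

/* simplified pmd_pfn(): strip flag bits, then the low PAGE_SHIFT bits -> pfn in 4K units */
static uint64_t sketch_pmd_pfn(uint64_t pmd_val)
{
	return (pmd_val & 0x000ffffffffff000ULL) >> PAGE_SHIFT;
}

int main(void)
{
	uint64_t pmd_val = 0x80000000c0000063ULL;	/* example 2M PMD entry: base 0xc0000000 plus flag bits */
	uint64_t pfn = sketch_pmd_pfn(pmd_val);

	/* shifting back by PAGE_SHIFT (not a 2M-sized shift) recovers the mapping's base address */
	assert((pfn << PAGE_SHIFT) == 0xc0000000ULL);
	return 0;
}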