Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
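[Editor's note: for context, a hypothetical caller sketch of what the propagated error makes visible — make_buf_ro() and the pr_warn() message are illustrative, not part of the patch:]

```c
#include <linux/set_memory.h>
#include <linux/printk.h>

/*
 * Hypothetical caller: with the patch above, a failure to change the
 * linear alias now propagates out of set_memory_ro() instead of being
 * silently swallowed.
 */
static int make_buf_ro(void *buf, int nr_pages)
{
	int err = set_memory_ro((unsigned long)buf, nr_pages);

	if (err)
		pr_warn("set_memory_ro() failed: %d\n", err);
	return err;
}
```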
On 03/11/25 11:43 AM, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had
A small nit:
Commit description needs to follow after the SHA ID (c55191e96caa).
a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Does it really need a Fixes: tag? There is no problem which is being fixed.
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
Although the change does make sense.
On 03/11/25 1:18 pm, Anshuman Khandual wrote:
On 03/11/25 11:43 AM, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had
A small nit:
Commit description needs to follow after the SHA ID (c55191e96caa).
Didn't do that for brevity's sake; it is there in the Fixes: tag.
a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Does it really need a Fixes: tag? There is no problem which is being fixed.
If an error happens in the linear map alias permission change, it will be suppressed due to the return value not being checked.
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}

Although the change does make sense.
On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
Hmm, this means we can return failure half-way through the operation. Is that something callers are expecting to handle? If so, how can they tell how far we got?
Will
On 11/3/25 7:16 AM, Will Deacon wrote:
On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}

Hmm, this means we can return failure half-way through the operation. Is that something callers are expecting to handle? If so, how can they tell how far we got?
IIUC the callers don't have to know whether it is half-way or not because the callers will change the permission back (e.g. to RW) for the whole range when freeing memory.
Thanks, Yang
Will
On 04/11/25 12:15 am, Yang Shi wrote:
On 11/3/25 7:16 AM, Will Deacon wrote:
On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
Hmm, this means we can return failure half-way through the operation. Is that something callers are expecting to handle? If so, how can they tell how far we got?
IIUC the callers don't have to know whether it is half-way or not because the callers will change the permission back (e.g. to RW) for the whole range when freeing memory.
Yes, it is the caller's responsibility to set the VM_FLUSH_RESET_PERMS flag. Upon vfree(), it will change the direct map permissions back to RW.
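[Editor's note: for illustration, a minimal caller-side sketch of that contract. set_vm_flush_reset_perms(), set_memory_ro() and vfree() are the real interfaces; the buffer, its page-aligned size, and the helper name are assumptions:]

```c
#include <linux/vmalloc.h>
#include <linux/set_memory.h>

/*
 * Minimal sketch: make a vmalloc'ed buffer read-only and ask vfree()
 * to reset the direct-map (linear alias) permissions on free.
 * Assumes size is a multiple of PAGE_SIZE.
 */
static void *alloc_ro_buf(unsigned long size)
{
	void *p = vmalloc(size);

	if (!p)
		return NULL;

	/* Sets VM_FLUSH_RESET_PERMS on the underlying vm_struct. */
	set_vm_flush_reset_perms(p);

	if (set_memory_ro((unsigned long)p, size >> PAGE_SHIFT)) {
		vfree(p);	/* resets any partial permission change */
		return NULL;
	}
	return p;
}
```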
Thanks, Yang
Will
On Tue, Nov 04, 2025 at 09:06:12AM +0530, Dev Jain wrote:
On 04/11/25 12:15 am, Yang Shi wrote:
On 11/3/25 7:16 AM, Will Deacon wrote:
On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
Hmm, this means we can return failure half-way through the operation. Is that something callers are expecting to handle? If so, how can they tell how far we got?
IIUC the callers don't have to know whether it is half-way or not because the callers will change the permission back (e.g. to RW) for the whole range when freeing memory.
Yes, it is the caller's responsibility to set the VM_FLUSH_RESET_PERMS flag. Upon vfree(), it will change the direct map permissions back to RW.
Ok, but vfree() ends up using update_range_prot() to do that and if we need to worry about that failing (as per your commit message), then we're in trouble because the calls to set_area_direct_map() are unchecked.
In other words, this patch is either not necessary or it is incomplete.
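[Editor's note: the unchecked path Will is describing has roughly this shape — a simplified sketch from memory of the vfree() path in mm/vmalloc.c, not a verbatim copy:]

```c
/*
 * Simplified sketch: on vfree() of a VM_FLUSH_RESET_PERMS area, a
 * set_direct_map_*() helper is applied to each page of the area --
 * and the helper's return value is dropped on the floor.
 */
static void set_area_direct_map(const struct vm_struct *area,
				int (*set_direct_map)(struct page *page))
{
	int i;

	for (i = 0; i < area->nr_pages; i++)
		if (page_address(area->pages[i]))
			set_direct_map(area->pages[i]);	/* return value ignored */
}
```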
Will
On 04/11/25 6:26 pm, Will Deacon wrote:
On Tue, Nov 04, 2025 at 09:06:12AM +0530, Dev Jain wrote:
On 04/11/25 12:15 am, Yang Shi wrote:
On 11/3/25 7:16 AM, Will Deacon wrote:
On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
Hmm, this means we can return failure half-way through the operation. Is that something callers are expecting to handle? If so, how can they tell how far we got?
IIUC the callers don't have to know whether it is half-way or not because the callers will change the permission back (e.g. to RW) for the whole range when freeing memory.
Yes, it is the caller's responsibility to set the VM_FLUSH_RESET_PERMS flag. Upon vfree(), it will change the direct map permissions back to RW.
Ok, but vfree() ends up using update_range_prot() to do that and if we need to worry about that failing (as per your commit message), then we're in trouble because the calls to set_area_direct_map() are unchecked.
In other words, this patch is either not necessary or it is incomplete.
I think we had concluded in the discussion of the linear map series that those calls will always succeed - I'll refresh my memory on that and get back to you later!
Will
On 04/11/25 6:26 pm, Will Deacon wrote:
On Tue, Nov 04, 2025 at 09:06:12AM +0530, Dev Jain wrote:
On 04/11/25 12:15 am, Yang Shi wrote:
On 11/3/25 7:16 AM, Will Deacon wrote:
On Mon, Nov 03, 2025 at 11:43:06AM +0530, Dev Jain wrote:
Post a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full"), __change_memory_common() has a real chance of failing due to a split failure. Before that commit, this line, introduced in c55191e96caa, already had a chance of failing if it needed to allocate pagetable memory in apply_to_page_range(), although that has never been observed in practice. In general, we should always propagate the return value to the caller.
Cc: stable@vger.kernel.org
Fixes: c55191e96caa ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well")
Signed-off-by: Dev Jain <dev.jain@arm.com>
Based on Linux 6.18-rc4.
arch/arm64/mm/pageattr.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 5135f2d66958..b4ea86cd3a71 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,6 +148,7 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
 	struct vm_struct *area;
+	int ret;
 	int i;

 	if (!PAGE_ALIGNED(addr)) {
@@ -185,8 +186,10 @@ static int change_memory_common(unsigned long addr, int numpages,
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
 		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
+			ret = __change_memory_common((u64)page_address(area->pages[i]),
 					       PAGE_SIZE, set_mask, clear_mask);
+			if (ret)
+				return ret;
 		}
 	}
Hmm, this means we can return failure half-way through the operation. Is that something callers are expecting to handle? If so, how can they tell how far we got?
IIUC the callers don't have to know whether it is half-way or not because the callers will change the permission back (e.g. to RW) for the whole range when freeing memory.
Yes, it is the caller's responsibility to set the VM_FLUSH_RESET_PERMS flag. Upon vfree(), it will change the direct map permissions back to RW.
Ok, but vfree() ends up using update_range_prot() to do that and if we need to worry about that failing (as per your commit message), then we're in trouble because the calls to set_area_direct_map() are unchecked.
In other words, this patch is either not necessary or it is incomplete.
Here is the relevant email, in the discussion between Ryan and Yang:
https://lore.kernel.org/all/fe52a1d8-5211-4962-afc8-c3f9caf64119@os.ampereco...
We had concluded that all callers of set_memory_ro() or set_memory_rox() (which require the linear map permission change back to default upon vfree()) will call it for the entire region (vm_struct). So, when we do the set_direct_map_invalid_noflush() call, it is guaranteed that the region has already been split, so the call cannot fail.
https://lore.kernel.org/all/f8898c87-8f49-4ef2-86ae-b60bcf67658c@os.ampereco...
This email notes that there is some code doing set_memory_rw() and unnecessarily setting the VM_FLUSH_RESET_PERMS flag, but in that case we don't care about the set_direct_map_invalid_noflush call failing because the protections are already RW.
We had also observed, though, that all of this is fragile and depends on the caller doing the correct thing. The real solution would be to somehow get rid of the BBM-style invalidation; Ryan had proposed some approaches in that email thread.
One solution I had thought of: observe that it is overkill to set the linear map to invalid and then back to default for the *entire* region. Instead, we can iterate over the linear map alias of the vm_struct *area and change permissions back to RW only for the pages which are *not* RW, as in the sketch below. Those mappings are guaranteed to already be split, because they were changed from RW to non-RW in the first place.
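[Editor's note: a rough, hypothetical sketch of that idea — linear_alias_needs_reset() is an invented stand-in for whatever check would detect a non-default linear-map protection; this is not a tested patch:]

```c
/*
 * Hypothetical: instead of invalidating and restoring the direct map
 * for the whole region, reset only the pages whose linear-map
 * protections were actually changed away from the RW default. Those
 * mappings must already be split to page granularity, since they were
 * split when their permissions were first changed.
 */
static void reset_changed_linear_aliases(const struct vm_struct *area)
{
	int i;

	for (i = 0; i < area->nr_pages; i++) {
		unsigned long addr = (unsigned long)page_address(area->pages[i]);

		if (linear_alias_needs_reset(area->pages[i]))	/* invented helper */
			set_memory_rw(addr, 1);
	}
}
```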
Will