The patch titled
Subject: padata: honor the caller's alignment in case of chunk_size 0
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
padata-honor-the-callers-alignment-in-case-of-chunk_size-0.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Kamlesh Gurudasani <kamlesh(a)ti.com>
Subject: padata: honor the caller's alignment in case of chunk_size 0
Date: Thu, 22 Aug 2024 02:32:52 +0530
In the case where we force ps.chunk_size to be at least 1, we are
ignoring the caller's alignment.

Move the forcing of ps.chunk_size to at least 1 before rounding it up
to the caller's alignment, so that the alignment is honored.

While at it, use max() to force ps.chunk_size to at least 1, which
improves readability.
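To illustrate the ordering (a minimal userspace sketch; ROUNDUP() and MAX() below are stand-ins for the kernel's roundup() and max() helpers, and the values are made up): for a caller passing size = 0, min_chunk = 0, and align = 8, the old order produces an unaligned chunk size of 1, while the new order produces 8:

	#include <stdio.h>

	/* Stand-ins for the kernel's roundup()/max() helpers. */
	#define ROUNDUP(x, y)	((((x) + (y) - 1) / (y)) * (y))
	#define MAX(a, b)	((a) > (b) ? (a) : (b))

	int main(void)
	{
		unsigned long size = 0, min_chunk = 0, align = 8;

		/* Old order: round up first, then force to 1 -> alignment lost. */
		unsigned long old_chunk = ROUNDUP(MAX(size, min_chunk), align);
		if (!old_chunk)
			old_chunk = 1;	/* 1 is not a multiple of align == 8 */

		/* New order: force to at least 1 first, then round up. */
		unsigned long new_chunk = ROUNDUP(MAX(MAX(size, min_chunk), 1ul), align);

		printf("old=%lu new=%lu\n", old_chunk, new_chunk);	/* old=1 new=8 */
		return 0;
	}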
Link: https://lkml.kernel.org/r/20240822-max-v1-1-cb4bc5b1c101@ti.com
Fixes: 6d45e1c948a8 ("padata: Fix possible divide-by-0 panic in padata_mt_helper()")
Signed-off-by: Kamlesh Gurudasani <kamlesh(a)ti.com>
Cc: Daniel Jordan <daniel.m.jordan(a)oracle.com>
Cc: Herbert Xu <herbert(a)gondor.apana.org.au>
Cc: Steffen Klassert <steffen.klassert(a)secunet.com>
Cc: Waiman Long <longman(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
kernel/padata.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
--- a/kernel/padata.c~padata-honor-the-callers-alignment-in-case-of-chunk_size-0
+++ a/kernel/padata.c
@@ -509,21 +509,17 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
 
 	/*
 	 * Chunk size is the amount of work a helper does per call to the
-	 * thread function.  Load balance large jobs between threads by
+	 * thread function. Load balance large jobs between threads by
 	 * increasing the number of chunks, guarantee at least the minimum
 	 * chunk size from the caller, and honor the caller's alignment.
+	 * Ensure chunk_size is at least 1 to prevent divide-by-0
+	 * panic in padata_mt_helper().
 	 */
 	ps.chunk_size = job->size / (ps.nworks * load_balance_factor);
 	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
+	ps.chunk_size = max(ps.chunk_size, 1ul);
 	ps.chunk_size = roundup(ps.chunk_size, job->align);
-	/*
-	 * chunk_size can be 0 if the caller sets min_chunk to 0.  So force it
-	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().
-	 */
-	if (!ps.chunk_size)
-		ps.chunk_size = 1U;
-
 	list_for_each_entry(pw, &works, pw_list)
 		if (job->numa_aware) {
 			int old_node = atomic_read(&last_used_nid);
_
Patches currently in -mm which might be from kamlesh(a)ti.com are
padata-honor-the-callers-alignment-in-case-of-chunk_size-0.patch
The patch titled
Subject: Revert "mm: skip CMA pages when they are not available"
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
revert-mm-skip-cma-pages-when-they-are-not-available.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Usama Arif <usamaarif642(a)gmail.com>
Subject: Revert "mm: skip CMA pages when they are not available"
Date: Wed, 21 Aug 2024 20:26:07 +0100
This reverts commit 5da226dbfce3a2f44978c2c7cf88166e69a6788b.
lruvec->lru_lock is highly contended and is held when calling
isolate_lru_folios. If the lru has a large number of consecutive CMA
folios while the allocation type requested is not MIGRATE_MOVABLE,
isolate_lru_folios can hold the lock for a very long time while it skips
them. For an FIO workload, ~150 million order-0 folios were skipped to
isolate a few ZONE_DMA folios [1]. This can cause lockups [1] and high
memory pressure for extended periods of time [2].
[1] https://lore.kernel.org/all/CAOUHufbkhMZYz20aM_3rHZ3OcK4m2puji2FGpUpn_-DevG…
[2] https://lore.kernel.org/all/ZrssOrcJIDy8hacI@gmail.com/
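To illustrate the shape of the problem (a standalone sketch; the item/list types below are made-up stand-ins, not the kernel's folio or lruvec structures): the lock is held across the whole scan, so a long consecutive run of skippable folios keeps every other lru_lock waiter blocked even though nothing is being isolated:

	#include <pthread.h>
	#include <stdbool.h>

	struct item { struct item *next; bool is_cma; };

	static int isolate(struct item *lru, pthread_mutex_t *lru_lock,
			   struct item **dst, int nr_to_scan)
	{
		int isolated = 0;

		pthread_mutex_lock(lru_lock);
		for (struct item *it = lru; it && nr_to_scan-- > 0; it = it->next) {
			if (it->is_cma)
				continue;	/* skipped, but lru_lock stays held */
			dst[isolated++] = it;
		}
		pthread_mutex_unlock(lru_lock);
		return isolated;
	}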
Link: https://lkml.kernel.org/r/9060a32d-b2d7-48c0-8626-1db535653c54@gmail.com
Fixes: 5da226dbfce3 ("mm: skip CMA pages when they are not available")
Signed-off-by: Usama Arif <usamaarif642(a)gmail.com>
Cc: Bharata B Rao <bharata(a)amd.com>
Cc: Breno Leitao <leitao(a)debian.org>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Yu Zhao <yuzhao(a)google.com>
Cc: Zhaoyang Huang <huangzhaoyang(a)gmail.com>
Cc: Zhaoyang Huang <zhaoyang.huang(a)unisoc.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/vmscan.c | 41 ++++++++++++++++++++---------------------
1 file changed, 20 insertions(+), 21 deletions(-)
--- a/mm/vmscan.c~revert-mm-skip-cma-pages-when-they-are-not-available
+++ a/mm/vmscan.c
@@ -1604,25 +1604,6 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 }
 
-#ifdef CONFIG_CMA
-/*
- * It is waste of effort to scan and reclaim CMA pages if it is not available
- * for current allocation context. Kswapd can not be enrolled as it can not
- * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL
- */
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return !current_is_kswapd() &&
-			gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
-			folio_migratetype(folio) == MIGRATE_CMA;
-}
-#else
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return false;
-}
-#endif
-
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -1669,8 +1650,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx ||
-		    skip_cma(folio, sc)) {
+		if (folio_zonenum(folio) > sc->reclaim_idx) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
@@ -4273,6 +4253,25 @@ void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid)
 
 #endif /* CONFIG_MEMCG */
 
+#ifdef CONFIG_CMA
+/*
+ * It is waste of effort to scan and reclaim CMA pages if it is not available
+ * for current allocation context. Kswapd can not be enrolled as it can not
+ * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL
+ */
+static bool skip_cma(struct folio *folio, struct scan_control *sc)
+{
+	return !current_is_kswapd() &&
+			gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
+			folio_migratetype(folio) == MIGRATE_CMA;
+}
+#else
+static bool skip_cma(struct folio *folio, struct scan_control *sc)
+{
+	return false;
+}
+#endif
+
 /******************************************************************************
  *                          the eviction
  ******************************************************************************/
_
Patches currently in -mm which might be from usamaarif642(a)gmail.com are
revert-mm-skip-cma-pages-when-they-are-not-available.patch
[Public]
Thank you, Jiri, for your feedback.
I've dropped this patch from DC v.3.2.297.
We will follow up on this separately and merge it after you confirm that the issue you reported is fixed.
Thanks,
Roman
> -----Original Message-----
> From: Jiri Slaby <jirislaby(a)kernel.org>
> Sent: Monday, August 19, 2024 4:37 AM
> To: Li, Roman <Roman.Li(a)amd.com>; amd-gfx(a)lists.freedesktop.org
> Cc: Wentland, Harry <Harry.Wentland(a)amd.com>; Li, Sun peng (Leo)
> <Sunpeng.Li(a)amd.com>; Siqueira, Rodrigo <Rodrigo.Siqueira(a)amd.com>;
> Pillai, Aurabindo <Aurabindo.Pillai(a)amd.com>; Lin, Wayne
> <Wayne.Lin(a)amd.com>; Gutierrez, Agustin <Agustin.Gutierrez(a)amd.com>;
> Chung, ChiaHsuan (Tom) <ChiaHsuan.Chung(a)amd.com>; Zuo, Jerry
> <Jerry.Zuo(a)amd.com>; Mohamed, Zaeem <Zaeem.Mohamed(a)amd.com>
> Subject: Re: RE: [PATCH 12/13] drm/amd/display: Fix a typo in revert commit
>
> On 16. 08. 24, 21:30, Li, Roman wrote:
> > [Public]
> >
> > Will update the commit message as:
> >
> > -------------
> > drm/amd/display: Fix MST DSC lightup
> >
> > [Why]
> > Secondary monitor does not come up due to MST DSC bw calculation
> regression.
>
> This patch is only related to this. It does not fix that issue on its own at all.
>
> > [How]
> > Fix bug in try_disable_dsc()
>
> This update is worse than the original, IMO.
>
> Could you write saner commit logs in the whole amdgpu overall?
>
> If you insist on those [why] and [how] parts, something like:
> """
> [Why]
> The linked commit below misreverted one hunk in try_disable_dsc().
>
> [How]
> Fix that by using proper (original) 'max_kbps' instead of bogus 'stream_kbps'.
> ""
>
> > Fixes: 4b6564cb120c ("drm/amd/display: Fix MST BW calculation
> > Regression")
> >
> > Cc: mario.limonciello(a)amd.com
> > Cc: alexander.deucher(a)amd.com
> > Cc: stable(a)vger.kernel.org
> > Reported-by: jirislaby(a)kernel.org
>
> Care to fix up your machinery so that listed people are really CCed? I received a
> copy of neither the original (4b6564cb120c), nor this one.
>
> Nor any mentions in the linked #3495 at all.
>
> I would have told you that 4b6564cb120c is bogus. Immediately when it hit
> me as it differs from our (SUSE) in-tree revert in exactly this hunk. If I had
> known about this in the first place...
>
> And you would have received a Tested-by if it had worked.
>
> Given all the above, amdgpu workflow appears to be very ill. Please fix it.
>
> > Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3495
> > Closes: https://bugzilla.suse.com/show_bug.cgi?id=1228093
> > Reviewed-by: Roman Li <roman.li(a)amd.com>
> > Signed-off-by: Fangzhi Zuo <Jerry.Zuo(a)amd.com>
> > Signed-off-by: Roman Li <roman.li(a)amd.com>
> > Tested-by: Daniel Wheeler <daniel.wheeler(a)amd.com>
> >
> >
> >> -----Original Message-----
> >> From: Roman.Li(a)amd.com <Roman.Li(a)amd.com>
> >> Sent: Thursday, August 15, 2024 6:45 PM
> >> To: amd-gfx(a)lists.freedesktop.org
> >> Cc: Wentland, Harry <Harry.Wentland(a)amd.com>; Li, Sun peng (Leo)
> >> <Sunpeng.Li(a)amd.com>; Siqueira, Rodrigo <Rodrigo.Siqueira(a)amd.com>;
> >> Pillai, Aurabindo <Aurabindo.Pillai(a)amd.com>; Li, Roman
> >> <Roman.Li(a)amd.com>; Lin, Wayne <Wayne.Lin(a)amd.com>; Gutierrez,
> >> Agustin <Agustin.Gutierrez(a)amd.com>; Chung, ChiaHsuan (Tom)
> >> <ChiaHsuan.Chung(a)amd.com>; Zuo, Jerry <Jerry.Zuo(a)amd.com>;
> Mohamed,
> >> Zaeem <Zaeem.Mohamed(a)amd.com>; Zuo, Jerry <Jerry.Zuo(a)amd.com>
> >> Subject: [PATCH 12/13] drm/amd/display: Fix a typo in revert commit
> >>
> >> From: Fangzhi Zuo <Jerry.Zuo(a)amd.com>
> >>
> >> A typo is fixed for "drm/amd/display: Fix MST BW calculation Regression"
> >>
> >> Fixes: 4b6564cb120c ("drm/amd/display: Fix MST BW calculation
> >> Regression")
> >>
> >> Reviewed-by: Roman Li <roman.li(a)amd.com>
> >> Signed-off-by: Fangzhi Zuo <Jerry.Zuo(a)amd.com>
> >> Signed-off-by: Roman Li <roman.li(a)amd.com>
> >> ---
> >>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> >> index 958fad0d5307..5e08ca700c3f 100644
> >> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> >> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> >> @@ -1066,7 +1066,7 @@ static int try_disable_dsc(struct drm_atomic_state *state,
> >>  			vars[next_index].dsc_enabled = false;
> >>  			vars[next_index].bpp_x16 = 0;
> >>  		} else {
> >> -			vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps, fec_overhead_multiplier_x1000);
> >> +			vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps, fec_overhead_multiplier_x1000);
> >>  			ret = drm_dp_atomic_find_time_slots(state,
> >>  							    params[next_index].port->mgr,
> >>  							    params[next_index].port,
>
>
> thanks,
> --
> js
> suse labs
When gfs2_fill_super() fails, destroy_workqueue() is called within
gfs2_gl_hash_clear(), and the subsequent code path calls
destroy_workqueue() on the same work queue again.

This issue can be fixed by setting the work queue pointer to NULL after
the first destroy_workqueue() call and checking for a NULL pointer
before attempting to destroy the work queue again.
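A standalone sketch of the destroy-and-NULL idiom this fix applies (the types and function names below are illustrative stand-ins, not the gfs2 or workqueue API):

	#include <stdlib.h>

	struct workqueue { int unused; };
	struct sbd { struct workqueue *sd_glock_wq; };

	static void destroy_workqueue(struct workqueue *wq)
	{
		free(wq);	/* calling this twice on one pointer is a double free */
	}

	/* First teardown path: destroy the queue, then clear the pointer. */
	static void gl_hash_clear(struct sbd *sdp)
	{
		destroy_workqueue(sdp->sd_glock_wq);
		sdp->sd_glock_wq = NULL;
	}

	/* Later error path: now safe even if gl_hash_clear() already ran. */
	static void fill_super_error_path(struct sbd *sdp)
	{
		if (sdp->sd_glock_wq)
			destroy_workqueue(sdp->sd_glock_wq);
	}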
Reported-by: syzbot+d34c2a269ed512c531b0(a)syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=d34c2a269ed512c531b0
Fixes: 30e388d57367 ("gfs2: Switch to a per-filesystem glock workqueue")
Cc: stable(a)vger.kernel.org
Signed-off-by: Julian Sun <sunjunchao2870(a)gmail.com>
---
fs/gfs2/glock.c | 1 +
fs/gfs2/ops_fstype.c | 3 ++-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 32991cb22023..5838039d78e3 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -2273,6 +2273,7 @@ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
 	gfs2_free_dead_glocks(sdp);
 	glock_hash_walk(dump_glock_func, sdp);
 	destroy_workqueue(sdp->sd_glock_wq);
+	sdp->sd_glock_wq = NULL;
 }
 
 static const char *state2str(unsigned state)
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 0561edd6cc86..5c0e1b24d6ec 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -1308,7 +1308,8 @@ static int gfs2_fill_super(struct super_block *sb, struct fs_context *fc)
 fail_delete_wq:
 	destroy_workqueue(sdp->sd_delete_wq);
 fail_glock_wq:
-	destroy_workqueue(sdp->sd_glock_wq);
+	if (sdp->sd_glock_wq)
+		destroy_workqueue(sdp->sd_glock_wq);
 fail_free:
 	free_sbd(sdp);
 	sb->s_fs_info = NULL;
--
2.39.2
The existing code uses min_t(ssize_t, outarg.size, XATTR_LIST_MAX) when
parsing the FUSE daemon's response to a zero-length getxattr/listxattr
request.
On 32-bit kernels, where ssize_t and outarg.size are the same size, this is
wrong: the min_t() comparison will pass through any size value that is
negative when interpreted as signed.
fuse_listxattr() will then return this userspace-supplied negative value,
which callers will treat as an error value.
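A minimal userspace demonstration (MIN_T below is a simplified stand-in for the kernel's min_t(), and the values are illustrative): with a signed 32-bit type, a huge outarg.size reinterprets as a negative number and passes straight through the clamp:

	#include <stdint.h>
	#include <stdio.h>

	/* Simplified stand-in for the kernel's min_t() macro. */
	#define MIN_T(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
	#define XATTR_LIST_MAX 65536

	int main(void)
	{
		/* On a 32-bit kernel, ssize_t is 32 bits wide, like outarg.size. */
		uint32_t daemon_size = 0xfffffff6u;	/* reads as -10 when signed */

		int32_t clamped_signed = MIN_T(int32_t, daemon_size, XATTR_LIST_MAX);
		uint32_t clamped_unsigned = MIN_T(uint32_t, daemon_size, XATTR_LIST_MAX);

		/* signed: -10 sneaks through; unsigned: clamped to 65536 */
		printf("signed=%d unsigned=%u\n", clamped_signed, clamped_unsigned);
		return 0;
	}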
This kind of bug pattern can lead to fairly bad security bugs because of
how error codes are used in the Linux kernel. If a caller were to convert
the numeric error into an error pointer, like so:
	struct foo *func(...) {
		int len = fuse_getxattr(..., NULL, 0);
		if (len < 0)
			return ERR_PTR(len);
		...
	}
then it would end up returning this userspace-supplied negative value cast
to a pointer - but the caller of this function wouldn't recognize it as an
error pointer (IS_ERR_VALUE() only detects values in the narrow range in
which legitimate errno values lie), and so it would just be treated as a
kernel pointer.
I think there is at least one theoretical codepath where this could happen,
but that path would involve virtio-fs with submounts plus some weird
SELinux configuration, so I think it's probably not a concern in practice.
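For reference, a sketch of why only errno-range values are recognized (the macro below mirrors the shape of the kernel's IS_ERR_VALUE(), minus the unlikely() annotation):

	#include <stdio.h>

	#define MAX_ERRNO 4095
	#define IS_ERR_VALUE(x) \
		((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)

	int main(void)
	{
		long legit = -22;	/* -EINVAL: inside the last-4095 range */
		long bogus = -70000;	/* attacker-chosen negative length */

		printf("legit=%d bogus=%d\n", (int)IS_ERR_VALUE(legit),
		       (int)IS_ERR_VALUE(bogus));	/* prints legit=1 bogus=0 */
		return 0;
	}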
Cc: stable(a)vger.kernel.org
Fixes: 63401ccdb2ca ("fuse: limit xattr returned size")
Signed-off-by: Jann Horn <jannh(a)google.com>
---
fs/fuse/xattr.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/fuse/xattr.c b/fs/fuse/xattr.c
index 5b423fdbb13f..9f568d345c51 100644
--- a/fs/fuse/xattr.c
+++ b/fs/fuse/xattr.c
@@ -81,7 +81,7 @@ ssize_t fuse_getxattr(struct inode *inode, const char *name, void *value,
 	}
 	ret = fuse_simple_request(fm, &args);
 	if (!ret && !size)
-		ret = min_t(ssize_t, outarg.size, XATTR_SIZE_MAX);
+		ret = min_t(size_t, outarg.size, XATTR_SIZE_MAX);
 	if (ret == -ENOSYS) {
 		fm->fc->no_getxattr = 1;
 		ret = -EOPNOTSUPP;
@@ -143,7 +143,7 @@ ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size)
 	}
 	ret = fuse_simple_request(fm, &args);
 	if (!ret && !size)
-		ret = min_t(ssize_t, outarg.size, XATTR_LIST_MAX);
+		ret = min_t(size_t, outarg.size, XATTR_LIST_MAX);
 	if (ret > 0 && size)
 		ret = fuse_verify_xattr_list(list, ret);
 	if (ret == -ENOSYS) {
---
base-commit: b0da640826ba3b6506b4996a6b23a429235e6923
change-id: 20240819-fuse-oob-error-fix-664d082176d5
--
Jann Horn <jannh(a)google.com>
The patch below does not apply to the 6.1-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x 40b760cfd44566bca791c80e0720d70d75382b84
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable(a)vger.kernel.org>' --in-reply-to '2024081933-cheddar-oak-0777@gregkh' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
40b760cfd445 ("mm/numa: no task_numa_fault() call if PTE is changed")
d2136d749d76 ("mm: support multi-size THP numa balancing")
6b0ed7b3c775 ("mm: factor out the numa mapping rebuilding into a new helper")
ec1778807a80 ("mm: mprotect: use a folio in change_pte_range()")
6695cf68b15c ("mm: memory: use a folio in do_numa_page()")
73eab3ca481e ("mm: migrate: convert migrate_misplaced_page() to migrate_misplaced_folio()")
2ac9e99f3b21 ("mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()")
df57721f9a63 ("Merge tag 'x86_shstk_for_6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 40b760cfd44566bca791c80e0720d70d75382b84 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy(a)nvidia.com>
Date: Fri, 9 Aug 2024 10:59:04 -0400
Subject: [PATCH] mm/numa: no task_numa_fault() call if PTE is changed
When handling a numa page fault, task_numa_fault() should be called by a
process that restores the page table of the faulted folio to avoid
duplicated stats counting. Commit b99a342d4f11 ("NUMA balancing: reduce
TLB flush via delaying mapping on hint page fault") restructured
do_numa_page() and did not avoid the task_numa_fault() call in the second
page table check after a numa migration failure. Fix it by making all
!pte_same() cases return immediately.

This issue can cause task_numa_fault() to be called more than necessary
and lead to unexpected numa balancing results (it is hard to tell whether
the issue will cause a positive or negative performance impact due to the
duplicated numa fault counting).
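The control-flow change, reduced to a standalone sketch (the functions below are illustrative stand-ins for do_numa_page() and task_numa_fault(), not the kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	static int numa_fault_count;	/* stand-in for per-task numa fault stats */

	static void account_numa_fault(void) { numa_fault_count++; }

	/*
	 * Buggy shape: the !pte_same() path after a failed migration jumps to
	 * a common label that still accounts the fault, duplicating the count
	 * made by whichever task actually restored the PTE.
	 */
	static void handle_fault_buggy(bool pte_changed_after_migrate_fail)
	{
		if (pte_changed_after_migrate_fail)
			goto out;
		/* ... rebuild the mapping ... */
	out:
		account_numa_fault();
	}

	/*
	 * Fixed shape: every !pte_same() check returns immediately, so only
	 * the path that restores the page table does the accounting.
	 */
	static void handle_fault_fixed(bool pte_changed_after_migrate_fail)
	{
		if (pte_changed_after_migrate_fail)
			return;
		/* ... rebuild the mapping ... */
		account_numa_fault();
	}

	int main(void)
	{
		handle_fault_buggy(true);
		printf("buggy: %d fault(s) accounted (want 0)\n", numa_fault_count);
		numa_fault_count = 0;
		handle_fault_fixed(true);
		printf("fixed: %d fault(s) accounted\n", numa_fault_count);
		return 0;
	}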
Link: https://lkml.kernel.org/r/20240809145906.1513458-2-ziy@nvidia.com
Fixes: b99a342d4f11 ("NUMA balancing: reduce TLB flush via delaying mapping on hint page fault")
Signed-off-by: Zi Yan <ziy(a)nvidia.com>
Reported-by: "Huang, Ying" <ying.huang(a)intel.com>
Closes: https://lore.kernel.org/linux-mm/87zfqfw0yw.fsf@yhuang6-desk2.ccr.corp.inte…
Acked-by: David Hildenbrand <david(a)redhat.com>
Cc: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Mel Gorman <mgorman(a)suse.de>
Cc: Yang Shi <shy828301(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
diff --git a/mm/memory.c b/mm/memory.c
index 34f8402d2046..3c01d68065be 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5295,7 +5295,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
-		goto out;
+		return 0;
 	}
 
 	pte = pte_modify(old_pte, vma->vm_page_prot);
@@ -5358,23 +5358,19 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
 		nid = target_nid;
 		flags |= TNF_MIGRATED;
-	} else {
-		flags |= TNF_MIGRATE_FAIL;
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					       vmf->address, &vmf->ptl);
-		if (unlikely(!vmf->pte))
-			goto out;
-		if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
-			pte_unmap_unlock(vmf->pte, vmf->ptl);
-			goto out;
-		}
-		goto out_map;
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
+		return 0;
 	}
 
-out:
-	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, nr_pages, flags);
-	return 0;
+	flags |= TNF_MIGRATE_FAIL;
+	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+				       vmf->address, &vmf->ptl);
+	if (unlikely(!vmf->pte))
+		return 0;
+	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
+	}
 out_map:
 	/*
 	 * Make it present again, depending on how arch implements
@@ -5387,7 +5383,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		numa_rebuild_single_mapping(vmf, vma, vmf->address, vmf->pte,
 					    writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
-	goto out;
+
+	if (nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
+	return 0;
 }
 
 static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)