From: Dom Cobley <popcornmix(a)gmail.com>
[ Upstream commit 1698ecb218eb82587dbfc71a2e26ded66e5ecf59 ]
The symptom is random switching of speakers when using multichannel audio.
Repeatedly running speaker-test -c8 occasionally starts with the channels
jumbled. This is fixed by setting HD_CTL_WHOLSMP.
The other bit looks beneficial and appears harmless in testing, so I'd
suggest adding it too.
Documentation says: HD_CTL_WHILSMP_SET
Wait for whole sample. When this bit is set MAI transmit will start
only when there is at least one whole sample available in the fifo.
Documentation says: HD_CTL_CHALIGN_SET
Channel Align When Overflow. This bit is used to realign the audio
channels in case of an overflow.
If this bit is set, after the detection of an overflow, equal
amount of dummy words to the missing words will be written to fifo,
filling up the broken sample and maintaining alignment.
Signed-off-by: Dom Cobley <popcornmix(a)gmail.com>
Signed-off-by: Maxime Ripard <maxime(a)cerno.tech>
Reviewed-by: Nicolas Saenz Julienne <nsaenz(a)kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20210525132354.297468-7-maxim…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/vc4/vc4_hdmi.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
index c58b8840090a..ee293f061f0a 100644
--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
+++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
@@ -1074,7 +1074,9 @@ static int vc4_hdmi_audio_trigger(struct snd_pcm_substream *substream, int cmd,
HDMI_WRITE(HDMI_MAI_CTL,
VC4_SET_FIELD(vc4_hdmi->audio.channels,
VC4_HD_MAI_CTL_CHNUM) |
- VC4_HD_MAI_CTL_ENABLE);
+ VC4_HD_MAI_CTL_WHOLSMP |
+ VC4_HD_MAI_CTL_CHALIGN |
+ VC4_HD_MAI_CTL_ENABLE);
break;
case SNDRV_PCM_TRIGGER_STOP:
HDMI_WRITE(HDMI_MAI_CTL,
--
2.30.2
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 868ad33bfa3bf39960982682ad3a0f8ebda1656e
Gitweb: https://git.kernel.org/tip/868ad33bfa3bf39960982682ad3a0f8ebda1656e
Author: Thomas Gleixner <tglx(a)linutronix.de>
AuthorDate: Sat, 28 Aug 2021 15:55:52 +02:00
Committer: Peter Zijlstra <peterz(a)infradead.org>
CommitterDate: Thu, 09 Sep 2021 11:27:23 +02:00
sched: Prevent balance_push() on remote runqueues
sched_setscheduler() and rt_mutex_setprio() invoke the run-queue balance
callback after changing priorities or the scheduling class of a task. The
run-queue for which the callback is invoked can be local or remote.
That's not a problem for the regular rq::push_work, which is serialized with
a busy flag in the run-queue struct, but for the balance_push() work, which
is only valid to be invoked on the outgoing CPU, it is wrong. It not only
triggers the debug warning, but also leaves the per-CPU variable push_work
unprotected, which can result in double enqueues on the stop-machine list.
Remove the warning and validate that the function is invoked on the
outgoing CPU.
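As a rough, self-contained analog (hypothetical names; sched_getcpu() stands
in for the scheduler's this_rq()/smp_processor_id() check), the rule being
enforced looks like this:
  /* Compile with: cc balance_sketch.c */
  #define _GNU_SOURCE
  #include <sched.h>        /* sched_getcpu() */
  #include <stdbool.h>
  #include <stdio.h>

  struct rq { int cpu; bool dying; };

  static void balance_push_sketch(struct rq *rq)
  {
      /* Only act while going offline and when invoked on the outgoing CPU;
       * a caller holding a remote run-queue must simply bail out. */
      if (!rq->dying || rq->cpu != sched_getcpu())
          return;
      printf("pushing tasks off CPU %d\n", rq->cpu);
  }

  int main(void)
  {
      struct rq remote = { .cpu = sched_getcpu() + 1, .dying = true };
      struct rq local  = { .cpu = sched_getcpu(),     .dying = true };

      balance_push_sketch(&remote);   /* remote rq: silently ignored */
      balance_push_sketch(&local);    /* outgoing CPU: does the work */
      return 0;
  }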
Fixes: ae7927023243 ("sched: Optimize finish_lock_switch()")
Reported-by: Sebastian Siewior <bigeasy(a)linutronix.de>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/87zgt1hdw7.ffs@tglx
---
kernel/sched/core.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f3b27c6..b21a185 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8523,7 +8523,6 @@ static void balance_push(struct rq *rq)
struct task_struct *push_task = rq->curr;
lockdep_assert_rq_held(rq);
- SCHED_WARN_ON(rq->cpu != smp_processor_id());
/*
* Ensure the thing is persistent until balance_push_set(.on = false);
@@ -8531,9 +8530,10 @@ static void balance_push(struct rq *rq)
rq->balance_callback = &balance_push_callback;
/*
- * Only active while going offline.
+ * Only active while going offline and when invoked on the outgoing
+ * CPU.
*/
- if (!cpu_dying(rq->cpu))
+ if (!cpu_dying(rq->cpu) || rq != this_rq())
return;
/*
From: yanghui <yanghui.def(a)bytedance.com>
Subject: mm/mempolicy: fix a race between offset_il_node and mpol_rebind_task
Our servers hit the panic below:
Kernel version: 5.4.56
BUG: unable to handle page fault for address: 0000000000002c48
RIP: 0010:__next_zones_zonelist+0x1d/0x40
[264003.977696] RAX: 0000000000002c40 RBX: 0000000000100dca RCX: 0000000000000014
[264003.977872] Call Trace:
[264003.977888] __alloc_pages_nodemask+0x277/0x310
[264003.977908] alloc_page_interleave+0x13/0x70
[264003.977926] handle_mm_fault+0xf99/0x1390
[264003.977951] __do_page_fault+0x288/0x500
[264003.977979] ? schedule+0x39/0xa0
[264003.977994] do_page_fault+0x30/0x110
[264003.978010] page_fault+0x3e/0x50
The reason for the panic is that MAX_NUMNODES is passed as the third
parameter (preferred_nid) to __alloc_pages_nodemask(), so the access to
zonelist->zoneref->zone_idx in __next_zones_zonelist() faults.
In offset_il_node(), first_node() returns a nid from pol->v.nodes; after
that, other threads may change pol->v.nodes before next_node() is called.
This race lets next_node() return MAX_NUMNODES. So put pol->nodes in a
local variable.
The race condition is between offset_il_node and cpuset_change_task_nodemask:
CPU0:                                     CPU1:
alloc_pages_vma()
  interleave_nid(pol,)
    offset_il_node(pol,)
      first_node(pol->v.nodes)            cpuset_change_task_nodemask
      //nodes==0xc                          mpol_rebind_task
                                              mpol_rebind_policy
                                                mpol_rebind_nodemask(pol,nodes)
                                                //nodes==0x3
      next_node(nid, pol->v.nodes)//return MAX_NUMNODES
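To illustrate the race in a self-contained way (hypothetical names, not
kernel code; the rebind is deliberately forced between the two reads so the
interleaving is deterministic): re-reading a shared node mask between the
first_node()-style and next_node()-style steps can walk past the last set
bit, while snapshotting the mask into a local first cannot.
  /* Compile with: cc -pthread race_sketch.c */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  #define MAX_NODES 16

  static _Atomic unsigned long shared_mask = 0xc;       /* nodes {2,3} */

  /* first set bit at or after 'start', or MAX_NODES if none */
  static int next_node_of(unsigned long mask, int start)
  {
      for (int n = start; n < MAX_NODES; n++)
          if (mask & (1UL << n))
              return n;
      return MAX_NODES;
  }

  static void *rebind(void *arg)
  {
      (void)arg;
      atomic_store(&shared_mask, 0x3);                  /* now nodes {0,1} */
      return NULL;
  }

  int main(void)
  {
      pthread_t t;

      /* Racy walk: each step re-reads the shared mask. */
      int nid = next_node_of(atomic_load(&shared_mask), 0);        /* 2 */
      pthread_create(&t, NULL, rebind, NULL);
      pthread_join(t, NULL);
      nid = next_node_of(atomic_load(&shared_mask), nid + 1);      /* 16 */
      printf("racy walk ended at %d (out of range)\n", nid);

      /* Fixed walk: snapshot the mask once, then iterate the copy. */
      unsigned long local = atomic_load(&shared_mask);
      nid = next_node_of(local, 0);
      nid = next_node_of(local, nid + 1);
      printf("snapshot walk ended at %d (in range)\n", nid);
      return 0;
  }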
Link: https://lkml.kernel.org/r/20210906034658.48721-1-yanghui.def@bytedance.com
Signed-off-by: yanghui <yanghui.def(a)bytedance.com>
Reviewed-by: Muchun Song <songmuchun(a)bytedance.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/mempolicy.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
--- a/mm/mempolicy.c~mm-mempolicy-fix-a-race-between-offset_il_node-and-mpol_rebind_task
+++ a/mm/mempolicy.c
@@ -1876,17 +1876,26 @@ unsigned int mempolicy_slab_node(void)
*/
static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
{
- unsigned nnodes = nodes_weight(pol->nodes);
- unsigned target;
+ nodemask_t nodemask = pol->nodes;
+ unsigned int target, nnodes;
int i;
int nid;
+ /*
+ * The barrier will stabilize the nodemask in a register or on
+ * the stack so that it will stop changing under the code.
+ *
+ * Between first_node() and next_node(), pol->nodes could be changed
+ * by other threads. So we put pol->nodes in a local stack.
+ */
+ barrier();
+ nnodes = nodes_weight(nodemask);
if (!nnodes)
return numa_node_id();
target = (unsigned int)n % nnodes;
- nid = first_node(pol->nodes);
+ nid = first_node(nodemask);
for (i = 0; i < target; i++)
- nid = next_node(nid, pol->nodes);
+ nid = next_node(nid, nodemask);
return nid;
}
_
From: Miaohe Lin <linmiaohe(a)huawei.com>
Subject: mm/page_alloc.c: avoid accessing uninitialized pcp page migratetype
If free_unref_page_prepare() fails for a page, the pcp page migratetype is
left unset. We then get rubbish from get_pcppage_migratetype() and might
list_del() &page->lru again after it has already been deleted from the list,
leading to complaints about data corruption.
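A minimal userspace sketch of the control flow (hypothetical names, not the
mm code): when preparation fails, the entry is unlinked and the rest of the
loop body is skipped, since that body depends on per-entry state that only a
successful prepare() sets.
  #include <stdbool.h>
  #include <stdio.h>

  struct item {
      struct item *prev, *next;
      int type;                 /* only valid after prepare() succeeds */
  };

  static bool prepare(struct item *it, int type_if_ok, bool ok)
  {
      if (!ok)
          return false;         /* it->type is left uninitialized */
      it->type = type_if_ok;
      return true;
  }

  int main(void)
  {
      struct item a = {0}, b = {0}, head = { &b, &a };
      a.prev = &head; a.next = &b;
      b.prev = &a;    b.next = &head;

      for (struct item *it = head.next, *n = it->next;
           it != &head; it = n, n = it->next) {
          bool ok = (it == &a);          /* pretend b fails to prepare */

          if (!prepare(it, 42, ok)) {
              /* unlink and skip the rest of the body */
              it->prev->next = it->next;
              it->next->prev = it->prev;
              continue;                  /* the patch adds exactly this */
          }
          printf("freeing item of type %d\n", it->type);
      }
      return 0;
  }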
Link: https://lkml.kernel.org/r/20210902115447.57050-1-linmiaohe@huawei.com
Fixes: df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock")
Signed-off-by: Miaohe Lin <linmiaohe(a)huawei.com>
Acked-by: Mel Gorman <mgorman(a)techsingularity.net>
Acked-by: Vlastimil Babka <vbabka(a)suse.cz>
Reviewed-by: David Hildenbrand <david(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/page_alloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/mm/page_alloc.c~mm-page_allocc-avoid-accessing-uninitialized-pcp-page-migratetype
+++ a/mm/page_alloc.c
@@ -3428,8 +3428,10 @@ void free_unref_page_list(struct list_he
/* Prepare pages for freeing */
list_for_each_entry_safe(page, next, list, lru) {
pfn = page_to_pfn(page);
- if (!free_unref_page_prepare(page, pfn, 0))
+ if (!free_unref_page_prepare(page, pfn, 0)) {
list_del(&page->lru);
+ continue;
+ }
/*
* Free isolated pages directly to the allocator, see
_
From: Rik van Riel <riel(a)surriel.com>
Subject: mm,vmscan: fix divide by zero in get_scan_count
Changeset f56ce412a59d ("mm: memcontrol: fix occasional OOMs due to
proportional memory.low reclaim") introduced a divide by zero corner case
when oomd is being used in combination with cgroup memory.low protection.
When oomd decides to kill a cgroup, it will force the cgroup memory to be
reclaimed after killing the tasks, by writing to the memory.max file for
that cgroup, forcing the remaining page cache and reclaimable slab to be
reclaimed down to zero.
Previously, on cgroups with some memory.low protection, that write would
reclaim memory only down to the memory.low limit, or likely not at all, with
the page cache being reclaimed asynchronously later.
With f56ce412a59d the oomd write to memory.max tries to reclaim all the
way down to zero, which may race with another reclaimer, to the point of
ending up with the divide by zero below.
This patch implements the obvious fix.
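A small arithmetic sketch of the corner case and the fix (hypothetical
function name, not the kernel code): when both protection and cgroup_size
have raced down to zero, the unpatched division faults; biasing the divisor
by one keeps it non-zero while barely changing the result for realistic
sizes.
  #include <stdio.h>

  /* mirrors the scan-target formula in get_scan_count() */
  static unsigned long scan_target(unsigned long lruvec_size,
                                   unsigned long protection,
                                   unsigned long cgroup_size)
  {
      if (cgroup_size < protection)
          cgroup_size = protection;
      /* patched divisor: cgroup_size + 1, never zero */
      return lruvec_size - lruvec_size * protection / (cgroup_size + 1);
  }

  int main(void)
  {
      /* racy corner case: cgroup already reclaimed down to zero */
      printf("%lu\n", scan_target(0, 0, 0));           /* 0, no fault */
      /* typical case: half of the cgroup usage is protected */
      printf("%lu\n", scan_target(1000, 512, 1024));   /* ~501 */
      return 0;
  }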
Link: https://lkml.kernel.org/r/20210826220149.058089c6@imladris.surriel.com
Fixes: f56ce412a59d ("mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim")
Signed-off-by: Rik van Riel <riel(a)surriel.com>
Acked-by: Roman Gushchin <guro(a)fb.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Acked-by: Chris Down <chris(a)chrisdown.name>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/vmscan.c~mmvmscan-fix-divide-by-zero-in-get_scan_count
+++ a/mm/vmscan.c
@@ -2715,7 +2715,7 @@ out:
cgroup_size = max(cgroup_size, protection);
scan = lruvec_size - lruvec_size * protection /
- cgroup_size;
+ (cgroup_size + 1);
/*
* Minimally target SWAP_CLUSTER_MAX pages to keep
_
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Subject: mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
We noticed that one of the rpma examples started failing[1] after commit
36f30e486d; it uses the ODP feature to do an RDMA WRITE between fsdax
files.
After digging into the code, we found that hmm_vma_handle_pte() still
returns -EFAULT even though all of the requested flags (pfn_req_flags)
have been fulfilled. That's because a DAX page is marked as
(_PAGE_SPECIAL | _PAGE_DEVMAP) by pte_mkdevmap().
[1]: https://github.com/pmem/rpma/issues/1142
Link: https://lkml.kernel.org/r/20210830094232.203029-1-lizhijian@cn.fujitsu.com
Fixes: 405506274922 ("mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling")
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Jason Gunthorpe <jgg(a)nvidia.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/hmm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/mm/hmm.c~mm-hmm-bypass-devmap-pte-when-all-pfn-requested-flags-are-fulfilled
+++ a/mm/hmm.c
@@ -295,10 +295,13 @@ static int hmm_vma_handle_pte(struct mm_
goto fault;
/*
+ * Bypass devmap pte such as DAX page when all pfn requested
+ * flags(pfn_req_flags) are fulfilled.
* Since each architecture defines a struct page for the zero page, just
* fall through and treat it like a normal page.
*/
- if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
+ if (pte_special(pte) && !pte_devmap(pte) &&
+ !is_zero_pfn(pte_pfn(pte))) {
if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
pte_unmap(ptep);
return -EFAULT;
_
From: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Subject: mm, hwpoison: add is_free_buddy_page() in HWPoisonHandlable()
Commit fcc00621d88b ("mm/hwpoison: retry with shake_page() for unhandlable
pages") changed the return value of __get_hwpoison_page() to retry for
transiently unhandlable cases. However, __get_hwpoison_page() currently
fails to properly judge buddy pages as handlable, so hard/soft offlining of
buddy pages always fails as "unhandlable page". This is totally regrettable.
So let's add is_free_buddy_page() to HWPoisonHandlable(), so that
__get_hwpoison_page() returns different values for buddy pages and for
truly unhandlable pages, as intended.
Link: https://lkml.kernel.org/r/20210909004131.163221-1-naoya.horiguchi@linux.dev
Fixes: fcc00621d88b ("mm/hwpoison: retry with shake_page() for unhandlable pages")
Signed-off-by: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: Oscar Salvador <osalvador(a)suse.de>
Cc: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Yang Shi <shy828301(a)gmail.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/memory-failure.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/memory-failure.c~mm-hwpoison-add-is_free_buddy_page-in-hwpoisonhandlable
+++ a/mm/memory-failure.c
@@ -1126,7 +1126,7 @@ static int page_action(struct page_state
*/
static inline bool HWPoisonHandlable(struct page *page)
{
- return PageLRU(page) || __PageMovable(page);
+ return PageLRU(page) || __PageMovable(page) || is_free_buddy_page(page);
}
static int __get_hwpoison_page(struct page *page)
_