From: Tero Kristo <t-kristo(a)ti.com>
[ Upstream commit 98ece19f247159a51003796ede7112fef2df5d7f ]
The reset handling APIs for omap-prm can be invoked from PM runtime,
which runs in atomic context. For this to work properly, switch to the
atomic iopoll version instead of the current one, which can sleep.
Otherwise, this throws a "BUG: scheduling while atomic" warning. The
issue is seen rather easily when CONFIG_PREEMPT is enabled.
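For background, the two helpers differ roughly as below (a simplified
sketch of the iopoll.h pattern, not the exact macro bodies):

/* Sleeping variant: schedules between reads, so it must not be
 * used from atomic context. */
static int poll_rstst(void __iomem *addr, int st_bit, u64 timeout_us)
{
	ktime_t timeout = ktime_add_us(ktime_get(), timeout_us);
	u32 v;

	for (;;) {
		v = readl_relaxed(addr);
		if (v & BIT(st_bit))
			return 0;
		if (ktime_compare(ktime_get(), timeout) > 0)
			return -ETIMEDOUT;
		usleep_range(1, 2);	/* may sleep: triggers the BUG */
	}
}

The _atomic variant busy-waits with udelay() in place of
usleep_range(), which is safe from the PM runtime atomic path.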
Signed-off-by: Tero Kristo <t-kristo(a)ti.com>
Acked-by: Santosh Shilimkar <ssantosh(a)kernel.org>
Signed-off-by: Tony Lindgren <tony(a)atomide.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/soc/ti/omap_prm.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
index 96c6f777519c0..c9b3f9ebf0bbf 100644
--- a/drivers/soc/ti/omap_prm.c
+++ b/drivers/soc/ti/omap_prm.c
@@ -256,10 +256,10 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
goto exit;
/* wait for the status to be set */
- ret = readl_relaxed_poll_timeout(reset->prm->base +
- reset->prm->data->rstst,
- v, v & BIT(st_bit), 1,
- OMAP_RESET_MAX_WAIT);
+ ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
+ reset->prm->data->rstst,
+ v, v & BIT(st_bit), 1,
+ OMAP_RESET_MAX_WAIT);
if (ret)
pr_err("%s: timedout waiting for %s:%lu\n", __func__,
reset->prm->data->name, id);
--
2.25.1
The patch titled
Subject: mm/cma.c: use exact_nid true to fix possible per-numa cma leak
has been removed from the -mm tree. Its filename was
mm-cmac-use-exact_nid-true-to-fix-possible-per-numa-cma-leak.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Barry Song <song.bao.hua(a)hisilicon.com>
Subject: mm/cma.c: use exact_nid true to fix possible per-numa cma leak
Calling cma_declare_contiguous_nid() with exact_nid=false for per-numa
reservation can easily cause a cma leak and various confusion. For
example, mm/hugetlb.c tries to reserve per-numa cma for gigantic pages,
but it can easily leak cma and confuse users when the system has
memoryless nodes.
In case the system has 4 numa nodes and only numa node0 has memory: if
we set hugetlb_cma=4G in bootargs, mm/hugetlb.c will get 4 cma areas for
the 4 different numa nodes. Since exact_nid=false in the current code,
all 4 numa nodes will get cma successfully from node0, but
hugetlb_cma[1] to hugetlb_cma[3] will never be available to hugepages,
since mm/hugetlb.c will only allocate memory from hugetlb_cma[0].
In case the system has 4 numa nodes where only numa nodes 0 and 2 have
memory: if we set hugetlb_cma=4G in bootargs, mm/hugetlb.c will get 4
cma areas for the 4 different numa nodes. Since exact_nid=false in the
current code, all 4 numa nodes will get cma successfully from node0 or
node2, but hugetlb_cma[1] and [3] will never be available to hugepages,
as mm/hugetlb.c will only allocate memory from hugetlb_cma[0] and
hugetlb_cma[2]. This causes a permanent leak of the cma areas which are
supposed to be used by the memoryless nodes.
Of course we could work around the issue by letting mm/hugetlb.c scan
all cma areas in alloc_gigantic_page() even when node_mask includes
node0 only; that means, when node_mask includes node0 only, we could get
pages from hugetlb_cma[1] to hugetlb_cma[3]. But this would cause a
kernel crash in free_gigantic_page() when it frees the page by:
cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)
On the other hand, exact_nid=false does not consider numa distance, so
it might not be that useful to leverage cma areas on remote nodes. I
feel it is much simpler to make exact_nid true to make everything clear.
After that, memoryless nodes won't be able to reserve per-numa CMA from
other nodes which have memory.
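The relevant behaviour, as I understand memblock_alloc_range_nid() (a
simplified sketch, not the exact mm/memblock.c code):

/* Sketch: with exact_nid == false, a failed node-local allocation
 * silently falls back to any node, which is how a memoryless node
 * ends up with its "per-numa" CMA carved out of node0. */
static phys_addr_t alloc_range_nid_sketch(phys_addr_t size,
					  phys_addr_t align,
					  phys_addr_t start,
					  phys_addr_t end,
					  int nid, bool exact_nid)
{
	phys_addr_t found;

	found = memblock_find_in_range_node(size, align, start, end,
					    nid, MEMBLOCK_NONE);
	if (!found && !exact_nid)
		found = memblock_find_in_range_node(size, align, start,
						    end, NUMA_NO_NODE,
						    MEMBLOCK_NONE);
	return found;
}

With exact_nid=true the fallback is skipped, so the reservation either
lands on the requested node or fails visibly.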
Link: http://lkml.kernel.org/r/20200628074345.27228-1-song.bao.hua@hisilicon.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song <song.bao.hua(a)hisilicon.com>
Acked-by: Roman Gushchin <guro(a)fb.com>
Cc: Jonathan Cameron <Jonathan.Cameron(a)huawei.com>
Cc: Aslan Bakirov <aslan(a)fb.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: Andreas Schaufler <andreas.schaufler(a)gmx.de>
Cc: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: Rik van Riel <riel(a)surriel.com>
Cc: Joonsoo Kim <js1304(a)gmail.com>
Cc: Robin Murphy <robin.murphy(a)arm.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/cma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/mm/cma.c~mm-cmac-use-exact_nid-true-to-fix-possible-per-numa-cma-leak
+++ a/mm/cma.c
@@ -339,13 +339,13 @@ int __init cma_declare_contiguous_nid(ph
*/
if (base < highmem_start && limit > highmem_start) {
addr = memblock_alloc_range_nid(size, alignment,
- highmem_start, limit, nid, false);
+ highmem_start, limit, nid, true);
limit = highmem_start;
}
if (!addr) {
addr = memblock_alloc_range_nid(size, alignment, base,
- limit, nid, false);
+ limit, nid, true);
if (!addr) {
ret = -ENOMEM;
goto err;
_
Patches currently in -mm which might be from song.bao.hua(a)hisilicon.com are
mm-hugetlb-avoid-hardcoding-while-checking-if-cma-is-enable.patch
mm-cma-fix-the-name-of-cma-areas.patch
mm-hugetlb-fix-the-name-of-hugetlb-cma.patch
The patch titled
Subject: mm/hugetlb.c: fix pages per hugetlb calculation
has been removed from the -mm tree. Its filename was
hugetlb-fix-pages-per-hugetlb-calculation.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Mike Kravetz <mike.kravetz(a)oracle.com>
Subject: mm/hugetlb.c: fix pages per hugetlb calculation
The routine hpage_nr_pages() was incorrectly used to calculate the number
of base pages in a hugetlb page. hpage_nr_pages is designed to be called
for THP pages and will return HPAGE_PMD_NR for hugetlb pages of any size.
Due to the context in which hpage_nr_pages was called, it is unlikely to
produce a user-visible error. The routine with the incorrect call is
only exercised in the case of hugetlb memory error or migration. In
addition, this would need to be on an architecture which supports huge
page sizes less than PMD_SIZE. And the vma containing the huge page
would also need to be smaller than PMD_SIZE.
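The difference between the two helpers, roughly as they were defined at
the time (a sketch from memory, not verbatim):

/* THP helper: any compound head page, hugetlb included, reports
 * HPAGE_PMD_NR regardless of its real size. */
static inline int hpage_nr_pages(struct page *page)
{
	if (unlikely(PageTransHuge(page)))
		return HPAGE_PMD_NR;
	return 1;
}

/* hstate-based helper: correct for every hugetlb size, including
 * those smaller than PMD_SIZE. */
static inline unsigned long pages_per_huge_page(struct hstate *h)
{
	return 1UL << huge_page_order(h);
}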
Link: http://lkml.kernel.org/r/20200629185003.97202-1-mike.kravetz@oracle.com
Fixes: c0d0381ade79 ("hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization")
Signed-off-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Reported-by: Matthew Wilcox (Oracle) <willy(a)infradead.org>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: "Kirill A . Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/hugetlb.c~hugetlb-fix-pages-per-hugetlb-calculation
+++ a/mm/hugetlb.c
@@ -1593,7 +1593,7 @@ static struct address_space *_get_hugetl
/* Use first found vma */
pgoff_start = page_to_pgoff(hpage);
- pgoff_end = pgoff_start + hpage_nr_pages(hpage) - 1;
+ pgoff_end = pgoff_start + pages_per_huge_page(page_hstate(hpage)) - 1;
anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
pgoff_start, pgoff_end) {
struct vm_area_struct *vma = avc->vma;
_
Patches currently in -mm which might be from mike.kravetz(a)oracle.com are
hugetlbfs-prevent-filesystem-stacking-of-hugetlbfs.patch
The patch titled
Subject: umh: fix refcount underflow in fork_usermode_blob().
has been removed from the -mm tree. Its filename was
umh-fix-refcount-underflow-in-fork_usermode_blob.patch
This patch was dropped because an alternative patch was merged
------------------------------------------------------
From: Tetsuo Handa <penguin-kernel(a)i-love.sakura.ne.jp>
Subject: umh: fix refcount underflow in fork_usermode_blob().
Since free_bprm(bprm) always calls allow_write_access(bprm->file) and
fput(bprm->file) if bprm->file is set to non-NULL, __do_execve_file()
must call deny_write_access(file) and get_file(file) when called from
the do_execve_file() path. Otherwise, a use-after-free access can
happen at fput(file) in fork_usermode_blob().
general protection fault, probably for non-canonical address 0x6b6b6b6b6b6b6b6b: 0000 [#1] SMP DEBUG_PAGEALLOC
CPU: 3 PID: 4131 Comm: insmod Tainted: G O 5.6.0-rc5+ #978
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/29/2019
RIP: 0010:fork_usermode_blob+0xaa/0x190
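My reading of the failing sequence (a rough call-flow sketch; the exact
ordering is an assumption based on the report):

fork_usermode_blob()
  -> do_execve_file(file, ...)	/* passes an already-opened file */
     -> __do_execve_file()	/* takes no reference of its own */
        -> free_bprm(bprm)
           -> allow_write_access(bprm->file)
           -> fput(bprm->file)	/* drops the caller's only reference */
  -> fput(file)			/* use-after-free */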
Link: http://lkml.kernel.org/r/9b846b1f-a231-4f09-8c37-6bfb0d1e7b05@i-love.sakura…
Signed-off-by: Tetsuo Handa <penguin-kernel(a)I-love.SAKURA.ne.jp>
Fixes: 449325b52b7a6208 ("umh: introduce fork_usermode_blob() helper")
Cc: Alexei Starovoitov <ast(a)kernel.org>
Cc: David S. Miller <davem(a)davemloft.net>
Cc: Alexander Viro <viro(a)zeniv.linux.org.uk>
Cc: "Eric W. Biederman" <ebiederm(a)xmission.com>
Cc: <stable(a)vger.kernel.org> [4.18+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
fs/exec.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
--- a/fs/exec.c~umh-fix-refcount-underflow-in-fork_usermode_blob
+++ a/fs/exec.c
@@ -1868,11 +1868,17 @@ static int __do_execve_file(int fd, stru
check_unsafe_exec(bprm);
current->in_execve = 1;
- if (!file)
+ if (!file) {
file = do_open_execat(fd, filename, flags);
- retval = PTR_ERR(file);
- if (IS_ERR(file))
- goto out_unmark;
+ retval = PTR_ERR(file);
+ if (IS_ERR(file))
+ goto out_unmark;
+ } else {
+ retval = deny_write_access(file);
+ if (retval)
+ goto out_unmark;
+ get_file(file);
+ }
sched_exec();
_
Patches currently in -mm which might be from penguin-kernel(a)i-love.sakura.ne.jp are
kernel-hung_taskc-monitor-killed-tasks.patch
stable team,
Please consider commit 774911290c58 ("KVM: s390: reduce number of IO pins to 1")
for stable. This can avoid OOM killer activity on highly fragmented memory
even when swap and memory are available.
We decided too late that this is stable material, so sorry for not marking it
correctly.
Christian
On 07.07.20 at 20:35, Chris Wilson wrote:
> Quoting lepton (2020-07-07 19:17:51)
>> On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris(a)chris-wilson.co.uk> wrote:
>>> Quoting lepton (2020-07-07 18:05:21)
>>>> On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris(a)chris-wilson.co.uk> wrote:
>>>>> If we assign obj->filp, we believe that the created vgem bo is native and
>>>>> allow direct operations like mmap(), assuming it behaves as if backed by a
>>>>> shmemfs inode. When imported from a dmabuf, the obj->pages are
>>>>> not always meaningful and the shmemfs backing store is misleading.
>>>>>
>>>>> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
>>>>> and that rejects attempts to mmap an imported dmabuf,
>>>> What do you mean by "regular mmap access" here? It looks like vgem is
>>>> using vgem_gem_dumb_map as the .dumb_map_offset callback, so it doesn't
>>>> call drm_gem_dumb_map_offset.
>>> As I too found out, and so had to correct my story telling.
>>>
>>> By regular mmap() access I mean mmap on the vgem bo [via the dumb buffer
>>> API] as opposed to mmap() via an exported dma-buf fd. I had to look at
>>> igt to see how it was being used.
>> Now it seems your fix is to disable "regular mmap" on imported dma-bufs
>> for vgem. I am not really a graphics guy, but the API then looks like:
>> for a gem handle, user space has to guess to find out the way to mmap
>> it. If user space guesses wrong, it will fail to mmap. Is this the
>> expected way for people to handle GPU buffers?
> You either have a dumb buffer handle, or a dma-buf fd. If you have the
> handle, you have to use the dumb buffer API, there's no other way to
> mmap it. If you have the dma-buf fd, you should mmap it directly. Those
> two are clear.
>
> It's when you import the dma-buf into vgem and create a handle out of
> it, that's when the handle is no longer first class and certain uAPI
> [the dumb buffer API in particular] fail.
>
> It's not brilliant, as you say, it requires the user to remember the
> difference between the handles, but at the same time it does prevent
> them falling into coherency traps by forcing them to use the right
> driver to handle the object, and have to consider the additional ioctls
> that go along with that access.
Yes, Chris is right. Mapping a DMA-buf through the mmap() APIs of an
importer is illegal.
What we could maybe try is to redirect this mmap() API call on the
importer to the exporter, but I'm pretty sure that the fs layer wouldn't
like that without changes.
Regards,
Christian.
> -Chris
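For reference, the two mmap paths under discussion look roughly like
this from userspace (a minimal sketch; drm_fd, handle, size and
dmabuf_fd are assumed to already exist):

/* Path 1: dumb buffer handle, mapped through the DRM fd */
struct drm_mode_map_dumb arg = { .handle = handle };
ioctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &arg);
void *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		 drm_fd, arg.offset);

/* Path 2: dma-buf fd, mapped directly; the exporting driver
 * services the mapping and keeps the coherency rules in its hands */
void *map2 = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		  dmabuf_fd, 0);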
There appears to be a timing issue where using a divider of 32 breaks
the DSS on OMAP36xx, despite the TRM stating that 32 is a valid value.
Through experimentation, it appears that 31 works.
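Conceptually, fck_div_max just caps the divider search, so lowering the
cap keeps the driver from ever programming the broken value (a
hypothetical sketch, not the actual dss.c logic):

for (div = 1; div <= feats->fck_div_max; div++) {
	fck = parent_rate / div;
	if (fck <= max_dss_fck)
		break;	/* div == 32 can no longer be reached */
}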
This same fix was issued for kernels 4.5+. However, between
kernels 4.4 and 4.5, the directory structure changed when the
dss directory was moved inside the omapfb directory. That broke the
patch on kernels older than 4.5, because it no longer applied
cleanly to 4.4 and older.
A similar patch was applied to the 3.16 kernel already, but not to 4.4.
Commit 4b911101a5cd ("drm/omap: fix max fclk divider for omap36xx") is
on the 3.16 stable branch with notes from Ben about the path change.
Since this was applied for 3.16 already, this patch is for kernels
3.17 through 4.4 only.
Fixes: f7018c213502 ("video: move fbdev to drivers/video/fbdev")
Cc: <stable(a)vger.kernel.org> #3.17 - 4.4
CC: <tomi.valkeinen(a)ti.com>
Signed-off-by: Adam Ford <aford173(a)gmail.com>
diff --git a/drivers/video/fbdev/omap2/dss/dss.c b/drivers/video/fbdev/omap2/dss/dss.c
index 9200a8668b49..a57c3a5f4bf8 100644
--- a/drivers/video/fbdev/omap2/dss/dss.c
+++ b/drivers/video/fbdev/omap2/dss/dss.c
@@ -843,7 +843,7 @@ static const struct dss_features omap34xx_dss_feats = {
};
static const struct dss_features omap3630_dss_feats = {
- .fck_div_max = 32,
+ .fck_div_max = 31,
.dss_fck_multiplier = 1,
.parent_clk_name = "dpll4_ck",
.dpi_select_source = &dss_dpi_select_source_omap2_omap3,
--
2.25.1