The patch titled
Subject: mm/mempolicy: fix a race between offset_il_node and mpol_rebind_task
has been removed from the -mm tree. Its filename was
mm-mempolicy-fix-a-race-between-offset_il_node-and-mpol_rebind_task.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: yanghui <yanghui.def(a)bytedance.com>
Subject: mm/mempolicy: fix a race between offset_il_node and mpol_rebind_task
Our servers hit the panic below.
Kernel version: 5.4.56
BUG: unable to handle page fault for address: 0000000000002c48
RIP: 0010:__next_zones_zonelist+0x1d/0x40
[264003.977696] RAX: 0000000000002c40 RBX: 0000000000100dca RCX: 0000000000000014
[264003.977872] Call Trace:
[264003.977888] __alloc_pages_nodemask+0x277/0x310
[264003.977908] alloc_page_interleave+0x13/0x70
[264003.977926] handle_mm_fault+0xf99/0x1390
[264003.977951] __do_page_fault+0x288/0x500
[264003.977979] ? schedule+0x39/0xa0
[264003.977994] do_page_fault+0x30/0x110
[264003.978010] page_fault+0x3e/0x50
The panic happens because MAX_NUMNODES is passed as the third parameter
(preferred_nid) of __alloc_pages_nodemask(), so the access to
zonelist->zoneref->zone_idx in __next_zones_zonelist() faults.
In offset_il_node(), first_node() returns a nid from pol->v.nodes; after
that, other threads may change pol->v.nodes before next_node() runs. This
race condition can make next_node() return MAX_NUMNODES. Fix it by putting
pol->nodes in a local variable.
The race condition is between offset_il_node and cpuset_change_task_nodemask:
CPU0:                                       CPU1:
alloc_pages_vma()
  interleave_nid(pol,)
    offset_il_node(pol,)
      first_node(pol->v.nodes)              cpuset_change_task_nodemask
      //nodes==0xc                            mpol_rebind_task
                                                mpol_rebind_policy
                                                  mpol_rebind_nodemask(pol,nodes)
                                                  //nodes==0x3
      next_node(nid, pol->v.nodes) //return MAX_NUMNODES
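For reference, here is a sketch of offset_il_node() with the fix applied,
assembled from the hunk below (comments abbreviated); it is illustrative
rather than the authoritative diff:

static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
{
	nodemask_t nodemask = pol->nodes;	/* local snapshot of the mask */
	unsigned int target, nnodes;
	int i;
	int nid;

	/* Keep the snapshot stable on the stack/in registers so it cannot
	 * change between first_node() and next_node(). */
	barrier();

	nnodes = nodes_weight(nodemask);
	if (!nnodes)
		return numa_node_id();
	target = (unsigned int)n % nnodes;
	nid = first_node(nodemask);
	for (i = 0; i < target; i++)
		nid = next_node(nid, nodemask);
	return nid;
}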
Link: https://lkml.kernel.org/r/20210906034658.48721-1-yanghui.def@bytedance.com
Signed-off-by: yanghui <yanghui.def(a)bytedance.com>
Reviewed-by: Muchun Song <songmuchun(a)bytedance.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/mempolicy.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
--- a/mm/mempolicy.c~mm-mempolicy-fix-a-race-between-offset_il_node-and-mpol_rebind_task
+++ a/mm/mempolicy.c
@@ -1876,17 +1876,26 @@ unsigned int mempolicy_slab_node(void)
*/
static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
{
- unsigned nnodes = nodes_weight(pol->nodes);
- unsigned target;
+ nodemask_t nodemask = pol->nodes;
+ unsigned int target, nnodes;
int i;
int nid;
+ /*
+ * The barrier will stabilize the nodemask in a register or on
+ * the stack so that it will stop changing under the code.
+ *
+ * Between first_node() and next_node(), pol->nodes could be changed
+ * by other threads. So we put pol->nodes in a local stack.
+ */
+ barrier();
+ nnodes = nodes_weight(nodemask);
if (!nnodes)
return numa_node_id();
target = (unsigned int)n % nnodes;
- nid = first_node(pol->nodes);
+ nid = first_node(nodemask);
for (i = 0; i < target; i++)
- nid = next_node(nid, pol->nodes);
+ nid = next_node(nid, nodemask);
return nid;
}
_
Patches currently in -mm which might be from yanghui.def(a)bytedance.com are
The patch titled
Subject: mm/page_alloc.c: avoid accessing uninitialized pcp page migratetype
has been removed from the -mm tree. Its filename was
mm-page_allocc-avoid-accessing-uninitialized-pcp-page-migratetype.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Miaohe Lin <linmiaohe(a)huawei.com>
Subject: mm/page_alloc.c: avoid accessing uninitialized pcp page migratetype
If free_unref_page_prepare() fails for a page, the pcp page migratetype is
left unset. We would then get rubbish from get_pcppage_migratetype() and
might list_del() &page->lru again after it has already been deleted from
the list, leading to complaints about data corruption.
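As a rough sketch (function context assumed from the hunk below), the
prepare loop after the fix skips the rest of the iteration once a page has
been dropped from the list:

	/* Prepare pages for freeing */
	list_for_each_entry_safe(page, next, list, lru) {
		pfn = page_to_pfn(page);
		if (!free_unref_page_prepare(page, pfn, 0)) {
			/* Page is not going to a pcp list: drop it and move
			 * on so its (unset) pcp migratetype is never read. */
			list_del(&page->lru);
			continue;
		}
		/* remainder of the loop handles pages headed for pcp lists */
	}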
Link: https://lkml.kernel.org/r/20210902115447.57050-1-linmiaohe@huawei.com
Fixes: df1acc856923 ("mm/page_alloc: avoid conflating IRQs disabled with zone->lock")
Signed-off-by: Miaohe Lin <linmiaohe(a)huawei.com>
Acked-by: Mel Gorman <mgorman(a)techsingularity.net>
Acked-by: Vlastimil Babka <vbabka(a)suse.cz>
Reviewed-by: David Hildenbrand <david(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/page_alloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/mm/page_alloc.c~mm-page_allocc-avoid-accessing-uninitialized-pcp-page-migratetype
+++ a/mm/page_alloc.c
@@ -3428,8 +3428,10 @@ void free_unref_page_list(struct list_he
/* Prepare pages for freeing */
list_for_each_entry_safe(page, next, list, lru) {
pfn = page_to_pfn(page);
- if (!free_unref_page_prepare(page, pfn, 0))
+ if (!free_unref_page_prepare(page, pfn, 0)) {
list_del(&page->lru);
+ continue;
+ }
/*
* Free isolated pages directly to the allocator, see
_
Patches currently in -mm which might be from linmiaohe(a)huawei.com are
mm-memory_hotplug-make-hwpoisoned-dirty-swapcache-pages-unmovable.patch
mm-zsmallocc-close-race-window-between-zs_pool_dec_isolated-and-zs_unregister_migration.patch
mm-zsmallocc-combine-two-atomic-ops-in-zs_pool_dec_isolated.patch
The patch titled
Subject: mm,vmscan: fix divide by zero in get_scan_count
has been removed from the -mm tree. Its filename was
mmvmscan-fix-divide-by-zero-in-get_scan_count.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Rik van Riel <riel(a)surriel.com>
Subject: mm,vmscan: fix divide by zero in get_scan_count
Changeset f56ce412a59d ("mm: memcontrol: fix occasional OOMs due to
proportional memory.low reclaim") introduced a divide by zero corner case
when oomd is being used in combination with cgroup memory.low protection.
When oomd decides to kill a cgroup, it will force the cgroup memory to be
reclaimed after killing the tasks, by writing to the memory.max file for
that cgroup, forcing the remaining page cache and reclaimable slab to be
reclaimed down to zero.
Previously, on a cgroup with some memory.low protection, that write would
only reclaim the memory down to the memory.low limit, or possibly not at
all, with the remaining page cache reclaimed asynchronously later.
With f56ce412a59d the oomd write to memory.max tries to reclaim all the
way down to zero, which may race with another reclaimer, to the point of
ending up with the divide by zero below.
This patch implements the obvious fix.
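As an illustrative sketch (surrounding get_scan_count() context assumed),
the proportional-reclaim calculation with the fix applied looks like this;
adding 1 keeps the divisor nonzero when a racing reclaimer has already
driven both usage and protection to zero, while barely affecting the result
otherwise:

	cgroup_size = max(cgroup_size, protection);
	scan = lruvec_size - lruvec_size * protection /
		(cgroup_size + 1);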
Link: https://lkml.kernel.org/r/20210826220149.058089c6@imladris.surriel.com
Fixes: f56ce412a59d ("mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim")
Signed-off-by: Rik van Riel <riel(a)surriel.com>
Acked-by: Roman Gushchin <guro(a)fb.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Acked-by: Chris Down <chris(a)chrisdown.name>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/vmscan.c~mmvmscan-fix-divide-by-zero-in-get_scan_count
+++ a/mm/vmscan.c
@@ -2715,7 +2715,7 @@ out:
cgroup_size = max(cgroup_size, protection);
scan = lruvec_size - lruvec_size * protection /
- cgroup_size;
+ (cgroup_size + 1);
/*
* Minimally target SWAP_CLUSTER_MAX pages to keep
_
Patches currently in -mm which might be from riel(a)surriel.com are
The patch titled
Subject: mm/hugetlb: initialize hugetlb_usage in mm_init
has been removed from the -mm tree. Its filename was
mm-hugetlb-initialize-hugetlb_usage-in-mm_init.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Liu Zixian <liuzixian4(a)huawei.com>
Subject: mm/hugetlb: initialize hugetlb_usage in mm_init
After fork, the child process gets an incorrect (2x) hugetlb_usage. If a
process uses five 2MB hugetlb pages in an anonymous mapping,
	HugetlbPages:	   10240 kB
and then forks, the child will show
	HugetlbPages:	   20480 kB
The amount is doubled because hugetlb_usage is copied from the parent and
then increased again when we copy the page tables from parent to child, so
the child ends up with 2x the actual usage.
Fix this by adding hugetlb_count_init in mm_init.
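A rough sketch of the intent (the helper is taken from the hunks below; the
fork path is paraphrased from the changelog and simplified):

/*
 * dup_mm() copies the whole mm_struct, including hugetlb_usage, from the
 * parent; mm_init() then resets the counter so that copying the page
 * tables, which increments hugetlb_usage again for each hugetlb mapping,
 * leaves the child with the correct total instead of twice the real usage.
 */
static inline void hugetlb_count_init(struct mm_struct *mm)
{
	atomic_long_set(&mm->hugetlb_usage, 0);
}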
Link: https://lkml.kernel.org/r/20210826071742.877-1-liuzixian4@huawei.com
Fixes: 5d317b2b6536 ("mm: hugetlb: proc: add HugetlbPages field to /proc/PID/status")
Signed-off-by: Liu Zixian <liuzixian4(a)huawei.com>
Reviewed-by: Naoya Horiguchi <naoya.horiguchi(a)nec.com>
Reviewed-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/hugetlb.h | 9 +++++++++
kernel/fork.c | 1 +
2 files changed, 10 insertions(+)
--- a/include/linux/hugetlb.h~mm-hugetlb-initialize-hugetlb_usage-in-mm_init
+++ a/include/linux/hugetlb.h
@@ -858,6 +858,11 @@ static inline spinlock_t *huge_pte_lockp
void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm);
+static inline void hugetlb_count_init(struct mm_struct *mm)
+{
+ atomic_long_set(&mm->hugetlb_usage, 0);
+}
+
static inline void hugetlb_count_add(long l, struct mm_struct *mm)
{
atomic_long_add(l, &mm->hugetlb_usage);
@@ -1042,6 +1047,10 @@ static inline spinlock_t *huge_pte_lockp
return &mm->page_table_lock;
}
+static inline void hugetlb_count_init(struct mm_struct *mm)
+{
+}
+
static inline void hugetlb_report_usage(struct seq_file *f, struct mm_struct *m)
{
}
--- a/kernel/fork.c~mm-hugetlb-initialize-hugetlb_usage-in-mm_init
+++ a/kernel/fork.c
@@ -1063,6 +1063,7 @@ static struct mm_struct *mm_init(struct
mm->pmd_huge_pte = NULL;
#endif
mm_init_uprobes_state(mm);
+ hugetlb_count_init(mm);
if (current->mm) {
mm->flags = current->mm->flags & MMF_INIT_MASK;
_
Patches currently in -mm which might be from liuzixian4(a)huawei.com are
The patch titled
Subject: mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
has been removed from the -mm tree. Its filename was
mm-hmm-bypass-devmap-pte-when-all-pfn-requested-flags-are-fulfilled.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Subject: mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled
Previously, we noticed that one rpma example failed[1] since commit
36f30e486d, where it uses the ODP feature to do RDMA WRITE between fsdax
files.
After digging into the code, we found that hmm_vma_handle_pte() still
returns -EFAULT even though all of its requested pfn flags have been
fulfilled. That is because a DAX page is marked as (_PAGE_SPECIAL |
PAGE_DEVMAP) by pte_mkdevmap().
[1]: https://github.com/pmem/rpma/issues/1142
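For reference, a minimal sketch of the check after the fix (assembled from
the hunk below; the remainder of the special-pte handling is unchanged),
showing that devmap ptes such as fsdax pages now skip the fault path when
the requested flags are already satisfied:

	if (pte_special(pte) && !pte_devmap(pte) &&
	    !is_zero_pfn(pte_pfn(pte))) {
		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
			pte_unmap(ptep);
			return -EFAULT;
		}
		/* ... special-pte handling continues as before ... */
	}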
Link: https://lkml.kernel.org/r/20210830094232.203029-1-lizhijian@cn.fujitsu.com
Fixes: 405506274922 ("mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling")
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Reviewed-by: Christoph Hellwig <hch(a)lst.de>
Reviewed-by: Jason Gunthorpe <jgg(a)nvidia.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/hmm.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
--- a/mm/hmm.c~mm-hmm-bypass-devmap-pte-when-all-pfn-requested-flags-are-fulfilled
+++ a/mm/hmm.c
@@ -295,10 +295,13 @@ static int hmm_vma_handle_pte(struct mm_
goto fault;
/*
+ * Bypass devmap pte such as DAX page when all pfn requested
+ * flags(pfn_req_flags) are fulfilled.
* Since each architecture defines a struct page for the zero page, just
* fall through and treat it like a normal page.
*/
- if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
+ if (pte_special(pte) && !pte_devmap(pte) &&
+ !is_zero_pfn(pte_pfn(pte))) {
if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
pte_unmap(ptep);
return -EFAULT;
_
Patches currently in -mm which might be from lizhijian(a)cn.fujitsu.com are
The patch titled
Subject: mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range()
has been removed from the -mm tree. Its filename was
mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: David Hildenbrand <david(a)redhat.com>
Subject: mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range()
Patch series "mm/memory_hotplug: preparatory patches for new online policy and memory"
These are all cleanups and one fix previously sent as part of [1]:
[PATCH v1 00/12] mm/memory_hotplug: "auto-movable" online policy and memory
groups.
These patches make sense even without the other series, therefore I pulled
them out to make the other series easier to digest.
[1] https://lkml.kernel.org/r/20210607195430.48228-1-david@redhat.com
This patch (of 4):
Checkpatch complained on a follow-up patch that we are using "unsigned"
here, which defaults to "unsigned int", and checkpatch is correct.
If a start_pfn were ever passed that does not fit into an unsigned int, it
would be truncated and we would search for a fitting zone using the wrong
pfn. We might then end up onlining memory to one of the special kernel
zones, such as ZONE_DMA, which can end badly as the onlined memory does not
satisfy the properties of these zones.
Use "unsigned long" instead, just as we do in other places when handling
PFNs. This can bite us once we have physical addresses in the range of
multiple TB.
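A hedged, userspace-only illustration (values chosen for this example, not
taken from the patch): with 4 KiB pages, the PFN of a physical address at
16 TiB no longer fits into 32 bits, so an "unsigned" parameter silently
wraps to 0:

#include <stdio.h>

int main(void)
{
	unsigned long long phys = 1ULL << 44;	/* 16 TiB */
	unsigned long long pfn = phys >> 12;	/* 4 KiB pages: 0x100000000 */
	unsigned int truncated = (unsigned int)pfn; /* what "unsigned" keeps */

	/* prints pfn=0x100000000 truncated=0 */
	printf("pfn=%#llx truncated=%#x\n", pfn, truncated);
	return 0;
}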
Link: https://lkml.kernel.org/r/20210712124052.26491-2-david@redhat.com
Fixes: e5e689302633 ("mm, memory_hotplug: display allowed zones in the preferred ordering")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta(a)ionos.com>
Reviewed-by: Muchun Song <songmuchun(a)bytedance.com>
Reviewed-by: Oscar Salvador <osalvador(a)suse.de>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: Vitaly Kuznetsov <vkuznets(a)redhat.com>
Cc: "Michael S. Tsirkin" <mst(a)redhat.com>
Cc: Jason Wang <jasowang(a)redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux(a)gmail.com>
Cc: Wei Yang <richard.weiyang(a)linux.alibaba.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Anshuman Khandual <anshuman.khandual(a)arm.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: "Rafael J. Wysocki" <rjw(a)rjwysocki.net>
Cc: Len Brown <lenb(a)kernel.org>
Cc: Pavel Tatashin <pasha.tatashin(a)soleen.com>
Cc: Heiko Carstens <hca(a)linux.ibm.com>
Cc: Michael Ellerman <mpe(a)ellerman.id.au>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: virtualization(a)lists.linux-foundation.org
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar(a)linux.ibm.com>
Cc: Anton Blanchard <anton(a)ozlabs.org>
Cc: Ard Biesheuvel <ardb(a)kernel.org>
Cc: Baoquan He <bhe(a)redhat.com>
Cc: Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Christian Borntraeger <borntraeger(a)de.ibm.com>
Cc: Christophe Leroy <christophe.leroy(a)c-s.fr>
Cc: Dave Jiang <dave.jiang(a)intel.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: Jia He <justin.he(a)arm.com>
Cc: Joe Perches <joe(a)perches.com>
Cc: Kefeng Wang <wangkefeng.wang(a)huawei.com>
Cc: Laurent Dufour <ldufour(a)linux.ibm.com>
Cc: Michel Lespinasse <michel(a)lespinasse.org>
Cc: Nathan Lynch <nathanl(a)linux.ibm.com>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: Paul Mackerras <paulus(a)samba.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Pierre Morel <pmorel(a)linux.ibm.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki(a)intel.com>
Cc: Rich Felker <dalias(a)libc.org>
Cc: Scott Cheloha <cheloha(a)linux.ibm.com>
Cc: Sergei Trofimovich <slyfox(a)gentoo.org>
Cc: Thiago Jung Bauermann <bauerman(a)linux.ibm.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Vasily Gorbik <gor(a)linux.ibm.com>
Cc: Vishal Verma <vishal.l.verma(a)intel.com>
Cc: Will Deacon <will(a)kernel.org>
Cc: Yoshinori Sato <ysato(a)users.sourceforge.jp>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/memory_hotplug.h | 4 ++--
mm/memory_hotplug.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
--- a/include/linux/memory_hotplug.h~mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range
+++ a/include/linux/memory_hotplug.h
@@ -339,8 +339,8 @@ extern void sparse_remove_section(struct
unsigned long map_offset, struct vmem_altmap *altmap);
extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
unsigned long pnum);
-extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
- unsigned long nr_pages);
+extern struct zone *zone_for_pfn_range(int online_type, int nid,
+ unsigned long start_pfn, unsigned long nr_pages);
extern int arch_create_linear_mapping(int nid, u64 start, u64 size,
struct mhp_params *params);
void arch_remove_linear_mapping(u64 start, u64 size);
--- a/mm/memory_hotplug.c~mm-memory_hotplug-use-unsigned-long-for-pfn-in-zone_for_pfn_range
+++ a/mm/memory_hotplug.c
@@ -708,8 +708,8 @@ static inline struct zone *default_zone_
return movable_node_enabled ? movable_zone : kernel_zone;
}
-struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
- unsigned long nr_pages)
+struct zone *zone_for_pfn_range(int online_type, int nid,
+ unsigned long start_pfn, unsigned long nr_pages)
{
if (online_type == MMOP_ONLINE_KERNEL)
return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);
_
Patches currently in -mm which might be from david(a)redhat.com are
The patch titled
Subject: rapidio: avoid bogus __alloc_size warning
has been added to the -mm tree. Its filename is
rapidio-avoid-bogus-__alloc_size-warning.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/rapidio-avoid-bogus-__alloc_size-…
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/rapidio-avoid-bogus-__alloc_size-…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Kees Cook <keescook(a)chromium.org>
Subject: rapidio: avoid bogus __alloc_size warning
GCC 9.3 (but not later) incorrectly evaluates the arguments to
check_copy_size(), getting seemingly confused by the size being returned
from array_size(). Instead, perform the calculation once, which both
makes the code more readable and avoids the bug in GCC.
In file included from arch/x86/include/asm/preempt.h:7,
from include/linux/preempt.h:78,
from include/linux/spinlock.h:55,
from include/linux/mm_types.h:9,
from include/linux/buildid.h:5,
from include/linux/module.h:14,
from drivers/rapidio/devices/rio_mport_cdev.c:13:
In function 'check_copy_size',
inlined from 'copy_from_user' at include/linux/uaccess.h:191:6,
inlined from 'rio_mport_transfer_ioctl' at drivers/rapidio/devices/rio_mport_cdev.c:983:6:
include/linux/thread_info.h:213:4: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
213 | __bad_copy_to();
| ^~~~~~~~~~~~~~~
But the allocation size and the copy size are identical:
transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
if (!transfer)
return -ENOMEM;
if (unlikely(copy_from_user(transfer,
(void __user *)(uintptr_t)transaction.block,
array_size(sizeof(*transfer), transaction.count)))) {
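A minimal sketch of the workaround, taken from the hunks below: compute the
size once and reuse it for the allocation and the user copies (copy_to_user
is changed the same way further down), so GCC 9.3 never re-evaluates
array_size() inside check_copy_size():

	size_t size = array_size(sizeof(*transfer), transaction.count);

	transfer = vmalloc(size);
	if (!transfer)
		return -ENOMEM;

	if (unlikely(copy_from_user(transfer,
				    (void __user *)(uintptr_t)transaction.block,
				    size))) {
		ret = -EFAULT;
		goto out_free;
	}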
Link: https://lkml.kernel.org/r/20210909161409.2250920-1-keescook@chromium.org
Link: https://lore.kernel.org/linux-mm/202109091134.FHnRmRxu-lkp@intel.com/
Signed-off-by: Kees Cook <keescook(a)chromium.org>
Reported-by: kernel test robot <lkp(a)intel.com>
Cc: Matt Porter <mporter(a)kernel.crashing.org>
Cc: Alexandre Bounine <alex.bou9(a)gmail.com>
Cc: Jing Xiangfeng <jingxiangfeng(a)huawei.com>
Cc: Ira Weiny <ira.weiny(a)intel.com>
Cc: John Hubbard <jhubbard(a)nvidia.com>
Cc: Souptick Joarder <jrdr.linux(a)gmail.com>
Cc: Gustavo A. R. Silva <gustavoars(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
drivers/rapidio/devices/rio_mport_cdev.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
--- a/drivers/rapidio/devices/rio_mport_cdev.c~rapidio-avoid-bogus-__alloc_size-warning
+++ a/drivers/rapidio/devices/rio_mport_cdev.c
@@ -965,6 +965,7 @@ static int rio_mport_transfer_ioctl(stru
struct rio_transfer_io *transfer;
enum dma_data_direction dir;
int i, ret = 0;
+ size_t size;
if (unlikely(copy_from_user(&transaction, arg, sizeof(transaction))))
return -EFAULT;
@@ -976,13 +977,14 @@ static int rio_mport_transfer_ioctl(stru
priv->md->properties.transfer_mode) == 0)
return -ENODEV;
- transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
+ size = array_size(sizeof(*transfer), transaction.count);
+ transfer = vmalloc(size);
if (!transfer)
return -ENOMEM;
if (unlikely(copy_from_user(transfer,
(void __user *)(uintptr_t)transaction.block,
- array_size(sizeof(*transfer), transaction.count)))) {
+ size))) {
ret = -EFAULT;
goto out_free;
}
@@ -994,8 +996,7 @@ static int rio_mport_transfer_ioctl(stru
transaction.sync, dir, &transfer[i]);
if (unlikely(copy_to_user((void __user *)(uintptr_t)transaction.block,
- transfer,
- array_size(sizeof(*transfer), transaction.count))))
+ transfer, size)))
ret = -EFAULT;
out_free:
_
Patches currently in -mm which might be from keescook(a)chromium.org are
rapidio-avoid-bogus-__alloc_size-warning.patch