From: Davidlohr Bueso <dave(a)stgolabs.net>
Subject: ipc/shm: fix shmat() nil address after round-down when remapping
shmat()'s SHM_REMAP option forbids passing a nil address; this is in
fact the very first thing we check for. Andrea reported that for
SHM_RND|SHM_REMAP cases we can end up bypassing the initial addr check,
so we need to check again whether the address was rounded down to nil.
As of this patch, such cases will return -EINVAL.
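As a hedged illustration (not part of the patch; the shmid setup and
sizes are arbitrary), a userspace sequence that now fails with EINVAL
because the SHM_RND round-down yields a nil address:

  #include <sys/ipc.h>
  #include <sys/shm.h>

  int main(void)
  {
          int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

          /* (SHMLBA - 1) is misaligned, so SHM_RND rounds it down to 0;
           * combined with SHM_REMAP this attach is now rejected. */
          void *p = shmat(shmid, (void *)(SHMLBA - 1), SHM_RND | SHM_REMAP);

          return p == (void *)-1 ? 0 : 1;
  }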
Link: http://lkml.kernel.org/r/20180503204934.kk63josdu6u53fbd@linux-n805
Signed-off-by: Davidlohr Bueso <dbueso(a)suse.de>
Reported-by: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Joe Lawrence <joe.lawrence(a)redhat.com>
Cc: Manfred Spraul <manfred(a)colorfullife.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
ipc/shm.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff -puN ipc/shm.c~ipc-shm-fix-shmat-nil-address-after-round-down-when-remapping ipc/shm.c
--- a/ipc/shm.c~ipc-shm-fix-shmat-nil-address-after-round-down-when-remapping
+++ a/ipc/shm.c
@@ -1363,9 +1363,17 @@ long do_shmat(int shmid, char __user *sh
if (addr) {
if (addr & (shmlba - 1)) {
- if (shmflg & SHM_RND)
+ if (shmflg & SHM_RND) {
addr &= ~(shmlba - 1); /* round down */
- else
+
+ /*
+ * Ensure that the round-down is non-nil
+ * when remapping. This can happen for
+ * cases when addr < shmlba.
+ */
+ if (!addr && (shmflg & SHM_REMAP))
+ goto out;
+ } else
#ifndef __ARCH_FORCE_SHMLBA
if (addr & ~PAGE_MASK)
#endif
_
From: Davidlohr Bueso <dave(a)stgolabs.net>
Subject: Revert "ipc/shm: Fix shmat mmap nil-page protection"
Patch series "ipc/shm: shmat() fixes around nil-page".
These patches fix two issues reported[1] a while back by Joe and Andrea
around how shmat(2) behaves with nil-page.
The first reverts a commit which incorrectly assumed that mapping the
nil page (address=0) with MAP_FIXED was a no-no. This is not the case,
with the exception of SHM_REMAP, which is addressed in the second patch.
I chose two patches because it is easier to backport and it explicitly
reverts the bogus behaviour. Both patches ought to go to -stable, and the
LTP testcases need to be updated (the testcase added for the CVE can be
modified to just test for SHM_RND|SHM_REMAP).
[1] lkml.kernel.org/r/20180430172152.nfa564pvgpk3ut7p@linux-n805
This patch (of 2):
95e91b831f87 ("ipc/shm: Fix shmat mmap nil-page protection") worked on the
idea that we should not be mapping addr=0 with MAP_FIXED as root. However,
it was reported that this scenario is in fact valid, so the patch is both
bogus and breaks userspace: for example, X11's libint10.so relies on
shmat(1, SHM_RND) for lowmem initialization[1].
[1] https://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/os-support/linux/…
Link: http://lkml.kernel.org/r/20180503203243.15045-2-dave@stgolabs.net
Fixes: 95e91b831f87 ("ipc/shm: Fix shmat mmap nil-page protection")
Signed-off-by: Davidlohr Bueso <dbueso(a)suse.de>
Reported-by: Joe Lawrence <joe.lawrence(a)redhat.com>
Reported-by: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Manfred Spraul <manfred(a)colorfullife.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
ipc/shm.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff -puN ipc/shm.c~revert-ipc-shm-fix-shmat-mmap-nil-page-protection ipc/shm.c
--- a/ipc/shm.c~revert-ipc-shm-fix-shmat-mmap-nil-page-protection
+++ a/ipc/shm.c
@@ -1363,13 +1363,8 @@ long do_shmat(int shmid, char __user *sh
if (addr) {
if (addr & (shmlba - 1)) {
- /*
- * Round down to the nearest multiple of shmlba.
- * For sane do_mmap_pgoff() parameters, avoid
- * round downs that trigger nil-page and MAP_FIXED.
- */
- if ((shmflg & SHM_RND) && addr >= shmlba)
- addr &= ~(shmlba - 1);
+ if (shmflg & SHM_RND)
+ addr &= ~(shmlba - 1); /* round down */
else
#ifndef __ARCH_FORCE_SHMLBA
if (addr & ~PAGE_MASK)
_
From: Matthew Wilcox <mawilcox(a)microsoft.com>
Subject: idr: fix invalid ptr dereference on item delete
If the radix tree underlying the IDR happens to be full and we attempt to
remove an id which is larger than any id in the IDR, we will call
__radix_tree_delete() with an uninitialised 'slot' pointer, at which point
anything could happen. This was easiest to hit with a single entry at id
0 and attempting to remove a non-0 id, but it could have happened with 64
entries and attempting to remove an id >= 64.
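A minimal sketch of that simplest trigger, mirroring the regression test
added below ('dummy_ptr' is just a placeholder):

  DEFINE_IDR(idr);

  /* one entry at id 0 ... */
  idr_alloc(&idr, dummy_ptr, 0, 0, GFP_KERNEL);
  /* ... then removing an id larger than any present one used to reach
   * __radix_tree_delete() with 'slot' left uninitialised */
  idr_remove(&idr, 1);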
Roman said:
The syzkaller test boils down to opening /dev/kvm, creating an
eventfd, and calling a couple of KVM ioctls. None of this requires
superuser. And the result is dereferencing an uninitialized pointer
which is likely a crash. The specific path caught by syzbot is via
the KVM_HYPERV_EVENTFD ioctl which is new in 4.17. But I guess there
are other user-triggerable paths, so cc:stable is probably justified.
Matthew added:
We have around 250 calls to idr_remove() in the kernel today. Many
of them pass an ID which is embedded in the object they're removing,
so they're safe. Picking a few likely candidates:
drivers/firewire/core-cdev.c looks unsafe; the ID comes from an ioctl.
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c is similar
drivers/atm/nicstar.c could be taken down by a handcrafted packet
Link: http://lkml.kernel.org/r/20180518175025.GD6361@bombadil.infradead.org
Fixes: 0a835c4f090a ("Reimplement IDR and IDA using the radix tree")
Reported-by: <syzbot+35666cba7f0a337e2e79(a)syzkaller.appspotmail.com>
Debugged-by: Roman Kagan <rkagan(a)virtuozzo.com>
Signed-off-by: Matthew Wilcox <mawilcox(a)microsoft.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
lib/radix-tree.c | 4 +++-
tools/testing/radix-tree/idr-test.c | 7 +++++++
2 files changed, 10 insertions(+), 1 deletion(-)
diff -puN lib/radix-tree.c~idr-fix-invalid-ptr-dereference-on-item-delete lib/radix-tree.c
--- a/lib/radix-tree.c~idr-fix-invalid-ptr-dereference-on-item-delete
+++ a/lib/radix-tree.c
@@ -2034,10 +2034,12 @@ void *radix_tree_delete_item(struct radi
unsigned long index, void *item)
{
struct radix_tree_node *node = NULL;
- void __rcu **slot;
+ void __rcu **slot = NULL;
void *entry;
entry = __radix_tree_lookup(root, index, &node, &slot);
+ if (!slot)
+ return NULL;
if (!entry && (!is_idr(root) || node_tag_get(root, node, IDR_FREE,
get_slot_offset(node, slot))))
return NULL;
diff -puN tools/testing/radix-tree/idr-test.c~idr-fix-invalid-ptr-dereference-on-item-delete tools/testing/radix-tree/idr-test.c
--- a/tools/testing/radix-tree/idr-test.c~idr-fix-invalid-ptr-dereference-on-item-delete
+++ a/tools/testing/radix-tree/idr-test.c
@@ -252,6 +252,13 @@ void idr_checks(void)
idr_remove(&idr, 3);
idr_remove(&idr, 0);
+ assert(idr_alloc(&idr, DUMMY_PTR, 0, 0, GFP_KERNEL) == 0);
+ idr_remove(&idr, 1);
+ for (i = 1; i < RADIX_TREE_MAP_SIZE; i++)
+ assert(idr_alloc(&idr, DUMMY_PTR, 0, 0, GFP_KERNEL) == i);
+ idr_remove(&idr, 1 << 30);
+ idr_destroy(&idr);
+
for (i = INT_MAX - 3UL; i < INT_MAX + 1UL; i++) {
struct item *item = item_create(i, 0);
assert(idr_alloc(&idr, item, i, i + 10, GFP_KERNEL) == i);
_
The patch titled
Subject: mm, page_alloc: do not break __GFP_THISNODE by zonelist reset
has been added to the -mm tree. Its filename is
mm-page_alloc-do-not-break-__gfp_thisnode-by-zonelist-reset.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-do-not-break-__gfp_t…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-do-not-break-__gfp_t…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Vlastimil Babka <vbabka(a)suse.cz>
Subject: mm, page_alloc: do not break __GFP_THISNODE by zonelist reset
In __alloc_pages_slowpath() we reset zonelist and preferred_zoneref for
allocations that can ignore memory policies. The zonelist is obtained
from current CPU's node. This is a problem for __GFP_THISNODE allocations
that want to allocate on a different node, e.g. because the allocating
thread has been migrated to a different CPU.
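As a hedged sketch (not from the patch), consider a node-pinned request
such as the one below: if the task has meanwhile migrated to a CPU on
another node and the slowpath takes the branch that resets the zonelist,
the zonelist is rebuilt from the current CPU's node and the page can come
from the wrong node despite __GFP_THISNODE (the nid is illustrative):

  struct page *page = alloc_pages_node(1, GFP_KERNEL | __GFP_THISNODE, 0);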
This has been observed to break SLAB in our 4.4-based kernel, because
there it relies on __GFP_THISNODE working as intended. If a slab page is
put on the wrong node's list, then further list manipulations may corrupt
the list, because page_to_nid() is used to determine which node's
list_lock should be taken, and thus we may take the wrong lock and race.
The current SLAB implementation seems to be immune by luck, thanks to
commit 511e3a058812 ("mm/slab: make cache_grow() handle the page
allocated on arbitrary node"), but there may be others assuming that
__GFP_THISNODE works as promised.
We can fix it by simply removing the zonelist reset completely. There is
actually no reason to reset it, because memory policies and cpusets don't
affect the zonelist choice in the first place. This was different when
commit 183f6371aac2 ("mm: ignore mempolicies when using
ALLOC_NO_WATERMARK") introduced the code, as mempolicies provided their
own restricted zonelists.
We might consider this for 4.17, although I don't know if there's
anything currently broken there. Stable backports should be more
important, but will have to be reviewed carefully, as the code went
through many changes. As an aside, I think the ac->preferred_zoneref
reset is also currently useless unless we also reset ac->nodemask from a
mempolicy to NULL first (which we probably should do for OOM victims
etc.), but I would leave that for a separate patch.
Link: http://lkml.kernel.org/r/20180525130853.13915-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka(a)suse.cz>
Fixes: 183f6371aac2 ("mm: ignore mempolicies when using ALLOC_NO_WATERMARK")
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/page_alloc.c | 1 -
1 file changed, 1 deletion(-)
diff -puN mm/page_alloc.c~mm-page_alloc-do-not-break-__gfp_thisnode-by-zonelist-reset mm/page_alloc.c
--- a/mm/page_alloc.c~mm-page_alloc-do-not-break-__gfp_thisnode-by-zonelist-reset
+++ a/mm/page_alloc.c
@@ -4169,7 +4169,6 @@ retry:
* orientated.
*/
if (!(alloc_flags & ALLOC_CPUSET) || reserve_flags) {
- ac->zonelist = node_zonelist(numa_node_id(), gfp_mask);
ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
ac->high_zoneidx, ac->nodemask);
}
_
Patches currently in -mm which might be from vbabka(a)suse.cz are
mm-page_alloc-do-not-break-__gfp_thisnode-by-zonelist-reset.patch
This is a note to let you know that I've just added the patch titled
phy: qcom-qusb2: Fix crash if nvmem cell not specified
to my usb git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git
in the usb-next branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will also be merged in the next major kernel release
during the merge window.
If you have any questions about this process, please let me know.
From 0b4555e776ba0712c6fafb98b226b21fd05d2427 Mon Sep 17 00:00:00 2001
From: Manu Gautam <mgautam(a)codeaurora.org>
Date: Thu, 3 May 2018 02:36:10 +0530
Subject: phy: qcom-qusb2: Fix crash if nvmem cell not specified
The driver currently crashes due to a NULL pointer dereference while
updating the PHY tune register if the nvmem cell is NULL. Since the
fused value for the Tune1/2 register is optional, we'd rather bail out.
Fixes: ca04d9d3e1b1 ("phy: qcom-qusb2: New driver for QUSB2 PHY on Qcom chips")
Reviewed-by: Vivek Gautam <vivek.gautam(a)codeaurora.org>
Reviewed-by: Evan Green <evgreen(a)chromium.org>
Cc: stable <stable(a)vger.kernel.org> # 4.14+
Signed-off-by: Manu Gautam <mgautam(a)codeaurora.org>
Signed-off-by: Kishon Vijay Abraham I <kishon(a)ti.com>
---
drivers/phy/qualcomm/phy-qcom-qusb2.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
index 94afeac1a19e..40fdef8b5b75 100644
--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
+++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
@@ -315,6 +315,10 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
const struct qusb2_phy_cfg *cfg = qphy->cfg;
u8 *val;
+ /* efuse register is optional */
+ if (!qphy->cell)
+ return;
+
/*
* Read efuse register having TUNE2/1 parameter's high nibble.
* If efuse register shows value as 0x0, or if we fail to find
--
2.17.0
ext4_resize_fs() has an off-by-one bug when checking whether growing a
filesystem will overflow the inode count. As a result, it allows a
filesystem with 8192 inodes per group to grow to 64TB, which overflows
the inode count to 0 and makes the filesystem unusable. Fix it.
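As a back-of-the-envelope check with the reported geometry (illustrative
only): with 8192 inodes per group, 0xFFFFFFFF / 8192 = 524287. The old
'>' check still allowed growing to group number 524287, i.e.
(524287 + 1) * 8192 = 2^32 inodes, which wraps the 32-bit inodes_count
to 0; the new '>=' check rejects that last group.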
CC: stable(a)vger.kernel.org
Fixes: 3f8a6411fbada1fa482276591e037f3b1adcf55b
Reported-by: Jaco Kroon <jaco(a)uls.co.za>
Signed-off-by: Jan Kara <jack(a)suse.cz>
---
fs/ext4/resize.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index b6bec270a8e4..d792b7689d92 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -1933,7 +1933,7 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
return 0;
n_group = ext4_get_group_number(sb, n_blocks_count - 1);
- if (n_group > (0xFFFFFFFFUL / EXT4_INODES_PER_GROUP(sb))) {
+ if (n_group >= (0xFFFFFFFFUL / EXT4_INODES_PER_GROUP(sb))) {
ext4_warning(sb, "resize would cause inodes_count overflow");
return -EINVAL;
}
--
2.13.6
This is a note to let you know that I've just added the patch titled
phy: qcom-qusb2: Fix crash if nvmem cell not specified
to my usb git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git
in the usb-testing branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will be merged to the usb-next branch sometime soon,
after it passes testing, and the merge window is open.
If you have any questions about this process, please let me know.
From 0b4555e776ba0712c6fafb98b226b21fd05d2427 Mon Sep 17 00:00:00 2001
From: Manu Gautam <mgautam(a)codeaurora.org>
Date: Thu, 3 May 2018 02:36:10 +0530
Subject: phy: qcom-qusb2: Fix crash if nvmem cell not specified
The driver currently crashes due to a NULL pointer dereference while
updating the PHY tune register if the nvmem cell is NULL. Since the
fused value for the Tune1/2 register is optional, we'd rather bail out.
Fixes: ca04d9d3e1b1 ("phy: qcom-qusb2: New driver for QUSB2 PHY on Qcom chips")
Reviewed-by: Vivek Gautam <vivek.gautam(a)codeaurora.org>
Reviewed-by: Evan Green <evgreen(a)chromium.org>
Cc: stable <stable(a)vger.kernel.org> # 4.14+
Signed-off-by: Manu Gautam <mgautam(a)codeaurora.org>
Signed-off-by: Kishon Vijay Abraham I <kishon(a)ti.com>
---
drivers/phy/qualcomm/phy-qcom-qusb2.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/phy/qualcomm/phy-qcom-qusb2.c b/drivers/phy/qualcomm/phy-qcom-qusb2.c
index 94afeac1a19e..40fdef8b5b75 100644
--- a/drivers/phy/qualcomm/phy-qcom-qusb2.c
+++ b/drivers/phy/qualcomm/phy-qcom-qusb2.c
@@ -315,6 +315,10 @@ static void qusb2_phy_set_tune2_param(struct qusb2_phy *qphy)
const struct qusb2_phy_cfg *cfg = qphy->cfg;
u8 *val;
+ /* efuse register is optional */
+ if (!qphy->cell)
+ return;
+
/*
* Read efuse register having TUNE2/1 parameter's high nibble.
* If efuse register shows value as 0x0, or if we fail to find
--
2.17.0
From: Chintan Pandya <cpandya(a)codeaurora.org>
The following kernel panic was observed on ARM64 platform due to a stale
TLB entry.
1. ioremap with 4K size, a valid pte page table is set.
2. iounmap it, its pte entry is set to 0.
3. ioremap the same address with 2M size, update its pmd entry with
a new value.
4. CPU may hit an exception because the old pmd entry is still in TLB,
which leads to a kernel panic.
Commit b6bdb7517c3d ("mm/vmalloc: add interfaces to free unmapped page
table") has addressed this panic by falling back to pte mappings in the
above case on ARM64.
To support pmd mappings in all cases, TLB purge needs to be performed
in this case on ARM64.
Add a new arg, 'addr', to pud_free_pmd_page() and pmd_free_pte_page()
so that the TLB purge can be added later in separate patches.
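A hedged C sketch of the sequence above (the physical address 'paddr'
and the reuse of the same virtual range are illustrative only):

  void __iomem *p;

  p = ioremap(paddr, SZ_4K);   /* 1. a valid pte page table is set    */
  iounmap(p);                  /* 2. its pte entry is set to 0        */
  p = ioremap(paddr, SZ_2M);   /* 3. pmd entry updated for a 2M mapping;
                                * 4. the CPU may still hit the old pmd
                                *    entry in the TLB                 */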
[toshi(a)hpe.com: merge changes, rewrite patch description]
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Chintan Pandya <cpandya(a)codeaurora.org>
Signed-off-by: Toshi Kani <toshi.kani(a)hpe.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: Joerg Roedel <joro(a)8bytes.org>
Cc: <stable(a)vger.kernel.org>
---
arch/arm64/mm/mmu.c | 4 ++--
arch/x86/mm/pgtable.c | 8 +++++---
include/asm-generic/pgtable.h | 8 ++++----
lib/ioremap.c | 4 ++--
4 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2dbb2c9..da98828 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -973,12 +973,12 @@ int pmd_clear_huge(pmd_t *pmdp)
return 1;
}
-int pud_free_pmd_page(pud_t *pud)
+int pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
return pud_none(*pud);
}
-int pmd_free_pte_page(pmd_t *pmd)
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
return pmd_none(*pmd);
}
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ffc8c13..37e3cba 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -718,11 +718,12 @@ int pmd_clear_huge(pmd_t *pmd)
/**
* pud_free_pmd_page - Clear pud entry and free pmd page.
* @pud: Pointer to a PUD.
+ * @addr: Virtual address associated with pud.
*
* Context: The pud range has been unmaped and TLB purged.
* Return: 1 if clearing the entry succeeded. 0 otherwise.
*/
-int pud_free_pmd_page(pud_t *pud)
+int pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
pmd_t *pmd;
int i;
@@ -733,7 +734,7 @@ int pud_free_pmd_page(pud_t *pud)
pmd = (pmd_t *)pud_page_vaddr(*pud);
for (i = 0; i < PTRS_PER_PMD; i++)
- if (!pmd_free_pte_page(&pmd[i]))
+ if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
return 0;
pud_clear(pud);
@@ -745,11 +746,12 @@ int pud_free_pmd_page(pud_t *pud)
/**
* pmd_free_pte_page - Clear pmd entry and free pte page.
* @pmd: Pointer to a PMD.
+ * @addr: Virtual address associated with pmd.
*
* Context: The pmd range has been unmaped and TLB purged.
* Return: 1 if clearing the entry succeeded. 0 otherwise.
*/
-int pmd_free_pte_page(pmd_t *pmd)
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
pte_t *pte;
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f59639a..b081794 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1019,8 +1019,8 @@ static inline int p4d_clear_huge(p4d_t *p4d)
int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
int pud_clear_huge(pud_t *pud);
int pmd_clear_huge(pmd_t *pmd);
-int pud_free_pmd_page(pud_t *pud);
-int pmd_free_pte_page(pmd_t *pmd);
+int pud_free_pmd_page(pud_t *pud, unsigned long addr);
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);
#else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
{
@@ -1046,11 +1046,11 @@ static inline int pmd_clear_huge(pmd_t *pmd)
{
return 0;
}
-static inline int pud_free_pmd_page(pud_t *pud)
+static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
return 0;
}
-static inline int pmd_free_pte_page(pmd_t *pmd)
+static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
{
return 0;
}
diff --git a/lib/ioremap.c b/lib/ioremap.c
index 54e5bba..517f585 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -92,7 +92,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
if (ioremap_pmd_enabled() &&
((next - addr) == PMD_SIZE) &&
IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
- pmd_free_pte_page(pmd)) {
+ pmd_free_pte_page(pmd, addr)) {
if (pmd_set_huge(pmd, phys_addr + addr, prot))
continue;
}
@@ -119,7 +119,7 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
if (ioremap_pud_enabled() &&
((next - addr) == PUD_SIZE) &&
IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
- pud_free_pmd_page(pud)) {
+ pud_free_pmd_page(pud, addr)) {
if (pud_set_huge(pud, phys_addr + addr, prot))
continue;
}
--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation
Center, Inc., is a member of Code Aurora Forum, a Linux Foundation
Collaborative Project
As it stands, memory_failure() gets thoroughly confused by dev_pagemap
backed mappings. The recovery code has specific enabling for several
possible page states and needs new enabling to handle poison in dax
mappings.
In order to support reliable reverse mapping of user space addresses add
new locking in the fsdax implementation to prevent races between
page-address_space disassociation events and the rmap performed in the
memory_failure() path. Additionally, since dev_pagemap pages are hidden
from the page allocator, add a mechanism to determine the size of the
mapping that encompasses a given poisoned pfn. Lastly, since pmem errors
can be repaired, change the speculatively accessed poison protection,
mce_unmap_kpfn(), to be reversible and otherwise allow ongoing access
from the kernel.
---
Dan Williams (11):
device-dax: convert to vmf_insert_mixed and vm_fault_t
device-dax: cleanup vm_fault de-reference chains
device-dax: enable page_mapping()
device-dax: set page->index
filesystem-dax: set page->index
filesystem-dax: perform __dax_invalidate_mapping_entry() under the page lock
mm, madvise_inject_error: fix page count leak
x86, memory_failure: introduce {set,clear}_mce_nospec()
mm, memory_failure: pass page size to kill_proc()
mm, memory_failure: teach memory_failure() about dev_pagemap pages
libnvdimm, pmem: restore page attributes when clearing errors
arch/x86/include/asm/set_memory.h | 29 ++++++
arch/x86/kernel/cpu/mcheck/mce-internal.h | 15 ---
arch/x86/kernel/cpu/mcheck/mce.c | 38 +-------
drivers/dax/device.c | 91 ++++++++++++--------
drivers/nvdimm/pmem.c | 26 ++++++
drivers/nvdimm/pmem.h | 13 +++
fs/dax.c | 102 ++++++++++++++++++++--
include/linux/huge_mm.h | 5 +
include/linux/set_memory.h | 14 +++
mm/huge_memory.c | 4 -
mm/madvise.c | 11 ++
mm/memory-failure.c | 133 +++++++++++++++++++++++++++--
12 files changed, 370 insertions(+), 111 deletions(-)
Add a kernel parameter that allows setting the UV memory block size.
This provides an adjustment for new forms of PMEM and other DIMM memory
that might require alignment restrictions other than those found by
scanning the global address table for the required minimum alignment.
The value set will be further adjusted by both the GAM range table scan
and the restrictions imposed by set_memory_block_size_order().
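For example, the parameter could be passed on the kernel command line as
below (the value is illustrative; it is parsed with memparse(), so the
usual size suffixes apply, and it is still rounded down as described
above):

  uv_memblksize=256m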
Signed-off-by: Mike Travis <mike.travis(a)hpe.com>
Reviewed-by: Andrew Banman <andrew.banman(a)hpe.com>
Cc: stable(a)vger.kernel.org
---
arch/x86/kernel/apic/x2apic_uv_x.c | 11 +++++++++++
1 file changed, 11 insertions(+)
--- linux.orig/arch/x86/kernel/apic/x2apic_uv_x.c
+++ linux/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -396,6 +396,17 @@ EXPORT_SYMBOL(uv_hub_info_version);
/* Default UV memory block size is 2GB */
static unsigned long mem_block_size = (2UL << 30);
+/* Kernel parameter to specify UV mem block size */
+static int parse_mem_block_size(char *ptr)
+{
+ unsigned long size = memparse(ptr, NULL);
+
+ /* Size will be rounded down by set_block_size() below */
+ mem_block_size = size;
+ return 0;
+}
+early_param("uv_memblksize", parse_mem_block_size);
+
static __init int adj_blksize(u32 lgre)
{
unsigned long base = (unsigned long)lgre << UV_GAM_RANGE_SHFT;
--
Add a call to the new function to "adjust" the current fixed UV memory
block size of 2GB so it can be changed to a different physical boundary.
This accommodates changes in the Intel BIOS, and therefore the UV BIOS,
which can now align boundaries differently than the previous UV standard
of 2GB. It also flags any UV Global Address boundaries from the BIOS
that cause a change in the memory block size (boundary).
The current boundary of 2GB has been used on UV since the first system
release in 2009 with Linux 2.6 and has worked fine. But the new NVDIMM
persistent memory modules (PMEM), along with the Intel BIOS changes to
support these modules, cause the memory block size boundary to be set to
a lower limit. Intel only guarantees this minimum boundary at 64MB,
though the current Linux limit is 128MB.
Note that the default remains 2GB if no changes occur.
Signed-off-by: Mike Travis <mike.travis(a)hpe.com>
Reviewed-by: Andrew Banman <andrew.banman(a)hpe.com>
Cc: stable(a)vger.kernel.org
---
v2: Update description
---
arch/x86/kernel/apic/x2apic_uv_x.c | 49 ++++++++++++++++++++++++++++++++++---
1 file changed, 46 insertions(+), 3 deletions(-)
--- linux.orig/arch/x86/kernel/apic/x2apic_uv_x.c
+++ linux/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -26,6 +26,7 @@
#include <linux/delay.h>
#include <linux/crash_dump.h>
#include <linux/reboot.h>
+#include <linux/memory.h>
#include <asm/uv/uv_mmrs.h>
#include <asm/uv/uv_hub.h>
@@ -392,6 +393,40 @@ extern int uv_hub_info_version(void)
}
EXPORT_SYMBOL(uv_hub_info_version);
+/* Default UV memory block size is 2GB */
+static unsigned long mem_block_size = (2UL << 30);
+
+static __init int adj_blksize(u32 lgre)
+{
+ unsigned long base = (unsigned long)lgre << UV_GAM_RANGE_SHFT;
+ unsigned long size;
+
+ for (size = mem_block_size; size > MIN_MEMORY_BLOCK_SIZE; size >>= 1)
+ if (IS_ALIGNED(base, size))
+ break;
+
+ if (size >= mem_block_size)
+ return 0;
+
+ mem_block_size = size;
+ return 1;
+}
+
+static __init void set_block_size(void)
+{
+ unsigned int order = ffs(mem_block_size);
+
+ if (order) {
+ /* adjust for ffs return of 1..64 */
+ set_memory_block_size_order(order - 1);
+ pr_info("UV: mem_block_size set to 0x%lx\n", mem_block_size);
+ } else {
+ /* bad or zero value, default to 1UL << 31 (2GB) */
+ pr_err("UV: mem_block_size error with 0x%lx\n", mem_block_size);
+ set_memory_block_size_order(31);
+ }
+}
+
/* Build GAM range lookup table: */
static __init void build_uv_gr_table(void)
{
@@ -1180,23 +1215,30 @@ static void __init decode_gam_rng_tbl(un
<< UV_GAM_RANGE_SHFT);
int order = 0;
char suffix[] = " KMGTPE";
+ int flag = ' ';
while (size > 9999 && order < sizeof(suffix)) {
size /= 1024;
order++;
}
+ /* adjust max block size to current range start */
+ if (gre->type == 1 || gre->type == 2)
+ if (adj_blksize(lgre))
+ flag = '*';
+
if (!index) {
pr_info("UV: GAM Range Table...\n");
- pr_info("UV: # %20s %14s %5s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN");
+ pr_info("UV: # %20s %14s %6s %4s %5s %3s %2s\n", "Range", "", "Size", "Type", "NASID", "SID", "PN");
}
- pr_info("UV: %2d: 0x%014lx-0x%014lx %5lu%c %3d %04x %02x %02x\n",
+ pr_info("UV: %2d: 0x%014lx-0x%014lx%c %5lu%c %3d %04x %02x %02x\n",
index++,
(unsigned long)lgre << UV_GAM_RANGE_SHFT,
(unsigned long)gre->limit << UV_GAM_RANGE_SHFT,
- size, suffix[order],
+ flag, size, suffix[order],
gre->type, gre->nasid, gre->sockid, gre->pnode);
+ /* update to next range start */
lgre = gre->limit;
if (sock_min > gre->sockid)
sock_min = gre->sockid;
@@ -1427,6 +1469,7 @@ static void __init uv_system_init_hub(vo
build_socket_tables();
build_uv_gr_table();
+ set_block_size();
uv_init_hub_info(&hub_info);
uv_possible_blades = num_possible_nodes();
if (!_node_to_pnode)
--
Add a new function to "adjust" the current fixed UV memory block size
of 2GB so it can be changed to a different physical boundary. This is
out of necessity so arch-dependent code can accommodate specific BIOS
requirements which can align these new PMEM modules at less than the
default boundaries.
A "set order" type of function was used to ensure that the memory block
size will be a power-of-two value without requiring a validity check.
64GB was chosen as the upper limit for memory block size values to
accommodate upcoming 4PB systems, which have 6 more bits of physical
address space (46 becoming 52).
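A hedged usage sketch (mirroring set_block_size() in the earlier UV
patch; 'blksize' is an illustrative power-of-two value):

  unsigned long blksize = 1UL << 28;  /* 256MB, within the allowed range */

  /* returns 0 on success, -EINVAL for sizes below MIN_MEMORY_BLOCK_SIZE
   * or above 64GB */
  if (set_memory_block_size_order(ffs(blksize) - 1))
          pr_err("invalid memory block size 0x%lx\n", blksize);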
Signed-off-by: Mike Travis <mike.travis(a)hpe.com>
Reviewed-by: Andrew Banman <andrew.banman(a)hpe.com>
Cc: stable(a)vger.kernel.org
---
v2: Update description
---
arch/x86/mm/init_64.c | 20 ++++++++++++++++----
include/linux/memory.h | 1 +
2 files changed, 17 insertions(+), 4 deletions(-)
--- linux.orig/arch/x86/mm/init_64.c
+++ linux/arch/x86/mm/init_64.c
@@ -1350,16 +1350,28 @@ int kern_addr_valid(unsigned long addr)
/* Amount of ram needed to start using large blocks */
#define MEM_SIZE_FOR_LARGE_BLOCK (64UL << 30)
+/* Adjustable memory block size */
+static unsigned long set_memory_block_size;
+int __init set_memory_block_size_order(unsigned int order)
+{
+ unsigned long size = 1UL << order;
+
+ if (size > MEM_SIZE_FOR_LARGE_BLOCK || size < MIN_MEMORY_BLOCK_SIZE)
+ return -EINVAL;
+
+ set_memory_block_size = size;
+ return 0;
+}
+
static unsigned long probe_memory_block_size(void)
{
unsigned long boot_mem_end = max_pfn << PAGE_SHIFT;
unsigned long bz;
- /* If this is UV system, always set 2G block size */
- if (is_uv_system()) {
- bz = MAX_BLOCK_SIZE;
+ /* If memory block size has been set, then use it */
+ bz = set_memory_block_size;
+ if (bz)
goto done;
- }
/* Use regular block if RAM is smaller than MEM_SIZE_FOR_LARGE_BLOCK */
if (boot_mem_end < MEM_SIZE_FOR_LARGE_BLOCK) {
--- linux.orig/include/linux/memory.h
+++ linux/include/linux/memory.h
@@ -38,6 +38,7 @@ struct memory_block {
int arch_get_memory_phys_device(unsigned long start_pfn);
unsigned long memory_block_size_bytes(void);
+int set_memory_block_size_order(unsigned int order);
/* These states are exposed to userspace as text strings in sysfs */
#define MEM_ONLINE (1<<0) /* exposed to userspace */
--
According to the reference manual, the shp_2_mcu / mcu_2_shp scripts
must be used for devices connected through the SPBA.
This fixes an issue we saw with DMA transfers: sometimes the SPI
controller RX FIFO was not empty after a DMA transfer, and the driver
got stuck in the next PIO transfer when it read one word more than
expected.
Commit dd4b487b32a35 ("ARM: dts: imx6: Use correct SDMA script for SPI
cores") fixes the same issue, but only for SPI1-4.
Fixes: 677940258dd8e ("ARM: dts: imx6q: enable dma for ecspi5")
Signed-off-by: Sean Nyekjaer <sean.nyekjaer(a)prevas.dk>
Reviewed-by: Fabio Estevam <fabio.estevam(a)nxp.com>
---
arch/arm/boot/dts/imx6q.dtsi | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
index ae7b3f107893..5185300cc11f 100644
--- a/arch/arm/boot/dts/imx6q.dtsi
+++ b/arch/arm/boot/dts/imx6q.dtsi
@@ -96,7 +96,7 @@
clocks = <&clks IMX6Q_CLK_ECSPI5>,
<&clks IMX6Q_CLK_ECSPI5>;
clock-names = "ipg", "per";
- dmas = <&sdma 11 7 1>, <&sdma 12 7 2>;
+ dmas = <&sdma 11 8 1>, <&sdma 12 8 2>;
dma-names = "rx", "tx";
status = "disabled";
};
--
2.17.0
This is the start of the stable review cycle for the 3.18.110 release.
There are 45 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Sat May 26 09:30:59 UTC 2018.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v3.x/stable-review/patch-3.18.110-r…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-3.18.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 3.18.110-rc1
Tetsuo Handa <penguin-kernel(a)I-love.SAKURA.ne.jp>
x86/kexec: Avoid double free_page() upon do_kexec_load() failure
Tetsuo Handa <penguin-kernel(a)I-love.SAKURA.ne.jp>
hfsplus: stop workqueue when fill_super() failed
Johannes Berg <johannes.berg(a)intel.com>
cfg80211: limit wiphy names to 128 bytes
Alexander Potapenko <glider(a)google.com>
scsi: sg: allocate with __GFP_ZERO in sg_build_indirect()
Jason Yan <yanaijie(a)huawei.com>
scsi: libsas: defer ata device eh commands to libata
Al Viro <viro(a)zeniv.linux.org.uk>
ext2: fix a block leak
Eric Dumazet <edumazet(a)google.com>
tcp: purge write queue in tcp_connect_init()
Willem de Bruijn <willemb(a)google.com>
net: test tailroom before appending to linear skb
Masami Hiramatsu <mhiramat(a)kernel.org>
ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions
Dexuan Cui <decui(a)microsoft.com>
tick/broadcast: Use for_each_cpu() specially on UP kernels
Masami Hiramatsu <mhiramat(a)kernel.org>
ARM: 8771/1: kprobes: Prohibit kprobes on do_undefinstr
Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
efi: Avoid potential crashes, fix the 'struct efi_pci_io_protocol_32' definition for mixed mode
Nicholas Piggin <npiggin(a)gmail.com>
powerpc/powernv: Fix NVRAM sleep in invalid context when crashing
Steven Rostedt (VMware) <rostedt(a)goodmis.org>
tracing/x86/xen: Remove zero data size trace events trace_xen_mmu_flush_tlb{_all}
Benjamin Herrenschmidt <benh(a)kernel.crashing.org>
powerpc: Don't preempt_disable() in show_cpuinfo()
Wenwen Wang <wang6495(a)umn.edu>
ALSA: control: fix a redundant-copy issue
Federico Cuello <fedux(a)fedux.com.ar>
ALSA: usb: mixer: volume quirk for CM102-A+/102S+
Shuah Khan (Samsung OSG) <shuah(a)kernel.org>
usbip: usbip_host: fix bad unlock balance during stub_probe()
Shuah Khan (Samsung OSG) <shuah(a)kernel.org>
usbip: usbip_host: fix NULL-ptr deref and use-after-free errors
Alexey Khoroshilov <khoroshilov(a)ispras.ru>
usbip: fix error handling in stub_probe()
Shuah Khan (Samsung OSG) <shuah(a)kernel.org>
usbip: usbip_host: run rebind from exit when module is removed
Shuah Khan (Samsung OSG) <shuah(a)kernel.org>
usbip: usbip_host: delete device from busid_table after rebind
Shuah Khan <shuahkh(a)osg.samsung.com>
usbip: usbip_host: refine probe and disconnect debug msgs to be useful
zhongjiang <zhongjiang(a)huawei.com>
kernel/exit.c: avoid undefined behaviour when calling wait4()
Michael Kerrisk (man-pages) <mtk.manpages(a)gmail.com>
pipe: cap initial pipe capacity according to pipe-max-size limit
James Chapman <jchapman(a)katalix.com>
l2tp: revert "l2tp: fix missing print session offset info"
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Revert "ARM: dts: imx6qdl-wandboard: Fix audio channel swap"
Vasily Averin <vvs(a)virtuozzo.com>
lockd: lost rollback of set_grace_period() in lockd_down_net()
Bjørn Mork <bjorn(a)mork.no>
qmi_wwan: do not steal interfaces from class drivers
Xin Long <lucien.xin(a)gmail.com>
sctp: delay the authentication for the duplicated cookie-echo chunk
Xin Long <lucien.xin(a)gmail.com>
sctp: fix the issue that the cookie-ack with auth can't get processed
Yuchung Cheng <ycheng(a)google.com>
tcp: ignore Fast Open on repair mode
Debabrata Banerjee <dbanerje(a)akamai.com>
bonding: do not allow rlb updates to invalid mac
Michael Chan <michael.chan(a)broadcom.com>
tg3: Fix vunmap() BUG_ON() triggered from tg3_free_consistent().
Xin Long <lucien.xin(a)gmail.com>
sctp: use the old asoc when making the cookie-ack chunk in dupcook_d
Heiner Kallweit <hkallweit1(a)gmail.com>
r8169: fix powering up RTL8168h
Lance Richardson <lance.richardson.net(a)gmail.com>
net: support compat 64-bit time in {s,g}etsockopt
Eric Dumazet <edumazet(a)google.com>
net_sched: fq: take care of throttled flows before reuse
Moshe Shemesh <moshe(a)mellanox.com>
net/mlx4_en: Verify coalescing parameters are in range
Rob Taglang <rob(a)taglang.io>
net: ethernet: sun: niu set correct packet size in skb
Eric Dumazet <edumazet(a)google.com>
llc: better deal with too small mtu
Andrey Ignatov <rdna(a)fb.com>
ipv4: fix memory leaks in udp_sendmsg, ping_v4_sendmsg
Eric Dumazet <edumazet(a)google.com>
dccp: fix tasklet usage
Hangbin Liu <liuhangbin(a)gmail.com>
bridge: check iface upper dev when setting master via ioctl
Ingo Molnar <mingo(a)elte.hu>
8139too: Use disable_irq_nosync() in rtl8139_poll_controller()
-------------
Diffstat:
Makefile | 4 +-
arch/arm/boot/dts/imx6qdl-wandboard.dtsi | 1 -
arch/arm/include/asm/assembler.h | 10 +++
arch/arm/kernel/traps.c | 5 +-
arch/arm/lib/getuser.S | 10 +++
arch/powerpc/kernel/setup-common.c | 11 ---
arch/powerpc/platforms/powernv/opal-nvram.c | 14 +++-
arch/x86/boot/compressed/eboot.c | 6 +-
arch/x86/kernel/machine_kexec_32.c | 6 +-
arch/x86/kernel/machine_kexec_64.c | 4 +-
arch/x86/xen/mmu.c | 4 -
drivers/net/bonding/bond_alb.c | 2 +-
drivers/net/ethernet/broadcom/tg3.c | 9 +-
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 16 ++++
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h | 7 +-
drivers/net/ethernet/realtek/8139too.c | 2 +-
drivers/net/ethernet/realtek/r8169.c | 3 +
drivers/net/ethernet/sun/niu.c | 5 +-
drivers/net/usb/qmi_wwan.c | 13 +++
drivers/scsi/libsas/sas_scsi_host.c | 33 +++-----
drivers/scsi/sg.c | 2 +-
drivers/usb/usbip/stub.h | 2 +
drivers/usb/usbip/stub_dev.c | 69 +++++++++-------
drivers/usb/usbip/stub_main.c | 105 +++++++++++++++++++++---
fs/ext2/inode.c | 10 ---
fs/hfsplus/super.c | 1 +
fs/lockd/svc.c | 2 +
fs/pipe.c | 3 +
include/linux/efi.h | 8 +-
include/trace/events/xen.h | 16 ----
include/uapi/linux/nl80211.h | 2 +
kernel/exit.c | 4 +
kernel/time/tick-broadcast.c | 8 ++
net/bridge/br_if.c | 4 +-
net/compat.c | 6 +-
net/dccp/ccids/ccid2.c | 14 +++-
net/dccp/timer.c | 2 +-
net/ipv4/ip_output.c | 3 +-
net/ipv4/ping.c | 7 +-
net/ipv4/tcp.c | 2 +-
net/ipv4/tcp_output.c | 7 +-
net/ipv4/udp.c | 7 +-
net/ipv6/ip6_output.c | 3 +-
net/l2tp/l2tp_netlink.c | 2 -
net/llc/af_llc.c | 3 +
net/sched/sch_fq.c | 37 ++++++---
net/sctp/associola.c | 30 ++++++-
net/sctp/inqueue.c | 2 +-
net/sctp/sm_statefuns.c | 89 +++++++++++---------
net/wireless/core.c | 3 +
sound/core/control_compat.c | 3 +-
sound/usb/mixer.c | 8 ++
52 files changed, 427 insertions(+), 202 deletions(-)