The patch titled
Subject: mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.
has been added to the -mm tree. Its filename is
mm-hugetlb-filter-out-hugetlb-pages-if-hugepage-migration-is-not-supported.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-filter-out-hugetlb-page…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-filter-out-hugetlb-page…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: "Aneesh Kumar K.V" <aneesh.kumar(a)linux.ibm.com>
Subject: mm/hugetlb: filter out hugetlb pages if HUGEPAGE migration is not supported.
When scanning for movable pages, filter out hugetlb pages if hugepage
migration is not supported. Without this we hit an infinite loop in
__offline_pages(), where we do
        pfn = scan_movable_pages(start_pfn, end_pfn);
        if (pfn) { /* We have movable pages */
                ret = do_migrate_range(pfn, end_pfn);
                goto repeat;
        }
Fix this by checking hugepage_migration_supported() both in
has_unmovable_pages(), which is the primary backoff mechanism for page
offlining, and, for consistency, also in scan_movable_pages(), because it
makes no sense to return the pfn of a non-migratable huge page.
This issue was revealed by, but not caused by, 72b39cfc4d75 ("mm,
memory_hotplug: do not fail offlining too early").
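For reference, a simplified sketch of the repeat loop (abridged from
__offline_pages(); illustrative, not the verbatim kernel source):
repeat:
        /* With an unmigratable active huge page in the range,
         * scan_movable_pages() keeps returning its pfn... */
        pfn = scan_movable_pages(start_pfn, end_pfn);
        if (pfn) { /* We have movable pages */
                /* ...but do_migrate_range() can never move it, so
                 * nothing ever changes and we spin here forever. */
                ret = do_migrate_range(pfn, end_pfn);
                goto repeat;
        }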
Link: http://lkml.kernel.org/r/20180824063314.21981-1-aneesh.kumar@linux.ibm.com
Fixes: 72b39cfc4d75 ("mm, memory_hotplug: do not fail offlining too early")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar(a)linux.ibm.com>
Reported-by: Haren Myneni <haren(a)linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/memory_hotplug.c | 3 ++-
mm/page_alloc.c | 4 ++++
2 files changed, 6 insertions(+), 1 deletion(-)
--- a/mm/memory_hotplug.c~mm-hugetlb-filter-out-hugetlb-pages-if-hugepage-migration-is-not-supported
+++ a/mm/memory_hotplug.c
@@ -1333,7 +1333,8 @@ static unsigned long scan_movable_pages(
if (__PageMovable(page))
return pfn;
if (PageHuge(page)) {
- if (page_huge_active(page))
+ if (hugepage_migration_supported(page_hstate(page)) &&
+ page_huge_active(page))
return pfn;
else
pfn = round_up(pfn + 1,
--- a/mm/page_alloc.c~mm-hugetlb-filter-out-hugetlb-pages-if-hugepage-migration-is-not-supported
+++ a/mm/page_alloc.c
@@ -7709,6 +7709,10 @@ bool has_unmovable_pages(struct zone *zo
* handle each tail page individually in migration.
*/
if (PageHuge(page)) {
+
+ if (!hugepage_migration_supported(page_hstate(page)))
+ goto unmovable;
+
iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
continue;
}
_
Patches currently in -mm which might be from aneesh.kumar(a)linux.ibm.com are
mm-hugetlb-filter-out-hugetlb-pages-if-hugepage-migration-is-not-supported.patch
Hello,
4.4.147 is the last kernel of the 4.4.x series that boots for me. All
subsequent versions panic on boot. How do I report this bug?
If I'm supposed to use https://bugzilla.kernel.org/, I don't know what
to enter in the fields. I don't even know whether the longterm kernel
falls under "Mainline" or some other tree.
MSB
--
Fun chemistry experiments:
Sodium chloride doubles its volume
when combined with an equal amount of table salt.
Hi Greg,
Arch Linux recently enabled building the kernel documentation in their
PKGBUILD, which turned my normally peaceful build into a spamfest of
unescaped-brace warnings from Perl.
These are fixed upstream with the following two patches:
701b3a3c0ac4 ("PATCH scripts/kernel-doc")
673bb2dfc364 ("scripts/kernel-doc: Escape all literal braces in regexes")
Could they please be applied to 4.18? They should be clean cherry-picks.
Thanks!
Nathan
Hi Greg, Ben, and all
Is https://www.kernel.org/category/releases.html up to date in terms of EOL?
Some news out of the Linaro conference [2] has generated a lot of doubts
and questions, especially because it stated that 3.16 wouldn't be active
anymore. I'm not sure about the news, so I'd like confirmation from you
about the expected EOL.
[2] https://itsfoss.com/linux-lts-kernel-six-years/
Thanks in advance,
Rodrigo.
Inside start_xmit(), the check that the connection is up and the
queueing of packets for later transmission are not atomic. This leaves
a window in which cm_rep_handler can run, set the connection up, and
dequeue pending packets, leaving the packets subsequently queued by
start_xmit() sitting on neigh->queue until they're dropped when the
connection is torn down. This only applies to connected mode. These
dropped packets can really upset TCP, for example, and cause
multi-minute delays in transmission on open connections.
Here's the code in start_xmit where we check to see if the connection
is up:
        if (ipoib_cm_get(neigh)) {
                if (ipoib_cm_up(neigh)) {
                        ipoib_cm_send(dev, skb, ipoib_cm_get(neigh));
                        goto unref;
                }
        }
The race occurs if cm_rep_handler runs after the above connection check
(specifically, once it gets to the point where it acquires priv->lock to
dequeue pending skbs) but before the code snippet below in start_xmit,
where packets are queued.
        if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
                push_pseudo_header(skb, phdr->hwaddr);
                spin_lock_irqsave(&priv->lock, flags);
                __skb_queue_tail(&neigh->queue, skb);
                spin_unlock_irqrestore(&priv->lock, flags);
        } else {
                ++dev->stats.tx_dropped;
                dev_kfree_skb_any(skb);
        }
The patch re-checks ipoib_cm_up() with priv->lock held to avoid this
race condition. Since odds are the connection will be up most of the
time (and thus *not* down most of the time), we don't hold the lock for
the first check, avoiding a slowdown from unnecessary locking for the
majority of the packets transmitted during the connection's life.
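Concretely, the interleaving that strands packets looks like this (an
illustrative timeline, not code):
        CPU0: start_xmit()                     CPU1: cm_rep_handler()
        ------------------                     ----------------------
        ipoib_cm_up(neigh) -> false
                                               mark connection up
                                               spin_lock(&priv->lock)
                                               drain neigh->queue, send skbs
                                               spin_unlock(&priv->lock)
        spin_lock(&priv->lock)
        __skb_queue_tail(&neigh->queue, skb)   <-- never dequeued again
        spin_unlock(&priv->lock)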
Signed-off-by: Aaron Knister <aaron.s.knister(a)nasa.gov>
---
drivers/infiniband/ulp/ipoib/ipoib_main.c | 46 ++++++++++++++++++++++++++-----
1 file changed, 39 insertions(+), 7 deletions(-)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 26cde95b..a950c916 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -1093,6 +1093,21 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
spin_unlock_irqrestore(&priv->lock, flags);
}
+static bool defer_neigh_skb(struct sk_buff *skb,
+ struct net_device *dev,
+ struct ipoib_neigh *neigh,
+ struct ipoib_pseudo_header *phdr)
+{
+ if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+ push_pseudo_header(skb, phdr->hwaddr);
+ __skb_queue_tail(&neigh->queue, skb);
+ return true;
+ }
+
+ return false;
+}
+
+
static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct ipoib_dev_priv *priv = ipoib_priv(dev);
@@ -1101,6 +1116,7 @@ static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct ipoib_pseudo_header *phdr;
struct ipoib_header *header;
unsigned long flags;
+ bool deferred_pkt = true;
phdr = (struct ipoib_pseudo_header *) skb->data;
skb_pull(skb, sizeof(*phdr));
@@ -1160,6 +1176,23 @@ static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
ipoib_cm_send(dev, skb, ipoib_cm_get(neigh));
goto unref;
}
+ /*
+ * Re-check ipoib_cm_up with priv->lock held to avoid
+ * race condition between start_xmit and skb_dequeue in
+ * cm_rep_handler. Since odds are the conn should be up
+ * most of the time, we don't hold the lock for the
+ * first check above
+ */
+ spin_lock_irqsave(&priv->lock, flags);
+ if (ipoib_cm_up(neigh)) {
+ spin_unlock_irqrestore(&priv->lock, flags);
+ ipoib_cm_send(dev, skb, ipoib_cm_get(neigh));
+ } else {
+ deferred_pkt = defer_neigh_skb(skb, dev, neigh, phdr);
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+ goto unref;
} else if (neigh->ah && neigh->ah->valid) {
neigh->ah->last_send = rn->send(dev, skb, neigh->ah->ah,
IPOIB_QPN(phdr->hwaddr));
@@ -1168,17 +1201,16 @@ static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
neigh_refresh_path(neigh, phdr->hwaddr, dev);
}
- if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
- push_pseudo_header(skb, phdr->hwaddr);
- spin_lock_irqsave(&priv->lock, flags);
- __skb_queue_tail(&neigh->queue, skb);
- spin_unlock_irqrestore(&priv->lock, flags);
- } else {
+ spin_lock_irqsave(&priv->lock, flags);
+ deferred_pkt = defer_neigh_skb(skb, dev, neigh, phdr);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+unref:
+ if (!deferred_pkt) {
++dev->stats.tx_dropped;
dev_kfree_skb_any(skb);
}
-unref:
ipoib_neigh_put(neigh);
return NETDEV_TX_OK;
--
2.12.3
Use the new of_get_compatible_child() helper to lookup the slot child
node instead of using of_find_compatible_node(), which searches the
entire tree and thus can return an unrelated (i.e. non-child) node.
This also addresses a potential use-after-free (e.g. after probe
deferral) as the tree-wide helper drops a reference to its first
argument (i.e. the node of the device being probed).
While at it, also fix up the related slot-node reference leak.
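For background, a sketch of why the old call was dangerous (the comment
paraphrases the of_find_compatible_node() kernel-doc; it is not part of
the driver):
        /*
         * of_find_compatible_node() walks the whole device tree starting
         * *after* parent->of_node, so it can return a matching "mmc-slot"
         * node that is not a child of this device at all.  It also calls
         * of_node_put() on its first argument, so each probe attempt
         * (e.g. on probe deferral) drops a reference on the device's own
         * node -- the use-after-free mentioned above.
         */
        slot_node = of_find_compatible_node(parent->of_node, NULL, "mmc-slot");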
Fixes: ed80a13bb4c4 ("mmc: meson-mx-sdio: Add a driver for the Amlogic Meson8 and Meson8b SoCs")
Cc: stable <stable(a)vger.kernel.org> # 4.15
Cc: Carlo Caione <carlo(a)endlessm.com>
Cc: Martin Blumenstingl <martin.blumenstingl(a)googlemail.com>
Cc: Ulf Hansson <ulf.hansson(a)linaro.org>
Signed-off-by: Johan Hovold <johan(a)kernel.org>
---
drivers/mmc/host/meson-mx-sdio.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
index 09cb89645d06..2cfec33178c1 100644
--- a/drivers/mmc/host/meson-mx-sdio.c
+++ b/drivers/mmc/host/meson-mx-sdio.c
@@ -517,19 +517,23 @@ static struct mmc_host_ops meson_mx_mmc_ops = {
static struct platform_device *meson_mx_mmc_slot_pdev(struct device *parent)
{
struct device_node *slot_node;
+ struct platform_device *pdev;
/*
* TODO: the MMC core framework currently does not support
* controllers with multiple slots properly. So we only register
* the first slot for now
*/
- slot_node = of_find_compatible_node(parent->of_node, NULL, "mmc-slot");
+ slot_node = of_get_compatible_child(parent->of_node, "mmc-slot");
if (!slot_node) {
dev_warn(parent, "no 'mmc-slot' sub-node found\n");
return ERR_PTR(-ENOENT);
}
- return of_platform_device_create(slot_node, NULL, parent);
+ pdev = of_platform_device_create(slot_node, NULL, parent);
+ of_node_put(slot_node);
+
+ return pdev;
}
static int meson_mx_mmc_add_host(struct meson_mx_mmc_host *host)
--
2.18.0
This is the start of the stable review cycle for the 4.18.5 release.
There are 22 patches in this series, all will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Sat Aug 25 07:47:43 UTC 2018.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.18.5-rc1…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.18.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 4.18.5-rc1
Jann Horn <jannh(a)google.com>
reiserfs: fix broken xattr handling (heap corruption, bad retval)
Esben Haabendal <eha(a)deif.com>
i2c: imx: Fix race condition in dma read
Hans de Goede <hdegoede(a)redhat.com>
i2c: core: ACPI: Properly set status byte to 0 for multi-byte writes
Lukas Wunner <lukas(a)wunner.de>
PCI: pciehp: Fix unprotected list iteration in IRQ handler
Lukas Wunner <lukas(a)wunner.de>
PCI: pciehp: Fix use-after-free on unplug
Myron Stowe <myron.stowe(a)redhat.com>
PCI: Skip MPS logic for Virtual Functions (VFs)
Zachary Zhang <zhangzg(a)marvell.com>
PCI: aardvark: Size bridges before resources allocation
Lukas Wunner <lukas(a)wunner.de>
PCI: hotplug: Don't leak pci_slot on registration failure
Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
PCI / ACPI / PM: Resume all bridges on suspend-to-RAM
Christian König <ckoenig.leichtzumerken(a)gmail.com>
PCI: Restore resized BAR state on resume
John David Anglin <dave.anglin(a)bell.net>
parisc: Remove ordered stores from syscall.S
John David Anglin <dave.anglin(a)bell.net>
parisc: Remove unnecessary barriers from spinlock.h
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
drm/amdgpu/pm: Fix potential Spectre v1
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
drm/i915/kvmgt: Fix potential Spectre v1
Jeremy Cline <jcline(a)redhat.com>
ext4: fix spectre gadget in ext4_mb_regular_allocator()
Michael Ellerman <mpe(a)ellerman.id.au>
powerpc64s: Show ori31 availability in spectre_v1 sysfs file not v2
Dave Hansen <dave.hansen(a)linux.intel.com>
x86/mm/init: Remove freed kernel image areas from alias mapping
Dave Hansen <dave.hansen(a)linux.intel.com>
x86/mm/init: Add helper for freeing kernel image pages
Dave Hansen <dave.hansen(a)linux.intel.com>
x86/mm/init: Pass unconverted symbol addresses to free_init_pages()
Dave Hansen <dave.hansen(a)linux.intel.com>
mm: Allow non-direct-map arguments to free_reserved_area()
Matthijs van Duin <matthijsvanduin(a)gmail.com>
pty: fix O_CLOEXEC for TIOCGPTPEER
Takashi Iwai <tiwai(a)suse.de>
EDAC: Add missing MEM_LRDDR4 entry in edac_mem_types[]
-------------
Diffstat:
Makefile | 4 ++--
arch/parisc/include/asm/spinlock.h | 8 ++------
arch/parisc/kernel/syscall.S | 24 +++++++++++-----------
arch/powerpc/kernel/security.c | 27 ++++++++++++++++---------
arch/x86/include/asm/processor.h | 1 +
arch/x86/include/asm/set_memory.h | 1 +
arch/x86/mm/init.c | 37 +++++++++++++++++++++++++++++++---
arch/x86/mm/init_64.c | 8 ++------
arch/x86/mm/pageattr.c | 13 ++++++++++++
drivers/edac/edac_mc.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 3 ++-
drivers/gpu/drm/i915/gvt/kvmgt.c | 9 ++++++++-
drivers/i2c/busses/i2c-imx.c | 8 ++++----
drivers/i2c/i2c-core-acpi.c | 11 +++++++---
drivers/pci/controller/pci-aardvark.c | 1 +
drivers/pci/hotplug/pci_hotplug_core.c | 9 +++++++++
drivers/pci/hotplug/pciehp.h | 1 +
drivers/pci/hotplug/pciehp_core.c | 7 +++++++
drivers/pci/hotplug/pciehp_hpc.c | 18 +++++------------
drivers/pci/pci-acpi.c | 6 ++----
drivers/pci/pci.c | 28 +++++++++++++++++++++++++
drivers/pci/probe.c | 4 ++++
drivers/tty/pty.c | 2 +-
fs/ext4/mballoc.c | 4 +++-
fs/reiserfs/xattr.c | 4 +++-
mm/page_alloc.c | 16 +++++++++++++--
26 files changed, 185 insertions(+), 70 deletions(-)
A recent change added some MDS processing to the lpfc_drain_txq routine
that relies on the fcp_wq being allocated. For nvmet operation the
fcp_wq is not allocated, because the port can only be an NVMe target.
When the original MDS support was added, LS_MDS_LOOPBACK was defined
incorrectly as 0x16; it should have been 0x10 (a decimal value was used
where a hex one was intended). This incorrect value allowed
LS_MDS_LOOPBACK to be set simultaneously with LS_NPIV_FAB_SUPPORTED,
causing the driver to crash when it accesses the non-existent fcp_wq.
Correct the bad value setting for LS_MDS_LOOPBACK.
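The arithmetic behind the crash: 0x16 is binary 10110, so the old define
overlapped the LS_NPIV_FAB_SUPPORTED (0x2) and LS_IGNORE_ERATT (0x4)
bits instead of occupying its own bit. A standalone demonstration
(ordinary userspace C, not driver code):
        #include <stdio.h>

        #define LS_NPIV_FAB_SUPPORTED 0x2
        #define LS_IGNORE_ERATT       0x4
        #define LS_MDS_LOOPBACK_BAD   0x16  /* 0b10110: overlaps both flags above */
        #define LS_MDS_LOOPBACK_FIXED 0x10  /* 0b10000: a distinct bit */

        int main(void)
        {
                unsigned int hba_flag = LS_MDS_LOOPBACK_BAD;

                /* prints "yes": loopback mode looks like an NPIV fabric too */
                printf("NPIV? %s\n",
                       (hba_flag & LS_NPIV_FAB_SUPPORTED) ? "yes" : "no");

                hba_flag = LS_MDS_LOOPBACK_FIXED;

                /* prints "no": the fixed value no longer aliases other flags */
                printf("NPIV? %s\n",
                       (hba_flag & LS_NPIV_FAB_SUPPORTED) ? "yes" : "no");
                return 0;
        }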
Fixes: ae9e28f36a6c ("lpfc: Add MDS Diagnostic support.")
Cc: <stable(a)vger.kernel.org> # 4.12
Signed-off-by: Dick Kennedy <dick.kennedy(a)broadcom.com>
Signed-off-by: James Smart <james.smart(a)broadcom.com>
---
drivers/scsi/lpfc/lpfc.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index e0d0da5f43d6..43732e8d1347 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -672,7 +672,7 @@ struct lpfc_hba {
#define LS_NPIV_FAB_SUPPORTED 0x2 /* Fabric supports NPIV */
#define LS_IGNORE_ERATT 0x4 /* intr handler should ignore ERATT */
#define LS_MDS_LINK_DOWN 0x8 /* MDS Diagnostics Link Down */
-#define LS_MDS_LOOPBACK 0x16 /* MDS Diagnostics Link Up (Loopback) */
+#define LS_MDS_LOOPBACK 0x10 /* MDS Diagnostics Link Up (Loopback) */
uint32_t hba_flag; /* hba generic flags */
#define HBA_ERATT_HANDLED 0x1 /* This flag is set when eratt handled */
--
2.13.1
The patch titled
Subject: mm: respect arch_dup_mmap() return value
has been added to the -mm tree. Its filename is
mm-respect-arch_dup_mmap-return-value.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-respect-arch_dup_mmap-return-va…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-respect-arch_dup_mmap-return-va…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Nadav Amit <namit(a)vmware.com>
Subject: mm: respect arch_dup_mmap() return value
d70f2a14b72a4 ("include/linux/sched/mm.h: uninline mmdrop_async(), etc")
ignored the return value of arch_dup_mmap(). As a result, on x86, a
failure to duplicate the LDT (e.g., due to a memory allocation error)
would leave the duplicated memory mapping in an inconsistent state.
Fix this by honoring the return value, as was done before the change.
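For context, the x86 implementation looks roughly like this (abridged
from arch/x86/include/asm/mmu_context.h of this era; treat the details
as a sketch):
        static inline int arch_dup_mmap(struct mm_struct *oldmm,
                                        struct mm_struct *mm)
        {
                paravirt_arch_dup_mmap(oldmm, mm);
                /* Duplicating the LDT allocates memory and can fail with
                 * -ENOMEM; a caller that ignores this return value leaves
                 * the new mm without a consistent LDT. */
                return ldt_dup_context(oldmm, mm);
        }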
Link: http://lkml.kernel.org/r/20180823051229.211856-1-namit@vmware.com
Fixes: d70f2a14b72a4 ("include/linux/sched/mm.h: uninline mmdrop_async(), etc")
Signed-off-by: Nadav Amit <namit(a)vmware.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
kernel/fork.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/kernel/fork.c~mm-respect-arch_dup_mmap-return-value
+++ a/kernel/fork.c
@@ -550,8 +550,7 @@ static __latent_entropy int dup_mmap(str
goto out;
}
/* a new mm has just been created */
- arch_dup_mmap(oldmm, mm);
- retval = 0;
+ retval = arch_dup_mmap(oldmm, mm);
out:
up_write(&mm->mmap_sem);
flush_tlb_mm(oldmm);
_
Patches currently in -mm which might be from namit(a)vmware.com are
mm-respect-arch_dup_mmap-return-value.patch
The patch titled
Subject: mm: migration: fix migration of huge PMD shared pages
has been added to the -mm tree. Its filename is
mm-migration-fix-migration-of-huge-pmd-shared-pages.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-migration-fix-migration-of-huge…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-migration-fix-migration-of-huge…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Mike Kravetz <mike.kravetz(a)oracle.com>
Subject: mm: migration: fix migration of huge PMD shared pages
The page migration code employs try_to_unmap() to try to unmap the source
page. This is accomplished by using rmap_walk to find all vmas where the
page is mapped. This search stops when the page mapcount is zero. For
shared PMD huge pages, the page mapcount is always 1 no matter the number
of mappings; shared mappings are tracked via the reference count of the
PMD page. Therefore, try_to_unmap stops prematurely and does not
completely unmap all mappings of the source page.
This problem can result in data corruption, as writes to the original
source page can happen after the contents of the page are copied to the
target page. Hence, data is lost.
This problem was originally seen as DB corruption of shared global areas
after a huge page was soft offlined due to ECC memory errors. DB
developers noticed they could reproduce the issue by (hotplug) offlining
memory used to back huge pages. A simple testcase can reproduce the
problem by creating a shared PMD mapping (note that this must be at least
PUD_SIZE in size and PUD_SIZE aligned, i.e. 1GB on x86), and using
migrate_pages() to migrate process pages between nodes while continually
writing to the huge pages being migrated; a sketch of such a testcase
follows.
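A minimal sketch of such a testcase (hypothetical path and node numbers;
it assumes a hugetlbfs mount at /dev/hugepages, a populated huge page
pool, and at least two NUMA nodes; link with -lnuma):
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <numaif.h>             /* migrate_pages(2) */
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define PUD_SIZE (1UL << 30)    /* 1GB on x86 */

        int main(void)
        {
                unsigned long node0 = 1UL << 0, node1 = 1UL << 1;
                int fd = open("/dev/hugepages/repro", O_CREAT | O_RDWR, 0600);
                char *p;

                ftruncate(fd, PUD_SIZE);
                /* MAP_SHARED hugetlb mapping, PUD_SIZE in size; a robust
                 * test would MAP_FIXED a PUD_SIZE-aligned address to
                 * guarantee the alignment PMD sharing requires. */
                p = mmap(NULL, PUD_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

                if (fork() == 0) {
                        /* Child maps the same file range via the inherited
                         * mapping, so the kernel can share PMD pages between
                         * the two mms; keep dirtying the pages... */
                        for (;;)
                                memset(p, 0x5a, PUD_SIZE);
                }

                /* ...while the parent bounces them between nodes
                 * (maxnode = 65: the masks are 64 bits wide). */
                for (;;) {
                        migrate_pages(0, 65, &node0, &node1);
                        migrate_pages(0, 65, &node1, &node0);
                }
        }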
To fix, have the try_to_unmap_one routine check for huge PMD sharing by
calling huge_pmd_unshare for hugetlbfs huge pages. If it is a shared
mapping, it will be 'unshared', which removes the page table entry and
drops the reference on the PMD page. After this, flush caches and TLB.
mmu notifiers are called before locking page tables, but we cannot be
sure of PMD sharing until the page tables are locked. Therefore, check
for the possibility of PMD sharing before locking, so that the notifiers
can prepare for the worst possible case.
Link: http://lkml.kernel.org/r/20180823205917.16297-2-mike.kravetz@oracle.com
Fixes: 39dde65c9940 ("shared page table for hugetlb page")
Signed-off-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Davidlohr Bueso <dave(a)stgolabs.net>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/hugetlb.h | 14 ++++++++++++
mm/hugetlb.c | 40 ++++++++++++++++++++++++++++++++++--
mm/rmap.c | 42 +++++++++++++++++++++++++++++++++++---
3 files changed, 91 insertions(+), 5 deletions(-)
--- a/include/linux/hugetlb.h~mm-migration-fix-migration-of-huge-pmd-shared-pages
+++ a/include/linux/hugetlb.h
@@ -140,6 +140,8 @@ pte_t *huge_pte_alloc(struct mm_struct *
pte_t *huge_pte_offset(struct mm_struct *mm,
unsigned long addr, unsigned long sz);
int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end);
struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
int write);
struct page *follow_huge_pd(struct vm_area_struct *vma,
@@ -170,6 +172,18 @@ static inline unsigned long hugetlb_tota
return 0;
}
+static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+ pte_t *ptep)
+{
+ return 0;
+}
+
+static inline void adjust_range_if_pmd_sharing_possible(
+ struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+}
+
#define follow_hugetlb_page(m,v,p,vs,a,b,i,w,n) ({ BUG(); 0; })
#define follow_huge_addr(mm, addr, write) ERR_PTR(-EINVAL)
#define copy_hugetlb_page_range(src, dst, vma) ({ BUG(); 0; })
--- a/mm/hugetlb.c~mm-migration-fix-migration-of-huge-pmd-shared-pages
+++ a/mm/hugetlb.c
@@ -4541,6 +4541,9 @@ static unsigned long page_table_shareabl
return saddr;
}
+#define _range_in_vma(vma, start, end) \
+ ((vma)->vm_start <= (start) && (end) <= (vma)->vm_end)
+
static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
{
unsigned long base = addr & PUD_MASK;
@@ -4549,13 +4552,41 @@ static bool vma_shareable(struct vm_area
/*
* check on proper vm_flags and page table alignment
*/
- if (vma->vm_flags & VM_MAYSHARE &&
- vma->vm_start <= base && end <= vma->vm_end)
+ if (vma->vm_flags & VM_MAYSHARE && _range_in_vma(vma, base, end))
return true;
return false;
}
/*
+ * Determine if start,end range within vma could be mapped by shared pmd.
+ * If yes, adjust start and end to cover range associated with possible
+ * shared pmd mappings.
+ */
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+ unsigned long check_addr = *start;
+
+ if (!(vma->vm_flags & VM_MAYSHARE))
+ return;
+
+ for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
+ unsigned long a_start = check_addr & PUD_MASK;
+ unsigned long a_end = a_start + PUD_SIZE;
+
+ /*
+ * If sharing is possible, adjust start/end if necessary.
+ */
+ if (_range_in_vma(vma, a_start, a_end)) {
+ if (a_start < *start)
+ *start = a_start;
+ if (a_end > *end)
+ *end = a_end;
+ }
+ }
+}
+
+/*
* Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
* and returns the corresponding pte. While this is not necessary for the
* !shared pmd case because we can allocate the pmd later as well, it makes the
@@ -4652,6 +4683,11 @@ int huge_pmd_unshare(struct mm_struct *m
{
return 0;
}
+
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+ unsigned long *start, unsigned long *end)
+{
+}
#define want_pmd_share() (0)
#endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
--- a/mm/rmap.c~mm-migration-fix-migration-of-huge-pmd-shared-pages
+++ a/mm/rmap.c
@@ -1362,11 +1362,21 @@ static bool try_to_unmap_one(struct page
}
/*
- * We have to assume the worse case ie pmd for invalidation. Note that
- * the page can not be free in this function as call of try_to_unmap()
- * must hold a reference on the page.
+ * For THP, we have to assume the worse case ie pmd for invalidation.
+ * For hugetlb, it could be much worse if we need to do pud
+ * invalidation in the case of pmd sharing.
+ *
+ * Note that the page can not be free in this function as call of
+ * try_to_unmap() must hold a reference on the page.
*/
end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
+ if (PageHuge(page)) {
+ /*
+ * If sharing is possible, start and end will be adjusted
+ * accordingly.
+ */
+ adjust_range_if_pmd_sharing_possible(vma, &start, &end);
+ }
mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
while (page_vma_mapped_walk(&pvmw)) {
@@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page
subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
address = pvmw.address;
+ if (PageHuge(page)) {
+ if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
+ /*
+ * huge_pmd_unshare unmapped an entire PMD
+ * page. There is no way of knowing exactly
+ * which PMDs may be cached for this mm, so
+ * we must flush them all. start/end were
+ * already adjusted above to cover this range.
+ */
+ flush_cache_range(vma, start, end);
+ flush_tlb_range(vma, start, end);
+ mmu_notifier_invalidate_range(mm, start, end);
+
+ /*
+ * The ref count of the PMD page was dropped
+ * which is part of the way map counting
+ * is done for shared PMDs. Return 'true'
+ * here. When there is no other sharing,
+ * huge_pmd_unshare returns false and we will
+ * unmap the actual page and drop map count
+ * to zero.
+ */
+ page_vma_mapped_walk_done(&pvmw);
+ break;
+ }
+ }
if (IS_ENABLED(CONFIG_MIGRATION) &&
(flags & TTU_MIGRATION) &&
_
Patches currently in -mm which might be from mike.kravetz(a)oracle.com are
mm-migration-fix-migration-of-huge-pmd-shared-pages.patch
hugetlb-take-pmd-sharing-into-account-when-flushing-tlb-caches.patch
We use kzalloc to allocate the write_buf that we use for the i2c
transfer on hdcp write, but it seems we are forgetting to free that
memory once the i2c transfer is completed.
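For context, the allocation at the top of intel_hdmi_hdcp_write() looks
roughly like this (abridged from memory; check the source before relying
on the details):
        write_buf = kzalloc(size + 1, GFP_KERNEL);
        if (!write_buf)
                return -ENOMEM;

        /* DDC offset byte followed by the payload; every return after
         * this point must kfree(write_buf), which the early returns
         * replaced in the hunk below failed to do. */
        write_buf[0] = offset;
        memcpy(&write_buf[1], buffer, size);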
Reported-by: Brian J Wood <brian.j.wood(a)intel.com>
Fixes: 2320175feb74 ("drm/i915: Implement HDCP for HDMI")
Cc: Ramalingam C <ramalingam.c(a)intel.com>
Cc: Sean Paul <seanpaul(a)chromium.org>
Cc: Jani Nikula <jani.nikula(a)linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Cc: <stable(a)vger.kernel.org> # v4.17+
Signed-off-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
---
drivers/gpu/drm/i915/intel_hdmi.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 8363fbd18ee8..a1799b5c12bb 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -943,8 +943,12 @@ static int intel_hdmi_hdcp_write(struct intel_digital_port *intel_dig_port,
ret = i2c_transfer(adapter, &msg, 1);
if (ret == 1)
- return 0;
- return ret >= 0 ? -EIO : ret;
+ ret = 0;
+ else if (ret >= 0)
+ ret = -EIO;
+
+ kfree(write_buf);
+ return ret;
}
static
--
2.17.1