The patch titled
Subject: mm/thp: do not wait for lock_page() in deferred_split_scan()
has been added to the -mm tree. Its filename is
mm-thp-do-not-wait-for-lock_page-in-deferred_split_scan.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-thp-do-not-wait-for-lock_page-i…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-thp-do-not-wait-for-lock_page-i…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/SubmitChecklist when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Subject: mm/thp: do not wait for lock_page() in deferred_split_scan()
deferred_split_scan() gets called from the reclaim path. Waiting for the
page lock there may lead to deadlock.
Replace lock_page() with trylock_page() and skip the page if we fail to
lock it. We will get back to the page on the next scan.
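For illustration only (not part of the patch): a minimal userspace
analogue of the trylock-and-skip pattern, using pthreads; the names are
illustrative.

  #include <pthread.h>
  #include <stdio.h>

  /* Instead of blocking on a contended lock (which can deadlock in a
   * reclaim-like path), skip the item and revisit it on the next scan. */
  static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;

  static void scan_one(void)
  {
          if (pthread_mutex_trylock(&page_lock) != 0) {
                  /* Lock held elsewhere: skip rather than wait. */
                  printf("page busy, retry on next scan\n");
                  return;
          }
          printf("page locked, split attempted\n");
          pthread_mutex_unlock(&page_lock);
  }

  int main(void)
  {
          scan_one();                     /* uncontended: proceeds */
          pthread_mutex_lock(&page_lock);
          scan_one();                     /* contended: skipped, no deadlock */
          pthread_mutex_unlock(&page_lock);
          return 0;
  }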
Link: http://lkml.kernel.org/r/20180315150747.31945-1-kirill.shutemov@linux.intel…
Fixes: 9a982250f773 ("thp: introduce deferred_split_huge_page()")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
diff -puN mm/huge_memory.c~mm-thp-do-not-wait-for-lock_page-in-deferred_split_scan mm/huge_memory.c
--- a/mm/huge_memory.c~mm-thp-do-not-wait-for-lock_page-in-deferred_split_scan
+++ a/mm/huge_memory.c
@@ -2783,11 +2783,13 @@ static unsigned long deferred_split_scan
list_for_each_safe(pos, next, &list) {
page = list_entry((void *)pos, struct page, mapping);
- lock_page(page);
+ if (!trylock_page(page))
+ goto next;
/* split_huge_page() removes page from list on success */
if (!split_huge_page(page))
split++;
unlock_page(page);
+next:
put_page(page);
}
_
Patches currently in -mm which might be from kirill.shutemov(a)linux.intel.com are
mm-khugepaged-convert-vm_bug_on-to-collapse-fail.patch
mm-thp-do-not-wait-for-lock_page-in-deferred_split_scan.patch
The patch titled
Subject: mm/vmscan: wake up flushers for legacy cgroups too
has been added to the -mm tree. Its filename is
mm-vmscan-wake-up-flushers-for-legacy-cgroups-too.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-vmscan-wake-up-flushers-for-leg…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmscan-wake-up-flushers-for-leg…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/SubmitChecklist when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Subject: mm/vmscan: wake up flushers for legacy cgroups too
Commit 726d061fbd36 ("mm: vmscan: kick flushers when we encounter dirty
pages on the LRU") added flusher invocation to shrink_inactive_list() when
many dirty pages on the LRU are encountered.
However, shrink_inactive_list() doesn't wake up flushers for legacy cgroup
reclaim, so the subsequent commit bbef938429f5 ("mm: vmscan: remove old
flusher wakeup from direct reclaim path") removed the only source of
flusher wakeup in the legacy memcg reclaim path.
This leads to a premature OOM if there are too many dirty pages in the
cgroup:
# mkdir /sys/fs/cgroup/memory/test
# echo $$ > /sys/fs/cgroup/memory/test/tasks
# echo 50M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
# dd if=/dev/zero of=tmp_file bs=1M count=100
Killed
dd invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=0
Call Trace:
dump_stack+0x46/0x65
dump_header+0x6b/0x2ac
oom_kill_process+0x21c/0x4a0
out_of_memory+0x2a5/0x4b0
mem_cgroup_out_of_memory+0x3b/0x60
mem_cgroup_oom_synchronize+0x2ed/0x330
pagefault_out_of_memory+0x24/0x54
__do_page_fault+0x521/0x540
page_fault+0x45/0x50
Task in /test killed as a result of limit of /test
memory: usage 51200kB, limit 51200kB, failcnt 73
memory+swap: usage 51200kB, limit 9007199254740988kB, failcnt 0
kmem: usage 296kB, limit 9007199254740988kB, failcnt 0
Memory cgroup stats for /test: cache:49632KB rss:1056KB rss_huge:0KB shmem:0KB
mapped_file:0KB dirty:49500KB writeback:0KB swap:0KB inactive_anon:0KB
active_anon:1168KB inactive_file:24760KB active_file:24960KB unevictable:0KB
Memory cgroup out of memory: Kill process 3861 (bash) score 88 or sacrifice child
Killed process 3876 (dd) total-vm:8484kB, anon-rss:1052kB, file-rss:1720kB, shmem-rss:0kB
oom_reaper: reaped process 3876 (dd), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Wake up flushers in legacy cgroup reclaim too.
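For illustration (not the kernel source): a condensed userspace model of
the control flow after this patch; the names and the early return are
simplifications of shrink_inactive_list().

  #include <stdbool.h>
  #include <stdio.h>

  static void wakeup_flusher_threads(void) { printf("flushers woken\n"); }

  /* The wakeup now sits before the legacy-memcg-only handling, so
   * cgroup v1 reclaim reaches it too; only global reclaim goes on to
   * let kswapd write pages itself (PGDAT_DIRTY). */
  static void shrink_step(unsigned long nr_unqueued_dirty,
                          unsigned long nr_taken, bool global_reclaim)
  {
          if (nr_unqueued_dirty == nr_taken)
                  wakeup_flusher_threads();

          if (!global_reclaim)
                  return;         /* legacy memcg: no stalling logic */

          if (nr_unqueued_dirty == nr_taken)
                  printf("set PGDAT_DIRTY\n");
  }

  int main(void)
  {
          shrink_step(32, 32, false); /* legacy cgroup: still wakes flushers */
          shrink_step(32, 32, true);  /* global reclaim: wakeup + PGDAT_DIRTY */
          return 0;
  }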
Link: http://lkml.kernel.org/r/20180315164553.17856-1-aryabinin@virtuozzo.com
Fixes: bbef938429f5 ("mm: vmscan: remove old flusher wakeup from direct reclaim path")
Signed-off-by: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Tested-by: Shakeel Butt <shakeelb(a)google.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
diff -puN mm/vmscan.c~mm-vmscan-wake-up-flushers-for-legacy-cgroups-too mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-wake-up-flushers-for-legacy-cgroups-too
+++ a/mm/vmscan.c
@@ -1772,6 +1772,20 @@ shrink_inactive_list(unsigned long nr_to
set_bit(PGDAT_WRITEBACK, &pgdat->flags);
/*
+ * If dirty pages are scanned that are not queued for IO, it
+ * implies that flushers are not doing their job. This can
+ * happen when memory pressure pushes dirty pages to the end of
+ * the LRU before the dirty limits are breached and the dirty
+ * data has expired. It can also happen when the proportion of
+ * dirty pages grows not through writes but through memory
+ * pressure reclaiming all the clean cache. And in some cases,
+ * the flushers simply cannot keep up with the allocation
+ * rate. Nudge the flusher threads in case they are asleep.
+ */
+ if (stat.nr_unqueued_dirty == nr_taken)
+ wakeup_flusher_threads(WB_REASON_VMSCAN);
+
+ /*
* Legacy memcg will stall in page writeback so avoid forcibly
* stalling here.
*/
@@ -1783,22 +1797,9 @@ shrink_inactive_list(unsigned long nr_to
if (stat.nr_dirty && stat.nr_dirty == stat.nr_congested)
set_bit(PGDAT_CONGESTED, &pgdat->flags);
- /*
- * If dirty pages are scanned that are not queued for IO, it
- * implies that flushers are not doing their job. This can
- * happen when memory pressure pushes dirty pages to the end of
- * the LRU before the dirty limits are breached and the dirty
- * data has expired. It can also happen when the proportion of
- * dirty pages grows not through writes but through memory
- * pressure reclaiming all the clean cache. And in some cases,
- * the flushers simply cannot keep up with the allocation
- * rate. Nudge the flusher threads in case they are asleep, but
- * also allow kswapd to start writing pages during reclaim.
- */
- if (stat.nr_unqueued_dirty == nr_taken) {
- wakeup_flusher_threads(WB_REASON_VMSCAN);
+ /* Allow kswapd to start writing pages during reclaim. */
+ if (stat.nr_unqueued_dirty == nr_taken)
set_bit(PGDAT_DIRTY, &pgdat->flags);
- }
/*
* If kswapd scans pages marked marked for immediate
_
Patches currently in -mm which might be from aryabinin(a)virtuozzo.com are
mm-vmscan-wake-up-flushers-for-legacy-cgroups-too.patch
mm-vmscan-update-stale-comments.patch
mm-vmscan-replace-mm_vmscan_lru_shrink_inactive-with-shrink_page_list-tracepoint.patch
mm-vmscan-remove-redundant-current_may_throttle-check.patch
mm-vmscan-dont-change-pgdat-state-on-base-of-a-single-lru-list-state.patch
mm-vmscan-dont-mess-with-pgdat-flags-in-memcg-reclaim.patch
mm-kasan-dont-vfree-nonexistent-vm_area.patch
This is a note to let you know that I've just added the patch titled
siox: fix possible buffer overflow in device_add_store
to my char-misc git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
in the char-misc-testing branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will be merged to the char-misc-next branch sometime soon,
after it passes testing, and the merge window is open.
If you have any questions about this process, please let me know.
From f87deada80fe483e2286e29cd866dc66ddc2b6bc Mon Sep 17 00:00:00 2001
From: Gavin Schenk <g.schenk(a)eckelmann.de>
Date: Wed, 14 Feb 2018 15:25:02 +0100
Subject: siox: fix possible buffer overflow in device_add_store
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
A width of 20 in the format string lets sscanf() write 21 bytes (20
characters plus the terminating NUL) into the 20-byte destination buffer
'type[20]'; use %19s to prevent overflowing it.
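For illustration (not part of the patch): a small standalone C program
showing the safe width; the input string is made up.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char type[20];
          /* "%19s" stores at most 19 characters plus the terminating
           * NUL, fitting a 20-byte buffer.  "%20s" would store up to
           * 20 characters plus the NUL: 21 bytes, one past the end. */
          const char *input = "a-type-name-much-longer-than-nineteen-chars";

          if (sscanf(input, "%19s", type) == 1)
                  printf("stored %zu chars: %s\n", strlen(type), type);
          return 0;
  }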
Fixes: bbecb07fa0af ("siox: new driver framework for eckelmann SIOX")
Cc: stable <stable(a)vger.kernel.org>
Reported-by: David Binderman <dcb314(a)hotmail.com>
Signed-off-by: Gavin Schenk <g.schenk(a)eckelmann.de>
Reviewed-by: Uwe Kleine-König <u.kleine-koenig(a)pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/siox/siox-core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/siox/siox-core.c b/drivers/siox/siox-core.c
index fdfcdea25867..16590dfaafa4 100644
--- a/drivers/siox/siox-core.c
+++ b/drivers/siox/siox-core.c
@@ -594,7 +594,7 @@ static ssize_t device_add_store(struct device *dev,
size_t inbytes = 0, outbytes = 0;
u8 statustype = 0;
- ret = sscanf(buf, "%20s %zu %zu %hhu", type, &inbytes,
+ ret = sscanf(buf, "%19s %zu %zu %hhu", type, &inbytes,
&outbytes, &statustype);
if (ret != 3 && ret != 4)
return -EINVAL;
--
2.16.2
The current cleanup in the error path of xen_bind_pirq_msi_to_irq is
wrong. First, there's an off-by-one in the cleanup loop, which can lead
to unbinding the wrong IRQs.
Second, IRQs that were never bound won't be freed, thus leaking IRQ
numbers.
Note that there's no need to differentiate between bound and unbound
IRQs when freeing them; __unbind_from_irq will deal with both correctly.
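For illustration (not the driver code): a userspace simulation of the two
cleanup loops, assuming binding failed at index i with nvec descriptors
allocated.

  #include <stdio.h>

  static void unbind(int n) { printf("__unbind_from_irq(irq + %d)\n", n); }

  int main(void)
  {
          int nvec = 4, i = 2;    /* 0 and 1 bound; binding 2 failed */

          /* Old loop: its range depends on the value of i at failure
           * and never reaches index 3, which was allocated but not
           * bound, so that IRQ number leaks. */
          for (int j = i; j >= 0; j--)
                  unbind(j);

          printf("---\n");

          /* New loop: walks every allocated descriptor;
           * __unbind_from_irq() handles bound and unbound IRQs alike. */
          while (nvec--)
                  unbind(nvec);
          return 0;
  }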
Fixes: 4892c9b4ada9f9 ("xen: add support for MSI message groups")
Reported-by: Hooman Mirhadi <mirhadih(a)amazon.com>
Signed-off-by: Roger Pau Monné <roger.pau(a)citrix.com>
---
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Amit Shah <aams(a)amazon.com>
CC: stable(a)vger.kernel.org
Cc: xen-devel(a)lists.xenproject.org
---
drivers/xen/events/events_base.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index b241bfa529ce..159faf1269fb 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -763,8 +763,8 @@ int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc,
mutex_unlock(&irq_mapping_update_lock);
return irq;
error_irq:
- for (; i >= 0; i--)
- __unbind_from_irq(irq + i);
+ while (nvec--)
+ __unbind_from_irq(irq + nvec);
mutex_unlock(&irq_mapping_update_lock);
return ret;
}
--
2.16.1
On architectures with CONFIG_HAVE_ARCH_HUGE_VMAP set, ioremap()
may create pud/pmd mappings. A kernel panic was observed on arm64
systems with Cortex-A75 in the following steps, as described by
Hanjun Guo:
1. ioremap a 4K size; a valid page table is built.
2. iounmap it; pte0 is set to 0.
3. ioremap the same address with a 2M size; pgd/pmd are unchanged,
then a new value is set for the pmd.
4. pte0 is leaked.
5. The CPU may take an exception because the old pmd is still in the
TLB, which leads to a kernel panic.
This panic is not reproducible on x86: INVLPG, called from iounmap,
purges all levels of entries associated with the purged address.
x86 still has the memory leak, however.
The patch changes the ioremap path to free unmapped page table(s) since
doing so in the unmap path has the following issues:
- The iounmap() path is shared with vunmap(). Since vmap() only
supports pte mappings, making vunmap() free a pte page is an
overhead for regular vmap users, as they do not need a pte page
freed up.
- Checking if all entries in a pte page are cleared in the unmap path
is racy, and serializing this check is expensive.
- The unmap path calls free_vmap_area_noflush() to do lazy TLB purges.
Clearing a pud/pmd entry before the lazy TLB purges needs extra TLB
purge.
Add two interfaces, pud_free_pmd_page() and pmd_free_pte_page(),
which clear a given pud/pmd entry and free up a page for the lower
level entries.
This patch implements their stub functions on x86 and arm64, which
work as a workaround.
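For illustration (not kernel code): a tiny userspace model of the
contract the new helpers provide to the ioremap path; the stubs succeed
only when no lower-level table is attached.

  #include <stdbool.h>
  #include <stdio.h>

  struct pmd { bool has_pte_page; };

  /* Stub semantics from this patch: report success only when the entry
   * is already clear (i.e. behave like pmd_none()), so a huge mapping
   * is never installed over a leftover pte page. */
  static int pmd_free_pte_page(struct pmd *pmd)
  {
          return !pmd->has_pte_page;
  }

  static void try_huge_map(struct pmd *pmd)
  {
          if (pmd_free_pte_page(pmd))
                  printf("install 2M huge mapping\n");
          else
                  printf("fall back to 4K pte mappings\n");
  }

  int main(void)
  {
          struct pmd clean = { .has_pte_page = false };
          struct pmd stale = { .has_pte_page = true };  /* leaked pte page */

          try_huge_map(&clean);
          try_huge_map(&stale);
          return 0;
  }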
Reported-by: Lei Li <lious.lilei(a)hisilicon.com>
Signed-off-by: Toshi Kani <toshi.kani(a)hpe.com>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Cc: Wang Xuefeng <wxf.wang(a)hisilicon.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: Hanjun Guo <guohanjun(a)huawei.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Borislav Petkov <bp(a)suse.de>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Chintan Pandya <cpandya(a)codeaurora.org>
Cc: <stable(a)vger.kernel.org>
---
arch/arm64/mm/mmu.c | 10 ++++++++++
arch/x86/mm/pgtable.c | 24 ++++++++++++++++++++++++
include/asm-generic/pgtable.h | 10 ++++++++++
lib/ioremap.c | 6 ++++--
4 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8c704f1e53c2..2dbb2c9f1ec1 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -972,3 +972,13 @@ int pmd_clear_huge(pmd_t *pmdp)
pmd_clear(pmdp);
return 1;
}
+
+int pud_free_pmd_page(pud_t *pud)
+{
+ return pud_none(*pud);
+}
+
+int pmd_free_pte_page(pmd_t *pmd)
+{
+ return pmd_none(*pmd);
+}
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 004abf9ebf12..1eed7ed518e6 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -702,4 +702,28 @@ int pmd_clear_huge(pmd_t *pmd)
return 0;
}
+
+/**
+ * pud_free_pmd_page - Clear pud entry and free pmd page.
+ * @pud: Pointer to a PUD.
+ *
+ * Context: The pud range has been unmapped and TLB purged.
+ * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ */
+int pud_free_pmd_page(pud_t *pud)
+{
+ return pud_none(*pud);
+}
+
+/**
+ * pmd_free_pte_page - Clear pmd entry and free pte page.
+ * @pmd: Pointer to a PMD.
+ *
+ * Context: The pmd range has been unmapped and TLB purged.
+ * Return: 1 if clearing the entry succeeded. 0 otherwise.
+ */
+int pmd_free_pte_page(pmd_t *pmd)
+{
+ return pmd_none(*pmd);
+}
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 2cfa3075d148..2490800f7c5a 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -983,6 +983,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
int pud_clear_huge(pud_t *pud);
int pmd_clear_huge(pmd_t *pmd);
+int pud_free_pmd_page(pud_t *pud);
+int pmd_free_pte_page(pmd_t *pmd);
#else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
{
@@ -1008,6 +1010,14 @@ static inline int pmd_clear_huge(pmd_t *pmd)
{
return 0;
}
+static inline int pud_free_pmd_page(pud_t *pud)
+{
+ return 0;
+}
+static inline int pmd_free_pte_page(pmd_t *pmd)
+{
+ return 0;
+}
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
diff --git a/lib/ioremap.c b/lib/ioremap.c
index b808a390e4c3..54e5bbaa3200 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -91,7 +91,8 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
if (ioremap_pmd_enabled() &&
((next - addr) == PMD_SIZE) &&
- IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
+ IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
+ pmd_free_pte_page(pmd)) {
if (pmd_set_huge(pmd, phys_addr + addr, prot))
continue;
}
@@ -117,7 +118,8 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
if (ioremap_pud_enabled() &&
((next - addr) == PUD_SIZE) &&
- IS_ALIGNED(phys_addr + addr, PUD_SIZE)) {
+ IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
+ pud_free_pmd_page(pud)) {
if (pud_set_huge(pud, phys_addr + addr, prot))
continue;
}
A memory block was allocated in intel_svm_bind_mm() but never freed
in a failure path. This patch fixes that by freeing it to avoid a
memory leak.
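For illustration (not the driver source): a minimal sketch of the
error-path rule the fix restores: every block allocated before the
failing step is freed before the common exit. Names and the forced
failure are hypothetical.

  #include <stdlib.h>

  struct svm { int pasid; };
  struct svm_dev { int id; };

  static int bind_mm_model(int pasid_ret)
  {
          struct svm *svm = malloc(sizeof(*svm));
          struct svm_dev *sdev = malloc(sizeof(*sdev));
          int ret = pasid_ret;    /* result of the PASID allocation */

          if (ret < 0) {
                  free(svm);      /* was already freed before the fix */
                  free(sdev);     /* the fix adds this counterpart */
                  goto out;
          }
          /* success path would publish svm/sdev; freed here for the model */
          free(svm);
          free(sdev);
          ret = 0;
  out:
          return ret;
  }

  int main(void)
  {
          return bind_mm_model(-12) == -12 ? 0 : 1;
  }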
Cc: Ashok Raj <ashok.raj(a)intel.com>
Cc: Jacob Pan <jacob.jun.pan(a)linux.intel.com>
Cc: <stable(a)vger.kernel.org> # v4.4+
Signed-off-by: Lu Baolu <baolu.lu(a)linux.intel.com>
---
drivers/iommu/intel-svm.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index 35a408d..3d4b924 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -396,6 +396,7 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
pasid_max - 1, GFP_KERNEL);
if (ret < 0) {
kfree(svm);
+ kfree(sdev);
goto out;
}
svm->pasid = ret;
--
2.7.4
This is a note to let you know that I've just added the patch titled
Revert "base: arch_topology: fix section mismatch build warnings"
to my driver-core git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
in the driver-core-testing branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will be merged to the driver-core-next branch sometime soon,
after it passes testing, and the merge window is open.
If you have any questions about this process, please let me know.
From 9de9a449482677a75f1edd2049268a7efc40fc96 Mon Sep 17 00:00:00 2001
From: Gaku Inami <gaku.inami.xh(a)renesas.com>
Date: Tue, 13 Feb 2018 11:06:40 +0900
Subject: Revert "base: arch_topology: fix section mismatch build warnings"
This reverts commit 452562abb5b7 ("base: arch_topology: fix section
mismatch build warnings"). It causes the notifier call to hang in some
use-cases.
In some cases using maxcpus, some of the CPUs are booted first and the
remaining CPUs are booted later. As an example, users who want to
achieve fast boot-up often use the following procedure.
1) Define all CPUs on device tree (CA57x4 + CA53x4)
2) Add "maxcpus=4" in bootargs
3) Kernel boot up with CA57x4
4) After kernel boot up, CA53x4 is booted from user
When kernel init finished, CPUFREQ_POLICY_NOTIFIER was still not
unregistered. This means that "__init init_cpu_capacity_callback()"
will be called after the kernel init sequence. To avoid this problem,
the __init{,data} annotations need to be removed by reverting this
commit.
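For illustration (not kernel code): a userspace model of the lifetime
problem. In the kernel, __init text is discarded after boot, so a
still-registered notifier would jump into freed memory; the flag below
stands in for that.

  #include <stdbool.h>
  #include <stdio.h>

  /* Models free_initmem(): after this, anything marked __init is gone. */
  static bool init_text_alive = true;

  static void init_cpu_capacity_callback_model(void)
  {
          if (!init_text_alive) {
                  printf("BUG: notifier ran after its __init text was freed\n");
                  return;
          }
          printf("notifier ran during boot: fine\n");
  }

  int main(void)
  {
          init_cpu_capacity_callback_model(); /* maxcpus=4 boot: fine */
          init_text_alive = false;            /* kernel frees init memory */
          init_cpu_capacity_callback_model(); /* late CPUs online: broken */
          return 0;
  }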
Also, the reverted commit was needed to fix the kernel compile issue
below. However, this issue was also fixed by another patch, commit
82d8ba717ccb ("arch_topology: Fix section miss match warning due to
free_raw_capacity()"), in v4.15.
Whereas commit 452562abb5b7 added all the missing __init annotations,
commit 82d8ba717ccb removed only the one from free_raw_capacity().
WARNING: vmlinux.o(.text+0x548f24): Section mismatch in reference
from the function init_cpu_capacity_callback() to the variable
.init.text:$x
The function init_cpu_capacity_callback() references
the variable __init $x.
This is often because init_cpu_capacity_callback lacks a __init
annotation or the annotation of $x is wrong.
Fixes: 82d8ba717ccb ("arch_topology: Fix section miss match warning due to free_raw_capacity()")
Cc: stable <stable(a)vger.kernel.org>
Signed-off-by: Gaku Inami <gaku.inami.xh(a)renesas.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann(a)arm.com>
Acked-by: Sudeep Holla <sudeep.holla(a)arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/base/arch_topology.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 52ec5174bcb1..e7cb0c6ade81 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -169,11 +169,11 @@ bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
}
#ifdef CONFIG_CPU_FREQ
-static cpumask_var_t cpus_to_visit __initdata;
-static void __init parsing_done_workfn(struct work_struct *work);
-static __initdata DECLARE_WORK(parsing_done_work, parsing_done_workfn);
+static cpumask_var_t cpus_to_visit;
+static void parsing_done_workfn(struct work_struct *work);
+static DECLARE_WORK(parsing_done_work, parsing_done_workfn);
-static int __init
+static int
init_cpu_capacity_callback(struct notifier_block *nb,
unsigned long val,
void *data)
@@ -209,7 +209,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
return 0;
}
-static struct notifier_block init_cpu_capacity_notifier __initdata = {
+static struct notifier_block init_cpu_capacity_notifier = {
.notifier_call = init_cpu_capacity_callback,
};
@@ -242,7 +242,7 @@ static int __init register_cpufreq_notifier(void)
}
core_initcall(register_cpufreq_notifier);
-static void __init parsing_done_workfn(struct work_struct *work)
+static void parsing_done_workfn(struct work_struct *work)
{
cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
CPUFREQ_POLICY_NOTIFIER);
--
2.16.2