I'm announcing the release of the 4.4.135 kernel.
This is a quick release, reverting one commit in the 4.4.134 networking
stack that should not have gotten backported. If 4.4.134 works for
you, wonderful, but you really should update to be sure...
The updated 4.4.y git tree can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-4.4.y
and can be browsed at the normal kernel.org git web browser:
http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=summary
thanks,
greg k-h
------------
Makefile | 2 +-
net/ipv4/ip_vti.c | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
Greg Kroah-Hartman (2):
Revert "vti4: Don't override MTU passed on link creation via IFLA_MTU"
Linux 4.4.135
I'm announcing the release of the 3.18.112 kernel.
This is a quick release, reverting one commit in the 3.18.111 networking
stack that should not have gotten backported. If 3.18.111 works for
you, wonderful, but you really should update to be sure...
The updated 3.18.y git tree can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-3.18.y
and can be browsed at the normal kernel.org git web browser:
http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=summary
thanks,
greg k-h
------------
Makefile | 2 +-
net/ipv4/ip_vti.c | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
Greg Kroah-Hartman (2):
Revert "vti4: Don't override MTU passed on link creation via IFLA_MTU"
Linux 3.18.112
ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map. The following preconditions are met at their entry:
- All pte entries for a target pud/pmd address range have been cleared.
- System-wide TLB purges have been performed for a target pud/pmd address
range.
The preconditions ensure that there is no stale TLB entry for the range.
Speculation cannot create TLB entries for the range, since that requires
all levels of page entries, including ptes, to have the P and A bits set
for the associated address. However, speculation can still cache pud/pmd
entries in the paging-structure caches as long as their P-bit is set.
Add a system-wide TLB purge (INVLPG) of a single page after clearing the
pud/pmd entry's P-bit.
SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
INVLPG invalidates all paging-structure caches associated with the
current PCID regardless of the linear addresses to which they correspond.
Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani <toshi.kani(a)hpe.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Ingo Molnar <mingo(a)redhat.com>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Joerg Roedel <joro(a)8bytes.org>
Cc: <stable(a)vger.kernel.org>
---
arch/x86/mm/pgtable.c | 34 ++++++++++++++++++++++++++++------
1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f60fdf411103..7e96594c7e97 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -721,24 +721,42 @@ int pmd_clear_huge(pmd_t *pmd)
* @pud: Pointer to a PUD.
* @addr: Virtual address associated with pud.
*
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
* Return: 1 if clearing the entry succeeded. 0 otherwise.
*/
int pud_free_pmd_page(pud_t *pud, unsigned long addr)
{
- pmd_t *pmd;
+ pmd_t *pmd, *pmd_sv;
+ pte_t *pte;
int i;
if (pud_none(*pud))
return 1;
pmd = (pmd_t *)pud_page_vaddr(*pud);
+ pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+ if (!pmd_sv)
+ return 0;
- for (i = 0; i < PTRS_PER_PMD; i++)
- if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
- return 0;
+ for (i = 0; i < PTRS_PER_PMD; i++) {
+ pmd_sv[i] = pmd[i];
+ if (!pmd_none(pmd[i]))
+ pmd_clear(&pmd[i]);
+ }
pud_clear(pud);
+
+ /* INVLPG to clear all paging-structure caches */
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+ for (i = 0; i < PTRS_PER_PMD; i++) {
+ if (!pmd_none(pmd_sv[i])) {
+ pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+ free_page((unsigned long)pte);
+ }
+ }
+
+ free_page((unsigned long)pmd_sv);
free_page((unsigned long)pmd);
return 1;
@@ -749,7 +767,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
* @pmd: Pointer to a PMD.
* @addr: Virtual address associated with pmd.
*
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
* Return: 1 if clearing the entry succeeded. 0 otherwise.
*/
int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -761,6 +779,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
pte = (pte_t *)pmd_page_vaddr(*pmd);
pmd_clear(pmd);
+
+ /* INVLPG to clear all paging-structure caches */
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
free_page((unsigned long)pte);
return 1;
Hi,
Building kernel version 4.14.45 fails for me with:
DEBUG: builtin-record.c: In function '__cmd_record':
DEBUG: builtin-record.c:935:6: error: 'data' undeclared (first use in
this function)
DEBUG: if (data->is_pipe && rec->evlist->nr_entries == 1)
DEBUG: ^
DEBUG: builtin-record.c:935:6: note: each undeclared identifier is
reported only once for each function it appears in
DEBUG: CC util/evsel_fprintf.o
DEBUG: CC builtin-top.o
DEBUG: CC util/find_bit.o
DEBUG: CC util/kallsyms.o
DEBUG: CC builtin-script.o
DEBUG: CC util/levenshtein.o
DEBUG: CC util/llvm-utils.o
DEBUG: mv: cannot stat './.builtin-record.o.tmp': No such file or directory
DEBUG: make[3]: *** [builtin-record.o] Error 1
DEBUG: make[3]: *** Waiting for unfinished jobs....
It could be related to f766148e47d7 ("perf record: Fix crash in pipe mode").
Am I the only one seeing this failure?
Cheers,
Pavlos
The function __builtin_expect returns a long (see the gcc documentation),
and so do the likely and unlikely macros. Unfortunately, when
CONFIG_PROFILE_ANNOTATED_BRANCHES is selected, likely and unlikely expand
to __branch_check__, and __branch_check__ truncates the long to an int.
This unintended truncation may cause bugs in various kernel code (we found
a bug in dm-writecache because of it), so it's better to fix
__branch_check__ to return long.
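For illustration, here is a minimal userspace sketch of the kind of
truncation described above. The check_long/check_int macro names and the
test value are made up for this example, and this is not the kernel macro
itself. On a typical 64-bit build, the int variant drops the upper 32 bits
of the condition:

    #include <stdio.h>

    /*
     * Hypothetical stand-ins for the two variants discussed above
     * (these are not the kernel macros): check_long keeps the long
     * returned by __builtin_expect(), while check_int stores it in an
     * int first, mirroring the int temporary in the pre-fix
     * __branch_check__.
     */
    #define check_long(x) (__builtin_expect((x), 1))
    #define check_int(x)  ({ int r = __builtin_expect((x), 1); r; })

    int main(void)
    {
            long v = 1L << 40;  /* nonzero, but its low 32 bits are all zero */

            printf("long result: %ld\n", (long)check_long(v));  /* 1099511627776 */
            printf("int  result: %ld\n", (long)check_int(v));   /* 0: upper bits lost */

            if (!check_int(v))
                    printf("a true condition was evaluated as false\n");

            return 0;
    }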
Signed-off-by: Mikulas Patocka <mpatocka(a)redhat.com>
Cc: stable(a)vger.kernel.org
---
include/linux/compiler.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6/include/linux/compiler.h
===================================================================
--- linux-2.6.orig/include/linux/compiler.h 2018-02-26 20:34:17.000000000 +0100
+++ linux-2.6/include/linux/compiler.h 2018-05-30 14:11:53.000000000 +0200
@@ -21,7 +21,7 @@ void ftrace_likely_update(struct ftrace_
#define unlikely_notrace(x) __builtin_expect(!!(x), 0)
#define __branch_check__(x, expect, is_constant) ({ \
- int ______r; \
+ long ______r; \
static struct ftrace_likely_data \
__attribute__((__aligned__(4))) \
__attribute__((section("_ftrace_annotated_branch"))) \
KASAN uses different routines to map the shadow for hot-added memory and
for memory obtained during boot. An attempt to offline memory that was
onlined by the normal boot process leads to this:
Trying to vfree() nonexistent vm area (000000005d3b34b9)
WARNING: CPU: 2 PID: 13215 at mm/vmalloc.c:1525 __vunmap+0x147/0x190
Call Trace:
kasan_mem_notifier+0xad/0xb9
notifier_call_chain+0x166/0x260
__blocking_notifier_call_chain+0xdb/0x140
__offline_pages+0x96a/0xb10
memory_subsys_offline+0x76/0xc0
device_offline+0xb8/0x120
store_mem_state+0xfa/0x120
kernfs_fop_write+0x1d5/0x320
__vfs_write+0xd4/0x530
vfs_write+0x105/0x340
SyS_write+0xb0/0x140
Obviously we can't call vfree() to free memory that wasn't allocated via
vmalloc(). Use find_vm_area() to check whether it is safe to call vfree().
Unfortunately it's a bit tricky to properly unmap and free shadow allocated
during boot, so we'll have to keep it. If the memory comes online again,
that shadow will be reused.
Fixes: fa69b5989bb0 ("mm/kasan: add support for memory hotplug")
Reported-by: Paul Menzel <pmenzel+linux-kasan-dev(a)molgen.mpg.de>
Signed-off-by: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: <stable(a)vger.kernel.org>
---
mm/kasan/kasan.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 55 insertions(+), 2 deletions(-)
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e13d911251e7..0d9d9d268f32 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -791,6 +791,41 @@ DEFINE_ASAN_SET_SHADOW(f5);
DEFINE_ASAN_SET_SHADOW(f8);
#ifdef CONFIG_MEMORY_HOTPLUG
+static bool shadow_mapped(unsigned long addr)
+{
+ pgd_t *pgd = pgd_offset_k(addr);
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ if (pgd_none(*pgd))
+ return false;
+ p4d = p4d_offset(pgd, addr);
+ if (p4d_none(*p4d))
+ return false;
+ pud = pud_offset(p4d, addr);
+ if (pud_none(*pud))
+ return false;
+
+ /*
+ * We can't use pud_large() or pud_huge(): the first one is
+ * arch-specific and the last one depends on HUGETLB_PAGE.
+ * So let's abuse pud_bad(): if the pud is bad, it has to be
+ * because it's huge.
+ */
+ if (pud_bad(*pud))
+ return true;
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd))
+ return false;
+
+ if (pmd_bad(*pmd))
+ return true;
+ pte = pte_offset_kernel(pmd, addr);
+ return !pte_none(*pte);
+}
+
static int __meminit kasan_mem_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
@@ -812,6 +847,14 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
case MEM_GOING_ONLINE: {
void *ret;
+ /*
+ * If the shadow is already mapped, it must have been mapped during
+ * boot. This can happen when we are onlining previously offlined
+ * memory.
+ */
+ if (shadow_mapped(shadow_start))
+ return NOTIFY_OK;
+
ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start,
shadow_end, GFP_KERNEL,
PAGE_KERNEL, VM_NO_GUARD,
@@ -823,8 +866,18 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb,
kmemleak_ignore(ret);
return NOTIFY_OK;
}
- case MEM_OFFLINE:
- vfree((void *)shadow_start);
+ case MEM_OFFLINE: {
+ struct vm_struct *vm;
+
+ /*
+ * Only hot-added memory has a vm_area. Freeing shadow
+ * mapped during boot would be tricky, so we'll just
+ * have to keep it.
+ */
+ vm = find_vm_area((void *)shadow_start);
+ if (vm)
+ vfree((void *)shadow_start);
+ }
}
return NOTIFY_OK;
--
2.13.6
I'm announcing the release of the 4.14.46 kernel.
This release fixes a problem where perf would not build properly in the
4.14.45 kernel release. If you do not use perf, there is no need to
upgrade at this time.
Many thanks to Pavlos Parissis for finding the problem so quickly and
reporting it.
The updated 4.14.y git tree can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-4.14.y
and can be browsed at the normal kernel.org git web browser:
http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=summary
thanks,
greg k-h
------------
Makefile | 2
tools/arch/arm/include/uapi/asm/kvm.h | 6
tools/arch/arm64/include/uapi/asm/kvm.h | 6
tools/arch/powerpc/include/uapi/asm/kvm.h | 2
tools/arch/s390/include/uapi/asm/kvm.h | 5
tools/arch/x86/include/asm/cpufeatures.h | 570 +++++++++++++------------
tools/arch/x86/include/asm/disabled-features.h | 11
tools/arch/x86/include/asm/required-features.h | 3
tools/include/uapi/linux/kvm.h | 1
tools/perf/.gitignore | 1
tools/perf/builtin-record.c | 9
tools/perf/perf.h | 1
tools/perf/util/record.c | 8
13 files changed, 340 insertions(+), 285 deletions(-)
Greg Kroah-Hartman (3):
tools: sync up .h files with the respective arch and uapi .h files
Revert "perf record: Fix crash in pipe mode"
Linux 4.14.46
Ravi Bangoria (1):
perf tools: Add trace/beauty/generated/ into .gitignore
Commit 944e0fc51a89c9827b98813d65dc083274777c7f ("x86/amd: don't set
X86_BUG_SYSRET_SS_ATTRS when running under Xen") breaks Xen pv-domains
on AMD processors, as a prerequisite patch from upstream wasn't added
to 4.9.
Fix that by adding the prerequisite setting of X86_FEATURE_XENPV to the
Xen pv early boot path.
Cc: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Signed-off-by: Juergen Gross <jgross(a)suse.com>
---
arch/x86/xen/enlighten.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 081437b5f381..674656cdb68c 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1632,6 +1632,9 @@ asmlinkage __visible void __init xen_start_kernel(void)
xen_init_irq_ops();
xen_init_cpuid_mask();
+ /* Needed for init_amd(). */
+ setup_force_cpu_cap(X86_FEATURE_XENPV);
+
#ifdef CONFIG_X86_LOCAL_APIC
/*
* set up the basic apic ops.
--
2.13.6