This patchset adds audit support on arm64.
The implementation closely follows other architectures, so I think little
explanation is needed.
I verified these patches with some commands on both a 64-bit rootfs
and a 32-bit rootfs (little-endian only):
# auditctl -a exit,always -S openat -F path=/etc/inittab
# auditctl -a exit,always -F dir=/tmp -F perm=rw
# auditctl -a task,always
# autrace /bin/ls
What else?
(Thanks to Clayton for his cross-compiling patch)
I'd like to discuss the following issues:
* AUDIT_ARCH_*
Why do we need to distinguish big-endian and little-endian? [2/4]
(A sketch of the arch selection is below this list.)
* AArch32
We need to add a check to identify the endianness of 32-bit tasks. [3/4]
* Syscall numbers in AArch32
Currently all the definitions are added in unistd32.h under
"#ifdef __AARCH32_AUDITSYSCALL" so that asm-generic/audit_*.h can be used. [3/4]
The "#ifdef" is necessary to avoid a conflict with the 64-bit definitions.
Do we need a more sophisticated way?
* TIF_AUDITSYSCALL
Most architectures, except x86, do not check TIF_AUDITSYSCALL. Why not? [4/4]
* Userspace audit package
There are some missing syscall definitions in lib/aarch64_table.h.
There is no support for AUDIT_ARCH_ARM (i.e. little-endian; armeb is the big-endian variant).
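To make the AUDIT_ARCH_* and AArch32 endianness questions concrete, here is a
minimal sketch (not part of this series) of how the audit arch value could be
picked; aarch32_task_is_be() is a hypothetical helper that would perform the
32-bit endianness check discussed in [3/4]:

#include <linux/audit.h>
#include <linux/compat.h>
#include <linux/types.h>

/* Hypothetical helper, not in the tree: true if the AArch32 task is BE */
extern bool aarch32_task_is_be(void);

static inline int arm64_audit_arch(void)
{
	if (!is_compat_task())
		return AUDIT_ARCH_AARCH64;

	/* compat task: pick the 32-bit ARM arch token by endianness */
	return aarch32_task_is_be() ? AUDIT_ARCH_ARMEB : AUDIT_ARCH_ARM;
}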
AKASHI Takahiro (4):
audit: Enable arm64 support
arm64: Add audit support
arm64: audit: Add AArch32 support
arm64: audit: Add audit hook in ptrace/syscall_trace
arch/arm64/Kconfig | 3 +
arch/arm64/include/asm/audit32.h | 12 ++
arch/arm64/include/asm/ptrace.h | 5 +
arch/arm64/include/asm/syscall.h | 18 ++
arch/arm64/include/asm/thread_info.h | 1 +
arch/arm64/include/asm/unistd32.h | 387 ++++++++++++++++++++++++++++++++++
arch/arm64/kernel/Makefile | 4 +
arch/arm64/kernel/audit.c | 77 +++++++
arch/arm64/kernel/audit32.c | 46 ++++
arch/arm64/kernel/entry.S | 3 +
arch/arm64/kernel/ptrace.c | 12 ++
include/uapi/linux/audit.h | 2 +
init/Kconfig | 2 +-
13 files changed, 571 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/include/asm/audit32.h
create mode 100644 arch/arm64/kernel/audit.c
create mode 100644 arch/arm64/kernel/audit32.c
--
1.7.9.5
Hi linaro-kernel,
This is a repost of the patches to change forced up-migration into
an idle-pull migration, which were reverted from 14.04.
I've looked into the bug warning we were seeing and sorted it out. The
warning came from the scheduler going through the idle-balance path during
CPU hotplug, combined with a testing gap on our side which meant the
hotplug tests were not being run. The fix is to handle this case correctly.
If you have received the exit criteria report, you'll see that there is an
unidentified performance drop at the moment which I'm investigating, so
please don't pull these just yet.
All review comments gratefully received.
In switch_hrtimer_base() we call hrtimer_check_target(), which guarantees
this:
/*
* With HIGHRES=y we do not migrate the timer when it is expiring
* before the next event on the target cpu because we cannot reprogram
* the target cpu hardware and we would cause it to fire late.
*
* Called with cpu_base->lock of target cpu held.
*/
But switch_hrtimer_base() is only called from one place,
__hrtimer_start_range_ns(), and at that point (where we call
switch_hrtimer_base()) the expiration time is not yet known, because we only
set it later via hrtimer_set_expires_range_ns().
To fix this, compute the updated expiry time before calling
switch_hrtimer_base().
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
Rebased over: v3.15-rc5
kernel/hrtimer.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index 6b715c0..e0501fe 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -990,11 +990,8 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
/* Remove an active timer from the queue: */
ret = remove_hrtimer(timer, base);
- /* Switch the timer base, if necessary: */
- new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
-
if (mode & HRTIMER_MODE_REL) {
- tim = ktime_add_safe(tim, new_base->get_time());
+ tim = ktime_add_safe(tim, base->get_time());
/*
* CONFIG_TIME_LOW_RES is a temporary way for architectures
* to signal that they simply return xtime in
@@ -1009,6 +1006,9 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+ /* Switch the timer base, if necessary: */
+ new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
+
timer_stats_hrtimer_set_start_info(timer);
leftmost = enqueue_hrtimer(timer, new_base);
--
2.0.0.rc2
Support for arch_irq_work_raise() was missing from
arm64 (a prerequisite for FULL_NOHZ).
This patch is based on the arm32 patch ARM 7872/1,
which ports cleanly:
commit bf18525fd793101df42a1344ecc48b49b62e48c9
Author: Stephen Boyd <sboyd(a)codeaurora.org>
Date: Tue Oct 29 20:32:56 2013 +0100
ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
By default, IRQ work is run from the tick interrupt (see
irq_work_run() in update_process_times()). When we're in full
NOHZ mode, restarting the tick requires the use of IRQ work and
if the only place we run IRQ work is in the tick interrupt we
have an unbreakable cycle. Implement arch_irq_work_raise() via
self IPIs to break this cycle and get the tick started again.
Note that we implement this via IPIs which are only available on
SMP builds. This shouldn't be a problem because full NOHZ is only
supported on SMP builds anyway.
Signed-off-by: Stephen Boyd <sboyd(a)codeaurora.org>
Reviewed-by: Kevin Hilman <khilman(a)linaro.org>
Cc: Frederic Weisbecker <fweisbec(a)gmail.com>
Signed-off-by: Russell King <rmk+kernel(a)arm.linux.org.uk>
Signed-off-by: Larry Bassel <larry.bassel(a)linaro.org>
Reviewed-by: Kevin Hilman <khilman(a)linaro.org>
---
arch/arm64/include/asm/hardirq.h | 2 +-
arch/arm64/kernel/smp.c | 18 ++++++++++++++++++
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
index ae4801d..0be6782 100644
--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -20,7 +20,7 @@
#include <linux/threads.h>
#include <asm/irq.h>
-#define NR_IPI 5
+#define NR_IPI 6
typedef struct {
unsigned int __softirq_pending;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index f0a141d..20fd074 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -35,6 +35,7 @@
#include <linux/clockchips.h>
#include <linux/completion.h>
#include <linux/of.h>
+#include <linux/irq_work.h>
#include <asm/atomic.h>
#include <asm/cacheflush.h>
@@ -62,6 +63,7 @@ enum ipi_msg_type {
IPI_CALL_FUNC_SINGLE,
IPI_CPU_STOP,
IPI_TIMER,
+ IPI_IRQ_WORK,
};
/*
@@ -455,6 +457,13 @@ void arch_send_call_function_single_ipi(int cpu)
smp_cross_call(cpumask_of(cpu), IPI_CALL_FUNC_SINGLE);
}
+#ifdef CONFIG_IRQ_WORK
+void arch_irq_work_raise(void)
+{
+ smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
+}
+#endif
+
static const char *ipi_types[NR_IPI] = {
#define S(x,s) [x - IPI_RESCHEDULE] = s
S(IPI_RESCHEDULE, "Rescheduling interrupts"),
@@ -462,6 +471,7 @@ static const char *ipi_types[NR_IPI] = {
S(IPI_CALL_FUNC_SINGLE, "Single function call interrupts"),
S(IPI_CPU_STOP, "CPU stop interrupts"),
S(IPI_TIMER, "Timer broadcast interrupts"),
+ S(IPI_IRQ_WORK, "IRQ work interrupts"),
};
void show_ipi_list(struct seq_file *p, int prec)
@@ -554,6 +564,14 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
break;
#endif
+#ifdef CONFIG_IRQ_WORK
+ case IPI_IRQ_WORK:
+ irq_enter();
+ irq_work_run();
+ irq_exit();
+ break;
+#endif
+
default:
pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr);
break;
--
1.8.3.2
From: Mark Brown <broonie(a)linaro.org>
arm64 defines TIF_POLLING_NRFLAG but not the corresponding shifted version.
This breaks the build with next-20140509 due to fd99f91aa007b (sched/idle:
Avoid spurious wakeup IPIs), which added a reference to the shifted version
that most other arches already provide.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
arch/arm64/include/asm/thread_info.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 720e70b66ffd..40ff87437734 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -113,6 +113,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_32BIT (1 << TIF_32BIT)
+#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
_TIF_NOTIFY_RESUME)
--
2.0.0.rc2
________________________________________
From: Mark Brown [broonie(a)linaro.org]
Sent: 8 May 2014 16:41
To: Panshilin (Peter)
Cc: Alex Shi; Guodong Xu; Haojian Zhuang; linaro-kernel(a)lists.linaro.org
Subject: Re: help: an issue about arm64 dma coherent Re: Is this patch included in LSK April release
On Wed, May 07, 2014 at 09:38:57AM +0000, Panshilin (Peter) wrote:
> We use arm64 dma_alloc_coherent() to get what should be a non-cacheable
> buffer, but when we use the buffer as DMA memory for a device, the data
> the CPU writes to the buffer is not coherent in DDR, so the device cannot
> read proper data. We therefore find that the current LSK version's DMA
> allocation is malfunctioning.
> We have to flush the entire cache after the CPU writes and then it is OK.
> This shows that dma_alloc_coherent() does not work properly.
Yes, this is the case. The code in mainline didn't work at the time the
last release was made, and since only models were available for testing,
this code could not be verified in LSK at that time. This should be
resolved in the 14.05 release.
From: Mark Brown <broonie(a)linaro.org>
Since newer DT bindings reference include/dt-bindings, we need to make
that directory available in order to build DTs that use them. Upstream has
a number of reworkings which are more invasive but more featureful; this
is just a minimal fix.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
scripts/Makefile.lib | 1 +
1 file changed, 1 insertion(+)
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index f97869f1f09b..c4b37f6b5478 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -152,6 +152,7 @@ ld_flags = $(LDFLAGS) $(ldflags-y)
dtc_cpp_flags = -Wp,-MD,$(depfile).pre.tmp -nostdinc \
-I$(srctree)/arch/$(SRCARCH)/boot/dts \
-I$(srctree)/arch/$(SRCARCH)/boot/dts/include \
+ -I$(srctree)/include \
-undef -D__DTS__
# Finds the multi-part object the current object will be linked into
--
2.0.0.rc2
Implement and enable context tracking for arm64 (which is
a prerequisite for FULL_NOHZ support). This patchset
builds upon earlier work by Kevin Hilman and is based on 3.15-rc2.
Larry Bassel (2):
arm64: adjust el0_sync so that a function can be called
arm64: enable context tracking
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/thread_info.h | 1 +
arch/arm64/kernel/entry.S | 36 +++++++++++++++++++++++++++++++-----
3 files changed, 33 insertions(+), 5 deletions(-)
--
1.8.3.2
Dear All:
We use arm64 dma_alloc_coherent() to get what should be a non-cacheable buffer, but when we use the buffer as DMA memory for a device, the data the CPU writes to the buffer is not coherent in DDR, so the device cannot read proper data. We therefore find that the current LSK version's DMA allocation is malfunctioning.
We have to flush the entire cache after the CPU writes and then it is OK. This shows that dma_alloc_coherent() does not work properly.
thanks
Peter
________________________________________
From: Alex Shi [alex.shi(a)linaro.org]
Sent: 7 May 2014 12:06
To: Panshilin (Peter); Mark Brown; Guodong Xu; Haojian Zhuang
Subject: Re: Re: Re: Is this patch included in LSK April release
CC'ing Guodong.
Peter,
I am not an MM expert, so could you give a bit more detailed information
about your concern?
I did not find any misuse of flush_cache_all() in the arm64 code.
And AFAIK, if you do not have a *hardware* cache coherency unit for DMA
access (do you?), the kernel needs to flush (invalidate) the cache lines
involved, but it does not need to flush everything; flushing the range of
the addresses involved is fine. Did you try this?
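To illustrate what I mean by flushing only the involved range, here is a
minimal sketch using the streaming DMA API instead of an open-coded
flush_cache_all(); dev, buf and BUF_SIZE stand in for your driver's device
and buffer, and none of this should be needed once dma_alloc_coherent()
behaves as expected:

#include <linux/device.h>
#include <linux/dma-mapping.h>

#define BUF_SIZE 4096

static int send_buffer_to_device(struct device *dev, void *buf)
{
	dma_addr_t dma;

	/*
	 * The CPU has just written buf: mapping it for DMA_TO_DEVICE cleans
	 * only the cache lines covering buf, not the whole cache.
	 */
	dma = dma_map_single(dev, buf, BUF_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... start the transfer using the 'dma' bus address ... */

	dma_unmap_single(dev, dma, BUF_SIZE, DMA_TO_DEVICE);
	return 0;
}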
On 05/06/2014 05:10 PM, Panshilin (Peter) wrote:
> We verified the patch based on the Linaro LSK April release and it didn't work. Now we have to flush the entire cache before starting a DMA transfer, which is obviously not acceptable for the DMA coherent functionality in LSK. Please solve this issue as soon as you can; it is urgent. Thanks.
> ________________________________________
> From: Alex Shi [alex.shi(a)linaro.org]
> Sent: 6 May 2014 10:08
> To: Panshilin (Peter); Mark Brown
> Subject: Re: Re: Is this patch included in LSK April release
>
>> On 05/05/2014 03:18 PM, Panshilin (Peter) wrote:
>>> de2db74 arm64: Make DMA coherent and strongly ordered mappings not
>
> Peter, did you try the patch in your hardware? Does it work?
>
> --
> Thanks
> Alex
>
--
Thanks
Alex
Patches adding support for hibernation on ARM
- ARM hibernation / suspend-to-disk
- Change soft_restart to use non-tracing raw_local_irq_disable
The patches are based on the v3.14-rc5 tag. Hibernation was verified on a
BeagleBone Black, on a branch based on 3.13 merged with the initial OMAP
support from Russ Dill, which can be found here (includes the v1 patchset):
http://git.linaro.org/git-ro/people/sebastian.capella/linux.git hibernation_3.13_russMerge
[PATCH v7 1/2] ARM: avoid tracers in soft_restart
arch/arm/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Use raw_local_irq_disable in place of local_irq_disable to avoid
infinite abort recursion while tracing. (unchanged since v3)
[PATCH v7 2/2] ARM hibernation / suspend-to-disk
arch/arm/include/asm/memory.h | 1 +
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/hibernate.c | 108 +++++++++++++++++++++++++++++++++++++++++
arch/arm/mm/Kconfig | 5 ++
include/linux/suspend.h | 2 +
5 files changed, 117 insertions(+)
Adds support for ARM based hibernation
Additional notes:
-----------------
This patch adds two checkpatch warnings. These follow the behaviour of
existing hibernation implementations on other platforms.
WARNING: externs should be avoided in .c files
#120: FILE: arch/arm/kernel/hibernate.c:24:
+extern const void __nosave_begin, __nosave_end;
This extern picks up the linker's nosave region definitions, which are only
used in hibernate. It follows the same extern line used by mips, powerpc,
s390, sh, sparc, x86 and unicore32.
WARNING: externs should be avoided in .c files
#200: FILE: arch/arm/kernel/hibernate.c:104:
+ extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
This extern is used within arch/arm/ by hibernate, process and the bL_switcher.
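For illustration only (this is not a hunk from the patch): the way the quoted
call_with_stack() helper is typically used, handing the restore path a private
stack whose top stays 8-byte aligned, matching the v7 change noted below. The
variable and function names here are just placeholders.

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>

extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);

static u64 resume_stack[PAGE_SIZE / sizeof(u64)] __nosavedata;

static void arch_restore_image(void *unused)
{
	/* copy the saved pages back over the old kernel, then restart it */
}

int swsusp_arch_resume(void)
{
	/* run the copy loop on a stack that the copy cannot overwrite */
	call_with_stack(arch_restore_image, NULL,
			resume_stack + ARRAY_SIZE(resume_stack));
	return 0;
}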
Changes in v7:
--------------
* remove use of RELOC_HIDE macro
* remove unused #includes
* fixup comment for arch_restore_image
* ensure alignment of resume stack on 8 byte boundary
Changes in v6:
--------------
* Simplify static variable names
Changes in v5:
--------------
* Fixed checkpatch warning on trailing whitespace
Changes in v4:
--------------
* updated comment for soft_restart with review feedback
* dropped freeze_processes patch which was queued separately
to 3.14 by Rafael Wysocki:
https://lkml.org/lkml/2014/2/25/683
Changes in v3:
--------------
* added comment to use of soft_restart
* drop irq disable soft_restart patch
* add patch to avoid tracers in soft_restart by using raw_local_irq_*
Changes in v2:
--------------
* Removed unneeded flush_thread, use of __naked and cpu_init.
* dropped Cyril Chemparathy <cyril(a)ti.com> from Cc: list as
emails are bouncing.
Thanks,
Sebastian Capella
The clocksource core uses add_timer_on() to run clocksource_watchdog() on all
CPUs one by one. But when a CPU is brought down, the clocksource core doesn't
remove this timer from the dying CPU, and in that case the timer core emits the
warning below. (It appears only with unmerged code; however, with the current
code the timer core migrates a pinned timer to other CPUs instead, which is
also wrong: http://www.gossamer-threads.com/lists/linux/kernel/1898117)
migrate_timer_list: can't migrate pinned timer: ffffffff81f06a60,
timer->function: ffffffff810d7010,deactivating it Modules linked in:
CPU: 0 PID: 1932 Comm: 01-cpu-hotplug Not tainted 3.14.0-rc1-00088-gab3c4fd #4
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
0000000000000009 ffff88001d407c38 ffffffff817237bd ffff88001d407c80
ffff88001d407c70 ffffffff8106a1dd 0000000000000010 ffffffff81f06a60
ffff88001e04d040 ffffffff81e3d4c0 ffff88001e04d030 ffff88001d407cd0
Call Trace:
[<ffffffff817237bd>] dump_stack+0x4d/0x66
[<ffffffff8106a1dd>] warn_slowpath_common+0x7d/0xa0
[<ffffffff8106a24c>] warn_slowpath_fmt+0x4c/0x50
[<ffffffff810761c3>] ? __internal_add_timer+0x113/0x130
[<ffffffff810d7010>] ? clocksource_watchdog_kthread+0x40/0x40
[<ffffffff8107753b>] migrate_timer_list+0xdb/0xf0
[<ffffffff810782dc>] timer_cpu_notify+0xfc/0x1f0
[<ffffffff8173046c>] notifier_call_chain+0x4c/0x70
[<ffffffff8109340e>] __raw_notifier_call_chain+0xe/0x10
[<ffffffff8106a3f3>] cpu_notify+0x23/0x50
[<ffffffff8106a44e>] cpu_notify_nofail+0xe/0x20
[<ffffffff81712a5d>] _cpu_down+0x1ad/0x2e0
[<ffffffff81712bc4>] cpu_down+0x34/0x50
[<ffffffff813fec54>] cpu_subsys_offline+0x14/0x20
[<ffffffff813f9f65>] device_offline+0x95/0xc0
[<ffffffff813fa060>] online_store+0x40/0x90
[<ffffffff813f75d8>] dev_attr_store+0x18/0x30
[<ffffffff8123309d>] sysfs_kf_write+0x3d/0x50
This patch tries to fix this by registering a CPU notifier from the clocksource
core, but only while the clocksource watchdog is running. If the CPU_DEAD
notification finds that the dying CPU is the one the watchdog timer is queued
on, the timer is removed from that CPU and queued on the next one.
Reported-and-tested-by: Jet Chen <jet.chen(a)intel.com>
Reported-by: Fengguang Wu <fengguang.wu(a)intel.com>
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
V1->V2:
- Moved 'static int timer_cpu' within #ifdef CONFIG_CLOCKSOURCE_WATCHDOG/#endif
- Replaced spin_lock with spin_lock_irqsave in clocksource_cpu_notify(), as Jet
  Chen reported a bug with the former.
- Tested again by Jet Chen (thanks again :))
kernel/time/clocksource.c | 65 +++++++++++++++++++++++++++++++++++++++--------
1 file changed, 54 insertions(+), 11 deletions(-)
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index ba3e502..d288f1f 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -23,10 +23,12 @@
* o Allow clocksource drivers to be unregistered
*/
+#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/clocksource.h>
#include <linux/init.h>
#include <linux/module.h>
+#include <linux/notifier.h>
#include <linux/sched.h> /* for spin_unlock_irq() using preempt_count() m68k */
#include <linux/tick.h>
#include <linux/kthread.h>
@@ -180,6 +182,9 @@ static char override_name[CS_NAME_LEN];
static int finished_booting;
#ifdef CONFIG_CLOCKSOURCE_WATCHDOG
+/* Tracks current CPU to queue watchdog timer on */
+static int timer_cpu;
+
static void clocksource_watchdog_work(struct work_struct *work);
static void clocksource_select(void);
@@ -246,12 +251,25 @@ void clocksource_mark_unstable(struct clocksource *cs)
spin_unlock_irqrestore(&watchdog_lock, flags);
}
+static void queue_timer_on_next_cpu(void)
+{
+ /*
+ * Cycle through CPUs to check if the CPUs stay synchronized to each
+ * other.
+ */
+ timer_cpu = cpumask_next(timer_cpu, cpu_online_mask);
+ if (timer_cpu >= nr_cpu_ids)
+ timer_cpu = cpumask_first(cpu_online_mask);
+ watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
+ add_timer_on(&watchdog_timer, timer_cpu);
+}
+
static void clocksource_watchdog(unsigned long data)
{
struct clocksource *cs;
cycle_t csnow, wdnow;
int64_t wd_nsec, cs_nsec;
- int next_cpu, reset_pending;
+ int reset_pending;
spin_lock(&watchdog_lock);
if (!watchdog_running)
@@ -336,27 +354,51 @@ static void clocksource_watchdog(unsigned long data)
if (reset_pending)
atomic_dec(&watchdog_reset_pending);
- /*
- * Cycle through CPUs to check if the CPUs stay synchronized
- * to each other.
- */
- next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
- if (next_cpu >= nr_cpu_ids)
- next_cpu = cpumask_first(cpu_online_mask);
- watchdog_timer.expires += WATCHDOG_INTERVAL;
- add_timer_on(&watchdog_timer, next_cpu);
+ queue_timer_on_next_cpu();
out:
spin_unlock(&watchdog_lock);
}
+static int clocksource_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ long cpu = (long)hcpu;
+ unsigned long flags;
+
+ spin_lock_irqsave(&watchdog_lock, flags);
+ if (!watchdog_running)
+ goto notify_out;
+
+ switch (action) {
+ case CPU_DEAD:
+ case CPU_DEAD_FROZEN:
+ if (cpu != timer_cpu)
+ break;
+ del_timer(&watchdog_timer);
+ queue_timer_on_next_cpu();
+ break;
+ }
+
+notify_out:
+ spin_unlock_irqrestore(&watchdog_lock, flags);
+ return NOTIFY_OK;
+}
+
+static struct notifier_block clocksource_nb = {
+ .notifier_call = clocksource_cpu_notify,
+ .priority = 1,
+};
+
static inline void clocksource_start_watchdog(void)
{
if (watchdog_running || !watchdog || list_empty(&watchdog_list))
return;
+ timer_cpu = cpumask_first(cpu_online_mask);
+ register_cpu_notifier(&clocksource_nb);
init_timer(&watchdog_timer);
watchdog_timer.function = clocksource_watchdog;
watchdog_timer.expires = jiffies + WATCHDOG_INTERVAL;
- add_timer_on(&watchdog_timer, cpumask_first(cpu_online_mask));
+ add_timer_on(&watchdog_timer, timer_cpu);
watchdog_running = 1;
}
@@ -365,6 +407,7 @@ static inline void clocksource_stop_watchdog(void)
if (!watchdog_running || (watchdog && !list_empty(&watchdog_list)))
return;
del_timer(&watchdog_timer);
+ unregister_cpu_notifier(&clocksource_nb);
watchdog_running = 0;
}
--
1.7.12.rc2.18.g61b472e
Hi Mark, Alex
Seems we finally have consensus on the default HMP task packing config,
so can you pull this change into LSK? Thanks...
The following changes since commit db3dba6818796b90053d5b1bc9f15837acdc9b9c:
Revert "hmp: sched: Clean up hmp_up_threshold checks into a utility fn" (2014-04-08 16:43:25 +0100)
are available in the git repository at:
git://git.linaro.org/arm/big.LITTLE/mp.git for-lsk
for you to fetch changes up to 1ade57e54ea2257ccf753dbd54144769439c3c70:
sched: hmp: Change small task packing defaults for all platforms (2014-05-07 11:34:00 +0100)
----------------------------------------------------------------
Chris Redpath (1):
sched: hmp: Change small task packing defaults for all platforms
kernel/sched/fair.c | 34 ++++++++++++++++++++++++----------
1 file changed, 24 insertions(+), 10 deletions(-)
Add libdw DWARF post-unwind support, which is part of the
elfutils-devel/libdw-dev package from version 0.158 onwards.
Also include the test suite for DWARF unwinding, by adding the
arch-specific test code and the perf_regs_load function.
This series depends on the following kernel patch series:
- AARCH64 unwinding support [1]. Already mainlined.
- ARM libdw integration [2],
and on the changes from the branch for:
- libdw AARCH64 unwinding support [3].
[1] http://www.spinics.net/lists/arm-kernel/msg304483.html
[2] https://lkml.org/lkml/2014/5/6/366
[3] https://git.fedorahosted.org/cgit/elfutils.git/log/?h=mjw/aarch64-unwind
ToDo: investigate the libdw unwinding problem with compat binaries (i.e.
ARMv7 binaries running on ARMv8). Since this functionality works ok with
libunwind, the problem should be in libdw compat support [3].
Jean Pihet (3):
perf tests: Introduce perf_regs_load function on ARM64
perf tests: Add dwarf unwind test on ARM64
perf tools: Add libdw DWARF post unwind support for ARM64
tools/perf/Makefile.perf | 2 +-
tools/perf/arch/arm64/Makefile | 7 ++++
tools/perf/arch/arm64/include/perf_regs.h | 5 +++
tools/perf/arch/arm64/tests/dwarf-unwind.c | 59 ++++++++++++++++++++++++++++++
tools/perf/arch/arm64/tests/regs_load.S | 39 ++++++++++++++++++++
tools/perf/arch/arm64/util/unwind-libdw.c | 53 +++++++++++++++++++++++++++
tools/perf/tests/builtin-test.c | 3 +-
tools/perf/tests/tests.h | 3 +-
8 files changed, 168 insertions(+), 3 deletions(-)
create mode 100644 tools/perf/arch/arm64/tests/dwarf-unwind.c
create mode 100644 tools/perf/arch/arm64/tests/regs_load.S
create mode 100644 tools/perf/arch/arm64/util/unwind-libdw.c
---
Rebased on the latest jolsa/perf/core
--
1.7.11.7
Hi all, this is a proposal to change the default small-task packing
for TC2 to 'enabled' rather than just present (aligning it with all
other platforms), and to change the default packing_limit to 650
on all platforms to match the existing TC2 default.
By default this should restrict packing so that it does not cause the
frequency of the little CPUs to go above 80%. Of course, the presence
of bigger tasks can still push the frequency up.
The section of the makefile that determines the TEXT_OFFSET is sorted
by address so that, in multi-arch kernel builds, the architecture with the
most stringent requirements for the kernel base address gets to define
TEXT_OFFSET. The comment should reflect that.
Signed-off-by: Daniel Thompson <daniel.thompson(a)linaro.org>
---
arch/arm/Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 41c1931..6857fec 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -127,6 +127,9 @@ CHECKFLAGS += -D__arm__
#Default value
head-y := arch/arm/kernel/head$(MMUEXT).o
+
+# Text offset. This list is sorted numerically by address in order to
+# provide a means to avoid/resolve conflicts in multi-arch kernels.
textofs-y := 0x00008000
textofs-$(CONFIG_ARCH_CLPS711X) := 0x00028000
# We don't want the htc bootloader to corrupt kernel during resume
--
1.9.0
Add libdw DWARF post-unwind support, which is part of the
elfutils-devel/libdw-dev package from version 0.158 onwards.
Also include the test suite for DWARF unwinding, by adding the
arch-specific test code and the perf_regs_load function.
This series depends on the following kernel patch series:
- AARCH64 unwinding support [1],
- ARM libdw integration [2],
and on the changes from the branch for:
- libdw AARCH64 unwinding support [3].
[1] http://www.spinics.net/lists/arm-kernel/msg304483.html
[2] http://www.spinics.net/lists/arm-kernel/msg312423.html
[3] https://git.fedorahosted.org/cgit/elfutils.git/log/?h=mjw/aarch64-unwind
Jean Pihet (3):
perf tests: Introduce perf_regs_load function on ARM64
perf tests: Add dwarf unwind test on ARM64
perf tools: Add libdw DWARF post unwind support for ARM64
tools/perf/Makefile.perf | 2 +-
tools/perf/arch/arm64/Makefile | 7 ++++
tools/perf/arch/arm64/include/perf_regs.h | 5 +++
tools/perf/arch/arm64/tests/dwarf-unwind.c | 59 ++++++++++++++++++++++++++++++
tools/perf/arch/arm64/tests/regs_load.S | 39 ++++++++++++++++++++
tools/perf/arch/arm64/util/unwind-libdw.c | 53 +++++++++++++++++++++++++++
tools/perf/tests/builtin-test.c | 3 +-
tools/perf/tests/tests.h | 3 +-
8 files changed, 168 insertions(+), 3 deletions(-)
create mode 100644 tools/perf/arch/arm64/tests/dwarf-unwind.c
create mode 100644 tools/perf/arch/arm64/tests/regs_load.S
create mode 100644 tools/perf/arch/arm64/util/unwind-libdw.c
---
- Rebased on latest acme/perf/core git tree,
- Tested on the ARMv8 Foundation emulator.
--
1.7.11.7
Implement and enable context tracking for arm64 (which is
a prerequisite for FULL_NOHZ support). This patchset
builds upon earlier work by Kevin Hilman and is based on 3.15-rc2.
Kevin Hilman (1):
arm64: add support for context tracking
Larry Bassel (2):
arm64: adjust el0_sync so that a function can be called
arm64: enable context tracking
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/thread_info.h | 1 +
arch/arm64/kernel/entry.S | 33 ++++++++++++++++++++++++++++-----
3 files changed, 30 insertions(+), 5 deletions(-)
--
1.8.3.2
Adding Catalin..
On 5 May 2014 11:11, Panshilin (Peter) <peter.panshilin(a)hisilicon.com> wrote:
>
>
> ________________________________________
> From: linaro-kernel-bounces(a)lists.linaro.org [linaro-kernel-bounces(a)lists.linaro.org] on behalf of Rajan Srivastava [rajan_srivastava(a)hotmail.com]
> Sent: 30 April 2014 15:44
> To: linaro-kernel(a)lists.linaro.org
> Subject: ARMv8: Allowing user space to perform cache-maintenance
>
> ARMv8 allows AArch64-EL0 to execute cache maintenance instructions (eg, by setting SCTLR_EL1.UCI). It looks like the current ARMv8 kernel doesn't support the above feature.
>
> Is there any plan in Linux for allowing AArch64_EL0 to perform cache-line operations?
>
> Regards,
> Rajan
>
>
> _______________________________________________
> linaro-kernel mailing list
> linaro-kernel(a)lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/linaro-kernel
________________________________________
From: linaro-kernel-bounces(a)lists.linaro.org [linaro-kernel-bounces(a)lists.linaro.org] on behalf of Rajan Srivastava [rajan_srivastava(a)hotmail.com]
Sent: 30 April 2014 15:44
To: linaro-kernel(a)lists.linaro.org
Subject: ARMv8: Allowing user space to perform cache-maintenance
ARMv8 allows AArch64 EL0 to execute cache maintenance instructions (e.g. by setting SCTLR_EL1.UCI). It looks like the current ARMv8 kernel doesn't support this feature.
Is there any plan in Linux to allow AArch64 EL0 to perform cache-line operations?
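For context, the sort of user-space sequence this would enable is sketched
below. This is illustrative only: it assumes the kernel has set SCTLR_EL1.UCI
and hard-codes a 64-byte cache line, whereas real code would read the line
sizes from CTR_EL0.

/* Clean D-cache and invalidate I-cache for a JITted code range from EL0. */
static void el0_sync_icache(char *start, char *end)
{
	char *p;

	for (p = start; p < end; p += 64)
		asm volatile("dc cvau, %0" : : "r" (p) : "memory");
	asm volatile("dsb ish" : : : "memory");

	for (p = start; p < end; p += 64)
		asm volatile("ic ivau, %0" : : "r" (p) : "memory");
	asm volatile("dsb ish" : : : "memory");
	asm volatile("isb" : : : "memory");
}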
Regards,
Rajan
_______________________________________________
linaro-kernel mailing list
linaro-kernel(a)lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-kernel
Hi,
We are trying to write bare-metal test code for switching tasks from an A15 core to an A7 core.
As of now we have this article to start with: https://lwn.net/Articles/481055/.
Can the list help us with:
a) Where (file/directory) in the Linux kernel code is the arch-specific switching code for the 5420?
b) Are there any test cases for big.LITTLE on the 5420?
-Regards
armdev team
This patchset provides three patches as the basis for integrating cpuidle with
the scheduler.
The first patch is a cleanup.
The second one adds the sched balance option, as requested by Ingo.
The third one stores the idle state a CPU is in, and adds an rcu_barrier() to
prevent races when using the pointed-to object.
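As an illustration of the idea in the third patch (a sketch only, not the
actual code), the idle state a CPU has entered could be published roughly like
this; the rcu_barrier() added by the series is there so the pointed-to state
is not torn down while a reader still holds the pointer:

#include <linux/cpuidle.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>

static DEFINE_PER_CPU(struct cpuidle_state __rcu *, idle_state);

/* Publisher side: called by the idle task around the low-power entry/exit. */
static void cpu_set_idle_state(struct cpuidle_state *state)
{
	rcu_assign_pointer(*this_cpu_ptr(&idle_state), state);
}

/* Reader side: e.g. the load balancer peeking at a remote CPU's state. */
static unsigned int cpu_idle_exit_latency(int cpu)
{
	struct cpuidle_state *state;
	unsigned int latency = 0;

	rcu_read_lock();
	state = rcu_dereference(per_cpu(idle_state, cpu));
	if (state)
		latency = state->exit_latency;
	rcu_read_unlock();

	return latency;
}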
This patchset is based on top of v3.15-rc2.
This patchset does not modify the behavior of the scheduler.
Making the scheduler take the cpuidle information into account will be
posted in a separate patchset, in order to stay focused on the decisions
the scheduler should take regarding the policy vs. idle parameters.
Daniel Lezcano (3):
sched: idle: Encapsulate the code to compile it out
sched: idle: Add sched balance option
sched: idle: Store the idle state the cpu is
drivers/cpuidle/cpuidle.c | 6 ++
include/linux/sched/sysctl.h | 14 ++++
kernel/sched/fair.c | 92 ++++++++++++++++++++++-
kernel/sched/idle.c | 169 +++++++++++++++++++++++-------------------
kernel/sched/sched.h | 5 ++
kernel/sysctl.c | 11 +++
6 files changed, 220 insertions(+), 77 deletions(-)
--
1.7.9.5