Hi,
As discussed in [1] today, here is a potential ARM-specific fix for the
uprobes dcache/icache flush problem. I am aware that other options are
still under discussion; this patch is provided for reference only, as one
possible solution.
The xol slot flush code shares code with the ARM backend of
copy_to_user_page - the flush_ptrace_access function. But the code and the
new flush_uprobe_xol_access implementation are structured such that the
xol flush does not need a vma.
Changes since the V2 version [2]:
x) addressed Dave Long's comment about passing checkpatch
x) addressed Oleg's comment: instead of an arch_uprobe_flush_xol_access
function, use an arch_uprobe_copy_ixol function that maps the kernel page,
copies the instruction, and flushes the caches
x) removed FLAG_UA_BROADCAST; during the discussion in [1] it was
pointed out that a task executing an xol single step could be
migrated to another CPU, so we need to take care of remote
icaches if the CPU does not support remote snooping. I.e.
flush_uprobe_xol_access will check cache_ops_need_broadcast()
and perform smp_call_function() on SMP systems that do not
support remote snooping.
x) added preempt_disable()/preempt_enable() in arch_uprobe_copy_ixol, as
copy_to_user_page does. I have some guesses, but I don't completely
understand why copy_to_user_page does that, so to be on the safe side I
added it the same way the copy_to_user_page code does (see the sketch
after this list).
Thanks,
Victor
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/247611.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/245743.html
Victor Kamensky (1):
ARM: uprobes need icache flush after xol write
arch/arm/include/asm/cacheflush.h | 2 ++
arch/arm/kernel/uprobes.c | 22 ++++++++++++++++++++++
arch/arm/mm/flush.c | 33 ++++++++++++++++++++++++++++-----
include/linux/uprobes.h | 3 +++
kernel/events/uprobes.c | 25 +++++++++++++++++--------
5 files changed, 72 insertions(+), 13 deletions(-)
--
1.8.1.4
In switch_hrtimer_base() we call hrtimer_check_target(), which guarantees
this:
/*
* With HIGHRES=y we do not migrate the timer when it is expiring
* before the next event on the target cpu because we cannot reprogram
* the target cpu hardware and we would cause it to fire late.
*
* Called with cpu_base->lock of target cpu held.
*/
But switch_hrtimer_base() is only called from one place,
__hrtimer_start_range_ns(), and at the point where we call
switch_hrtimer_base() the expiration time is not yet known, because
hrtimer_set_expires_range_ns() is only called later.
To fix this, we need to find the updated expiry time before calling
switch_hrtimer_base().
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
Hi Thomas,
I sent this previously as part of: https://lkml.org/lkml/2014/4/4/23
But since you asked for bugfixes without any dependencies for the tick
patches, I am sending the timer bugfixes separately as well. This was the
only bugfix from that series and the other patches don't conflict with it,
so I am not resending the other patches from that series.
I am not adding any stable tags as this has been broken for a very long
time and I don't know if you want to fix it for those kernels.
kernel/hrtimer.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index d55092c..c86b95a 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -968,11 +968,8 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
/* Remove an active timer from the queue: */
ret = remove_hrtimer(timer, base);
- /* Switch the timer base, if necessary: */
- new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
-
if (mode & HRTIMER_MODE_REL) {
- tim = ktime_add_safe(tim, new_base->get_time());
+ tim = ktime_add_safe(tim, base->get_time());
/*
* CONFIG_TIME_LOW_RES is a temporary way for architectures
* to signal that they simply return xtime in
@@ -987,6 +984,9 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
hrtimer_set_expires_range_ns(timer, tim, delta_ns);
+ /* Switch the timer base, if necessary: */
+ new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
+
timer_stats_hrtimer_set_start_info(timer);
leftmost = enqueue_hrtimer(timer, new_base);
--
1.7.12.rc2.18.g61b472e
Part of this patchset was previously part of the larger task packing
patchset [1]. I have split the latter into (at least) 3 different patchsets
to make things easier:
- configuration of sched_domain topology [2]
- update and consolidation of cpu_power (this patchset)
- tasks packing algorithm
SMT systems are no longer the only systems that can have CPUs with a
capacity different from the default value. We need to extend the use of
cpu_power_orig to all kinds of platforms so the scheduler has both the
maximum capacity (cpu_power_orig/power_orig) and the current capacity
(cpu_power/power) of the CPUs and groups of the sched_domains.
During load balancing, the scheduler evaluates the number of tasks that a
group of CPUs can handle. The current method ensures that we will not
return more capacity than the number of real cores, but it returns a wrong
value for a group of LITTLE cores and, in some situations, for SMT systems.
The proposed solution computes the ratio between CPUs and cores for a group
during the init sequence and uses it with power and power_orig to return
the current capacity of a group; a rough sketch of the idea is shown below.
[1] https://lkml.org/lkml/2013/10/18/121
[2] https://lkml.org/lkml/2014/3/19/377
Vincent Guittot (4):
sched: extend the usage of cpu_power_orig
ARM: topology: use new cpu_power interface
sched: fix computed capacity for HMP
sched: add per group cpu_power_orig
arch/arm/kernel/topology.c | 4 ++--
kernel/sched/core.c | 9 ++++++++-
kernel/sched/fair.c | 31 +++++++++++++++++++------------
kernel/sched/sched.h | 3 ++-
4 files changed, 31 insertions(+), 16 deletions(-)
--
1.9.0
Hi Thomas,
These cleanups are separate from the timers/hrtimers ones I did. I was
waiting for the merge window to close before sending these, and by the time
it did, I had accumulated a long pending list.
These are mostly cleanups and reorderings for better readability or
efficiency, plus a few bugfixes.
I have pushed these here as well:
git://git.linaro.org/people/viresh.kumar/linux.git tick-cleanups
They will also be tested by the kbuild bot starting tonight.
Viresh Kumar (38):
tick: align to Coding Guidelines
tick: update doc comments for struct tick_sched
tick: rearrange members of 'struct tick_sched'
tick: move declaration of tick_cpu_device to tick.h
tick: move definition of tick_get_device() to tick.h
tick: create tick_get_cpu_device() to get tick_cpu_device on this cpu
tick-oneshot: drop local_irq_save/restore from
tick_switch_to_oneshot()
tick-oneshot: move tick_is_oneshot_available() to tick-oneshot.c
tick-oneshot: remove tick_resume_oneshot()
tick-common: remove extra checks from tick_check_new_device()
tick-common: fix wrong check in tick_check_replacement()
tick-common: call tick_check_percpu() from tick_check_preferred()
tick-common: don't check tick_oneshot_mode_active() from
tick_check_preferred()
tick-common: do additional checks in tick_check_preferred()
tick-common: remove tick_check_replacement()
tick-common: don't pass cpumask to tick_setup_device()
tick-common: call tick_install_replacement() from
tick_check_new_device()
tick-common: don't set mode to CLOCK_EVT_MODE_UNUSED in
tick_shutdown()
tick-common: remove local variable 'broadcast' from tick_resume()
tick-sched: initialize 'cpu' while defining it in
tick_nohz_full_setup()
tick-sched: no need to rewrite '1' to tick_nohz_enabled
tick-sched: no need to recheck cpu_online() in can_stop_idle_tick()
tick-sched: invert parameter of tick_check_oneshot_change()
tick-sched: don't check tick_nohz_full_cpu() in
__tick_nohz_task_switch()
tick-sched: don't call local_softirq_pending() thrice in
can_stop_idle_tick()
tick-sched: don't call update_wall_time() when delta is lesser than
tick_period
tick-sched: remove 'regs' parameter of tick_sched_handle()
tick-sched: remove parameters to {__}tick_nohz_task_switch() routines
tick-sched: remove wrapper around __tick_nohz_task_switch()
tick-sched: move nohz_full_buf[] inside tick_nohz_init()
tick-sched: initialize 'ts' during its definition in
__tick_nohz_idle_enter()
tick-sched: add comment about 'idle_active' in tick_nohz_idle_exit()
tick-sched: replace tick_nohz_active with tick_nohz_enabled in
tick_nohz_switch_to_nohz()
tick-sched: remove local variable 'now' from tick_setup_sched_timer()
tick-broadcast: do checks before taking locks in
tick_do_broadcast_on_off()
tick-broadcast: get rid of extra comparison in
tick_do_broadcast_on_off()
tick-broadcast: merge tick_do_broadcast_on_off() into
tick_broadcast_on_off()
clockevents: set event_handler to clockevents_handle_noop() in
clockevents_exchange_device()
include/linux/clockchips.h | 2 -
include/linux/hrtimer.h | 3 -
include/linux/tick.h | 65 ++++++++++-------
kernel/hrtimer.c | 4 +-
kernel/sched/core.c | 2 +-
kernel/time/clockevents.c | 11 +--
kernel/time/tick-broadcast.c | 74 +++++++-------------
kernel/time/tick-common.c | 126 +++++++++++++--------------------
kernel/time/tick-internal.h | 15 ++--
kernel/time/tick-oneshot.c | 37 +++++-----
kernel/time/tick-sched.c | 163 +++++++++++++++++++++++--------------------
11 files changed, 232 insertions(+), 270 deletions(-)
--
1.7.12.rc2.18.g61b472e
Hi Thomas,
As you suggested (https://lkml.org/lkml/2014/4/14/797), this is the first
batch of changes I have. These are all potential bugfixes (sorry if I have
misread some obvious code somewhere :) ).
Patch 2/5 isn't a bug fix but was required as a dependency for 3/5.
Some discussion already happened for 5/5 here:
https://lkml.org/lkml/2014/4/9/243
https://lkml.org/lkml/2014/4/9/346
I have tried to mark stable releases wherever possible.
Viresh Kumar (5):
tick-common: fix wrong check in tick_check_replacement()
tick-common: don't check tick_oneshot_mode_active() from
tick_check_preferred()
tick-common: do additional checks in tick_check_preferred()
tick-sched: don't call update_wall_time() when delta is lesser than
tick_period
tick-sched: replace tick_nohz_active with tick_nohz_enabled in
tick_nohz_switch_to_nohz()
kernel/time/tick-common.c | 29 +++++++++++++++++++----------
kernel/time/tick-sched.c | 34 ++++++++++++++++++----------------
2 files changed, 37 insertions(+), 26 deletions(-)
--
1.7.12.rc2.18.g61b472e
Currently, KVM ARM/ARM64 only provides in-kernel emulation of the Power
State and Coordination Interface (PSCI) v0.1.
This patchset aims at providing the newer PSCI v0.2 for KVM ARM/ARM64 VCPUs
without breaking the current KVM ARM/ARM64 ABI.
User space tools (i.e. QEMU or KVMTOOL) will have to explicitly enable the
KVM_ARM_VCPU_PSCI_0_2 feature using the KVM_ARM_VCPU_INIT ioctl to provide
PSCI v0.2 to VCPUs; an illustrative snippet follows.
Changelog:
V9:
- Rename undefined PSCI_VER_xxx defines to PSCI_VERSION_xxx defines
V8:
- Add #define for possible values of migrate type in uapi/linux/psci.h
- Simplified psci_affinity_mask() in psci.c
- Update comments in kvm_psci_vcpu_suspend() to indicate that, for KVM,
wakeup events are interrupts.
- Unconditionally update r0 (or x0) in kvm_psci_vcpu_on()
V7:
- Make uapi/linux/psci.h in line with Ashwin's patch
http://www.spinics.net/lists/arm-kernel/msg319090.html
- Incorporate Rob's suggestions for uapi/linux/psci.h
- Treat a CPU_SUSPEND power-down request the same as a standby
request. This further simplifies CPU_SUSPEND emulation.
V6:
- Introduce uapi/linux/psci.h for sharing PSCI defines between
ARM kernel, ARM64 kernel, KVM ARM/ARM64 and user space
- Make CPU_SUSPEND emulation similar to WFI emulation
V5:
- Have separate last patch to advertise KVM_CAP_ARM_PSCI_0_2
- Use kvm_psci_version() in kvm_psci_vcpu_on()
- Return ALREADY_ON for PSCI v0.2 CPU_ON if VCPU is not paused
- Remove per-VCPU suspend context
- As per the PSCI v0.2 spec, only the current CPU can suspend itself
V4:
- Implement all mandatory functions required by PSCI v0.2
V3:
- Make the KVM_ARM_VCPU_PSCI_0_2 feature experimental for now so that
it fails for user space until all mandatory PSCI v0.2 functions are
emulated by KVM ARM/ARM64
- Have a separate patch for making the KVM_ARM_VCPU_PSCI_0_2 feature
available to user space. This patch can be deferred for now
V2:
- Don't rename PSCI return values KVM_PSCI_RET_NI and KVM_PSCI_RET_INVAL
- Added kvm_psci_version() to get PSCI version available to VCPU
- Fixed grammar in Documentation/virtual/kvm/api.txt
V1:
- Initial RFC PATCH
Anup Patel (12):
KVM: Add capability to advertise PSCI v0.2 support
ARM/ARM64: KVM: Add common header for PSCI related defines
ARM/ARM64: KVM: Add base for PSCI v0.2 emulation
KVM: Documentation: Add info regarding KVM_ARM_VCPU_PSCI_0_2 feature
ARM/ARM64: KVM: Make kvm_psci_call() return convention more flexible
KVM: Add KVM_EXIT_SYSTEM_EVENT to user space API header
ARM/ARM64: KVM: Emulate PSCI v0.2 SYSTEM_OFF and SYSTEM_RESET
ARM/ARM64: KVM: Emulate PSCI v0.2 AFFINITY_INFO
ARM/ARM64: KVM: Emulate PSCI v0.2 MIGRATE_INFO_TYPE and related
functions
ARM/ARM64: KVM: Fix CPU_ON emulation for PSCI v0.2
ARM/ARM64: KVM: Emulate PSCI v0.2 CPU_SUSPEND
ARM/ARM64: KVM: Advertise KVM_CAP_ARM_PSCI_0_2 to user space
Documentation/virtual/kvm/api.txt | 17 +++
arch/arm/include/asm/kvm_host.h | 2 +-
arch/arm/include/asm/kvm_psci.h | 6 +-
arch/arm/include/uapi/asm/kvm.h | 19 ++--
arch/arm/kvm/arm.c | 1 +
arch/arm/kvm/handle_exit.c | 10 +-
arch/arm/kvm/psci.c | 222 +++++++++++++++++++++++++++++++++----
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/include/asm/kvm_psci.h | 6 +-
arch/arm64/include/uapi/asm/kvm.h | 21 ++--
arch/arm64/kvm/handle_exit.c | 10 +-
include/uapi/linux/Kbuild | 1 +
include/uapi/linux/kvm.h | 9 ++
include/uapi/linux/psci.h | 78 +++++++++++++
14 files changed, 356 insertions(+), 48 deletions(-)
create mode 100644 include/uapi/linux/psci.h
--
1.7.9.5