Hi,
This patch adds support for PCI to AArch64. It is based on my v5 patch
that adds support for creating a generic host bridge structure from the
device tree. With that in place, I was able to boot a platform that
has PCIe host bridge support and use a PCIe network card.
Changes from v4:
- Fixed the pci_domain_nr() implementation for arm64. We now use
find_pci_host_bridge() to find the host bridge before we retrieve
the domain number (a rough sketch follows the probe example below).
Changes from v3:
- Added Acks accumulated so far ;)
- Still carrying Catalin's patch for moving the PCI_IO_BASE until it
lands in linux-next or mainline, in order to ease applying the series
Changes from v2:
- Implement an arch specific version of pci_register_io_range() and
pci_address_to_pio().
- Return 1 from pci_proc_domain().
Changes from v1:
- Added Catalin's patch for moving the PCI_IO_BASE location and extend
its size to 16MB
- Integrated Arnd's version of pci_ioremap_io that uses a bitmap for
keeping track of assigned IO space and returns an io_offset (a rough
sketch follows the thread links below). At the moment the code lives in
arch/arm64 but it can be moved into drivers/pci.
- Added a fix for the generic ioport_map() function when !CONFIG_GENERIC_IOMAP
as suggested by Arnd.
v4 thread here: https://lkml.org/lkml/2014/3/3/298
v3 thread here: https://lkml.org/lkml/2014/2/28/211
v2 thread here: https://lkml.org/lkml/2014/2/27/255
v1 thread here: https://lkml.org/lkml/2014/2/3/389
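For reference, the bitmap-based I/O space allocator mentioned in the v1 notes
above could look roughly like the sketch below. It is only an illustration
under stated assumptions (made-up names, a 16MB window split into 64K chunks),
not the code actually posted in the series:

#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/sizes.h>

/* Sketch only: track which 64K chunks of the 16MB PCI_IO_BASE window are
 * in use and return the io_offset of a newly assigned chunk range. */
#define EXAMPLE_IO_CHUNKS       (SZ_16M / SZ_64K)
static DECLARE_BITMAP(example_io_used, EXAMPLE_IO_CHUNKS);

static long example_pci_ioremap_io(size_t size)
{
        unsigned long chunks = DIV_ROUND_UP(size, SZ_64K);
        unsigned long start;

        start = bitmap_find_next_zero_area(example_io_used, EXAMPLE_IO_CHUNKS,
                                           0, chunks, 0);
        if (start >= EXAMPLE_IO_CHUNKS)
                return -ENOSPC;

        bitmap_set(example_io_used, start, chunks);
        /* the real code would also map the bus window at this offset */
        return start * SZ_64K;          /* io_offset into PCI_IO_BASE */
}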
The API used is different from the one used by the ARM architecture. There is
no pci_common_init_dev() function and no hw_pci structure, as that is no
longer needed. Once the last signature is added to the legal agreement, I
will post the host bridge driver code that I am using. Meanwhile, here
is an example of what the probe function looks like:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (IS_ERR(bridge)) {
                err = PTR_ERR(bridge);
                goto bridge_create_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_setup_fail;

        /* We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_setup_fail:
        put_device(&bridge->dev);
        device_unregister(&bridge->dev);
bridge_create_fail:
        kfree(pp);
        return err;
}
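For reference, the pci_domain_nr() change described in the v4 notes could end
up looking roughly like the sketch below. It assumes the domain_nr field and
the find_pci_host_bridge() helper from the generic host bridge series, so
treat it as an illustration rather than the exact arm64 code:

#include <linux/pci.h>

/* Sketch: derive the domain from the host bridge instead of static data. */
static inline int example_pci_domain_nr(struct pci_bus *bus)
{
        struct pci_host_bridge *bridge = find_pci_host_bridge(bus);

        if (bridge)
                return bridge->domain_nr;

        return 0;
}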
Best regards,
Liviu
Catalin Marinas (1):
arm64: Extend the PCI I/O space to 16MB
Liviu Dudau (2):
Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases.
arm64: Add architecture support for PCI
Documentation/arm64/memory.txt | 16 ++--
arch/arm64/Kconfig | 19 ++++-
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/io.h | 5 +-
arch/arm64/include/asm/pci.h | 49 ++++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pci.c | 178 +++++++++++++++++++++++++++++++++++++++++
include/asm-generic/io.h | 2 +-
8 files changed, 261 insertions(+), 10 deletions(-)
create mode 100644 arch/arm64/include/asm/pci.h
create mode 100644 arch/arm64/kernel/pci.c
--
1.9.0
This patch adds cpufreq suspend/resume calls to dpm_{suspend|resume}() for
handling suspend/resume of cpufreq governors.
Lan Tianyu (Intel) & Jinhyuk Choi (Broadcom) found an issue where tunables
configuration for clusters/sockets with non-boot CPUs was getting lost after
suspend/resume, as we were notifying governors with CPUFREQ_GOV_POLICY_EXIT on
removal of the last cpu for that policy and so deallocating memory for tunables.
This is fixed by this patch, as we no longer allow any operation on governors
between device suspend and device resume.
We could have added these callbacks at the dpm_{suspend|resume}_noirq()
level, but the problem is that most devices (i.e. devices with ->suspend()
callbacks) have already been suspended by then, so if drivers want to change
the frequency before suspending, that might not be possible on many platforms
(which depend on other peripherals like i2c, regulators, etc.).
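As an illustration of why the hooks sit at the dpm_{suspend|resume}() level
rather than at _noirq() time, a driver ->suspend() callback that drops to a
safe operating point might look roughly like this; the driver and the
mydrv_set_opp() helper are hypothetical, not part of this series:

#include <linux/cpufreq.h>

/* Hypothetical helper that programs the PMIC/clock over i2c; it must run
 * while those devices are still awake, i.e. before dpm_suspend() finishes. */
static int mydrv_set_opp(unsigned int cpu, unsigned int freq_khz);

static int mydrv_cpufreq_suspend(struct cpufreq_policy *policy)
{
        /* drop to the lowest (known-safe) frequency before devices sleep */
        return mydrv_set_opp(policy->cpu, policy->min);
}

static struct cpufreq_driver mydrv_cpufreq_driver = {
        .suspend        = mydrv_cpufreq_suspend,
        /* .init, .target_index, .resume, etc. omitted from this sketch */
};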
Reported-and-tested-by: Lan Tianyu <tianyu.lan(a)intel.com>
Reported-by: Jinhyuk Choi <jinchoi(a)broadcom.com>
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
V5->V6: 1-2-3/7 merged into 1/5
drivers/base/power/main.c | 5 +++
drivers/cpufreq/cpufreq.c | 111 +++++++++++++++++++++++-----------------------
include/linux/cpufreq.h | 8 ++++
3 files changed, 69 insertions(+), 55 deletions(-)
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 42355e4..86d5e4f 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -29,6 +29,7 @@
#include <linux/async.h>
#include <linux/suspend.h>
#include <trace/events/power.h>
+#include <linux/cpufreq.h>
#include <linux/cpuidle.h>
#include <linux/timer.h>
@@ -866,6 +867,8 @@ void dpm_resume(pm_message_t state)
mutex_unlock(&dpm_list_mtx);
async_synchronize_full();
dpm_show_time(starttime, state, NULL);
+
+ cpufreq_resume();
}
/**
@@ -1434,6 +1437,8 @@ int dpm_suspend(pm_message_t state)
might_sleep();
+ cpufreq_suspend();
+
mutex_lock(&dpm_list_mtx);
pm_transition = state;
async_error = 0;
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 56b7b1b..2e43c08 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -26,7 +26,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
-#include <linux/syscore_ops.h>
+#include <linux/suspend.h>
#include <linux/tick.h>
#include <trace/events/power.h>
@@ -47,6 +47,9 @@ static LIST_HEAD(cpufreq_policy_list);
static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor);
#endif
+/* Flag to suspend/resume CPUFreq governors */
+static bool cpufreq_suspended;
+
static inline bool has_target(void)
{
return cpufreq_driver->target_index || cpufreq_driver->target;
@@ -1576,82 +1579,77 @@ static struct subsys_interface cpufreq_interface = {
};
/**
- * cpufreq_bp_suspend - Prepare the boot CPU for system suspend.
+ * cpufreq_suspend() - Suspend CPUFreq governors
*
- * This function is only executed for the boot processor. The other CPUs
- * have been put offline by means of CPU hotplug.
+ * Called during system wide Suspend/Hibernate cycles for suspending governors
+ * as some platforms can't change frequency after this point in suspend cycle.
+ * Because some of the devices (like: i2c, regulators, etc) they use for
+ * changing frequency are suspended quickly after this point.
*/
-static int cpufreq_bp_suspend(void)
+void cpufreq_suspend(void)
{
- int ret = 0;
-
- int cpu = smp_processor_id();
struct cpufreq_policy *policy;
- pr_debug("suspending cpu %u\n", cpu);
+ if (!cpufreq_driver)
+ return;
- /* If there's no policy for the boot CPU, we have nothing to do. */
- policy = cpufreq_cpu_get(cpu);
- if (!policy)
- return 0;
+ if (!has_target())
+ return;
- if (cpufreq_driver->suspend) {
- ret = cpufreq_driver->suspend(policy);
- if (ret)
- printk(KERN_ERR "cpufreq: suspend failed in ->suspend "
- "step on CPU %u\n", policy->cpu);
+ pr_debug("%s: Suspending Governors\n", __func__);
+
+ list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
+ if (__cpufreq_governor(policy, CPUFREQ_GOV_STOP))
+ pr_err("%s: Failed to stop governor for policy: %p\n",
+ __func__, policy);
+ else if (cpufreq_driver->suspend
+ && cpufreq_driver->suspend(policy))
+ pr_err("%s: Failed to suspend driver: %p\n", __func__,
+ policy);
}
- cpufreq_cpu_put(policy);
- return ret;
+ cpufreq_suspended = true;
}
/**
- * cpufreq_bp_resume - Restore proper frequency handling of the boot CPU.
+ * cpufreq_resume() - Resume CPUFreq governors
*
- * 1.) resume CPUfreq hardware support (cpufreq_driver->resume())
- * 2.) schedule call cpufreq_update_policy() ASAP as interrupts are
- * restored. It will verify that the current freq is in sync with
- * what we believe it to be. This is a bit later than when it
- * should be, but nonethteless it's better than calling
- * cpufreq_driver->get() here which might re-enable interrupts...
- *
- * This function is only executed for the boot CPU. The other CPUs have not
- * been turned on yet.
+ * Called during system wide Suspend/Hibernate cycle for resuming governors that
+ * are suspended with cpufreq_suspend().
*/
-static void cpufreq_bp_resume(void)
+void cpufreq_resume(void)
{
- int ret = 0;
-
- int cpu = smp_processor_id();
struct cpufreq_policy *policy;
- pr_debug("resuming cpu %u\n", cpu);
+ if (!cpufreq_driver)
+ return;
- /* If there's no policy for the boot CPU, we have nothing to do. */
- policy = cpufreq_cpu_get(cpu);
- if (!policy)
+ if (!has_target())
return;
- if (cpufreq_driver->resume) {
- ret = cpufreq_driver->resume(policy);
- if (ret) {
- printk(KERN_ERR "cpufreq: resume failed in ->resume "
- "step on CPU %u\n", policy->cpu);
- goto fail;
- }
- }
+ pr_debug("%s: Resuming Governors\n", __func__);
- schedule_work(&policy->update);
+ cpufreq_suspended = false;
-fail:
- cpufreq_cpu_put(policy);
-}
+ list_for_each_entry(policy, &cpufreq_policy_list, policy_list) {
+ if (__cpufreq_governor(policy, CPUFREQ_GOV_START)
+ || __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS))
+ pr_err("%s: Failed to start governor for policy: %p\n",
+ __func__, policy);
+ else if (cpufreq_driver->resume
+ && cpufreq_driver->resume(policy))
+ pr_err("%s: Failed to resume driver: %p\n", __func__,
+ policy);
-static struct syscore_ops cpufreq_syscore_ops = {
- .suspend = cpufreq_bp_suspend,
- .resume = cpufreq_bp_resume,
-};
+ /*
+ * schedule call cpufreq_update_policy() for boot CPU, i.e. last
+ * policy in list. It will verify that the current freq is in
+ * sync with what we believe it to be.
+ */
+ if (list_is_last(&policy->policy_list, &cpufreq_policy_list))
+ schedule_work(&policy->update);
+ }
+}
/**
* cpufreq_get_current_driver - return current driver's name
@@ -1868,6 +1866,10 @@ static int __cpufreq_governor(struct cpufreq_policy *policy,
struct cpufreq_governor *gov = NULL;
#endif
+ /* Don't start any governor operations if we are entering suspend */
+ if (cpufreq_suspended)
+ return 0;
+
if (policy->governor->max_transition_latency &&
policy->cpuinfo.transition_latency >
policy->governor->max_transition_latency) {
@@ -2407,7 +2409,6 @@ static int __init cpufreq_core_init(void)
cpufreq_global_kobject = kobject_create();
BUG_ON(!cpufreq_global_kobject);
- register_syscore_ops(&cpufreq_syscore_ops);
return 0;
}
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 4d89e0e..94ed907 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -296,6 +296,14 @@ cpufreq_verify_within_cpu_limits(struct cpufreq_policy *policy)
policy->cpuinfo.max_freq);
}
+#ifdef CONFIG_CPU_FREQ
+void cpufreq_suspend(void);
+void cpufreq_resume(void);
+#else
+static inline void cpufreq_suspend(void) {}
+static inline void cpufreq_resume(void) {}
+#endif
+
/*********************************************************************
* CPUFREQ NOTIFIER INTERFACE *
*********************************************************************/
--
1.7.12.rc2.18.g61b472e
We call __find_governor() from __cpufreq_add_dev() when the first CPU of a
policy is added, to find the last governor used by this CPU before it was
hotplugged out.
After that we call cpufreq_parse_governor() in cpufreq_init_policy() with
either this governor or the default governor. And right after that
policy->governor is set to NULL.
There is no functional problem with this code, it is just that some of the
work required in cpufreq_init_policy() is being done by its caller, i.e.
__cpufreq_add_dev(). So, it would make more sense to gather all of it in a
single place to make the code more readable.
So, instead of doing this, move the relevant parts into cpufreq_init_policy()
only and initialize policy->governor to NULL at the beginning.
In order to clean the code up a bit more, some of the #ifdefs for
CONFIG_HOTPLUG_CPU that guarded the bookkeeping of the last governor used by
a CPU are also removed. Keeping that code wouldn't harm us much, but removing
it improves the cleanliness of the code.
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
V1->V2:
- Improved commit logs
- Removed unrelated whitespace changes
- Removed some #ifdef CONFIG_HOTPLUG_CPU to clean code a bit.
drivers/cpufreq/cpufreq.c | 38 ++++++++++++++------------------------
1 file changed, 14 insertions(+), 24 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 56b7b1b..fff2968 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -42,10 +42,8 @@ static DEFINE_RWLOCK(cpufreq_driver_lock);
DEFINE_MUTEX(cpufreq_governor_lock);
static LIST_HEAD(cpufreq_policy_list);
-#ifdef CONFIG_HOTPLUG_CPU
/* This one keeps track of the previously set governor of a removed CPU */
static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor);
-#endif
static inline bool has_target(void)
{
@@ -879,18 +877,25 @@ err_out_kobj_put:
static void cpufreq_init_policy(struct cpufreq_policy *policy)
{
+ struct cpufreq_governor *gov = NULL;
struct cpufreq_policy new_policy;
int ret = 0;
memcpy(&new_policy, policy, sizeof(*policy));
+ /* Update governor of new_policy to the governor used before hotplug */
+ gov = __find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu));
+ if (gov)
+ pr_debug("Restoring governor %s for cpu %d\n",
+ policy->governor->name, policy->cpu);
+ else
+ gov = CPUFREQ_DEFAULT_GOVERNOR;
+
+ new_policy.governor = gov;
+
/* Use the default policy if its valid. */
if (cpufreq_driver->setpolicy)
- cpufreq_parse_governor(policy->governor->name,
- &new_policy.policy, NULL);
-
- /* assure that the starting sequence is run in cpufreq_set_policy */
- policy->governor = NULL;
+ cpufreq_parse_governor(gov->name, &new_policy.policy, NULL);
/* set default policy */
ret = cpufreq_set_policy(policy, &new_policy);
@@ -949,6 +954,8 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu)
read_unlock_irqrestore(&cpufreq_driver_lock, flags);
+ policy->governor = NULL;
+
return policy;
}
@@ -1036,7 +1043,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
unsigned long flags;
#ifdef CONFIG_HOTPLUG_CPU
struct cpufreq_policy *tpolicy;
- struct cpufreq_governor *gov;
#endif
if (cpu_is_offline(cpu))
@@ -1094,7 +1100,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
else
policy->cpu = cpu;
- policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
cpumask_copy(policy->cpus, cpumask_of(cpu));
init_completion(&policy->kobj_unregister);
@@ -1179,15 +1184,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_START, policy);
-#ifdef CONFIG_HOTPLUG_CPU
- gov = __find_governor(per_cpu(cpufreq_cpu_governor, cpu));
- if (gov) {
- policy->governor = gov;
- pr_debug("Restoring governor %s for cpu %d\n",
- policy->governor->name, cpu);
- }
-#endif
-
if (!frozen) {
ret = cpufreq_add_dev_interface(policy, dev);
if (ret)
@@ -1312,11 +1308,9 @@ static int __cpufreq_remove_dev_prepare(struct device *dev,
}
}
-#ifdef CONFIG_HOTPLUG_CPU
if (!cpufreq_driver->setpolicy)
strncpy(per_cpu(cpufreq_cpu_governor, cpu),
policy->governor->name, CPUFREQ_NAME_LEN);
-#endif
down_read(&policy->rwsem);
cpus = cpumask_weight(policy->cpus);
@@ -1955,9 +1949,7 @@ EXPORT_SYMBOL_GPL(cpufreq_register_governor);
void cpufreq_unregister_governor(struct cpufreq_governor *governor)
{
-#ifdef CONFIG_HOTPLUG_CPU
int cpu;
-#endif
if (!governor)
return;
@@ -1965,14 +1957,12 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor)
if (cpufreq_disabled())
return;
-#ifdef CONFIG_HOTPLUG_CPU
for_each_present_cpu(cpu) {
if (cpu_online(cpu))
continue;
if (!strcmp(per_cpu(cpufreq_cpu_governor, cpu), governor->name))
strcpy(per_cpu(cpufreq_cpu_governor, cpu), "\0");
}
-#endif
mutex_lock(&cpufreq_governor_mutex);
list_del(&governor->governor_list);
--
1.7.12.rc2.18.g61b472e
From: Mark Brown <broonie(a)linaro.org>
Try to spot locking issues by asserting that the DAPM mutex is held when
it should be. There's a bit of fun due to us not requiring the lock to be
held prior to the card being instantiated, which means we need to wrap the
assert in some paths and this isn't methodical by any stretch of the
imagination.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
I've only compile tested this so it's entirely possible there are some
oversights here - at this point I'm mostly just throwing it out there
for people to play with as a work in progress.
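For context, the pattern the new asserts check for is roughly the following;
this is only a sketch, and the *_unlocked pin helpers are assumed to be
available to the caller:

#include <linux/mutex.h>
#include <sound/soc.h>
#include <sound/soc-dapm.h>

/* Sketch: paths that mark widgets dirty or re-run the power sequence are
 * expected to hold the card's DAPM mutex, which the asserts now verify. */
static void example_update_pin(struct snd_soc_card *card,
                               struct snd_soc_dapm_context *dapm,
                               const char *pin)
{
        mutex_lock_nested(&card->dapm_mutex, SND_SOC_DAPM_CLASS_RUNTIME);
        snd_soc_dapm_enable_pin_unlocked(dapm, pin);    /* assumed helper */
        snd_soc_dapm_sync_unlocked(dapm);               /* assumed helper */
        mutex_unlock(&card->dapm_mutex);
}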
sound/soc/soc-dapm.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
index 3728e21..c8a780d 100644
--- a/sound/soc/soc-dapm.c
+++ b/sound/soc/soc-dapm.c
@@ -111,6 +111,12 @@ static int dapm_down_seq[] = {
[snd_soc_dapm_post] = 14,
};
+static void dapm_assert_locked(struct snd_soc_dapm_context *dapm)
+{
+ if (dapm->card && dapm->card->instantiated)
+ lockdep_assert_held(&dapm->card->dapm_mutex);
+}
+
static void pop_wait(u32 pop_time)
{
if (pop_time)
@@ -144,6 +150,8 @@ static bool dapm_dirty_widget(struct snd_soc_dapm_widget *w)
static void dapm_mark_dirty(struct snd_soc_dapm_widget *w, const char *reason)
{
+ dapm_assert_locked(w->dapm);
+
if (!dapm_dirty_widget(w)) {
dev_vdbg(w->dapm->dev, "Marking %s dirty due to %s\n",
w->name, reason);
@@ -356,6 +364,8 @@ static void dapm_reset(struct snd_soc_card *card)
{
struct snd_soc_dapm_widget *w;
+ lockdep_assert_held(&card->dapm_mutex);
+
memset(&card->dapm_stats, 0, sizeof(card->dapm_stats));
list_for_each_entry(w, &card->widgets, list) {
@@ -1750,6 +1760,8 @@ static int dapm_power_widgets(struct snd_soc_card *card, int event)
ASYNC_DOMAIN_EXCLUSIVE(async_domain);
enum snd_soc_bias_level bias;
+ lockdep_assert_held(&card->dapm_mutex);
+
trace_snd_soc_dapm_start(card);
list_for_each_entry(d, &card->dapm_list, list) {
@@ -2045,6 +2057,8 @@ static int soc_dapm_mux_update_power(struct snd_soc_card *card,
struct snd_soc_dapm_path *path;
int found = 0;
+ lockdep_assert_held(&card->dapm_mutex);
+
/* find dapm widget path assoc with kcontrol */
dapm_kcontrol_for_each_path(path, kcontrol) {
if (!path->name || !e->texts[mux])
@@ -2095,6 +2109,8 @@ static int soc_dapm_mixer_update_power(struct snd_soc_card *card,
struct snd_soc_dapm_path *path;
int found = 0;
+ lockdep_assert_held(&card->dapm_mutex);
+
/* find dapm widget path assoc with kcontrol */
dapm_kcontrol_for_each_path(path, kcontrol) {
found = 1;
@@ -2260,6 +2276,8 @@ static int snd_soc_dapm_set_pin(struct snd_soc_dapm_context *dapm,
{
struct snd_soc_dapm_widget *w = dapm_find_widget(dapm, pin, true);
+ dapm_assert_locked(dapm);
+
if (!w) {
dev_err(dapm->dev, "ASoC: DAPM unknown pin %s\n", pin);
return -EINVAL;
--
1.9.0
> You should apply ARM:7862/1 before the suspend series and then do not
> miss:
Thinking over the __get_cpu_var changes upstream, I am going to give up on
backporting too many patches for this change, and instead convert the
this_cpu_ptr usage back to __get_cpu_var with the following patch.
If we have to backport the __get_cpu_var changes later, it is easy to revert
this patch. What's your opinion on this?
=========
*/
- for (slots = this_cpu_ptr(bp_on_reg), i = 0; i < core_num_brps; ++i) {
+ for (slots = __get_cpu_var(bp_on_reg), i = 0; i < core_num_brps; ++i) {
if (slots[i]) {
hw_breakpoint_control(slots[i], HW_BREAKPOINT_RESTORE);
} else {
@@ -870,7 +870,7 @@ static void hw_breakpoint_reset(void *unused)
}
}
- for (slots = this_cpu_ptr(wp_on_reg), i = 0; i < core_num_wrps; ++i) {
+ for (slots = __get_cpu_var(wp_on_reg), i = 0; i < core_num_wrps; ++i) {
>
> 65c021bb496a46ec06264e9d5e836dffa70ef380
> arm64: kernel: restore HW breakpoint registers in cpu_suspend
>
> fb4a96029c8a091c4365f57307e098543b48a222
> arm64: kernel: fix per-cpu offset restore on resume
>
I merged them both. Thanks!
The following are all the patches for this feature; the 13th is the patch just
mentioned.
The git tree is git://git.linaro.org/kernel/linux-linaro-stable.git linux-linaro-lsk-test.
-rw-rw-r-- 1 alexs alexs 7620 Mar 6 11:14 0001-arm64-kernel-build-MPIDR_EL1-hash-function-data-stru.patch
-rw-rw-r-- 1 alexs alexs 4105 Mar 6 11:14 0002-arm64-kernel-suspend-resume-registers-save-restore.patch
-rw-rw-r-- 1 alexs alexs 16485 Mar 6 11:14 0003-arm64-kernel-cpu_-suspend-resume-implementation.patch
-rw-rw-r-- 1 alexs alexs 2688 Mar 6 11:14 0004-arm64-kernel-implement-fpsimd-CPU-PM-notifier.patch
-rw-rw-r-- 1 alexs alexs 1863 Mar 6 11:14 0005-arm-kvm-implement-CPU-PM-notifier.patch
-rw-rw-r-- 1 alexs alexs 5551 Mar 6 11:14 0006-arm64-kernel-refactor-code-to-install-uninstall-brea.patch
-rw-rw-r-- 1 alexs alexs 5286 Mar 6 11:14 0007-arm64-kernel-implement-HW-breakpoints-CPU-PM-notifie.patch
-rw-rw-r-- 1 alexs alexs 3195 Mar 6 11:14 0008-arm64-enable-generic-clockevent-broadcast.patch
-rw-rw-r-- 1 alexs alexs 1394 Mar 6 11:14 0009-arm64-kernel-add-CPU-idle-call.patch
-rw-rw-r-- 1 alexs alexs 2132 Mar 6 11:14 0010-arm64-kernel-add-PM-build-infrastructure.patch
-rw-rw-r-- 1 alexs alexs 892 Mar 6 11:14 0011-arm64-add-CPU-power-management-menu-entries.patch
-rw-rw-r-- 1 alexs alexs 1385 Mar 6 11:14 0012-arm64-kernel-add-MPIDR_EL1-accessors-macros.patch
-rw-rw-r-- 1 alexs alexs 1902 Mar 6 11:14 0013-arm64-hw_breakpoint-compile-error-fixing.patch
-rw-rw-r-- 1 alexs alexs 4343 Mar 6 11:14 0014-arm64-kernel-restore-HW-breakpoint-registers-in-cpu_.patch
-rw-rw-r-- 1 alexs alexs 4353 Mar 6 11:14 0015-arm64-percpu-implement-optimised-pcpu-access-using-t.patch
-rw-rw-r-- 1 alexs alexs 1814 Mar 6 11:14 0016-arm64-kernel-fix-per-cpu-offset-restore-on-resume.patch
Thanks!
From: Mark Brown <broonie(a)linaro.org>
We no longer have any users of hw_read() in the kernel, so remove the code
in order to prevent any new users from being added. Users should be using
regmap.
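As a reminder of the replacement path, reading a register through regmap
instead of the removed hw_read() looks roughly like this (sketch only):

#include <linux/regmap.h>

/* Sketch: the regmap equivalent of the removed hw_read() helper. */
static int example_codec_read(struct regmap *map, unsigned int reg,
                              unsigned int *val)
{
        return regmap_read(map, reg, val);
}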
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
include/sound/soc.h | 1 -
sound/soc/soc-io.c | 13 -------------
2 files changed, 14 deletions(-)
diff --git a/include/sound/soc.h b/include/sound/soc.h
index 56c4c71..dc36331 100644
--- a/include/sound/soc.h
+++ b/include/sound/soc.h
@@ -706,7 +706,6 @@ struct snd_soc_codec {
/* codec IO */
void *control_data; /* codec control (i2c/3wire) data */
hw_write_t hw_write;
- unsigned int (*hw_read)(struct snd_soc_codec *, unsigned int);
unsigned int (*read)(struct snd_soc_codec *, unsigned int);
int (*write)(struct snd_soc_codec *, unsigned int, unsigned int);
void *reg_cache;
diff --git a/sound/soc/soc-io.c b/sound/soc/soc-io.c
index c3b6a0a..3c1de9c 100644
--- a/sound/soc/soc-io.c
+++ b/sound/soc/soc-io.c
@@ -26,18 +26,6 @@ static int hw_write(struct snd_soc_codec *codec, unsigned int reg,
return regmap_write(codec->control_data, reg, value);
}
-static unsigned int hw_read(struct snd_soc_codec *codec, unsigned int reg)
-{
- int ret;
- unsigned int val;
-
- ret = regmap_read(codec->control_data, reg, &val);
- if (ret == 0)
- return val;
- else
- return -1;
-}
-
/**
* snd_soc_codec_set_cache_io: Set up standard I/O functions.
*
@@ -64,7 +52,6 @@ int snd_soc_codec_set_cache_io(struct snd_soc_codec *codec,
int ret;
codec->write = hw_write;
- codec->read = hw_read;
switch (control) {
case SND_SOC_REGMAP:
--
1.9.0
From: Mark Brown <broonie(a)linaro.org>
There is no point in using FIQ if DMA is available (it is selected) and
selecting FIQ currently breaks the build on non-i.MX platforms. If FIQ
is actually required, the build will need to be restricted or have a
select for the relevant FIQ code added.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
sound/soc/fsl/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
index a7d6ab6..b244830 100644
--- a/sound/soc/fsl/Kconfig
+++ b/sound/soc/fsl/Kconfig
@@ -175,7 +175,6 @@ config SND_SOC_EUKREA_TLV320
|| (OF && ARM)
depends on I2C
select SND_SOC_TLV320AIC23
- select SND_SOC_IMX_PCM_FIQ
select SND_SOC_IMX_AUDMUX
select SND_SOC_IMX_SSI
select SND_SOC_FSL_SSI
--
1.9.0
Patches adding support for hibernation on ARM
- ARM hibernation / suspend-to-disk
- Change soft_restart to use non-tracing raw_local_irq_disable
The patches are based on the v3.13 tag. Hibernation was verified on a
BeagleBone Black using a branch based on 3.13 merged with initial OMAP
support from Russ Dill, which can be found here (includes the v1 patchset):
http://git.linaro.org/git-ro/people/sebastian.capella/linux.git hibernation_3.13_russMerge
[PATCH v6 1/2] ARM: avoid tracers in soft_restart
arch/arm/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Use raw_local_irq_disable in place of local_irq_disable to avoid
infinite abort recursion while tracing. (unchanged since v3)
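The change in 1/2 boils down to something like the sketch below (an
illustrative helper, not the actual hunk, which simply swaps the call inside
soft_restart()):

#include <linux/irqflags.h>

/* Sketch: mask interrupts with the non-tracing primitive so that an abort
 * taken while a tracer is active cannot recurse during soft_restart(). */
static inline void example_soft_restart_mask_irqs(void)
{
        raw_local_irq_disable();        /* instead of local_irq_disable() */
}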
[PATCH v6 2/2] ARM hibernation / suspend-to-disk
arch/arm/include/asm/memory.h | 1 +
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/hibernate.c | 113 +++++++++++++++++++++++++++++++++++++++++
arch/arm/mm/Kconfig | 5 ++
include/linux/suspend.h | 2 +
5 files changed, 122 insertions(+)
Adds support for ARM based hibernation
Additional notes:
-----------------
There are two checkpatch warnings added by this patch. These follow
behavior in existing hibernation implementations on other platforms.
WARNING: externs should be avoided in .c files
#116: FILE: arch/arm/kernel/hibernate.c:26:
+extern const void __nosave_begin, __nosave_end;
This extern is picking up the linker nosave region definitions, only
used in hibernate. It follows the same extern line used by mips, powerpc,
s390, sh, sparc, x86 & unicore32.
WARNING: externs should be avoided in .c files
#199: FILE: arch/arm/kernel/hibernate.c:109:
+extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
This extern is used in arch/arm/ by hibernate, process and bL_switcher.
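The hibernate.c body is not included in this cover letter; for orientation,
the nosave handling these externs feed into usually looks something like the
generic sketch below (not the exact code in 2/2):

#include <linux/mm.h>
#include <linux/suspend.h>

/* Linker-provided markers for the nosave region (the extern that trips
 * the first checkpatch warning above). */
extern const void __nosave_begin, __nosave_end;

int pfn_is_nosave(unsigned long pfn)
{
        unsigned long begin_pfn = __pa(&__nosave_begin) >> PAGE_SHIFT;
        unsigned long end_pfn = PAGE_ALIGN(__pa(&__nosave_end)) >> PAGE_SHIFT;

        return pfn >= begin_pfn && pfn < end_pfn;
}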
Changes in v6:
--------------
* Simplify static variable names
Changes in v5:
--------------
* Fixed checkpatch warning on trailing whitespace
Changes in v4:
--------------
* updated comment for soft_restart with review feedback
* dropped freeze_processes patch which was queued separately
to 3.14 by Rafael Wysocki:
https://lkml.org/lkml/2014/2/25/683
Changes in v3:
--------------
* added comment to use of soft_restart
* drop irq disable soft_restart patch
* add patch to avoid tracers in soft_restart by using raw_local_irq_*
Changes in v2:
--------------
* Removed unneeded flush_thread, use of __naked and cpu_init.
* dropped Cyril Chemparathy <cyril(a)ti.com> from Cc: list as
emails are bouncing.
Thanks,
Sebastian Capella
From: Mark Brown <broonie(a)linaro.org>
Add basic CPU topology support to arm64, based on the existing pre-v8
code and some work done by Mark Hambleton. This patch does not
implement any topology discovery support since that should be based on
information from firmware, it merely implements the scaffolding for
integration of topology support in the architecture.
No locking of the topology data is done since it is only modified during
CPU bringup with external serialisation from the SMP code.
The goal is to separate the architecture hookup for providing topology
information from the DT parsing, in order to ease review and to avoid
blocking the architecture code (which will be built on by other work) on
the DT code review, by providing something simple and basic.
Following patches will implement support for interpreting topology
information from MPIDR and for parsing the DT topology bindings for ARM,
similar patches will be needed for ACPI.
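As a rough illustration of how the follow-up MPIDR-based discovery could plug
into this scaffolding, the hook might grow something like the sketch below;
the MPIDR accessor macros are assumed from the separate MPIDR_EL1 accessors
patch, and none of this is part of this patch:

#include <asm/cputype.h>
#include <asm/topology.h>

/* Hypothetical follow-up inside topology.c: fill in core/cluster ids from
 * MPIDR_EL1 before the sibling masks are rebuilt for this cpu. */
void store_cpu_topology(unsigned int cpuid)
{
        struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
        u64 mpidr = read_cpuid_mpidr();

        if (cpuid_topo->cluster_id == -1) {
                cpuid_topo->thread_id = -1;
                cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
                cpuid_topo->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);
        }

        update_siblings_masks(cpuid);
}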
Signed-off-by: Mark Brown <broonie(a)linaro.org>
Acked-by: Mark Rutland <mark.rutland(a)arm.com>
---
arch/arm64/Kconfig | 24 ++++++++++
arch/arm64/include/asm/topology.h | 39 ++++++++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/smp.c | 11 +++++
arch/arm64/kernel/topology.c | 95 +++++++++++++++++++++++++++++++++++++++
5 files changed, 170 insertions(+)
create mode 100644 arch/arm64/include/asm/topology.h
create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 764d682..80e0eb1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -165,6 +165,30 @@ config SMP
If you don't know what to do here, say N.
+config CPU_TOPOLOGY
+ bool "Support CPU topology definition"
+ depends on SMP
+ default y
+ help
+ Support CPU topology definition, based on configuration
+ provided by the firmware.
+
+config SCHED_MC
+ bool "Multi-core scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Multi-core scheduler support improves the CPU scheduler's decision
+ making when dealing with multi-core CPU chips at a cost of slightly
+ increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+ bool "SMT scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Improves the CPU scheduler's decision making when dealing with
+ MultiThreading at a cost of slightly increased overhead in some
+ places. If unsure say N here.
+
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 0000000..c8a47e8
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+ int thread_id;
+ int core_id;
+ int cluster_id;
+ cpumask_t thread_sibling;
+ cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu) (cpu_topology[cpu].cluster_id)
+#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable() (cpu_topology[0].cluster_id != -1)
+#define smt_capable() (cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index e52bcdc..da1dafb 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -21,6 +21,7 @@ arm64-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND) += sleep.o suspend.o
arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
arm64-obj-$(CONFIG_KGDB) += kgdb.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY) += topology.o
obj-y += $(arm64-obj-y) vdso/
obj-m += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5070dc3..f0a141d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
+static void smp_store_cpu_info(unsigned int cpuid)
+{
+ store_cpu_topology(cpuid);
+}
+
/*
* This is the secondary CPU boot entry. We're using this CPUs
* idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@ asmlinkage void secondary_start_kernel(void)
*/
notify_cpu_starting(cpu);
+ smp_store_cpu_info(cpu);
+
/*
* OK, now it's safe to let the boot CPU continue. Wait for
* the CPU migration code to notice that the CPU is online
@@ -391,6 +398,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
int err;
unsigned int cpu, ncores = num_possible_cpus();
+ init_cpu_topology();
+
+ smp_store_cpu_info(smp_processor_id());
+
/*
* are we trying to boot more cores than exist?
*/
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 0000000..3e06b0b
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,95 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013,2014 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+ struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+ int cpu;
+
+ if (cpuid_topo->cluster_id == -1) {
+ /*
+ * DT does not contain topology information for this cpu
+ * reset it to default behaviour
+ */
+ pr_debug("CPU%u: No topology information configured\n", cpuid);
+ cpuid_topo->core_id = 0;
+ cpumask_set_cpu(cpuid, &cpuid_topo->core_sibling);
+ cpumask_set_cpu(cpuid, &cpuid_topo->thread_sibling);
+ return;
+ }
+
+ /* update core and thread sibling masks */
+ for_each_possible_cpu(cpu) {
+ cpu_topo = &cpu_topology[cpu];
+
+ if (cpuid_topo->cluster_id != cpu_topo->cluster_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+ if (cpuid_topo->core_id != cpu_topo->core_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+ }
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+ update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+ unsigned int cpu;
+
+ /* init core mask and power*/
+ for_each_possible_cpu(cpu) {
+ struct cpu_topology *cpu_topo = &cpu_topology[cpu];
+
+ cpu_topo->thread_id = -1;
+ cpu_topo->core_id = -1;
+ cpu_topo->cluster_id = -1;
+ cpumask_clear(&cpu_topo->core_sibling);
+ cpumask_clear(&cpu_topo->thread_sibling);
+ }
+}
--
1.9.0
From: Mark Brown <broonie(a)linaro.org>
Add basic CPU topology support to arm64, based on the existing pre-v8
code and some work done by Mark Hambleton. This patch does not
implement any topology discovery support since that should be based on
information from firmware, it merely implements the scaffolding for
integration of topology support in the architecture.
No locking of the topology data is done since it is only modified during
CPU bringup with external serialisation from the SMP code.
The goal is to separate the architecture hookup for providing topology
information from the DT parsing, in order to ease review and to avoid
blocking the architecture code (which will be built on by other work) on
the DT code review, by providing something simple and basic.
A following patch will implement support for parsing the DT topology
bindings for ARM, similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
Just posting the first patch for now. I've tweaked the debug output so
we don't print errors on missing topology information, but otherwise
there are no changes. I will repost the DT stuff once I've managed to
test the changes to completely discard the DT topology on error; that
should be next week if not this.
arch/arm64/Kconfig | 24 ++++++++++
arch/arm64/include/asm/topology.h | 39 ++++++++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/smp.c | 11 +++++
arch/arm64/kernel/topology.c | 95 +++++++++++++++++++++++++++++++++++++++
5 files changed, 170 insertions(+)
create mode 100644 arch/arm64/include/asm/topology.h
create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 764d682..80e0eb1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -165,6 +165,30 @@ config SMP
If you don't know what to do here, say N.
+config CPU_TOPOLOGY
+ bool "Support CPU topology definition"
+ depends on SMP
+ default y
+ help
+ Support CPU topology definition, based on configuration
+ provided by the firmware.
+
+config SCHED_MC
+ bool "Multi-core scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Multi-core scheduler support improves the CPU scheduler's decision
+ making when dealing with multi-core CPU chips at a cost of slightly
+ increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+ bool "SMT scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Improves the CPU scheduler's decision making when dealing with
+ MultiThreading at a cost of slightly increased overhead in some
+ places. If unsure say N here.
+
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 0000000..c8a47e8
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+ int thread_id;
+ int core_id;
+ int cluster_id;
+ cpumask_t thread_sibling;
+ cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu) (cpu_topology[cpu].cluster_id)
+#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable() (cpu_topology[0].cluster_id != -1)
+#define smt_capable() (cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index e52bcdc..da1dafb 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -21,6 +21,7 @@ arm64-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND) += sleep.o suspend.o
arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
arm64-obj-$(CONFIG_KGDB) += kgdb.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY) += topology.o
obj-y += $(arm64-obj-y) vdso/
obj-m += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5070dc3..f0a141d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
+static void smp_store_cpu_info(unsigned int cpuid)
+{
+ store_cpu_topology(cpuid);
+}
+
/*
* This is the secondary CPU boot entry. We're using this CPUs
* idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@ asmlinkage void secondary_start_kernel(void)
*/
notify_cpu_starting(cpu);
+ smp_store_cpu_info(cpu);
+
/*
* OK, now it's safe to let the boot CPU continue. Wait for
* the CPU migration code to notice that the CPU is online
@@ -391,6 +398,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
int err;
unsigned int cpu, ncores = num_possible_cpus();
+ init_cpu_topology();
+
+ smp_store_cpu_info(smp_processor_id());
+
/*
* are we trying to boot more cores than exist?
*/
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 0000000..91889d7
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,95 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013,2014 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+ struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+ int cpu;
+
+ if (cpuid_topo->cluster_id == -1) {
+ /*
+ * DT does not contain topology information for this cpu
+ * reset it to default behaviour
+ */
+ pr_debug("CPU%u: No topology information configured\n", cpuid);
+ cpuid_topo->core_id = 0;
+ cpumask_set_cpu(cpuid, &cpuid_topo->core_sibling);
+ cpumask_set_cpu(cpuid, &cpuid_topo->thread_sibling);
+ return;
+ }
+
+ /* update core and thread sibling masks */
+ for_each_possible_cpu(cpu) {
+ cpu_topo = &cpu_topology[cpu];
+
+ if (cpuid_topo->cluster_id != cpu_topo->cluster_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+ if (cpuid_topo->core_id != cpu_topo->core_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+ }
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+ update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+ unsigned int cpu;
+
+ /* init core mask and power*/
+ for_each_possible_cpu(cpu) {
+ struct cpu_topology *cpu_topo = &cpu_topology[cpu];
+
+ cpu_topo->thread_id = -1;
+ cpu_topo->core_id = -1;
+ cpu_topo->cluster_id = -1;
+ cpumask_clear(&cpu_topo->core_sibling);
+ cpumask_clear(&cpu_topo->thread_sibling);
+ }
+}
--
1.9.0
We call __find_governor() when the first CPU of every policy is added, to
find the last governor used by this CPU before it was hotplugged out.
After that we call cpufreq_parse_governor() in cpufreq_init_policy() either with
this governor or default governor. And right after that policy->governor is set
to NULL.
So, instead of doing this, move the relevant parts into cpufreq_init_policy()
only and initialize policy->governor to NULL at the beginning.
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
Hi Saravana,
I hope the first two patches alone will fix things for you, but you can
probably test all three.
@Rafael: We might need to get these in for the next rc only, as these are more
or less fixes.
drivers/cpufreq/cpufreq.c | 34 ++++++++++++++++------------------
1 file changed, 16 insertions(+), 18 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index c755b5f..cc4f244 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -879,18 +879,27 @@ err_out_kobj_put:
static void cpufreq_init_policy(struct cpufreq_policy *policy)
{
+ struct cpufreq_governor *gov = NULL;
struct cpufreq_policy new_policy;
int ret = 0;
memcpy(&new_policy, policy, sizeof(*policy));
+ /* Update governor of new_policy to the governor used before hotplug */
+#ifdef CONFIG_HOTPLUG_CPU
+ gov = __find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu));
+#endif
+ if (gov)
+ pr_debug("Restoring governor %s for cpu %d\n",
+ policy->governor->name, policy->cpu);
+ else
+ gov = CPUFREQ_DEFAULT_GOVERNOR;
+
+ new_policy.governor = gov;
+
/* Use the default policy if its valid. */
if (cpufreq_driver->setpolicy)
- cpufreq_parse_governor(policy->governor->name,
- &new_policy.policy, NULL);
-
- /* assure that the starting sequence is run in cpufreq_set_policy */
- policy->governor = NULL;
+ cpufreq_parse_governor(gov->name, &new_policy.policy, NULL);
/* set default policy */
ret = cpufreq_set_policy(policy, &new_policy);
@@ -944,11 +953,11 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu)
unsigned long flags;
read_lock_irqsave(&cpufreq_driver_lock, flags);
-
policy = per_cpu(cpufreq_cpu_data_fallback, cpu);
-
read_unlock_irqrestore(&cpufreq_driver_lock, flags);
+ policy->governor = NULL;
+
return policy;
}
@@ -1036,7 +1045,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
unsigned long flags;
#ifdef CONFIG_HOTPLUG_CPU
struct cpufreq_policy *tpolicy;
- struct cpufreq_governor *gov;
#endif
if (cpu_is_offline(cpu))
@@ -1094,7 +1102,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
else
policy->cpu = cpu;
- policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
cpumask_copy(policy->cpus, cpumask_of(cpu));
init_completion(&policy->kobj_unregister);
@@ -1179,15 +1186,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_START, policy);
-#ifdef CONFIG_HOTPLUG_CPU
- gov = __find_governor(per_cpu(cpufreq_cpu_governor, cpu));
- if (gov) {
- policy->governor = gov;
- pr_debug("Restoring governor %s for cpu %d\n",
- policy->governor->name, cpu);
- }
-#endif
-
if (!frozen) {
ret = cpufreq_add_dev_interface(policy, dev);
if (ret)
--
1.7.12.rc2.18.g61b472e
This is v4 of my attempt to add support for a generic pci_host_bridge controller created
from a description passed in the device tree.
Changes from v3:
- Dynamically allocate bus_range resource in of_create_pci_host_bridge()
- Fix the domain number used when creating child busses.
- Changed domain number allocator to use atomic operations.
- Use ERR_PTR() to propagate the error out of pci_create_root_bus_in_domain()
and of_create_pci_host_bridge().
Changes from v2:
- Use range->cpu_addr when calling pci_address_to_pio()
- Introduce pci_register_io_range() helper function in order to register
io ranges ahead of their conversion to PIO values. This is needed as no
information is being stored yet regarding the range mapping, making
pci_address_to_pio() fail. Default weak implementation does nothing,
to cover the default weak implementation of pci_address_to_pio() that
expects direct mapping of physical addresses into PIO values (x86 view).
Changes from v1:
- Add patch to fix conversion of IO ranges into IO resources.
- Added a domain_nr member to pci_host_bridge structure, and a new function
to create a root bus in a given domain number. In order to facilitate that
I propose changing the order of initialisation between pci_host_bridge and
its related bus in pci_create_root_bus(), as sort of a revert of 7b5436635800.
This is done in patch 1/4 and 2/4.
- Added a simple allocator of domain numbers in drivers/pci/host-bridge.c. The
code will first try to get a domain id from of_alias_get_id(..., "pci-domain")
and if that fails assign the next unallocated domain id (a rough sketch of this
allocator follows the v1 thread link below).
- Changed the name of the function that creates the generic host bridge from
pci_host_bridge_of_init to of_create_pci_host_bridge and exported as GPL symbol.
v1 thread here: https://lkml.org/lkml/2014/2/3/380
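The domain number allocator described in the v1 changes could take roughly the
shape below; this is a sketch of the idea with assumed names, not the exact
code in patch 4/6:

#include <linux/atomic.h>
#include <linux/of.h>

/* Sketch: prefer a "pci-domain" alias from the device tree, otherwise hand
 * out the next unallocated domain number atomically. */
static atomic_t example_domain_nr = ATOMIC_INIT(-1);

static int example_get_domain_nr(struct device_node *node)
{
        int domain = of_alias_get_id(node, "pci-domain");

        if (domain < 0)
                domain = atomic_inc_return(&example_domain_nr);

        return domain;
}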
The following is an edit of the original blurb:
Following the discussion started here [1], I now have a proposal for tackling
generic support for host bridges described via device tree. It is an initial
stab at it, to try to get feedback and suggestions, but it is functional enough
that I have PCI Express for arm64 working on an FPGA using the patch that I am
also publishing that adds support for PCI for that platform.
Looking at the existing architectures that fit the requirements (use of device
tree and PCI) yields the powerpc and microblaze as generic enough to make them
candidates for conversion. I have a tentative patch for microblaze that I can
only compile-test; unfortunately, using qemu-microblaze leads to an early
crash in the kernel.
As Bjorn has mentioned in the previous discussion, the idea is to add to
struct pci_host_bridge enough data to be able to reduce the size or remove the
architecture specific pci_controller structure. arm64 support actually manages
to get rid of all the architecture static data and has no pci_controller structure
defined. For host bridge drivers that means a change of API unless architectures
decide to provide a compatibility layer (comments here please).
In order to initialise a host bridge with the new API, the following example
code is sufficient for a _probe() function:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (IS_ERR(bridge)) {
                err = PTR_ERR(bridge);
                goto bridge_create_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_setup_fail;

        /* We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_setup_fail:
        put_device(&bridge->dev);
        device_unregister(&bridge->dev);
bridge_create_fail:
        kfree(pp);
        return err;
}
[1] http://thread.gmane.org/gmane.linux.kernel.pci/25946
Best regards,
Liviu
Liviu Dudau (6):
pci: Introduce pci_register_io_range() helper function.
pci: OF: Fix the conversion of IO ranges into IO resources.
pci: Create pci_host_bridge before its associated bus in pci_create_root_bus.
pci: Introduce a domain number for pci_host_bridge.
pci: Use parent domain number when allocating child busses.
pci: Add support for creating a generic host_bridge from device tree
drivers/of/address.c | 39 +++++++++++++
drivers/pci/host-bridge.c | 139 +++++++++++++++++++++++++++++++++++++++++++++
drivers/pci/probe.c | 72 +++++++++++++++--------
include/linux/of_address.h | 14 +----
include/linux/pci.h | 17 ++++++
5 files changed, 246 insertions(+), 35 deletions(-)
--
1.9.0
Hi,
This patch adds support for PCI to AArch64. It is based on my v4 patch
that adds support for creating a generic host bridge structure from the
device tree. With that in place, I was able to boot a platform that
has PCIe host bridge support and use a PCIe network card.
Changes from v3:
- Added Acks accumulated so far ;)
- Still carrying Catalin's patch for moving the PCI_IO_BASE until it
lands in linux-next or mainline, in order to ease applying the series
Changes from v2:
- Implement an arch specific version of pci_register_io_range() and
pci_address_to_pio().
- Return 1 from pci_proc_domain().
Changes from v1:
- Added Catalin's patch for moving the PCI_IO_BASE location and extend
its size to 16MB
- Integrated Arnd's version of pci_ioremap_io that uses a bitmap for
keeping track of assigned IO space and returns an io_offset. At the
moment the code is added in arch/arm64 but it can be moved in drivers/pci.
- Added a fix for the generic ioport_map() function when !CONFIG_GENERIC_IOMAP
as suggested by Arnd.
The API used is different from the one used by the ARM architecture. There is
no pci_common_init_dev() function and no hw_pci structure, as that is no
longer needed. Once the last signature is added to the legal agreement, I
will post the host bridge driver code that I am using. Meanwhile, here
is an example of what the probe function looks like:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (IS_ERR(bridge)) {
                err = PTR_ERR(bridge);
                goto bridge_create_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_setup_fail;

        /* We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_setup_fail:
        put_device(&bridge->dev);
        device_unregister(&bridge->dev);
bridge_create_fail:
        kfree(pp);
        return err;
}
Best regards,
Liviu
Catalin Marinas (1):
arm64: Extend the PCI I/O space to 16MB
Liviu Dudau (2):
Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases.
arm64: Add architecture support for PCI
Documentation/arm64/memory.txt | 16 ++--
arch/arm64/Kconfig | 19 ++++-
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/io.h | 5 +-
arch/arm64/include/asm/pci.h | 47 +++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pci.c | 178 +++++++++++++++++++++++++++++++++++++++++
include/asm-generic/io.h | 2 +-
8 files changed, 259 insertions(+), 10 deletions(-)
create mode 100644 arch/arm64/include/asm/pci.h
create mode 100644 arch/arm64/kernel/pci.c
--
1.9.0
From: Vijaya Kumar K <Vijaya.Kumar(a)caviumnetworks.com>
Based on ARM64 KGDB support patches, KGDB support for
FPSIMD is added. Only debugging of FPSIMD kernel context
is supported.
This patch requires Ard's patches for in-kernel FPSIMD support and the
patch below for holding the thread's fpsimd state:
http://permalink.gmane.org/gmane.linux.ports.arm.kernel/277228
So CONFIG_KERNEL_MODE_NEON should be enabled.
With this, FPSIMD registers can be viewed or set from the gdb tool.
Unlike CPU registers, the FPSIMD registers are not saved on
exception entry. With the known restriction that FPSIMD should not be
touched in interrupt/exception context, in this patch the FPSIMD
registers are directly read/written on gdb tool request.
Here, the FPSIMD registers are read and restored for every FPSIMD register
read and write requested by the GDB tool, so the impact on gdb response
time is negligible. Other architectures like mips implement this similarly.
v2 changes:
- Added API to know thread fpsimd state by checking
TIF_FOREIGN_FPSTATE flag. This is based on below patch
http://permalink.gmane.org/gmane.linux.ports.arm.kernel/277228
- Allow FPSIMD registers access only when FPSIMD is under use
by current thread
v1 changes:
- Initial patch
Tested on ARM64 simulator
Vijaya Kumar K (1):
ARM64: KGDB: Add FP/SIMD debug support
arch/arm64/include/asm/fpsimd.h | 1 +
arch/arm64/kernel/fpsimd.c | 5 ++
arch/arm64/kernel/kgdb.c | 105 +++++++++++++++++++++++++--------------
3 files changed, 73 insertions(+), 38 deletions(-)
--
1.7.9.5
In order to allow better integration between the cpuidle framework and the
scheduler, we reduce the distance between these two sub-components by moving
part of the cpuidle code into the idle task file; because idle.c is in the
sched directory, we then have access to the scheduler's private structures.
This patch splits the cpuidle_idle_call main entry function into 3 calls
to a newly added API:
1. select the idle state
2. enter the idle state
3. reflect the idle state
The cpuidle_idle_call calls these three functions to implement the main
idle entry function.
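For illustration, a caller of the new split API (the scheduler's idle loop, in
a later patch of this series) is expected to do roughly the following; this is
a simplified sketch with the broadcast timer and error handling trimmed:

#include <linux/cpuidle.h>
#include <linux/percpu.h>

static void example_idle(void)
{
        struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
        struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
        int state;

        if (cpuidle_enabled(drv, dev))
                return;                         /* fall back to default idle */

        state = cpuidle_select(drv, dev);       /* 1. pick an idle state */
        state = cpuidle_enter(drv, dev, state); /* 2. enter it */
        cpuidle_reflect(dev, state);            /* 3. tell the governor */
}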
Signed-off-by: Daniel Lezcano <daniel.lezcano(a)linaro.org>
Acked-by: Nicolas Pitre <nico(a)linaro.org>
---
ChangeLog:
V4:
* rebased on top of tip/master
V3:
* moved broadcast timer outside of cpuidle_enter() as suggested by
Preeti U Murthy : https://lkml.org/lkml/2014/2/24/601
V2:
* splitted cpuidle_select error check into 'cpuidle_enabled' function
---
drivers/cpuidle/cpuidle.c | 96 +++++++++++++++++++++++++++++++++++-----------
include/linux/cpuidle.h | 19 +++++++++
2 files changed, 94 insertions(+), 21 deletions(-)
Index: cpuidle-next/drivers/cpuidle/cpuidle.c
===================================================================
--- cpuidle-next.orig/drivers/cpuidle/cpuidle.c
+++ cpuidle-next/drivers/cpuidle/cpuidle.c
@@ -65,6 +65,26 @@ int cpuidle_play_dead(void)
}
/**
+ * cpuidle_enabled - check if the cpuidle framework is ready
+ * @dev: cpuidle device for this cpu
+ * @drv: cpuidle driver for this cpu
+ *
+ * Return 0 on success, otherwise:
+ * -ENODEV : the cpuidle framework is not available
+ * -EBUSY : the cpuidle framework is not initialized
+ */
+int cpuidle_enabled(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+ if (off || !initialized)
+ return -ENODEV;
+
+ if (!drv || !dev || !dev->enabled)
+ return -EBUSY;
+
+ return 0;
+}
+
+/**
* cpuidle_enter_state - enter the state and update stats
* @dev: cpuidle device for this cpu
* @drv: cpuidle driver for this cpu
@@ -108,6 +128,51 @@ int cpuidle_enter_state(struct cpuidle_d
}
/**
+ * cpuidle_select - ask the cpuidle framework to choose an idle state
+ *
+ * @drv: the cpuidle driver
+ * @dev: the cpuidle device
+ *
+ * Returns the index of the idle state.
+ */
+int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+ return cpuidle_curr_governor->select(drv, dev);
+}
+
+/**
+ * cpuidle_enter - enter into the specified idle state
+ *
+ * @drv: the cpuidle driver tied with the cpu
+ * @dev: the cpuidle device
+ * @index: the index in the idle state table
+ *
+ * Returns the index of the entered idle state, < 0 in case of error.
+ * The error code depends on the backend driver
+ */
+int cpuidle_enter(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+ int index)
+{
+ if (cpuidle_state_is_coupled(dev, drv, index))
+ return cpuidle_enter_state_coupled(dev, drv, index);
+ return cpuidle_enter_state(dev, drv, index);
+}
+
+/**
+ * cpuidle_reflect - tell the underlying governor what was the state
+ * we were in
+ *
+ * @dev : the cpuidle device
+ * @index: the index in the idle state table
+ *
+ */
+void cpuidle_reflect(struct cpuidle_device *dev, int index)
+{
+ if (cpuidle_curr_governor->reflect)
+ cpuidle_curr_governor->reflect(dev, index);
+}
+
+/**
* cpuidle_idle_call - the main idle loop
*
* NOTE: no locks or semaphores should be used here
@@ -116,26 +181,21 @@ int cpuidle_enter_state(struct cpuidle_d
int cpuidle_idle_call(void)
{
struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
- struct cpuidle_driver *drv;
- int next_state, entered_state;
+ struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
+ int next_state, entered_state, ret;
bool broadcast;
- if (off || !initialized)
- return -ENODEV;
-
- /* check if the device is ready */
- if (!dev || !dev->enabled)
- return -EBUSY;
-
- drv = cpuidle_get_cpu_driver(dev);
+ ret = cpuidle_enabled(drv, dev);
+ if (ret < 0)
+ return ret;
/* ask the governor for the next state */
- next_state = cpuidle_curr_governor->select(drv, dev);
+ next_state = cpuidle_select(drv, dev);
+
if (need_resched()) {
dev->last_residency = 0;
/* give the governor an opportunity to reflect on the outcome */
- if (cpuidle_curr_governor->reflect)
- cpuidle_curr_governor->reflect(dev, next_state);
+ cpuidle_reflect(dev, next_state);
local_irq_enable();
return 0;
}
@@ -146,14 +206,9 @@ int cpuidle_idle_call(void)
clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &dev->cpu))
return -EBUSY;
-
trace_cpu_idle_rcuidle(next_state, dev->cpu);
- if (cpuidle_state_is_coupled(dev, drv, next_state))
- entered_state = cpuidle_enter_state_coupled(dev, drv,
- next_state);
- else
- entered_state = cpuidle_enter_state(dev, drv, next_state);
+ entered_state = cpuidle_enter(drv, dev, next_state);
trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
@@ -161,8 +216,7 @@ int cpuidle_idle_call(void)
clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &dev->cpu);
/* give the governor an opportunity to reflect on the outcome */
- if (cpuidle_curr_governor->reflect)
- cpuidle_curr_governor->reflect(dev, entered_state);
+ cpuidle_reflect(dev, entered_state);
return 0;
}
Index: cpuidle-next/include/linux/cpuidle.h
===================================================================
--- cpuidle-next.orig/include/linux/cpuidle.h
+++ cpuidle-next/include/linux/cpuidle.h
@@ -119,6 +119,15 @@ struct cpuidle_driver {
#ifdef CONFIG_CPU_IDLE
extern void disable_cpuidle(void);
+
+extern int cpuidle_enabled(struct cpuidle_driver *drv,
+ struct cpuidle_device *dev);
+extern int cpuidle_select(struct cpuidle_driver *drv,
+ struct cpuidle_device *dev);
+extern int cpuidle_enter(struct cpuidle_driver *drv,
+ struct cpuidle_device *dev, int index);
+extern void cpuidle_reflect(struct cpuidle_device *dev, int index);
+
extern int cpuidle_idle_call(void);
extern int cpuidle_register_driver(struct cpuidle_driver *drv);
extern struct cpuidle_driver *cpuidle_get_driver(void);
@@ -141,6 +150,16 @@ extern int cpuidle_play_dead(void);
extern struct cpuidle_driver *cpuidle_get_cpu_driver(struct cpuidle_device *dev);
#else
static inline void disable_cpuidle(void) { }
+static inline int cpuidle_enabled(struct cpuidle_driver *drv,
+ struct cpuidle_device *dev)
+{return -ENODEV; }
+static inline int cpuidle_select(struct cpuidle_driver *drv,
+ struct cpuidle_device *dev)
+{return -ENODEV; }
+static inline int cpuidle_enter(struct cpuidle_driver *drv,
+ struct cpuidle_device *dev, int index)
+{return -ENODEV; }
+static inline void cpuidle_reflect(struct cpuidle_device *dev, int index) { }
static inline int cpuidle_idle_call(void) { return -ENODEV; }
static inline int cpuidle_register_driver(struct cpuidle_driver *drv)
{return -ENODEV; }
This patchset creates cpufreq suspend/resume callbacks and calls them from
dpm_{suspend|resume}() to handle suspend/resume of the cpufreq governors and core.
There are multiple problems that this series fixes:
- Nishanth Menon (TI) found an interesting problem on his platform, OMAP. His board
wasn't suspending/resuming reliably: removing the non-boot CPUs during suspend
ended up calling the driver's ->target() routine, which in turn touches the
regulators. But the regulators and their I2C bus were already suspended, so
this resulted in a failure. Many platforms (Samsung, Tegra, etc.) have the same
problem and worked around it with driver-specific PM notifiers that disable the
driver's ->target() routine.
- Lan Tianyu (Intel) & Jinhyuk Choi (Broadcom) found an issue where the tunables
configuration for clusters/sockets containing only non-boot CPUs was getting lost
across suspend/resume: governors were notified with CPUFREQ_GOV_POLICY_EXIT on
removal of the last CPU of a policy, which freed the memory holding the
tunables. This series fixes that by not allowing any operation on governors
between device suspend and device resume; a rough sketch of the resulting
ordering follows.
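Here is that sketch. It assumes the suspend/resume helpers added by this series
are named cpufreq_suspend()/cpufreq_resume(); the example_* functions are only
placeholders for the real dpm_{suspend|resume}() hooks in
drivers/base/power/main.c:

#include <linux/cpufreq.h>
#include <linux/pm.h>

/* Sketch only: where the cpufreq hooks sit relative to device PM. */
static int example_dpm_suspend(pm_message_t state)
{
        /*
         * Quiesce cpufreq (stop governors, call the driver's suspend hook)
         * while regulators and I2C controllers are still functional.
         */
        cpufreq_suspend();

        /* ... suspend devices as before ... */
        return 0;
}

static void example_dpm_resume(pm_message_t state)
{
        /* ... devices have already been resumed ... */

        /* only now restart the governors and call the driver's resume hook */
        cpufreq_resume();
}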
This has already been tested by a few people, so their Tested-by tags are
incorporated as well.
I have also re-tested this on top of the latest tree on my ThinkPad over
several suspend/resume cycles.
V6->V7:
- Fixed crash reported by S.Warren on systems without a cpufreq driver.
- Moved some function doc-comments to patch 1 from a later patch.
For-v3.15
Viresh Kumar (7):
cpufreq: suspend governors on system suspend/hibernate
cpufreq: suspend governors from dpm_{suspend|resume}()
cpufreq: call driver's suspend/resume for each policy
cpufreq: Implement cpufreq_generic_suspend()
cpufreq: exynos: Use cpufreq_generic_suspend()
cpufreq: s5pv210: Use cpufreq_generic_suspend()
cpufreq: Tegra: Use cpufreq_generic_suspend()
drivers/base/power/main.c | 5 ++
drivers/cpufreq/cpufreq.c | 137 +++++++++++++++++++++++---------------
drivers/cpufreq/exynos-cpufreq.c | 96 ++------------------------
drivers/cpufreq/s5pv210-cpufreq.c | 49 +-------------
drivers/cpufreq/tegra-cpufreq.c | 46 ++-----------
include/linux/cpufreq.h | 11 +++
6 files changed, 113 insertions(+), 231 deletions(-)
--
1.7.12.rc2.18.g61b472e
This is v3 of my attempt to add support for a generic pci_host_bridge controller created
from a description passed in the device tree.
Changes from v2:
- Use range->cpu_addr when calling pci_address_to_pio()
- Introduce a pci_register_io_range() helper function in order to register
IO ranges ahead of their conversion to PIO values. This is needed because no
information about the range mapping is stored yet, which makes
pci_address_to_pio() fail. The default weak implementation does nothing,
matching the default weak implementation of pci_address_to_pio(), which
expects a direct mapping of physical addresses to PIO values (the x86 view).
Changes from v1:
- Add patch to fix conversion of IO ranges into IO resources.
- Added a domain_nr member to the pci_host_bridge structure, and a new function
to create a root bus in a given domain number. To facilitate that I propose
changing the order of initialisation between the pci_host_bridge and its
related bus in pci_create_root_bus(), as sort of a revert of 7b5436635800.
This is done in patches 1/4 and 2/4.
- Added a simple allocator of domain numbers in drivers/pci/host-bridge.c. The
code first tries to get a domain id from of_alias_get_id(..., "pci-domain")
and, if that fails, assigns the next unallocated domain id (see the sketch
after this list).
- Changed the name of the function that creates the generic host bridge from
pci_host_bridge_of_init to of_create_pci_host_bridge and exported it as a GPL
symbol.
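As a rough sketch of the domain id allocation described above (illustrative
names only, not the actual code in drivers/pci/host-bridge.c):

#include <linux/atomic.h>
#include <linux/of.h>

/* Illustrative only. */
static atomic_t example_domain_nr = ATOMIC_INIT(-1);

static int example_assign_domain_nr(struct device_node *node)
{
        /* prefer an explicit "pci-domain" alias from the device tree */
        int domain = of_alias_get_id(node, "pci-domain");

        /*
         * Otherwise hand out the next unallocated id. A real implementation
         * would also have to avoid ids already claimed through aliases.
         */
        if (domain < 0)
                domain = atomic_inc_return(&example_domain_nr);

        return domain;
}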
v1 thread here: https://lkml.org/lkml/2014/2/3/380
The following is an edit of the original blurb:
Following the discussion started here [1], I now have a proposal for tackling
generic support for host bridges described via device tree. It is an initial
stab at it, intended to gather feedback and suggestions, but it is functional
enough that I have PCI Express for arm64 working on an FPGA, using the patch I
am also publishing that adds PCI support for that platform.
Looking at the existing architectures that fit the requirements (use of device
tree and PCI), powerpc and microblaze are generic enough to be candidates for
conversion. I have a tentative patch for microblaze that I can only
compile-test; unfortunately, running it under qemu-microblaze leads to an early
crash in the kernel.
As Bjorn mentioned in the previous discussion, the idea is to add enough data
to struct pci_host_bridge to be able to shrink or remove the
architecture-specific pci_controller structure. The arm64 support actually
manages to get rid of all the architecture's static data and defines no
pci_controller structure at all. For host bridge drivers that means a change
of API, unless architectures decide to provide a compatibility layer
(comments here please).
In order to initialise a host bridge with the new API, the following example
code is sufficient for a _probe() function:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (!bridge) {
                err = -EINVAL;
                goto bridge_init_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_init_fail;

        /*
         * We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS | PCI_COMPAT_DOMAIN_0);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_init_fail:
        kfree(pp);
        return err;
}
[1] http://thread.gmane.org/gmane.linux.kernel.pci/25946
Best regards,
Liviu
Liviu Dudau (5):
pci: Introduce pci_register_io_range() helper function.
pci: OF: Fix the conversion of IO ranges into IO resources.
pci: Create pci_host_bridge before its associated bus in pci_create_root_bus.
pci: Introduce a domain number for pci_host_bridge.
pci: Add support for creating a generic host_bridge from device tree
drivers/of/address.c | 39 +++++++++++++
drivers/pci/host-bridge.c | 134 +++++++++++++++++++++++++++++++++++++++++++++
drivers/pci/probe.c | 66 ++++++++++++++--------
include/linux/of_address.h | 14 +----
include/linux/pci.h | 17 ++++++
5 files changed, 236 insertions(+), 34 deletions(-)
--
1.9.0
Automated DT boot report for various ARM defconfigs.
Tree/Branch: arm-soc
Git describe: v3.14-rc3-479-g87b00e7
Failed boot tests (console logs at the end)
===========================================
Full Report
===========
multi_v7_defconfig
------------------
omap4-panda PASS: 0 min 57.0 sec
omap3-overo-storm-tobi PASS: 0 min 24.2 sec
sun7i-a20-cubieboard2 PASS: 0 min 14.6 sec
armada-370-mirabox PASS: 0 min 22.4 sec
omap4-panda-es DEAD: 0 min 0.0 sec
Console logs for failures
=========================
multi_v7_defconfig
------------------
omap4-panda-es: DEAD: last 80 lines of boot log:
------------------------------------------------
In: serial
Out: serial
Err: serial
Net: No ethernet found.
Hit any key to stop autoboot:
3 0
Panda #
Panda #version
version
U-Boot 2013.04 (May 24 2013 - 14:00:03)
arm-linux-gnueabi-gcc (Ubuntu/Linaro 4.7.1-5ubuntu1~ppa1) 4.7.1
GNU ld (GNU Binutils for Ubuntu) 2.22.52.20120713
Panda # setenv usbethaddr 2e:40:70:f0:12:07
setenv usbethaddr 2e:40:70:f0:12:07
Panda # setenv preboot usb start
setenv preboot usb start
Panda # setenv initrd_high 0xffffffff
setenv initrd_high 0xffffffff
Panda # setenv bootargs console=ttyO2,115200n8 debug earlyprintk rw root=/dev/mmcblk0p2 rootwait rootfstype=ext4
setenv bootargs console=ttyO2,115200n8 debug earlyprintk rw root=/dev/mmcblk0p2 rootwait rootfstype=ext4
Panda # setenv netargs 'setenv bootargs ${bootargs} ip=${ipaddr}:${serverip}:${gatewayip}:${netmask}::::192.168.1.254:none'
setenv netargs 'setenv bootargs ${bootargs} ip=${ipaddr}:${serverip}:${gatewayip}:${netmask}::::192.168.1.254:none'
Panda # if test -n ${initenv}; then run initenv; fi
if test -n ${initenv}; then run initenv; fi
Panda # if test -n ${preboot}; then run preboot; fi
if test -n ${preboot}; then run preboot; fi
(Re)start USB...
USB0: USB EHCI 1.00
scanning bus 0 for devices... 3 USB Device(s) found
scanning usb for storage devices... 0 Storage Device(s) found
scanning usb for ethernet devices... 1 Ethernet Device(s) found
Panda # setenv autoload no; setenv autoboot no
setenv autoload no; setenv autoboot no
Panda # dhcp
dhcp
Waiting for Ethernet connection... done.
BOOTP broadcast 1
BOOTP broadcast 2
DHCP client bound to address 192.168.1.157
Panda # setenv serverip 192.168.1.2
setenv serverip 192.168.1.2
Panda # if test -n ${netargs}; then run netargs; fi
if test -n ${netargs}; then run netargs; fi
Panda # tftp 0x82000000 192.168.1.2:tmp/4460panda-es-RLKupu/zImage
tftp 0x82000000 192.168.1.2:tmp/4460panda-es-RLKupu/zImage
Waiting for Ethernet connection... done.
Using sms0 device
TFTP from server 192.168.1.2; our IP address is 192.168.1.157
Filename 'tmp/4460panda-es-RLKupu/zImage'.
Load address: 0x82000000
Loading: *EHCI timed out on TD - token=0x8008d80
T #################################################################
#################################################################
#################################################################
#################################################################
###############################################
397.5 KiB/s
done
Bytes transferred = 4500504 (44ac18 hex)
Panda # tftp 0x81000000 192.168.1.2:buildroot.cpio.gz.uboot
tftp 0x81000000 192.168.1.2:buildroot.cpio.gz.uboot
Waiting for Ethernet connection... done.
Using sms0 device
TFTP from server 192.168.1.2; our IP address is 192.168.1.157
Filename 'buildroot.cpio.gz.uboot'.
Load address: 0x81000000
Loading: *EHCI timed out on TD - token=0x88008d80
T ############################################
118.2 KiB/s
done
Bytes transferred = 642602 (9ce2a hex)
Panda # tftp 0x81f00000 192.168.1.2:tmp/4460panda-es-RLKupu/omap4-panda-es.dtb
tftp 0x81f00000 192.168.1.2:tmp/4460panda-es-RLKupu/omap4-panda-es.dtb
Waiting for Ethernet connection... done.
Using sms0 device
TFTP from server 192.168.1.2; our IP address is 192.168.1.157
Filename 'tmp/4460panda-es-RLKupu/omap4-panda-es.dtb'.
Load address: 0x81f00000
Loading: *
Hi,
This patch adds support for PCI to AArch64. It is based on my v2 patch
that adds support for creating generic host bridge structure from
device tree. With that in place, I was able to boot a platform that
has PCIe host bridge support and use a PCIe network card.
Changes from v1:
- Added Catalin's patch for moving the PCI_IO_BASE location and extend
its size to 16MB
- Integrated Arnd's version of pci_ioremap_io that uses a bitmap for
keeping track of assigned IO space and returns an io_offset. At the
moment the code is added in arch/arm64 but it can be moved in drivers/pci.
- Added a fix for the generic ioport_map() function when !CONFIG_GENERIC_IOMAP
as suggested by Arnd.
The API used is different from the one used by the ARM architecture. There is
no pci_common_init_dev() function and no hw_pci structure, as they are no
longer needed. Once the last signature is added to the legal agreement, I
will post the host bridge driver code that I am using. Meanwhile, here
is an example of what the probe function looks like:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (!bridge) {
                err = -EINVAL;
                goto bridge_init_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_init_fail;

        /*
         * We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS | PCI_COMPAT_DOMAIN_0);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_init_fail:
        kfree(pp);
        return err;
}
Best regards,
Liviu
Catalin Marinas (1):
arm64: Extend the PCI I/O space to 16MB
Liviu Dudau (2):
Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases.
arm64: Add architecture support for PCI
Documentation/arm64/memory.txt | 16 ++++--
arch/arm64/Kconfig | 19 ++++++-
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/io.h | 5 +-
arch/arm64/include/asm/pci.h | 47 +++++++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pci.c | 126 +++++++++++++++++++++++++++++++++++++++++
include/asm-generic/io.h | 2 +-
8 files changed, 207 insertions(+), 10 deletions(-)
create mode 100644 arch/arm64/include/asm/pci.h
create mode 100644 arch/arm64/kernel/pci.c
--
1.9.0
Hi,
This patch adds support for PCI to AArch64. It is based on my v3 patch
that adds support for creating generic host bridge structure from
device tree. With that in place, I was able to boot a platform that
has PCIe host bridge support and use a PCIe network card.
Changes from v2:
- Implement an arch specific version of pci_register_io_range() and
pci_address_to_pio().
- Return 1 from pci_proc_domain().
Changes from v1:
- Added Catalin's patch for moving the PCI_IO_BASE location and extend
its size to 16MB
- Integrated Arnd's version of pci_ioremap_io that uses a bitmap for
keeping track of assigned IO space and returns an io_offset. At the
moment the code is added in arch/arm64 but it can be moved in drivers/pci.
- Added a fix for the generic ioport_map() function when !CONFIG_GENERIC_IOMAP
as suggested by Arnd.
The API used is different from the one used by the ARM architecture. There is
no pci_common_init_dev() function and no hw_pci structure, as they are no
longer needed. Once the last signature is added to the legal agreement, I
will post the host bridge driver code that I am using. Meanwhile, here
is an example of what the probe function looks like:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (!bridge) {
                err = -EINVAL;
                goto bridge_init_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_init_fail;

        /*
         * We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS | PCI_COMPAT_DOMAIN_0);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_init_fail:
        kfree(pp);
        return err;
}
Best regards,
Liviu
Catalin Marinas (1):
arm64: Extend the PCI I/O space to 16MB
Liviu Dudau (2):
Fix ioport_map() for !CONFIG_GENERIC_IOMAP cases.
arm64: Add architecture support for PCI
Documentation/arm64/memory.txt | 16 ++--
arch/arm64/Kconfig | 19 ++++-
arch/arm64/include/asm/Kbuild | 1 +
arch/arm64/include/asm/io.h | 5 +-
arch/arm64/include/asm/pci.h | 47 +++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/pci.c | 178 +++++++++++++++++++++++++++++++++++++++++
include/asm-generic/io.h | 2 +-
8 files changed, 259 insertions(+), 10 deletions(-)
create mode 100644 arch/arm64/include/asm/pci.h
create mode 100644 arch/arm64/kernel/pci.c
--
1.9.0
Hi Again,
I have now succeeded in isolating a CPU completely using cpusets,
NO_HZ_FULL and CPU hotplug.
My setup and requirements, for those who weren't following the
earlier mails:
On networking machines we need to run data-plane threads on some CPUs
(i.e. one thread per CPU), and those CPUs shouldn't be interrupted by
the kernel at all.
Earlier I tried cpusets with NO_HZ by creating two groups with load
balancing disabled between them, and manually moved all tasks into the
CPU0 group. But even then interruptions kept arriving on CPU1 (the CPU
I am trying to isolate): some workqueue events, some timers (like
prandom), and timer overflow events (NO_HZ_FULL pushes the hrtimer far
into the future, 450 seconds, rather than disabling it completely, and
the hardware timers on the Samsung Exynos board overflow their counters
after 90 seconds).
So after creating the cpusets I hot-unplugged CPU1 and added it back
immediately. This moved all of these interruptions away, and CPU1 now
runs my single thread ("stress") forever.
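For reference, the hot-unplug/re-add bounce is just two sysfs writes; a
minimal userspace sketch (using the standard sysfs path for CPU1, error
handling omitted) looks like this:

#include <stdio.h>

/* write a short string to a sysfs file */
static void write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (f) {
                fputs(val, f);
                fclose(f);
        }
}

int main(void)
{
        /* bounce CPU1 offline and back online to migrate pinned work away */
        write_str("/sys/devices/system/cpu/cpu1/online", "0");
        write_str("/sys/devices/system/cpu/cpu1/online", "1");
        return 0;
}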
Now my question is: is there anything particularly wrong with using
hotplug here? Will it lead to a disaster? :)
Thanks in Advance.
--
viresh