From: Mark Brown <broonie(a)linaro.org>
We no longer have any users of hw_read() in the kernel, so remove the
code in order to prevent any new users from being added. Users should be
using regmap instead.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
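For reference, once hw_read() is gone a codec driver reads registers through
its regmap directly. A minimal sketch, assuming the regmap is kept in
control_data as soc-io already does for hw_write(); the mycodec_* name is
hypothetical and not part of this patch:

#include <linux/regmap.h>
#include <sound/soc.h>

/* Hypothetical helper: read a codec register through the regmap. */
static int mycodec_read(struct snd_soc_codec *codec, unsigned int reg,
                        unsigned int *val)
{
        return regmap_read(codec->control_data, reg, val);
}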
include/sound/soc.h | 1 -
sound/soc/soc-io.c | 13 -------------
2 files changed, 14 deletions(-)
diff --git a/include/sound/soc.h b/include/sound/soc.h
index 56c4c71..dc36331 100644
--- a/include/sound/soc.h
+++ b/include/sound/soc.h
@@ -706,7 +706,6 @@ struct snd_soc_codec {
/* codec IO */
void *control_data; /* codec control (i2c/3wire) data */
hw_write_t hw_write;
- unsigned int (*hw_read)(struct snd_soc_codec *, unsigned int);
unsigned int (*read)(struct snd_soc_codec *, unsigned int);
int (*write)(struct snd_soc_codec *, unsigned int, unsigned int);
void *reg_cache;
diff --git a/sound/soc/soc-io.c b/sound/soc/soc-io.c
index c3b6a0a..3c1de9c 100644
--- a/sound/soc/soc-io.c
+++ b/sound/soc/soc-io.c
@@ -26,18 +26,6 @@ static int hw_write(struct snd_soc_codec *codec, unsigned int reg,
return regmap_write(codec->control_data, reg, value);
}
-static unsigned int hw_read(struct snd_soc_codec *codec, unsigned int reg)
-{
- int ret;
- unsigned int val;
-
- ret = regmap_read(codec->control_data, reg, &val);
- if (ret == 0)
- return val;
- else
- return -1;
-}
-
/**
* snd_soc_codec_set_cache_io: Set up standard I/O functions.
*
@@ -64,7 +52,6 @@ int snd_soc_codec_set_cache_io(struct snd_soc_codec *codec,
int ret;
codec->write = hw_write;
- codec->read = hw_read;
switch (control) {
case SND_SOC_REGMAP:
--
1.9.0
From: Mark Brown <broonie(a)linaro.org>
There is no point in using FIQ if DMA is available (it is selected), and
selecting FIQ currently breaks the build on non-i.MX platforms. If FIQ
is actually required, the build will need to be restricted or a select
for the relevant FIQ code added.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
sound/soc/fsl/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
index a7d6ab6..b244830 100644
--- a/sound/soc/fsl/Kconfig
+++ b/sound/soc/fsl/Kconfig
@@ -175,7 +175,6 @@ config SND_SOC_EUKREA_TLV320
|| (OF && ARM)
depends on I2C
select SND_SOC_TLV320AIC23
- select SND_SOC_IMX_PCM_FIQ
select SND_SOC_IMX_AUDMUX
select SND_SOC_IMX_SSI
select SND_SOC_FSL_SSI
--
1.9.0
Patches adding support for hibernation on ARM
- ARM hibernation / suspend-to-disk
- Change soft_restart to use non-tracing raw_local_irq_disable
Patches are based on the v3.13 tag. Hibernation was verified on a BeagleBone
Black using a branch based on 3.13 merged with the initial OMAP support from
Russ Dill, which can be found here (includes the v1 patchset):
http://git.linaro.org/git-ro/people/sebastian.capella/linux.git hibernation_3.13_russMerge
[PATCH v6 1/2] ARM: avoid tracers in soft_restart
arch/arm/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Use raw_local_irq_disable in place of local_irq_disable to avoid
infinite abort recursion while tracing. (unchanged since v3)
[PATCH v6 2/2] ARM hibernation / suspend-to-disk
arch/arm/include/asm/memory.h | 1 +
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/hibernate.c | 113 +++++++++++++++++++++++++++++++++++++++++
arch/arm/mm/Kconfig | 5 ++
include/linux/suspend.h | 2 +
5 files changed, 122 insertions(+)
Adds support for ARM based hibernation
Additional notes:
-----------------
There are two checkpatch warnings added by this patch. These follow
behavior in existing hibernation implementations on other platforms.
WARNING: externs should be avoided in .c files
#116: FILE: arch/arm/kernel/hibernate.c:26:
+extern const void __nosave_begin, __nosave_end;
This extern picks up the linker nosave region definitions, which are only
used by hibernate. It follows the same extern line used by mips, powerpc,
s390, sh, sparc, x86 and unicore32.
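For context, these nosave externs are typically consumed by pfn_is_nosave().
A minimal sketch following the mips/sh pattern; the actual
arch/arm/kernel/hibernate.c implementation may differ in detail:

#include <linux/pfn.h>
#include <asm/memory.h>

extern const void __nosave_begin, __nosave_end;

int pfn_is_nosave(unsigned long pfn)
{
        unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
        unsigned long nosave_end_pfn = PFN_UP(__pa(&__nosave_end));

        /* Pages in the linker nosave region are not saved or restored. */
        return (pfn >= nosave_begin_pfn) && (pfn < nosave_end_pfn);
}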
WARNING: externs should be avoided in .c files
#199: FILE: arch/arm/kernel/hibernate.c:109:
+extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
This extern is used in arch/arm/ by the hibernate, process and bL_switcher
code.
Changes in v6:
--------------
* Simplify static variable names
Changes in v5:
--------------
* Fixed checkpatch warning on trailing whitespace
Changes in v4:
--------------
* updated comment for soft_restart with review feedback
* dropped freeze_processes patch which was queued separately
to 3.14 by Rafael Wysocki:
https://lkml.org/lkml/2014/2/25/683
Changes in v3:
--------------
* added comment to use of soft_restart
* drop irq disable soft_restart patch
* add patch to avoid tracers in soft_restart by using raw_local_irq_*
Changes in v2:
--------------
* Removed unneeded flush_thread, use of __naked and cpu_init.
* dropped Cyril Chemparathy <cyril(a)ti.com> from Cc: list as
emails are bouncing.
Thanks,
Sebastian Capella
From: Mark Brown <broonie(a)linaro.org>
Add basic CPU topology support to arm64, based on the existing pre-v8
code and some work done by Mark Hambleton. This patch does not
implement any topology discovery support since that should be based on
information from firmware; it merely implements the scaffolding for
integration of topology support in the architecture.
No locking of the topology data is done since it is only modified during
CPU bringup with external serialisation from the SMP code.
The goal is to separate the architecture hookup for providing topology
information from the DT parsing, in order to ease review and avoid
blocking the architecture code (which will be built on by other work)
on the DT code review, by providing something simple and basic.
Following patches will implement support for interpreting topology
information from MPIDR and for parsing the DT topology bindings for ARM;
similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
Acked-by: Mark Rutland <mark.rutland(a)arm.com>
---
arch/arm64/Kconfig | 24 ++++++++++
arch/arm64/include/asm/topology.h | 39 ++++++++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/smp.c | 11 +++++
arch/arm64/kernel/topology.c | 95 +++++++++++++++++++++++++++++++++++++++
5 files changed, 170 insertions(+)
create mode 100644 arch/arm64/include/asm/topology.h
create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 764d682..80e0eb1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -165,6 +165,30 @@ config SMP
If you don't know what to do here, say N.
+config CPU_TOPOLOGY
+ bool "Support CPU topology definition"
+ depends on SMP
+ default y
+ help
+ Support CPU topology definition, based on configuration
+ provided by the firmware.
+
+config SCHED_MC
+ bool "Multi-core scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Multi-core scheduler support improves the CPU scheduler's decision
+ making when dealing with multi-core CPU chips at a cost of slightly
+ increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+ bool "SMT scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Improves the CPU scheduler's decision making when dealing with
+ MultiThreading at a cost of slightly increased overhead in some
+ places. If unsure say N here.
+
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 0000000..c8a47e8
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+ int thread_id;
+ int core_id;
+ int cluster_id;
+ cpumask_t thread_sibling;
+ cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu) (cpu_topology[cpu].cluster_id)
+#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable() (cpu_topology[0].cluster_id != -1)
+#define smt_capable() (cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* __ASM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index e52bcdc..da1dafb 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -21,6 +21,7 @@ arm64-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND) += sleep.o suspend.o
arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
arm64-obj-$(CONFIG_KGDB) += kgdb.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY) += topology.o
obj-y += $(arm64-obj-y) vdso/
obj-m += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5070dc3..f0a141d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
+static void smp_store_cpu_info(unsigned int cpuid)
+{
+ store_cpu_topology(cpuid);
+}
+
/*
* This is the secondary CPU boot entry. We're using this CPUs
* idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@ asmlinkage void secondary_start_kernel(void)
*/
notify_cpu_starting(cpu);
+ smp_store_cpu_info(cpu);
+
/*
* OK, now it's safe to let the boot CPU continue. Wait for
* the CPU migration code to notice that the CPU is online
@@ -391,6 +398,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
int err;
unsigned int cpu, ncores = num_possible_cpus();
+ init_cpu_topology();
+
+ smp_store_cpu_info(smp_processor_id());
+
/*
* are we trying to boot more cores than exist?
*/
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 0000000..3e06b0b
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,95 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013,2014 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+ struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+ int cpu;
+
+ if (cpuid_topo->cluster_id == -1) {
+ /*
+ * DT does not contain topology information for this cpu
+ * reset it to default behaviour
+ */
+ pr_debug("CPU%u: No topology information configured\n", cpuid);
+ cpuid_topo->core_id = 0;
+ cpumask_set_cpu(cpuid, &cpuid_topo->core_sibling);
+ cpumask_set_cpu(cpuid, &cpuid_topo->thread_sibling);
+ return;
+ }
+
+ /* update core and thread sibling masks */
+ for_each_possible_cpu(cpu) {
+ cpu_topo = &cpu_topology[cpu];
+
+ if (cpuid_topo->cluster_id != cpu_topo->cluster_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+ if (cpuid_topo->core_id != cpu_topo->core_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+ }
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+ update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevents simultaneous write access to the cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+ unsigned int cpu;
+
+ /* init topology ids and sibling masks */
+ for_each_possible_cpu(cpu) {
+ struct cpu_topology *cpu_topo = &cpu_topology[cpu];
+
+ cpu_topo->thread_id = -1;
+ cpu_topo->core_id = -1;
+ cpu_topo->cluster_id = -1;
+ cpumask_clear(&cpu_topo->core_sibling);
+ cpumask_clear(&cpu_topo->thread_sibling);
+ }
+}
--
1.9.0
From: Mark Brown <broonie(a)linaro.org>
Add basic CPU topology support to arm64, based on the existing pre-v8
code and some work done by Mark Hambleton. This patch does not
implement any topology discovery support since that should be based on
information from firmware; it merely implements the scaffolding for
integration of topology support in the architecture.
No locking of the topology data is done since it is only modified during
CPU bringup with external serialisation from the SMP code.
The goal is to separate the architecture hookup for providing topology
information from the DT parsing, in order to ease review and avoid
blocking the architecture code (which will be built on by other work)
on the DT code review, by providing something simple and basic.
A following patch will implement support for parsing the DT topology
bindings for ARM; similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
Just posting the first patch for now. I've tweaked the debug output so
we no longer print errors on missing topology information, but otherwise
there are no changes. I will repost the DT stuff once I've managed to test
changes to completely discard the DT topology on error, which should be
next week if not this one.
arch/arm64/Kconfig | 24 ++++++++++
arch/arm64/include/asm/topology.h | 39 ++++++++++++++++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/smp.c | 11 +++++
arch/arm64/kernel/topology.c | 95 +++++++++++++++++++++++++++++++++++++++
5 files changed, 170 insertions(+)
create mode 100644 arch/arm64/include/asm/topology.h
create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 764d682..80e0eb1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -165,6 +165,30 @@ config SMP
If you don't know what to do here, say N.
+config CPU_TOPOLOGY
+ bool "Support CPU topology definition"
+ depends on SMP
+ default y
+ help
+ Support CPU topology definition, based on configuration
+ provided by the firmware.
+
+config SCHED_MC
+ bool "Multi-core scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Multi-core scheduler support improves the CPU scheduler's decision
+ making when dealing with multi-core CPU chips at a cost of slightly
+ increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+ bool "SMT scheduler support"
+ depends on CPU_TOPOLOGY
+ help
+ Improves the CPU scheduler's decision making when dealing with
+ MultiThreading at a cost of slightly increased overhead in some
+ places. If unsure say N here.
+
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 0000000..c8a47e8
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+ int thread_id;
+ int core_id;
+ int cluster_id;
+ cpumask_t thread_sibling;
+ cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu) (cpu_topology[cpu].cluster_id)
+#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu) (&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu) (&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable() (cpu_topology[0].cluster_id != -1)
+#define smt_capable() (cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* __ASM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index e52bcdc..da1dafb 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -21,6 +21,7 @@ arm64-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND) += sleep.o suspend.o
arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o
arm64-obj-$(CONFIG_KGDB) += kgdb.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY) += topology.o
obj-y += $(arm64-obj-y) vdso/
obj-m += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 5070dc3..f0a141d 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
return ret;
}
+static void smp_store_cpu_info(unsigned int cpuid)
+{
+ store_cpu_topology(cpuid);
+}
+
/*
* This is the secondary CPU boot entry. We're using this CPUs
* idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@ asmlinkage void secondary_start_kernel(void)
*/
notify_cpu_starting(cpu);
+ smp_store_cpu_info(cpu);
+
/*
* OK, now it's safe to let the boot CPU continue. Wait for
* the CPU migration code to notice that the CPU is online
@@ -391,6 +398,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
int err;
unsigned int cpu, ncores = num_possible_cpus();
+ init_cpu_topology();
+
+ smp_store_cpu_info(smp_processor_id());
+
/*
* are we trying to boot more cores than exist?
*/
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 0000000..91889d7
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,95 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013,2014 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+ return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+ struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+ int cpu;
+
+ if (cpuid_topo->cluster_id == -1) {
+ /*
+ * DT does not contain topology information for this cpu
+ * reset it to default behaviour
+ */
+ pr_debug("CPU%u: No topology information configured\n", cpuid);
+ cpuid_topo->core_id = 0;
+ cpumask_set_cpu(cpuid, &cpuid_topo->core_sibling);
+ cpumask_set_cpu(cpuid, &cpuid_topo->thread_sibling);
+ return;
+ }
+
+ /* update core and thread sibling masks */
+ for_each_possible_cpu(cpu) {
+ cpu_topo = &cpu_topology[cpu];
+
+ if (cpuid_topo->cluster_id != cpu_topo->cluster_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+ if (cpuid_topo->core_id != cpu_topo->core_id)
+ continue;
+
+ cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+ }
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+ update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevents simultaneous write access to the cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+ unsigned int cpu;
+
+ /* init topology ids and sibling masks */
+ for_each_possible_cpu(cpu) {
+ struct cpu_topology *cpu_topo = &cpu_topology[cpu];
+
+ cpu_topo->thread_id = -1;
+ cpu_topo->core_id = -1;
+ cpu_topo->cluster_id = -1;
+ cpumask_clear(&cpu_topo->core_sibling);
+ cpumask_clear(&cpu_topo->thread_sibling);
+ }
+}
--
1.9.0
We call __find_governor() during the addition of the first CPU of every policy
to find the last governor used for this CPU before it was hotplugged out.
After that we call cpufreq_parse_governor() in cpufreq_init_policy(), either
with this governor or with the default governor. And right after that
policy->governor is set to NULL.
So, instead of doing this, move the relevant parts to cpufreq_init_policy()
only and initialize policy->governor to NULL at the beginning.
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
Hi Saravana,
I hope the first two patches alone will fix things for you, but you can
probably test all three.
@Rafael: We might need to get these into the next rc only, as these are more
or less fixes.
drivers/cpufreq/cpufreq.c | 34 ++++++++++++++++------------------
1 file changed, 16 insertions(+), 18 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index c755b5f..cc4f244 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -879,18 +879,27 @@ err_out_kobj_put:
static void cpufreq_init_policy(struct cpufreq_policy *policy)
{
+ struct cpufreq_governor *gov = NULL;
struct cpufreq_policy new_policy;
int ret = 0;
memcpy(&new_policy, policy, sizeof(*policy));
+ /* Update governor of new_policy to the governor used before hotplug */
+#ifdef CONFIG_HOTPLUG_CPU
+ gov = __find_governor(per_cpu(cpufreq_cpu_governor, policy->cpu));
+#endif
+ if (gov)
+ pr_debug("Restoring governor %s for cpu %d\n",
+ gov->name, policy->cpu);
+ else
+ gov = CPUFREQ_DEFAULT_GOVERNOR;
+
+ new_policy.governor = gov;
+
/* Use the default policy if its valid. */
if (cpufreq_driver->setpolicy)
- cpufreq_parse_governor(policy->governor->name,
- &new_policy.policy, NULL);
-
- /* assure that the starting sequence is run in cpufreq_set_policy */
- policy->governor = NULL;
+ cpufreq_parse_governor(gov->name, &new_policy.policy, NULL);
/* set default policy */
ret = cpufreq_set_policy(policy, &new_policy);
@@ -944,11 +953,11 @@ static struct cpufreq_policy *cpufreq_policy_restore(unsigned int cpu)
unsigned long flags;
read_lock_irqsave(&cpufreq_driver_lock, flags);
-
policy = per_cpu(cpufreq_cpu_data_fallback, cpu);
-
read_unlock_irqrestore(&cpufreq_driver_lock, flags);
+ policy->governor = NULL;
+
return policy;
}
@@ -1036,7 +1045,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
unsigned long flags;
#ifdef CONFIG_HOTPLUG_CPU
struct cpufreq_policy *tpolicy;
- struct cpufreq_governor *gov;
#endif
if (cpu_is_offline(cpu))
@@ -1094,7 +1102,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
else
policy->cpu = cpu;
- policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
cpumask_copy(policy->cpus, cpumask_of(cpu));
init_completion(&policy->kobj_unregister);
@@ -1179,15 +1186,6 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
CPUFREQ_START, policy);
-#ifdef CONFIG_HOTPLUG_CPU
- gov = __find_governor(per_cpu(cpufreq_cpu_governor, cpu));
- if (gov) {
- policy->governor = gov;
- pr_debug("Restoring governor %s for cpu %d\n",
- policy->governor->name, cpu);
- }
-#endif
-
if (!frozen) {
ret = cpufreq_add_dev_interface(policy, dev);
if (ret)
--
1.7.12.rc2.18.g61b472e
This is v4 of my attempt to add support for a generic pci_host_bridge controller created
from a description passed in the device tree.
Changes from v3:
- Dynamically allocate bus_range resource in of_create_pci_host_bridge()
- Fix the domain number used when creating child busses.
- Changed domain number allocator to use atomic operations.
- Use ERR_PTR() to propagate the error out of pci_create_root_bus_in_domain()
and of_create_pci_host_bridge().
Changes from v2:
- Use range->cpu_addr when calling pci_address_to_pio()
- Introduce pci_register_io_range() helper function in order to register
io ranges ahead of their conversion to PIO values. This is needed as no
information is being stored yet regarding the range mapping, making
pci_address_to_pio() fail. The default weak implementation does nothing,
matching the default weak implementation of pci_address_to_pio(), which
expects a direct mapping of physical addresses into PIO values (the x86 view).
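The default weak helper might look roughly like the sketch below; the exact
signature used in the series may differ:

#include <linux/compiler.h>
#include <linux/types.h>

/*
 * Default weak implementation: architectures with a direct mapping of
 * physical addresses into PIO values (the x86 view) have nothing to record.
 */
int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
{
        return 0;
}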
Changes from v1:
- Add patch to fix conversion of IO ranges into IO resources.
- Added a domain_nr member to the pci_host_bridge structure, and a new function
to create a root bus in a given domain number. In order to facilitate that
I propose changing the order of initialisation between pci_host_bridge and
its related bus in pci_create_root_bus(), as sort of a revert of 7b5436635800.
This is done in patch 1/4 and 2/4.
- Added a simple allocator of domain numbers in drivers/pci/host-bridge.c. The
code will first try to get a domain id from of_alias_get_id(..., "pci-domain")
and, if that fails, assign the next unallocated domain id (a sketch of this
approach follows at the end of this list).
- Changed the name of the function that creates the generic host bridge from
pci_host_bridge_of_init to of_create_pci_host_bridge and exported as GPL symbol.
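A sketch of the domain id allocation described in the list above; the names
are illustrative rather than the ones used in the series:

#include <linux/atomic.h>
#include <linux/of.h>

static atomic_t domain_nr = ATOMIC_INIT(-1);

static int assign_pci_domain_nr(struct device_node *node)
{
        /* Honour a "pci-domain" alias if the device tree provides one. */
        int domain = of_alias_get_id(node, "pci-domain");

        /* Otherwise hand out the next unallocated domain id. */
        if (domain < 0)
                domain = atomic_inc_return(&domain_nr);

        return domain;
}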
v1 thread here: https://lkml.org/lkml/2014/2/3/380
The following is an edit of the original blurb:
Following the discussion started here [1], I now have a proposal for tackling
generic support for host bridges described via device tree. It is an initial
stab at it, to try to get feedback and suggestions, but it is functional enough
that I have PCI Express for arm64 working on an FPGA using the patch that I am
also publishing that adds support for PCI for that platform.
Looking at the existing architectures that fit the requirements (use of device
tree and PCI) yields the powerpc and microblaze as generic enough to make them
candidates for conversion. I have a tentative patch for microblaze that I can
only compile-test; unfortunately, using qemu-microblaze leads to an early
crash in the kernel.
As Bjorn has mentioned in the previous discussion, the idea is to add to
struct pci_host_bridge enough data to be able to reduce the size of, or remove,
the architecture-specific pci_controller structure. The arm64 support actually manages
to get rid of all the architecture static data and has no pci_controller structure
defined. For host bridge drivers that means a change of API unless architectures
decide to provide a compatibility layer (comments here please).
In order to initialise a host bridge with the new API, the following example
code is sufficient for a _probe() function:
static int myhostbridge_probe(struct platform_device *pdev)
{
        int err;
        struct device_node *dev;
        struct pci_host_bridge *bridge;
        struct myhostbridge_port *pp;
        resource_size_t lastbus;

        dev = pdev->dev.of_node;

        if (!of_device_is_available(dev)) {
                pr_warn("%s: disabled\n", dev->full_name);
                return -ENODEV;
        }

        pp = kzalloc(sizeof(struct myhostbridge_port), GFP_KERNEL);
        if (!pp)
                return -ENOMEM;

        bridge = of_create_pci_host_bridge(&pdev->dev, &myhostbridge_ops, pp);
        if (IS_ERR(bridge)) {
                err = PTR_ERR(bridge);
                goto bridge_create_fail;
        }

        err = myhostbridge_setup(bridge->bus);
        if (err)
                goto bridge_setup_fail;

        /*
         * We always enable PCI domains and we keep domain 0 backward
         * compatible in /proc for video cards
         */
        pci_add_flags(PCI_ENABLE_PROC_DOMAINS);
        pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);

        lastbus = pci_scan_child_bus(bridge->bus);
        pci_bus_update_busn_res_end(bridge->bus, lastbus);

        pci_assign_unassigned_bus_resources(bridge->bus);
        pci_bus_add_devices(bridge->bus);

        return 0;

bridge_setup_fail:
        put_device(&bridge->dev);
        device_unregister(&bridge->dev);
bridge_create_fail:
        kfree(pp);
        return err;
}
[1] http://thread.gmane.org/gmane.linux.kernel.pci/25946
Best regards,
Liviu
Liviu Dudau (6):
pci: Introduce pci_register_io_range() helper function.
pci: OF: Fix the conversion of IO ranges into IO resources.
pci: Create pci_host_bridge before its associated bus in pci_create_root_bus.
pci: Introduce a domain number for pci_host_bridge.
pci: Use parent domain number when allocating child busses.
pci: Add support for creating a generic host_bridge from device tree
drivers/of/address.c | 39 +++++++++++++
drivers/pci/host-bridge.c | 139 +++++++++++++++++++++++++++++++++++++++++++++
drivers/pci/probe.c | 72 +++++++++++++++--------
include/linux/of_address.h | 14 +----
include/linux/pci.h | 17 ++++++
5 files changed, 246 insertions(+), 35 deletions(-)
--
1.9.0