This patch fixes a comment in the sugov_irq_work() function to follow
proper grammar.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
kernel/sched/cpufreq_schedutil.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 42a220e78f00..71e1a980d40a 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -315,15 +315,15 @@ static void sugov_irq_work(struct irq_work *irq_work)
sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
/*
- * For Real Time and Deadline tasks, schedutil governor shoots the
- * frequency to maximum. And special care must be taken to ensure that
- * this kthread doesn't result in that.
+ * For Real Time and Deadline tasks, the schedutil governor shoots the
+ * frequency to maximum. Special care must be taken to ensure that this
+ * kthread doesn't result in the same behavior.
*
* This is (mostly) guaranteed by the work_in_progress flag. The flag is
- * updated only at the end of the sugov_work() and before that schedutil
- * rejects all other frequency scaling requests.
+ * updated only at the end of the sugov_work() function and before that
+ * the schedutil governor rejects all other frequency scaling requests.
*
- * Though there is a very rare case where the RT thread yields right
+ * There is a very rare case though, where the RT thread yields right
* after the work_in_progress flag is cleared. The effects of that are
* neglected for now.
*/
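For reference, a simplified sketch (not part of this patch) of the flow
the comment describes; it loosely follows kernel/sched/cpufreq_schedutil.c
after the kthread conversion, and the simplifications are mine:

static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
{
	s64 delta_ns = time - sg_policy->last_freq_update_time;

	/* Reject all other requests while the slow-path change is pending. */
	if (sg_policy->work_in_progress)
		return false;

	return delta_ns >= sg_policy->freq_update_delay_ns;
}

static void sugov_work(struct kthread_work *work)
{
	struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);

	mutex_lock(&sg_policy->work_lock);
	__cpufreq_driver_target(sg_policy->policy, sg_policy->next_freq,
				CPUFREQ_RELATION_L);
	mutex_unlock(&sg_policy->work_lock);

	/* Updated only at the end, as the comment above explains. */
	sg_policy->work_in_progress = false;
}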
--
2.7.1.410.g6faf27b
Hi,
Some platforms (like TI) have complex DVFS configurations for CPU
devices, where multiple regulators must be configured to change the
DVFS state of the device. This was explained well by Nishanth
earlier [1].
One of the major complaints about the multiple-regulator case was that
the DT doesn't express, in any way, the order in which multiple
supplies need to be programmed before or after a frequency change. This
series takes that into account and leaves such information to the
platform-specific OPP driver, which can register its own opp_set_rate()
callback with the OPP core; the OPP core will then call it during DVFS.
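To illustrate, here is a minimal, hypothetical sketch of such a platform
callback for an imaginary two-supply device. The helper names
(dev_pm_opp_register_set_opp_helper(), struct dev_pm_set_opp_data,
struct dev_pm_opp_supply) follow what eventually got merged, so treat
the exact names and signatures as assumptions against this posting; a
real driver would also reverse the supply order when scaling down:

#include <linux/clk.h>
#include <linux/pm_opp.h>
#include <linux/regulator/consumer.h>

/* Program the supplies in the platform's required order, then the clock. */
static int platform_set_opp(struct dev_pm_set_opp_data *data)
{
	struct dev_pm_opp_supply *new_supply = data->new_opp.supplies;
	unsigned int i;
	int ret;

	for (i = 0; i < data->regulator_count; i++) {
		ret = regulator_set_voltage_triplet(data->regulators[i],
						    new_supply[i].u_volt_min,
						    new_supply[i].u_volt,
						    new_supply[i].u_volt_max);
		if (ret)
			return ret;
	}

	return clk_set_rate(data->clk, data->new_opp.rate);
}

/* Registered from the platform driver, for example:
 *	dev_pm_opp_register_set_opp_helper(cpu_dev, platform_set_opp);
 */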
The patches are tested on Exynos5250 (Dual A15). I have hacked around
the DT and code to pass values for multiple regulators and verified
that they are all read properly by the kernel (using the debugfs
interface).
Dave Gerlach has already tested it on the real TI platforms and it works
well for him.
This is rebased over the linux-next branch of the PM tree.
V2->V3:
- The last patch is new
- Removed a debug leftover pr_info() message
- Renamed a few identifiers: s/set_rate/set_opp
- Removed a TODO comment (as it is done now with this series)
- Created a struct for min_uV and max_uV
- kerneldoc comments for structures in pm_opp.h
- s/const char */const char * const
- use kasprintf()
- Some more minor reformatting
- More Ack/RBY tags added
V1->V2:
- Ack from Rob for 1st patch
- Moved the supplies structure to pm_opp.h (Dave)
- Fixed a compilation warning.
--
viresh
[1] https://marc.info/?l=linux-pm&m=145684495832764&w=2
Viresh Kumar (9):
PM / OPP: Reword binding supporting multiple regulators per device
PM / OPP: Don't use OPP structure outside of rcu protected section
PM / OPP: Manage supply's voltage/current in a separate structure
PM / OPP: Pass struct dev_pm_opp_supply to _set_opp_voltage()
PM / OPP: Add infrastructure to manage multiple regulators
PM / OPP: Separate out _generic_opp_set_rate()
PM / OPP: Allow platform specific custom set_opp() callbacks
PM / OPP: Don't WARN on multiple calls to dev_pm_opp_set_regulators()
PM / OPP: Don't assume platform doesn't have regulators
Documentation/devicetree/bindings/opp/opp.txt | 25 +-
drivers/base/power/opp/core.c | 510 ++++++++++++++++++++------
drivers/base/power/opp/debugfs.c | 52 ++-
drivers/base/power/opp/of.c | 105 ++++--
drivers/base/power/opp/opp.h | 20 +-
drivers/cpufreq/cpufreq-dt.c | 9 +-
include/linux/pm_opp.h | 67 +++-
7 files changed, 605 insertions(+), 183 deletions(-)
--
2.7.1.410.g6faf27b
Hi,
If slow-path frequency changes are conducted in a SCHED_OTHER context,
they may be delayed for some amount of time, perhaps indefinitely, when
real-time or deadline activity is taking place.
Move the slow path to a real-time kernel thread, using the kthread
worker infrastructure. In the future the thread should be made
SCHED_DEADLINE; the RT priority is arbitrarily set to 50 for now.
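As an illustration, a sketch of how the worker creation might look;
this loosely follows the merged version of the third patch, and the
macro and field names here are assumptions:

#define SUGOV_KTHREAD_PRIORITY	50	/* the arbitrary RT priority */

static int sugov_kthread_create(struct sugov_policy *sg_policy)
{
	struct sched_param param = { .sched_priority = SUGOV_KTHREAD_PRIORITY };
	struct cpufreq_policy *policy = sg_policy->policy;
	struct task_struct *thread;
	int ret;

	kthread_init_work(&sg_policy->work, sugov_work);
	kthread_init_worker(&sg_policy->worker);
	thread = kthread_create(kthread_worker_fn, &sg_policy->worker,
				"sugov:%d", cpumask_first(policy->related_cpus));
	if (IS_ERR(thread)) {
		pr_err("failed to create sugov thread: %ld\n", PTR_ERR(thread));
		return PTR_ERR(thread);
	}

	ret = sched_setscheduler_nocheck(thread, SCHED_FIFO, &param);
	if (ret) {
		kthread_stop(thread);
		pr_warn("%s: failed to set SCHED_FIFO\n", __func__);
		return ret;
	}

	sg_policy->thread = thread;
	wake_up_process(thread);

	return 0;
}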
This was tested with hackbench on an ARM Exynos dual-core A15 platform,
and no regressions were seen. The third patch has more details on it.
This work was started by Steve Muckle, who used a simple kthread
instead of a kthread worker; that wasn't sufficient, as some guarantees
weren't met.
I was wondering if the same should be done for the ondemand/conservative
governors as well?
V1->V2:
- The first patch is new, based on Peter's suggestions.
- fixed indented label
- Moved kthread creation/destruction into separate routines
- Used a macro instead of the magic number '50'
- minor formatting, commenting and improved commit logs
--
viresh
Viresh Kumar (4):
cpufreq: schedutil: Avoid indented labels
cpufreq: schedutil: enable fast switch earlier
cpufreq: schedutil: move slow path from workqueue to SCHED_FIFO task
cpufreq: schedutil: irq-work and mutex are only used in slow path
kernel/sched/cpufreq_schedutil.c | 119 ++++++++++++++++++++++++++++++++-------
1 file changed, 99 insertions(+), 20 deletions(-)
--
2.7.1.410.g6faf27b
What's returned from this function is the delta by which the frequency
must be increased or decreased, not the final frequency that should be
selected.
Rename it to match its purpose, and also update the variables used to
store that value. For example, with the freq_step tunable set to 5 and
policy->max at 2000000 kHz, the function returns a step of 100000 kHz,
the delta by which requested_freq is then moved.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
drivers/cpufreq/cpufreq_conservative.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index fa5ece3915a1..0681fcff5aae 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -37,16 +37,16 @@ struct cs_dbs_tuners {
#define DEF_SAMPLING_DOWN_FACTOR (1)
#define MAX_SAMPLING_DOWN_FACTOR (10)
-static inline unsigned int get_freq_target(struct cs_dbs_tuners *cs_tuners,
- struct cpufreq_policy *policy)
+static inline unsigned int get_freq_step(struct cs_dbs_tuners *cs_tuners,
+ struct cpufreq_policy *policy)
{
- unsigned int freq_target = (cs_tuners->freq_step * policy->max) / 100;
+ unsigned int freq_step = (cs_tuners->freq_step * policy->max) / 100;
/* max freq cannot be less than 100. But who knows... */
- if (unlikely(freq_target == 0))
- freq_target = DEF_FREQUENCY_STEP;
+ if (unlikely(freq_step == 0))
+ freq_step = DEF_FREQUENCY_STEP;
- return freq_target;
+ return freq_step;
}
/*
@@ -90,7 +90,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
if (requested_freq == policy->max)
goto out;
- requested_freq += get_freq_target(cs_tuners, policy);
+ requested_freq += get_freq_step(cs_tuners, policy);
if (requested_freq > policy->max)
requested_freq = policy->max;
@@ -106,16 +106,16 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
/* Check for frequency decrease */
if (load < cs_tuners->down_threshold) {
- unsigned int freq_target;
+ unsigned int freq_step;
/*
* if we cannot reduce the frequency anymore, break out early
*/
if (requested_freq == policy->min)
goto out;
- freq_target = get_freq_target(cs_tuners, policy);
- if (requested_freq > freq_target)
- requested_freq -= freq_target;
+ freq_step = get_freq_step(cs_tuners, policy);
+ if (requested_freq > freq_step)
+ requested_freq -= freq_step;
else
requested_freq = policy->min;
--
2.7.1.410.g6faf27b