The OHCI-HCD driver does not support building multiple SoC drivers at
compile time. Hence the generic driver should be disabled in ubuntu.conf
and the relevant OHCI SoC drivers should be enabled in the respective
board config files.
Signed-off-by: Tushar Behera <tushar.behera(a)linaro.org>
---
linaro/configs/ubuntu.conf | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/linaro/configs/ubuntu.conf b/linaro/configs/ubuntu.conf
index 5d0a372..88e58df 100644
--- a/linaro/configs/ubuntu.conf
+++ b/linaro/configs/ubuntu.conf
@@ -1556,7 +1556,6 @@ CONFIG_USB_OXU210HP_HCD=m
CONFIG_USB_ISP116X_HCD=m
CONFIG_USB_ISP1760_HCD=m
CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_U132_HCD=m
CONFIG_USB_SL811_HCD=m
--
1.7.4.1
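For context, a board config would then enable the SoC-specific glue in place of the generic platform driver; a hypothetical fragment (the file path and the OMAP3 symbol are just examples, the exact CONFIG_USB_OHCI_* name depends on the platform):

```
# board config file, e.g. linaro/configs/omap4.conf (hypothetical)
CONFIG_USB_OHCI_HCD=y
# SoC-specific OHCI glue instead of CONFIG_USB_OHCI_HCD_PLATFORM:
CONFIG_USB_OHCI_HCD_OMAP3=y
```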
synchronize_rcu() blocks the caller of opp_enable/opp_disable
for a complete grace period. This blocking duration prevents
any intensive use of these functions. Replace synchronize_rcu()
with call_rcu(), which will invoke our callback to free the old
opp element once the grace period has elapsed.
The duration of opp_enable() and opp_disable() no longer
depends on the grace period.
Signed-off-by: Vincent Guittot <vincent.guittot(a)linaro.org>
---
drivers/base/power/opp.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/base/power/opp.c b/drivers/base/power/opp.c
index ac993ea..49e4626 100644
--- a/drivers/base/power/opp.c
+++ b/drivers/base/power/opp.c
@@ -64,6 +64,7 @@ struct opp {
unsigned long u_volt;
struct device_opp *dev_opp;
+ struct rcu_head head;
};
/**
@@ -441,6 +442,17 @@ int opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
}
/**
+ * opp_free_rcu() - helper to free the struct opp when the grace period
+ * has elapsed, without blocking the caller of opp_set_availability
+ */
+static void opp_free_rcu(struct rcu_head *head)
+{
+ struct opp *opp = container_of(head, struct opp, head);
+
+ kfree(opp);
+}
+
+/**
* opp_set_availability() - helper to set the availability of an opp
* @dev: device for which we do this operation
* @freq: OPP frequency to modify availability
@@ -511,7 +523,7 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
list_replace_rcu(&opp->node, &new_opp->node);
mutex_unlock(&dev_opp_list_lock);
- synchronize_rcu();
+ call_rcu(&opp->head, opp_free_rcu);
/* Notify the change of the OPP availability */
if (availability_req)
@@ -521,13 +533,10 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_DISABLE,
new_opp);
- /* clean up old opp */
- new_opp = opp;
- goto out;
+ return 0;
unlock:
mutex_unlock(&dev_opp_list_lock);
-out:
kfree(new_opp);
return r;
}
--
1.7.9.5
Hi -
We've been getting some good mileage from the llct-based tilt-3.4
history tree over the last few months.
However, a couple of points have been raised by TI which really boil
down to the deal with llct post-release. We know that it goes on
mutating and tracking as it should, but the release-specific version,
like "linux-linaro-3.4", just sits there afaik.
The points raised were:
1) Can we have Linux stable point-release content in tilt-3.4? Rather
than my doing it, isn't it better to add it to llc-3.4 and merge it into
the lt history tree periodically? That way every lt can get it from
one place.
2) What's the deal with things that were the latest and greatest at the
time, i.e., the best "CMA" or whatever series was in tracking, but where,
after it got copied out to be linux-linaro-core-3.4, horrible bugs were
fixed in linux-linaro-tracking? What's happening is that TI are sticking
with these releases for a fair time as the basis for their releases to
customers.
I can see there's tension between the tracking-style "fix it for the
future" approach and backporting to old and crusty things, and there's
also the issue of testing, but there must be some cases where this makes
sense. Again, the people looking after each feature tree for llct are
best placed to make the call of "hm, that looks like it should maybe
also go on the last couple of llc release trees".
What do you think about this?
-Andy
--
Andy Green | TI Landing Team Leader
Linaro.org │ Open source software for ARM SoCs | Follow Linaro
http://facebook.com/pages/Linaro/155974581091106 -
http://twitter.com/#!/linaroorg - http://linaro.org/linaro-blog
From: Rajagopal Venkat <rajagopal.venkat(a)linaro.org>
This patchset updates the devfreq core to add support for devices
which can idle. When device idleness is detected, perhaps
through runtime PM, some mechanism is needed to suspend devfreq
load monitoring and resume it when the device is back online.
Patch 1 introduces the core design changes - per-device work,
decoupling of the delayed work from the core, and event-based
interaction.
Patch 2 adds the devfreq suspend and resume APIs.
Patch 3 adds a new sysfs attribute for the governor's predicted next
target frequency and a callback for the current device frequency.
The existing devfreq APIs are kept intact. Two new APIs,
devfreq_suspend_device() and devfreq_resume_device(), are
added to support suspending and resuming a device's devfreq.
Changes since v1:
- revised locking mechanism
- added kerneldoc comments for load monitoring helper functions
- Fixed minor review comments
--
Rajagopal Venkat (3):
devfreq: Core updates to support devices which can idle
devfreq: Add suspend and resume apis
devfreq: Add current freq callback in device profile
Documentation/ABI/testing/sysfs-class-devfreq | 15 +-
drivers/devfreq/devfreq.c | 413 +++++++++++---------------
drivers/devfreq/governor.h | 11 +
drivers/devfreq/governor_performance.c | 16 +-
drivers/devfreq/governor_powersave.c | 16 +-
drivers/devfreq/governor_simpleondemand.c | 40 +++
drivers/devfreq/governor_userspace.c | 23 +-
include/linux/devfreq.h | 46 ++-
8 files changed, 291 insertions(+), 289 deletions(-)
--
1.7.11.3
On a tickless system, one CPU runs the load balance for all idle CPUs.
The cpu_load of this CPU is updated before starting the load balance
of each of the other idle CPUs. We should instead update the cpu_load
of the balance_cpu.
Signed-off-by: Vincent Guittot <vincent.guittot(a)linaro.org>
---
kernel/sched/fair.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ca4fe4..9ae3a5b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
if (need_resched())
break;
- raw_spin_lock_irq(&this_rq->lock);
- update_rq_clock(this_rq);
- update_idle_cpu_load(this_rq);
- raw_spin_unlock_irq(&this_rq->lock);
+ rq = cpu_rq(balance_cpu);
+
+ raw_spin_lock_irq(&rq->lock);
+ update_rq_clock(rq);
+ update_idle_cpu_load(rq);
+ raw_spin_unlock_irq(&rq->lock);
rebalance_domains(balance_cpu, CPU_IDLE);
- rq = cpu_rq(balance_cpu);
if (time_after(this_rq->next_balance, rq->next_balance))
this_rq->next_balance = rq->next_balance;
}
--
1.7.9.5
This patchset creates an arch_scale_freq_power function for ARM, which is
used to set the relative capacity of each core of a big.LITTLE system. It
also removes the broken power estimation for x86.
Modification since v3:
- Add comments
- Add optimization for SMP system
- Ensure that capacity of a CPU will be at most 1
Modification since v2:
- set_power_scale function becomes static
- Rework loop in update_siblings_masks
- Remove useless code in parse_dt_topology
Modification since v1:
- Add and update explanation about the use of the table and the range of the value
- Remove the use of NR_CPUS and use nr_cpu_ids instead
- Remove broken power estimation of x86
Peter Zijlstra (1):
sched, x86: Remove broken power estimation
Vincent Guittot (4):
ARM: topology: Add arch_scale_freq_power function
ARM: topology: factorize the update of sibling masks
ARM: topology: Update cpu_power according to DT information
sched: cpu_power: enable ARCH_POWER
arch/arm/kernel/topology.c | 239 ++++++++++++++++++++++++++++++++++++++----
arch/x86/kernel/cpu/Makefile | 2 +-
arch/x86/kernel/cpu/sched.c | 55 ----------
kernel/sched/features.h | 2 +-
4 files changed, 219 insertions(+), 79 deletions(-)
delete mode 100644 arch/x86/kernel/cpu/sched.c
--
1.7.9.5
A wrong button made me remove the other guys from the thread.
Sorry for this mistake.
On 13 September 2012 09:56, Mike Galbraith <efault(a)gmx.de> wrote:
> On Thu, 2012-09-13 at 09:44 +0200, Vincent Guittot wrote:
>> On 13 September 2012 09:29, Mike Galbraith <efault(a)gmx.de> wrote:
>> > On Thu, 2012-09-13 at 08:59 +0200, Vincent Guittot wrote:
>> >> On 13 September 2012 08:49, Mike Galbraith <efault(a)gmx.de> wrote:
>> >> > On Thu, 2012-09-13 at 06:11 +0200, Vincent Guittot wrote:
>> >> >> On tickless system, one CPU runs load balance for all idle CPUs.
>> >> >> The cpu_load of this CPU is updated before starting the load balance
>> >> >> of each other idle CPUs. We should instead update the cpu_load of the balance_cpu.
>> >> >>
>> >> >> Signed-off-by: Vincent Guittot <vincent.guittot(a)linaro.org>
>> >> >> ---
>> >> >> kernel/sched/fair.c | 11 ++++++-----
>> >> >> 1 file changed, 6 insertions(+), 5 deletions(-)
>> >> >>
>> >> >> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> >> >> index 1ca4fe4..9ae3a5b 100644
>> >> >> --- a/kernel/sched/fair.c
>> >> >> +++ b/kernel/sched/fair.c
>> >> >> @@ -4794,14 +4794,15 @@ static void nohz_idle_balance(int this_cpu, enum cpu_idle_type idle)
>> >> >> if (need_resched())
>> >> >> break;
>> >> >>
>> >> >> - raw_spin_lock_irq(&this_rq->lock);
>> >> >> - update_rq_clock(this_rq);
>> >> >> - update_idle_cpu_load(this_rq);
>> >> >> - raw_spin_unlock_irq(&this_rq->lock);
>> >> >> + rq = cpu_rq(balance_cpu);
>> >> >> +
>> >> >> + raw_spin_lock_irq(&rq->lock);
>> >> >> + update_rq_clock(rq);
>> >> >> + update_idle_cpu_load(rq);
>> >> >> + raw_spin_unlock_irq(&rq->lock);
>> >> >>
>> >> >> rebalance_domains(balance_cpu, CPU_IDLE);
>> >> >>
>> >> >> - rq = cpu_rq(balance_cpu);
>> >> >> if (time_after(this_rq->next_balance, rq->next_balance))
>> >> >> this_rq->next_balance = rq->next_balance;
>> >> >> }
>> >> >
>> >> > Ew, banging locks and updating clocks to what good end?
>> >>
>> >> The goal is to update the cpu_load table of the CPU before starting
>> >> the load balance. Other wise we will use outdated value in the load
>> >> balance sequence
>> >
>> > If there's load to distribute, seems it should all work out fine without
>> > doing that. What harm is being done that makes this worth while?
>>
>> this_load and avg_load can be wrong and make an idle CPU set as
>> balanced compared to the busy one
>
> I think you need to present numbers showing benefit. Crawling all over
> a mostly idle (4096p?) box is decidedly bad thing to do.
Yep, let me prepare some figures.
You should also note that we are already crawling all over the idle
processors in rebalance_domains().
Vincent
>
> -Mike
>