Missed lists earlier :(
On 7 January 2014 20:31, Viresh Kumar <viresh.kumar(a)linaro.org> wrote:
> Hi Kevin/Frederic,
>
> In my traces I see a guaranteed interrupt on the isolated
> core, which runs in NO_HZ_FULL mode with a single
> "stress" thread, every ~90 seconds.
>
> When I look into the traces I see we get only two events:
> - irq-handler-entry
> - irq-handler-exit
>
> No more detail is available in the traces; the system then
> goes back to interruption-free mode for the next ~90 seconds.
>
> I suspect this is because the timers we have queued far
> into the future are finally expiring? I enabled cpusets and
> then checked which timers are active on CPU1 in
> /proc/timer_list, which gave me:
>
> tick_sched_timer and it_real_fn. These are queued quite far
> out, at around 450 seconds and 2000 seconds respectively.
>
> So, my question is: why are these getting queued? And how
> can I get rid of them in my case, where I want zero
> interruptions on the isolated core, since it will be running a
> userspace thread handling data-plane packets.
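For what it's worth, a /proc/timer_list dump can be inspected offline to see what is still queued on the isolated CPU. A minimal Python sketch; the excerpt below is invented (addresses and expiry values made up) but follows the 3.x-era timer_list layout:

```python
import re

# Illustrative excerpt in the /proc/timer_list style of 3.x kernels;
# the addresses and expiry values below are invented.
SAMPLE = """\
cpu: 1
 clock 0:
  .base:       c0a81f00
  active timers:
 #0: <c0a81f50>, tick_sched_timer, S:01
 # expires at 450000000000-450000000000 nsecs [in 359000000000 to 359000000000 nsecs]
 #1: <ee0a3e60>, it_real_fn, S:01
 # expires at 2000000000000-2000000000000 nsecs [in 1909000000000 to 1909000000000 nsecs]
"""

def pending_timers(text, cpu):
    """Yield (callback, expires_ns) for hrtimers queued on `cpu`."""
    current_cpu, fn = None, None
    for line in text.splitlines():
        m = re.match(r"cpu:\s+(\d+)", line)
        if m:
            current_cpu = int(m.group(1))
            continue
        m = re.match(r"\s*#\d+: <[0-9a-f]+>, (\w+)", line)
        if m and current_cpu == cpu:
            fn = m.group(1)          # remember the callback name...
            continue
        m = re.match(r"\s*# expires at (\d+)", line)
        if m and fn is not None:
            yield fn, int(m.group(1))  # ...and pair it with its expiry
            fn = None

for name, ns in pending_timers(SAMPLE, 1):
    print("%-16s fires in ~%d s" % (name, ns // 10**9))
```

On the sample above this reports tick_sched_timer at ~450 s and it_real_fn at ~2000 s, matching the two timers mentioned.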
>
>
> Another thing I tried recently was to make my single-threaded
> task "stress" a real-time task with priority 99 (along with
> cpusets). But it seems more than one thread ends up on that
> CPU, and so the tick starts again immediately.
>
> I launched "stress" with the help of chrt.
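A quick way to double-check the "more than one thread" theory is to scan /proc for whatever last ran on CPU1. A small Python sketch (field 39 of /proc/&lt;pid&gt;/stat is the CPU the task last ran on, per proc(5)):

```python
import os

def tasks_on_cpu(cpu):
    """Return {pid: comm} for tasks whose last-run CPU (field 39 of
    /proc/<pid>/stat, see proc(5)) matches `cpu`."""
    found = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open("/proc/%s/stat" % pid) as f:
                data = f.read()
            # comm sits in parentheses and may itself contain ')', so
            # cut at the *last* closing parenthesis
            rest = data[data.rindex(")") + 2:].split()
        except (OSError, ValueError):
            continue  # task exited while we were scanning
        # rest[0] is the state (field 3), so field 39 is rest[36]
        if int(rest[36]) == cpu:
            found[int(pid)] = data[data.index("(") + 1 : data.rindex(")")]
    return found

for pid, comm in sorted(tasks_on_cpu(1).items()):
    print(pid, comm)
```

Run in a loop while "stress" executes, this should name the kworkers or kernel threads sneaking onto the core.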
>
> --
> viresh
From: Mark Brown <broonie(a)linaro.org>
Now that the SPI controllers are disabled by default for Exynos5250
there is no need to explicitly disable them in individual board files.
This hunk appears not to have been merged when doing the original
conversion; add it now.
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
arch/arm/boot/dts/exynos5250-smdk5250.dts | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
index 3e69837c435c..b370f8a20cdf 100644
--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
+++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
@@ -164,10 +164,6 @@
};
};
- spi_0: spi@12d20000 {
- status = "disabled";
- };
-
spi_1: spi@12d30000 {
status = "okay";
--
1.8.5.2
Hi Frederic/Kevin,
I was doing some work where I was required to use NO_HZ_FULL
on core 1 on a dual core ARM machine.
I observed that I was able to isolate the second core using cpusets,
but whenever the tick occurs, it occurs twice, i.e. the timer count
goes up by two every time my core is disturbed.
I tried to trace it (output attached) and found this sequence (talking
only about core 1 here):
- Single task was running on Core 1 (using cpusets)
- got an arch_timer interrupt
- started servicing vmstat stuff
- so it came out of the NO_HZ_FULL domain, as there was more than
one task on the core
- requeued the vmstat work and went back to the single task (stress)
- again got arch_timer interrupt after 5 ms (HZ=200)
- got "tick_stop" event and went into NO_HZ_FULL domain again..
- Got isolated again for long duration..
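For the record, the double tick is easy to pull out of such a trace per CPU. A Python sketch over invented lines shaped like ftrace's irq_handler_entry events (the timestamps and PIDs are made up, 5 ms apart to match HZ=200):

```python
import re

# Invented trace lines shaped like ftrace's irq/tick events; the
# timestamps are made up, 5 ms apart to match HZ=200.
TRACE = """\
  stress-2412  [001]  100.000010: irq_handler_entry: irq=30 name=arch_timer
  stress-2412  [001]  100.000050: irq_handler_exit: irq=30 ret=handled
  kworker/1:0-17  [001]  100.000120: workqueue_execute_start: function vmstat_update
  stress-2412  [001]  100.005010: irq_handler_entry: irq=30 name=arch_timer
  stress-2412  [001]  100.005040: tick_stop: success=yes msg=
  stress-2412  [001]  100.005060: irq_handler_exit: irq=30 ret=handled
"""

def arch_timer_ticks(text, cpu):
    """Return timestamps of arch_timer irq entries seen on `cpu`."""
    pat = re.compile(r"\[(\d+)\]\s+([\d.]+): irq_handler_entry: .*name=arch_timer")
    return [float(m.group(2))
            for m in map(pat.search, text.splitlines())
            if m and int(m.group(1)) == cpu]

ticks = arch_timer_ticks(TRACE, 1)
print("arch_timer ticks on CPU1:", ticks)
if len(ticks) > 1:
    print("gap: %.1f ms" % ((ticks[1] - ticks[0]) * 1000))
```

On the sample it finds two back-to-back ticks 5 ms apart, i.e. exactly one extra tick spent servicing the vmstat work before the tick stops again.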
So the query is: why don't we re-evaluate whether the tick can be
stopped at the end of servicing the vmstat work, when we migrate back
to "stress"?
Thanks.
--
viresh