On 3 July 2013 15:59, Mans Rullgard <mans.rullgard@linaro.org> wrote:
> Modern silicon processes are much more power-efficient than those of the 90s. For example, an old ~500MHz Alpha machine I have readily consumes 90W even when idle. A quad-core Intel i7 typically has a TDP of 130W at full load. That's orders of magnitude more gates clocked at 6x the frequency and still using only marginally more power.
I don't remember the exact numbers, but the Intel DX machines weren't that power-hungry. Here[1] I read they drew 600mA from a 5V supply, which gives you 3W of consumption, and that already included a heat-sink. ;)
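(For the record, that figure is just P = V * I; a one-liner to check the arithmetic:)

```shell
# Back-of-envelope: P = V * I = 5 V * 0.6 A
awk 'BEGIN { printf "%.1f W\n", 5 * 0.6 }'
```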
> An OMAP4460 will run at 1.2GHz indefinitely without overheating in
> reasonable ambient temperature.
Probably in Sweden, where "room temperature" is -10... ;)
But running at 1.2GHz doesn't mean the whole system is being exercised: the RAM and the GPU, being on the same SoC, also contribute to the overall temperature. I've seen some GPU errors in the syslog; I'm not sure whether they're related to the failures or caused by them.
> If you don't have thermal management in the kernel you're running, you need to clamp the clock at a safe value.
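Clamping by hand, if it comes to that, is just a write to the standard cpufreq sysfs knob; a sketch (the 800MHz cap is an arbitrary example, and it needs root on the board):

```shell
# Cap cpu0 at 800 MHz (value in kHz) via the cpufreq sysfs interface.
# The 800000 figure is an arbitrary example, not a measured safe value.
MAXFREQ_KHZ=800000
KNOB=/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
if [ -w "$KNOB" ]; then
    echo "$MAXFREQ_KHZ" > "$KNOB"
else
    echo "cannot write $KNOB; would clamp to $MAXFREQ_KHZ kHz"
fi
```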
I'd expect that Linaro's kernel on Ubuntu 13.04 already has decent thermal control of the Panda. I can read the temperatures without any special code, so I assume the kernel knows precisely what to do, and I also hope it can throttle the frequency accordingly, otherwise what's the point of measuring temperatures...
But more to the point, I don't want to be scaled down when hot; I want it never to get hot in the first place, so I can run at the full 1.2GHz, 24/7. If the kernel reduces the frequency to keep the temperature down, I'll be testing more commits per run AND my benchmarks will be skewed depending on room temperature, which is the same as saying they're not benchmarks at all.
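Just to put a number on the skew (assuming, purely for illustration, a throttle down to 800MHz): a fixed workload of C cycles takes C/f seconds, so every timing stretches by the frequency ratio.

```shell
# Skew factor when throttled from 1.2 GHz to 800 MHz (assumed figure):
awk 'BEGIN { printf "%.2fx\n", 1.2e9 / 8.0e8 }'
```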
cheers,
--renato