On 4 July 2013 18:10, Renato Golin <renato.golin@linaro.org> wrote:
On 4 July 2013 17:13, Siarhei Siamashka <siarhei.siamashka@gmail.com> wrote:
By the way, power consumption is not constant and heavily depends on what the CPU is actually doing. 100% CPU load in one application does not mean the CPU draws the same amount of power as at 100% load in another application.
This is really interesting; I had not considered it until now. If I understand correctly, this has to do with which (and how many) execution paths are exercised inside the cores (CPU, GPU), how much data moves between memory, caches and registers, and so on.
Modern CPU designs can even clock-gate individual execution pipelines when they are not in use. Typical code doesn't use the multiply pipeline most of the time, for example, so it spends a lot of time gated. A carefully crafted piece of code, like Siarhei's, maintains the maximum sustained issue rate for a long time and mixes instructions so that most of the pipelines are active most of the time. That makes power consumption go up significantly.
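To make the point concrete, here is a minimal C sketch of that idea. It is not Siarhei's actual code (his burn-in programs are hand-tuned assembly); it just interleaves independent integer-multiply, floating-point and load/store work so that several execution units have something to do on most cycles instead of sitting clock-gated. The buffer size and iteration count are arbitrary choices for illustration.

/*
 * Illustrative sketch only: keep the integer multiplier, the FP unit and
 * the load/store unit busy at the same time. Not Siarhei's cpuburn code.
 */
#include <stdint.h>
#include <stdio.h>

#define BUF_WORDS 1024

static uint32_t buf[BUF_WORDS];

int main(void)
{
    uint32_t a = 0x12345678u, b = 0x9e3779b9u;   /* integer multiply chain */
    double   x = 1.0001, y = 0.99997;            /* FP multiply/add chain  */
    uint32_t sum = 0;

    for (uint64_t i = 0; i < 100000000ull; i++) {
        /* integer multiply pipeline */
        a = a * 2654435761u + 1;
        b = b * 40503u + 3;

        /* floating-point pipeline */
        x = x * y + 1.0;
        y = y * 0.999999 + 0.000001;

        /* load/store pipeline: walk a small buffer that stays in L1 cache */
        buf[i & (BUF_WORDS - 1)] = a ^ b;
        sum += buf[(i * 7) & (BUF_WORDS - 1)];
    }

    /* print the results so the compiler cannot optimise the loops away */
    printf("%u %u %f %f %u\n", (unsigned)a, (unsigned)b, x, y, (unsigned)sum);
    return 0;
}

Compare the power draw of something like this against an ordinary integer-only loop at the same 100% reported CPU utilisation and you should see a noticeable difference, which is exactly the effect being discussed.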
It would be a lot easier to convince hardware vendors and cluster builders to buy huge active coolers than to convince them to lower the CPU frequency.
Chips intended for compute clusters will no doubt be possible to cool well enough to run at full speed all the time. Designing chips for different markets involves different sets of trade-offs, and you're seeing the result of that.