Chips intended for compute clusters can no doubt be cooled well enough to run
at full speed all the time. Designing chips for different markets involves
different sets of tradeoffs, and you're seeing the result of that.
Yes, this is what I'm trying to get at. For toolchain testing we need a machine that doesn't give up under high constant load for really long periods (months/years). This is only slightly lighter than the kind of load you'd see on servers like Calxeda's, in deployments like Facebook's. On mobile platforms the diversity of uses is much smaller, so it's acceptable to make compromises in the chip/SoC/system design to save on cost. But when ARM cores hit the desktop/server market, those assumptions will stop being valid.
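To make "high constant load" concrete, here's a minimal sketch of the kind of burn-in I have in mind (my own illustration, not any specific test suite): it pins every core with CPU-bound work and logs throughput per interval, so a machine that throttles or degrades under sustained load shows up as a falling rate over hours or days.

```python
# Minimal sustained-load sketch: one busy worker per core, plus a
# throughput log so thermal throttling is visible as a dropping rate.
import multiprocessing
import time

def burn(counter):
    """CPU-bound busy loop; bumps a shared counter per unit of work."""
    while True:
        x = 0
        for i in range(100_000):
            x += i * i
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = multiprocessing.Value("q", 0)  # shared 64-bit work counter
    for _ in range(multiprocessing.cpu_count()):
        multiprocessing.Process(target=burn, args=(counter,), daemon=True).start()

    last = 0
    while True:
        time.sleep(60)
        with counter.get_lock():
            now = counter.value
        # A steadily falling rate suggests the machine can't sustain full speed.
        print(f"work units in last minute: {now - last}")
        last = now
```

In real toolchain testing the inner loop would be actual builds and test runs rather than arithmetic, but the shape is the same: full utilization, indefinitely, with a way to notice degradation.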
This video of Linus Torvalds on why Linux hasn't dominated the desktop market yet (and maybe never will) is relevant:
Basically, the usage patterns are so disparate between users, or even groups of users, that it's hard for any single Linux distribution / vendor to cater to all of them. ARM is in a similar position: ARM itself can't focus only on servers or only on desktops, but if the core designs allow for modularity (I believe they do), partners could (and should) build SoCs focused on different markets, vendors (like HP or Dell) could put together production systems, etc.
For both ARM and Linux, a move into desktops would need a level of coordination between competitors that is probably not possible in today's market. (Please, somebody tell me I'm wrong...)