On Sat, Sep 18, 2010 at 3:00 AM, Dave Martin <dave.martin@linaro.org> wrote:
> Hi,
>
> On Fri, Sep 17, 2010 at 3:50 AM, Michael Hope <michael.hope@linaro.org> wrote:
>> It's only part of the puzzle, but I run speed benchmarks as part of
>> the continuous build:
>>
>>   http://ex.seabright.co.nz/helpers/buildlog
>>   http://ex.seabright.co.nz/helpers/benchcompare
>>   http://ex.seabright.co.nz/build/gcc-linaro-4.5-2010.09-1/logs/armv7l-maveric...
>>
>> I've just modified this to build different variants as well. ffmpeg
>> now builds as supplied (-O2 and others), with -Os, with the
>> hand-written assembler turned off, and with -mfpu=neon. corebench
>> builds with -O2 and -Os.
>>
>> This might be one way to approach things. It's simple to add other
>> programs into the mix.
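[For illustration, a rough sketch of how such variant builds might be driven. The flag sets, loop, and configure options below are assumptions, not the actual build scripts:]

    # Build ffmpeg once per hypothetical flag variant.
    # --extra-cflags is a standard ffmpeg configure option.
    for flags in "-O2" "-Os" "-O2 -mfpu=neon"; do
        make distclean >/dev/null 2>&1 || true
        ./configure --extra-cflags="$flags" && make
    done
    # "Hand-written assembler turned off" would be something like
    # ffmpeg's --disable-asm switch, if this version provides it:
    make distclean >/dev/null 2>&1 || true
    ./configure --disable-asm --extra-cflags="-O2" && make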
> Could you easily add code size metrics?
The build currently runs 'size' on every executable file it can find. See: http://ex.seabright.co.nz/build/gcc-linaro-4.5-2010.09-1/logs/armv7l-maveric...
for an example.
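[For readers unfamiliar with it, GNU size prints one line of segment sizes per binary in its default Berkeley format; the binaries and figures below are invented for illustration:]

    $ size ffmpeg ffprobe
       text    data     bss     dec     hex filename
     123456    7890    1234  132580   205e4 ffmpeg
      98765    4321     987  104073   19689 ffprobe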
> It would be useful to watch those for regressions also, especially if
> there's an ongoing effort to make -Os better.
Yip. I'm recording the results at the moment and hope to hand the reporting side off to the Infrastructure team.
> It would be good to have more system-oriented metrics as well, such as
> boot, login and app launch times, and cache and TLB performance.
> Results of microbenchmarks can be quite misleading when it comes to
> the performance of the system as a whole. I'm not sure of the best way
> to approach that: many variables affect performance, and you'd need to
> build many packages to get a system to benchmark. It might be
> overkill; the toolchain can definitely influence such metrics, but it
> may become a less dominant factor once you're studying a large enough
> blob of software.
It's not wholly a toolchain issue, but it's one I'm interested in. Something we should talk about at Linaro@UDS...
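[On the cache/TLB side, one possible starting point on a sufficiently recent kernel is perf; event names and hardware support vary by platform, and "some-app" below is a placeholder, not a real benchmark:]

    # Count cache and data-TLB misses for a single run of an application.
    perf stat -e cycles,instructions,cache-misses,dTLB-load-misses ./some-app

    # Crude app-launch timing with GNU time: repeat a few runs to smooth out noise.
    for i in 1 2 3; do /usr/bin/time -f "%e s" ./some-app --help >/dev/null; done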
-- Michael