Well, I suppose if we're setting figures like that, then it's really "Limit regressions in vectorised code over non-vectorised code". :-) But maybe it'd be better to keep figures out of it. 99% is awkward because I don't think we even have 100 benchmarks yet. And what about benchmarks like DENbench that run the same code more than once, but with a different data set? Does each data set count as a separate benchmark?
I would actually vote for each data set counting as a separate benchmark, as it potentially exercises different code paths and gives us different things to look at. Thus each run with a different workload constitutes a separate benchmark.
FWIW, all the examples I've seen so far are due to the over-promotion of vector operations (e.g. doing things on ints when shorts would do).
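To make the over-promotion pattern concrete, here's a hypothetical sketch (not one of the actual regressing benchmarks): C's integer promotion rules widen the shorts to int before the arithmetic, and a vectoriser that follows those promotions literally ends up working on 32-bit lanes when 16-bit lanes would do, halving the number of elements processed per vector operation.

```c
#include <stdint.h>

/* The intermediate (a[i] + b[i]) is promoted to int by the C rules,
 * but the result is truncated straight back to 16 bits on the store,
 * so operating on 16-bit lanes throughout would give the same answer
 * with twice the elements per vector. */
void add_shorts(int16_t *restrict dst,
                const int16_t *restrict a,
                const int16_t *restrict b,
                int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = (int16_t)(a[i] + b[i]);
}
```

A vectoriser that notices the narrowing store can legitimately do the whole computation on shorts; one that doesn't pays for the widening and narrowing on every element.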
That's interesting to note. I'd be interested in trying to help figure out more such cases.
cheers
Ramana
Richard
linaro-toolchain mailing list
linaro-toolchain@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-toolchain