On 17 September 2015 at 15:45, Alex Shi alex.shi@linaro.org wrote:
On 09/17/2015 09:35 PM, Milosz Wasilewski wrote:
For performance/power measurement, the criterion may not be the raw result value. It should be the percentage change, like the performance increase/decrease percentage, or the percentage change in power (W) / energy (J). I guess Amit will have a better suggestion on power testing.
I'm not particularly keen on LKP. I'd be happy with any automated testing framework that can give me the performance/power changes between two branches. :)
I'm really not sure what you are expecting here. That can already be done with a little bit of scripting using LAVA. As Amit mentioned above, we don't have any reporting tools in place at this time.
Hi Milosz,
I don't know if I was unclear in the following part. Do you mean LAVA can already figure this out:
For example, we can start from the performance measurement. We always have a result for the performance test, like the task running time. If one kernel patch causes 10% more running time, we say the performance decreases by 10%. And if we set the performance alarm criterion at 5%, then we know the patch is bad. The other performance data or power data could be handled with a similar idea.
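A minimal sketch of the comparison described above: compute the percentage change of a measurement (e.g. a task's running time) between a baseline kernel and a patched kernel, and flag the patch when the change crosses an alarm threshold. The numbers and the 5% threshold here are illustrative, not real data.

    # compare_runs.py -- hypothetical example, not part of LAVA or LKP
    def percent_change(baseline, patched):
        """Relative change of `patched` against `baseline`, in percent."""
        return (patched - baseline) / baseline * 100.0

    def is_regression(baseline, patched, threshold=5.0, lower_is_better=True):
        """Flag a regression when the change exceeds the alarm threshold."""
        change = percent_change(baseline, patched)
        if lower_is_better:           # e.g. run time or energy (J): an increase is bad
            return change > threshold
        return change < -threshold    # e.g. throughput: a decrease is bad

    # A patch that makes the task run 10% longer trips a 5% alarm.
    baseline_runtime = 20.0   # seconds, hypothetical baseline result
    patched_runtime = 22.0    # seconds, hypothetical result with the patch
    print(percent_change(baseline_runtime, patched_runtime))   # 10.0
    print(is_regression(baseline_runtime, patched_runtime))    # True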
LAVA doesn't do any postprocessing/analysis of the data it generates. It's purely a tool to run tests automatically, so it will tell you what the test results are for the build (*) you submit. The decision whether the results are 'good' or 'bad' is always the responsibility of the requester. From what you wrote above I assume you're interested in a single measurement (like hackbench with fixed parameters). That is already available. If you submit the test jobs in a way that lets them be filtered in LAVA, creating a simple report with a threshold is also possible. I'm not sure about threshold alerts, as I have never used them.
* it isn't enough to point to the tree/branch to have the testing done.
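As a rough illustration of the "little bit of scripting" and the simple threshold report mentioned above, the sketch below assumes the measurements have already been exported from LAVA into two JSON files (one per build), each mapping a test name to a numeric result. The file format, file names, and 5% threshold are assumptions for illustration; they are not LAVA's own reporting format.

    # threshold_report.py -- hypothetical post-processing, not a LAVA feature
    import json

    def load_results(path):
        with open(path) as f:
            return json.load(f)    # e.g. {"hackbench": 21.3, "dbench": 148.2}

    def report(baseline_path, patched_path, threshold=5.0):
        baseline = load_results(baseline_path)
        patched = load_results(patched_path)
        for test, base_value in sorted(baseline.items()):
            if test not in patched:
                print(f"{test}: missing in patched results")
                continue
            change = (patched[test] - base_value) / base_value * 100.0
            verdict = "BAD" if abs(change) > threshold else "ok"
            print(f"{test}: {base_value} -> {patched[test]} ({change:+.1f}%) {verdict}")

    if __name__ == "__main__":
        report("baseline.json", "patched.json")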
milosz