On 11 October 2016 at 22:26, Jens Bauer <jens-lists-linaro@gpio.dk> wrote:
Hi Christophe.
Thanks for the review; that was quick. :)
Thanks for your interest and efforts.
You deserve some contributions/freebies, with all the hard work you're already doing - thank you on behalf of thousands of people using your toolchain!
You may have noticed that your suggestions have been integrated:
- support for Darwin
- use of getconf
Sorry, I realize that I should have mentioned your name in the commit message.
- By default use twice as many cores for -jN as usual. (If you have 2 cores, use -j4)
-CPUS="`grep -c proces /proc/cpuinfo`"
+let CPUS=2*`getconf _NPROCESSORS_ONLN`
Using getconf is indeed more portable, I think. However, I'm not sure we want the '2*' part of the patch, since we sort of rely on the -j factor in our validations (e.g. the number of independent builds in parallel on a given machine).
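For what it's worth, a minimal sketch of keeping the two concerns separate: CPUS stays the raw core count, and any over-subscription is applied through a separate, overridable variable (the JOBS_FACTOR name below is only a placeholder, not something taken from the build scripts):

  # CPUS keeps the plain core count, so validation scripts can still adjust the -j factor themselves
  CPUS=`getconf _NPROCESSORS_ONLN`
  # optional over-subscription factor; defaults to 1, override with e.g. JOBS_FACTOR=2 for local builds
  JOBS_FACTOR=${JOBS_FACTOR:-1}
  let JOBS=$CPUS*$JOBS_FACTOR
  make -j$JOBS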
Fair enough. :) If you use the CPUS variable for the actual number of cores, then you could do what I do in my builds instead:
let CPUS=`getconf _NPROCESSORS_ONLN`
let cpucount=2*$CPUS
pmake="make -j$cpucount"
[ -w "$prefix" ] && smake="$pmake" || smake="sudo $pmake"
case `uname -s` in
  'Darwin') isMac=1; smake="sudo $pmake" ;;
  'Linux')  isLinux=1 ;;
esac
I mentioned this in the support request as well. I'm creating two variables, which contain the commands used for executing make. Each has a 'built-in' -jN, where N is twice the number of cores.
But if you're somehow querying make for the -j argument, then you'll probably need to use $CPUS directly. (Normally -j2 will use only around 50% CPU time on a dual-core machine - this is most likely due to the disk access involved, where the CPU waits for the disk to deliver data and sleeps until the data arrives.)
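To make the intent of those two variables concrete, here is roughly how they would be used (the build-gcc directory name is only a placeholder):

  # parallel build in the build tree, no root needed
  (cd build-gcc && $pmake all)
  # install may need sudo, depending on whether $prefix is writable
  (cd build-gcc && $smake install)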
This makes sense if you run only one build at a time on a machine, but this is not our main use case. For the time being, we try to optimize our validation bandwidth, and we adjust the -j factor along with the number of builds in parallel on a given build server.
As you may have guessed, when we run validations, we check several targets, which happen to be scheduled in parallel on several builders. We favor throughput in terms of validation results. For instance, we prefer to have the results for 4 targets in ~2h rather than 3 targets in ~1h30 and then having to wait another 1h30 for the next batch. (The figures are not accurate, just to give you an idea.) YMMV.
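As a rough sketch of that trade-off (the NBUILDS value below is hypothetical, just for illustration): with several builds scheduled on one builder, the per-build -j factor is derived by dividing the core count between the concurrent builds, rather than multiplying it up:

  CPUS=`getconf _NPROCESSORS_ONLN`
  NBUILDS=4                   # builds running in parallel on this builder (hypothetical value)
  let JOBS=$CPUS/$NBUILDS     # share the cores between the concurrent builds
  [ $JOBS -lt 1 ] && JOBS=1   # never go below -j1
  make -j$JOBS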
Thanks,
Christophe
Love Jens