On Thu, Mar 08, 2012 at 07:08:51PM +0100, Ken Werner wrote:
On 03/08/2012 09:03 AM, Marcin Juszkiewicz wrote:
Just to give you an overview: the build of the sato and qt4e images takes about two hours on my machine (24x "E5649 @ 2.53GHz" with 32 GB of RAM) and creates about 37GiB of object files, binaries, packages and images. This excludes the size of the sources (and the time for fetching them in case they are not there). I haven't spent time on optimizing my build environment (tmpfs etc.), so I guess there is room for improvement. I cannot even say whether it's I/O or CPU bound. Sometimes it appears to be the latter - when building Qt, for example. But sometimes only one CPU gets used and it's waiting for a source to be unpacked that other tasks depend on.
BB_NUMBER_THREADS=12 PARALLEL_MAKE=8
Yep, I've already played with these and I'm currently using: BB_NUMBER_THREADS=24 PARALLEL_MAKE=24. It sounds like way too much, since in theory this could spin off 24x24 GCC processes, but in practice that rarely happens because some tasks are always waiting on a dependency.
In the rare cases where you actually get 24x24 GCC processes you'll end up I/O bound, but it's still worth it to max out the load at all times, unless you want to use your system for something else at the same time... :) On my 6-core/12-thread system I usually do 16x16 builds.
Or other values for them, and your machine will keep most of its CPU time in use. The first one controls how many BitBake tasks run at the same time; the second is passed as the "-jX" argument to make. With both set you do not have to wait, as there is always something to do (up to the moment when do_rootfs is called, as that is usually the last task).
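In local.conf these are just ordinary BitBake variables; something along these lines (the numbers are only an example, pick whatever matches your core count):
BB_NUMBER_THREADS = "12"
PARALLEL_MAKE = "-j 12"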
INHERIT += "rm_work"
This will remove the contents of WORKDIR after each recipe is built. With this enabled (and downloads kept outside of tmp/) you can probably fit the whole build in tmpfs on your machine.
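For example (the download path below is just a placeholder, anything outside of tmp/ works):
# keep fetched sources out of tmp/ so they survive rm_work and don't eat tmpfs space
DL_DIR = "/home/builder/downloads"
INHERIT += "rm_work"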
Ah, thanks - I forgot about that. It saves about 16GB of space and running on tmpfs saves about 30 minutes on my setup. Thanks!
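For anyone wanting to try the same: mounting the build tmp/ on tmpfs is a one-liner along these lines (path and size are just an example, you need enough RAM to hold the peak usage of tmp/):
sudo mount -t tmpfs -o size=24G tmpfs /path/to/build/tmp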
Well, rm_work works less efficiently when used with high numbers in BB_NUMBER_THREADS/PARALLEL_MAKE, as the peak usage of tmp/ can go _almost_ to the max (e.g. in your case up to 37GB). The reason is that rm_work does its cleaning only after a package and all of its dependencies get built and packaged. So BitBake often starts building a bunch of packages in parallel, branching off into the dependency tree and quickly filling up the queue. Until rm_work can start removing the pieces that are done, most of the packages end up unpacked, patched, configured, compiled and installed, waiting for the remaining package_write tasks to happen - all of that eating up the tmp/ space...
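If that peak usage is a problem, one knob worth trying (not something discussed above, and worth checking against your BitBake version) is the task scheduler: the "completion" scheduler makes BitBake prefer finishing recipes it has already started over branching further out, which gives rm_work a chance to clean up earlier:
BB_SCHEDULER = "completion"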