Hi Ken. As a follow-up to our 1-on-1 yesterday, here's what I'd like done next.
The goal is to use OE Core as a release test suite. The releases are tarballs so we can keep the current recipe format and punt bzr support for later. The first step is to be able to reliably build a release in the cloud or validation lab.
In all cases keep the other teams in mind. Much of this is related to Validation. Platform will be involved later. Ping them early.
Kernel: We're starting with GCC and need a kernel to supply headers and to boot some type of ARMv7 image. I don't want a linux-linaro recipe as people will use it and it's too early for that.
Find a kernel, preferably from OE Core, that is recent, targets ARMv7, supports >= 512 MB of RAM, and works well with qemu-linaro. Prefer vexpress-a9, else OMAP?
Talking:
 * Say Hi to Validation re: EC2 and plans
 * Say Hi to the ARM landing team re: vexpress upstream support
 * Say Hi to Beth Flanagan re: Yocto's existing auto builders and any hints
Cloud builds:
 * Find out who is already doing OE builds in the cloud and how
 * Run a build locally and time it
 * Push ~/downloads into the cloud, build, and time it[1]
 * Figure out how much this build will cost in dollars

[1] c1.xlarge might be best. Builds are normally I/O bound and the cloud is I/O poor. Put /tmp and other chunks in a tmpfs? EC2 rounds up to the nearest hour as well. A rough sketch of the timing run is below.
If the cloud is too expensive then we'll get a machine installed.
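As a rough sketch of that timing run (the host alias, image name and paths are placeholders, not a worked-out setup):

    # time a local build first as the baseline
    time bitbake core-image-minimal

    # then push the download cache to the EC2 instance and time the same build there
    rsync -az ~/downloads/ ec2-build-host:downloads/
    ssh ec2-build-host 'source oe-core/oe-init-build-env build && time bitbake core-image-minimal'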
S3 for storage (only proceed if affordable):
 * Use S3 for storing the input tarballs
 * Use S3 either as a pre-mirror by serving over HTTP, or use s3cmd to sync down the tarballs before starting the build (rough sketch below)
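Either route is only a couple of lines; as a sketch (the bucket name here is made up):

    # option 1: pull the tarballs down before the build starts
    s3cmd sync s3://linaro-oe-sources/downloads/ ~/downloads/

    # option 2: serve the bucket over HTTP and use it as a pre-mirror in local.conf
    PREMIRRORS_prepend = "\
    http://.*/.*  http://linaro-oe-sources.s3.amazonaws.com/downloads/ \n \
    https://.*/.* http://linaro-oe-sources.s3.amazonaws.com/downloads/ \n \
    ftp://.*/.*   http://linaro-oe-sources.s3.amazonaws.com/downloads/ \n"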
Scripting:
 * Re-use existing scripts if feasible. Integrate with LAVA provided we can run exactly the same scripts on a laptop for debugging.
 * Script the bitbake, OE meta layer, and Linaro meta layer setup.
 * Script the configuration, including setting the release tarball URL and the preferred GCC version.
 * Script the build and result capture, especially the log, any ICEs, and the final sizes. (A skeleton of what I mean is sketched below.)
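Something along these lines, purely as a skeleton - the repository URLs, branch choices, GCC version and capture details are indicative only, and the real script is yours to design:

    #!/bin/bash -e
    # 1. fetch bitbake plus the OE and Linaro meta layers (URLs/branches indicative only)
    git clone git://git.openembedded.org/openembedded-core oe-core
    git clone git://git.openembedded.org/bitbake oe-core/bitbake
    git clone git://git.linaro.org/openembedded/meta-linaro.git meta-linaro

    # 2. configure: add the Linaro layer and pin the toolchain version
    source ./oe-core/oe-init-build-env build
    echo 'BBLAYERS += "${TOPDIR}/../meta-linaro"' >> conf/bblayers.conf
    echo 'PREFERRED_VERSION_gcc = "4.6%"' >> conf/local.conf   # version is a placeholder
    # (the gcc-cross/gcc-runtime variants and the release-tarball SRC_URI override are TBD)

    # 3. build and capture results: full log, any ICEs, final image sizes
    bitbake core-image-minimal 2>&1 | tee build.log
    grep -i "internal compiler error" build.log > ices.log || true
    ls -l tmp/deploy/images/ > image-sizes.txt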
Future: OE can grab a repository seed then update based on that. Check if the bzr backend supports this. If so, play with seeding to do tip builds.
Let me know what you think, then we'll spawn blueprints. Let's keep an eye on this as it's sounding expensive.
-- Michael
On 03/06/2012 01:26 AM, Michael Hope wrote:
Kernel: We're starting with GCC and need a kernel to supply headers and to boot some type of ARMv7 image. I don't want a linux-linaro recipe as people will use it and it's too early for that.
Find a kernel, preferably from OE Core, that is recent, targets ARMv7, supports >= 512 MB of RAM, and works well with qemu-linaro. Prefer vexpress-a9, else OMAP?
Done. I've updated the meta-linaro layer to (re-)use the default OE-Core kernel sources - a Yocto kernel - which gets built using a vexpress defconfig.
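For anyone curious, the general shape of that kind of setup is something like the following (values are illustrative, not the exact meta-linaro settings):

    # sketch: select the OE-Core/Yocto kernel and feed it a vexpress defconfig
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
    KERNEL_IMAGETYPE = "uImage"
    # plus a linux-yocto bbappend that adds file://vexpress_defconfig to SRC_URI
    # and installs it as the defconfig before the kernel configure step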
Talking:
 * Say Hi to Validation re: EC2 and plans
 * Say Hi to the ARM landing team re: vexpress upstream support
 * Say Hi to Beth Flanagan re: Yocto's existing auto builders and any hints
Yep, it looks like there are several options: a) the Yocto autobuilder b) Jenkins c) LAVA d) something homegrown
I talked to fabo and zyga and agreed that - as a starting point - I'll provide a shell script that automates the process, and we'll go from there. I've made contact with Elizabeth and briefly looked into the Yocto autobuilder, which is based on buildbot. Obviously it's meant for regression testing the various Yocto images. I think we want to keep oe-core + meta-linaro stable and exchange the toolchain, which is kind of the other way round from what they are doing. But I'll keep an eye on it.
Cloud builds:
 * Find out who is already doing OE builds in the cloud and how
 * Run a build locally and time it
 * Push ~/downloads into the cloud, build, and time it[1]
 * Figure out how much this build will cost in dollars
[1] c1.xlarge might be best. Builds are normally I/O bound and the cloud is I/O poor. Put /tmp and other chunks in a tmpfs? EC2 rounds up to the nearest hour as well.
Just to give you an overview: building the sato and qt4e images takes about two hours on my machine (24x "E5649 @ 2.53GHz" with 32 GB of RAM) and creates about 37 GiB of object files, binaries, packages and images. This excludes the size of the sources (and the time for fetching them in case they are not there). I haven't spent time on optimizing my build environment (tmpfs etc.), so I guess there is room for improvement. I cannot even say whether it's I/O or CPU bound. Sometimes it appears to be the latter - when building Qt for example. But sometimes only one CPU gets used while it waits for a source to be unpacked that other tasks depend on.
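As a rough way to capture comparable numbers elsewhere (the image name is just an example; use whatever gets built):

    # wall-clock time for the build, then the size of the build tree and images
    time bitbake core-image-sato
    du -sh tmp/ tmp/deploy/images/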
Future: OE can grab a repository seed then update based on that. Check if the bzr backend supports this. If so, play with seeding to do tip builds.
Yeah, this is definitely useful for testing tip. I'd like to start with something smaller than GCC, though. For example, Linaro GDB or QEMU would be good. I hope these are the low-hanging fruit, so I'll begin with them if I can squeeze some time in.
Regards, Ken
On 7 March 2012 23:47, Ken Werner ken.werner@linaro.org wrote:
Yep, it looks like there are several options: a) the Yocto autobuilder b) Jenkins c) LAVA d) something homegrown
I talked to fabo and zyga and agreed that - as a starting point - I'll provide a shell script that automates the process, and we'll go from there. I've made contact with Elizabeth and briefly looked into the Yocto autobuilder, which is based on buildbot. Obviously it's meant for regression testing the various Yocto images. I think we want to keep oe-core + meta-linaro stable and exchange the toolchain, which is kind of the other way round from what they are doing. But I'll keep an eye on it.
(a) buildbot and (b) Jenkins are both continuous integration tools. I guess the main difference is roughly the infrastructure used (Yocto or Linaro).
LAVA is a testing framework. It doesn't build anything; it runs tests and generates reports/results. It can be extended (like all the tools mentioned) to meet your requirements, but that has a cost (resources/time).
On 07.03.2012 22:47, Ken Werner wrote:
Yep, it looks like there are several options: a) the Yocto autobuilder b) Jenkins c) LAVA d) something homegrown
I talked to fabo and zyga and agreed that - as a starting point - I'll provide a shell script that automates the process, and we'll go from there. I've made contact with Elizabeth and briefly looked into the Yocto autobuilder, which is based on buildbot. Obviously it's meant for regression testing the various Yocto images. I think we want to keep oe-core + meta-linaro stable and exchange the toolchain, which is kind of the other way round from what they are doing. But I'll keep an eye on it.
I have set up at least two buildbots building OE images. I think they could be used for testing the Linaro toolchain too. If buildbot has the ability to poll bzr for changes then you can get automatic tests after each commit/merge.
Just to give you an overview: building the sato and qt4e images takes about two hours on my machine (24x "E5649 @ 2.53GHz" with 32 GB of RAM) and creates about 37 GiB of object files, binaries, packages and images. This excludes the size of the sources (and the time for fetching them in case they are not there). I haven't spent time on optimizing my build environment (tmpfs etc.), so I guess there is room for improvement. I cannot even say whether it's I/O or CPU bound. Sometimes it appears to be the latter - when building Qt for example. But sometimes only one CPU gets used while it waits for a source to be unpacked that other tasks depend on.
BB_NUMBER_THREADS=12 PARALLEL_MAKE=8
Or use other values for them and your machine will have most of its CPU time in use. The first one controls how many BitBake tasks run at the same time; the second is given as the "-jX" argument to make. With both set you do not have to wait, as there is always something to do (up to the moment do_rootfs is called, as this is usually the last task).
INHERIT += "rm_work"
This will remove the contents of WORKDIR after each recipe is built. With this enabled (and downloads outside of tmp/) you can probably fit the build in a tmpfs on your machine.
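Put together, a local.conf along these lines is one way to apply the above (values and the download path are illustrative, and note that OE-Core expects the full "-j N" string in PARALLEL_MAKE):

    # sketch of the suggested local.conf tuning
    BB_NUMBER_THREADS = "12"             # parallel BitBake tasks
    PARALLEL_MAKE = "-j 8"               # passed to make
    INHERIT += "rm_work"                 # drop WORKDIR contents once a recipe is done
    DL_DIR = "/home/builder/downloads"   # keep source tarballs outside of tmp/
    # TMPDIR could additionally be pointed at a tmpfs mount if enough RAM is available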
On 03/08/2012 09:03 AM, Marcin Juszkiewicz wrote:
I have set up at least two buildbots building OE images. I think they could be used for testing the Linaro toolchain too. If buildbot has the ability to poll bzr for changes then you can get automatic tests after each commit/merge.
Sounds good. I'll come back to you in case I give it a go. :)
BB_NUMBER_THREADS=12 PARALLEL_MAKE=8
Yep, I've already played with these and I'm currently using: BB_NUMBER_THREADS=24 PARALLEL_MAKE=24. It sounds like way too much, since in theory this could spin off 24x24 GCCs, but in practice that rarely happens because some task is usually waiting on a dependency.
INHERIT += "rm_work"
This will remove the contents of WORKDIR after each recipe is built. With this enabled (and downloads outside of tmp/) you can probably fit the build in a tmpfs on your machine.
Ah, thanks - I forgot about that. It saves about 16 GB of space, and running on tmpfs saves about 30 minutes on my setup. Thanks!
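For anyone wanting to do the same, putting the build's tmp/ on a tmpfs is roughly this (size and path are illustrative, and there has to be enough RAM to back it):

    # rough sketch: mount a tmpfs over the OE build's tmp/ directory
    sudo mount -t tmpfs -o size=40G tmpfs /path/to/build/tmp
    # then run bitbake as usual; the contents are gone after a reboot, so copy
    # anything worth keeping out of tmp/deploy/ when the build is done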
-- Ken
On Thu, Mar 08, 2012 at 07:08:51PM +0100, Ken Werner wrote:
Yep, I've already played with these and I'm currently using: BB_NUMBER_THREADS=24 PARALLEL_MAKE=24. It sounds like way too much, since in theory this could spin off 24x24 GCCs, but in practice that rarely happens because some task is usually waiting on a dependency.
In rare cases of 24x24 GCCs you'll end up being I/O bound, but it's still worth it to max out the load at all times, unless you want to use your system for something else at the same time... :) On my 6-core/12-thread system I usually do 16x16 builds.
Ah, thanks - I forgot about that. It saves about 16 GB of space, and running on tmpfs saves about 30 minutes on my setup. Thanks!
Well, rm_work works less efficiently when used with high values of BB_NUMBER_THREADS/PARALLEL_MAKE, as the peak usage of tmp/ can go _almost_ to the max (e.g. in your case up to 37 GB). The reason is that rm_work does its cleaning only after a package and all of its dependencies get built and packaged. So BitBake often starts building a bunch of packages in parallel, branching off into the dependency tree and quickly filling up the queue. Until rm_work can start removing the pieces that are done, most of the packages end up being unpacked, patched, configured, compiled and installed, waiting for the remaining package_write to happen - all of that eating up the tmp/ space...
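An easy way to see that effect is to watch the directory while a build runs, e.g. (interval and path as you like):

    # watch tmp/ grow and shrink as rm_work catches up with finished recipes
    watch -n 60 du -sh /path/to/build/tmp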
On 03/08/2012 07:53 PM, Denys Dmytriyenko wrote:
In rare cases of 24x24 GCCs you'll end up being I/O bound, but it's still worth it to max out the load at all times, unless you want to use your system for something else at the same time... :) On my 6-core/12-thread system I usually do 16x16 builds.
Thanks for sharing your config. It's always interesting to know what other people are using on their build machines. I noticed that increasing the number of parallel BitBake tasks past 24 won't do much, as there are rarely 24 tasks running in parallel due to the dependencies.
Well, rm_work works less efficiently when used with high values of BB_NUMBER_THREADS/PARALLEL_MAKE, as the peak usage of tmp/ can go _almost_ to the max (e.g. in your case up to 37 GB). The reason is that rm_work does its cleaning only after a package and all of its dependencies get built and packaged. So BitBake often starts building a bunch of packages in parallel, branching off into the dependency tree and quickly filling up the queue. Until rm_work can start removing the pieces that are done, most of the packages end up being unpacked, patched, configured, compiled and installed, waiting for the remaining package_write to happen - all of that eating up the tmp/ space...
Ok, I see. But at least in my case (for the sato and qt4e images) it's not getting too large during the build. :)
Regards, Ken
On 9 March 2012 07:08, Ken Werner ken.werner@linaro.org wrote:
On 03/08/2012 09:03 AM, Marcin Juszkiewicz wrote:
I have set up at least two buildbots building OE images. I think they could be used for testing the Linaro toolchain too. If buildbot has the ability to poll bzr for changes then you can get automatic tests after each commit/merge.
Sounds good. I'll come back to you in case I give it a go. :)
Let's use whatever the Validation team is using (Jenkins?). The fewer tools the better.
The OE people weren't worried if we didn't use buildbot, and our system doesn't have to integrate with theirs. Whatever we use, the build status and history should be public, as that's how we work at Linaro.
Ah, thanks - I forgot about that. It saves about 16 GB of space, and running on tmpfs saves about 30 minutes on my setup. Thanks!
You can build a Gnome distro from source all in RAM. How awesome is that.
-- Michael
On 8 March 2012 21:24, Michael Hope michael.hope@linaro.org wrote:
You can build a Gnome distro from source all in RAM. How awesome is that.
Hmmmm - can we increase our throughput on the x86 builders in the cloud by doing this for our build areas?
regards, Ramana
On 9 March 2012 10:36, Ramana Radhakrishnan ramana.radhakrishnan@linaro.org wrote:
Hmmmm - can we increase our throughput on the x86 builders in the cloud by doing this for our build areas?
The x86_64 EC2 instances are already fast but are marginal on the amount of memory, and they get expensive past that. The i686 instances have limited RAM, and I'd have to use setarch or similar on an x86_64 host to get more.
Not worth it, unfortunately. Another trap is that EC2 rounds your usage up to the nearest hour, so saving 10 minutes on a 1:30 build doesn't reduce the cost.
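For completeness, if the 32-bit-userspace-on-x86_64 route ever becomes worth revisiting, the rough shape is just a personality switch around the usual build (paths and image name are illustrative):

    setarch i686 bash -c 'source ./oe-core/oe-init-build-env build && bitbake core-image-minimal'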
-- Michael