On 03/06/2012 01:26 AM, Michael Hope wrote:
Hi Ken. In follow up to our 1-on-1 yesterday, here's what I'd like done next.
The goal is to use OE Core as a release test suite. The releases are tarballs so we can keep the current recipe format and punt bzr support for later. The first step is to be able to reliably build a release in the cloud or validation lab.
In all cases keep the other teams in mind. Much of this is related to Validation. Platform will be involved later. Ping them early.
Kernel: We're starting with GCC and need a kernel to supply headers and to boot some type of ARMv7 image. I don't want a linux-linaro recipe as people will use it and it's too early for that.
Find a kernel, preferably from OE Core, that is recent, ARMv7, >= 512 MB RAM, and works well with qemu-linaro. Prefer vexpress-a9, else OMAP?
Done. I've updated the meta-linaro layer to (re-)use the default OE-Core kernel sources - a Yocto kernel - which gets built with a vexpress defconfig.
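For reference, the configuration side of that is small; in the build's local.conf it boils down to roughly the following (untested as written here, and the machine name is a placeholder for whatever the meta-linaro machine config ends up being called):

# Rough sketch; "vexpress-a9" is a placeholder machine name. The point is
# just to stay on the stock OE-Core kernel rather than a linux-linaro recipe.
cat >> conf/local.conf << 'EOF'
MACHINE = "vexpress-a9"
PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
EOF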
Talking:
 * Say Hi to Validation re: EC2 and plans
 * Say Hi to the ARM landing team re: vexpress upstream support
 * Say Hi to Beth Flanagan re: Yocto's existing auto builders and any hints
Yep, it looks like there are several options: a) the Yocto autobuilder b) Jenkins c) LAVA d) something homegrown
I talked to fabo and zyga and agreed that - as a starting point - I'll provide a shell script that automates the process and we'll go from there. I've made contact with Elizabeth and briefly looked into the Yocto autobuilder, which is based on buildbot. Obviously it's meant for regression testing the various Yocto images. I think we want to keep oe-core + meta-linaro stable and exchange the toolchain, which is kind of the other way round from what they are doing. But I'll keep an eye on it.
Cloud builds:
 * Find out who is already doing OE builds in the cloud and how
 * Run a build locally and time it
 * Push ~/downloads into the cloud, build, and time it [1]
 * Figure out how much this build will cost in dollars
[1] c1.xlarge might be best. Builds are normally I/O bound and the cloud is I/O poor. Put /tmp and other chunks in a tmpfs? EC2 rounds up to the nearest hour as well.
If the cloud is too expensive then we'll get a machine installed.
S3 for storage (only proceed if affordable):
 * Use S3 for storing the input tarballs
 * Use S3 either as a pre-mirror by serving over HTTP, or use s3cmd to sync down the tarballs before starting the build
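Both variants look straightforward. Roughly (untested, and the bucket name is made up):

# Variant 1: sync the tarballs down before starting the build
s3cmd sync s3://linaro-oe-sources/downloads/ ~/downloads/

# Variant 2: serve the bucket over HTTP and point bitbake at it as a pre-mirror
cat >> conf/local.conf << 'EOF'
PREMIRRORS_prepend = "\
     ftp://.*/.*   http://linaro-oe-sources.s3.amazonaws.com/downloads/ \n \
     http://.*/.*  http://linaro-oe-sources.s3.amazonaws.com/downloads/ \n \
     https://.*/.* http://linaro-oe-sources.s3.amazonaws.com/downloads/ \n \
"
EOF

The pre-mirror variant avoids needing S3 credentials on the build machine, assuming we're happy making the bucket world-readable.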
Just to give you an overview: the build of the sato and qt4e images takes about two hours on my machine (24x "E5649 @ 2.53GHz" with 32 GB of RAM) and creates about 37 GiB of object files, binaries, packages and images. This excludes the size of the sources (and the time for fetching them in case they are not there yet). I haven't spent time on optimizing my build environment (tmpfs etc.), so I guess there is room for improvement. I can't even say whether it's I/O or CPU bound. Sometimes it appears to be the latter - when building Qt, for example. But sometimes only one CPU gets used because everything is waiting for a source that other tasks depend on to be unpacked.
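When I get around to the tmpfs experiment and an EC2 run, I'm planning something along these lines - a rough, untested sketch; the tmpfs size and the hourly rate are placeholders (check the current c1.xlarge price), and with ~37 GiB of output it probably only fits if we also enable rm_work:

# Put the build's TMPDIR on tmpfs to work around poor EC2 I/O.
sudo mkdir -p /mnt/oe-tmp
sudo mount -t tmpfs -o size=24g tmpfs /mnt/oe-tmp
sudo chown $(whoami) /mnt/oe-tmp
cat >> conf/local.conf << 'EOF'
TMPDIR = "/mnt/oe-tmp"
INHERIT += "rm_work"
EOF

# Time the build and estimate the cost; EC2 bills per started hour, so round up.
RATE_PER_HOUR=0.66   # placeholder - look up the current c1.xlarge on-demand price
/usr/bin/time -f '%e' -o build.time bitbake core-image-sato
SECS=$(cut -d. -f1 build.time)
HOURS=$(( (SECS + 3599) / 3600 ))
echo "~${HOURS}h * \$${RATE_PER_HOUR}/h = \$$(echo "${HOURS} * ${RATE_PER_HOUR}" | bc) per build"

Going by my local two-hour figure that's only a few billed hours per run, though the poorer EC2 I/O will presumably stretch it.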
Scripting:
 * Re-use existing scripts if feasible
 * Integrate with LAVA provided we can run exactly the same scripts on a laptop for debugging
 * Script the bitbake, OE meta layer, and Linaro meta layer setup
 * Script the configuration, including setting the release tarball URL and the preferred GCC version
 * Script the build and result capture, especially the log, any ICEs, and the final sizes
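As a first cut of that script I have something along these lines in mind - a rough, untested outline: the repository URLs, layer locations and GCC version below are placeholders rather than final choices, and how the release tarball URL gets plumbed into the recipe still needs working out:

#!/bin/bash -e
# Rough outline, untested; URLs, paths and versions are placeholders.
GCC_VERSION="4.6%"

# 1. Fetch bitbake plus the OE-Core and Linaro metadata layers
git clone git://git.openembedded.org/openembedded-core oe-core
git clone git://git.openembedded.org/bitbake oe-core/bitbake
git clone git://git.linaro.org/openembedded/meta-linaro.git meta-linaro

# 2. Create the build directory and configuration
. ./oe-core/oe-init-build-env build
echo 'BBLAYERS += "${TOPDIR}/../meta-linaro"' >> conf/bblayers.conf
cat >> conf/local.conf << EOF
PREFERRED_VERSION_gcc = "${GCC_VERSION}"
PREFERRED_VERSION_gcc-cross = "${GCC_VERSION}"
# TODO: release tarball URL, once we settle on how to inject it
EOF

# 3. Build, then capture the log, any ICEs and the final sizes
bitbake core-image-sato 2>&1 | tee build.log
grep -i "internal compiler error" build.log > ices.txt || true
du -sh tmp/deploy/images tmp/work > sizes.txt

Once that runs reliably on a laptop, the same script should be what LAVA (or whichever autobuilder we pick) ends up driving.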
Future: OE can grab a repository seed then update based on that. Check if the bzr backend supports this. If so, play with seeding to do tip builds.
Yeah, this is definitely useful for testing tip. I'd like to start with something smaller than GCC though; Linaro GDB or QEMU, for example, would be good. I hope those are the low-hanging fruit, so I'll begin with them if I can squeeze some time in.
Let me know what you think, then we'll spawn blueprints. Let's keep an eye on this as it's sounding expensive.
-- Michael
Regards, Ken