Paul,
I've been having some thoughts about CBuild, Lava, and the TCWG integration of the two, and I'd like to share them and open them up for general discussion.
The background to this has been the flakiness of the Pandas (due to heat), the Arndales (due to board set-up issues), and the effort of getting a batch of Calxeda nodes working.
The following discussion refers to building and testing only, *not* benchmarking.
If you look at http://cbuild.validation.linaro.org/helpers/scheduler you will see that a bunch of calxeda01_* nodes have been added to CBuild. After a week of sorting them out, they provide builds twice as fast as the Panda boards. However, during the setup of the boards I came to the conclusion that we set build slaves up incorrectly, and that there is a better way.
The issues I encountered were:
* The Calxedas run quantal - yet we want to build on precise.
* It's hard to get a machine running hard-float to bootstrap a soft-float compiler, and vice versa.
* My understanding of how the Lava integration works is that it runs the cbuild install scripts each time, so we can't necessarily reproduce a build if the upstream packages have changed.
Having thought about this a bit, I came to the conclusion that the simple solution is to use chroots (managed by schroot) and to change the architecture a bit. In the old architecture everything is put into the main file-system as one layer. The new architecture would split this into two:
1. Rootfs - contains just enough to boot the system, and knows how to download an appropriate chroot and start it.
2. Chroots - each contains a fully set-up build system for a particular kind of build.
The rootfs can be machine-type specific (as necessary), and for builds can be a stock Linaro root filesystem. It will contain scripts to set up the required users, download an appropriate chroot, and start it.
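As a very rough sketch of what those rootfs scripts might look like (the URL, chroot name, and user are placeholders - nothing here exists yet):

    #!/bin/sh
    # Hypothetical first-boot script on the rootfs: create the build user,
    # fetch a pre-built chroot tarball, and register it with schroot.
    set -e

    CHROOT_NAME=precise-armhf
    CHROOT_URL=http://example.org/chroots/${CHROOT_NAME}.tar.gz   # placeholder URL
    CHROOT_DIR=/srv/chroots/${CHROOT_NAME}

    # Build user that cbuild jobs will run as.
    adduser --disabled-password --gecos "cbuild slave" cbuild || true

    # Download and unpack the chroot.
    mkdir -p ${CHROOT_DIR}
    wget -O - ${CHROOT_URL} | tar -xz -C ${CHROOT_DIR}

    # Register the chroot with schroot so it can be entered by name.
    cat > /etc/schroot/chroot.d/${CHROOT_NAME} <<EOF
    [${CHROOT_NAME}]
    description=Ubuntu precise armhf build environment
    type=directory
    directory=${CHROOT_DIR}
    users=cbuild
    root-users=cbuild
    EOF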
The chroot will be set up for a particular type of build (soft-float vs hard-float) and will be the same for all platforms. The advantage of this is that I can then download a chroot to my ChromeBook and reproduce a build locally in the same environment to diagnose issues.
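For example (assuming a chroot registered along the lines above), diagnosing a failure locally would be roughly:

    # On the ChromeBook: install schroot, fetch/register the same chroot,
    # then enter it and re-run whatever the failing job did.
    sudo apt-get install schroot
    schroot -c precise-armhf
    # ...inside the chroot, re-run the build steps of the failing job...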
The Calxeda nodes in cbuild use this type of infrastructure - the rootfs is running quantal (and I have no idea how it is configured - it is what Steve supplied me with). Each node then runs two chroots (precise armel and precise armhf) which take it in turns to ask the cbuild scheduler whether there is a job available.
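Very roughly, each chroot just sits in a loop along these lines (the scheduler query string is made up for illustration - the real cbuild client scripts are what actually run):

    #!/bin/sh
    # Illustrative polling loop run inside each chroot (one for armel, one for armhf).
    while true; do
        # Ask the scheduler whether a job is queued for this slave (hypothetical query).
        JOB=$(wget -q -O - "http://cbuild.validation.linaro.org/helpers/scheduler?slave=$(hostname)-armhf")
        if [ -n "${JOB}" ]; then
            run-cbuild-job "${JOB}"   # placeholder for the real cbuild build step
        else
            sleep 60                  # nothing queued; check again in a minute
        fi
    done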
So my first question is: does any of the above make sense?
Next steps as I see it are:
1. Paul/Dave - what stage has the cooling of the Pandaboards in the Lava farm reached? One advantage of the above architecture is that we could use a stock Pandaboard kernel & rootfs with thermal limiting turned on for builds, so that things don't fall over all the time.
2. Paul - how hard would it be to try firing up a Calxeda node in Lava? We can use one of the ones assigned to me. I don't need any of the fancy multinode stuff that Michael Hudson-Doyle is working on - each node can be treated as a separate board. I feel guilty that I put the nodes into CBuild without looking at Lava - but it was easier to do and got me going. I think correcting that is important.
3. Generally - What's the state of the Arndale boards in Lava? Fathi has got GCC building reliably, although I believe he is now facing networking issues.
4. Paul - If Arndale boards are available in Lava - how much effort would it be to make them available to CBuild?
One issue the above doesn't solve, as far as I can see, is being able to tell Lava that we can do a build on any ARMv7-A CBuild-compatible board. I don't generally care whether the build happens on an Arndale, Panda, or Calxeda board - I just want the result in the shortest possible time.
A final note on benchmarking. I think the above scheme could also work for benchmarking targets; all we need to do is build a kernel/rootfs that is set up to provide a system that produces repeatable benchmarking results.
Comments welcome from all.
Thanks,
Matt