I couldn't find the arm-none-eabi- bare metal version of Linaro GCC 4.8. Could you please provide a link to the prebuilt bare-metal toolchain binaries?
-Sugumar
Paul,
I've been having some thoughts about CBuild and Lava and the TCWG
integration of them both. I wish to share them and open them up for general
discussion.
The background to this has been the flakiness of the Pandas (due to heat),
the Arndale (due to board 'set-up' issues), and getting a batch of Calxeda
nodes working.
The following discussion refers to building and testing only, *not*
benchmarking.
If you look at http://cbuild.validation.linaro.org/helpers/scheduler you
will see a bunch of calxeda01_* nodes have been added to CBuild. After a
week of sorting them out they provide builds twice as fast as the Panda
boards. However, during the setup of the boards I came to the conclusion
that we set build slaves up incorrectly, and that there is a better way.
The issues I encountered were:
* The Calxedas run quantal - yet we want to build on precise.
* It's hard to get a machine running in hard-float to bootstrap a
soft-float compiler, and vice-versa.
* My understanding of how the Lava integration works is that it runs the
cbuild install scripts each time, and so we can't necessarily reproduce a
build if the upstream packages have been changed.
Having thought about this a bit I came to the conclusion that the simple
solution is to use chroots (managed by schroot), and to change the
architecture a bit. In the old architecture everything is put into the main
file-system as one layer. The new architecture would split this into two:
1. Rootfs - Contains just enough to boot the system and knows how to
download an appropriate chroot and start it.
2. Chroots - these contain a setup build system that can be used for
particular builds.
The rootfs can be machine-type specific (as necessary), and for builds can
be a stock Linaro root filesystem. It will contain scripts to set up the
required users, and then to download an appropriate chroot and run it.
The chroot will be set up for a particular type of build (soft-float vs
hard-float) and will be the same for all platforms. The advantage of this
is that I can then download a chroot to my ChromeBook and reproduce a build
locally in the same environment to diagnose issues.
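To make the "reproduce locally" part concrete, here is a rough sketch of
what fetching and registering such a chroot might look like. This is only
an illustration: the tarball URL, the chroot name and the cbuild-run-job.sh
script are all made up, and the schroot configuration shown is just one
possible way of setting it up.

  #!/usr/bin/env python
  # Illustrative sketch only: fetch a prebuilt build chroot, register it
  # with schroot, and run a build step inside it. The URL, chroot name and
  # cbuild-run-job.sh are placeholders, not real cbuild artefacts.
  import os
  import subprocess
  import tarfile
  import urllib2   # Python 2, as found on precise-era systems

  CHROOT_NAME = "precise-armhf-build"                                 # placeholder
  TARBALL_URL = "http://example.org/chroots/precise-armhf-build.tgz"  # placeholder
  TARGET_DIR = "/srv/chroots/" + CHROOT_NAME

  # 1. Download and unpack the chroot image.
  local_tar = "/tmp/" + CHROOT_NAME + ".tgz"
  with open(local_tar, "wb") as out:
      out.write(urllib2.urlopen(TARBALL_URL).read())
  os.makedirs(TARGET_DIR)
  tarfile.open(local_tar).extractall(TARGET_DIR)

  # 2. Describe it to schroot as a plain directory-type chroot.
  config = ("[%s]\n"
            "description=Linaro build chroot\n"
            "type=directory\n"
            "directory=%s\n"
            "users=buildslave\n"
            "root-users=buildslave\n" % (CHROOT_NAME, TARGET_DIR))
  with open("/etc/schroot/chroot.d/%s.conf" % CHROOT_NAME, "w") as f:
      f.write(config)

  # 3. Run a build step inside the chroot.
  subprocess.check_call(["schroot", "-c", CHROOT_NAME, "--", "./cbuild-run-job.sh"])

The same tarball could then be unpacked on a ChromeBook (or any other armhf
machine) to get an identical build environment.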
The Calxeda nodes in cbuild use this type of infrastructure - the rootfs is
running quantal (and I have no idea how it is configured - it is what Steve
supplied me with). Each node then runs two chroots (precise armel and
precise armhf) which take it in turns to ask the cbuild scheduler whether
there is a job available.
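For illustration, this is roughly the kind of loop each chroot runs. The
real cbuild scheduler protocol is not shown here; the query parameter, the
reply format and the cbuild-run-job helper are invented purely to sketch
the idea.

  # Sketch of a per-chroot polling loop (hypothetical scheduler protocol).
  import subprocess
  import time
  import urllib2

  SCHEDULER = "http://cbuild.validation.linaro.org/helpers/scheduler"
  CHROOT = "precise-armhf"    # this instance's build flavour

  while True:
      # Ask the scheduler whether there is a job for this flavour
      # (the query parameter and reply format are made up).
      reply = urllib2.urlopen("%s?flavour=%s" % (SCHEDULER, CHROOT)).read().strip()
      if reply:
          # A job name came back - run it inside this chroot
          # (cbuild-run-job is a placeholder job runner).
          subprocess.call(["schroot", "-c", CHROOT, "--", "cbuild-run-job", reply])
      else:
          time.sleep(60)   # nothing to do, back off before asking again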
So my first question is: does any of the above make sense?
Next steps as I see it are:
1. Paul/Dave - what stage is the cooling of the Pandaboards in the Lava
farm at? One advantage of the above architecture is that we could use a
stock Pandaboard kernel & rootfs with thermal limiting turned on for
builds, so that things don't fall over all the time.
2. Paul - how hard would it be to try and fire up a Calxeda node in
Lava? We can use one of the ones assigned to me. I don't need any of the
fancy multinode stuff that Michael Hudson-Doyle is working on - each node
can be considered a separate board. I feel guilty that I put the nodes into
CBuild without looking at Lava - but it was easier to do and got me going -
and I think correcting that is important.
3. Generally - What's the state of the Arndale boards in Lava? Fathi has
got GCC building reliably, although I believe he is now facing networking
issues.
4. Paul - If Arndale boards are available in Lava - how much effort would
it be to make them available to CBuild?
One issue the above doesn't solve, as far as I can see, is being able to
tell Lava that we can do a build on any ARMv7-A CBuild-compatible board. I
don't generally care whether the build happens on an Arndale, Panda, or
Calxeda board - I want the result in the shortest possible time.
A final note on benchmarking. I think the above scheme could work for
benchmarking targets too; all we need to do is build a kernel/rootfs that
is set up to provide a system that produces repeatable benchmarking results.
Comments welcome from all.
Thanks,
Matt
--
Matthew Gretton-Dann
Toolchain Working Group, Linaro
== Progress ==
* Disable-peeling: waiting for reference job results
* Libsanitizer:
- identified the reason for different behaviour on board and on simulator
- proposed a workaround on gcc-patches (qemu-user has a few
limitations, so it's desirable to skip some libsanitizer tests)
- in the meantime libsanitizer developers received similar concerns
from Android, and are considering a runtime environment variable to
control use of stderr
* Neon intrinsics:
- resumed investigation on vuzp crc
* Internal support
== Next ==
* Peeling: analyze results when available
* Revert-coalesce: same
* Libsanitizer: reach agreement with upstream
* Neon intrinsics: continue with vuzp crc
== Progress ==
* Further work on glibc memcpy IFUNC patch based on review.
* Ported libdwarf to aarch64 and submitted upstream.
* Disabled gold build in binutils cbuild job and created a gold job.
* Investigated binutils testsuite failure on precise.
* On leave Friday.
== Issues ==
* None.
== Plan ==
* Submit a patch for binutils testsuite failure on precise.
* Complete work on glibc memcpy IFUNC patch.
* Follow up binutils IFUNC patch.
* AArch64 IFUNC next...
--
Will Newton
Toolchain Working Group, Linaro
== Progress ==
* Completed investigation of GDB dwarf test suite failures on ARM.
Almost all failures will be fixed with small updates to the test cases'
assembly language files.
* Checked out the GDB 7.6 branch and compared dwarf test case failures
on ARM. Some test cases have been updated already and don't need a patch.
* Started filtering GDB test suite results on ARM for possible failures due
to assembly language incompatibility.
* Tried some experimentation with cbuild and ec2 instances, only to find
out they are not connected to the lab.
* Completed the Ireland visa process. Spent Friday and most of Thursday
on document attestation, fee submission and postage.
*** Still no blueprint available to log work in JIRA.
== Plan ==
* Filter all the ARM assembly language compatibility updates required in
the gdb test suite and create a patch.
* Get an intro to the gdb patch submission process for a possible fix to
the gdb test suite for ARM assembly compatibility.
* Start looking into gdb.mi test suite failures on ARM.
* Fill in the ARM native vs ARM remote comparison sheet.
* Figure out a way to use hackbox for gdb testing on arm.
== Progress ==
Very short week, was doing some AMD internal tasks and attending local meetings.
* gprof support work for AArch64.
Not much progress this week.
Working on the GCC side to support gprof for AArch64.
== Plan ==
* Continue gprof support work for AArch64
== Summary ==
- http://cards.linaro.org/browse/TCWG-13
- the performance regression is due to function alignment.
- there is an increase in runtime of core_state_transition even
though there is no difference in the code generated with the patch
- Adding nops seems to improve the locality and performance; it gets
better than without the patch for THUMB2.
- Get spec2000 results for http://cards.linaro.org/browse/TCWG-14 to
decide on the next step
- Couldn't get SPEC2000 results with CBuild
- Set up SPEC benchmarks on the Chromebook and am now running them locally.
- https://blueprints.launchpad.net/gcc-linaro/+spec/better-end-of-loop-counte…
- Initial investigation shows that the generated code is the same as
expected.
== Plan ==
- http://cards.linaro.org/browse/TCWG-13 - follow it up
- Get spec2000 results for http://cards.linaro.org/browse/TCWG-14 to
decide on the next step
- Look for improvement in VRP for zero/sign extension
== Progress ==
* Running around JM/lencode bug
- caused by a codegen opt (ICMP fold) that had repercussions only on A9
and A15 code generation
- spent three days trying to reduce the case when the problem fixed itself
miraculously >:(
* Planning for the future
- Agreeing on short-term plans for Q2
* Buildbot
- Working on self-hosting bot
- Moved local buildmaster to hackbox
* Investigating LLVMLinux
- Building Android kernel with LLVM
- Investigating breakage in Debug mode
== Plans ==
* Continue self-hosting bot
* Try running a CBuild benchmark with LLVM
* Start putting together the infrastructure for release 3.3
* Try to extract useful information from perf database
Progress:
* qemu maintenance
** sent arm-devs and target-arm pullreqs now that we're in softfreeze
* VIRT-4
** received Arndale board, confirmed it works (took several
hours mostly due to bonkers power switch design)
** VIRT-49
*** making progress; updated card with a list of sub-subtasks
Plans:
* keep pushing on with VIRT-49
* finish config of Arndale board, test running KVM on it
* book travel/hotel for Connect Dublin
* office move Fri 26/Mon 29
-- PMM