Hi,
I need the following packages for an ARMv8 (aarch64) target. Where can I find
these packages? Or do I have to download the source code and compile them
using the Linaro cross-compiler?
libqt4-dev libqt4-opengl-dev libphonon-dev libicu-dev libsqlite3-dev
libxext-dev libxrender-dev gperf libfontconfig1-dev libpng12-dev
libjpeg62-dev
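If I do have to build from source, I assume each package would follow the
usual autoconf cross-build pattern, something like the following (the
aarch64-linux-gnu triplet and the install prefix here are placeholders for
whatever the Linaro cross-compiler actually provides):

export CC=aarch64-linux-gnu-gcc
./configure --host=aarch64-linux-gnu --prefix=/opt/aarch64/sysroot/usr
make && make install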
Thanks
Aparna
== Progress ==
* Benchmarks
- Running EEMBC on a Panda
- LLVM on par with GCC in code size and run time
* Release Planning
- Calxeda busted, delays, but got test-suite running on it
- Bootstrap and test-suite fail with atomic support, investigating
- Preparing a Beagleboard for conscience relief
- Coordinating with other parties on hardware/testing/roles
* EuroLLVM 2013
- Meetings, badges, preparations, final run
* Support
- Helping folks with test-suite, buildbots, reviewing patches, etc
== Issues ==
Broke my glasses in a place impossible to fix; had to resort to epoxy while
I wait for an eye test
== Plan ==
* EuroLLVM Mon~Tue
* Continue setting up release hardware/process, order some more boards
* Help Sylvestre/Galina setting up Jenkins/Buildbots for release
* Try to run a more substantial benchmark
* Fix my glasses
== Progress ==
* Disable-peeling: had to re-spawn benchmark jobs (both reference and
updated patch)
* Libsanitizer: updated patch running under cbuild before updating my
proposal upstream.
* Neon intrinsics:
- some progress on removing unnecessary moves around vuzp (see the sketch
after this list)
- there are still some around veor
* Bi-endian compiler: read article, attended call
* Internal support
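For reference, a minimal sketch of the intrinsics mentioned above (just an
illustration of vuzp/veor themselves, not the actual crc code):

#include <arm_neon.h>

/* vuzp_u8 de-interleaves two vectors: val[0] collects the even-numbered
   lanes, val[1] the odd ones. veor_u8 is a plain vector XOR. */
uint8x8x2_t deinterleave_and_mix(uint8x8_t a, uint8x8_t b)
{
    uint8x8x2_t split = vuzp_u8(a, b);
    split.val[0] = veor_u8(split.val[0], split.val[1]);
    return split;
}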
== Next ==
Holidays next week
== Future ==
* Disable peeling: analyze results
* Revert-coalesce: same
* Libsanitizer: send updated patch upstream if validation OK
* Neon intrinsics: continue improving crc with vuzp
== Progress ==
* Submitted a binutils patch for a testsuite failure on precise.
* Updated glibc memcpy patch based on feedback.
* Updated binutils IFUNC patch based on feedback.
* Started working on AArch64 IFUNC (sketch of the mechanism below).
* Submitted a couple of cleanup patches for AArch64 binutils.
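For anyone unfamiliar with IFUNC: it lets the dynamic linker pick a symbol's
implementation at load time via a resolver function. A minimal sketch using
the GCC attribute (function names invented for illustration):

/* The resolver runs once, at dynamic-link time, and returns the
   implementation the 'add' symbol should bind to. A real resolver
   would typically inspect hardware capabilities here. */
static int add_generic (int x, int y) { return x + y; }

static int (*resolve_add (void)) (int, int)
{
  return add_generic;
}

int add (int, int) __attribute__ ((ifunc ("resolve_add")));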
== Issues ==
* 2 cards blocked on upstream review, 1 blocked on the Android team.
== Plan ==
* Should get binutils IFUNC patch committed Monday.
* Need to ping other patches again.
* More work on AArch64 IFUNC.
--
Will Newton
Toolchain Working Group, Linaro
Progress:
* office move
* VIRT-49:
** confirmed I can run KVM on the Arndale, started using it as a
test platform for the migration work
** I have most of the code for cp15 register migration written
** now in the debug phase; there is a case I hadn't considered that needs a
little thought
Plans:
* keep pushing on with VIRT-49
* book travel/hotel for Connect Dublin
* office move unpacking
-- PMM
Hi.
First.
#include <Im-doing-something-somewhat-odd.h>
I'm trying to use a current clang/llvm (current as in git checkout
from just the other day) to build an opencl kernel and then link that
with some code which has been compiled with gcc/g++.
When the clang .o is linked with the gcc/g++ output, I'm getting
/home/tgall/opencl/SNU/tmp2/cl_temp_1.tkl uses VFP register arguments,
/home/tgall/opencl/SNU/tmp2/cl_temp_1.o does not
The cl_temp_1.o was produced with clang; the cl_temp_1.tkl via gcc/g++.
Let's dive into details.
This is following in the footsteps of an open-source framework called
SNU which implements OpenCL. Within SNU they had a fairly old version
of clang+llvm which wouldn't even build on ARM, so step one has been to
figure out what SNU was doing with clang and replicate this using the
latest clang.
So given the following minimal test kernel placed into cl_temp_1.cl:
/* Header to make Clang compatible with OpenCL */
#define __global __attribute__((address_space(1)))
int get_global_id(int index);
/* Test kernel */
__kernel void test(__global float *in, __global float *out) {
int index = get_global_id(0);
out[index] = 3.14159f * in[index] + in[index];
}
then we follow these steps:
clang -mfloat-abi=hard -mfpu=neon -S -emit-llvm -x cl
-I/home/tgall/opencl/SNU/src/compiler/tools/clang/lib/Headers
-I/home/tgall/opencl/SNU/inc -include
/home/tgall/opencl/SNU/inc/comp/cl_kernel.h
/home/tgall/opencl/SNU/tmp2/cl_temp_1.cl -o
/home/tgall/opencl/SNU/tmp2/cl_temp_1.ll
with the resulting cl_temp_1.ll we:
llc /home/tgall/opencl/SNU/tmp2/cl_temp_1.ll
which results in cl_temp_1.s. Then:
clang -c -mfloat-abi=hard -mfpu=neon -o
/home/tgall/opencl/SNU/tmp2/cl_temp_1.o
/home/tgall/opencl/SNU/tmp2/cl_temp_1.s
so now in theory we should have a perfectly good cl_temp_1.o ready for linking.
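(As a sanity check, I believe the float ABI actually recorded in each
object can be inspected with readelf, e.g.:

readelf -A /home/tgall/opencl/SNU/tmp2/cl_temp_1.o

looking for the Tag_ABI_VFP_args entry, which as far as I understand is
what the linker's "uses VFP register arguments" complaint is keyed off.)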
But first let's get the bits ready that will be built by the
traditional GNU toolchain. We have:
gcc -shared -fPIC -O3 -o /home/tgall/opencl/SNU/tmp2/cl_temp_1_info.so
/home/tgall/opencl/SNU/tmp2/cl_temp_1_info.c
and
gcc -shared -fPIC -march=armv7-a -mtune=cortex-a9 -mfloat-abi=hard
-mfpu=neon -fsigned-char -DDEF_INCLUDE_ARM -I. -I
/home/tgall/opencl/SNU/src/runtime/hal/device/cpu -I
/home/tgall/opencl/SNU/src/runtime/include -I
/home/tgall/opencl/SNU/src/runtime/core -I
/home/tgall/opencl/SNU/src/runtime/core/device -I
/home/tgall/opencl/SNU/src/runtime/hal -I
/home/tgall/opencl/SNU/src/runtime/hal/device -DTARGET_MACH_CPU -O3 -c
/home/tgall/opencl/SNU/src/runtime/hal/device/cpu/hal.c -o
/home/tgall/opencl/SNU/tmp2/hal.o
And here we try to link it all together.
g++ -shared -fPIC -march=armv7-a -mtune=cortex-a9 -mfloat-abi=hard
-mfpu=neon -fsigned-char -DDEF_INCLUDE_ARM -O3 -o
/home/tgall/opencl/SNU/tmp2/cl_temp_1.tkl
/home/tgall/opencl/SNU/tmp2/hal.o
/home/tgall/opencl/SNU/tmp2/cl_temp_1.o
-L/home/tgall/opencl/SNU/lib/lnx_arm
-lsnusamsung_opencl_builtin_lnx_arm -lpthread -lm
and bang we're back to the error I first mentioned:
/usr/bin/ld: error: /home/tgall/opencl/SNU/tmp2/cl_temp_1.tkl uses VFP
register arguments, /home/tgall/opencl/SNU/tmp2/cl_temp_1.o does not
So the first obvious question: is -mfloat-abi=hard -mfpu=neon correct for clang?
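One guess on my part, untested: the float ABI may need to be passed to the
llc step as well, since I'm not sure the .ll file carries it - something
like:

llc -float-abi=hard -mattr=+neon /home/tgall/opencl/SNU/tmp2/cl_temp_1.ll

but I'd welcome confirmation that this is the right knob.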
tgall@miranda:~/opencl/SNU/tmp2$ clang --version
clang version 3.3
Target: armv7l-unknown-linux-gnueabihf
Thread model: posix
Thanks for any suggestions!
--
Regards,
Tom
"Where's the kaboom!? There was supposed to be an earth-shattering
kaboom!" Marvin Martian
Tech Lead, Graphics Working Group | Linaro.org │ Open source software
for ARM SoCs
w) tom.gall att linaro.org
h) tom_gall att mac.com
The Linaro Toolchain and Platform Working Groups are pleased to announce
the 2013.04 release of the Linaro Toolchain Binaries, a pre-built version
of Linaro GCC and Linaro GDB that runs on generic Linux or Windows and
targets the glibc Linaro Evaluation Build.
This will likely be the last binary release based on gcc 4.7 -- it also
introduces the first gcc 4.8 based build.
Uses include:
* Cross-compiling ARM applications from your laptop
* Remote debugging
* Building the Linux kernel for your board
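The kernel case, for example, is the usual cross-build invocation (assuming
the arm-linux-gnueabihf- binary prefix these toolchains install):

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage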
What's included:
* Linaro GCC 4.7 2013.04 and Linaro GCC 4.8 2013.04
* Linaro GDB 7.5 2012.12
* A statically linked gdbserver
* A system root
* Manuals under share/doc/
The system root contains the basic header files and libraries to link your
programs against.
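For example, a hello-world cross-compile against the supplied sysroot might
look like this (the sysroot path is a placeholder for wherever you unpacked
the tarball):

arm-linux-gnueabihf-gcc --sysroot=/path/to/unpacked/sysroot -o hello hello.c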
Interesting changes include:
* gcc is updated to 4.8 (in the 4.8 builds)
* rpc support in eglibc is re-enabled
* Version reported by ARMv7 and AArch64 cross toolchains has been unified
The Linux version is supported on Ubuntu 10.04.3 and 12.04, Debian 6.0.2,
Fedora 16, openSUSE 12.1, Red Hat Enterprise Linux Workstation 5.7 and
later, and should run on any Linux Standard Base 3.0 compatible
distribution. Please see the README about running on x86_64 hosts.
The Windows version is supported on Windows XP Pro SP3, Windows Vista
Business SP2, and Windows 7 Pro SP1.
The binaries and build scripts are available from:
https://launchpad.net/linaro-toolchain-binaries/trunk/2013.04
Need help? Ask a question on https://ask.linaro.org/
Already on Launchpad? Submit a bug at
https://bugs.launchpad.net/linaro-toolchain-binaries
On IRC? See us on #linaro on Freenode.
Other ways that you can contact us or get involved are listed at
https://wiki.linaro.org/GettingInvolved.
Summary:
* ARM internal training and R/M toolchain related work.
* Investigate Linaro toolchain 4.8 build issues.
Details:
1. Fixed several Linaro toolchain 4.8 binary build issues:
* The nls patch needs to be updated to add a (char *) cast when assigning
the result of xmalloc to a char * (illustrated below).
* gcc build pass-2 needs to build libbacktrace (patch taken from
crosstool-ng upstream).
* The gcc pass-2 build fails with "-j4"; seems to be a build-order issue. A
workaround is to remove "-j4".
* The Mingw32 configure fails due to missing ISL. A workaround is to add
"--without-isl".
Plan:
* Work with Bero to release 4.8.
* Switch to ISL/CLooG for future releases.
Best Regards!
-Zhenqiang
I couldn't find the arm-none-eabi- bare-metal version of Linaro GCC 4.8.
Could you provide a link to the prebuilt bare-metal toolchain binaries?
-Sugumar
Paul,
I've been having some thoughts about CBuild and Lava and the TCWG
integration of them both. I wish to share them and open them up for general
discussion.
The background to this has been the flakiness of the Pandas (due to heat),
the Arndale (due to board 'set-up' issues), and getting a batch of Calxeda
nodes working.
The following discussion refers to building and testing only, *not*
benchmarking.
If you look at http://cbuild.validation.linaro.org/helpers/scheduler you
will see a bunch of calxeda01_* nodes have been added to CBuild. After a
week of sorting them out they provide builds twice as fast as the Panda
boards. However, during the setup of the boards I came to the conclusion
that we set build slaves up incorrectly, and that there is a better way.
The issues I encountered were:
* The Calxedas run quantal - yet we want to build on precise.
* It's hard to get a machine running in hard-float to bootstrap a
soft-float compiler and vice-versa.
* My understanding of how the Lava integration works is that it runs the
cbuild install scripts each time, and so we can't necessarily reproduce a
build if the upstream packages have been changed.
Having thought about this a bit I came to the conclusion that the simple
solution is to use chroots (managed by schroot), and to change the
architecture a bit. In the old architecture everything is put into the main
file-system as one layer. The new architecture would split this into two:
1. Rootfs - Contains just enough to boot the system and knows how to
download an appropriate chroot and start it.
2. Chroots - these contain a fully set-up build system that can be used for
particular builds.
The rootfs can be machine type specific (as necessary), and for builds can
be a stock Linaro root filesystem. It will contain scripts to set up the
needed users, and then to download an appropriate chroot and run it.
The chroot will be set up for a particular type of build (soft-float vs
hard-float) and will be the same for all platforms. The advantage of this
is that I can then download a chroot to my ChromeBook and reproduce a build
locally in the same environment to diagnose issues.
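As a sketch of what I mean, each build flavour could be described by a
small schroot definition - the field values here are invented, but the
shape is standard schroot configuration:

[precise-armhf]
description=Ubuntu precise armhf build environment
type=directory
directory=/srv/chroots/precise-armhf
users=cbuild

Entering it would then just be 'schroot -c precise-armhf'.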
The Calxeda nodes in cbuild use this type of infrastructure - the rootfs is
running quantal (and I have no idea how it is configured - it is what Steve
supplied me with). Each node then runs two chroots (precise armel and
precise armhf) which take it in turns to ask the cbuild scheduler whether
there is a job available.
So my first question is does any of the above make sense?
Next steps as I see it are:
1. Paul/Dave - what is the status of getting the Pandaboards in the Lava
farm cooled? One advantage of the above architecture is we could use a stock
Pandaboard kernel & rootfs that has thermal limiting turned on for builds,
so that things don't fall over all the time.
2. Paul - how hard would it be to try and fire up a Calxeda node into
Lava? We can use one of the ones assigned to me. I don't need any fancy
multinode stuff that Michael Hudson-Doyle is working on - each node can be
considered a separate board. I feel guilty that I put the nodes into CBuild
without looking at Lava - but it was easier to do and got me going. I think
correcting that is important.
3. Generally - What's the state of the Arndale boards in Lava? Fathi has
got GCC building reliably, although I believe he is now facing networking
issues.
4. Paul - If Arndale boards are available in Lava - how much effort would
it be to make them available to CBuild?
One issue the above doesn't solve as far as I see it is being able to say to
Lava that we can do a build on any ARMv7-A CBuild-compatible board. I don't
generally care whether the build happens on an Arndale, Panda, or Calxeda
board - I want the result in the shortest possible time.
A final note on benchmarking. I think the above scheme could work for
benchmarking targets; all we need to do is build a kernel/rootfs that is
set up to provide a system that produces repeatable benchmarking results.
Comments welcome from all.
Thanks,
Matt
--
Matthew Gretton-Dann
Toolchain Working Group, Linaro