== Issues ==
* None
== Progress ==
* Enable multi-arch for Linaro aarch64-linux-gnu build.
- Configure gcc with --enable-multiarch.
- Backport eglibc patches for rtlddir.
- Configure eglibc to set libdir, slibdir and rtlddir.
* Follow up on the patch for pr57637.
* Reassociate X == CST1 || X == CST2 into ((X - CST1) & ~(CST2 - CST1)) == 0
when popcount (CST2 - CST1) == 1 (see the sketch after this list).
- Testing is ongoing.
* Rebase conditional compare changes to trunk.
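A minimal C sketch of the identity the reassociation relies on; CST1 and CST2
below are illustrative values whose difference is a single power of two, and
the code only demonstrates the equivalence, not the actual GCC patch:

#include <assert.h>
#include <stdint.h>

/* Illustrative constants: CST2 - CST1 == 0x08, a single set bit. */
#define CST1 0x10u
#define CST2 0x18u

static int original(uint32_t x)
{
    return x == CST1 || x == CST2;
}

static int reassociated(uint32_t x)
{
    /* (x - CST1) must be either 0 or the single bit (CST2 - CST1),
     * so masking with ~(CST2 - CST1) accepts exactly those two values. */
    return ((x - CST1) & ~(CST2 - CST1)) == 0;
}

int main(void)
{
    /* Exhaustive check over all 32-bit values. */
    for (uint64_t x = 0; x <= UINT32_MAX; x++)
        assert(original((uint32_t)x) == reassociated((uint32_t)x));
    return 0;
}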
== Leave ==
* July 17.
== Plan ==
* Send out the conditional compare changes for Linaro internal review.
== Progress ==
Short week (3 days)
* Divmod
- Finished most of the work, patch upstream (r186390)
- Trying to lower remainder as divmod without sacrificing the div+mod
merge during legalization
* Buildbots
- One panda fails even at 920MHz, replaced with a new one
- Using decent power supplies now, should work a whole week
- Lab config is messed up, need to start from scratch
- Failed at 920MHz with decent power supply
- Giving up on Pandas as buildbots...
* Background
- Not many code reviews in the last two weeks, as Connect and the Pandas took
most of my spare time
- Test-suite's Lencod bug fixed upstream, no more need for dirty hacks on
stack offset calculation
== Plan ==
* Continue on divmod lowering, finding a way to merge Custom with Expand
lowering so as not to break the div+rem merge during legalization while still
lowering divmod as Custom. 64-bit types still use them.
* Study cross-compilation issues in Clang and LLVM, prepare a document on
how it works (or not), and what paths we can take to make it better in the
short/medium term.
* Follow up on Phoronix results and CBuild2 benchmarks, time allowing
== Progress ==
* Investigating glibc stack guard support
* Developed initial patches
* Raised ABI issue
* More malloc work
* Integrated tcmalloc into benchmark framework
* Improved benchmark repeatability
* Added Python benchmark script and graphing script
* Respun and committed gdb TLS testsuite patch (my gdb patch queue is now empty)
* Committed two of Omair's gdb patches
== Issues ==
* None.
== Plan ==
* Produce some malloc benchmark graphs
* Progress the stack guard work
* Start writing some malloc code
--
Will Newton
Toolchain Working Group, Linaro
Progress:
* attended Linaro Connect (wk 28)
* sent pull requests for target-arm/arm-devs queues
* resent virtio-mmio patchset; this is now ready for commit
and I will send a pullreq with it next week
* finished off a cleanup of linux-user to remove the support
for configuring targets without threading support
* resorted todo list against cards
* started on getting v7 mach-virt into shape for upstreaming
-- PMM
Hi All,
I am new to Linaro, so I would like to know more about it.
Is Linaro an alternative to the Yocto Project?
Does the toolchain support ARMv5TE-based SoCs?
Ratheendran
Hi Folks,
I'm running two buildbots here at home and am getting consistent failures
from the Pandas because of overheating. I've set up a monitor that will
tell me the current CPU temperature and the allowed maximum, and when the
bot passes 90%, it shuts itself off.
The problem is that I'm running with heat-sinks and the boards are on top
of three fans, so there really isn't much more I can do to solve this
problem.
I personally think this is a hardware problem, since everything is on the
same die: CPU, GPU and RAM, and the physical dimensions of the chip are
quite small. I remember when Intel started overheating (around the 486DX66)
and the die was huge (more heat dissipation), plus RAM and GPU were separate,
and it still needed a hefty heat-sink.
It's true that gates are far smaller today, but it's not true that a dual
core 1.3GHz + GPU + RAM will produce less heat on a small die than a 66MHz
CPU on a huge die, so why anyone thinks it's a good idea to release a 1+GHz
chip without *any* form of heat dissipation is beyond my comprehension.
Manufacturers have only got away with it so far because people rarely use
100% of the CPU power for extended periods of time, since ARM devices end up
as set-top boxes, mobile phones and tablets. However, even those devices
will heat up when playing 2-hour films or games, and they do have some form
of heat sink.
We, at the toolchain group, make things worse by using 100% CPU, 24/7,
something that Panda or Arndale boards were not designed to do. However,
with ARM moving into the server space, their designs will have to be
re-thought, and what better place than Linaro for making sure we get it
right?
For the time being, I believe we *must* have air conditioning in the Lab
all the time, and we *must* have heat-sinks on every board, and we *must*
monitor the CPU temperature of the boards, at least until we're comfortable
that they're not failing all the time.
Can we make a temperature monitor (like the one attached) a default feature
on Linaro Ubuntu distributions? We could dump that info to the syslog/dmesg
whenever it crosses the (say) 75% threshold, and report more often when it
crosses 95%, possibly dumping the process(es) that are consuming more
CPU at the time, to enable post-mortem debugging.
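By way of illustration, here is a minimal sketch of such a monitor in C; the
sysfs paths, the 10-second poll interval and the 75% ratio are assumptions
made for the example, not a description of any existing Linaro tool:

#include <stdio.h>
#include <syslog.h>
#include <unistd.h>

/* Typical sysfs thermal zone files (millidegrees Celsius); the zone and
 * trip point names would need adjusting for the board in question. */
#define ZONE_TEMP "/sys/class/thermal/thermal_zone0/temp"
#define ZONE_TRIP "/sys/class/thermal/thermal_zone0/trip_point_0_temp"
#define WARN_RATIO 0.75   /* log once we pass 75% of the trip temperature */

static long read_millideg(const char *path)
{
    FILE *f = fopen(path, "r");
    long val = -1;

    if (f) {
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    openlog("thermal-watch", LOG_PID, LOG_DAEMON);

    for (;;) {
        long temp = read_millideg(ZONE_TEMP);
        long trip = read_millideg(ZONE_TRIP);

        /* Warn via syslog when the current temperature exceeds the
         * chosen fraction of the trip point. */
        if (temp > 0 && trip > 0 && temp > trip * WARN_RATIO)
            syslog(LOG_WARNING, "CPU at %ld.%03ld C (%ld%% of trip point)",
                   temp / 1000, temp % 1000, temp * 100 / trip);

        sleep(10);
    }
}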
cheers,
--renato
As a side note, the quad-A9 ODroid does ship with a massive heat-sink,
which also serves as a fancy case. Quite clever, really.
Hello,
I use gcc-linaro-aarch64-linux-gnu-4.8 to compile my C code with
thread-local variables.
Here is an example of my C code:
__thread u32 threadedVar;

void test(void)
{
    threadedVar = 0xDEAD;
}
gcc produces the following assembly to access my threaded variable:
threadedVar = 0xDEAD;
72b0: d00000c0 adrp x0, 21000
72b4: f945ac00 ldr x0, [x0,#2904]
72b8: d503201f nop
72bc: d503201f nop
72c0: d53bd041 mrs x1, tpidr_el0
72c4: 529bd5a2 movz w2, #0xdead
72c8: b8206822 str w2, [x1,x0]
This assembly fits dynamically linked code, but in my case I have a
statically linked application that does not load any additional modules.
Since I have exactly one TLS block containing all thread-local variables,
gcc should be able to calculate the offset at link time.
Can I make gcc produce the following assembly?
threadedVar = 0xDEAD;
72c0: d53bd041 mrs x1, tpidr_el0
72c4: 529bd5a2 movz w2, #0xdead
72c8: b8206822 str w2, [x1,#offset_to_threadedVar]
Thank you,
Vitali
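For reference, a sketch of one way such direct, thread-pointer-relative
access is usually requested from gcc: the local-exec TLS model, selected per
variable with the tls_model attribute or globally with -ftls-model=local-exec.
Whether the Linaro 4.8 aarch64 compiler then emits exactly the sequence shown
above is not verified here:

/* Sketch: ask for the local-exec TLS model so the compiler and static
 * linker can resolve the offset from the thread pointer directly.
 * The same effect can be requested globally with -ftls-model=local-exec. */
typedef unsigned int u32;

__thread u32 threadedVar __attribute__((tls_model("local-exec")));

void test(void)
{
    threadedVar = 0xDEAD;
}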
The Linaro Toolchain Working Group is pleased to announce the release
of both Linaro GCC 4.8 and Linaro GCC 4.7.
Linaro GCC 4.8-2013.07 is the fourth release in the 4.8 series. Based
off the latest GCC 4.8.0+svn200355 release, it includes performance
improvements and bug fixes.
Interesting changes include:
* Updates to GCC 4.8.0+svn200355
* Address Sanitizer support for ARM.
* New -mrestrict-it option support.
* Backport of support for further AArch64 instructions.
* Backport of support for further ARMv8 AArch32 instructions.
* Reverted recent changes to shrink-wrapping and tail-calls.
Linaro GCC 4.7-2013.07 is the sixteenth release in the 4.7
series. Based off the latest GCC 4.7.3+svn200408 release, this is the
third release after entering maintenance.
Interesting changes include:
* Updates to GCC 4.7.3+svn200408
The source tarballs are available from:
https://launchpad.net/gcc-linaro/+milestone/4.7-2013.07
https://launchpad.net/gcc-linaro/+milestone/4.8-2013.07
Downloads are available from the Linaro GCC page on Launchpad:
https://launchpad.net/gcc-linaro
Mailing list: http://lists.linaro.org/mailman/listinfo/linaro-toolchain
Bugs: https://bugs.launchpad.net/gcc-linaro/
Questions? http://ask.linaro.org/
Interested in commercial support? inquire at support(a)linaro.org