Linaro TCWG,
In newer toolchains that are built with ABE, libc.a contains a lot of debugging information, including the paths to the source files on the build machine. I think that's because ABE builds the libraries with -g and never strips out the debug information. I verified this with both the 4.9-2015.05 and 5.2-2015.11 binary releases with the command:
arm-linux-gnueabihf-objdump -g libc.a | grep '\.c'
In older toolchains that were built with crosstool-ng, libc.a did not contain the paths to the source files. I guess crosstool-ng either didn't build the libraries with -g, or it stripped out the debug information later. I verified this with the 4.9-2014.09 binary release.
I'm not sure whether this change was intentional, or just an oversight during the switchover to ABE. Regardless, it makes the libraries a lot bigger, and it potentially affects the end user during debugging.
The source files of libc, etc. are not typically included with the binary releases. So, when a user of an ABE-built binary release chooses to step into an extern function of libc, gdb will search for the source file. It likely won't be able to access the source file along the same path that worked for the build machine, so it will search its list of source directories. Ultimately, unless the user has downloaded the source files, gdb will likely display a message like "printf.c: No such file or directory".
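If a user does have the glibc sources locally, gdb can be redirected to
them. A minimal sketch (both paths here are invented for illustration;
the real build path can be read from the objdump output above):

  (gdb) set substitute-path /build/glibc-linaro /home/user/src/glibc

set substitute-path is a stock gdb command that rewrites the
compile-time source paths recorded in the debug info.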
In contrast, when a user of a crosstool-ng-built toolchain tries to step into an extern function of libc, gdb will be unaware of the name of the source file. As a result, the user will not get a message about a missing file.
So, should the toolchains' libraries really contain debug information? I think it could be useful for a theoretical multilib folder that covers a -g option. On the other hand, for the usual release builds, isn't the debug information a waste of space and a source of confusion?
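For what it's worth, if we decide the debug info shouldn't ship, it can
be dropped from an already-built archive with binutils, e.g.:

  arm-linux-gnueabihf-objcopy --strip-debug libc.a

objcopy operates on each archive member in place; strip --strip-debug
would do the same.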
Thanks,
Fred Peterson
Engineer - Developer Tools
NXP Semiconductors
Hello All,
I found one difference between gcc-linaro-5.1 and gcc-linaro-4.8 while
running the lmbench benchmark on our LS1043 (Cortex-A53).
With gcc-linaro-4.8, gcc generates Advanced SIMD instructions (such as
ld1), but gcc-linaro-5.1 does not. This causes a big performance gap
between gcc-4.8 and gcc-5.1 in the lmbench memory bandwidth "fcp" test
(the bw_mem program).
My compiler flags are "-O3 -mcpu=cortex-a53". I also tried several
other combinations ("-O3 -mcpu=cortex-a53+fp+simd", "-O2
-ftree-vectorize -mcpu=cortex-a53", "-O3 -ftree-vectorize
-mcpu=cortex-a53"); none of them helps.
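As a reduced check of whether the vectorizer triggers at all, I would
try something like the following stand-in loop (not the actual bw_mem
source, and the aarch64-linux-gnu triplet is assumed; adjust for your
toolchain):

  $ cat > loop.c <<'EOF'
  /* Hypothetical stand-in for the bw_mem inner loop. */
  void add1(double *restrict dst, const double *restrict src, long n)
  {
      for (long i = 0; i < n; i++)
          dst[i] = src[i] + 1.0;
  }
  EOF
  $ aarch64-linux-gnu-gcc -O3 -mcpu=cortex-a53 -fopt-info-vec -S loop.c
  $ grep -E 'ld1|v[0-9]+\.2d' loop.s

-fopt-info-vec (available since GCC 4.9) prints a note when a loop is
vectorized, and the grep looks for Advanced SIMD instructions in the
generated assembly.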
The gcc-5.1 toolchain was downloaded from the following link:
https://snapshots.linaro.org/openembedded/sources/gcc-linaro-5.1-snapshot-2…
Can I have your comments on this?
Thanks
Ron
Hello toolchain gurus,
In the course of Linaro's kernel tinification project, the ability to
compile the Linux kernel using LTO is a frequent requirement. However,
the kernel makes heavy use of 'ld -r' on .o files produced by an LTO
build of .c files as well as on .o files produced from pure assembly
code.
This mix of LTO and non-LTO object files is not supported by upstream
binutils unless a patch from H.J. Lu is applied. That patch has been
available since 2013 and was last refreshed in his 2.25.51.0.4 branch
last September. It is accessible here:
https://git.linaro.org/toolchain/binutils-gdb.git/commit/6da5456971
I've attached a very simple test case demonstrating the problem. With
the binutils-lto-mixed.patch applied, this test case compiles to a
working executable. Otherwise compilation fails at the 'ld -r' step.
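For reference, the shape of the test case is roughly as follows (file
names and the x86-64 assembly are illustrative, not the attached files
themselves):

  $ cat > main.c <<'EOF'
  extern int from_asm(void);
  int main(void) { return from_asm(); }
  EOF
  $ cat > part.S <<'EOF'
      .text
      .globl from_asm
  from_asm:
      movl $0, %eax
      ret
  EOF
  $ gcc -flto -c main.c                  # LTO object
  $ gcc -c part.S                        # plain assembly object
  $ ld -r -o combined.o main.o part.o    # fails here without the patch
  $ gcc -flto -o test combined.o         # final link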
One question and one request:
- What, if anything, has prevented this patch from being merged in the
master branch upstream?
- In the meantime, could we include this patch in the Linaro binutils
package and releases?
Having this available in our toolchain releases would greatly simplify
the LTO-related work on the kernel. It has been included in all of
H.J. Lu's binutils releases since 2013 and has therefore had
significant exposure already.
Thanks.
Nicolas
== This Week ==
* TCWG-72 (2/10)
- Trying to address segfault with DImode
* LTO ICE with 483.xalancbmk (6/10)
- fighting with benchmark scripts
- segfaults with an ICE on -flto-partitions=none
on symbol _ZThn8_N10xalanc_1_819XercesParserLiaison11resetErrorsEv
demangler.com says the symbol demangles to:
non-virtual thunk to xalanc_1_8::XercesParserLiaison::resetErrors()
(see the c++filt check after this list)
./-lm.res (resolution map file) says PREVAILING_DEF_IRONLY
- Trace: http://pastebin.com/vxzmQFHg
- Possible fix: http://pastebin.com/rwq8z1N1
Patch passes test and bootstrap; not sure whether that's the correct approach, however.
* Target hook conversion (2/10)
- Unconditionalizing ASM_OUTPUT_DEF on SET_ASM_OP and converting
both to hooks.
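The demangling above can be reproduced locally with binutils' c++filt:

  $ echo _ZThn8_N10xalanc_1_819XercesParserLiaison11resetErrorsEv | c++filt
  non-virtual thunk to xalanc_1_8::XercesParserLiaison::resetErrors()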
NB Last _6_ working days.
Centralised benchmark source - TCWG-354 [6/10]
* Abe integration
* Wrote up some notes on collaborate
* Enabled clang build, which flushed out some bugs, now fixed
* LAVA reporting script
Port to microinstance - TCWG-432 [1/10]
* Investigating issues with Juno boot
** One turns out to be user error
** The other is a failed health check, passed on to the admins
Backport benchmarking - TCWG-352 [2/10]
* Difficulties with passing information from matrix children to parent
** Came up with an ugly workaround, may have thought of a better one
Misc [3/10]
=Plan=
Holidays until 4th January
# Progress #
* TCWG-156, GDB Test-Suite Parity Between Aarch64 and x86_64. Done. [4/10]
After the two patches are committed, the test results on aarch64 and
x86_64 show no differences, apart from some tests that were
unnecessarily written for x86_64 only.
* TCWG-424, timeout when interrupting the inferior in remote debugging. [3/10]
The failure is caused by two different problems. Two patches are ready
and are being regression-tested.
* TCWG-171, Enable gdb core file tests when testing remotely. [1/10]
Wrote down my conclusion, as it can't be fixed.
* Upstream review, [2/10]
** Reviewed a patch about handling Ada HVA arrays on aarch64 in GDB.
** Discussed GDB's target description for Cortex-M devices with OpenOCD.
# Plan #
* Post patches upstream for TCWG-424.
* Patch review.
* On holiday since Wed.
--
Yao
Hi All,
I am interested in understanding Linaro LLVM activity.
I have already gone through
https://wiki.linaro.org/WorkingGroups/ToolChain/LLVM.
Could you please guide me on the questions below.
1. Which LLVM & Clang version is Linaro actively working on now?
2. Where can I find the latest "linaro-llvm" source code & binary? I could
not find any official git repo for "linaro-llvm" at https://git.linaro.org/.
3. Could you please explain the Linaro LLVM working model? How similar
or different is it compared with the Linaro GCC engagement?
4. Certain links (e.g. Roadmap) at
https://wiki.linaro.org/WorkingGroups/ToolChain/LLVM ask for login
credentials. Any comment on how to obtain access?
Thanks in advance for your time.
--
with regards,
Virendra Kumar Pathak
* 1 day off (Friday) (2/10)
== Progress ==
* Validation: (2/10)
- improvements to the comparison scripts
- updated ABE stable and staging branches
* GCC: (3/10)
- TCWG-484: sent a new version of the patch to address problems with
the testsuite and target attributes.
- TCWG-485/PR68620: WIP, trying to understand the regressions
introduced by my patch
* Misc (conf calls, meetings, emails, ...) (3/10)
== Next ==
Holidays until next year
Hi All,
As per the release notes, linaro_binutils-2_25-2015_01-2_release should be
present at http://git.linaro.org/toolchain/binutils-gdb.git.
But I am not able to see any "linaro_binutils" tags on
https://git.linaro.org/toolchain/binutils-gdb.git/tags.
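In case it helps others reproduce this, the remote tags can also be
listed without cloning:

  $ git ls-remote --tags https://git.linaro.org/toolchain/binutils-gdb.git |
      grep linaro

git ls-remote is a stock git command; it prints every tag the server
advertises.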
Am I making a mistake, or has the tag been renamed on git to something
else (the same as the FSF binutils tags)?
Apologies for asking such a trivial question.
Thanks for your time.
--
with regards,
Virendra Kumar Pathak