Summary:
* Rebase and test the shrink-wrap patches.
* Learn how branch cost impacts code generation.
Details:
1. Rebase and test the shrink-wrap patches.
* For pretend arguments, it is hard to generate correct dwarf info in some cases.
* Add dwarf info for ldrd_pop. Testing is ongoing.
* Expose an interface from regcprop and do copy propagation for the
entry block. Benchmark logs show many more functions can be
shrink-wrapped.
2. Read code and enhance ifcvt.c.
* For IF-THEN-ELSE, if the last insn of then_bb or else_bb is
ANY_RETURN_P, we can save one JUMP. In this case, we keep the maximum at
MAX_CONDITIONAL_EXECUTE rather than doubling it ("max *= 2"). A patch has
been sent out for review. (See the small C sketch after this list.)
* Take the branch probability into account. Test is ongoing.
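Below is a minimal C sketch (my illustration, not part of the report) of
the kind of IF-THEN-ELSE the change targets: each arm ends in a return
insn, so on a conditional-execution target such as ARM the jump over the
other arm can be dropped.

  int sign (int x)
  {
    if (x < 0)
      return -1;   /* then_bb: last insn satisfies ANY_RETURN_P ... */
    else
      return 1;    /* ... and so does the last insn of else_bb.  */
  }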
3. R/M toolchain related work.
Plan:
* Follow up on the shrink-wrap dwarf info issue.
* Investigate benchmarks which are impacted by different branch costs.
Planned leave:
* Feb. 9 - 15: Chinese Spring Festival.
Best Regards!
-Zhenqiang
== Progress ==
* Short week again (leading compilation courses)
* Merge request review
- finished 4.6 and 4.7 requests.
* Boehm GC AArch64 support
- Back on this topic
== Next ==
* Continue Boehm GC activity
== This Week ==
* Livermore Loops
- Test was badly adapted, failures were due to undefined behaviour
- Removed from test-suite until a proper adaptation can be done
- LTO and Static Analysis issues raised
- http://llvm.org/bugs/show_bug.cgi?id=14851
- http://llvm.org/bugs/show_bug.cgi?id=14852
* LLVM Builds
- Builds fine on Chromebook, same check-all failures as other ARM targets
- test-suite fails, haven't had time to properly investigate
- Getting a pandaboard to act as a buildbot
- Creating a LAVA job to run often and on-demand internally
* AArch64 in LLVM
- reviewing lots of patches from ARM (Tim Northover)
- full back-end just sent, will review over the weekend
* Loop Vectorize
- Some discussions with Tao Wang about cost models
- Got some ideas on what the best changes for LLVM's cost model would be
- Not much on this front
* EuroLLVM 2013
- Trying to define the level of sponsorship Linaro can provide
- CFP will go out next week, committee created, conference confirmed
* Commits
http://llvm.org/viewvc/llvm-project?view=rev&revision=171642
http://llvm.org/viewvc/llvm-project?view=rev&revision=171859
== Next Week ==
* Get the LAVA job running
- need account at people.linaro.org, RT created
* Get at least one buildbot in sync with LLVM's lab
* Get some traction on the cost model
* Cambridge LLVM Social, liaise with ARM
== Future ==
* Try to draft an LLVM story for Linaro (and understand why I'm here in the
first place) ;)
* Have builds with vectorization turned on
== Blueprints ==
gcc-investigate-lra-for-arm
== Progress ==
* Built "lra" gcc branch of gcc for x86
* Collected and compared SPEC benchmark results with and without LRA
enabled
* Bootstrapped ARM toolchain with last reported working revision from
lra branch
* Tracked down and resolved ICE
* Bootstrapped ARM toolchain with head revision from lra branch
* Tracked down and resolved another ICE
* Verified two patches (from above ICEs) have no regressions on trunk
* Began investigation into target hooks for LRA for ARM to improve
performance
* Admin
* Connect registration and trip preparations
== Next week ==
* Collect benchmark results from SPEC for LRA on ARM
* Complete target hooks and benchmark again
* Review roster
== Progress ==
* 64-bit ops in Neon: pinged patch proposal.
* disable peeling/vectorizer cost model: initial benchmarking done with
cost-model on (now default). Received some results with cost model
off, waiting for spec2k.
* started looking at the smin-umin idiom patch from Ramana. Rebased and
launched a build to do some benchmarking.
* restarted working on local board setup for benchmarking
* discussed bug reports on the ARM-Neon intrinsics testsuite
== Next ==
* handle 64-bit bitops in Neon: feedback from upstream, if any
* analyze results of benchmarking with vectorizer cost model
* analyze results of benchmarking with smin-umin idiom patch
* continue board setup/update
== Blueprints ==
                            Initial       Current       Actual
fix-gcc-multiarch-testing   31 Dec 2012   31 Jan 2013
== Progress ==
* Infrastructure
* Investigations of why Cortex-A9 HF boards are failing
* Admin
* Booked tickets to Connect
* 'Onboarding' prep for new starters and assignees
* Cortex Strings
* Applied patches
== Next week ==
* Prepare Cortex Strings release
* Ensure GCC backports are up to date.
* Release week.
* Catch up on outstanding cards.
== Future ==
* Run HOT/COLD partitioning benchmarks
* Analyse ARM results
* On x86_64, see what actual benefit we could get
* fix-gcc-multiarch-testing
* Come up with strawman proposal for updating testsuite to handle
testing with varying command-line options.
--
Matthew Gretton-Dann
Toolchain Working Group, Linaro
Hi,
Could you please help with how to generate correct epilogue dwarf info?
Without correct dwarf info, when shrink-wrapping is enabled, we tend to
hit an ICE in dwarf2cfi.c, function maybe_record_trace_start:
/* We ought to have the same state incoming to a given trace no
matter how we arrive at the trace. Anything else means we've
got some kind of optimization error. */
gcc_checking_assert (cfi_row_equal_p (cur_row, ti->beg_row));
Issues:
1) pretend_args
The attached pretend_arg.c shows an example of the pretend_args dwarf info:
push {r2, r3}
.cfi_def_cfa_offset 8
.cfi_offset 2, -8
.cfi_offset 3, -4
use r3
push {r4, r5, lr}
...
pop {r4, r5, lr}
add sp, sp, #8
//No instruction here to restore r2 and r3
Can we RESTORE r2 and r3?
* If we add notes to RESTORE r2 and r3, it might give GDB wrong info,
since no instruction actually restores them.
* If we do not RESTORE them, the reg_save dwarf info will not be
cleared, and the dwarf check will fail when the function is
shrink-wrapped.
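Here is a hedged sketch (my illustration, not from the patches) of what
attaching the RESTORE notes to the "add sp, sp, #8" that discards the
pretend-args area could look like in the ARM epilogue code; whether such
notes are acceptable without a real restoring instruction is exactly the
open question above:

  rtx insn = emit_insn (gen_addsi3 (stack_pointer_rtx, stack_pointer_rtx,
                                    GEN_INT (8)));
  /* Notes only: no instruction actually reloads r2/r3.  */
  add_reg_note (insn, REG_CFA_RESTORE, gen_rtx_REG (SImode, 2));
  add_reg_note (insn, REG_CFA_RESTORE, gen_rtx_REG (SImode, 3));
  RTX_FRAME_RELATED_P (insn) = 1;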
2) frame_pointer_needed
In the prologue, we set fp like:
fp = sp + INT
After this instruction, cfi_def_cfa_register is set to fp.
In the epilogue, we have:
fp += INT
sp = fp
Can we reset cfi_def_cfa_register back to sp?
* If we set it back to sp, how do we handle it in arm_unwind_emit_set,
which assumes sp cannot be set from another register? (A sketch of this
option follows the RTL example below.)
/* A stack increment. */
if (GET_CODE (e1) != PLUS
|| !REG_P (XEXP (e1, 0))
|| REGNO (XEXP (e1, 0)) != SP_REGNUM
|| !CONST_INT_P (XEXP (e1, 1)))
abort ();
* If we do not set it back, then to get correct dwarf info for the POP after
"sp = fp", we have to add a "sp = fp + INT" note for the dwarf info while
we have "sp = sp + INT" in the insn. Here is the workaround POP RTL
example for the attached alloca.c:
(insn/f 62 61 66 3 (parallel [
(set/f (reg/f:SI 13 sp) // sp = sp + 8
(plus:SI (reg/f:SI 13 sp)
(const_int 8 [0x8])))
(set/f (reg:SI 3 r3)
(mem/c:SI (reg/f:SI 13 sp) [3 S4 A32]))
(set/f (reg/f:SI 7 r7)
(mem/c:SI (plus:SI (reg/f:SI 13 sp)
(const_int 4 [0x4])) [3 S4 A32]))
]) alloca.c:8 329 {*load_multiple_with_writeback}
(expr_list:REG_UNUSED (reg:SI 3 r3)
(expr_list:REG_CFA_ADJUST_CFA (set (reg/f:SI 13 sp)
(plus:SI (reg/f:SI 7 r7) // sp = fp + 8
(const_int 8 [0x8])))
(expr_list:REG_CFA_RESTORE (reg/f:SI 7 r7)
(expr_list:REG_CFA_RESTORE (reg:SI 3 r3)
(nil))))))
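For the first option (resetting the CFA to sp), a hedged sketch of marking
the "sp = fp" move itself as frame related and redefining the CFA in terms
of sp might look like the following; the offset of 8 is only an assumption
for the example:

  rtx insn = emit_insn (gen_movsi (stack_pointer_rtx, hard_frame_pointer_rtx));
  /* Redefine the CFA as sp + 8 so later restores can be described
     relative to sp again.  */
  add_reg_note (insn, REG_CFA_DEF_CFA,
                gen_rtx_PLUS (SImode, stack_pointer_rtx, GEN_INT (8)));
  RTX_FRAME_RELATED_P (insn) = 1;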
3) No idea for:
if (crtl->calls_eh_return)
emit_insn (gen_addsi3 (stack_pointer_rtx,
stack_pointer_rtx,
gen_rtx_REG (SImode, ARM_EH_STACKADJ_REGNUM)));
Currently I have no shrink-wrapped test case in which
crtl->calls_eh_return is true.
Thanks!
-Zhenqiang
Hi all,
I'm helping with the loop vectorization in LLVM, and for that we first need
to build the instruction cost model so we can decide whether vectorization
is worthwhile or not.
I was looking at some papers, but most of them are concerned with energy
consumption, which is not the issue (at least not for now). The "cost"
model should take the point of view of latency, stalls and pipeline costs.
I know there are sporadic comments on this in the ARM ARM, but it would be
good to have a definitive resource to get the data from (and hopefully it's
a public document). Does anyone know of a good place to start looking for
that?
Even if the document is private, we can certainly abstract the information
enough for it to make it into the LLVM code base.
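To make the shape of the model concrete, here is a toy, self-contained C
sketch; all opcode names, latencies and the 4-lane width below are invented
for illustration and not taken from any ARM document. It shows the kind of
decision the cost model has to support: sum per-instruction latency costs
and vectorize only when one vector iteration is cheaper than
vectorization-factor scalar iterations.

  #include <stdio.h>

  enum op { OP_LOAD, OP_MUL, OP_ADD, OP_STORE, OP_COUNT };

  /* Assumed per-instruction latencies; purely illustrative numbers.  */
  static const int scalar_cost[OP_COUNT] = { 3, 4, 1, 1 };
  static const int vector_cost[OP_COUNT] = { 4, 5, 1, 2 };

  int main (void)
  {
    /* One loop body: two loads, a multiply, an add, a store.  */
    const enum op body[] = { OP_LOAD, OP_LOAD, OP_MUL, OP_ADD, OP_STORE };
    const int vf = 4;  /* assumed vectorization factor: 4 x 32-bit lanes */
    int s = 0, v = 0;
    for (unsigned i = 0; i < sizeof body / sizeof body[0]; i++)
      {
        s += scalar_cost[body[i]];
        v += vector_cost[body[i]];
      }
    /* Vectorize only if one vector iteration beats vf scalar iterations.  */
    printf ("scalar x%d = %d, vector = %d -> %s\n",
            vf, s * vf, v, v < s * vf ? "vectorize" : "keep scalar");
    return 0;
  }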
cheers,
--renato
Hi All,
I am using linaro-precise-ubuntu-desktop-20120626 on my ZYNQ ZC702 board,
which has a dual-core ARM Cortex-A9 processor.
I am trying to compile the Point Cloud Library and its dependent libraries,
like FLANN, VTK, EIGEN etc., which are basically C++ libraries.
The compiler crashes with the following error message and I am unable to
figure out where the problem is.
linaro@linaro-ubuntu-desktop:~/flann/flann-1.8.3-src/build$ make install
[ 33%] Building CXX object src/cpp/CMakeFiles/flann_s.dir/flann/flann.cpp.o
In file included from
/home/linaro/flann/flann-1.8.3-src/src/cpp/flann/algorithms/kmeans_index.h:51:0,
from
/home/linaro/flann/flann-1.8.3-src/src/cpp/flann/algorithms/all_indices.h:38,
from /home/linaro/flann/flann-1.8.3-src/src/cpp/flann/flann.hpp:45,
from /home/linaro/flann/flann-1.8.3-src/src/cpp/flann/flann.h:466,
from /home/linaro/flann/flann-1.8.3-src/src/cpp/flann/flann.cpp:31:
/home/linaro/flann/flann-1.8.3-src/src/cpp/flann/util/logger.h:73:9: note:
the mangling of ‘va_list’ has changed in GCC 4.4
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See file:///usr/share/doc/gcc-4.6/README.Bugs for instructions.
make[2]: *** [src/cpp/CMakeFiles/flann_s.dir/flann/flann.cpp.o] Error 4
make[1]: *** [src/cpp/CMakeFiles/flann_s.dir/all] Error 2
make: *** [all] Error 2
linaro@linaro-ubuntu-desktop:~/flann/flann-1.8.3-src/build$
Let me know if someone has faced a similar issue or has a solution for this.
--
Anup Kini
Systems Engineer
Synapticon | Cyber-Physical System Solutions
Direct: +49 7335 / 186 999 17
Fax: +49 7335 / 186 999 1
synapticon.com <http://www.synapticon.com/> | @synapticon_co <https://twitter.com/#!/synapticon_co>
Synapticon GmbH | Hohlbachweg 2 | 73344 Gruibingen, DE
Secretary +49 7335 / 186 999 0 | General Manager: Nikolai Ensslen
Registry Court Ulm HRB 725114 | USt-ID DE271647127