Hi there. Currently you can't use NEON instructions in inline
assembly if the compiler is given a VFP-only -mfpu= setting, such as
Ubuntu's default -mfpu=vfpv3-d16. Trying code like this:
int main()
{
    asm("veor d1, d2, d3");
    return 0;
}
gives an error message like:
test.s: Assembler messages:
test.s:29: Error: selected processor does not support Thumb mode `veor d1,d2,d3'
The problem is that -mfpu=vfpv3-d16 has two jobs: it tells the
compiler what instructions to use, and also tells the assembler what
instructions are valid. We might want the compiler to use the VFP for
compatibility or power reasons, but still be able to use NEON
instructions in inline assembler without passing extra flags.
Inserting ".fpu neon" to the start of the inline assembly fixes the
problem. Is this valid? Are assembly files with multiple .fpu
statements allowed? Passing '-Wa,-mfpu=neon' to GCC doesn't work as
gas seems to ignore the second -mfpu.
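As a concrete (and so far unverified) sketch of that workaround, with
the directive prepended to the statement from above:

int main()
{
    /* Assumption: gas applies a mid-file ".fpu neon" directive to the
       instructions that follow it, even though the compiler was run
       with -mfpu=vfpv3-d16.  Whether multiple .fpu directives per file
       are valid is exactly the open question. */
    asm(".fpu neon\n\t"
        "veor d1, d2, d3");
    return 0;
}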
What's the best way to handle this? Some options are:
* Add '.fpu neon' directives to the start of any inline assembly
* Separate out the features, so you can specify the capabilities with
one option and restrict the compiler to a subset with another.
Something like '-mfpu=neon -mfpu-tune=vfpv3-d16'
* Relax the assembler so that any instructions are accepted. We'd
lose some checking of GCC's output though.
-- Michael
- Continued looking into NEON special loads and stores.
- Benchmarks: concentrated on EEMBC Telecom:
- autcor gets vectorized
- viterbi: besides strided data accesses, it needs conditional stores
to be sunk to allow if-conversion and make the main loop vectorizable
(see the sketch after this list). Since the potential here is 4x, I
think it's worthwhile to work on this.
- conven and fbital also have control-flow issues, but much more
complicated ones than viterbi's
- fft has a problem with the loop count; I'd like to investigate
this a bit more
- diffmeasure doesn't seem to have vectorization potential
- Fixed GCC PR 46663 on trunk, testing the fix for 4.3, 4.4, 4.5.
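To illustrate the viterbi point above, a hypothetical sketch (not the
actual benchmark code) of what "sinking a conditional store" means:

/* Before: the store happens on only one path, so the loop cannot be
   if-converted. */
void before (int *out, const int *a, const int *b, int n)
{
  int i;
  for (i = 0; i < n; i++)
    if (a[i] > b[i])
      out[i] = a[i] - b[i];
}

/* After sinking the store: every iteration writes out[i] and the
   branch becomes a select (e.g. NEON vbsl), leaving a branch-free,
   vectorizable body.  This assumes out[i] may be read and rewritten
   unconditionally. */
void after (int *out, const int *a, const int *b, int n)
{
  int i;
  for (i = 0; i < n; i++)
    out[i] = (a[i] > b[i]) ? a[i] - b[i] : out[i];
}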
Hi,
Here's a work-in-progress patch which fixes many execution failures
seen in big-endian mode when -mvectorize-with-neon-quad is in effect
(which is soon to be the default, if we stick to the current plan).
But, it's pretty hairy, and I'm not at all convinced it's not working
"for the wrong reason" in a couple of places.
I'm mainly posting to gauge opinions on what we should do in big-endian
mode. This patch works with the assumption that quad-word vectors in
big-endian mode are in "vldm" order (i.e. with constituent double-words
in little-endian order: see previous discussions). But, that's pretty
confusing, leads to less than optimal code, and is bound to cause more
problems in the future. So I'm not sure how much effort to expend on
making it work right, given that we might be throwing that vector
ordering away in the future (at least in some cases: see below).
The "problem" patterns are as follows.
* Full-vector shifts: these don't work with big-endian vldm-order quad
vectors. For now, I've disabled them, although they could
potentially be implemented using vtbl (at some cost).
* Widening moves (unpacks) & widening multiplies: when widening from
  D-reg to Q-reg size, we must swap double-words in the result (I've
  done this with vext; see the sketch after this list). This seems to
  work fine, but what "hi" and "lo" refer to is rather muddled (in my
  head!). Also, these should be expanders instead of emitting multiple
  assembler insns.
* Narrowing moves: implemented by "open-coded" permute & vmovn (for 2x
D-reg -> D-reg), or 2x vmovn and vrev64.32 for Q-regs (as
suggested by Paul). These seem to work fine.
* Reduction operations: when reducing Q-reg values, GCC currently
tries to extract the result from the "wrong half" of the reduced
vector. The fix in the attached patch is rather dubious, but seems
to work (I'd like to understand why better).
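For reference, the double-word swap mentioned above can be expressed
with intrinsics; this is only an illustration of the operation, not the
patch's actual RTL:

#include <arm_neon.h>

/* vext treats (v, v) as one long vector and extracts four lanes
   starting at index 2, giving { v[2], v[3], v[0], v[1] }, i.e. the
   two constituent double-words of the Q register swapped. */
uint32x4_t swap_dwords (uint32x4_t v)
{
  return vextq_u32 (v, v, 2);
}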
We can sort those bits out, but the question is, do we want to go that
route? Vectors are used in three quite distinct ways by GCC:
1. By the vectorizer.
2. By the NEON intrinsics.
3. By the "generic vector" support.
For the first of these, I think we can get away with changing the
vectorizer to use explicit "array" loads and stores (i.e. vldN/vstN), so
that vector registers will hold elements in memory order -- so, all the
contortions in the attached patch will be unnecessary. ABI issues are
irrelevant, since vectors are generally "invisible" at the source-code
level, including at ABI boundaries.
For the second, intrinsics, we should do exactly what the user
requests: so, vectors are essentially treated as opaque objects. This
isn't a problem as such, but might mean that instruction patterns
written using "canonical" RTL for the vectorizer can't be shared with
intrinsics when the order of elements matters. (I'm not sure how many
patterns this would refer to at present; possibly none.)
The third case would continue to use "vldm" ordering, so if users
inadvertently write code such as:
res = vaddq_u32 (*foo, bar);
instead of writing an explicit vld* intrinsic (for the load of *foo),
the result might be different from what they expect. It'd be nice to
diagnose such code as erroneous, but that's another issue.
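To make the distinction concrete, here's a hypothetical sketch
(function names invented):

#include <arm_neon.h>

/* Dereferencing the vector pointer goes through the "generic vector"
   support, so in big-endian mode the elements arrive in vldm order: */
uint32x4_t risky (uint32x4_t *foo, uint32x4_t bar)
{
  return vaddq_u32 (*foo, bar);
}

/* An explicit load intrinsic makes the element order well-defined: */
uint32x4_t safe (const uint32_t *p, uint32x4_t bar)
{
  return vaddq_u32 (vld1q_u32 (p), bar);
}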
The important observation is that vectors from case 1 and from cases 2/3
never interact: it's quite safe for them to use different element
orderings, without extensive changes to GCC infrastructure (i.e.,
multiple internal representations). I don't think I quite realised this
previously.
So, anyway, back to the patch in question. The choices are, I think:
1. Apply as-is (after I've ironed out the wrinkles), and then remove
the "ugly" bits at a later point when vectorizer "array load/store"
support is implemented.
2. Apply a version which simply disables all the troublesome
patterns until the same support appears.
Apologies if I'm retreading old ground ;-).
(The CANNOT_CHANGE_MODE_CLASS fragment is necessary to generate good
code for the quad-word vec_pack_trunc_<mode> pattern. It would
eventually be applied as a separate patch.)
Thoughts?
Julian
ChangeLog
gcc/
* config/arm/arm.h (CANNOT_CHANGE_MODE_CLASS): Allow changing mode
of vector registers.
* config/arm/neon.md (vec_shr_<mode>, vec_shl_<mode>): Disable in
big-endian mode.
(reduc_splus_<mode>, reduc_smin_<mode>, reduc_smax_<mode>)
(reduc_umin_<mode>, reduc_umax_<mode>)
(neon_vec_unpack<US>_lo_<mode>, neon_vec_unpack<US>_hi_<mode>)
(neon_vec_<US>mult_lo_<mode>, neon_vec_<US>mult_hi_<mode>)
(vec_pack_trunc_<mode>, neon_vec_pack_trunc_<mode>): Handle
big-endian mode for quad-word vectors.
Hi there. I've had a few questions recently about how to build a
cross compiler, so I took a stab at writing the steps down in a
Makefile. See:
https://code.launchpad.net/~michaelh1/+junk/cross-build
Hopefully it's easy to follow. It uses a binary sysroot and gives you
vanilla binutils 2.20 and Linaro GCC 2010.11 in a setup that is good
enough to cross-compile for Maverick. The script is minimal and trades
flexibility for readability.
Note that Marcin's cross compiler packages or the Emdebian toolchains
are a better way to go, but if you want to see the steps involved,
check out the script.
Marcin or Matthias, would you mind reviewing it?
-- Michael
Hi,
- the struggle with the board took a lot of time
- continued to investigate special loads/stores
- looked for benchmarks:
EEMBC Consumer filters rgbcmy and rgbyiq should be vectorizable
once vld3 and vst3/vst4 are supported (see the sketch after this list)
EEMBC Telecom viterbi is supposed to give 4x on NEON once
vectorized (according to
http://www.jp.arm.com/event/pdf/forum2007/t1-5.pdf slide 29). My old
version of viterbi is not vectorizable because of if-conversion
problems. I'd be really happy to check the new version (it is supposed
to be slightly different).
Looking into other EEMBC benchmarks.
FFMPEG http://www.ffmpeg.org/ (got this from Rony Nandy from
User Platforms). It contains hand-vectorized code for NEON.
Investigating.
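As an illustration of the vld3/vst3 point above, a hypothetical kernel
(function and loop invented; assumes the pixel count is a multiple of
16):

#include <arm_neon.h>

/* vld3 de-interleaves packed RGB into one vector per channel and vst3
   re-interleaves on the way out -- the data layout the rgbcmy/rgbyiq
   kernels need. */
void invert_rgb (uint8_t *rgb, int npixels)
{
  int i;
  for (i = 0; i < npixels; i += 16)
    {
      uint8x16x3_t px = vld3q_u8 (rgb + 3 * i);
      px.val[0] = vmvnq_u8 (px.val[0]);  /* ~R */
      px.val[1] = vmvnq_u8 (px.val[1]);  /* ~G */
      px.val[2] = vmvnq_u8 (px.val[2]);  /* ~B */
      vst3q_u8 (rgb + 3 * i, px);
    }
}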
I am probably taking a day off on Sunday.
Ira
This wiki page came up during the toolchain call:
https://wiki.linaro.org/Internal/People/KenWerner/AtomicMemoryOperations/
It gives the code generated for __sync_val_compare_and_swap
as including a push {r4} / pop {r4} pair, because the sequence uses
too many temporaries to fit them all in caller-save registers. I think
you can tweak it a bit to get rid of that:
# int __sync_val_compare_and_swap (int *mem, int old, int new);
# if the current value of *mem is old, then write new into *mem
# r0: mem, r1: old, r2: new
        mov     r3, r0          # move r0 into r3
        dmb     sy              # full memory barrier
.LSYT7:
        ldrex   r0, [r3]        # load (exclusive) from memory
                                # pointed to by r3 into r0
        cmp     r0, r1          # compare contents of r0 (mem) with
                                # r1 (old) -> updates the condition flags
        bne     .LSYB7          # branch to .LSYB7 if mem != old
        # This strex trashes the r0 we just loaded, but since we didn't
        # take the branch we know that r0 == r1
        strex   r0, r2, [r3]    # store r2 (new) into memory pointed to
                                # by r3 (mem); r0 contains 0 if the
                                # store was successful, otherwise 1
        teq     r0, #0          # compare contents of r0 with zero ->
                                # updates the condition flags
        bne     .LSYT7          # branch to .LSYT7 if r0 != 0 (the
                                # store wasn't successful)
        # Move the value that was in memory into the right register
        # to return it
        mov     r0, r1
        dmb     sy              # full memory barrier
.LSYB7:
        bx      lr              # return
I think you can do a similar trick with __sync_fetch_and_add
(although you have to use a subtract to regenerate r0 from
r1 and r2).
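Here's my reading of that as an (untested, illustrative) sequence; the
label name is invented:

# int __sync_fetch_and_add (int *mem, int val);
# r0: mem, r1: val
        mov     r3, r0          # free up r0
        dmb     sy              # full memory barrier
.LSYT8:
        ldrex   r0, [r3]        # r0 = old value of *mem
        add     r2, r0, r1      # r2 = old + val
        strex   r0, r2, [r3]    # trashes r0; 0 if the store succeeded
        teq     r0, #0
        bne     .LSYT8          # retry if the store failed
        sub     r0, r2, r1      # regenerate the old value from r1 and r2
        dmb     sy              # full memory barrier
        bx      lr              # return the old value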
On the other hand I just looked at the gcc code that does this
and it's not simply dumping canned sequences out to the
assembler, so maybe it's not worth the effort just to drop a
stack push/pop.
-- PMM
== Linaro GCC ==
* Finished testing the patch for lp675347 (QT inline-asm atomics), and
sent it upstream for comments (no response yet). In the bug log,
suggested locally reverting for Ubuntu the patch which enabled
-fstrict-volatile-bitfields by default on ARM.
* Continued working on NEON quad-word vectors/big-endian patch. This
turned out to be slightly fiddlier than I expected: I think I now have
semantics which make sense, though my patch requires (a) slight
middle-end changes, and (b) workarounds for unexpected combiner
behaviour re: subregs & sign/zero-extend ops. I will send a new version
of the patch to linaro-toolchain fairly soon for comments.
Hello,
I have a question about the cbnz/cbz Thumb-2 instruction implementation
in the thumb2.md file:
I have an example where we jump to a label which appears before the branch;
for example:
.L4:
        ...
        cmp     r3, #0
        bne     .L4
It seems that the cbnz instruction should be applied in this loop,
replacing the cmp+bne pair; however, cbnz fails to be applied because
diff = ADDRESS (.L4) - ADDRESS (bne .L4) is negative, and according to
thumb2_cbz in thumb2.md it should satisfy 2 <= diff <= 128 (please see
the snippet below, taken from thumb2_cbz).
So I want to double-check whether the current implementation of
thumb2_cbnz in thumb2.md needs to be changed to enable it.
The following is from thumb2_cbnz in thumb2.md:
  [(set (attr "length")
        (if_then_else
            (and (ge (minus (match_dup 1) (pc)) (const_int 2))
                 (le (minus (match_dup 1) (pc)) (const_int 128))
                 (eq (symbol_ref ("which_alternative")) (const_int 0)))
            (const_int 2)
            (const_int 8)))]
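For comparison, a hypothetical forward-branch case where the pattern
can fire; cbz/cbnz only encode a small positive offset, which is what
the range check above expresses:

        cbnz    r3, .L5         # one 16-bit insn replacing cmp+bne,
        ...                     # valid only for a short forward branch
.L5: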
Thanks,
Revital
== This week ==
* More ARM testing of binutils support for STT_GNU_IFUNC.
* Implemented the GLIBC support for STT_GNU_IFUNC. Simple ARM testcases
  seem to run correctly (a sketch of such a testcase appears below).
* Ran the GLIBC testsuite -- which includes some ifunc coverage --
but haven't analysed the results yet.
* Started looking at Thumb for STT_GNU_IFUNC. The problem is that
BFD internally represents Thumb symbols with an even value and
a special st_type (STT_ARM_TFUNC); this is also the old, pre-EABI
external representation. We need something different for STT_GNU_IFUNC.
* Tried making BFD represent Thumb symbols as odd-value functions
internally. I got it to "work", but I wasn't really happy with
the results.
* Looked at alternatives, and in the end decided that it would be
better to have extra internal-only information in Elf_Internal_Sym.
This "works" too, and seems cleaner to me. Sent an RFC upstream:
http://sourceware.org/ml/binutils/2010-11/msg00475.html
* Started writing some Thumb tests for STT_GNU_IFUNC.
* Investigated #618684. Turned out to be something that Bernd
had already fixed upstream. Tested a backport.
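As background, here's a minimal sketch of the kind of ifunc testcase
meant above (all names invented; has_neon() stands in for whatever
runtime check a resolver might perform):

/* GNU ifunc: the resolver runs at relocation time and returns the
   implementation the "add" symbol should bind to. */
static int add_generic (int a, int b) { return a + b; }
static int add_tuned (int a, int b) { return a + b; /* tuned variant */ }

static int has_neon (void) { return 0; /* hypothetical runtime check */ }

static void *
resolve_add (void)
{
  return has_neon () ? (void *) add_tuned : (void *) add_generic;
}

int add (int, int) __attribute__ ((ifunc ("resolve_add")));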
== Next week ==
* More IFUNC tests (ARM and Thumb, EGLIBC and binutils).
Richard
== Linaro and upstream GCC ==
* LP #674146, dpkg segfaults during debootstrap on natty armel: analyzed
it and found it should be a case of PR44768; backported mainline
revision 161947 to fix this.
* LP #641379, bitfields poorly optimized. Discussed some with Andrew Stubbs.
* GCC bugzilla PR45416, a code generation regression on ARM: been
looking at this regression, which started with the expand-from-SSA
changes since 4.5-experimental. The problem seems to be that TER
substitutions are not properly applied during expand (compared to the
prior "convert to GENERIC, then expand" approach). I now have a patch
for this which fixes the PR's testcase, but when testing current
upstream trunk, I hit an assert-failure ICE in the alias oracle on
several testcases; the patch does, however, test without regressions
on a 4.5-based compiler. I am still looking at the upstream failures.
== This week ==
* Continue with GCC issues and PRs.
* Think about GCC performance opportunities (Linaro)