On 19 March 2013 21:56, Renato Golin renato.golin@linaro.org wrote:
Hi folks,
I found an issue while fixing a test that was using the wrong VMUL.f32, and I'd like to know what our choice should be on this slightly controversial topic.
Basically, LLVM chooses to lower single-precision FMUL to NEON's VMUL.f32 instead of VFP's version because, on some cores (A8, A5 and Apple's Swift), the VFP variant is really slow.
This is all cool and dandy, but NEON is not IEEE 754 compliant, so the result is slightly different. So slightly that only one test, which was really pushing the boundaries (i.e. going below FLT_MIN), actually caught it.
There are two ways we can go here:
1. Strict IEEE compatibility, and *only* lower to NEON's VMUL if unsafe-math is on. This will make generic single-precision code slower, but you can always turn unsafe-math on if you want more speed.
2. Continue using NEON for f32 by default, and put a note somewhere that people should turn this option (FeatureNEONForFP) off on A5/A8 if they *really* care about maximum IEEE compliance.
Apple already said that for Darwin, 2 is still the option of choice. Do we agree and ignore this issue? Or do we want strict conformance by default for GNU/EABI?
GCC uses fmuls...
The NEON vmul.f32 takes two possibly unexpected shortcuts: it flushes denormals to zero, and it ignores the selected rounding mode. Both of these can result in incorrect operation of code assuming standard behaviour.
C99 requires, and users generally expect, IEEE 754 behaviour, so deviating from this by default is in my opinion a bad idea. The fact that well-known flags exist to explicitly request relaxed semantics in favour of speed further reinforces the expectation that the default will be standards compliance.
I am strongly in favour of your option 1.