After llvm commit adec9223616477df023026b0269ccd008701cc94
Author: David Green <david.green@arm.com>

    [AArch64] Make -mcpu=generic schedule for an in-order core

the following benchmarks grew in size by more than 1%:
- 444.namd grew in size by 2% from 185531 to 188815 bytes

the following hot functions grew in size by more than 10% (but their benchmarks grew in size by less than 1%):
- 482.sphinx3:[.] OUTLINED_FUNCTION_4 grew in size by 14% from 28 to 32 bytes
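If you want to check size deltas like these yourself, section and per-symbol sizes can be compared with binutils. A minimal sketch, assuming the benchmark binaries from the two builds are available locally (the paths and binary names below are hypothetical, not the CI's actual layout):
<cut>
# Overall text/data sizes of the two builds of a benchmark binary.
size first_bad/namd last_good/namd

# Per-symbol sizes, to locate functions such as OUTLINED_FUNCTION_4.
nm --print-size --size-sort first_bad/sphinx_livepretend > first_bad.syms
nm --print-size --size-sort last_good/sphinx_livepretend > last_good.syms
diff first_bad.syms last_good.syms | grep OUTLINED_FUNCTION
</cut>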
The reproducer instructions below can be used to re-build both the "first_bad" and "last_good" cross-toolchains used in this bisection. Naturally, the scripts will fail when triggering benchmarking jobs if you don't have access to the Linaro TCWG CI.
For your convenience, we have uploaded tarballs with pre-processed source and assembly files at:
- First_bad save-temps: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
- Last_good save-temps: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
- Baseline save-temps: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
Configuration:
- Benchmark: SPEC CPU2006
- Toolchain: Clang + Glibc + LLVM Linker
- Version: all components were built from their tip of trunk
- Target: aarch64-linux-gnu
- Compiler flags: -Oz
- Hardware: APM Mustang 8x X-Gene1
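For reference, a single compile and link step in this configuration would look roughly like the following; the sysroot path and exact driver flags are assumptions here, not the CI scripts' actual invocation:
<cut>
# Hypothetical approximation of the CI's per-file build step: tip-of-trunk
# clang targeting aarch64-linux-gnu with glibc, size-optimized, linked with LLD.
clang --target=aarch64-linux-gnu --sysroot="$SYSROOT" -Oz -c foo.c -o foo.o
clang --target=aarch64-linux-gnu --sysroot="$SYSROOT" -fuse-ld=lld foo.o -o foo
</cut>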
This benchmarking CI is a work in progress, and we welcome feedback and suggestions at linaro-toolchain@lists.linaro.org. Among our planned improvements are support for SPEC CPU2017 benchmarks and "perf report/annotate" data behind these reports.
THIS IS THE END OF INTERESTING STUFF. BELOW ARE LINKS TO BUILDS, REPRODUCTION INSTRUCTIONS, AND THE RAW COMMIT.
This commit has regressed these CI configurations:
- tcwg_bmk_llvm_apm/llvm-master-aarch64-spec2k6-Oz
First_bad build: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
Last_good build: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
Baseline build: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
Even more details: https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a...
Reproduce builds:
<cut>
mkdir investigate-llvm-adec9223616477df023026b0269ccd008701cc94
cd investigate-llvm-adec9223616477df023026b0269ccd008701cc94
# Fetch scripts
git clone https://git.linaro.org/toolchain/jenkins-scripts
# Fetch manifests and test.sh script
mkdir -p artifacts/manifests
curl -o artifacts/manifests/build-baseline.sh https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a... --fail
curl -o artifacts/manifests/build-parameters.sh https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a... --fail
curl -o artifacts/test.sh https://ci.linaro.org/job/tcwg_bmk_ci_llvm-bisect-tcwg_bmk_apm-llvm-master-a... --fail
chmod +x artifacts/test.sh
# Reproduce the baseline build (build all pre-requisites)
./jenkins-scripts/tcwg_bmk-build.sh @@ artifacts/manifests/build-baseline.sh
# Save baseline build state (which is then restored in artifacts/test.sh)
mkdir -p ./bisect
rsync -a --del --delete-excluded --exclude /bisect/ --exclude /artifacts/ --exclude /llvm/ ./ ./bisect/baseline/
cd llvm
# Reproduce first_bad build
git checkout --detach adec9223616477df023026b0269ccd008701cc94
../artifacts/test.sh
# Reproduce last_good build
git checkout --detach e2a2e5475cbd370044474e132a1b5c58e6a3d458
../artifacts/test.sh
cd ..
</cut>
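After both builds complete, a low-effort way to see what the scheduler change did is to diff the save-temps assembly from the tarballs linked above; a sketch, with hypothetical tarball and file names:
<cut>
# Unpack the save-temps tarballs and diff the generated assembly for a
# file containing a hot function (names below are placeholders).
mkdir first_bad last_good
tar xf first_bad-save-temps.tar.xz -C first_bad
tar xf last_good-save-temps.tar.xz -C last_good
diff -u last_good/some-hot-file.s first_bad/some-hot-file.s | less
</cut>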
Full commit (up to 1000 lines):
<cut>
commit adec9223616477df023026b0269ccd008701cc94
Author: David Green <david.green@arm.com>
Date:   Sat Oct 9 15:58:31 2021 +0100
[AArch64] Make -mcpu=generic schedule for an in-order core
We would like to start pushing -mcpu=generic towards enabling the set of features that improves performance for some CPUs without hurting any others: a blend of the performance options that is hopefully beneficial to all CPUs. The largest part of that is enabling in-order scheduling using the Cortex-A55 schedule model. This is similar to the Arm backend change from eecb353d0e25ba, which made -mcpu=generic perform in-order scheduling using the cortex-a8 schedule model.
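The effect is visible with a bare llc; a minimal sketch (the IR is illustrative only, and note that -mcpu=cortex-a55 also enables target features that this commit deliberately leaves off the generic CPU):
<cut>
cat > sched-demo.ll <<'EOF'
define i32 @sum3(i32* %p) {
  %q = getelementptr i32, i32* %p, i64 1
  %r = getelementptr i32, i32* %p, i64 2
  %a = load i32, i32* %p
  %b = load i32, i32* %q
  %c = load i32, i32* %r
  %ab = add i32 %a, %b
  %abc = add i32 %ab, %c
  ret i32 %abc
}
EOF
# Before this commit the first invocation used NoSchedModel (roughly source
# order); after it, both should schedule with the Cortex-A55 model.
llc -mtriple=aarch64-linux-gnu -mcpu=generic -o - sched-demo.ll
llc -mtriple=aarch64-linux-gnu -mcpu=cortex-a55 -o - sched-demo.ll
</cut>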
The idea is that in-order CPUs require the most help in instruction scheduling, whereas out-of-order CPUs can for the most part out-of-order schedule around different codegen. Our benchmarking suggests that hypothesis holds. When running on an in-order core this improved performance by 3.8% geomean on a set of DSP workloads, 2% geomean on some other embedded benchmarks and between 1% and 1.8% on a set of single-core and multi-core workloads, all running on a Cortex-A55 cluster.
On an out-of-order CPU the results are a lot more noisy but show flat performance or an improvement. On the set of DSP and embedded benchmarks, run on a Cortex-A78, there was a very noisy 1% speed improvement. Using the most detailed results I could find, SPEC2006 runs on a Neoverse N1 show a small increase in instruction count (+0.127%), but a decrease in cycle counts (-0.155% on average). The instruction count is very low noise; the cycle count is more noisy, with a 0.15% decrease not being significant. SPEC2k17 shows a small decrease (-0.2%) in instruction count, leading to a 0.296% decrease in cycle count. These results are within noise margins but tend to show a small improvement in general.
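Instruction and cycle counts of this kind can be gathered with Linux perf; whether the numbers above were collected exactly this way is an assumption, but for local measurement:
<cut>
# Retired instructions and cycles, averaged over 10 runs; deltas around 0.1%
# need repetition to separate from noise. Binary and args are placeholders.
perf stat -r 10 -e instructions,cycles ./benchmark-binary benchmark-args
</cut>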
When specifying an Apple target, clang will set "-target-cpu apple-a7" on the command line, so such targets should not be affected by this change when compiling through clang. This also doesn't enable more runtime unrolling the way -mcpu=cortex-a55 does; it only changes the schedule model used.
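This is easy to confirm from the clang driver; the pipeline below just pulls the CPU out of the -### output (a quick local check, not something the report relies on):
<cut>
# Show the -target-cpu that clang passes to cc1 for an Apple target.
echo 'int x;' > empty.c
clang -### --target=arm64-apple-ios -c empty.c 2>&1 | tr ' ' '\n' | grep -A1 'target-cpu'
</cut>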
A lot of existing tests have been updated. This is a summary of the important differences:
- Most changes are the same instructions in a different order.
- Sometimes this leads to very minor inefficiencies, such as requiring an extra mov to move variables into r0/v0 for the return value of a test function.
- misched-fusion.ll was no longer fusing the pairs of instructions it should, as per D110561. I've changed the schedule used in the test for now.
- neon-mla-mls.ll now uses "mul; sub" as opposed to "neg; mla" due to the different latencies. This seems fine to me (see the sketch after this list).
- Some SVE tests do not always remove movprfx where they did before, due to different register allocation giving different destructive forms.
- The tests argument-blocks-array-of-struct.ll and arm64-windows-calls.ll produce two LDR where they previously produced an LDP, due to store-pair-suppress kicking in.
- arm64-ldp.ll and arm64-neon-copy.ll are missing pre/postinc on LDP.
- Some tests such as arm64-neon-mul-div.ll and ragreedy-local-interval-cost.ll have more, less or just different spilling.
- In aarch64_generated_funcs.ll.generated.expected one part of the function is no longer outlined. Interestingly, if I switch this to use any other schedule model, even less is outlined.
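As one concrete example of these isel differences, the neon-mla-mls.ll case can be poked at in isolation; the IR below is an illustrative reconstruction, not the test itself:
<cut>
cat > mls-demo.ll <<'EOF'
; a*b - d has no single NEON instruction, so codegen picks either
; "neg; mla" or "mul; sub" depending on the schedule model's latencies.
define <8 x i8> @mul_sub(<8 x i8> %a, <8 x i8> %b, <8 x i8> %d) {
  %m = mul <8 x i8> %a, %b
  %s = sub <8 x i8> %m, %d
  ret <8 x i8> %s
}
EOF
llc -mtriple=aarch64-linux-gnu -mcpu=generic -o - mls-demo.ll
</cut>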
Some of these changes are expected to happen, such as differences in outlining or register spilling. There will be places where they result in worse codegen and places where they are better, with the SPEC instruction counts suggesting it is not a decrease overall, on average.
Differential Revision: https://reviews.llvm.org/D110830
---
 llvm/lib/Target/AArch64/AArch64.td | 2 +-
 .../Analysis/CostModel/AArch64/shuffle-select.ll | 2 +-
 .../Analysis/CostModel/AArch64/vector-select.ll | 4 +-
 llvm/test/CodeGen/AArch64/DAGCombine_vscale.ll | 2 +-
 .../CodeGen/AArch64/GlobalISel/arm64-atomic.ll | 68 +-
 llvm/test/CodeGen/AArch64/GlobalISel/byval-call.ll | 4 +-
 .../call-translator-variadic-musttail.ll | 26 +-
 .../CodeGen/AArch64/GlobalISel/combine-udiv.ll | 308 +-
 .../AArch64/GlobalISel/merge-stores-truncating.ll | 10 +-
 llvm/test/CodeGen/AArch64/GlobalISel/swifterror.ll | 86 +-
 llvm/test/CodeGen/AArch64/aarch64-addv.ll | 2 +-
 llvm/test/CodeGen/AArch64/aarch64-be-bv.ll | 40 +-
 .../CodeGen/AArch64/aarch64-dup-ext-scalable.ll | 40 +-
 llvm/test/CodeGen/AArch64/aarch64-dup-ext.ll | 18 +-
 llvm/test/CodeGen/AArch64/aarch64-fold-lslfast.ll | 12 +-
 llvm/test/CodeGen/AArch64/aarch64-load-ext.ll | 36 +-
 .../CodeGen/AArch64/aarch64-matrix-umull-smull.ll | 24 +-
 llvm/test/CodeGen/AArch64/aarch64-smull.ll | 124 +-
 llvm/test/CodeGen/AArch64/aarch64-tail-dup-size.ll | 6 +-
 .../test/CodeGen/AArch64/aarch64_win64cc_vararg.ll | 4 +-
 llvm/test/CodeGen/AArch64/addimm-mulimm.ll | 32 +-
 .../CodeGen/AArch64/addsub-constant-folding.ll | 18 +-
 llvm/test/CodeGen/AArch64/addsub.ll | 2 +-
 llvm/test/CodeGen/AArch64/align-down.ll | 10 +-
 llvm/test/CodeGen/AArch64/and-mask-removal.ll | 12 +-
 .../AArch64/argument-blocks-array-of-struct.ll | 51 +-
 llvm/test/CodeGen/AArch64/arm64-AdvSIMD-Scalar.ll | 24 +-
 .../CodeGen/AArch64/arm64-addr-type-promotion.ll | 37 +-
 llvm/test/CodeGen/AArch64/arm64-addrmode.ll | 6 +-
 .../test/CodeGen/AArch64/arm64-bitfield-extract.ll | 14 +-
 llvm/test/CodeGen/AArch64/arm64-collect-loh.ll | 2 +-
 llvm/test/CodeGen/AArch64/arm64-convert-v4f64.ll | 22 +-
 llvm/test/CodeGen/AArch64/arm64-csel.ll | 16 +-
 llvm/test/CodeGen/AArch64/arm64-dup.ll | 10 +-
 llvm/test/CodeGen/AArch64/arm64-fcopysign.ll | 18 +-
 llvm/test/CodeGen/AArch64/arm64-fmadd.ll | 4 +-
 .../arm64-homogeneous-prolog-epilog-no-helper.ll | 18 +-
 llvm/test/CodeGen/AArch64/arm64-indexed-memory.ll | 54 +-
 .../CodeGen/AArch64/arm64-indexed-vector-ldst.ll | 180 +-
 llvm/test/CodeGen/AArch64/arm64-inline-asm.ll | 8 +-
 .../AArch64/arm64-instruction-mix-remarks.ll | 20 +-
 llvm/test/CodeGen/AArch64/arm64-ldp.ll | 20 +-
 llvm/test/CodeGen/AArch64/arm64-memset-inline.ll | 4 +-
 llvm/test/CodeGen/AArch64/arm64-neon-3vdiff.ll | 64 +-
 llvm/test/CodeGen/AArch64/arm64-neon-aba-abd.ll | 6 +-
 llvm/test/CodeGen/AArch64/arm64-neon-copy.ll | 13 +-
 llvm/test/CodeGen/AArch64/arm64-neon-mul-div.ll | 1428 ++++----
 llvm/test/CodeGen/AArch64/arm64-nvcast.ll | 10 +-
 llvm/test/CodeGen/AArch64/arm64-popcnt.ll | 198 +-
 .../arm64-promote-const-complex-initializers.ll | 8 +-
 .../test/CodeGen/AArch64/arm64-register-pairing.ll | 4 +-
 llvm/test/CodeGen/AArch64/arm64-rev.ll | 14 +-
 .../AArch64/arm64-setcc-int-to-fp-combine.ll | 20 +-
 llvm/test/CodeGen/AArch64/arm64-shrink-wrapping.ll | 92 +-
 llvm/test/CodeGen/AArch64/arm64-sli-sri-opt.ll | 30 +-
 llvm/test/CodeGen/AArch64/arm64-srl-and.ll | 2 +-
 .../test/CodeGen/AArch64/arm64-subvector-extend.ll | 630 ++--
 llvm/test/CodeGen/AArch64/arm64-tls-dynamics.ll | 8 +-
 llvm/test/CodeGen/AArch64/arm64-tls-local-exec.ll | 8 +-
 llvm/test/CodeGen/AArch64/arm64-trunc-store.ll | 4 +-
 llvm/test/CodeGen/AArch64/arm64-vabs.ll | 446 ++-
 llvm/test/CodeGen/AArch64/arm64-vhadd.ll | 32 +-
 llvm/test/CodeGen/AArch64/arm64-vmul.ll | 226 +-
 llvm/test/CodeGen/AArch64/arm64-windows-calls.ll | 19 +-
 .../CodeGen/AArch64/arm64-zero-cycle-zeroing.ll | 8 +-
 llvm/test/CodeGen/AArch64/arm64_32-addrs.ll | 6 +-
 llvm/test/CodeGen/AArch64/arm64_32-atomics.ll | 2 +-
 llvm/test/CodeGen/AArch64/atomic-ops-lse.ll | 17 +-
 .../CodeGen/AArch64/atomic-ops-not-barriers.ll | 2 +-
 llvm/test/CodeGen/AArch64/bcmp-inline-small.ll | 4 +-
 llvm/test/CodeGen/AArch64/bitcast-promote-widen.ll | 8 +-
 llvm/test/CodeGen/AArch64/bitfield-insert.ll | 34 +-
 llvm/test/CodeGen/AArch64/build-one-lane.ll | 9 +-
 llvm/test/CodeGen/AArch64/build-vector-extract.ll | 126 +-
 llvm/test/CodeGen/AArch64/cgp-usubo.ll | 24 +-
 llvm/test/CodeGen/AArch64/cmp-select-sign.ll | 44 +-
 llvm/test/CodeGen/AArch64/cmpxchg-idioms.ll | 16 +-
 .../CodeGen/AArch64/combine-comparisons-by-cse.ll | 50 +-
 llvm/test/CodeGen/AArch64/cond-sel-value-prop.ll | 12 +-
 llvm/test/CodeGen/AArch64/consthoist-gep.ll | 32 +-
 llvm/test/CodeGen/AArch64/csr-split.ll | 4 +-
 llvm/test/CodeGen/AArch64/ctpop-nonean.ll | 30 +-
 llvm/test/CodeGen/AArch64/dag-combine-select.ll | 2 +-
 .../CodeGen/AArch64/dag-combine-trunc-build-vec.ll | 14 +-
 llvm/test/CodeGen/AArch64/dag-numsignbits.ll | 12 +-
 .../AArch64/div-rem-pair-recomposition-signed.ll | 210 +-
 .../AArch64/div-rem-pair-recomposition-unsigned.ll | 210 +-
 llvm/test/CodeGen/AArch64/emutls.ll | 6 +-
 llvm/test/CodeGen/AArch64/expand-select.ll | 50 +-
 llvm/test/CodeGen/AArch64/expand-vector-rot.ll | 12 +-
 llvm/test/CodeGen/AArch64/extract-bits.ll | 484 +--
 llvm/test/CodeGen/AArch64/extract-lowbits.ll | 116 +-
 llvm/test/CodeGen/AArch64/f16-instructions.ll | 18 +-
 llvm/test/CodeGen/AArch64/fabs.ll | 8 +-
 llvm/test/CodeGen/AArch64/fadd-combines.ll | 14 +-
 llvm/test/CodeGen/AArch64/faddp-half.ll | 8 +-
 .../CodeGen/AArch64/fast-isel-addressing-modes.ll | 6 +-
 .../CodeGen/AArch64/fast-isel-branch-cond-split.ll | 4 +-
 llvm/test/CodeGen/AArch64/fast-isel-gep.ll | 6 +-
 llvm/test/CodeGen/AArch64/fast-isel-memcpy.ll | 6 +-
 llvm/test/CodeGen/AArch64/fast-isel-shift.ll | 24 +-
 llvm/test/CodeGen/AArch64/fdiv_combine.ll | 6 +-
 llvm/test/CodeGen/AArch64/fold-global-offsets.ll | 10 +-
 llvm/test/CodeGen/AArch64/fp16-v8-instructions.ll | 1441 ++++----
 llvm/test/CodeGen/AArch64/fp16-vector-shuffle.ll | 2 +-
 llvm/test/CodeGen/AArch64/fptosi-sat-scalar.ll | 198 +-
 llvm/test/CodeGen/AArch64/fptosi-sat-vector.ll | 958 +++---
 llvm/test/CodeGen/AArch64/fptoui-sat-scalar.ll | 114 +-
 llvm/test/CodeGen/AArch64/fptoui-sat-vector.ll | 708 ++--
 .../CodeGen/AArch64/framelayout-frame-record.mir | 3 +-
 .../CodeGen/AArch64/framelayout-unaligned-fp.ll | 4 +-
 llvm/test/CodeGen/AArch64/func-calls.ll | 2 +-
 llvm/test/CodeGen/AArch64/funnel-shift-rot.ll | 30 +-
 llvm/test/CodeGen/AArch64/funnel-shift.ll | 108 +-
 llvm/test/CodeGen/AArch64/global-merge-3.ll | 24 +-
 llvm/test/CodeGen/AArch64/half.ll | 10 +-
 .../hoist-and-by-const-from-lshr-in-eqcmp-zero.ll | 6 +-
 .../test/CodeGen/AArch64/hwasan-check-memaccess.ll | 2 +-
 .../CodeGen/AArch64/i128_volatile_load_store.ll | 36 +-
 llvm/test/CodeGen/AArch64/implicit-null-check.ll | 12 +-
 .../AArch64/insert-subvector-res-legalization.ll | 70 +-
 llvm/test/CodeGen/AArch64/isinf.ll | 2 +-
 llvm/test/CodeGen/AArch64/known-never-nan.ll | 16 +-
 llvm/test/CodeGen/AArch64/ldst-opt.ll | 5 +-
 llvm/test/CodeGen/AArch64/llvm-ir-to-intrinsic.ll | 163 +-
 llvm/test/CodeGen/AArch64/logical_shifted_reg.ll | 137 +-
 llvm/test/CodeGen/AArch64/lowerMUL-newload.ll | 24 +-
 .../CodeGen/AArch64/machine-licm-sink-instr.ll | 24 +-
 .../test/CodeGen/AArch64/machine-outliner-throw.ll | 4 +-
 .../AArch64/machine_cse_impdef_killflags.ll | 4 +-
 llvm/test/CodeGen/AArch64/madd-lohi.ll | 4 +-
 llvm/test/CodeGen/AArch64/memcpy-scoped-aa.ll | 50 +-
 llvm/test/CodeGen/AArch64/merge-trunc-store.ll | 72 +-
 llvm/test/CodeGen/AArch64/midpoint-int.ll | 308 +-
 llvm/test/CodeGen/AArch64/min-max.ll | 260 +-
 llvm/test/CodeGen/AArch64/minmax-of-minmax.ll | 256 +-
 llvm/test/CodeGen/AArch64/minmax.ll | 10 +-
 llvm/test/CodeGen/AArch64/misched-fusion-lit.ll | 5 +-
 llvm/test/CodeGen/AArch64/misched-fusion.ll | 4 +-
 .../CodeGen/AArch64/named-vector-shuffles-neon.ll | 18 +-
 .../CodeGen/AArch64/named-vector-shuffles-sve.ll | 408 +--
 llvm/test/CodeGen/AArch64/neg-abs.ll | 8 +-
 llvm/test/CodeGen/AArch64/neg-imm.ll | 3 +-
 .../CodeGen/AArch64/neon-bitwise-instructions.ll | 6 +-
 llvm/test/CodeGen/AArch64/neon-dotpattern.ll | 4 +-
 llvm/test/CodeGen/AArch64/neon-dotreduce.ll | 88 +-
 llvm/test/CodeGen/AArch64/neon-mla-mls.ll | 30 +-
 llvm/test/CodeGen/AArch64/neon-mov.ll | 2 +-
 llvm/test/CodeGen/AArch64/neon-reverseshuffle.ll | 2 +-
 llvm/test/CodeGen/AArch64/neon-shift-neg.ll | 24 +-
 llvm/test/CodeGen/AArch64/neon-truncstore.ll | 30 +-
 llvm/test/CodeGen/AArch64/nontemporal.ll | 74 +-
 llvm/test/CodeGen/AArch64/overeager_mla_fusing.ll | 10 +-
 llvm/test/CodeGen/AArch64/pow.ll | 12 +-
 .../pull-conditional-binop-through-shift.ll | 6 +-
 llvm/test/CodeGen/AArch64/qmovn.ll | 8 +-
 .../AArch64/ragreedy-local-interval-cost.ll | 187 +-
 llvm/test/CodeGen/AArch64/rand.ll | 10 +-
 llvm/test/CodeGen/AArch64/reduce-and.ll | 348 +-
 llvm/test/CodeGen/AArch64/reduce-or.ll | 348 +-
 llvm/test/CodeGen/AArch64/reduce-xor.ll | 164 +-
 llvm/test/CodeGen/AArch64/regress-tblgen-chains.ll | 4 +-
 llvm/test/CodeGen/AArch64/rotate-extract.ll | 14 +-
 .../rvmarker-pseudo-expansion-and-outlining.mir | 4 +-
 llvm/test/CodeGen/AArch64/sadd_sat.ll | 12 +-
 llvm/test/CodeGen/AArch64/sadd_sat_plus.ll | 36 +-
 llvm/test/CodeGen/AArch64/sadd_sat_vec.ll | 68 +-
 llvm/test/CodeGen/AArch64/sat-add.ll | 30 +-
 llvm/test/CodeGen/AArch64/sdivpow2.ll | 2 +-
 llvm/test/CodeGen/AArch64/seh-finally.ll | 8 +-
 llvm/test/CodeGen/AArch64/select-with-and-or.ll | 32 +-
 llvm/test/CodeGen/AArch64/select_const.ll | 112 +-
 llvm/test/CodeGen/AArch64/select_fmf.ll | 32 +-
 llvm/test/CodeGen/AArch64/selectcc-to-shiftand.ll | 16 +-
 llvm/test/CodeGen/AArch64/settag-merge-order.ll | 4 +-
 llvm/test/CodeGen/AArch64/settag-merge.ll | 8 +-
 llvm/test/CodeGen/AArch64/settag.ll | 10 +-
 llvm/test/CodeGen/AArch64/shift-amount-mod.ll | 168 +-
 llvm/test/CodeGen/AArch64/shift-by-signext.ll | 20 +-
 llvm/test/CodeGen/AArch64/shift-mod.ll | 2 +-
 llvm/test/CodeGen/AArch64/shrink-wrapping-vla.ll | 4 +-
 llvm/test/CodeGen/AArch64/sibling-call.ll | 2 +-
 llvm/test/CodeGen/AArch64/signbit-shift.ll | 8 +-
 llvm/test/CodeGen/AArch64/sink-addsub-of-const.ll | 48 +-
 llvm/test/CodeGen/AArch64/sitofp-fixed-legal.ll | 18 +-
 .../CodeGen/AArch64/speculation-hardening-loads.ll | 4 +-
 .../test/CodeGen/AArch64/speculation-hardening.mir | 2 +-
 llvm/test/CodeGen/AArch64/split-vector-insert.ll | 70 +-
 llvm/test/CodeGen/AArch64/sqrt-fastmath.ll | 254 +-
 llvm/test/CodeGen/AArch64/srem-lkk.ll | 2 +-
 .../CodeGen/AArch64/srem-seteq-illegal-types.ll | 90 +-
 llvm/test/CodeGen/AArch64/srem-seteq-optsize.ll | 16 +-
 .../CodeGen/AArch64/srem-seteq-vec-nonsplat.ll | 382 +--
 llvm/test/CodeGen/AArch64/srem-seteq-vec-splat.ll | 64 +-
 llvm/test/CodeGen/AArch64/srem-seteq.ll | 12 +-
 llvm/test/CodeGen/AArch64/srem-vector-lkk.ll | 446 +--
 llvm/test/CodeGen/AArch64/ssub_sat.ll | 12 +-
 llvm/test/CodeGen/AArch64/ssub_sat_plus.ll | 36 +-
 llvm/test/CodeGen/AArch64/ssub_sat_vec.ll | 68 +-
 .../CodeGen/AArch64/stack-guard-remat-bitcast.ll | 12 +-
 llvm/test/CodeGen/AArch64/stack-guard-sysreg.ll | 30 +-
 .../CodeGen/AArch64/statepoint-call-lowering.ll | 6 +-
 .../AArch64/sve-calling-convention-mixed.ll | 16 +-
 llvm/test/CodeGen/AArch64/sve-expand-div.ll | 12 +-
 llvm/test/CodeGen/AArch64/sve-extract-element.ll | 4 +-
 .../CodeGen/AArch64/sve-extract-fixed-vector.ll | 64 +-
 .../CodeGen/AArch64/sve-extract-scalable-vector.ll | 60 +-
 llvm/test/CodeGen/AArch64/sve-fcopysign.ll | 18 +-
 llvm/test/CodeGen/AArch64/sve-fcvt.ll | 64 +-
 .../CodeGen/AArch64/sve-fixed-length-concat.ll | 28 +-
 .../AArch64/sve-fixed-length-extract-vector-elt.ll | 12 +-
 .../AArch64/sve-fixed-length-float-compares.ll | 28 +-
 .../AArch64/sve-fixed-length-fp-extend-trunc.ll | 54 +-
 .../CodeGen/AArch64/sve-fixed-length-fp-select.ll | 48 +-
 .../CodeGen/AArch64/sve-fixed-length-fp-to-int.ll | 54 +-
 .../CodeGen/AArch64/sve-fixed-length-fp-vselect.ll | 1716 +++++-----
 .../AArch64/sve-fixed-length-insert-vector-elt.ll | 148 +-
 .../CodeGen/AArch64/sve-fixed-length-int-div.ll | 216 +-
 .../AArch64/sve-fixed-length-int-extends.ll | 56 +-
 .../AArch64/sve-fixed-length-int-immediates.ll | 56 +-
 .../CodeGen/AArch64/sve-fixed-length-int-mulh.ll | 30 +-
 .../CodeGen/AArch64/sve-fixed-length-int-rem.ll | 282 +-
 .../CodeGen/AArch64/sve-fixed-length-int-select.ll | 144 +-
 .../CodeGen/AArch64/sve-fixed-length-int-to-fp.ll | 108 +-
 .../AArch64/sve-fixed-length-int-vselect.ll | 3584 ++++++++++----------
 .../AArch64/sve-fixed-length-masked-gather.ll | 296 +-
 .../AArch64/sve-fixed-length-masked-loads.ll | 46 +-
 .../AArch64/sve-fixed-length-masked-scatter.ll | 342 +-
 .../AArch64/sve-fixed-length-masked-stores.ll | 82 +-
 .../AArch64/sve-fixed-length-vector-shuffle.ll | 78 +-
 llvm/test/CodeGen/AArch64/sve-forward-st-to-ld.ll | 7 +-
 llvm/test/CodeGen/AArch64/sve-fptrunc-store.ll | 4 +-
 llvm/test/CodeGen/AArch64/sve-gep.ll | 4 +-
 .../CodeGen/AArch64/sve-implicit-zero-filling.ll | 13 +-
 llvm/test/CodeGen/AArch64/sve-insert-element.ll | 192 +-
 llvm/test/CodeGen/AArch64/sve-insert-vector.ll | 80 +-
 llvm/test/CodeGen/AArch64/sve-int-arith-imm.ll | 30 +-
 llvm/test/CodeGen/AArch64/sve-int-arith.ll | 2 +-
 llvm/test/CodeGen/AArch64/sve-intrinsics-index.ll | 10 +-
 .../CodeGen/AArch64/sve-intrinsics-int-arith.ll | 4 +-
 llvm/test/CodeGen/AArch64/sve-ld-post-inc.ll | 6 +-
 llvm/test/CodeGen/AArch64/sve-ld1r.ll | 2 +-
 .../sve-lsr-scaled-index-addressing-mode.ll | 1 +
 .../CodeGen/AArch64/sve-masked-gather-legalize.ll | 6 +-
 .../CodeGen/AArch64/sve-masked-scatter-legalize.ll | 2 +-
 llvm/test/CodeGen/AArch64/sve-masked-scatter.ll | 2 +-
 llvm/test/CodeGen/AArch64/sve-pred-arith.ll | 16 +-
 llvm/test/CodeGen/AArch64/sve-sext-zext.ll | 12 +-
 llvm/test/CodeGen/AArch64/sve-split-extract-elt.ll | 100 +-
 llvm/test/CodeGen/AArch64/sve-split-fcvt.ll | 40 +-
 llvm/test/CodeGen/AArch64/sve-split-fp-reduce.ll | 2 +-
 llvm/test/CodeGen/AArch64/sve-split-insert-elt.ll | 72 +-
 llvm/test/CodeGen/AArch64/sve-split-int-reduce.ll | 10 +-
 llvm/test/CodeGen/AArch64/sve-split-load.ll | 6 +-
 llvm/test/CodeGen/AArch64/sve-split-store.ll | 6 +-
 .../AArch64/sve-st1-addressing-mode-reg-imm.ll | 12 +-
 llvm/test/CodeGen/AArch64/sve-stepvector.ll | 22 +-
 llvm/test/CodeGen/AArch64/sve-trunc.ll | 30 +-
 llvm/test/CodeGen/AArch64/sve-vscale-attr.ll | 40 +-
 llvm/test/CodeGen/AArch64/sve-vscale.ll | 2 +-
 llvm/test/CodeGen/AArch64/sve-vselect-imm.ll | 12 +-
 llvm/test/CodeGen/AArch64/swift-async.ll | 20 +-
 llvm/test/CodeGen/AArch64/swift-return.ll | 2 +-
 llvm/test/CodeGen/AArch64/swifterror.ll | 6 +-
 llvm/test/CodeGen/AArch64/tiny-model-pic.ll | 12 +-
 llvm/test/CodeGen/AArch64/tiny-model-static.ll | 12 +-
 .../test/CodeGen/AArch64/typepromotion-overflow.ll | 136 +-
 llvm/test/CodeGen/AArch64/typepromotion-signed.ll | 38 +-
 llvm/test/CodeGen/AArch64/uadd_sat.ll | 6 +-
 llvm/test/CodeGen/AArch64/uadd_sat_plus.ll | 30 +-
 llvm/test/CodeGen/AArch64/uadd_sat_vec.ll | 72 +-
 .../AArch64/umulo-128-legalisation-lowering.ll | 27 +-
 ...old-masked-merge-scalar-constmask-innerouter.ll | 18 +-
 ...asked-merge-scalar-constmask-interleavedbits.ll | 12 +-
 ...merge-scalar-constmask-interleavedbytehalves.ll | 12 +-
 ...unfold-masked-merge-scalar-constmask-lowhigh.ll | 2 +-
 .../unfold-masked-merge-scalar-variablemask.ll | 98 +-
 llvm/test/CodeGen/AArch64/urem-lkk.ll | 20 +-
 .../CodeGen/AArch64/urem-seteq-illegal-types.ll | 28 +-
 llvm/test/CodeGen/AArch64/urem-seteq-nonzero.ll | 46 +-
 llvm/test/CodeGen/AArch64/urem-seteq-optsize.ll | 14 +-
 .../CodeGen/AArch64/urem-seteq-vec-nonsplat.ll | 340 +-
 .../test/CodeGen/AArch64/urem-seteq-vec-nonzero.ll | 56 +-
 llvm/test/CodeGen/AArch64/urem-seteq-vec-splat.ll | 38 +-
 .../CodeGen/AArch64/urem-seteq-vec-tautological.ll | 56 +-
 llvm/test/CodeGen/AArch64/urem-seteq.ll | 14 +-
 llvm/test/CodeGen/AArch64/urem-vector-lkk.ll | 330 +-
 .../AArch64/use-cr-result-of-dom-icmp-st.ll | 8 +-
 llvm/test/CodeGen/AArch64/usub_sat_plus.ll | 20 +-
 llvm/test/CodeGen/AArch64/usub_sat_vec.ll | 48 +-
 llvm/test/CodeGen/AArch64/vcvt-oversize.ll | 4 +-
 llvm/test/CodeGen/AArch64/vec-libcalls.ll | 34 +-
 llvm/test/CodeGen/AArch64/vec_cttz.ll | 8 +-
 llvm/test/CodeGen/AArch64/vec_uaddo.ll | 168 +-
 llvm/test/CodeGen/AArch64/vec_umulo.ll | 296 +-
 .../CodeGen/AArch64/vecreduce-and-legalization.ll | 36 +-
 .../AArch64/vecreduce-fadd-legalization-strict.ll | 96 +-
 .../CodeGen/AArch64/vecreduce-fadd-legalization.ll | 6 +-
 llvm/test/CodeGen/AArch64/vecreduce-fadd.ll | 188 +-
 .../CodeGen/AArch64/vecreduce-fmax-legalization.ll | 246 +-
 .../CodeGen/AArch64/vecreduce-fmin-legalization.ll | 246 +-
 .../CodeGen/AArch64/vecreduce-umax-legalization.ll | 14 +-
 llvm/test/CodeGen/AArch64/vector-fcopysign.ll | 346 +-
 llvm/test/CodeGen/AArch64/vector-gep.ll | 6 +-
 .../CodeGen/AArch64/vector-popcnt-128-ult-ugt.ll | 680 ++--
 llvm/test/CodeGen/AArch64/vldn_shuffle.ll | 6 +-
 llvm/test/CodeGen/AArch64/vselect-constants.ll | 42 +-
 llvm/test/CodeGen/AArch64/win-tls.ll | 6 +-
 llvm/test/CodeGen/AArch64/win64_vararg.ll | 32 +-
 llvm/test/CodeGen/AArch64/win64_vararg_float.ll | 12 +-
 llvm/test/CodeGen/AArch64/win64_vararg_float_cc.ll | 12 +-
 llvm/test/CodeGen/AArch64/xor.ll | 8 +-
 llvm/test/MC/AArch64/elf-globaladdress.ll | 6 +-
 .../CanonicalizeFreezeInLoops/aarch64.ll | 2 +-
 .../CodeGenPrepare/AArch64/large-offset-gep.ll | 30 +-
 .../AArch64/lsr-pre-inc-offset-check.ll | 12 +-
 .../LoopStrengthReduce/AArch64/small-constant.ll | 2 +-
 .../aarch64_generated_funcs.ll.generated.expected | 30 +-
 ...aarch64_generated_funcs.ll.nogenerated.expected | 24 +-
 319 files changed, 14045 insertions(+), 13817 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64.td b/llvm/lib/Target/AArch64/AArch64.td
index 5c1bf783ba2a..cb52532343fe 100644
--- a/llvm/lib/Target/AArch64/AArch64.td
+++ b/llvm/lib/Target/AArch64/AArch64.td
@@ -1156,7 +1156,7 @@ def ProcTSV110 : SubtargetFeature<"tsv110", "ARMProcFamily", "TSV110",
                                   FeatureFP16FML,
                                   FeatureDotProd]>;
 
-def : ProcessorModel<"generic", NoSchedModel, [
+def : ProcessorModel<"generic", CortexA55Model, [
                      FeatureFPARMv8,
                      FeatureFuseAES,
                      FeatureNEON,
diff --git a/llvm/test/Analysis/CostModel/AArch64/shuffle-select.ll b/llvm/test/Analysis/CostModel/AArch64/shuffle-select.ll
index 5008c7f5c847..cb8ec7ba6f21 100644
--- a/llvm/test/Analysis/CostModel/AArch64/shuffle-select.ll
+++ b/llvm/test/Analysis/CostModel/AArch64/shuffle-select.ll
@@ -4,7 +4,7 @@
 ; COST-LABEL: sel.v8i8
 ; COST: Found an estimated cost of 42 for instruction: %tmp0 = shufflevector <8 x i8> %v0, <8 x i8> %v1, <8 x i32> <i32 0, i32 9, i32 2, i32 11, i32 4, i32 13, i32 6, i32 15>
 ; CODE-LABEL: sel.v8i8
-; CODE: tbl v0.8b, { v0.16b }, v2.8b
+; CODE: tbl v0.8b, { v0.16b }, v1.8b
 define <8 x i8> @sel.v8i8(<8 x i8> %v0, <8 x i8> %v1) {
   %tmp0 = shufflevector <8 x i8> %v0, <8 x i8> %v1, <8 x i32> <i32 0, i32 9, i32 2, i32 11, i32 4, i32 13, i32 6, i32 15>
   ret <8 x i8> %tmp0
diff --git a/llvm/test/Analysis/CostModel/AArch64/vector-select.ll b/llvm/test/Analysis/CostModel/AArch64/vector-select.ll
index f2271c4ed71f..6e77612815f4 100644
--- a/llvm/test/Analysis/CostModel/AArch64/vector-select.ll
+++ b/llvm/test/Analysis/CostModel/AArch64/vector-select.ll
@@ -119,15 +119,15 @@ define <2 x i64> @v2i64_select_sle(<2 x i64> %a, <2 x i64> %b, <2 x i64> %c) {
; CODE-LABEL: v3i64_select_sle ; CODE: bb.0 -; CODE: ldr ; CODE: mov +; CODE: ldr ; CODE: mov ; CODE: mov ; CODE: cmge ; CODE: cmge ; CODE: bif -; CODE: ext ; CODE: bif +; CODE: ext ; CODE: ret
define <3 x i64> @v3i64_select_sle(<3 x i64> %a, <3 x i64> %b, <3 x i64> %c) { diff --git a/llvm/test/CodeGen/AArch64/DAGCombine_vscale.ll b/llvm/test/CodeGen/AArch64/DAGCombine_vscale.ll index 6fe73e067e1a..c2436ccecc75 100644 --- a/llvm/test/CodeGen/AArch64/DAGCombine_vscale.ll +++ b/llvm/test/CodeGen/AArch64/DAGCombine_vscale.ll @@ -51,8 +51,8 @@ define <vscale x 4 x i32> @ashr_add_shl_nxv4i8(<vscale x 4 x i32> %a) { ; CHECK-LABEL: ashr_add_shl_nxv4i8: ; CHECK: // %bb.0: ; CHECK-NEXT: mov w8, #16777216 -; CHECK-NEXT: mov z1.s, w8 ; CHECK-NEXT: lsl z0.s, z0.s, #24 +; CHECK-NEXT: mov z1.s, w8 ; CHECK-NEXT: add z0.s, z0.s, z1.s ; CHECK-NEXT: asr z0.s, z0.s, #24 ; CHECK-NEXT: ret diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll index fd3a0072d2a8..4385e3ede36f 100644 --- a/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll +++ b/llvm/test/CodeGen/AArch64/GlobalISel/arm64-atomic.ll @@ -705,14 +705,14 @@ define i32 @atomic_load(i32* %p) #0 { define i8 @atomic_load_relaxed_8(i8* %p, i32 %off32) #0 { ; CHECK-NOLSE-O1-LABEL: atomic_load_relaxed_8: ; CHECK-NOLSE-O1: ; %bb.0: -; CHECK-NOLSE-O1-NEXT: ldrb w8, [x0, #4095] -; CHECK-NOLSE-O1-NEXT: ldrb w9, [x0, w1, sxtw] -; CHECK-NOLSE-O1-NEXT: ldurb w10, [x0, #-256] -; CHECK-NOLSE-O1-NEXT: add x11, x0, #291, lsl #12 ; =1191936 -; CHECK-NOLSE-O1-NEXT: ldrb w11, [x11] -; CHECK-NOLSE-O1-NEXT: add w8, w8, w9 -; CHECK-NOLSE-O1-NEXT: add w8, w8, w10 -; CHECK-NOLSE-O1-NEXT: add w0, w8, w11 +; CHECK-NOLSE-O1-NEXT: add x8, x0, #291, lsl #12 ; =1191936 +; CHECK-NOLSE-O1-NEXT: ldrb w9, [x0, #4095] +; CHECK-NOLSE-O1-NEXT: ldrb w10, [x0, w1, sxtw] +; CHECK-NOLSE-O1-NEXT: ldurb w11, [x0, #-256] +; CHECK-NOLSE-O1-NEXT: ldrb w8, [x8] +; CHECK-NOLSE-O1-NEXT: add w9, w9, w10 +; CHECK-NOLSE-O1-NEXT: add w9, w9, w11 +; CHECK-NOLSE-O1-NEXT: add w0, w9, w8 ; CHECK-NOLSE-O1-NEXT: ret ; ; CHECK-NOLSE-O0-LABEL: atomic_load_relaxed_8: @@ -775,14 +775,14 @@ define i8 @atomic_load_relaxed_8(i8* %p, i32 %off32) #0 { define i16 @atomic_load_relaxed_16(i16* %p, i32 %off32) #0 { ; CHECK-NOLSE-O1-LABEL: atomic_load_relaxed_16: ; CHECK-NOLSE-O1: ; %bb.0: -; CHECK-NOLSE-O1-NEXT: ldrh w8, [x0, #8190] -; CHECK-NOLSE-O1-NEXT: ldrh w9, [x0, w1, sxtw #1] -; CHECK-NOLSE-O1-NEXT: ldurh w10, [x0, #-256] -; CHECK-NOLSE-O1-NEXT: add x11, x0, #291, lsl #12 ; =1191936 -; CHECK-NOLSE-O1-NEXT: ldrh w11, [x11] -; CHECK-NOLSE-O1-NEXT: add w8, w8, w9 -; CHECK-NOLSE-O1-NEXT: add w8, w8, w10 -; CHECK-NOLSE-O1-NEXT: add w0, w8, w11 +; CHECK-NOLSE-O1-NEXT: add x8, x0, #291, lsl #12 ; =1191936 +; CHECK-NOLSE-O1-NEXT: ldrh w9, [x0, #8190] +; CHECK-NOLSE-O1-NEXT: ldrh w10, [x0, w1, sxtw #1] +; CHECK-NOLSE-O1-NEXT: ldurh w11, [x0, #-256] +; CHECK-NOLSE-O1-NEXT: ldrh w8, [x8] +; CHECK-NOLSE-O1-NEXT: add w9, w9, w10 +; CHECK-NOLSE-O1-NEXT: add w9, w9, w11 +; CHECK-NOLSE-O1-NEXT: add w0, w9, w8 ; CHECK-NOLSE-O1-NEXT: ret ; ; CHECK-NOLSE-O0-LABEL: atomic_load_relaxed_16: @@ -845,14 +845,14 @@ define i16 @atomic_load_relaxed_16(i16* %p, i32 %off32) #0 { define i32 @atomic_load_relaxed_32(i32* %p, i32 %off32) #0 { ; CHECK-NOLSE-O1-LABEL: atomic_load_relaxed_32: ; CHECK-NOLSE-O1: ; %bb.0: -; CHECK-NOLSE-O1-NEXT: ldr w8, [x0, #16380] -; CHECK-NOLSE-O1-NEXT: ldr w9, [x0, w1, sxtw #2] -; CHECK-NOLSE-O1-NEXT: ldur w10, [x0, #-256] -; CHECK-NOLSE-O1-NEXT: add x11, x0, #291, lsl #12 ; =1191936 -; CHECK-NOLSE-O1-NEXT: ldr w11, [x11] -; CHECK-NOLSE-O1-NEXT: add w8, w8, w9 -; CHECK-NOLSE-O1-NEXT: add w8, w8, w10 -; CHECK-NOLSE-O1-NEXT: add w0, w8, w11 
+; CHECK-NOLSE-O1-NEXT: add x8, x0, #291, lsl #12 ; =1191936 +; CHECK-NOLSE-O1-NEXT: ldr w9, [x0, #16380] +; CHECK-NOLSE-O1-NEXT: ldr w10, [x0, w1, sxtw #2] +; CHECK-NOLSE-O1-NEXT: ldur w11, [x0, #-256] +; CHECK-NOLSE-O1-NEXT: ldr w8, [x8] +; CHECK-NOLSE-O1-NEXT: add w9, w9, w10 +; CHECK-NOLSE-O1-NEXT: add w9, w9, w11 +; CHECK-NOLSE-O1-NEXT: add w0, w9, w8 ; CHECK-NOLSE-O1-NEXT: ret ; ; CHECK-NOLSE-O0-LABEL: atomic_load_relaxed_32: @@ -911,14 +911,14 @@ define i32 @atomic_load_relaxed_32(i32* %p, i32 %off32) #0 { define i64 @atomic_load_relaxed_64(i64* %p, i32 %off32) #0 { ; CHECK-NOLSE-O1-LABEL: atomic_load_relaxed_64: ; CHECK-NOLSE-O1: ; %bb.0: -; CHECK-NOLSE-O1-NEXT: ldr x8, [x0, #32760] -; CHECK-NOLSE-O1-NEXT: ldr x9, [x0, w1, sxtw #3] -; CHECK-NOLSE-O1-NEXT: ldur x10, [x0, #-256] -; CHECK-NOLSE-O1-NEXT: add x11, x0, #291, lsl #12 ; =1191936 -; CHECK-NOLSE-O1-NEXT: ldr x11, [x11] -; CHECK-NOLSE-O1-NEXT: add x8, x8, x9 -; CHECK-NOLSE-O1-NEXT: add x8, x8, x10 -; CHECK-NOLSE-O1-NEXT: add x0, x8, x11 +; CHECK-NOLSE-O1-NEXT: add x8, x0, #291, lsl #12 ; =1191936 +; CHECK-NOLSE-O1-NEXT: ldr x9, [x0, #32760] +; CHECK-NOLSE-O1-NEXT: ldr x10, [x0, w1, sxtw #3] +; CHECK-NOLSE-O1-NEXT: ldur x11, [x0, #-256] +; CHECK-NOLSE-O1-NEXT: ldr x8, [x8] +; CHECK-NOLSE-O1-NEXT: add x9, x9, x10 +; CHECK-NOLSE-O1-NEXT: add x9, x9, x11 +; CHECK-NOLSE-O1-NEXT: add x0, x9, x8 ; CHECK-NOLSE-O1-NEXT: ret ; ; CHECK-NOLSE-O0-LABEL: atomic_load_relaxed_64: @@ -2717,8 +2717,8 @@ define { i8, i1 } @cmpxchg_i8(i8* %ptr, i8 %desired, i8 %new) { ; CHECK-NOLSE-O1-NEXT: ; kill: def $w0 killed $w0 killed $x0 ; CHECK-NOLSE-O1-NEXT: ret ; CHECK-NOLSE-O1-NEXT: LBB47_4: ; %cmpxchg.nostore -; CHECK-NOLSE-O1-NEXT: clrex ; CHECK-NOLSE-O1-NEXT: mov w1, wzr +; CHECK-NOLSE-O1-NEXT: clrex ; CHECK-NOLSE-O1-NEXT: ; kill: def $w0 killed $w0 killed $x0 ; CHECK-NOLSE-O1-NEXT: ret ; @@ -2783,8 +2783,8 @@ define { i16, i1 } @cmpxchg_i16(i16* %ptr, i16 %desired, i16 %new) { ; CHECK-NOLSE-O1-NEXT: ; kill: def $w0 killed $w0 killed $x0 ; CHECK-NOLSE-O1-NEXT: ret ; CHECK-NOLSE-O1-NEXT: LBB48_4: ; %cmpxchg.nostore -; CHECK-NOLSE-O1-NEXT: clrex ; CHECK-NOLSE-O1-NEXT: mov w1, wzr +; CHECK-NOLSE-O1-NEXT: clrex ; CHECK-NOLSE-O1-NEXT: ; kill: def $w0 killed $w0 killed $x0 ; CHECK-NOLSE-O1-NEXT: ret ; diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/byval-call.ll b/llvm/test/CodeGen/AArch64/GlobalISel/byval-call.ll index f8d4731d3249..651ca31ae555 100644 --- a/llvm/test/CodeGen/AArch64/GlobalISel/byval-call.ll +++ b/llvm/test/CodeGen/AArch64/GlobalISel/byval-call.ll @@ -27,8 +27,8 @@ define void @call_byval_a64i32([64 x i32]* %incoming) { ; CHECK: // %bb.0: ; CHECK-NEXT: sub sp, sp, #288 ; CHECK-NEXT: stp x29, x30, [sp, #256] // 16-byte Folded Spill -; CHECK-NEXT: str x28, [sp, #272] // 8-byte Folded Spill ; CHECK-NEXT: add x29, sp, #256 +; CHECK-NEXT: str x28, [sp, #272] // 8-byte Folded Spill ; CHECK-NEXT: .cfi_def_cfa w29, 32 ; CHECK-NEXT: .cfi_offset w28, -16 ; CHECK-NEXT: .cfi_offset w30, -24 @@ -66,8 +66,8 @@ define void @call_byval_a64i32([64 x i32]* %incoming) { ; CHECK-NEXT: ldr q0, [x0, #240] ; CHECK-NEXT: str q0, [sp, #240] ; CHECK-NEXT: bl byval_a64i32 -; CHECK-NEXT: ldr x28, [sp, #272] // 8-byte Folded Reload ; CHECK-NEXT: ldp x29, x30, [sp, #256] // 16-byte Folded Reload +; CHECK-NEXT: ldr x28, [sp, #272] // 8-byte Folded Reload ; CHECK-NEXT: add sp, sp, #288 ; CHECK-NEXT: ret call void @byval_a64i32([64 x i32]* byval([64 x i32]) %incoming) diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/call-translator-variadic-musttail.ll 
b/llvm/test/CodeGen/AArch64/GlobalISel/call-translator-variadic-musttail.ll index 42e91f631822..44c0854ea03d 100644 --- a/llvm/test/CodeGen/AArch64/GlobalISel/call-translator-variadic-musttail.ll +++ b/llvm/test/CodeGen/AArch64/GlobalISel/call-translator-variadic-musttail.ll @@ -63,15 +63,12 @@ define i32 @test_musttail_variadic_spill(i32 %arg0, ...) { ; CHECK-NEXT: mov x25, x6 ; CHECK-NEXT: mov x26, x7 ; CHECK-NEXT: stp q1, q0, [sp, #96] ; 32-byte Folded Spill +; CHECK-NEXT: mov x27, x8 ; CHECK-NEXT: stp q3, q2, [sp, #64] ; 32-byte Folded Spill ; CHECK-NEXT: stp q5, q4, [sp, #32] ; 32-byte Folded Spill ; CHECK-NEXT: stp q7, q6, [sp] ; 32-byte Folded Spill -; CHECK-NEXT: mov x27, x8 ; CHECK-NEXT: bl _puts ; CHECK-NEXT: ldp q1, q0, [sp, #96] ; 32-byte Folded Reload -; CHECK-NEXT: ldp q3, q2, [sp, #64] ; 32-byte Folded Reload -; CHECK-NEXT: ldp q5, q4, [sp, #32] ; 32-byte Folded Reload -; CHECK-NEXT: ldp q7, q6, [sp] ; 32-byte Folded Reload ; CHECK-NEXT: mov w0, w19 ; CHECK-NEXT: mov x1, x20 ; CHECK-NEXT: mov x2, x21 @@ -81,6 +78,9 @@ define i32 @test_musttail_variadic_spill(i32 %arg0, ...) { ; CHECK-NEXT: mov x6, x25 ; CHECK-NEXT: mov x7, x26 ; CHECK-NEXT: mov x8, x27 +; CHECK-NEXT: ldp q3, q2, [sp, #64] ; 32-byte Folded Reload +; CHECK-NEXT: ldp q5, q4, [sp, #32] ; 32-byte Folded Reload +; CHECK-NEXT: ldp q7, q6, [sp] ; 32-byte Folded Reload ; CHECK-NEXT: ldp x29, x30, [sp, #208] ; 16-byte Folded Reload ; CHECK-NEXT: ldp x20, x19, [sp, #192] ; 16-byte Folded Reload ; CHECK-NEXT: ldp x22, x21, [sp, #176] ; 16-byte Folded Reload @@ -122,9 +122,8 @@ define void @f_thunk(i8* %this, ...) { ; CHECK-NEXT: .cfi_offset w26, -80 ; CHECK-NEXT: .cfi_offset w27, -88 ; CHECK-NEXT: .cfi_offset w28, -96 -; CHECK-NEXT: mov x27, x8 -; CHECK-NEXT: add x8, sp, #128 -; CHECK-NEXT: add x9, sp, #256 +; CHECK-NEXT: add x9, sp, #128 +; CHECK-NEXT: add x10, sp, #256 ; CHECK-NEXT: mov x19, x0 ; CHECK-NEXT: mov x20, x1 ; CHECK-NEXT: mov x21, x2 @@ -134,16 +133,14 @@ define void @f_thunk(i8* %this, ...) { ; CHECK-NEXT: mov x25, x6 ; CHECK-NEXT: mov x26, x7 ; CHECK-NEXT: stp q1, q0, [sp, #96] ; 32-byte Folded Spill +; CHECK-NEXT: mov x27, x8 ; CHECK-NEXT: stp q3, q2, [sp, #64] ; 32-byte Folded Spill ; CHECK-NEXT: stp q5, q4, [sp, #32] ; 32-byte Folded Spill ; CHECK-NEXT: stp q7, q6, [sp] ; 32-byte Folded Spill -; CHECK-NEXT: str x9, [x8] +; CHECK-NEXT: str x10, [x9] ; CHECK-NEXT: bl _get_f -; CHECK-NEXT: mov x9, x0 ; CHECK-NEXT: ldp q1, q0, [sp, #96] ; 32-byte Folded Reload -; CHECK-NEXT: ldp q3, q2, [sp, #64] ; 32-byte Folded Reload -; CHECK-NEXT: ldp q5, q4, [sp, #32] ; 32-byte Folded Reload -; CHECK-NEXT: ldp q7, q6, [sp] ; 32-byte Folded Reload +; CHECK-NEXT: mov x9, x0 ; CHECK-NEXT: mov x0, x19 ; CHECK-NEXT: mov x1, x20 ; CHECK-NEXT: mov x2, x21 @@ -153,6 +150,9 @@ define void @f_thunk(i8* %this, ...) { ; CHECK-NEXT: mov x6, x25 ; CHECK-NEXT: mov x7, x26 ; CHECK-NEXT: mov x8, x27 +; CHECK-NEXT: ldp q3, q2, [sp, #64] ; 32-byte Folded Reload +; CHECK-NEXT: ldp q5, q4, [sp, #32] ; 32-byte Folded Reload +; CHECK-NEXT: ldp q7, q6, [sp] ; 32-byte Folded Reload ; CHECK-NEXT: ldp x29, x30, [sp, #240] ; 16-byte Folded Reload ; CHECK-NEXT: ldp x20, x19, [sp, #224] ; 16-byte Folded Reload ; CHECK-NEXT: ldp x22, x21, [sp, #208] ; 16-byte Folded Reload @@ -195,9 +195,9 @@ define void @h_thunk(%struct.Foo* %this, ...) 
{ ; CHECK-NEXT: Lloh2: ; CHECK-NEXT: adrp x10, _g@GOTPAGE ; CHECK-NEXT: ldr x9, [x0, #16] +; CHECK-NEXT: mov w11, #42 ; CHECK-NEXT: Lloh3: ; CHECK-NEXT: ldr x10, [x10, _g@GOTPAGEOFF] -; CHECK-NEXT: mov w11, #42 ; CHECK-NEXT: Lloh4: ; CHECK-NEXT: str w11, [x10] ; CHECK-NEXT: br x9 diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll index 6d9dad450ef1..3dc45e4cf5a7 100644 --- a/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll +++ b/llvm/test/CodeGen/AArch64/GlobalISel/combine-udiv.ll @@ -18,20 +18,20 @@ define <8 x i16> @combine_vec_udiv_uniform(<8 x i16> %x) { ; ; GISEL-LABEL: combine_vec_udiv_uniform: ; GISEL: // %bb.0: -; GISEL-NEXT: adrp x8, .LCPI0_1 -; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI0_1] -; GISEL-NEXT: adrp x8, .LCPI0_0 -; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI0_0] ; GISEL-NEXT: adrp x8, .LCPI0_2 -; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI0_2] -; GISEL-NEXT: sub v1.8h, v2.8h, v1.8h -; GISEL-NEXT: neg v1.8h, v1.8h -; GISEL-NEXT: umull2 v2.4s, v0.8h, v3.8h -; GISEL-NEXT: umull v3.4s, v0.4h, v3.4h -; GISEL-NEXT: uzp2 v2.8h, v3.8h, v2.8h -; GISEL-NEXT: sub v0.8h, v0.8h, v2.8h -; GISEL-NEXT: ushl v0.8h, v0.8h, v1.8h -; GISEL-NEXT: add v0.8h, v0.8h, v2.8h +; GISEL-NEXT: adrp x9, .LCPI0_0 +; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI0_2] +; GISEL-NEXT: adrp x8, .LCPI0_1 +; GISEL-NEXT: ldr q4, [x9, :lo12:.LCPI0_0] +; GISEL-NEXT: umull2 v2.4s, v0.8h, v1.8h +; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI0_1] +; GISEL-NEXT: umull v1.4s, v0.4h, v1.4h +; GISEL-NEXT: uzp2 v1.8h, v1.8h, v2.8h +; GISEL-NEXT: sub v2.8h, v4.8h, v3.8h +; GISEL-NEXT: sub v0.8h, v0.8h, v1.8h +; GISEL-NEXT: neg v2.8h, v2.8h +; GISEL-NEXT: ushl v0.8h, v0.8h, v2.8h +; GISEL-NEXT: add v0.8h, v0.8h, v1.8h ; GISEL-NEXT: ushr v0.8h, v0.8h, #4 ; GISEL-NEXT: ret %1 = udiv <8 x i16> %x, <i16 23, i16 23, i16 23, i16 23, i16 23, i16 23, i16 23, i16 23> @@ -44,53 +44,53 @@ define <8 x i16> @combine_vec_udiv_nonuniform(<8 x i16> %x) { ; SDAG-NEXT: adrp x8, .LCPI1_0 ; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI1_0] ; SDAG-NEXT: adrp x8, .LCPI1_1 +; SDAG-NEXT: ushl v1.8h, v0.8h, v1.8h ; SDAG-NEXT: ldr q2, [x8, :lo12:.LCPI1_1] ; SDAG-NEXT: adrp x8, .LCPI1_2 -; SDAG-NEXT: ldr q3, [x8, :lo12:.LCPI1_2] -; SDAG-NEXT: ushl v1.8h, v0.8h, v1.8h -; SDAG-NEXT: umull2 v4.4s, v1.8h, v2.8h +; SDAG-NEXT: umull2 v3.4s, v1.8h, v2.8h ; SDAG-NEXT: umull v1.4s, v1.4h, v2.4h +; SDAG-NEXT: ldr q2, [x8, :lo12:.LCPI1_2] ; SDAG-NEXT: adrp x8, .LCPI1_3 -; SDAG-NEXT: uzp2 v1.8h, v1.8h, v4.8h -; SDAG-NEXT: ldr q2, [x8, :lo12:.LCPI1_3] +; SDAG-NEXT: uzp2 v1.8h, v1.8h, v3.8h ; SDAG-NEXT: sub v0.8h, v0.8h, v1.8h -; SDAG-NEXT: umull2 v4.4s, v0.8h, v3.8h -; SDAG-NEXT: umull v0.4s, v0.4h, v3.4h -; SDAG-NEXT: uzp2 v0.8h, v0.8h, v4.8h +; SDAG-NEXT: umull2 v3.4s, v0.8h, v2.8h +; SDAG-NEXT: umull v0.4s, v0.4h, v2.4h +; SDAG-NEXT: uzp2 v0.8h, v0.8h, v3.8h ; SDAG-NEXT: add v0.8h, v0.8h, v1.8h -; SDAG-NEXT: ushl v0.8h, v0.8h, v2.8h +; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI1_3] +; SDAG-NEXT: ushl v0.8h, v0.8h, v1.8h ; SDAG-NEXT: ret ; ; GISEL-LABEL: combine_vec_udiv_nonuniform: ; GISEL: // %bb.0: -; GISEL-NEXT: adrp x8, .LCPI1_5 -; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI1_5] ; GISEL-NEXT: adrp x8, .LCPI1_4 -; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI1_4] +; GISEL-NEXT: adrp x10, .LCPI1_0 +; GISEL-NEXT: adrp x9, .LCPI1_1 +; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI1_4] ; GISEL-NEXT: adrp x8, .LCPI1_3 -; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI1_3] -; GISEL-NEXT: adrp x8, .LCPI1_1 -; GISEL-NEXT: ldr q4, [x8, :lo12:.LCPI1_1] -; GISEL-NEXT: adrp x8, 
.LCPI1_0 -; GISEL-NEXT: ldr q5, [x8, :lo12:.LCPI1_0] +; GISEL-NEXT: ldr q5, [x10, :lo12:.LCPI1_0] +; GISEL-NEXT: ldr q6, [x9, :lo12:.LCPI1_1] +; GISEL-NEXT: neg v1.8h, v1.8h +; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI1_3] ; GISEL-NEXT: adrp x8, .LCPI1_2 -; GISEL-NEXT: neg v2.8h, v2.8h -; GISEL-NEXT: ldr q6, [x8, :lo12:.LCPI1_2] -; GISEL-NEXT: ushl v2.8h, v0.8h, v2.8h -; GISEL-NEXT: cmeq v1.8h, v1.8h, v5.8h -; GISEL-NEXT: umull2 v5.4s, v2.8h, v3.8h +; GISEL-NEXT: ushl v1.8h, v0.8h, v1.8h +; GISEL-NEXT: umull2 v3.4s, v1.8h, v2.8h +; GISEL-NEXT: umull v1.4s, v1.4h, v2.4h +; GISEL-NEXT: uzp2 v1.8h, v1.8h, v3.8h +; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI1_2] +; GISEL-NEXT: adrp x8, .LCPI1_5 +; GISEL-NEXT: sub v2.8h, v0.8h, v1.8h +; GISEL-NEXT: umull2 v4.4s, v2.8h, v3.8h ; GISEL-NEXT: umull v2.4s, v2.4h, v3.4h -; GISEL-NEXT: uzp2 v2.8h, v2.8h, v5.8h -; GISEL-NEXT: sub v3.8h, v0.8h, v2.8h -; GISEL-NEXT: umull2 v5.4s, v3.8h, v6.8h -; GISEL-NEXT: umull v3.4s, v3.4h, v6.4h -; GISEL-NEXT: uzp2 v3.8h, v3.8h, v5.8h -; GISEL-NEXT: neg v4.8h, v4.8h -; GISEL-NEXT: shl v1.8h, v1.8h, #15 -; GISEL-NEXT: add v2.8h, v3.8h, v2.8h -; GISEL-NEXT: ushl v2.8h, v2.8h, v4.8h -; GISEL-NEXT: sshr v1.8h, v1.8h, #15 -; GISEL-NEXT: bif v0.16b, v2.16b, v1.16b +; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI1_5] +; GISEL-NEXT: cmeq v3.8h, v3.8h, v5.8h +; GISEL-NEXT: uzp2 v2.8h, v2.8h, v4.8h +; GISEL-NEXT: neg v4.8h, v6.8h +; GISEL-NEXT: add v1.8h, v2.8h, v1.8h +; GISEL-NEXT: shl v2.8h, v3.8h, #15 +; GISEL-NEXT: ushl v1.8h, v1.8h, v4.8h +; GISEL-NEXT: sshr v2.8h, v2.8h, #15 +; GISEL-NEXT: bif v0.16b, v1.16b, v2.16b ; GISEL-NEXT: ret %1 = udiv <8 x i16> %x, <i16 23, i16 34, i16 -23, i16 56, i16 128, i16 -1, i16 -256, i16 -32768> ret <8 x i16> %1 @@ -100,41 +100,41 @@ define <8 x i16> @combine_vec_udiv_nonuniform2(<8 x i16> %x) { ; SDAG-LABEL: combine_vec_udiv_nonuniform2: ; SDAG: // %bb.0: ; SDAG-NEXT: adrp x8, .LCPI2_0 -; SDAG-NEXT: adrp x9, .LCPI2_1 ; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI2_0] -; SDAG-NEXT: ldr q2, [x9, :lo12:.LCPI2_1] +; SDAG-NEXT: adrp x8, .LCPI2_1 +; SDAG-NEXT: ushl v0.8h, v0.8h, v1.8h +; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI2_1] ; SDAG-NEXT: adrp x8, .LCPI2_2 -; SDAG-NEXT: ldr q3, [x8, :lo12:.LCPI2_2] +; SDAG-NEXT: umull2 v2.4s, v0.8h, v1.8h +; SDAG-NEXT: umull v0.4s, v0.4h, v1.4h +; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI2_2] +; SDAG-NEXT: uzp2 v0.8h, v0.8h, v2.8h ; SDAG-NEXT: ushl v0.8h, v0.8h, v1.8h -; SDAG-NEXT: umull2 v1.4s, v0.8h, v2.8h -; SDAG-NEXT: umull v0.4s, v0.4h, v2.4h -; SDAG-NEXT: uzp2 v0.8h, v0.8h, v1.8h -; SDAG-NEXT: ushl v0.8h, v0.8h, v3.8h ; SDAG-NEXT: ret ; ; GISEL-LABEL: combine_vec_udiv_nonuniform2: ; GISEL: // %bb.0: -; GISEL-NEXT: adrp x8, .LCPI2_4 -; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI2_4] ; GISEL-NEXT: adrp x8, .LCPI2_3 -; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI2_3] -; GISEL-NEXT: adrp x8, .LCPI2_1 -; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI2_1] -; GISEL-NEXT: adrp x8, .LCPI2_0 -; GISEL-NEXT: ldr q4, [x8, :lo12:.LCPI2_0] +; GISEL-NEXT: adrp x9, .LCPI2_4 +; GISEL-NEXT: adrp x10, .LCPI2_0 +; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI2_3] ; GISEL-NEXT: adrp x8, .LCPI2_2 -; GISEL-NEXT: ldr q5, [x8, :lo12:.LCPI2_2] +; GISEL-NEXT: ldr q3, [x9, :lo12:.LCPI2_4] +; GISEL-NEXT: ldr q4, [x10, :lo12:.LCPI2_0] +; GISEL-NEXT: neg v1.8h, v1.8h +; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI2_2] +; GISEL-NEXT: adrp x8, .LCPI2_1 +; GISEL-NEXT: cmeq v3.8h, v3.8h, v4.8h +; GISEL-NEXT: ushl v1.8h, v0.8h, v1.8h +; GISEL-NEXT: shl v3.8h, v3.8h, #15 +; GISEL-NEXT: umull2 v5.4s, v1.8h, v2.8h +; GISEL-NEXT: umull v1.4s, v1.4h, v2.4h +; 
GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI2_1] ; GISEL-NEXT: neg v2.8h, v2.8h -; GISEL-NEXT: ushl v2.8h, v0.8h, v2.8h -; GISEL-NEXT: cmeq v1.8h, v1.8h, v4.8h -; GISEL-NEXT: umull2 v4.4s, v2.8h, v5.8h -; GISEL-NEXT: umull v2.4s, v2.4h, v5.4h -; GISEL-NEXT: neg v3.8h, v3.8h -; GISEL-NEXT: shl v1.8h, v1.8h, #15 -; GISEL-NEXT: uzp2 v2.8h, v2.8h, v4.8h -; GISEL-NEXT: ushl v2.8h, v2.8h, v3.8h -; GISEL-NEXT: sshr v1.8h, v1.8h, #15 -; GISEL-NEXT: bif v0.16b, v2.16b, v1.16b +; GISEL-NEXT: uzp2 v1.8h, v1.8h, v5.8h +; GISEL-NEXT: ushl v1.8h, v1.8h, v2.8h +; GISEL-NEXT: sshr v2.8h, v3.8h, #15 +; GISEL-NEXT: bif v0.16b, v1.16b, v2.16b ; GISEL-NEXT: ret %1 = udiv <8 x i16> %x, <i16 -34, i16 35, i16 36, i16 -37, i16 38, i16 -39, i16 40, i16 -41> ret <8 x i16> %1 @@ -146,43 +146,43 @@ define <8 x i16> @combine_vec_udiv_nonuniform3(<8 x i16> %x) { ; SDAG-NEXT: adrp x8, .LCPI3_0 ; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI3_0] ; SDAG-NEXT: adrp x8, .LCPI3_1 -; SDAG-NEXT: ldr q3, [x8, :lo12:.LCPI3_1] ; SDAG-NEXT: umull2 v2.4s, v0.8h, v1.8h ; SDAG-NEXT: umull v1.4s, v0.4h, v1.4h ; SDAG-NEXT: uzp2 v1.8h, v1.8h, v2.8h ; SDAG-NEXT: sub v0.8h, v0.8h, v1.8h ; SDAG-NEXT: usra v1.8h, v0.8h, #1 -; SDAG-NEXT: ushl v0.8h, v1.8h, v3.8h +; SDAG-NEXT: ldr q0, [x8, :lo12:.LCPI3_1] +; SDAG-NEXT: ushl v0.8h, v1.8h, v0.8h ; SDAG-NEXT: ret ; ; GISEL-LABEL: combine_vec_udiv_nonuniform3: ; GISEL: // %bb.0: -; GISEL-NEXT: adrp x8, .LCPI3_5 -; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI3_5] ; GISEL-NEXT: adrp x8, .LCPI3_4 -; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI3_4] -; GISEL-NEXT: adrp x8, .LCPI3_2 -; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI3_2] -; GISEL-NEXT: adrp x8, .LCPI3_1 -; GISEL-NEXT: ldr q4, [x8, :lo12:.LCPI3_1] -; GISEL-NEXT: adrp x8, .LCPI3_3 -; GISEL-NEXT: ldr q5, [x8, :lo12:.LCPI3_3] -; GISEL-NEXT: adrp x8, .LCPI3_0 -; GISEL-NEXT: ldr q6, [x8, :lo12:.LCPI3_0] -; GISEL-NEXT: sub v3.8h, v4.8h, v3.8h -; GISEL-NEXT: umull2 v4.4s, v0.8h, v2.8h -; GISEL-NEXT: umull v2.4s, v0.4h, v2.4h -; GISEL-NEXT: uzp2 v2.8h, v2.8h, v4.8h -; GISEL-NEXT: neg v3.8h, v3.8h -; GISEL-NEXT: sub v4.8h, v0.8h, v2.8h -; GISEL-NEXT: cmeq v1.8h, v1.8h, v6.8h -; GISEL-NEXT: ushl v3.8h, v4.8h, v3.8h -; GISEL-NEXT: neg v5.8h, v5.8h -; GISEL-NEXT: shl v1.8h, v1.8h, #15 -; GISEL-NEXT: add v2.8h, v3.8h, v2.8h -; GISEL-NEXT: ushl v2.8h, v2.8h, v5.8h -; GISEL-NEXT: sshr v1.8h, v1.8h, #15 -; GISEL-NEXT: bif v0.16b, v2.16b, v1.16b +; GISEL-NEXT: adrp x9, .LCPI3_2 +; GISEL-NEXT: adrp x10, .LCPI3_1 +; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI3_4] +; GISEL-NEXT: adrp x8, .LCPI3_5 +; GISEL-NEXT: ldr q2, [x9, :lo12:.LCPI3_2] +; GISEL-NEXT: adrp x9, .LCPI3_3 +; GISEL-NEXT: ldr q3, [x10, :lo12:.LCPI3_1] +; GISEL-NEXT: adrp x10, .LCPI3_0 +; GISEL-NEXT: umull2 v4.4s, v0.8h, v1.8h +; GISEL-NEXT: umull v1.4s, v0.4h, v1.4h +; GISEL-NEXT: ldr q6, [x9, :lo12:.LCPI3_3] +; GISEL-NEXT: sub v2.8h, v3.8h, v2.8h +; GISEL-NEXT: ldr q5, [x10, :lo12:.LCPI3_0] +; GISEL-NEXT: uzp2 v1.8h, v1.8h, v4.8h +; GISEL-NEXT: ldr q4, [x8, :lo12:.LCPI3_5] +; GISEL-NEXT: neg v2.8h, v2.8h +; GISEL-NEXT: sub v3.8h, v0.8h, v1.8h +; GISEL-NEXT: ushl v2.8h, v3.8h, v2.8h +; GISEL-NEXT: cmeq v3.8h, v4.8h, v5.8h +; GISEL-NEXT: neg v4.8h, v6.8h +; GISEL-NEXT: add v1.8h, v2.8h, v1.8h +; GISEL-NEXT: shl v2.8h, v3.8h, #15 +; GISEL-NEXT: ushl v1.8h, v1.8h, v4.8h +; GISEL-NEXT: sshr v2.8h, v2.8h, #15 +; GISEL-NEXT: bif v0.16b, v1.16b, v2.16b ; GISEL-NEXT: ret %1 = udiv <8 x i16> %x, <i16 7, i16 23, i16 25, i16 27, i16 31, i16 47, i16 63, i16 127> ret <8 x i16> %1 @@ -192,39 +192,39 @@ define <16 x i8> @combine_vec_udiv_nonuniform4(<16 x 
i8> %x) { ; SDAG-LABEL: combine_vec_udiv_nonuniform4: ; SDAG: // %bb.0: ; SDAG-NEXT: adrp x8, .LCPI4_0 +; SDAG-NEXT: adrp x9, .LCPI4_3 ; SDAG-NEXT: ldr q1, [x8, :lo12:.LCPI4_0] ; SDAG-NEXT: adrp x8, .LCPI4_1 +; SDAG-NEXT: ldr q3, [x9, :lo12:.LCPI4_3] +; SDAG-NEXT: umull2 v2.8h, v0.16b, v1.16b +; SDAG-NEXT: umull v1.8h, v0.8b, v1.8b +; SDAG-NEXT: and v0.16b, v0.16b, v3.16b +; SDAG-NEXT: uzp2 v1.16b, v1.16b, v2.16b ; SDAG-NEXT: ldr q2, [x8, :lo12:.LCPI4_1] ; SDAG-NEXT: adrp x8, .LCPI4_2 -; SDAG-NEXT: ldr q3, [x8, :lo12:.LCPI4_2] -; SDAG-NEXT: adrp x8, .LCPI4_3 -; SDAG-NEXT: ldr q4, [x8, :lo12:.LCPI4_3] -; SDAG-NEXT: umull2 v5.8h, v0.16b, v1.16b -; SDAG-NEXT: umull v1.8h, v0.8b, v1.8b -; SDAG-NEXT: uzp2 v1.16b, v1.16b, v5.16b ; SDAG-NEXT: ushl v1.16b, v1.16b, v2.16b -; SDAG-NEXT: and v1.16b, v1.16b, v3.16b -; SDAG-NEXT: and v0.16b, v0.16b, v4.16b +; SDAG-NEXT: ldr q2, [x8, :lo12:.LCPI4_2] +; SDAG-NEXT: and v1.16b, v1.16b, v2.16b ; SDAG-NEXT: orr v0.16b, v0.16b, v1.16b ; SDAG-NEXT: ret ; ; GISEL-LABEL: combine_vec_udiv_nonuniform4: ; GISEL: // %bb.0: ; GISEL-NEXT: adrp x8, .LCPI4_3 +; GISEL-NEXT: adrp x9, .LCPI4_2 +; GISEL-NEXT: adrp x10, .LCPI4_1 ; GISEL-NEXT: ldr q1, [x8, :lo12:.LCPI4_3] ; GISEL-NEXT: adrp x8, .LCPI4_0 -; GISEL-NEXT: ldr q2, [x8, :lo12:.LCPI4_0] -; GISEL-NEXT: adrp x8, .LCPI4_2 -; GISEL-NEXT: ldr q3, [x8, :lo12:.LCPI4_2] -; GISEL-NEXT: adrp x8, .LCPI4_1 -; GISEL-NEXT: ldr q4, [x8, :lo12:.LCPI4_1] -; GISEL-NEXT: cmeq v1.16b, v1.16b, v2.16b -; GISEL-NEXT: umull2 v2.8h, v0.16b, v3.16b -; GISEL-NEXT: umull v3.8h, v0.8b, v3.8b -; GISEL-NEXT: neg v4.16b, v4.16b -; GISEL-NEXT: uzp2 v2.16b, v3.16b, v2.16b +; GISEL-NEXT: ldr q2, [x9, :lo12:.LCPI4_2] +; GISEL-NEXT: ldr q3, [x10, :lo12:.LCPI4_1] +; GISEL-NEXT: ldr q4, [x8, :lo12:.LCPI4_0] +; GISEL-NEXT: umull2 v5.8h, v0.16b, v2.16b +; GISEL-NEXT: umull v2.8h, v0.8b, v2.8b +; GISEL-NEXT: cmeq v1.16b, v1.16b, v4.16b +; GISEL-NEXT: neg v3.16b, v3.16b +; GISEL-NEXT: uzp2 v2.16b, v2.16b, v5.16b ; GISEL-NEXT: shl v1.16b, v1.16b, #7 -; GISEL-NEXT: ushl v2.16b, v2.16b, v4.16b +; GISEL-NEXT: ushl v2.16b, v2.16b, v3.16b ; GISEL-NEXT: sshr v1.16b, v1.16b, #7 ; GISEL-NEXT: bif v0.16b, v2.16b, v1.16b ; GISEL-NEXT: ret @@ -236,55 +236,55 @@ define <8 x i16> @pr38477(<8 x i16> %a0) { </cut>