This is an automated email from the git hooks/post-receive script.
unknown user pushed a change to branch release/2.33/master in repository glibc.
      from  5eddc29c92  S390: Add new s390 platform z16.
       new  374d54d0a0  x86: Adding an upper bound for Enhanced REP MOVSB.
       new  f1e050f6c4  x86-64: Refactor and improve performance of strchr-avx2.S
       new  e2afbc1ed8  x86: Update large memcpy case in memmove-vec-unaligned-erms.S
       new  2763627abe  x86-64: Require BMI2 for strchr-avx2.S
       new  d130da85f3  x86: Optimize less_vec evex and avx512 memset-vec-unaligned-erms.S
       new  8a1e30d13c  x86: Optimize strchr-avx2.S
       new  21252de9ce  x86: Optimize strchr-evex.S
       new  6d74f1b712  x86: Set rep_movsb_threshold to 2112 on processors with FSRM
       new  abb0dc2f3a  x86: Add EVEX optimized memchr family not safe for RTM
       new  a44a43e998  x86: Optimize memcmp-avx2-movbe.S
       new  903190e981  x86: Optimize memcmp-evex-movbe.S
       new  6903448d93  x86: Improve memset-vec-unaligned-erms.S
       new  de7fd57d75  x86: Improve memmove-vec-unaligned-erms.S
       new  853f83686a  x86: Remove unnecessary overflow check from wcsnlen-sse4_1.S
       new  0834ee7398  x86-64: Add Avoid_Short_Distance_REP_MOVSB
       new  91010c4cf8  x86-64: Use testl to check __x86_string_control
       new  83454fe8d7  x86: Fix __wcsncmp_evex in strcmp-evex.S [BZ# 28755]
       new  c09d92d4f6  x86-64: Optimize load of all bits set into ZMM register [BZ #28252]
       new  e47c117d01  x86: Modify ENTRY in sysdep.h so that p2align can be specified
       new  280fcf7f56  x86: Optimize memcmp-evex-movbe.S for frontend behavior and size
       new  a4e41f7253  x86: Optimize memset-vec-unaligned-erms.S
       new  4df7e006ec  x86: Replace sse2 instructions with avx in memcmp-evex-movbe.S
       new  1a7af4e140  x86-64: Improve EVEX strcmp with masked load
       new  ce78592170  x86-64: Remove Prefer_AVX2_STRCMP
       new  e36de6a3cd  x86-64: Replace movzx with movzbl
       new  5d56ee94f2  x86: Optimize memmove-vec-unaligned-erms.S
       new  d0fac98a30  x86: Double size of ERMS rep_movsb_threshold in dl-cacheinfo.h
       new  3a8cc38d57  x86: Shrink memcmp-sse4.S code size
       new  d77349767a  x86-64: Use notl in EVEX strcmp [BZ #28646]
       new  63fd074112  x86: Don't set Prefer_No_AVX512 for processors with AVX512 a [...]
       new  b002995ea4  x86: Optimize L(less_vec) case in memcmp-evex-movbe.S
The 31 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "adds" were already present in the repository and have only been added to this reference.
Summary of changes:
 string/test-strcmp.c                                 |   28 +
 sysdeps/x86/cacheinfo.h                              |   13 +
 sysdeps/x86/cpu-features.c                           |   20 +-
 sysdeps/x86/cpu-tunables.c                           |    2 -
 sysdeps/x86/dl-cacheinfo.h                           |   27 +-
 sysdeps/x86/dl-tunables.list                         |   26 +-
 .../cpu-features-preferred_feature_index_1.def       |    2 +-
 sysdeps/x86/include/cpu-features.h                   |    2 +
 sysdeps/x86/sysdep.h                                 |   12 +-
 .../x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S   |    7 +-
 .../x86_64/fpu/multiarch/svml_d_log8_core_avx512.S   |    7 +-
 .../x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S   |    7 +-
 .../fpu/multiarch/svml_d_sincos8_core_avx512.S       |    7 +-
 .../fpu/multiarch/svml_s_cosf16_core_avx512.S        |    7 +-
 .../fpu/multiarch/svml_s_expf16_core_avx512.S        |    7 +-
 .../fpu/multiarch/svml_s_logf16_core_avx512.S        |    7 +-
 .../fpu/multiarch/svml_s_powf16_core_avx512.S        |   12 +-
 .../fpu/multiarch/svml_s_sincosf16_core_avx512.S     |    7 +-
 .../fpu/multiarch/svml_s_sinf16_core_avx512.S        |    7 +-
 sysdeps/x86_64/memmove.S                             |    2 +-
 sysdeps/x86_64/memset.S                              |   10 +-
 sysdeps/x86_64/multiarch/Makefile                    |    7 +-
 sysdeps/x86_64/multiarch/ifunc-avx2.h                |    4 +-
 .../multiarch/{ifunc-wcslen.h => ifunc-evex.h}       |   15 +-
 sysdeps/x86_64/multiarch/ifunc-impl-list.c           |   73 +-
 sysdeps/x86_64/multiarch/ifunc-memcmp.h              |    1 +
 sysdeps/x86_64/multiarch/ifunc-memset.h              |    6 +-
 sysdeps/x86_64/multiarch/memchr-evex-rtm.S           |    8 +
 sysdeps/x86_64/multiarch/memchr-evex.S               |  161 +-
 sysdeps/x86_64/multiarch/memchr.c                    |    2 +-
 sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S         |  676 +++---
 sysdeps/x86_64/multiarch/memcmp-evex-movbe.S         |  669 +++---
 sysdeps/x86_64/multiarch/memcmp-sse4.S               | 2267 ++++++--------------
 .../multiarch/memmove-avx-unaligned-erms-rtm.S       |    2 +-
 .../x86_64/multiarch/memmove-avx-unaligned-erms.S    |    2 +-
 .../multiarch/memmove-avx512-unaligned-erms.S        |    2 +-
 .../x86_64/multiarch/memmove-evex-unaligned-erms.S   |    2 +-
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S    |  847 +++++---
 .../x86_64/multiarch/memset-avx2-unaligned-erms.S    |   10 +-
 .../multiarch/memset-avx512-unaligned-erms.S         |   13 +-
 .../x86_64/multiarch/memset-evex-unaligned-erms.S    |   13 +-
 .../x86_64/multiarch/memset-vec-unaligned-erms.S     |  286 ++-
 sysdeps/x86_64/multiarch/rawmemchr-evex-rtm.S        |    3 +
 sysdeps/x86_64/multiarch/rawmemchr.c                 |    2 +-
 sysdeps/x86_64/multiarch/strchr-avx2.S               |  319 +--
 sysdeps/x86_64/multiarch/strchr-evex.S               |  392 ++--
 sysdeps/x86_64/multiarch/strchr.c                    |    4 +-
 sysdeps/x86_64/multiarch/strcmp-evex.S               |  475 ++--
 sysdeps/x86_64/multiarch/strcmp-sse42.S              |    4 +-
 sysdeps/x86_64/multiarch/strcmp.c                    |    3 +-
 sysdeps/x86_64/multiarch/strlen-vec.S                |    7 -
 sysdeps/x86_64/multiarch/strncmp.c                   |    3 +-
 sysdeps/x86_64/multiarch/wmemchr-evex-rtm.S          |    3 +
 sysdeps/x86_64/multiarch/wmemchr.c                   |    2 +-
 sysdeps/x86_64/strcmp.S                              |    4 +-
 55 files changed, 3244 insertions(+), 3262 deletions(-)
 copy sysdeps/x86_64/multiarch/{ifunc-wcslen.h => ifunc-evex.h} (87%)
 create mode 100644 sysdeps/x86_64/multiarch/memchr-evex-rtm.S
 create mode 100644 sysdeps/x86_64/multiarch/rawmemchr-evex-rtm.S
 create mode 100644 sysdeps/x86_64/multiarch/wmemchr-evex-rtm.S