Introduce support for the N32 and N64 ABIs. As preparation, the entrypoint is first simplified significantly. Thanks to Maciej for all the valuable information.
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
---
Changes in v2:
- Clean up entrypoint first
- Annotate #endifs
- Link to v1: https://lore.kernel.org/r/20250212-nolibc-mips-n32-v1-1-6892e58d1321@weisssc...
---
Thomas Weißschuh (4):
      tools/nolibc: MIPS: drop $gp setup
      tools/nolibc: MIPS: drop manual stack pointer alignment
      tools/nolibc: MIPS: drop noreorder option
      tools/nolibc: MIPS: add support for N64 and N32 ABIs
 tools/include/nolibc/arch-mips.h            | 117 +++++++++++++++++++++-------
 tools/testing/selftests/nolibc/Makefile     |  28 ++++++-
 tools/testing/selftests/nolibc/run-tests.sh |   2 +-
 3 files changed, 118 insertions(+), 29 deletions(-)
---
base-commit: 9c812b01f13d37410ea103e00bc47e5e0f6d2bad
change-id: 20231105-nolibc-mips-n32-234901bd910d
Best regards,
The setup of the global pointer "$gp" register was necessary when the C entrypoint was called through "jal <symbol>". However, since commit 0daf8c86a451 ("tools/nolibc: mips: load current function to $t9"), "jalr" is used instead, which does not require "$gp".
Remove the unnecessary $gp setup, simplifying the code and paving the way for further cleanups.
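For readability, here is the change condensed from the hunk below (the two framing comments are mine, the instructions and their comments are quoted from the file):

	/* dropped by this patch: the $gp preparation left over from the "jal" days */
	"bal 1f\n"                 /* prime $ra for .cpload */
	"nop\n"
	"1:\n"
	".cpload $ra\n"
	"addiu $sp, $sp, -4\n"     /* space for .cprestore to store $gp */
	".cprestore 0\n"

	/* kept, as introduced by commit 0daf8c86a451: call through $t9, no $gp involved */
	"lui $t9, %hi(_start_c)\n" /* ABI requires current function address in $t9 */
	"ori $t9, %lo(_start_c)\n"
	"jalr $t9\n"               /* transfer to c runtime */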
Suggested-by: Maciej W. Rozycki <macro@orcam.me.uk>
Link: https://lore.kernel.org/lkml/alpine.DEB.2.21.2502172208570.65342@angie.orcam...
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
---
 tools/include/nolibc/arch-mips.h | 6 ------
 1 file changed, 6 deletions(-)
diff --git a/tools/include/nolibc/arch-mips.h b/tools/include/nolibc/arch-mips.h
index 753a8ed2cf695f0b5eac4b5e4d317fdb383ebf93..6d023d9f120301b2d6837c923c142ab2cf87ae5a 100644
--- a/tools/include/nolibc/arch-mips.h
+++ b/tools/include/nolibc/arch-mips.h
@@ -185,13 +185,7 @@ void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector __
 	__asm__ volatile (
 		".set push\n"
 		".set noreorder\n"
-		"bal 1f\n"              /* prime $ra for .cpload */
-		"nop\n"
-		"1:\n"
-		".cpload $ra\n"
 		"move $a0, $sp\n"       /* save stack pointer to $a0, as arg1 of _start_c */
-		"addiu $sp, $sp, -4\n"  /* space for .cprestore to store $gp */
-		".cprestore 0\n"
 		"li $t0, -8\n"
 		"and $sp, $sp, $t0\n"   /* $sp must be 8-byte aligned */
 		"addiu $sp, $sp, -16\n" /* the callee expects to save a0..a3 there */
The stack pointer is already aligned by the kernel to a multiple of 16, and all earlier modifications of the register have been removed from the entrypoint, so the manual realignment is unnecessary.
Drop the manual alignment.
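Purely as an illustration of the invariant being relied on (not part of the patch; the helper name is made up), the guarantee can be written down as:

	#include <stdint.h>

	/* The initial $sp handed over by the kernel is 16-byte aligned, which
	 * already covers the 8-byte alignment the dropped instructions enforced. */
	static inline int initial_sp_is_aligned(const void *sp)
	{
		return ((uintptr_t)sp & 0xf) == 0;
	}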
Suggested-by: Maciej W. Rozycki <macro@orcam.me.uk>
Link: https://lore.kernel.org/lkml/alpine.DEB.2.21.2502161523290.65342@angie.orcam...
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
---
 tools/include/nolibc/arch-mips.h | 2 --
 1 file changed, 2 deletions(-)
diff --git a/tools/include/nolibc/arch-mips.h b/tools/include/nolibc/arch-mips.h
index 6d023d9f120301b2d6837c923c142ab2cf87ae5a..0776de7574b451aeb34531bc4696c7bd9b694268 100644
--- a/tools/include/nolibc/arch-mips.h
+++ b/tools/include/nolibc/arch-mips.h
@@ -186,8 +186,6 @@ void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector __
 		".set push\n"
 		".set noreorder\n"
 		"move $a0, $sp\n"       /* save stack pointer to $a0, as arg1 of _start_c */
-		"li $t0, -8\n"
-		"and $sp, $sp, $t0\n"   /* $sp must be 8-byte aligned */
 		"addiu $sp, $sp, -16\n" /* the callee expects to save a0..a3 there */
 		"lui $t9, %hi(_start_c)\n" /* ABI requires current function address in $t9 */
 		"ori $t9, %lo(_start_c)\n"
No statements remain in the assembly code that would require ".set noreorder".
Remove the option.
This also allows removal of the manual "nop" instruction in the delay slot.
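To spell out the assembler behaviour in question (illustration only, mirroring the entrypoint's inline asm style; not part of the patch): under ".set noreorder" the assembler emits instructions exactly as written, so the delay slot after "jalr" must be filled by hand, whereas in the default reorder mode the assembler schedules the delay slot itself.

	/* with ".set noreorder": the delay slot is the programmer's problem */
	__asm__ volatile (
		".set push\n"
		".set noreorder\n"
		"jalr $t9\n"
		" nop\n"                /* delay slot */
		".set pop\n"
	);

	/* default (reorder): the assembler fills the delay slot, e.g. with a nop */
	__asm__ volatile (
		"jalr $t9\n"
	);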
Suggested-by: Maciej W. Rozycki <macro@orcam.me.uk>
Link: https://lore.kernel.org/lkml/alpine.DEB.2.21.2502172208570.65342@angie.orcam...
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
---
 tools/include/nolibc/arch-mips.h | 4 ----
 1 file changed, 4 deletions(-)
diff --git a/tools/include/nolibc/arch-mips.h b/tools/include/nolibc/arch-mips.h
index 0776de7574b451aeb34531bc4696c7bd9b694268..4f0b969f66af610d3c986f3ff0e1c3f3a0be16b5 100644
--- a/tools/include/nolibc/arch-mips.h
+++ b/tools/include/nolibc/arch-mips.h
@@ -183,15 +183,11 @@ void __start(void);
 void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector __start(void)
 {
 	__asm__ volatile (
-		".set push\n"
-		".set noreorder\n"
 		"move $a0, $sp\n"       /* save stack pointer to $a0, as arg1 of _start_c */
 		"addiu $sp, $sp, -16\n" /* the callee expects to save a0..a3 there */
 		"lui $t9, %hi(_start_c)\n" /* ABI requires current function address in $t9 */
 		"ori $t9, %lo(_start_c)\n"
 		"jalr $t9\n"            /* transfer to c runtime */
-		" nop\n"                /* delayed slot */
-		".set pop\n"
 	);
 	__nolibc_entrypoint_epilogue();
 }
Add support for the MIPS 64-bit N64 ABI and the ILP32 N32 ABI.

In addition to different byte orders and ABIs, there are also different releases of the MIPS architecture. To avoid blowing up the test matrix, only add a subset of all possible test combinations.
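The new configurations should be exercisable like the existing MIPS ones (assuming the usual nolibc selftest targets), e.g. "make run XARCH=mipsn32le" or "make run-user XARCH=mips64be" from tools/testing/selftests/nolibc/ with a suitable CROSS_COMPILE prefix; the XARCH values match the Makefile entries below, and toolchain setup is not covered here.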
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
---
 tools/include/nolibc/arch-mips.h            | 105 ++++++++++++++++++++++----
 tools/testing/selftests/nolibc/Makefile     |  28 +++++++-
 tools/testing/selftests/nolibc/run-tests.sh |   2 +-
 3 files changed, 118 insertions(+), 17 deletions(-)
diff --git a/tools/include/nolibc/arch-mips.h b/tools/include/nolibc/arch-mips.h
index 4f0b969f66af610d3c986f3ff0e1c3f3a0be16b5..0cbac63b249adf80ecbf70ba074f9ea5d56d9278 100644
--- a/tools/include/nolibc/arch-mips.h
+++ b/tools/include/nolibc/arch-mips.h
@@ -10,7 +10,7 @@
 #include "compiler.h"
 #include "crt.h"
 
-#if !defined(_ABIO32)
+#if !defined(_ABIO32) && !defined(_ABIN32) && !defined(_ABI64)
 #error Unsupported MIPS ABI
 #endif
 
@@ -32,11 +32,32 @@
  * - the arguments are cast to long and assigned into the target registers
  *   which are then simply passed as registers to the asm code, so that we
  *   don't have to experience issues with register constraints.
+ *
+ * Syscalls for MIPS ABI N32, same as ABI O32 with the following differences :
+ *   - arguments are in a0, a1, a2, a3, t0, t1, t2, t3.
+ *     t0..t3 are also known as a4..a7.
+ *   - stack is 16-byte aligned
  */
 
+#if defined(_ABIO32)
+
 #define _NOLIBC_SYSCALL_CLOBBERLIST \
 	"memory", "cc", "at", "v1", "hi", "lo", \
 	"t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8", "t9"
+#define _NOLIBC_SYSCALL_STACK_RESERVE "addiu $sp, $sp, -32\n"
+#define _NOLIBC_SYSCALL_STACK_UNRESERVE "addiu $sp, $sp, 32\n"
+
+#else /* _ABIN32 || _ABI64 */
+
+/* binutils, GCC and clang disagree about register aliases, use numbers instead. */
+#define _NOLIBC_SYSCALL_CLOBBERLIST \
+	"memory", "cc", "at", "v1", \
+	"10", "11", "12", "13", "14", "15", "24", "25"
+
+#define _NOLIBC_SYSCALL_STACK_RESERVE
+#define _NOLIBC_SYSCALL_STACK_UNRESERVE
+
+#endif /* _ABIO32 */
 
 #define my_syscall0(num) \
 ({ \
@@ -44,9 +65,9 @@
 	register long _arg4 __asm__ ("a3"); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
 		: "=r"(_num), "=r"(_arg4) \
 		: "r"(_num) \
 		: _NOLIBC_SYSCALL_CLOBBERLIST \
@@ -61,9 +82,9 @@
 	register long _arg4 __asm__ ("a3"); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
 		: "=r"(_num), "=r"(_arg4) \
 		: "0"(_num), \
 		  "r"(_arg1) \
@@ -80,9 +101,9 @@
 	register long _arg4 __asm__ ("a3"); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
 		: "=r"(_num), "=r"(_arg4) \
 		: "0"(_num), \
 		  "r"(_arg1), "r"(_arg2) \
@@ -100,9 +121,9 @@
 	register long _arg4 __asm__ ("a3"); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
 		: "=r"(_num), "=r"(_arg4) \
 		: "0"(_num), \
 		  "r"(_arg1), "r"(_arg2), "r"(_arg3) \
@@ -120,9 +141,9 @@
 	register long _arg4 __asm__ ("a3") = (long)(arg4); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
 		: "=r" (_num), "=r"(_arg4) \
 		: "0"(_num), \
 		  "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4) \
@@ -131,6 +152,8 @@
 	_arg4 ? -_num : _num; \
 })
 
+#if defined(_ABIO32)
+
 #define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
 ({ \
 	register long _num __asm__ ("v0") = (num); \
@@ -141,10 +164,10 @@
 	register long _arg5 = (long)(arg5); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"sw %7, 16($sp)\n" \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
 		: "=r" (_num), "=r"(_arg4) \
 		: "0"(_num), \
 		  "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5) \
@@ -164,11 +187,53 @@
 	register long _arg6 = (long)(arg6); \
 	\
 	__asm__ volatile ( \
-		"addiu $sp, $sp, -32\n" \
+		_NOLIBC_SYSCALL_STACK_RESERVE \
 		"sw %7, 16($sp)\n" \
 		"sw %8, 20($sp)\n" \
 		"syscall\n" \
-		"addiu $sp, $sp, 32\n" \
+		_NOLIBC_SYSCALL_STACK_UNRESERVE \
+		: "=r" (_num), "=r"(_arg4) \
+		: "0"(_num), \
+		  "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
+		  "r"(_arg6) \
+		: _NOLIBC_SYSCALL_CLOBBERLIST \
+	); \
+	_arg4 ? -_num : _num; \
+})
+
+#else /* _ABIN32 || _ABI64 */
+
+#define my_syscall5(num, arg1, arg2, arg3, arg4, arg5) \
+({ \
+	register long _num __asm__ ("v0") = (num); \
+	register long _arg1 __asm__ ("$4") = (long)(arg1); \
+	register long _arg2 __asm__ ("$5") = (long)(arg2); \
+	register long _arg3 __asm__ ("$6") = (long)(arg3); \
+	register long _arg4 __asm__ ("$7") = (long)(arg4); \
+	register long _arg5 __asm__ ("$8") = (long)(arg5); \
+	\
+	__asm__ volatile ( \
+		"syscall\n" \
+		: "=r" (_num), "=r"(_arg4) \
+		: "0"(_num), \
+		  "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5) \
+		: _NOLIBC_SYSCALL_CLOBBERLIST \
+	); \
+	_arg4 ? -_num : _num; \
+})
+
+#define my_syscall6(num, arg1, arg2, arg3, arg4, arg5, arg6) \
+({ \
+	register long _num __asm__ ("v0") = (num); \
+	register long _arg1 __asm__ ("$4") = (long)(arg1); \
+	register long _arg2 __asm__ ("$5") = (long)(arg2); \
+	register long _arg3 __asm__ ("$6") = (long)(arg3); \
+	register long _arg4 __asm__ ("$7") = (long)(arg4); \
+	register long _arg5 __asm__ ("$8") = (long)(arg5); \
+	register long _arg6 __asm__ ("$9") = (long)(arg6); \
+	\
+	__asm__ volatile ( \
+		"syscall\n" \
 		: "=r" (_num), "=r"(_arg4) \
 		: "0"(_num), \
 		  "r"(_arg1), "r"(_arg2), "r"(_arg3), "r"(_arg4), "r"(_arg5), \
@@ -178,15 +243,25 @@
 	_arg4 ? -_num : _num; \
 })
 
+#endif /* _ABIO32 */
+
 /* startup code, note that it's called __start on MIPS */
 void __start(void);
 void __attribute__((weak, noreturn)) __nolibc_entrypoint __no_stack_protector __start(void)
 {
 	__asm__ volatile (
 		"move $a0, $sp\n"       /* save stack pointer to $a0, as arg1 of _start_c */
+#if defined(_ABIO32)
 		"addiu $sp, $sp, -16\n" /* the callee expects to save a0..a3 there */
+#endif /* _ABIO32 */
 		"lui $t9, %hi(_start_c)\n" /* ABI requires current function address in $t9 */
 		"ori $t9, %lo(_start_c)\n"
+#if defined(_ABI64)
+		"lui $t0, %highest(_start_c)\n"
+		"ori $t0, %higher(_start_c)\n"
+		"dsll $t0, 0x20\n"
+		"or $t9, $t0\n"
+#endif /* _ABI64 */
 		"jalr $t9\n"            /* transfer to c runtime */
 	);
 	__nolibc_entrypoint_epilogue();
diff --git a/tools/testing/selftests/nolibc/Makefile b/tools/testing/selftests/nolibc/Makefile
index 14fc8c7e7c3067efddf0f729890fb78df731efb3..7db2a7b1c94ff79cd9410c66205aa1dc5fdcb4f8 100644
--- a/tools/testing/selftests/nolibc/Makefile
+++ b/tools/testing/selftests/nolibc/Makefile
@@ -52,6 +52,10 @@ ARCH_ppc64 = powerpc
 ARCH_ppc64le = powerpc
 ARCH_mips32le = mips
 ARCH_mips32be = mips
+ARCH_mipsn32le = mips
+ARCH_mipsn32be = mips
+ARCH_mips64le = mips
+ARCH_mips64be = mips
 ARCH_riscv32 = riscv
 ARCH_riscv64 = riscv
 ARCH_s390x = s390
@@ -65,6 +69,10 @@ IMAGE_arm64 = arch/arm64/boot/Image
 IMAGE_arm = arch/arm/boot/zImage
 IMAGE_mips32le = vmlinuz
 IMAGE_mips32be = vmlinuz
+IMAGE_mipsn32le = vmlinuz
+IMAGE_mipsn32be = vmlinuz
+IMAGE_mips64le = vmlinuz
+IMAGE_mips64be = vmlinuz
 IMAGE_ppc = vmlinux
 IMAGE_ppc64 = vmlinux
 IMAGE_ppc64le = arch/powerpc/boot/zImage
@@ -85,6 +93,10 @@ DEFCONFIG_arm64 = defconfig
 DEFCONFIG_arm = multi_v7_defconfig
 DEFCONFIG_mips32le = malta_defconfig
 DEFCONFIG_mips32be = malta_defconfig generic/eb.config
+DEFCONFIG_mipsn32le = malta_defconfig generic/64r2.config
+DEFCONFIG_mipsn32be = malta_defconfig generic/64r6.config generic/eb.config
+DEFCONFIG_mips64le = malta_defconfig generic/64r6.config
+DEFCONFIG_mips64be = malta_defconfig generic/64r2.config generic/eb.config
 DEFCONFIG_ppc = pmac32_defconfig
 DEFCONFIG_ppc64 = powernv_be_defconfig
 DEFCONFIG_ppc64le = powernv_defconfig
@@ -108,7 +120,11 @@ QEMU_ARCH_x86 = x86_64
 QEMU_ARCH_arm64 = aarch64
 QEMU_ARCH_arm = arm
 QEMU_ARCH_mips32le = mipsel  # works with malta_defconfig
-QEMU_ARCH_mips32be = mips
+QEMU_ARCH_mips32be = mips
+QEMU_ARCH_mipsn32le = mips64el
+QEMU_ARCH_mipsn32be = mips64
+QEMU_ARCH_mips64le = mips64el
+QEMU_ARCH_mips64be = mips64
 QEMU_ARCH_ppc = ppc
 QEMU_ARCH_ppc64 = ppc64
 QEMU_ARCH_ppc64le = ppc64
@@ -121,6 +137,8 @@ QEMU_ARCH_loongarch = loongarch64
 QEMU_ARCH = $(QEMU_ARCH_$(XARCH))
 
 QEMU_ARCH_USER_ppc64le = ppc64le
+QEMU_ARCH_USER_mipsn32le = mipsn32el
+QEMU_ARCH_USER_mipsn32be = mipsn32
 QEMU_ARCH_USER = $(or $(QEMU_ARCH_USER_$(XARCH)),$(QEMU_ARCH_$(XARCH)))
 
 QEMU_BIOS_DIR = /usr/share/edk2/
@@ -138,6 +156,10 @@ QEMU_ARGS_arm64 = -M virt -cpu cortex-a53 -append "panic=-1 $(TEST:%=NOLIBC
 QEMU_ARGS_arm = -M virt -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
 QEMU_ARGS_mips32le = -M malta -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
 QEMU_ARGS_mips32be = -M malta -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
+QEMU_ARGS_mipsn32le = -M malta -cpu 5KEc -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
+QEMU_ARGS_mipsn32be = -M malta -cpu I6400 -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
+QEMU_ARGS_mips64le = -M malta -cpu I6400 -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
+QEMU_ARGS_mips64be = -M malta -cpu 5KEc -append "panic=-1 $(TEST:%=NOLIBC_TEST=%)"
 QEMU_ARGS_ppc = -M g3beige -append "console=ttyS0 panic=-1 $(TEST:%=NOLIBC_TEST=%)"
 QEMU_ARGS_ppc64 = -M powernv -append "console=hvc0 panic=-1 $(TEST:%=NOLIBC_TEST=%)"
 QEMU_ARGS_ppc64le = -M powernv -append "console=hvc0 panic=-1 $(TEST:%=NOLIBC_TEST=%)"
@@ -167,6 +189,10 @@ CFLAGS_s390x = -m64
 CFLAGS_s390 = -m31
 CFLAGS_mips32le = -EL -mabi=32 -fPIC
 CFLAGS_mips32be = -EB -mabi=32
+CFLAGS_mipsn32le = -EL -mabi=n32 -fPIC -march=mips64r2
+CFLAGS_mipsn32be = -EB -mabi=n32 -march=mips64r6
+CFLAGS_mips64le = -EL -mabi=64 -march=mips64r6
+CFLAGS_mips64be = -EB -mabi=64 -march=mips64r2
 CFLAGS_STACKPROTECTOR ?= $(call cc-option,-mstack-protector-guard=global $(call cc-option,-fstack-protector-all))
 CFLAGS ?= -Os -fno-ident -fno-asynchronous-unwind-tables -std=c89 -W -Wall -Wextra \
 	$(call cc-option,-fno-stack-protector) $(call cc-option,-Wmissing-prototypes) \
diff --git a/tools/testing/selftests/nolibc/run-tests.sh b/tools/testing/selftests/nolibc/run-tests.sh
index b5ded13bdda2702bbc3b84f7e5249fe79bca6dc6..dff5c1aac4d333b8264373e67d753dfb55e9c7c0 100755
--- a/tools/testing/selftests/nolibc/run-tests.sh
+++ b/tools/testing/selftests/nolibc/run-tests.sh
@@ -20,7 +20,7 @@ llvm=
 all_archs=(
 	i386 x86_64
 	arm64 arm
-	mips32le mips32be
+	mips32le mips32be mipsn32le mipsn32be mips64le mips64be
 	ppc ppc64 ppc64le
 	riscv32 riscv64
 	s390x s390
Hi Thomas!
On Tue, Feb 25, 2025 at 06:02:34PM +0100, Thomas Weißschuh wrote:
> Introduce support for the N32 and N64 ABIs. As preparation, the entrypoint
> is first simplified significantly. Thanks to Maciej for all the valuable
> information.
>
> Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
>
> Changes in v2:
> - Clean up entrypoint first
> - Annotate #endifs
> - Link to v1: https://lore.kernel.org/r/20250212-nolibc-mips-n32-v1-1-6892e58d1321@weisssc...
OK I tested this series on my glinet (MIPS 24Kc, XARCH=mips32be) and it worked fine, confirming that the stack alignments were not needed and that the cleanup is quite welcome!
Tested-by: Willy Tarreau <w@1wt.eu>
Willy
On Sat, 1 Mar 2025, Willy Tarreau wrote:
> > Introduce support for the N32 and N64 ABIs. As preparation, the entrypoint
> > is first simplified significantly. Thanks to Maciej for all the valuable
> > information.
> >
> > Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
> >
> > Changes in v2:
> > - Clean up entrypoint first
> > - Annotate #endifs
> > - Link to v1: https://lore.kernel.org/r/20250212-nolibc-mips-n32-v1-1-6892e58d1321@weisssc...
>
> OK I tested this series on my glinet (MIPS 24Kc, XARCH=mips32be) and it
> worked fine, confirming that the stack alignments were not needed and that
> the cleanup is quite welcome!
I do hope it can wait two weeks until I'm back from my holiday. I mean to double-check the code visually and verify it with my R3000 and R4000 hardware (the latter for n64/n32 too), both of which are less forgiving when it comes to instruction scheduling (I can check with a 74Kf too).
Maciej
On 2025-03-01 15:47:52+0000, Maciej W. Rozycki wrote:
> On Sat, 1 Mar 2025, Willy Tarreau wrote:
>
> > > Introduce support for the N32 and N64 ABIs. As preparation, the
> > > entrypoint is first simplified significantly. Thanks to Maciej for all
> > > the valuable information.
> > >
> > > Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
> > >
> > > Changes in v2:
> > > - Clean up entrypoint first
> > > - Annotate #endifs
> > > - Link to v1: https://lore.kernel.org/r/20250212-nolibc-mips-n32-v1-1-6892e58d1321@weisssc...
> >
> > OK I tested this series on my glinet (MIPS 24Kc, XARCH=mips32be) and it
> > worked fine, confirming that the stack alignments were not needed and that
> > the cleanup is quite welcome!
>
> I do hope it can wait two weeks until I'm back from my holiday.

Absolutely.

> I mean to double-check the code visually and verify it with my R3000 and
> R4000 hardware (the latter for n64/n32 too), both of which are less
> forgiving when it comes to instruction scheduling (I can check with a 74Kf
> too).
Your testing and feedback are very valuable; I'm happy to wait for them.
Thanks,
Thomas
On Sat, Mar 01, 2025 at 03:47:52PM +0000, Maciej W. Rozycki wrote:
> On Sat, 1 Mar 2025, Willy Tarreau wrote:
>
> > > Introduce support for the N32 and N64 ABIs. As preparation, the
> > > entrypoint is first simplified significantly. Thanks to Maciej for all
> > > the valuable information.
> > >
> > > Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
> > >
> > > Changes in v2:
> > > - Clean up entrypoint first
> > > - Annotate #endifs
> > > - Link to v1: https://lore.kernel.org/r/20250212-nolibc-mips-n32-v1-1-6892e58d1321@weisssc...
> >
> > OK I tested this series on my glinet (MIPS 24Kc, XARCH=mips32be) and it
> > worked fine, confirming that the stack alignments were not needed and that
> > the cleanup is quite welcome!
>
> I do hope it can wait two weeks until I'm back from my holiday. I mean to
> double-check the code visually and verify it with my R3000 and R4000
> hardware (the latter for n64/n32 too), both of which are less forgiving
> when it comes to instruction scheduling (I can check with a 74Kf too).
Oh that would be great, thank you Maciej!
Willy