Progress:
* UM-2 [QEMU upstream maintainership]
- Mostly caught up on code review now: have done all the patches that
are targeting 6.0
- Sent out some pullreqs with for-6.0 material
- Wrote a patch to make us emulate the number of PMU counters each
CPU should actually have, rather than always 4 (a quick sketch of
the relevant PMCR field is below)
- Had a go at fixing the M-profile vector table load on reset to
handle aliased memory regions
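For reference (this is just the architectural field, not the patch itself):
the advertised counter count lives in PMCR.N, bits [15:11]. A minimal C
sketch of extracting it, using a made-up register value:

    #include <stdio.h>

    /* PMCR.N (bits [15:11]) advertises how many event counters
       the PMU implements. */
    static unsigned pmu_num_counters(unsigned pmcr)
    {
        return (pmcr >> 11) & 0x1f;
    }

    int main(void)
    {
        unsigned pmcr = 6u << 11;   /* made-up value with N = 6 */
        printf("counters: %u\n", pmu_num_counters(pmcr));
        return 0;
    }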
* QEMU-364 [QEMU support for ARMv8.1-M extensions]
- mps3-an524 and -an547 patch series now in master: this epic is done!
thanks
-- PMM
Folks,
I am pleased to announce the move of libc++ to pre-commit CI. Over the past
few months, we have set up Buildkite jobs on top of the Phabricator
integration built by Mikhail and Christian, and we now run almost all of
the libc++ build bots whenever a Phabricator review is created. The bots
also run when a commit is pushed to the master branch, similarly to the
existing Buildbot setup. You can see the libc++ pipeline in action here:
https://buildkite.com/llvm-project/libcxx-ci.
This is great -- we’ve been waiting to set up pre-commit CI for a long
time, and we’ve seen a giant productivity gain since it came online. I
think everyone who contributes to libc++ benefits greatly now that reviews
trigger CI and improve our confidence in changes.
This change does have an impact on existing build bots that are not owned
by one of the libc++ maintainers. While I transferred the build bots that
we owned (which Eric had set up) to Buildkite, the remaining build bots
will have to be moved to Buildkite by their respective owners. These
build bots are (owners in CC):
libcxx-libcxxabi-x86_64-linux-debian
libcxx-libcxxabi-x86_64-linux-debian-noexceptions
libcxx-libcxxabi-libunwind-x86_64-linux-debian
libcxx-libcxxabi-singlethreaded-x86_64-linux-debian
libcxx-libcxxabi-libunwind-armv7-linux
libcxx-libcxxabi-libunwind-armv8-linux
libcxx-libcxxabi-libunwind-armv7-linux-noexceptions
libcxx-libcxxabi-libunwind-armv8-linux-noexceptions
libcxx-libcxxabi-libunwind-aarch64-linux
libcxx-libcxxabi-libunwind-aarch64-linux-noexceptions
The process of moving these bots over to Buildkite is really easy. Please
take a look at the documentation at
https://libcxx.llvm.org/docs/AddingNewCIJobs.html#addingnewcijobs and
contact me if you need additional help.
To make sure we get the full benefits of pre-commit CI soon, I would like
to put a cutoff date on supporting the old libc++ builders at
http://lab.llvm.org:8011/builders. I would propose that after January 1st
2021 (approx. 1 month from now), the libc++ specific build bots at
lab.llvm.org be removed in favor of the Buildkite ones. If you currently
own a bot, please add an equivalent Buildkite bot by that cutoff date so
that your configuration stays supported, or let me know if you need an
extension.
Furthermore, given how easy it is to create new CI jobs with this
infrastructure, we will consider any libc++ configuration not covered by a
pre-commit bot as not explicitly supported. It doesn’t mean that such
configurations won’t work -- it just means that we won’t be making bold
claims about supporting configurations we’re unable to actually test. So if
you care about a configuration, please open a discussion and let’s see how
we can make sure it's tested properly!
I am thrilled to be moving into the pre-commit CI era. The benefits we see
so far are huge, and we're loving it.
Thanks,
Louis
PS: This has nothing to do with a potential move or non-move to GitHub. The
current pre-commit CI works with Phabricator, and would work with GitHub if
we decided to switch. Let’s try to keep those discussions separate :-).
PPS: We’re still aiming to support non-libc++-specific Buildbots. For
example, if something in libc++ breaks a Clang bot, we’ll still be
monitoring that. I’m just trying to move the libc++-specific configurations
to pre-commit.
[VIRT-349 # QEMU SVE2 Support ]
I have a working FVP SVE2 install!
I used a newer FVP version (11.13.36) than the one I tried last time
(11.13.21, on Jan 27).
I used a debian-testing snapshot from 1-MAR, which bundles linux 5.10 and
has the fvp revc dtb installed.
I used
https://git.linaro.org/landing-teams/working/arm/arm-reference-platforms.gi…
(which is linked from one of the howtos that Peter forwarded) and chose the
"pre-built uefi" version. I need to report a bug against this build script --
the "build from source" option does not work on a system that has only
python3 and no python2.
I've rebuilt all of the risu trace files for vq=4 (512-bit).
I'm now refreshing my qemu branch to test.
[UM-61 # TCG Core Maintainership ]
PR for patch queue; aa64 fixes, tci fixes, tb_lookup cleanups.
[UM-2 # Upstream Maintainership ]
Patch review, mostly v8.1m board stuff.
r~
Progress (short week, 2 days):
* Some time spent on setting up new work laptop
* Started in on the pile of code review that had built up
while I was on holiday... made a bit of progress and sent out
one pullreq. Queue length now: 16 series.
thanks
-- PMM
Hi all,
Does anybody know what '.....isra.0' means in objects compiled with GCC 10.2?
I just noticed this issue when using bcc/eBPF tools. I submitted the details
at
* https://github.com/iovisor/bcc/issues/3293
Simply put, when building a Linux kernel with GCC 10.2, the symbol
'finish_task_switch' becomes 'finish_task_switch.isra.0' in the object (as
reported by 'nm').
Because a lot of kernel tracers (such as bcc) use 'finish_task_switch' as
the probe point, this change in the compiled output can make all such
tools fail.
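From what I have gathered so far, the '.isra.N' suffix marks a clone created
by GCC's IPA-SRA pass ("interprocedural scalar replacement of aggregates"):
when GCC can see a function's callers, it may rewrite the function to take
scalars instead of an aggregate or pointer, emit the rewritten body under
'name.isra.N', and drop the original symbol. Below is a minimal sketch of
the kind of code that can produce such a clone (names are hypothetical, and
whether the clone actually appears depends on the GCC version and flags):

    /* Compile with gcc -O2 -c and inspect with nm: "sum" may show up as
       "sum.isra.0" if IPA-SRA rewrites it to take the two ints directly
       instead of the struct pointer. */
    struct pair { int a; int b; };

    static __attribute__((noinline)) int sum(const struct pair *p)
    {
        return p->a + p->b;
    }

    int caller(int x, int y)
    {
        struct pair p = { x, y };
        return sum(&p);
    }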
Thanks.
Best regards,
Guodong Xu
== Progress ==
* GCC upstream validation:
- a couple of regressions to bisect.
- minor testcase fix
- reported a couple of new failures
* GCC
- MVE autovectorization:
- vcmp support mostly complete; a minor update is still needed for FP types (see the example loop after this list).
- working on interleaved vector load/store support
* Misc
- fixed the stm32 benchmarking harness; it's working again.
- submitted patches to reduce the toolchain build time (for
benchmarking we don't need all the multilibs, which take ages to build)
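As mentioned above, here is a minimal example of the kind of loop the vcmp
work targets (a hypothetical function, not one of our testcases). Built
with something like -O2 -march=armv8.1-m.main+mve, the compare-and-select
should become a predicated vcmp feeding a vpsel once the support is
complete:

    /* Element-wise max: the comparison should map onto MVE vcmp,
       with the select done by vpsel under the resulting predicate. */
    void vmax(int *restrict dst, const int *restrict a,
              const int *restrict b, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = (a[i] >= b[i]) ? a[i] : b[i];
    }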
== Next ==
* MVE auto-vectorization/intrinsics improvements
* GCC/cortex-M testing improvements & fixes
* cortex-m benchmarking
Hi
I've been trying to run clang on a Windows on Arm machine, but it keeps trying to use the link.exe located in "Visual studio/..../Host64/arm64", which is (seemingly) an x64 tool; as such it doesn't run, and it crashes the process.
Is there a way to point clang at VS's x86 link.exe? Or is there an arm64 version that clang should be using instead?
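For what it's worth, the two workarounds I plan to try (unverified on
Windows on Arm, and assuming a recent enough clang) are pointing clang at a
specific linker binary with --ld-path=, or sidestepping link.exe entirely
with lld:

    clang --ld-path="C:/<VS install>/bin/Hostx86/arm64/link.exe" test.c
    clang -fuse-ld=lld test.c

(The Hostx86/arm64 location is my guess at where an x86-hosted arm64
link.exe would live; I haven't confirmed that VS ships one.)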
Thanks,
Joel