=== 64 bit atomics
* I got the race in membase down to a futex issue; asking dmart about it
pointed me at a kernel bug that affects recent kernels, for which a fix
went in about a month ago. That was a nasty one!
* I've still got a few bugs left; most are turning out to be timing
races in the test code (e.g. one that times out after 2 seconds when
the code takes around 1.7 seconds, so if anything else gets in it
trips over the line; and another where it did a recv_from on a socket
but only got the start of a message, presumably because the sender
had used multiple sends). It's tricky going because the tests are
written in a mixture of scripting languages (Perl, Python and Ruby,
with a splash of Erlang). So far I've found no bugs in the atomic
code itself.
* I looked at apr and SDL-1.3, both of which use atomics but end up
not using 64-bit atomics; the tendency is for them to ensure they can
do atomics on long and on void*, both of which are 32-bit for us.
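For reference, a minimal sketch of what a genuine 64-bit atomic looks
like at the C level, using the GCC __sync builtins (an illustration,
not code from apr or SDL; depending on GCC version, on ARMv7 this
compiles to an LDREXD/STREXD loop or a call to a kernel helper):

    #include <stdint.h>
    #include <stdio.h>

    /* A 64-bit value, wider than both long and void* on 32-bit ARM. */
    static uint64_t counter;

    int main(void)
    {
        /* On 32-bit ARM, long and void* are both 4 bytes, so an
         * atomics layer built on them never exercises 64-bit paths. */
        printf("sizeof(long)=%zu sizeof(void*)=%zu sizeof(uint64_t)=%zu\n",
               sizeof(long), sizeof(void *), sizeof(uint64_t));

        /* 64-bit compare-and-swap: returns the old value. */
        uint64_t old = __sync_val_compare_and_swap(&counter, 0, 42);
        printf("old=%llu new=%llu\n",
               (unsigned long long)old, (unsigned long long)counter);
        return 0;
    }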
=== String routines
* I've got the Newlib A15 optimised memcpy running in a test harness
at the moment for
comparison.
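The usual shape of such a harness is to time repeated copies of a
fixed buffer; a minimal sketch, assuming the routine under test is
linked in under its own symbol (memcpy_a15 is a hypothetical name,
given a placeholder definition here so the sketch stands alone):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE   (64 * 1024)
    #define ITERATIONS 10000

    static char src[BUF_SIZE], dst[BUF_SIZE];

    /* Placeholder: the real harness would link the newlib A15 routine
     * under some such alias instead of falling back to libc. */
    static void *memcpy_a15(void *d, const void *s, size_t n)
    {
        return memcpy(d, s, n);
    }

    static double time_copy(void *(*fn)(void *, const void *, size_t))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERATIONS; i++)
            fn(dst, src, BUF_SIZE);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("libc memcpy: %f s\n", time_copy(memcpy));
        printf("A15 memcpy:  %f s\n", time_copy(memcpy_a15));
        return 0;
    }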
=== Listening to Connect
* I listened in to a few Connect sessions each day; the first day or
so, three quarters of them were lost to audio systems that didn't
work (I'm especially annoyed at not being able to hear the QEMU for
A15/KVM session and the toolchain support for kernel session). The
Rypple session was rather lost through the lack of any screen share
or slides.
Hello all,
I've been playing around with Linaro and have it working on my
Pandaboard locally. I have a couple of questions about the Linaro
environment; if this is the wrong forum, I'm happy to take it elsewhere.
I see that Linaro makes monthly releases of the hwpacks and images.
How are the packages/binaries in those images created? Are they
cross-compiled, or compiled natively on the target platform? If they
are cross-compiled, how is the environment created?
The reason I ask is that we've been looking at cross-compiling some
packages ourselves, and have been running into issues. So we were
wondering what toolchain the Linaro community uses.
Thanks in advance,
--
Chris Lalancette
Hi,
- Finished rewriting SLP analysis to support operations other than
just unary and binary ones. Committed upstream.
- Implemented cond_expr support in SLP (for libav weight_h264_pixels);
testing it now (a sketch of the kind of code this targets follows
after this list).
- Vectorizer maintenance (test/bug fixes, patch reviews).
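For context, the shape of code the cond_expr work targets is a group
of isomorphic statements containing conditional selects; a
hypothetical C fragment (not the actual weight_h264_pixels source)
where each ?: becomes a COND_EXPR in GIMPLE:

    #include <stdint.h>

    /* Four isomorphic statements per group - the straight-line shape
     * the SLP vectorizer looks for; previously the ?: (COND_EXPR)
     * would have blocked SLP vectorization. */
    void select_block(uint8_t *dst, const int16_t *src, int16_t thresh)
    {
        for (int i = 0; i < 16; i += 4) {
            dst[i + 0] = src[i + 0] > thresh ? 255 : 0;
            dst[i + 1] = src[i + 1] > thresh ? 255 : 0;
            dst[i + 2] = src[i + 2] > thresh ? 255 : 0;
            dst[i + 3] = src[i + 3] > thresh ? 255 : 0;
        }
    }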
Ira
Testing an initial version of the implementation that estimates
register pressure in SMS on the libav micro-benchmarks.
I see a 20% improvement on the mjpegenc micro-benchmark and 11% on
aacsbr-2 with SMS. However, swscale-rgb24ToY_c still has spills in
the final code even though it requires at most 64 VFP_REGS registers
out of the 64 available, so I'm trying to understand the reason for
the spills.
=== Progress ===
* Off for one day during the week for Diwali.
* Connect preparation - wrote down areas to look at during Connect
and tried to plan what we want to cover there.
* Looked at some of the cases with vcond<float> with Ira and helped
frame the blueprint.
* Investigated one of the big performance regressions in the popular
embedded benchmark and looked at why it wasn't being vectorized, only
to realize that it couldn't be. Thanks, Ira. Still don't know why ARM
state is 22% faster than Thumb2 state.
* Looked at the issue with -fPIC where GCSE appears to remove a
label; spent some time on it, but not much progress yet.
=== Plans ===
* Connect next week, and then vacation.
=== Absences ===
* 26 Oct - Diwali
* 31st Oct - 4th Nov - Linaro Summit Orlando - Travel booked -
* 08 Nov - 11 Nov - Vacation booked
* Dec 19 - 31st Dec - Vacation booked
== 64 bit atomics ==
* I've been building and testing membase
* Version 1.7.1.1 source builds OK (after turning off -Werror due to
some of their curious type naming)
* The git version fails to build - it doesn't seem consistent
* 1.7.1.1 passes simple tests, but there are 3 tests in its test
suite that intermittently fail on ARM and seem to be solid on x86.
(There are also some that just need their timeouts increased due to
the relatively slow machine.)
* t/issue_163.t turned out to be a timing race in the test itself,
made worse by being on a relatively slow machine and probably made
worse by the Panda's odd idea of timing. That was reported to them
with a breakdown of it, and upstream has fixed their test. (
http://code.google.com/p/memcached/issues/detail?id=230 )
* t/issue_67.t is proving tougher; once in a while memcached will
lock up during init in thread_init; there is one particular point
where adding a printf will make it work apparently reliably. I've got
one or two ideas, but I need to check my understanding of
pthread_cond_wait first (see the sketch after this list).
* There is an assert I've seen triggered once - not looked at that yet.
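For reference, the canonical pthread_cond_wait pattern to check the
code against (a generic sketch, not memcached's actual thread_init
code): the wait must sit in a predicate loop, and the flag must be
set under the same mutex, otherwise a wakeup can be missed - exactly
the kind of hang that a well-placed printf (changing the timing) can
hide.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool init_done;

    void wait_for_init(void)
    {
        pthread_mutex_lock(&lock);
        while (!init_done)                   /* loop also guards      */
            pthread_cond_wait(&cond, &lock); /* against spurious wakes */
        pthread_mutex_unlock(&lock);
    }

    void signal_init_done(void)
    {
        pthread_mutex_lock(&lock);
        init_done = true;            /* set under the same mutex */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }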
== String routines ==
* While I was off last week, my memchr and strlen were accepted into newlib
* Joseph has responded to my eglibc mail, with a couple of small queries.
== Other ==
* Wrote a more detailed test case for bug 873453 (odd timing
behaviour on panda); it's quite odd - I can get a timing discrepancy
of more than ~80ms, so it's not a clock granularity issue. (A sketch
of the general approach follows after this list.)
* Replicated a QEMU crash for Peter.
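The general shape of that timing test (a simplified sketch, not the
actual test case attached to the bug): sleep for a fixed interval and
compare the requested time against CLOCK_MONOTONIC. Clock granularity
would show up as a small constant error, not discrepancies of 80ms or
more.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const struct timespec req = { 0, 100 * 1000 * 1000 }; /* 100ms */
        for (int i = 0; i < 20; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            nanosleep(&req, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = ((t1.tv_sec - t0.tv_sec)
                         + (t1.tv_nsec - t0.tv_nsec) / 1e9) * 1e3;
            printf("asked for 100.0ms, slept %.1fms\n", ms);
        }
        return 0;
    }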
Dave
Hi,
* finished changing libunwind to be more portable
* tested patchset on ARM and X86_64
* now builds on Android without modifications
(Android.mk, config.h and libunwind-common.h are still required)
* verified that the modified debuggerd still works
* discussed backtracing using libunwind on ARM with Harald from the
BSC (a generic local-backtrace sketch follows after this list)
* they use libunwind in a sampling tool that generates Paraver
tracefiles
* started to upgrade my Linaro Android environment and ran into issues
* need to check:
* why building the toolchain using linaro-build.sh fails
* why repo sync fails due to invalid platform/bionic SHA1
* what happened to LEB-panda.xml
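For reference, local backtracing with the standard libunwind API looks
roughly like this (a generic sketch, not the debuggerd or BSC tool
code):

    #define UNW_LOCAL_ONLY
    #include <libunwind.h>
    #include <stdio.h>

    /* Walk our own stack and print one symbolic frame per line. */
    void print_backtrace(void)
    {
        unw_context_t ctx;
        unw_cursor_t cursor;
        unw_word_t ip, off;
        char name[128];

        unw_getcontext(&ctx);            /* snapshot current registers */
        unw_init_local(&cursor, &ctx);   /* local (same-process) unwind */

        while (unw_step(&cursor) > 0) {
            unw_get_reg(&cursor, UNW_REG_IP, &ip);
            if (unw_get_proc_name(&cursor, name, sizeof(name), &off) == 0)
                printf("  %#lx  %s+%#lx\n", (long)ip, name, (long)off);
            else
                printf("  %#lx  ??\n", (long)ip);
        }
    }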
Regards
Ken
RAG:
Red:
Amber:
Green:
Current Milestones:
|| Milestone || Planned || Estimate || Actual ||
||a15-usermode-support || 2011-11-10 || 2011-11-10 || 2011-10-27 ||
||upstream-omap3-cleanup || 2011-11-10 || 2011-11-10 || ||
Historical Milestones:
||qemu-linaro-2011-07 || 2011-07-21 || 2011-07-21 || 2011-07-21 ||
||qemu-linaro 2011-08 || 2011-08-18 || 2011-08-18 || 2011-08-18 ||
||qemu-linaro 2011-09 || 2011-09-15 || 2011-09-15 || 2011-09-15 ||
||add-omap3-networking || 2011-10-13 || 2011-10-13 || 2011-10-13 ||
||a15-systemmode-planning || 2011-10-13 || 2011-10-13 || 2011-09-22 ||
== a15-usermode-support ==
* A15 instruction support patches committed upstream in time for
upstream's 1.0 release
== upstream-omap3-cleanup ==
* some work on restructuring the omap3 patchset -- it's now basically
in the right order and the last 'touches several different
bits of code' jumbo patch has been split
== other ==
* sent some patches upstream which address the main things I
want to get into qemu 1.0 (PL041 audio support and fixing a
regression in handling multithreaded programs in linux-user mode)
* A15 KVM planning work and other preparation for Linaro Connect
* finally tracked down the qemu-on-ARM memory corruption: we
mmap the code generation buffer at 0x1000000 with MAP_FIXED;
unfortunately this is now in the middle of glibc's heap...
(filed as LP:883133; see the first sketch after this list)
* qemu now has a coroutine implementation which defaults to using
makecontext() if it is present. Unfortunately ARM eglibc provides
an implementation which always returns ENOSYS, which is a bit
tricky to detect with a compile-time configure check (without
breaking cross-compilation support); see the second sketch after
this list.
* these two things (and some other known bugs) mean that QEMU on
ARM hosts is basically broken, and will probably continue to be
since we don't have the spare resource to test and fix bugs
(beyond those which we need to fix for KVM-on-ARM)
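A minimal illustration of the MAP_FIXED hazard (a sketch, not qemu's
actual allocation code): with MAP_FIXED the kernel silently replaces
whatever already lives at the requested address - here, potentially
part of the heap - whereas without it the address is only a hint and
the kernel relocates the mapping if the range is busy.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 16 * 1024 * 1024;
        void *want = (void *)0x1000000;
        int prot = PROT_READ | PROT_WRITE | PROT_EXEC;

        /* Deliberately unsafe, for illustration: if the heap (or
         * anything else) is already mapped at 0x1000000, MAP_FIXED
         * replaces it without any error. */
        void *fixed = mmap(want, len, prot,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        /* Safer alternative: the address is only a hint, so the kernel
         * picks a free range if 0x1000000 is unavailable. */
        void *hinted = mmap(want, len, prot,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        printf("fixed=%p hinted=%p\n", fixed, hinted);
        return 0;
    }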
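And one plausible way of detecting the eglibc stubs at runtime rather
than configure time (an assumed approach, not necessarily what qemu
will do): the stub getcontext() fails with ENOSYS, which a startup
probe can see even though the symbols link fine and a configure
run-test would break cross-compilation.

    #include <errno.h>
    #include <stdio.h>
    #include <ucontext.h>

    /* Returns 0 if the ucontext functions are ENOSYS stubs (as on
     * ARM eglibc), 1 if they appear to work. */
    static int have_working_ucontext(void)
    {
        ucontext_t uc;
        errno = 0;
        if (getcontext(&uc) == -1 && errno == ENOSYS)
            return 0;   /* stub implementation - use a fallback */
        return 1;
    }

    int main(void)
    {
        printf("ucontext usable: %s\n",
               have_working_ucontext() ? "yes" : "no");
        return 0;
    }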