Hi, sorry for the attention-grabbing subject line. This is a quick
brain dump based on my own observations of, and battles with, master
images last week.
1) Unless we use external USB/Ethernet adapters, cloning a master image
clones the MAC address as well. This has serious consequences, and I'm
100% sure that's why lava-test had to be switched to the random UUID
mode. This problem applies to the master image mode. In the test image
the software can do anything, so we may run with a random MAC or with
the MAC that the master image's boot loader set (we should check that).
Since making master images is a mess, unless it becomes automated I
will not be convinced that people just know how to make them properly
and are not simply copying from someone. There is no reproducible
master image creation process that ensures two people with the same
board can run a single test in a reproducible way! (different starting
rootfs/bootloader/package selection/random mistakes)
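As an aside, a test image could side-step the cloned-MAC problem by
assigning itself a locally administered address on boot. A minimal
sketch (the helper name is mine, not existing LAVA code):

```python
import random

def random_locally_administered_mac():
    """Return a random MAC with the locally-administered bit set and
    the multicast bit cleared, so it cannot collide with the
    vendor-assigned address that a cloned master image carries."""
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join("%02x" % b for b in [first] + rest)
```

The test image would then apply the result with something like
`ip link set eth0 address <mac>` before bringing networking up.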
2) Running code via serial on the master image is a mess. It is very
fragile. We need an agent on the board instead of a random master
image plus a serial shell. The agent would expose board identity,
capabilities and standard APIs to LAVA (notably to the dispatcher).
The same API, if done sensibly, would work for both software emulators
and hardware boards; the agent API for a software emulator can do
different things under the hood. The dispatcher should be built on the
agent API instead of ramming the serial line.
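To make the agent idea concrete, here is a minimal sketch of what such
an API could look like; every name in it is hypothetical, invented for
illustration, not taken from any existing LAVA code:

```python
import json

class BoardAgent:
    """Hypothetical agent living on the board (or wrapping an
    emulator); the dispatcher talks to this instead of a serial shell."""

    def __init__(self, identity, capabilities):
        self.identity = identity          # e.g. board name, type
        self.capabilities = capabilities  # e.g. supported actions

    def describe(self):
        """What the dispatcher would query instead of probing a prompt."""
        return {"identity": self.identity,
                "capabilities": self.capabilities}

    def run(self, command):
        """Run a test command; an emulator-backed agent could implement
        this differently while keeping the same API."""
        raise NotImplementedError

# The dispatcher would exchange structured messages (e.g. JSON) with
# the agent rather than driving a serial prompt:
agent = BoardAgent({"board": "panda01", "type": "panda"},
                   ["deploy-sd", "reboot", "run-shell"])
message = json.dumps(agent.describe())
```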
3) The master image, as we know it today, should boot remotely. The
boot loader can stay on the board until we can push it over USB. The
only thing that absolutely has to stay on the card is the LAVA board
identity file, which would be generated from the web UI. There is no
reason to keep the rootfs/kernel/initrd there. This means that a
single small card can fit all tests as well. It also means we can
reset the master image (currently it is writeable by the board and can
be corrupted) before booting, to ensure consistent behaviour. I did
some work on that and managed to boot a Panda over NFS. Ideally I want
to boot over NBD (network block device), which is much faster; with a
proper "master image" init script we can expose a single read-only
network block device to _all_ the boards.
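To illustrate the single read-only export, here is a sketch that emits
an nbd-server style export stanza. The paths are made up, and the
INI-style layout assumes nbd-server's config file format; treat it as
an illustration, not tested LAVA code:

```python
def nbd_export_config(name, image_path):
    """Build a read-only nbd-server export stanza for one shared
    master image (assumed nbd-server INI config format)."""
    return (
        "[generic]\n"
        "\n"
        "[%s]\n" % name +
        "exportname = %s\n" % image_path +
        "readonly = true\n"
    )

# One export, consumed read-only by every board:
config = nbd_export_config("master", "/srv/lava/master.img")
```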
4) With an agent on each board and an identity file on the SD card,
LAVA will know if cloning happened. We could do dynamic board
detection (unplug the board -> it goes away, plug it back in -> it
shows up). We could move a board from system to system and have
zero-config transitions.
5) The dispatcher should drop all its configuration files. Sure, that
made sense 12 months ago when the idea was to run it standalone. Now
all of that configuration should live in the database and be provided
by the scheduler to the dispatcher as a big serialized argument (or a
file descriptor, or a temporary file on disk). Setting up the
dispatcher for a new instance is a pain, and unless you can copy stuff
from the validation server and ask everyone around for help it's very
hard to get right. If master images could be constructed
programmatically, and with an agent on each "master image", LAVA would
get that configuration for free.
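The handoff could be as simple as the following sketch: the scheduler
owns the configuration (pulled from the database) and passes it to the
dispatcher as a temporary JSON file. All key names here are
illustrative, not the real LAVA schema:

```python
import json
import tempfile

# Scheduler side: serialize the board config it holds in the database.
board_config = {
    "hostname": "panda01",
    "device_type": "panda",
    "connection": {"serial": "tcp://lab-serial:7001"},
}

with tempfile.NamedTemporaryFile("w", suffix=".json",
                                 delete=False) as f:
    json.dump(board_config, f)
    config_path = f.name

# The scheduler would then invoke something like:
#   lava-dispatch --config <config_path> job.json
# and the dispatcher side reduces to:
with open(config_path) as f:
    received = json.load(f)
```

No per-machine config files to copy around; a fresh instance gets
everything from the scheduler.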
6) We should drop conmux. Since in the lab we already have TCP/IP
sockets for the serial lines, we could just provide my example
serial->tcp script as a lava-serial service that people with directly
attached boards would use. We could add a similar lava-power service
if that made sense. The lava-serial service could be started as an
instance for each USB/serial adapter plugged in, if we really wanted
(hello, upstart!). The lava-power service would be custom and would
require some config, but that is very rare: only the lab and I have
something like that. Again, it should be instance-based IMHO, so I can
say 'start lava-power CONF=/etc/lava-power/magic-hack.conf' and see
LAVA learn about a power service. One could then say that a particular
board uses particular serial and power services.
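At its core, such a lava-serial service just pumps bytes between a
serial file descriptor and a TCP socket. A sketch, with device
handling, reconnection and per-instance config omitted (all names are
mine):

```python
import os
import socket
import threading

def pump(src_fd, dst_fd):
    """Copy bytes from src_fd to dst_fd until EOF."""
    while True:
        data = os.read(src_fd, 4096)
        if not data:
            break
        os.write(dst_fd, data)

def serve_serial(serial_fd, port):
    """Expose one serial line on one TCP port, one client at a time."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _ = srv.accept()
    # serial -> network in a thread, network -> serial in this one
    t = threading.Thread(target=pump, args=(serial_fd, conn.fileno()))
    t.start()
    pump(conn.fileno(), serial_fd)
    t.join()
```

A real service would run one instance per adapter, with the device
path and port coming from the instance's config.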
That's it.
Best regards
ZK
Hi,
When PM_RUNTIME is enabled, PL330 probe fails because of some
mismatch in pm_runtime calls. This patchset fixes those issues.
This patchset is based on Kukjin's for-next branch and tested on an
EXYNOS4-based Origen board.
d3d936c "Merge branch 'samsung-fixes-2' into for-next"
Tushar Behera (2):
To Vinod Koul <vinod.koul(a)intel.com>:
DMA: PL330: Remove pm_runtime_xxx calls from pl330 probe/remove
To Kukjin Kim <kgene.kim(a)samsung.com>:
ARM: EXYNOS: Add apb_pclk clkdev entry for mdma1
arch/arm/mach-exynos/clock.c | 1 +
drivers/dma/pl330.c | 17 ++---------------
2 files changed, 3 insertions(+), 15 deletions(-)
--
1.7.4.1
Hi All,
one of the blueprints we have for 11.12 is to modify the LEB/ALIP
images so they include more Linaro branding: a Linaro wallpaper, maybe
a Linaro image as the system is booting, that kind of thing.
Towards that end (and given that time is short if this is to make
11.12), I'd like to propose the following graphic as our new wallpaper
image. This would be the image displayed in the background of a
graphical desktop by default.
I created it in gimp.
http://people.linaro.org/~tgall/LinaroDesktop-1920x1080-1.png
Thoughts? Concerns? Feedback?
--
Regards,
Tom
"Where's the kaboom!? There was supposed to be an earth-shattering
kaboom!" Marvin Martian
Multimedia Tech Lead | Linaro.org │ Open source software for ARM SoCs
w) tom.gall att linaro.org
w) tom_gall att vnet.ibm.com
h) tom_gall att mac.com
On Thu 08 Dec 2011 14:59:02 GMT, Amber Graner wrote:
> The benefits of becoming a Community Contributor will include:
>
> * a Linaro e-mail address
> * the right to carry Linaro business cards (we supply the artwork,
> you print your own cards)
> * a Linaro IRC cloak
> * listing in the relevant Working Group on our Linaro organisation
> structure
> * listing in the Launchpad Community Contributors Team
Giving a community member a Linaro email address presumably also gives
them access to Google Apps?
Google docs has been, up to now, a suitable place to share things that
must be kept hidden from the general public (for license reasons,
mostly). This also applies to other infrastructure like
people.linaro.org, and the 'Internal' pages of the wiki.
The particular data I'm concerned about is proprietary benchmark
sources and figures (these are not exactly highly sensitive, but
nevertheless we try to stick to the terms of the license). I'm sure
there are others. I have no idea whether being an officially
recognised member of the team would satisfy the license or not.
Now, I know that there are technical solutions to this problem, if it is
a problem (i.e. file permissions), but I think there probably needs to
be some official word on how we deal with this.
Andrew
Hi folks.
So there's been some justified uproar about me moving some stuff to
GitHub without asking. I wanted to let you all know that we have a
Linaro organization on GitHub (ping danilos to get your GitHub account
included there). I've created the validation team there and allowed it
to push/pull to https://github.com/Linaro/lava-deployment-tool. I also
have my own fork.
I wanted to give you a few reasons why I chose GitHub instead of bzr
(note: we've got a bzr mirror set up on Launchpad, so you can always
get the code from there).
1) Linaro uses git a lot and not all of us are fluent in it. Using git
on a small project is an excellent way to learn it without affecting a
ton of things.
2) GitHub is, IMHO, a much better place for code than Launchpad. This
is a personal feeling, but one I wanted to explain better below.
3) With the git model I can keep all my branches in one repo, which is
a nice feature (I still prefer to work with many directories, but the
annoyance of syncing them all is bigger than the adjustment to a
single-working-directory model).
Having said that, these are the things I prefer about GitHub over
their Launchpad counterparts:
* GitHub has a hands-down better UI; it is more pleasant to use and I
found myself to be a happier person. It has a ton of tiny things that
matter (like automatic archives: tarballs and zips for all the tags in
your code, so no more manual releases!).
* I like the social aspects of GitHub: the ability to follow other
people's code and get notified. Launchpad feels like a corporate world
in comparison.
* GitHub has a great code review UI: the ability to do merges straight
from the web, and better commenting on pieces of code.
* GitHub still has an integrated bug tracker if you wanted to jump all
the way there, and it shares the same pleasant UI, unlike LP.
* GitHub does a better job of presenting your projects: it can display
the README file (with nice formatting) by default, and it can host
your wiki and an arbitrary static HTML site for free as well. It feels
much more complete compared to Launchpad's "bring your own" approach
to those two quite essential topics.
* GitHub has _awesome_ integration with a bazillion 3rd-party tools
(for us, we could get post-commit hooks for readthedocs, for example,
something that is just not possible with Launchpad). There are many
possibilities we could use for our own workflow (a commit to the
production branch gets deployed automatically, etc.). While it is
possible to do such stuff with Launchpad, it always requires painful
polling with the Python LP API.
So there you have it.
Best regards
ZK
Hi there,
The linux-arm-kernel[1] project on patches.l.o was using the following source
tree
http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm.git
to track committed patches, but fetching that repo seems to have become
ridiculously slow, causing git to take ages to update the local copy or
fetch a new one. So I'm wondering: is there a mirror of that repo
(served via HTTP, as we can't fetch git:// URLs from the machine that
runs patches.l.o) somewhere with a bigger pipe? If not, would it be OK
to change the source tree of linux-arm-kernel to Linus' tree?
[1] http://patches.linaro.org/project/linux-arm-kernel/
--
Guilherme Salgado <https://launchpad.net/~salgado>
Updated weekly dashboard for Graphics (NEW!) with more details and links
[1]:
https://wiki.linaro.org/WorkingGroups/Middleware/Graphics/WeeklyReport
Last weekly meeting notes:
https://wiki.linaro.org/WorkingGroups/Middleware/Graphics/Notes/2011-12-07
Highlights:
- Jesse's blog post attracted more than 7500 visits since its upload
- glmark2: ready for the release and also has a prototype working for
LAVA dashboard
- glproxy: Finishing EGL support
- glcompbench: Adapting glproxy branch to use the new EGL support
- dmabuf rework based on comments from list
- API trace - sent patches upstream. Some have landed in master. Some
are pending/under discussion
Issues:
- Unity/NUX/Compiz:
* Though the NUX code is merged upstream, there has not been a new
version of NUX from DX (plus it would need a new version of Unity to
show the new features). This is expected in the 2nd week of January.
* Also, Compiz/Unity were blocked on the DX side by the need to put
test automation and build infrastructure in place. This is now done
for Unity, but not yet for Compiz.
* Worst-case scenario: we release the last working versions for
Linaro's builds until all three are unblocked from DX. We are
following up with DX closely. Priority: resolve the framebuffer object
issues (as per Travis's suggestion).
Release: the release is in good shape overall. However, for
Compiz/NUX/Unity there is a risk, as indicated above. We are checking
with Ricardo Salveti whether we can at least update the Compiz packages
to add recent bugfixes and some other changes, provided there are no
breakages in Unity code.
Comments/Questions? Please send them across :-)
[1]: I have updated the layout of the weekly report, experimentally, in
an attempt to improve readability. It should now give the highlights at
a glance; one can dig into the blueprints for more information, or ask
via email/IRC if there are deeper questions. Feel free to comment.
--
Ilias Biris ilias.biris(a)linaro.org
Project Manager, Linaro
M: +358504839608, IRC: ibiris Skype: ilias_biris
Linaro.org│ Open source software for ARM SoCs
Hi,
Another package that we were asked to make cross-compilable was
Chromium. This is now possible too, following the instructions at:
https://wiki.linaro.org/Platform/DevPlatform/CrossCompile/ChromiumCrossComp…
The starting point was Chromium not building on ARM at all;
fortunately it was quite easy to fix. While easy, it was time
consuming, as I had to build Chrome a few times natively, with builds
taking around 20 hours on a Panda. Meanwhile, on a dual-core
(hyperthreaded, so effectively 4-core) Intel i5 M 520, a cross-compile
took around 1.5 hours. When the native builds were failing at the
final linking stage, the value of cross-compiling became quite clear ;)
Cheers,
Riku