See https://wiki.linaro.org/OfficeofCTO/2011-04-26:
== Actions from previous meeting ==
* CARRIEDOVER: Steve to investigate Samsung bits
* ACTION: Grant to mail David with the exact sessions / meetings he would like to have
 * Superseded: Grant has added blueprints to LDS, ready for scheduling
== Attendees ==
* David Rusling
* Patrik Ryd
* Grant Likely
=== Holiday ===
* Loïc Minier
* Steve !McIntyre
== Minutes ==
* Discussed Developer Summit
 * Blueprints coming together, see [[https://blueprints.launchpad.net/sprints/uds-o?searchtext=linaro-]]
* David
 * OCTO update to Linaro board, slides circulated
  * ACTION: all, please review
 * Monthly executive newsletter
 * Emphasis on memory management
 * Organising subarchitecture kernel maintenance in time for LDS
* Grant
 * Working towards getting stuff merged into the device tree
 * Russell looking at the device tree
 * UDS: documents need to be readied for presenting
 * ACTION: dig into device tree blueprint for Qemu / toolchain
* Patrik
 * Busy getting things to work (Panda does, Beagle doesn't)
 * Trying to get first Android LEB up and running, might demo at LDS
David Rusling, CTO
http://www.linaro.org
Linaro
Lockton House
Clarendon Rd
Cambridge
CB2 8FH
How significant is the cache maintenance overhead?
It depends. eMMC devices are much faster now than they were a few years
ago, while cache maintenance costs more due to multiple cache levels and
speculative cache pre-fetch. Relatively speaking, the cost of handling
the caches has increased and is now a bottleneck when dealing with fast
eMMC together with DMA.
The intention of introducing non-blocking mmc requests is to minimize
the time between one mmc request ending and the next one starting. In
the current implementation the MMC controller is idle while dma_map_sg
and dma_unmap_sg are processing. Introducing non-blocking mmc requests
makes it possible to prepare the caches for the next job in parallel
with an active mmc request.
This is done by making issue_rw_rq() non-blocking.
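To illustrate the idea, here is a minimal, self-contained sketch of the
pipelined issue loop. It is not the actual drivers/mmc/card/block.c
code: the tiny request queue and the prepare/start/wait helpers are
stand-ins for dma_map_sg()/dma_unmap_sg() and for handing a request to
the host controller.

/*
 * Conceptual sketch only -- not the actual drivers/mmc/card/block.c code.
 * prepare/unprepare stand in for dma_map_sg()/dma_unmap_sg(); start/wait
 * stand in for handing the request to the host controller and waiting
 * for it to complete.
 */
#include <stdio.h>

struct req { int id; int prepared; };

static struct req queue[] = { { 1, 0 }, { 2, 0 }, { 3, 0 } };
static int head;

static struct req *fetch_next_request(void)
{
	return head < 3 ? &queue[head++] : NULL;
}

static void prepare_request(struct req *r)   { r->prepared = 1; }  /* ~ dma_map_sg()   */
static void unprepare_request(struct req *r) { r->prepared = 0; }  /* ~ dma_unmap_sg() */
static void start_transfer(struct req *r)    { printf("start %d\n", r->id); }
static void wait_for_transfer(struct req *r) { printf("done  %d\n", r->id); }

int main(void)
{
	struct req *cur = fetch_next_request();
	struct req *next;

	if (cur)
		prepare_request(cur);

	while (cur) {
		start_transfer(cur);            /* controller busy from here on   */

		/* Prepare the next request while the current one is in flight. */
		next = fetch_next_request();
		if (next)
			prepare_request(next);

		wait_for_transfer(cur);         /* only now does the caller block */
		unprepare_request(cur);
		cur = next;
	}
	return 0;
}

In the serialized case the prepare step would sit between
wait_for_transfer() and the next start_transfer(), which is exactly the
idle gap the non-blocking request removes.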
The increase in throughput is proportional to the time it takes to
prepare a request (the major part of the preparation is dma_map_sg and
dma_unmap_sg) and to how fast the memory is. The faster the MMC/SD is,
the more significant the prepare-request time becomes. Measurements on
U5500 and Panda with eMMC and SD show a significant performance gain
for large reads when running in DMA mode. In the PIO case the
performance is unchanged.
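As a rough, made-up illustration of that proportionality (these numbers
are not taken from the measurements above): with a transfer time t_x
and a prepare time t_p per request,

\[
\text{serial} \propto \frac{1}{t_x + t_p}, \qquad
\text{pipelined} \propto \frac{1}{\max(t_x, t_p)}, \qquad
\text{gain} \approx \frac{t_x + t_p}{\max(t_x, t_p)}.
\]

For example, with $t_x = 1\,\text{ms}$ and $t_p = 0.2\,\text{ms}$ the
gain is about 20\%; with a card twice as fast ($t_x = 0.5\,\text{ms}$)
it grows to about 40\%, which is why faster eMMC/SD makes the prepare
time matter more.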
There are two optional hooks, pre_req() and post_req(), that the host
driver may implement in order to move work to before and after the
actual mmc_request function is called. In the DMA case pre_req() may do
dma_map_sg() and prepare the dma descriptor, and post_req() runs
dma_unmap_sg().
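For illustration, here is a hedged sketch of what such hooks might look
like in a host driver. The exact hook prototypes may differ from the
real patches; the host_cookie mark is the one described under "Changes
since v1" below, and names like my_mmc_pre_req() and COOKIE_MAPPED are
made up for this sketch.

/*
 * Illustrative host-driver sketch only; the exact prototypes of the
 * pre_req()/post_req() hooks may differ from the real patches.
 * COOKIE_MAPPED is a made-up value for the host_cookie mark described
 * in the cover letter ("Changes since v1").
 */
#include <linux/mmc/host.h>
#include <linux/mmc/core.h>
#include <linux/dma-mapping.h>

#define COOKIE_MAPPED	1	/* hypothetical "data already prepared" mark */

static void my_mmc_pre_req(struct mmc_host *host, struct mmc_request *mrq)
{
	struct mmc_data *data = mrq->data;
	enum dma_data_direction dir;

	if (!data)
		return;

	dir = (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

	/* Map the scatterlist ahead of time, while another request runs. */
	if (dma_map_sg(mmc_dev(host), data->sg, data->sg_len, dir))
		data->host_cookie = COOKIE_MAPPED;
}

static void my_mmc_post_req(struct mmc_host *host, struct mmc_request *mrq,
			    int err)
{
	struct mmc_data *data = mrq->data;
	enum dma_data_direction dir;

	if (!data || data->host_cookie != COOKIE_MAPPED)
		return;

	dir = (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

	/* Unmap after the transfer, again off the controller's critical path. */
	dma_unmap_sg(mmc_dev(host), data->sg, data->sg_len, dir);
	data->host_cookie = 0;
}

These hooks would then be wired into the host's mmc_host_ops alongside
.request so the core can call them around an active transfer.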
Details on measurements from IOZone and mmc_test:
https://wiki.linaro.org/WorkingGroups/KernelConsolidation/Specs/StoragePerf…
Changes since v1:
* Add support for omap_hsmmc
* Add a test in mmc_test to compare performance with
  and without non-blocking requests.
* Add random fault injection in mmc core to exercise error
  handling in the mmc block code.
* Fix several issues in the mmc block error handling.
* Add a host_cookie member in mmc_data to be used by
pre_req to mark the data. The host driver will then
check this mark to see if the data is prepared or not.
* Previous patch subject was
"add double buffering for mmc block requests".
Per Forlin (12):
mmc: add none blocking mmc request function
mmc: mmc_test: add debugfs file to list all tests
mmc: mmc_test: add test for none blocking transfers
mmc: add member in mmc queue struct to hold request data
mmc: add a block request prepare function
mmc: move error code in mmc_block_issue_rw_rq to a separate function.
mmc: add a second mmc queue request member
mmc: add handling for two parallel block requests in issue_rw_rq
mmc: test: add random fault injection in core.c
omap_hsmmc: use original sg_len for dma_unmap_sg
omap_hsmmc: add support for pre_req and post_req
mmci: implement pre_req() and post_req()
drivers/mmc/card/block.c | 493 +++++++++++++++++++++++++++--------------
drivers/mmc/card/mmc_test.c | 342 ++++++++++++++++++++++++++++-
drivers/mmc/card/queue.c | 171 +++++++++------
drivers/mmc/card/queue.h | 31 ++-
drivers/mmc/core/core.c | 132 ++++++++++-
drivers/mmc/core/debugfs.c | 5 +
drivers/mmc/host/mmci.c | 146 +++++++++++-
drivers/mmc/host/mmci.h | 8 +
drivers/mmc/host/omap_hsmmc.c | 90 +++++++-
include/linux/mmc/core.h | 9 +-
include/linux/mmc/host.h | 13 +-
lib/Kconfig.debug | 11 +
12 files changed, 1172 insertions(+), 279 deletions(-)
--
1.7.4.1
Here is what the patch set does.
* Remove the .probe and .remove hooks from sdhci-pltfm.c and make it
  a pure provider of common helper functions.
* Add .probe and .remove hooks to the sdhci pltfm drivers sdhci-cns3xxx,
  sdhci-dove, sdhci-tegra, and sdhci-esdhc-imx so that they register
  themselves by calling the helper functions created above.
* Migrate the use of sdhci_of_host and sdhci_of_data to
  sdhci_pltfm_host and sdhci_pltfm_data, so that the OF versions of the
  host and data structures can be dropped and the pltfm versions work
  for both cases.
* Add common OF helper code to sdhci-pltfm.c, and make the OF sdhci
  drivers sdhci-of-esdhc and sdhci-of-hlwd self-registered as well, so
  that sdhci-of-core.c and sdhci-of.h can be removed.
* Consolidate the OF and pltfm esdhc drivers into one that shares the
  same pair of .probe and .remove hooks. As a result,
  sdhci-esdhc-imx.c and sdhci-of-esdhc.c go away, while
  sdhci-esdhc.c comes in and works for both MPCxxx and i.MX.
* Eliminate include/linux/mmc/sdhci-pltfm.h by moving its contents into
  drivers/mmc/host/sdhci-pltfm.h.
And the benefits we gain from the changes are:
* The sdhci device drivers follow the Linux trend of drivers doing
  their own registration (see the sketch after this list).
* sdhci-pltfm.c becomes simple and clean, as it now only contains
  common helper code.
* All sdhci device specific things go back into their own drivers.
* The dt and non-dt drivers are consolidated to use the same pair of
  .probe and .remove hooks.
* The SDHCI driver for the Freescale eSDHC controller found on both
  MPCxxx and i.MX platforms is consolidated to use the same .probe
  function.
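A minimal sketch of that self-registration pattern follows. The driver
name, the compatible string, and the my_sdhci_pltfm_* helpers are
placeholders, not the exact helpers this series keeps in sdhci-pltfm.c.

/*
 * Sketch of a self-registered sdhci platform driver.  The
 * my_sdhci_pltfm_init()/my_sdhci_pltfm_free() helpers are placeholders
 * for the common code that stays in sdhci-pltfm.c; the real helper
 * names and signatures in the series may differ.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/of.h>

static int my_sdhci_pltfm_init(struct platform_device *pdev)
{
	/* stand-in for the shared setup kept in sdhci-pltfm.c */
	return 0;
}

static void my_sdhci_pltfm_free(struct platform_device *pdev)
{
	/* stand-in for the shared teardown */
}

static int __devinit my_sdhci_probe(struct platform_device *pdev)
{
	int ret = my_sdhci_pltfm_init(pdev);

	if (ret)
		return ret;
	/* device specific bits (clocks, quirks, DT properties) go here */
	return 0;
}

static int __devexit my_sdhci_remove(struct platform_device *pdev)
{
	my_sdhci_pltfm_free(pdev);
	return 0;
}

/* one match table lets the same .probe serve both dt and non-dt boards */
static const struct of_device_id my_sdhci_of_match[] = {
	{ .compatible = "myvendor,my-sdhci" },	/* hypothetical compatible */
	{ }
};

static struct platform_driver my_sdhci_driver = {
	.driver = {
		.name		= "sdhci-mydevice",
		.owner		= THIS_MODULE,
		.of_match_table	= my_sdhci_of_match,
	},
	.probe	= my_sdhci_probe,
	.remove	= __devexit_p(my_sdhci_remove),
};

static int __init my_sdhci_init(void)
{
	/* the driver registers itself instead of relying on sdhci-pltfm.c */
	return platform_driver_register(&my_sdhci_driver);
}
module_init(my_sdhci_init);

static void __exit my_sdhci_exit(void)
{
	platform_driver_unregister(&my_sdhci_driver);
}
module_exit(my_sdhci_exit);

MODULE_LICENSE("GPL");

The point is only that each driver owns its probe/remove and module
init/exit, while the shared bits stay behind helper calls.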
The patch set works against the tree below and was only run-tested on
the i.MX51 Babbage board; all other targets were build-tested.
git://git.secretlab.ca/git/linux-2.6.git devicetree/test
Comments are welcome and appreciated.
Regards,
Shawn
PS: The first patch is a squash of the patch set below, which was
posted for review a few days back.
[PATCH 0/5] make sdhci device drivers self registered
Some patches in this series are relatively large, involving more
changes than expected. I chose not to split them, since they are
logically integral and keeping them together reduces the patch count
and makes bisecting easier. Sorry that this makes reviewers' lives
harder.
Shawn Guo (5):
mmc: sdhci: make sdhci-pltfm device drivers self registered
mmc: sdhci: eliminate sdhci_of_host and sdhci_of_data
mmc: sdhci: make sdhci-of device drivers self registered
mmc: sdhci: consolidate sdhci-of-esdhc and sdhci-esdhc-imx
mmc: sdhci: merge two sdhci-pltfm.h into one
drivers/mmc/host/Kconfig | 71 ++++---
drivers/mmc/host/Makefile | 17 +-
drivers/mmc/host/sdhci-cns3xxx.c | 68 ++++++-
drivers/mmc/host/sdhci-dove.c | 69 ++++++-
drivers/mmc/host/sdhci-esdhc-imx.c | 149 -------------
drivers/mmc/host/sdhci-esdhc.c | 412 ++++++++++++++++++++++++++++++++++++
drivers/mmc/host/sdhci-of-core.c | 247 ---------------------
drivers/mmc/host/sdhci-of-esdhc.c | 89 --------
drivers/mmc/host/sdhci-of-hlwd.c | 89 +++++++-
drivers/mmc/host/sdhci-of.h | 42 ----
drivers/mmc/host/sdhci-pltfm.c | 251 +++++++++-------------
drivers/mmc/host/sdhci-pltfm.h | 36 +++-
drivers/mmc/host/sdhci-tegra.c | 187 ++++++++++-------
include/linux/mmc/sdhci-pltfm.h | 35 ---
14 files changed, 912 insertions(+), 850 deletions(-)
Hello,
There have been some improvements to the Linaro Android Build Service
(aka linaro-cloud-buildd), https://android-build.linaro.org/ . The
Jenkins EC2 plugin was upgraded to the latest version, 1.11, which
should improve EC2 instance management. A bug was also fixed which
caused unreliable builds when an instance was reused to build different
configurations.
We are still working towards completely disposable build instances
(which will require patching the Jenkins EC2 plugin), but in the
meantime these changes should provide sustainable build reliability.
There have been around a dozen builds since these changes, and none of
them showed infrastructure failures. I hope it will now be easier to
concentrate on actual build failures where they happen.
The Android Build Service documentation is available at
https://wiki.linaro.org/Platform/Android/LinaroAndroidBuildService
Thanks,
Paul
Not sure who to send this to, so I'm including Nicolas and Linus.
make u8500_defconfig
make uImage
produces some undefined reference errors. The tail of the build log is
attached.
Thanks,
John
Hi folks,
I've been working on this app[1] (based on Patchwork) to track patches
submitted/accepted upstream by Linaro engineers. It's still a work in
progress and we're waiting for IS to deploy it but I wanted to show you
what I have so far and ask for feedback, so I've deployed it on an ec2
instance:
http://ec2-184-73-78-92.compute-1.amazonaws.com/
As you'll notice, that's the same front page you get on a regular
Patchwork instance, which allows you to browse the patches of every
project. You probably won't have to use that often, as everything you
need to deal with should be on the page below:
http://ec2-184-73-78-92.compute-1.amazonaws.com/user/submitted/
This one contains all patches you submitted that haven't received
feedback yet. We want it to reflect reality so that our metrics are
accurate. We already have a script to mark the ones that are committed
upstream, and we may use a similar approach to track
rejected/superseded ones, but for now those will need to be marked
manually.
The script mentioned above will scan a git repo for each project and
update the state of patches that are found to be committed there, but
it's not running yet, as I first need to know the git (http works too)
URL of every project listed there. If you know it for any of them,
please do let me know.
Finally, the page below has some basic metrics to demo how we'll be
using this data:
http://ec2-184-73-78-92.compute-1.amazonaws.com/metrics/
Some things to notice:
- This is a temporary deployment, so don't bother making changes
  (other than those you make when playing around), as they will
  probably be lost
- Log in via OpenID using the Launchpad login service, so there's no
  need to register
- Not all of your patches may be shown under /user/submitted. If that's
  the case, make sure all your email addresses are registered in
  Launchpad, as we pull that data from there periodically, and the next
  time we do so we'll merge all your email addresses under a single
  user
- Some numbers in /metrics are skewed because of patches that have
  multiple versions; you'll be asked to flag superseded versions so
  that they're not included in the counts. You're encouraged to do so
  for some of them to see how it works, but given that changes will be
  lost, don't bother doing it for more than a few
- The other-unknown project is where we put patches for which we can't
  guess a project.
  - In some cases this is because the patches are sent only to
    patches(a)linaro.org, so they need to be manually moved to the
    appropriate project (there's a form at the bottom of every patch
    list which allows you to do that).
  - In other cases it's because we're missing a project in Patchwork.
    If that's the case we can register the project; a script which runs
    periodically will then move these patches to the new project.
[1] https://wiki.linaro.org/Platform/Infrastructure/Specs/PatchTracking
--
Guilherme Salgado <https://launchpad.net/~salgado>
Hello!
Given the increasing numbers of CPUs on low-cost servers as well as
the appearance of SMP hand-held battery-powered devices, Linux Plumbers
2011 will feature a Scaling track, with topics including the following:
- What can the kernel do to help applications perform and scale better?
- What can applications do to help the kernel perform and scale better?
- Memory footprint:
+ 100MB here, 100MB there, pretty soon you are talking real memory!
+ Improving performance by decreasing icache and dcache footprint.
- Limits to scalability:
+ Technological limitations, especially hardware.
+ Complexity/maintainability limitations.
- Handling of non-CPU computational hardware (GPGPUs, crypto HW, etc.)
+ Can the kernel make good use of non-CPU computational hardware?
+ How best to enable user applications to use them?
- Dealing with the numerous remaining "little kernel locks"
The intended audience is developers interested in performance and
scalability throughout the Linux plumbing.
The deadline is April 30th. Please submit your proposals, whether to
Scaling or to other microconferences, at:
http://www.linuxplumbersconf.org/2011/ocw/events/LPC2011MC/proposals
The general track is also open for submissions until April 30th:
http://www.linuxplumbersconf.org/2011/ocw/events/LPC2011/proposals
Either way, we look forward to seeing your submissions!
Thanx, Paul