All,
Just a quick reminder: if you're trying to get anything delivered into
an Android target for 11.07, you should have talked to me, and we should
have an integration BP tracking it. You should also be testing your
stuff against the Android target you want to deliver on. Simply
getting source into an Android build that isn't enabled is not
considered landing. Things will need to be tested, documented and
characterized against Android. We're doing continuous integration, so
code should be pushed early and often.
Here's the current list of things we're going to land in 11.07:
https://launchpad.net/linaro-android/+milestone/11.07
If you're not sure where to start with Android, talk to me and we'll
get you going. You don't even need a board (though it helps). Our tip
build right now is on Panda and will be pointing to the 3.0 kernel
soon.
-Zach
Hello,
Here at Linaro, we pull components for Android builds from various
sources: AOSP upstream, our forks of AOSP components (to fix
compatibility issues with gcc 4.5/4.6, etc.), bootloaders and kernels
from SoC support teams, and so on. All in all, that means we don't have
full control over those repositories, so we can't, for example, tag them.
So, to achieve build reproducibility, we decided to use a feature that
repo provides: exporting a "release" manifest for a build using
"repo manifest -r", which dumps a manifest with the SHA1 revision IDs of
the components used for a particular build.
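(For illustration: each project entry in such a pinned manifest ends up
carrying a revision="<40-char SHA1>" attribute instead of a branch name.)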
However, we found that such release manifests are not robust over time.
If an upstream project uses a rebase workflow, then even if a particular
revision was tagged (which ensures that it won't be garbage-collected
when the main branch is later rebased), the repo tool may not be able to
fetch it.
This happens because, unlike git clone, the repo tool limits what it
fetches to refs/heads/*. So, if some revision is not reachable from one
of the branches (but is still reachable from refs/tags/*), it leads to
an error like:
error.GitError: boot/u-boot-linaro-stable update-ref: error: Trying to write ref refs/remotes/m/linaro_android_2.3.3 with nonexistant object 9736a9332fcfe5fef1361a6d91740e160ad04bd5
fatal: Cannot update the ref 'refs/remotes/m/linaro_android_2.3.3'.
So, we would like to know whether this can be considered a bug (it
certainly looks like one to us; after all, we're just feeding repo what
it generated itself, and the revisions are perfectly valid in the
underlying repositories) and whether it can be fixed. The patch would be
pretty simple:
diff --git a/project.py b/project.py
index 9d67dea..31ee90f 100644
--- a/project.py
+++ b/project.py
@@ -1369,7 +1374,7 @@ class Project(object):
     else:
       ref_dir = None
 
-    cmd = ['fetch']
+    cmd = ['fetch', '--tags']
 
     # The --depth option only affects the initial fetch; after that we'll do
     # full fetches of changes.
--
Best Regards,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi,
We've been thinking about adding support for the built-in functions for
64-bit atomic memory access, and I'd like to know if this is of any
interest.
Currently the main use of these functions seems to be to implement
(SMP-safe) locking mechanisms, where the existing 32-bit memory
operations are sufficient. However, there might be code out there that
implements a parallel algorithm using 64-bit atomic memory operations.
Currently the GCC ARM backend doesn't provide a pattern to inline the
64-bit __sync_* functions; instead, the compiler emits calls to the
__sync_*_8 functions [1]. libgcc does not provide these symbols via the
usual thin wrapper around the kernel helper [2], because the ARM Linux
__kernel_cmpxchg helper supports 32-bit operations only.
My understanding is that for ARMv7 the GCC backend could be enhanced to
inline the __sync_* functions using the LDREXD and STREXD instructions,
but for ARMv5 we would still rely on a new kernel helper.
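To make the ARMv7 option concrete, here is a rough, untested sketch of
what an open-coded 64-bit compare-and-swap built on LDREXD/STREXD could
look like (the function name is ours, and the sequence just mirrors the
pattern the kernel already uses for its own 64-bit atomics); an inlined
__sync_val_compare_and_swap_8 would expand to essentially this:

/* Hypothetical sketch only; build with -march=armv7-a.  Returns the
 * previous value at *ptr, swapping in newval only if it equals oldval. */
static unsigned long long
cmpxchg64(volatile unsigned long long *ptr,
          unsigned long long oldval, unsigned long long newval)
{
        unsigned long long prev;
        unsigned int fail;

        __asm__ __volatile__(
        "       dmb\n"                          /* full barrier before */
        "1:     ldrexd  %0, %H0, [%3]\n"        /* load-exclusive 64-bit */
        "       cmp     %0, %4\n"
        "       cmpeq   %H0, %H4\n"             /* compare both halves */
        "       bne     2f\n"
        "       strexd  %1, %5, %H5, [%3]\n"    /* store-exclusive 64-bit */
        "       cmp     %1, #0\n"
        "       bne     1b\n"                   /* retry if store failed */
        "2:     dmb\n"                          /* full barrier after */
        : "=&r" (prev), "=&r" (fail), "+Qo" (*ptr)
        : "r" (ptr), "r" (oldval), "r" (newval)
        : "cc", "memory");

        return prev;
}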
Any ideas/thoughts are appreciated.
[1] https://wiki.linaro.org/WorkingGroups/ToolChain/AtomicMemoryOperations#GCC
[2]
https://wiki.linaro.org/WorkingGroups/ToolChain/AtomicMemoryOperations#impl…
Regards
Ken
Forwarding as it got bounced (odd?)
------ Original Message ------
Subject: Re: ppas, version numbers, releases and validation.linaro.org
Date: Mon, 11 Jul 2011 11:35:46 +0200
From: Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org>
Organization: Linaro
To: Michael Hudson-Doyle <michael.hudson(a)linaro.org>
CC: linaro-dev(a)linaro.org, paul.larson(a)linaro.org
On 11.07.2011 06:34, Michael Hudson-Doyle wrote:
> Hi Paul & Zygmunt (& others),
>
> I spent a while today fixing a couple of bugs in lava-tool and in the
> packaging of lava-server, and was wondering what the process should be
> for getting them into the ~linaro-validation ppa and onto v.l.o
> (although there's no particular urgency in getting these precise fixes
> deployed, there will be changes that are more urgent).
>
> In some sense these are basic debian packaging questions I guess. But
> my understanding of the process is that it should go like this:
>
> If there are upstream changes, make a new release (update the version in
> __init__.py, tag the branch, make an sdist and upload it to pypi).
Agreed
> Then (whether there is an upstream change or not) it should be uploaded
> to a PPA. I think the part here that I don't really get is basically
> how to use bzr build-deb in practice. But I've just found
> http://jameswestby.net/bzr/builddeb/user_manual/merge.html so I think I
> should read that first :)
I do bzr bd && bzr bd -S followed by some pbuilder commands. Look at
.bzr-builder/ in each packaging branch.
> Another question I have is around version numbers. Currently we're
> using version numbers like 0.2-0ubuntu0~zyga1. I don't really see why
> the "zyga" is in there :) I think simply dropping the zyga and using
> versions like 0.2-0ubuntu0~1 would be fine, or if we want to know who
> uploaded a particular version we can use things like
> 0.2-0ubuntu0~2mwhudson.
In short: ~zygaN is the thing we can increment. We should KEEP it, and
perhaps change the name to ~lava (but this has to be coordinated, as
~lava < ~zyga).
There are three possible scenarios which this system correctly handles:
1) We need a new release, the pattern is always the same:
${UPSTREAM}-0ubuntu0~${PACKAGE}${PACKAGE_VERSION}
Where PACKAGE is the marker (currently zyga) and PACKAGE_VERSION is
reset to 0 each time UPSTREAM changes.
2) Our packages land in Ubuntu. The version becomes:
${UPSTREAM}-0ubuntu${PACKAGE_VERSION}
Where PACKAGE_VERSION is >= 1 (this is important to differentiate it from
all of our internal releases).
3) Our packages land in Debian. The version in Debian becomes:
${UPSTREAM}-${PACKAGE_VERSION}
The version in Ubuntu becomes/changes to:
${UPSTREAM}-${PACKAGE_VERSION}ubuntu${UBUNTU_PACKAGE_VERSION}
Where PACKAGE_VERSION is the one from Debian and UBUNTU_PACKAGE_VERSION
is something Ubuntu developers can increment.
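To make that concrete with purely illustrative numbers (UPSTREAM=0.2,
marker=zyga): internal PPA uploads would be 0.2-0ubuntu0~zyga1,
0.2-0ubuntu0~zyga2, ...; once 0.2 lands in Ubuntu it becomes 0.2-0ubuntu1,
which sorts above all of the ~zygaN uploads; and once it lands in Debian
we'd have 0.2-1 there and 0.2-1ubuntu1 in Ubuntu. The ~ is what keeps our
internal uploads below any eventual distro upload, and it's also why the
rename needs coordination, since dpkg sorts ~lava before ~zyga.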
>
> Finally, I think that we should be trigger-happy about releases, and so
> going through the above process shouldn't take very long. I guess
> lava-dev-tool can help here.
Agreed. I will look at this problem this week; hopefully I'll resurrect
the package builder code.
Thanks
ZK
Hi All,
I put some timestamp code into the compress and decompress test cases
for libjpeg-turbo that I had posted about last Friday. As a result, I
have new and much better performance numbers. In addition, I've put
together a simple wiki page to capture various libjpeg-turbo related
information.
https://wiki.linaro.org/TomGall/LibJpegTurbo
From the table, the libjpeg-turbo-1.1.1+patches build shows an improvement
over libjpeg62 of approximately 30-40% in the decode case. In the encode
case there are improvements of 6 to 17%; however, with larger images a
regression is noted.
--
Regards,
Tom
"We want great men who, when fortune frowns will not be discouraged."
- Colonel Henry Knox
Linaro.org │ Open source software for ARM SoCs
w) tom.gall att linaro.org
w) tom_gall att vnet.ibm.com
h) tom_gall att mac.com
(If you reply, reply to this one, not the previous message I sent; this
one fixes the linaro-dev email address.)
On Mon, Jul 11, 2011 at 4:35 AM, Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org> wrote:
> In short: ~zygaN is the thing we can increment. We should KEEP it, and perhaps
> change the name to ~lava (but this has to be coordinated, as ~lava < ~zyga).
> There are three possible scenarios which this system correctly handles:
>
Right, but we can make that change anytime the upstream version gets
bumped. So if it has an upstream version component that gets bumped, then
feel free to change over to the ~lava designation as soon as you make a
release that bumps the upstream version. Otherwise, if it's a component
that uses YYYY.MM only, then we will wait until the 2011.07 release later
this month to switch.
-Paul Larson
Hi Paul & Zygmunt (& others),
I spent a while today fixing a couple of bugs in lava-tool and in the
packaging of lava-server, and was wondering what the process should be
for getting them into the ~linaro-validation ppa and onto v.l.o
(although there's no particular urgency in getting these precise fixes
deployed, there will be changes that are more urgent).
In some sense these are basic Debian packaging questions, I guess. But
my understanding of the process is that it should go like this:
If there are upstream changes, make a new release (update the version in
__init__.py, tag the branch, make an sdist and upload it to pypi).
Then (whether there is an upstream change or not) it should be uploaded
to a PPA. I think the part here that I don't really get is basically
how to use bzr build-deb in practice. But I've just found
http://jameswestby.net/bzr/builddeb/user_manual/merge.html so I think I
should read that first :)
Another question I have is around version numbers. Currently we're
using version numbers like 0.2-0ubuntu0~zyga1. I don't really see why
the "zyga" is in there :) I think simply dropping the zyga and using
versions like 0.2-0ubuntu0~1 would be fine, or if we want to know who
uploaded a particular version we can use things like
0.2-0ubuntu0~2mwhudson.
Finally, I think that we should be trigger-happy about releases, and so
going through the above process shouldn't take very long. I guess
lava-dev-tool can help here.
Cheers,
mwh
Documentation about the background and the design of non-blocking mmc requests.
Host driver guidelines to minimize request preparation overhead.
Signed-off-by: Per Forlin <per.forlin(a)linaro.org>
Acked-by: Randy Dunlap <rdunlap(a)xenotime.net>
---
ChangeLog:
v2: - Minor updates after proofreading comments from Chris
v3: - Minor updates after more comments from Chris
v4: - Minor updates after comments from Randy
v5: - Fixed one more comment and Acked-by from Randy
Documentation/mmc/00-INDEX | 2 +
Documentation/mmc/mmc-async-req.txt | 86 +++++++++++++++++++++++++++++++++++
2 files changed, 88 insertions(+), 0 deletions(-)
create mode 100644 Documentation/mmc/mmc-async-req.txt
diff --git a/Documentation/mmc/00-INDEX b/Documentation/mmc/00-INDEX
index 93dd7a7..a9ba672 100644
--- a/Documentation/mmc/00-INDEX
+++ b/Documentation/mmc/00-INDEX
@@ -4,3 +4,5 @@ mmc-dev-attrs.txt
         - info on SD and MMC device attributes
 mmc-dev-parts.txt
         - info on SD and MMC device partitions
+mmc-async-req.txt
+        - info on mmc asynchronous requests
diff --git a/Documentation/mmc/mmc-async-req.txt b/Documentation/mmc/mmc-async-req.txt
new file mode 100644
index 0000000..b7a52ea
--- /dev/null
+++ b/Documentation/mmc/mmc-async-req.txt
@@ -0,0 +1,86 @@
+Rationale
+=========
+
+How significant is the cache maintenance overhead?
+It depends. Fast eMMC and multiple cache levels with speculative cache
+pre-fetch make the cache overhead relatively significant. If the DMA
+preparations for the next request are done in parallel with the current
+transfer, the DMA preparation overhead would not affect the MMC performance.
+The intention of non-blocking (asynchronous) MMC requests is to minimize the
+time between when an MMC request ends and another MMC request begins.
+Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg and
+dma_unmap_sg are processing. Using non-blocking MMC requests makes it
+possible to prepare the caches for the next job in parallel with an active
+MMC request.
+
+MMC block driver
+================
+
+The issue_rw_rq() in the MMC block driver is made non-blocking.
+The increase in throughput is proportional to how long it takes to
+prepare a request (the major part of the preparation is dma_map_sg and
+dma_unmap_sg) and how fast the memory is. The faster the MMC/SD is,
+the more significant the prepare request time becomes. Roughly, the expected
+performance gain is 5% for large writes and 10% on large reads on an L2 cache
+platform. In power save mode, when clocks run on a lower frequency, the DMA
+preparation may cost even more. As long as these slower preparations are run
+in parallel with the transfer, performance won't be affected.
+
+Details on measurements from IOZone and mmc_test
+================================================
+
+https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
+
+MMC core API extension
+======================
+
+There is one new public function mmc_start_req().
+It starts a new MMC command request for a host. The function isn't
+truly non-blocking. If there is an ongoing async request, it waits
+for that request to complete, starts the new one, and returns. It
+doesn't wait for the new request to complete. If there is no ongoing
+request, it starts the new request and returns immediately.
+
+MMC host extensions
+===================
+
+There are two optional hooks -- pre_req() and post_req() -- that the host
+driver may implement in order to move work to before and after the actual
+mmc_request function is called. In the DMA case pre_req() may do
+dma_map_sg() and prepare the DMA descriptor, and post_req() runs
+dma_unmap_sg().
+
+Optimize for the first request
+==============================
+
+The first request in a series of requests can't be prepared in parallel with
+the previous transfer, since there is no previous request.
+The argument is_first_req in pre_req() indicates that there is no previous
+request. The host driver may optimize for this scenario to minimize
+the performance loss. A way to optimize for this is to split the current
+request into two chunks, prepare the first chunk and start the request,
+and finally prepare the second chunk and start the transfer.
+
+Pseudocode to handle is_first_req scenario with minimal prepare overhead:
+if (is_first_req && req->size > threshold)
+ /* start MMC transfer for the complete transfer size */
+ mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);
+
+ /*
+ * Begin to prepare DMA while cmd is being processed by MMC.
+ * The first chunk of the request should take the same time
+ * to prepare as the "MMC process command time".
+ * If prepare time exceeds MMC cmd time
+ * the transfer is delayed, guesstimate max 4k as first chunk size.
+ */
+ prepare_1st_chunk_for_dma(req);
+ /* flush pending desc to the DMAC (dmaengine.h) */
+ dma_issue_pending(req->dma_desc);
+
+ prepare_2nd_chunk_for_dma(req);
+ /*
+ * The second issue_pending should be called before MMC runs out
+ * of the first chunk. If the MMC runs out of the first data chunk
+ * before this call, the transfer is delayed.
+ */
+ dma_issue_pending(req->dma_desc);
--
1.7.4.1
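A rough usage sketch to go with the new document (this is not part of the
patch; every identifier except mmc_start_req() is made up here, and the
exact prototype and return convention of mmc_start_req() come from the core
patches of this series rather than from this text). A block driver
pipelining requests would look roughly like:

/* Hypothetical pseudocode only: prepare the next request while the
 * previous one is still being transferred. */
while ((cur = fetch_next_request(queue))) {
        prepare_request(cur);              /* pre_req(): dma_map_sg() etc. */
        /*
         * Starts 'cur'; blocks only until the previously started request
         * (if any) has completed, not until 'cur' itself completes.
         * Assumption: the completed previous request is handed back.
         */
        prev = mmc_start_req(host, cur, &error);
        if (prev)
                finish_request(prev);      /* post_req(): dma_unmap_sg() etc. */
}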