Hello,
A month ago, during 11.05 release testing, we had a case where a Pandaboard
didn't boot with one particular build and one particular card, while it
worked well with other combinations.

During this test cycle I hit similar conditions, which I was able to trace
to the following: if, at the moment of reset/boot, a cable is connected
from the Panda USB OTG (small) connector to a host, the board won't boot
from SD. The D2 LED lights for about a second, then both LEDs go off.
Nothing is output on serial. As soon as you unplug the cable from the
host, everything boots again.

I'm not sure whether this is a known condition, but I figured I'd share it.
--
Best Regards,
Paul
The affinity between ARM processors is defined in the MPIDR register.
From it we can identify which processors are in the same cluster and
which ones have a performance interdependency, and thus define the cpu
topology of an ARM platform, which is then used by sched_mc and sched_smt.
The sched_mc and sched_smt config options default to disabled. When
enabled, the behavior of the scheduler can be modified with the
sched_mc_power_savings and sched_smt_power_savings sysfs interfaces.
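For reference, the decoding that the patch's store_cpu_topology() applies can be sketched as a stand-alone C function. The masks below mirror those in arch/arm/kernel/topology.c, but decode_mpidr() and struct topo are illustrative names, not part of the patch:

```c
/* Stand-alone sketch of the MPIDR affinity decoding performed by
 * store_cpu_topology(); masks mirror arch/arm/kernel/topology.c. */
#define MPIDR_MT_BITMASK	(0x1 << 24)

#define MPIDR_LEVEL0_MASK	0x3
#define MPIDR_LEVEL0_SHIFT	0
#define MPIDR_LEVEL1_MASK	0xF
#define MPIDR_LEVEL1_SHIFT	8
#define MPIDR_LEVEL2_MASK	0xFF
#define MPIDR_LEVEL2_SHIFT	16

struct topo {
	int thread_id;
	int core_id;
	int socket_id;
};

static struct topo decode_mpidr(unsigned int mpidr)
{
	struct topo t;

	if (mpidr & MPIDR_MT_BITMASK) {
		/* SMT: level 0 is the thread, level 1 the core, level 2 the cluster */
		t.thread_id = (mpidr >> MPIDR_LEVEL0_SHIFT) & MPIDR_LEVEL0_MASK;
		t.core_id = (mpidr >> MPIDR_LEVEL1_SHIFT) & MPIDR_LEVEL1_MASK;
		t.socket_id = (mpidr >> MPIDR_LEVEL2_SHIFT) & MPIDR_LEVEL2_MASK;
	} else {
		/* largely independent cores: level 0 is the core, level 1 the cluster */
		t.thread_id = -1;
		t.core_id = (mpidr >> MPIDR_LEVEL0_SHIFT) & MPIDR_LEVEL0_MASK;
		t.socket_id = (mpidr >> MPIDR_LEVEL1_SHIFT) & MPIDR_LEVEL1_MASK;
	}
	return t;
}
```

So on a Cortex-A9 MP system (no MT bit), MPIDR 0x80000001 decodes to core 1 in cluster 0 with no thread level.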
Changes since v3:
* Update the format of the printk message
* Remove a blank line
Changes since v2:
* Update the commit message and some comments
Changes since v1:
* Update the commit message
* Add read_cpuid_mpidr in arch/arm/include/asm/cputype.h
* Modify header of arch/arm/kernel/topology.c
* Modify tests and manipulation of MPIDR's bitfields
* Modify the place and dependency of the config
* Modify Noop functions
Signed-off-by: Vincent Guittot <vincent.guittot(a)linaro.org>
Reviewed-by: Amit Kucheria <amit.kucheria(a)linaro.org>
---
arch/arm/Kconfig | 25 +++++++
arch/arm/include/asm/cputype.h | 6 ++
arch/arm/include/asm/topology.h | 33 +++++++++
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/smp.c | 6 ++
arch/arm/kernel/topology.c | 149 +++++++++++++++++++++++++++++++++++++++
6 files changed, 220 insertions(+), 0 deletions(-)
create mode 100644 arch/arm/kernel/topology.c
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 9adc278..f327e55 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1344,6 +1344,31 @@ config SMP_ON_UP
If you don't know what to do here, say Y.
+config ARM_CPU_TOPOLOGY
+ bool "Support cpu topology definition"
+ depends on SMP && CPU_V7
+ default y
+ help
+ Support ARM cpu topology definition. The MPIDR register defines
+ affinity between processors which is then used to describe the cpu
+ topology of an ARM System.
+
+config SCHED_MC
+ bool "Multi-core scheduler support"
+ depends on ARM_CPU_TOPOLOGY
+ help
+ Multi-core scheduler support improves the CPU scheduler's decision
+ making when dealing with multi-core CPU chips at a cost of slightly
+ increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+ bool "SMT scheduler support"
+ depends on ARM_CPU_TOPOLOGY
+ help
+ Improves the CPU scheduler's decision making when dealing with
+ MultiThreading at a cost of slightly increased overhead in some
+ places. If unsure say N here.
+
config HAVE_ARM_SCU
bool
depends on SMP
diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h
index cd4458f..cb47d28 100644
--- a/arch/arm/include/asm/cputype.h
+++ b/arch/arm/include/asm/cputype.h
@@ -8,6 +8,7 @@
#define CPUID_CACHETYPE 1
#define CPUID_TCM 2
#define CPUID_TLBTYPE 3
+#define CPUID_MPIDR 5
#define CPUID_EXT_PFR0 "c1, 0"
#define CPUID_EXT_PFR1 "c1, 1"
@@ -70,6 +71,11 @@ static inline unsigned int __attribute_const__ read_cpuid_tcmstatus(void)
return read_cpuid(CPUID_TCM);
}
+static inline unsigned int __attribute_const__ read_cpuid_mpidr(void)
+{
+ return read_cpuid(CPUID_MPIDR);
+}
+
/*
* Intel's XScale3 core supports some v6 features (supersections, L2)
* but advertises itself as v5 as it does not support the v6 ISA. For
diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
index accbd7c..63a7454 100644
--- a/arch/arm/include/asm/topology.h
+++ b/arch/arm/include/asm/topology.h
@@ -1,6 +1,39 @@
#ifndef _ASM_ARM_TOPOLOGY_H
#define _ASM_ARM_TOPOLOGY_H
+#ifdef CONFIG_ARM_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cputopo_arm {
+ int thread_id;
+ int core_id;
+ int socket_id;
+ cpumask_t thread_sibling;
+ cpumask_t core_sibling;
+};
+
+extern struct cputopo_arm cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu) (cpu_topology[cpu].socket_id)
+#define topology_core_id(cpu) (cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu) (&(cpu_topology[cpu].core_sibling))
+#define topology_thread_cpumask(cpu) (&(cpu_topology[cpu].thread_sibling))
+
+#define mc_capable() (cpu_topology[0].socket_id != -1)
+#define smt_capable() (cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(unsigned int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { };
+static inline void store_cpu_topology(unsigned int cpuid) { };
+
+#endif
+
#include <asm-generic/topology.h>
#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index a5b31af..816a481 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -61,6 +61,7 @@ obj-$(CONFIG_IWMMXT) += iwmmxt.o
obj-$(CONFIG_CPU_HAS_PMU) += pmu.o
obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o
AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt
+obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o
ifneq ($(CONFIG_ARCH_EBSA110),y)
obj-y += io.o
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 344e52b..3e8dc3b 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -31,6 +31,7 @@
#include <asm/cacheflush.h>
#include <asm/cpu.h>
#include <asm/cputype.h>
+#include <asm/topology.h>
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
@@ -268,6 +269,9 @@ static void __cpuinit smp_store_cpu_info(unsigned int cpuid)
struct cpuinfo_arm *cpu_info = &per_cpu(cpu_data, cpuid);
cpu_info->loops_per_jiffy = loops_per_jiffy;
+
+ store_cpu_topology(cpuid);
+
}
/*
@@ -354,6 +358,8 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
{
unsigned int ncores = num_possible_cpus();
+ init_cpu_topology();
+
smp_store_cpu_info(smp_processor_id());
/*
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
new file mode 100644
index 0000000..e8f3b95
--- /dev/null
+++ b/arch/arm/kernel/topology.c
@@ -0,0 +1,149 @@
+/*
+ * arch/arm/kernel/topology.c
+ *
+ * Copyright (C) 2011 Linaro Limited.
+ * Written by: Vincent Guittot
+ *
+ * based on arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/cputype.h>
+#include <asm/topology.h>
+
+#define MPIDR_SMP_BITMASK (0x3 << 30)
+#define MPIDR_SMP_VALUE (0x2 << 30)
+
+#define MPIDR_MT_BITMASK (0x1 << 24)
+
+/*
+ * These masks reflect the current use of the affinity levels.
+ * The affinity level can be up to 16 bits according to ARM ARM
+ */
+
+#define MPIDR_LEVEL0_MASK 0x3
+#define MPIDR_LEVEL0_SHIFT 0
+
+#define MPIDR_LEVEL1_MASK 0xF
+#define MPIDR_LEVEL1_SHIFT 8
+
+#define MPIDR_LEVEL2_MASK 0xFF
+#define MPIDR_LEVEL2_SHIFT 16
+
+struct cputopo_arm cpu_topology[NR_CPUS];
+
+const struct cpumask *cpu_coregroup_mask(unsigned int cpu)
+{
+ return &(cpu_topology[cpu].core_sibling);
+}
+
+/*
+ * store_cpu_topology is called at boot when only one cpu is running
+ * and with the mutex cpu_hotplug.lock locked, when several cpus have booted,
+ * which prevents simultaneous write access to cpu_topology array
+ */
+void store_cpu_topology(unsigned int cpuid)
+{
+ struct cputopo_arm *cpuid_topo = &(cpu_topology[cpuid]);
+ unsigned int mpidr;
+ unsigned int cpu;
+
+ /* If the cpu topology has been already set, just return */
+ if (cpuid_topo->core_id != -1)
+ return;
+
+ mpidr = read_cpuid_mpidr();
+
+ /* create cpu topology mapping */
+ if ((mpidr & MPIDR_SMP_BITMASK) == MPIDR_SMP_VALUE) {
+ /*
+ * This is a multiprocessor system
+ * multiprocessor format & multiprocessor mode field are set
+ */
+
+ if (mpidr & MPIDR_MT_BITMASK) {
+ /* core performance interdependency */
+ cpuid_topo->thread_id = ((mpidr >> MPIDR_LEVEL0_SHIFT)
+ & MPIDR_LEVEL0_MASK);
+ cpuid_topo->core_id = ((mpidr >> MPIDR_LEVEL1_SHIFT)
+ & MPIDR_LEVEL1_MASK);
+ cpuid_topo->socket_id = ((mpidr >> MPIDR_LEVEL2_SHIFT)
+ & MPIDR_LEVEL2_MASK);
+ } else {
+ /* largely independent cores */
+ cpuid_topo->thread_id = -1;
+ cpuid_topo->core_id = ((mpidr >> MPIDR_LEVEL0_SHIFT)
+ & MPIDR_LEVEL0_MASK);
+ cpuid_topo->socket_id = ((mpidr >> MPIDR_LEVEL1_SHIFT)
+ & MPIDR_LEVEL1_MASK);
+ }
+ } else {
+ /*
+ * This is an uniprocessor system
+ * we are in multiprocessor format but uniprocessor system
+ * or in the old uniprocessor format
+ */
+
+ cpuid_topo->thread_id = -1;
+ cpuid_topo->core_id = 0;
+ cpuid_topo->socket_id = -1;
+ }
+
+ /* update core and thread sibling masks */
+ for_each_possible_cpu(cpu) {
+ struct cputopo_arm *cpu_topo = &(cpu_topology[cpu]);
+
+ if (cpuid_topo->socket_id == cpu_topo->socket_id) {
+ cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu,
+ &cpuid_topo->core_sibling);
+
+ if (cpuid_topo->core_id == cpu_topo->core_id) {
+ cpumask_set_cpu(cpuid,
+ &cpu_topo->thread_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu,
+ &cpuid_topo->thread_sibling);
+ }
+ }
+ }
+ smp_wmb();
+
+ printk(KERN_INFO "CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
+ cpuid, cpu_topology[cpuid].thread_id,
+ cpu_topology[cpuid].core_id,
+ cpu_topology[cpuid].socket_id, mpidr);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void init_cpu_topology(void)
+{
+ unsigned int cpu;
+
+ /* init core mask */
+ for_each_possible_cpu(cpu) {
+ struct cputopo_arm *cpu_topo = &(cpu_topology[cpu]);
+
+ cpu_topo->thread_id = -1;
+ cpu_topo->core_id = -1;
+ cpu_topo->socket_id = -1;
+ cpumask_clear(&cpu_topo->core_sibling);
+ cpumask_clear(&cpu_topo->thread_sibling);
+ }
+ smp_wmb();
+}
--
1.7.4.1
How significant is the cache maintenance overhead?

It depends. eMMC devices are much faster now than a few years ago, while
cache maintenance costs more due to multiple cache levels and speculative
cache pre-fetch. Relatively speaking, the cost of handling the caches has
increased and is now a bottleneck when dealing with a fast eMMC together
with DMA.
The intention of introducing non-blocking mmc requests is to minimize the
time between one mmc request ending and the next starting. In the current
implementation the MMC controller is idle while dma_map_sg and
dma_unmap_sg are processing. Introducing non-blocking mmc requests makes
it possible to prepare the caches for the next job in parallel with an
active mmc request. This is done by making issue_rw_rq() non-blocking.

The increase in throughput is proportional to the time it takes to
prepare a request (the major part of the preparation being dma_map_sg and
dma_unmap_sg) and to how fast the memory is: the faster the MMC/SD, the
more significant the prepare time becomes. Measurements on U5500 and
Panda with eMMC and SD show a significant performance gain for large
reads when running in DMA mode. In the PIO case the performance is
unchanged.

There are two optional hooks, pre_req() and post_req(), that the host
driver may implement in order to move work to before and after the actual
mmc_request function is called. In the DMA case pre_req() may do the
dma_map_sg() and prepare the dma descriptor, and post_req() runs the
dma_unmap_sg().
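The overlap these hooks enable can be sketched as a toy host-side model: while request i is in flight on the controller, request i+1 is prepared. The names below are illustrative, not the mmc API; "pre" stands in for pre_req()/dma_map_sg() and "post" for post_req()/dma_unmap_sg():

```c
#include <stdio.h>
#include <string.h>

/* Toy model of the non-blocking flow: record the order of operations for
 * nreq requests, preparing request i+1 while request i is in flight. */
static void issue_pipelined(int nreq, char *out)
{
	int i;

	out[0] = '\0';
	if (nreq <= 0)
		return;
	sprintf(out + strlen(out), "pre0 ");
	for (i = 0; i < nreq; i++) {
		sprintf(out + strlen(out), "start%d ", i);
		/* overlap: map the next sg list while request i runs */
		if (i + 1 < nreq)
			sprintf(out + strlen(out), "pre%d ", i + 1);
		sprintf(out + strlen(out), "wait%d post%d ", i, i);
	}
}
```

With two requests this yields "pre0 start0 pre1 wait0 post0 start1 wait1 post1": the second request's preparation overlaps the first transfer, which is exactly the idle window the blocking implementation wastes.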
Details on measurements from IOZone and mmc_test:
https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
Changes since v7:
* rebase on mmc-next, on top of Russell's updated error handling.
* Clarify description of mmc_start_req()
* Resolve compile failure without CONFIG_DMA_ENGINE for mmci
* Add mmc test to measure how performance is affected by sg length
* Add missing wait_for_busy in mmc_test non-blocking test. This call got lost
in v4 of this patchset when refactoring mmc_start_req.
* Add sub-prefix (core block queue) to relevant patches.
Per Forlin (12):
mmc: core: add non-blocking mmc request function
omap_hsmmc: add support for pre_req and post_req
mmci: implement pre_req() and post_req()
mmc: mmc_test: add debugfs file to list all tests
mmc: mmc_test: add test for non-blocking transfers
mmc: mmc_test: test to measure how sg_len affect performance
mmc: block: add member in mmc queue struct to hold request data
mmc: block: add a block request prepare function
mmc: block: move error code in issue_rw_rq to a separate function.
mmc: queue: add a second mmc queue request member
mmc: core: add random fault injection
mmc: block: add handling for two parallel block requests in
issue_rw_rq
drivers/mmc/card/block.c | 505 ++++++++++++++++++++++++-----------------
drivers/mmc/card/mmc_test.c | 491 ++++++++++++++++++++++++++++++++++++++--
drivers/mmc/card/queue.c | 184 ++++++++++------
drivers/mmc/card/queue.h | 33 ++-
drivers/mmc/core/core.c | 167 +++++++++++++-
drivers/mmc/core/debugfs.c | 5 +
drivers/mmc/host/mmci.c | 147 +++++++++++-
drivers/mmc/host/mmci.h | 8 +
drivers/mmc/host/omap_hsmmc.c | 87 +++++++-
include/linux/mmc/core.h | 6 +-
include/linux/mmc/host.h | 24 ++
lib/Kconfig.debug | 11 +
12 files changed, 1345 insertions(+), 323 deletions(-)
--
1.7.4.1
Changes since v5:
* Fix spelling mistakes: replace "none blocking" with "non-blocking".
* Exclude the patch "omap_hsmmc: use original sg_len..." since it is
being merged separately.
Per Forlin (11):
mmc: add non-blocking mmc request function
omap_hsmmc: add support for pre_req and post_req
mmci: implement pre_req() and post_req()
mmc: mmc_test: add debugfs file to list all tests
mmc: mmc_test: add test for non-blocking transfers
mmc: add member in mmc queue struct to hold request data
mmc: add a block request prepare function
mmc: move error code in mmc_block_issue_rw_rq to a separate function.
mmc: add a second mmc queue request member
mmc: test: add random fault injection in core.c
mmc: add handling for two parallel block requests in issue_rw_rq
drivers/mmc/card/block.c | 534 ++++++++++++++++++++++++-----------------
drivers/mmc/card/mmc_test.c | 361 +++++++++++++++++++++++++++-
drivers/mmc/card/queue.c | 184 +++++++++-----
drivers/mmc/card/queue.h | 33 ++-
drivers/mmc/core/core.c | 165 ++++++++++++-
drivers/mmc/core/debugfs.c | 5 +
drivers/mmc/host/mmci.c | 146 ++++++++++-
drivers/mmc/host/mmci.h | 8 +
drivers/mmc/host/omap_hsmmc.c | 87 +++++++-
include/linux/mmc/core.h | 6 +-
include/linux/mmc/host.h | 24 ++
lib/Kconfig.debug | 11 +
12 files changed, 1235 insertions(+), 329 deletions(-)
--
1.7.4.1
We have both desktop (for general graphics/media stuff) and mobile
tracks at this year's LPC.
So if you're working on a topic related to one of the above areas,
especially one that has open issues or spans multiple parts of the
stack, please submit a topic for discussion at
http://www.linuxplumbersconf.org/2011/ocw/events/LPC2011MC/proposals/new
against the appropriate track.
If you've already submitted a talk to one of these tracks, you will
likely be hearing from us over the next week.
We're past early-bird registration, but you can still sign up and attend.
Thanks,
Jesse Barker, desktop track lead
Daniel Stone, mobile track lead
These tests are used to test the cpufreq driver on the ARM architecture.
As the ARM cpufreq support is not yet complete, the test suite is based
on the cpufreq sysfs API exported on the Intel architecture, assuming it
is consistent across architectures.

The different tests are described at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts

Each test's header contains a URL to the related anchor on this web page
describing the script.
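Since the suite drives everything through the standard cpufreq sysfs files, the parsing involved is simple. As a sketch (parse_freqs() is an illustrative helper, not part of the test scripts), reading the space-separated kHz values exposed by scaling_available_frequencies amounts to:

```c
#include <stdlib.h>

/* Parse the space-separated kHz values exposed by the cpufreq sysfs
 * file scaling_available_frequencies into out[], returning the count.
 * Illustrative helper only; the real tests are shell scripts. */
static int parse_freqs(const char *s, unsigned long *out, int max)
{
	int n = 0;
	char *end;

	while (n < max) {
		unsigned long v = strtoul(s, &end, 10);
		if (end == s)	/* no more digits */
			break;
		out[n++] = v;
		s = end;
	}
	return n;
}
```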
Daniel Lezcano (2):
cpufreq: add a test set for cpufreq
cpufreq: check that the frequency affects performance
Makefile | 6 ++
cpufreq/test_01.sh | 43 ++++++++++
cpufreq/test_02.sh | 43 ++++++++++
cpufreq/test_03.sh | 64 ++++++++++++++
cpufreq/test_04.sh | 85 +++++++++++++++++++
cpufreq/test_05.sh | 145 ++++++++++++++++++++++++++++++++
cpufreq/test_06.sh | 236 ++++++++++++++++++++++++++++++++++++++++++++++++++++
utils/Makefile | 11 +++
utils/cpucycle.c | 102 ++++++++++++++++++++++
utils/nanosleep.c | 45 ++++++++++
10 files changed, 780 insertions(+), 0 deletions(-)
create mode 100644 cpufreq/test_01.sh
create mode 100644 cpufreq/test_02.sh
create mode 100644 cpufreq/test_03.sh
create mode 100644 cpufreq/test_04.sh
create mode 100644 cpufreq/test_05.sh
create mode 100644 cpufreq/test_06.sh
create mode 100644 utils/Makefile
create mode 100644 utils/cpucycle.c
create mode 100644 utils/nanosleep.c
--
1.7.4.1
Excellent work everyone. This is a superb release!
On Thu, Jun 30, 2011 at 2:16 PM, Fathi Boudra <fathi.boudra(a)linaro.org> wrote:
> Hi,
>
> The Linaro Team is pleased to announce the release of Linaro 11.06.
>
> 11.06 is Linaro’s first release delivered on the new monthly cadence.
> Since we started focusing on monthly component releases, activity in the
> engineering teams has been channeled into producing a coherent set of packages.
> This allows anyone to witness development of new features and fixes as the team
> progresses towards its goals. This month’s release highlights the results:
> a host of new components are now available, including LAVA packages from the
> Platform Validation Team, a collection of SoC-specific kernels provided by the
> Landing Teams, and preview releases of Graphics and Multimedia Working Groups
> work ranging from Unity 3D to a NEON-optimized libjpeg-turbo. In addition,
> another solid set of toolchain components arrives, topped by a Linaro GCC 4.6
> release that should start making a very good impression on benchmarks near you.
>
> We encourage everybody to use the 11.06 release. The download links for all
> images and components are available on our release page:
>
> http://wiki.linaro.org/Cycles/1106/Release
>
> Highlights of this release:
>
> * The Linaro Evaluation Build (LEB) for Ubuntu comes with the full 3D Unity
> desktop experience enabled on PandaBoard. It's powered by Compiz and relies
> on the Nux toolkit for its rendering.
> * The Linaro Evaluation Build (LEB) for Android on PandaBoard comes with the
>   latest stable 2.6.38 kernel from Linaro's TI Landing Team and is built using
>   Linaro's GCC 4.5 2011.06 release. Also, the latest Linaro toolchain has been
>   packaged for Android, and benchmark results showing noticeable performance
>   gains compared to the Google AOSP gingerbread toolchain have been included
>   as part of the release documentation: http://bit.ly/jTAhWa
> * Initial preview releases of Ubuntu Hardware Packs for Snowball, Origen and
> Quickstart boards featuring the latest Linaro Landing Team components are
> available as part of this release.
> * Linaro GCC 4.6 2011.06 and GCC 4.5 2011.06 come with bugfixes and various
> performance optimizations with focus on vectoriser improvements. With this
> release the Linaro GCC 4.5 series enters maintenance mode, ensuring that
> development can be focused on making the "future" better.
> * Linaro QEMU 2011.06, based on upstream (trunk) QEMU. This version includes a
> number of ARM-focused bug fixes and enhancements like the support of a model
> of the Gumstix Overo board and the USB keyboard/mouse support on
> BeagleBoard.
> * Linaro Kernel 2.6.39 2011.06, based on the 2.6.39.1 stable kernel with a
> number of changes developed by Linaro and integrated from the 3.0-rc. It
> includes the ability to append Device Tree to zImage at build time, support
> for parallel async MMC requests and more...
> * Linaro U-Boot 2011.06.1, based on upstream version 2011.06-rc3 features USB,
> Network and TFTP boot for PandaBoard as well as initial PXE support.
> * The first full release of the LAVA components, Linaro's automated validation
>   solution, has been made available as part of our monthly releases.
> * QEMU with OpenGL ES acceleration - technology preview. For more details,
> please visit https://wiki.linaro.org/Platform/DevPlatform/QemuOpenGLES
> * The Unity, NUX and Compiz port for EGL/OpenGL ES v2 that are part of
> our Ubuntu LEB for this month are also made available as components
> maintained by Linaro's Graphics Working Group.
> * Linaro Image Tools 2011.06-1 features support for the --image_file
>   option in linaro-android-media-create and supports the new upstream name of
>   the smdkv310 SPL.
> * Powerdebug 0.5-2011.06 is a major rewrite of the code to put in place a
>   generic framework that makes it easier to integrate new components like the
>   thermal sensors. It's more modular and decreases the dependency between the
>   display and the power management blocks.
> * And much more... The release details are linked from the "Details" column
> for each release artifact on the 11.06 release page.
>
> Using the Android-based images
> ==============================
>
> The Android-based images come in three parts: system, userdata and boot.
> These need to be combined to form a complete Android install. For an
> explanation of how to do this please see:
>
> http://wiki.linaro.org/Platform/Android/ImageInstallation
>
> If you are interested in getting the source and building these images
> yourself please see the following pages:
>
> http://wiki.linaro.org/Platform/Android/GetSource
> http://wiki.linaro.org/Platform/Android/BuildSource
>
> Using the Ubuntu-based images
> =============================
>
> The Ubuntu-based images consist of two parts. The first part is a hardware
> pack, which can be found under the hwpacks directory and contains hardware
> specific packages (such as the kernel and bootloader). The second part is
> the rootfs, which is combined with the hardware pack to create a complete
> image. For more information on how to create an image please see:
>
> http://wiki.linaro.org/Platform/DevPlatform/Ubuntu/ImageInstallation
>
> Getting involved
> ================
>
> More information on Linaro can be found on our websites:
>
> * Homepage: http://www.linaro.org
> * Wiki: http://wiki.linaro.org
>
> Also subscribe to the important Linaro mailing lists and join our IRC
> channels to stay on top of Linaro developments:
>
> * Announcements:
> http://lists.linaro.org/mailman/listinfo/linaro-announce
> * Development:
> http://lists.linaro.org/mailman/listinfo/linaro-dev
> * IRC:
> #linaro on irc.linaro.org or irc.freenode.net
> #linaro-android irc.linaro.org or irc.freenode.net
>
> Known issues with this release
> ==============================
>
> For any errata issues, please see:
>
> http://wiki.linaro.org/Cycles/1106/Release#Known_Issues
>
> Bug reports for this release should be filed in Launchpad against the
> individual packages that are affected. If a suitable package cannot be
> identified, feel free to assign them to:
>
> http://www.launchpad.net/linaro
>
> Cheers,
>
> --
> Fathi Boudra
> Linaro Release Manager | Platform Project Manager
> Linaro.org | Open source software for ARM SoCs
>
> _______________________________________________
> linaro-announce mailing list
> linaro-announce(a)lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/linaro-announce
>
Hi.
This is just a reminder for Michael, but I think it's worth sharing with
the rest of the validation team.

When using lava-dev-tool to hack on lava-dashboard, or on any other
component based on lava-server, the location of the database and the
actual data it contains are easily lost.
I think we need to stop using the "development" settings soon and switch
to a django-debian based model where we just use a different
configuration file for local development. This would make the testing
environment much more similar to production (django-debian would
definitely see more testing this way) and would help us locate and
control application state.
I would like to propose the following changes. When hacking, lava-server
would set up the following files (all relative to the localenv root):
/etc/lava-server/default_database.conf <- location of the database
/etc/lava-server/settings.conf <- django-debian way to alter settings.py
/var/lib/lava-server/media/ <- upload root
/var/lib/lava-server/static/ <- django-staticfiles root
/var/lib/lava-server/database.db (only when using sqlite)
For postgresql (which I would like to use more by default) this is
roughly the same, except that default_database.conf points at something
other than the file mentioned above.
I'll see how I can make that happen in practice.
Thanks
ZK