Signed-off-by: Grant Likely <grant.likely(a)secretlab.ca>
---
Hey all,
This is an early draft of the usage model document for the device
tree, but I wanted to get it out there for feedback, and so that some
of the Linaro engineers could get started on migrating board ports.
g.
Documentation/devicetree/usage-model | 403 ++++++++++++++++++++++++++++++++++
1 files changed, 403 insertions(+), 0 deletions(-)
create mode 100644 Documentation/devicetree/usage-model
diff --git a/Documentation/devicetree/usage-model b/Documentation/devicetree/usage-model
new file mode 100644
index 0000000..4203119
--- /dev/null
+++ b/Documentation/devicetree/usage-model
@@ -0,0 +1,403 @@
+Linux and the Device Tree
+The Linux usage model for device tree data
+
+Author: Grant Likely <grant.likely(a)secretlab.ca>
+
+This article describes how Linux uses the device tree. An overview of
+the device tree data format can be found at the <a
+href="http://devicetree.org/Device_Tree_Usage">Device Tree Usage</a>
+page on <a href="http://devicetree.org">devicetree.org</a>.
+
+
+ All the cool architectures are using device tree. I want to
+ use device tree too!
+
+The "Open Firmware Device Tree", or simply Device Tree (DT), is a data
+structure and language for describing hardware. More specifically, it
+is a description of hardware that is readable by an operating system
+so that the operating system doesn't need to hard code details of the
+machine.
+
+Structurally, the DT is a tree, or an acyclic graph with named nodes, and
+nodes may have an arbitrary number of named properties encapsulating
+arbitrary data. A mechanism also exists to create arbitrary
+links from one node to another outside of the natural tree structure.
+
+Conceptually, a common set of usage conventions, called 'bindings',
+is defined for how data should appear in the tree to describe typical
+hardware characteristics including data busses, interrupt lines, gpio
+connections, and peripheral devices.
+
+As much as possible, hardware is described using existing bindings to
+maximize use of existing support code, but since property and node
+names are simply text strings, it is easy to extend existing bindings
+or create new ones by defining new nodes and properties.
+
+<h2>History</h2>
+The DT was originally created by Open Firmware as part of the
+communication method for passing data from Open Firmware to a client
+program (like an operating system). An operating system used the
+Device Tree to discover the topology of the hardware at runtime, and
+thereby support a majority of available hardware without hard coded
+information (assuming drivers were available for all devices).
+
+Since Open Firmware is commonly used on PowerPC and SPARC platforms,
+the Linux support for those architectures has for a long time used the
+Device Tree.
+
+In 2005, when PowerPC Linux began a major cleanup and started to merge
+32-bit and 64-bit support, the decision was made to require DT support on all
+powerpc platforms, regardless of whether or not they used Open
+Firmware. To do this, a DT representation called the Flattened Device
+Tree (FDT) was created which could be passed to the kernel as a binary
+blob without requiring a real Open Firmware implementation. U-Boot,
+kexec, and other bootloaders were modified to support both passing a
+Device Tree Binary (dtb) and to modify a dtb at boot time.
+
+Some time later, FDT infrastructure was generalized to be usable by
+all architectures. At the time of this writing, 5 mainlined
+architectures (arm, mips, powerpc, sparc, and x86) and one
+out-of-mainline architecture (nios) have some level of DT support.
+
+<h2>Data Model</h2>
+If you haven't already read the <a
+href="http://devicetree.org/Device_Tree_Usage">Device Tree Usage</a>
+page, then go read it now. It's okay, I'll wait....
+
+<h3>High Level View</h3>
+The most important thing to understand is that the DT is simply a data
+structure that describes the hardware. There is nothing magical about
+it, and it doesn't magically make all hardware configuration problems
+go away. What it does do is provide a language for decoupling the
+hardware configuration from the board and device driver support in the
+Linux kernel (or any other operating system for that matter). Using
+it allows board and device support to become data driven; to make
+setup decisions based on data passed into the kernel instead of on
+per-machine hard coded selections.
+
+Ideally, data driven platform setup should result in less code
+duplication and make it easier to support a wide range of hardware
+with a single kernel image.
+
+Linux uses DT data for three major purposes:
+1) platform identification,
+2) runtime configuration, and
+3) device population.
+
+<h4>Platform Identification</h4>
+First and foremost, the kernel will use data in the DT to identify the
+specific machine. In a perfect world, the specific platform shouldn't
+matter to the kernel because all platform details would be described
+perfectly by the device tree in a consistent and reliable manner.
+Hardware is not perfect though, and so the kernel must identify the
+machine during early boot so that it has the opportunity to run
+machine specific fixups.
+
+In the majority of cases, the machine identity is irrelevant, and the
+kernel will instead select setup code based on the machine's core
+CPU or SoC. On ARM for example, setup_arch() in
+arch/arm/kernel/setup.c will call setup_machine_fdt() in
+arch/arm/kernel/devicetree.c which searches through the machine_desc
+table and selects the machine_desc which best matches the device tree
+data. It determines the best match by looking at the 'compatible'
+property in the root device tree node, and comparing it with the
+dt_compat list in struct machine_desc.
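To make the selection logic concrete, here is a hypothetical, userspace-only sketch of the scoring described above. All the toy_* names and structures are invented for illustration; the real implementation lives in arch/arm/kernel/devicetree.c:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for the kernel's machine_desc; only dt_compat matters here. */
struct toy_machine_desc {
	const char *name;
	const char *const *dt_compat;	/* NULL-terminated list */
};

/*
 * Return the index into the root node's 'compatible' list of the first
 * entry this machine_desc matches, or -1 for no match.  A lower index
 * means a more specific (better) match.
 */
int toy_match_score(const struct toy_machine_desc *m,
		    const char *const *root_compatible)
{
	for (int i = 0; root_compatible[i]; i++)
		for (int j = 0; m->dt_compat[j]; j++)
			if (!strcmp(root_compatible[i], m->dt_compat[j]))
				return i;
	return -1;
}

/* Pick the machine_desc with the lowest (best) score, or NULL if none match. */
const struct toy_machine_desc *
toy_select_machine(const struct toy_machine_desc *table, size_t n,
		   const char *const *root_compatible)
{
	const struct toy_machine_desc *best = NULL;
	int best_score = -1;

	for (size_t i = 0; i < n; i++) {
		int score = toy_match_score(&table[i], root_compatible);
		if (score >= 0 && (best_score < 0 || score < best_score)) {
			best = &table[i];
			best_score = score;
		}
	}
	return best;
}
```

With the BeagleBoard xM compatible list from the example below, a generic omap3 machine_desc would match at index 2, and a desc listing only the original board would not match at all.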
+
+The 'compatible' property contains a sorted list of strings starting
+with the exact name of the machine, followed by an optional list of
+boards it is compatible with, sorted from most specific to least. For
+example, the root compatible properties for the TI BeagleBoard and its
+successor, the BeagleBoard xM board might look like:
+
+ compatible = "ti,omap3-beagleboard", "ti,omap3450", "ti,omap3";
+ compatible = "ti,omap3-beagleboard-xm", "ti,omap3450", "ti,omap3";
+
+Where "ti,omap3-beagleboard-xm" specifies the exact model, it also
+claims that it is compatible with the OMAP 3450 SoC, and the omap3 family
+of SoCs in general. You'll notice that the list is sorted from most
+specific (exact board) to least specific (SoC family).
+
+Astute readers might point out that the Beagle xM could also claim
+compatibility with the original Beagle board. However, one should be
+cautious about doing so at the board level, since there is typically so
+much change from one board to another, even within the same
+product line, that it is hard to nail down exactly what is meant when one
+board claims to be compatible with another. For the top level, it is
+better to err on the side of caution and not claim one board is
+compatible with another. The notable exception would be when one
+board is a carrier for another, such as a cpu module attached to a
+carrier board.
+
+One more note on compatible values. Any string used in a compatible
+property must be documented as to what it indicates. Add
+documentation for compatible strings in Documentation/devicetree/bindings.
+
+Again on ARM, for each machine_desc, the kernel looks to see if
+any of the dt_compat list entries appear in the compatible property.
+If one does, then that machine_desc is a candidate for driving the
+machine. After searching the entire table of machine_descs,
+setup_machine_fdt() returns the 'most compatible' machine_desc based
+on which entry in the compatible property each machine_desc matches
+against. If no matching machine_desc is found, then it returns NULL.
+
+The reasoning behind this scheme is the observation that in the majority
+of cases, a single machine_desc can support a large number of boards
+if they all use the same SoC, or same family of SoCs. However,
+invariably there will be some exceptions where a specific board will
+require special setup code that is not useful in the generic case.
+Special cases could be handled by explicitly checking for the
+troublesome board(s) in generic setup code, but doing so very quickly
+becomes ugly and/or unmaintainable if it is more than just a couple of
+cases.
+
+Instead, the compatible list allows a generic machine_desc to provide
+support for a wide common set of boards by specifying a "less
+compatible" value in the dt_compat list. In the example above,
+generic board support can claim compatibility with "ti,omap3" or
+"ti,omap3450". If a bug was discovered on the original beagleboard
+that required special workaround code during early boot, then a new
+machine_desc could be added which implements the workarounds and only
+matches on "ti,omap3-beagleboard".
+
+PowerPC uses a slightly different scheme where it calls the .probe()
+hook from each machine_desc, and the first one returning TRUE is used.
+However, this approach does not take into account the priority of the
+compatible list, and probably should be avoided for new architecture
+support.
+
+<h4>Runtime configuration</h4>
+In most cases, a DT will be the sole method of communicating data from
+firmware to the kernel, so it is also used to pass in runtime and
+configuration data like the kernel parameters string and the location
+of an initrd image.
+
+Most of this data is contained in the /chosen node, and when booting
+Linux it will look something like this:
+
+ chosen {
+ bootargs = "console=ttyS0,115200 loglevel=8";
+ initrd-start = <0xc8000000>;
+ initrd-end = <0xc8200000>;
+ };
+
+The bootargs property contains the kernel arguments, and the initrd-*
+properties define the address and size of an initrd blob. The
+chosen node may also optionally contain an arbitrary number of
+additional properties for platform specific configuration data.
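As a toy illustration of consuming /chosen data, here is a small, self-contained sketch (not kernel code; the toy_bootarg_int() helper is invented) showing how an integer key=value parameter could be pulled out of a bootargs-style string:

```c
#include <stdio.h>
#include <string.h>

/*
 * Find "key=" in a bootargs-style string and parse its integer value.
 * Returns 0 on success, -1 if the key is absent.  Toy code only: the
 * real kernel parses bootargs with its own early-param machinery.
 */
int toy_bootarg_int(const char *bootargs, const char *key, int *value)
{
	size_t klen = strlen(key);
	const char *p;

	for (p = bootargs; (p = strstr(p, key)) != NULL; p += klen) {
		/* Must start the string or follow a space, and be followed by '=' */
		if ((p == bootargs || p[-1] == ' ') && p[klen] == '=')
			return sscanf(p + klen + 1, "%d", value) == 1 ? 0 : -1;
	}
	return -1;
}
```

Calling it with the example bootargs above and the key "loglevel" would yield 8.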
+
+During early boot, the architecture setup code calls of_scan_flat_dt()
+several times with different helper callbacks to parse device tree
+data before paging is set up. The of_scan_flat_dt() code scans through
+the device tree and uses the helpers to extract information required
+during early boot. Typically the early_init_dt_scan_chosen() helper
+is used to parse the chosen node including kernel parameters,
+early_init_dt_scan_root() to initialize the DT address space model,
+and early_init_dt_scan_memory() to determine the size and
+location of usable RAM.
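One detail worth noting: property values in the flattened tree are stored as big-endian 32-bit cells, so these helpers must byte-swap on little-endian CPUs. A minimal sketch of the decoding (the function name fdt_read_cell() is made up here; the kernel uses be32_to_cpu() and friends):

```c
#include <stdint.h>

/* FDT property values are sequences of big-endian 32-bit cells. */
uint32_t fdt_read_cell(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

For example, the four bytes c8 00 00 00 decode to 0xc8000000, the initrd-start value shown above, regardless of host endianness.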
+
+On ARM, the function setup_machine_fdt() is responsible for early
+scanning of the device tree after selecting the correct machine_desc
+that supports the board.
+
+<h4>Device population</h4>
+After the board has been identified, and after the early configuration data
+has been parsed, kernel initialization can proceed in the normal
+way. At some point in this process, unflatten_device_tree() is called
+to convert the data into a more efficient runtime representation.
+This is also when machine specific setup hooks will get called, like
+the machine_desc .init_early(), .init_irq() and .init_machine() hooks
+on ARM. The remainder of this section uses examples from the ARM
+implementation, but all architectures will do pretty much the same
+thing when using a DT.
+
+As can be guessed by the names, .init_early() is used for any machine
+specific setup that needs to be executed early in the boot process,
+and .init_irq() is used to set up interrupt handling. Using a DT
+doesn't materially change the behaviour of either of these functions.
+If a DT is provided, then both .init_early() and .init_irq() are able
+to call any of the DT query functions (of_* in include/linux/of*.h) to
+get additional data about the platform.
+
+The most interesting hook in the DT context is .init_machine() which
+is primarily responsible for populating the Linux device model with
+data about the platform. Historically this has been implemented on
+embedded platforms by defining a set of static clock structures,
+platform_devices, and other data in the board support .c file, and
+registering it en masse in .init_machine(). When DT is used, then
+instead of hard coding static devices for each platform, the list of
+devices can be obtained by parsing the DT, and allocating device
+structures dynamically.
+
+The simplest case is when .init_machine() is only responsible for
+registering a block of platform_devices. Platform devices are a concept
+used by Linux for memory- or I/O-mapped devices which cannot be detected
+by hardware, and for 'composite' or 'virtual' devices (more on those
+later). While there is no 'platform device' terminology for the DT,
+platform devices roughly correspond to device nodes at the root of the
+tree and children of simple memory mapped bus nodes.
+
+About now is a good time to lay out an example. Here is part of the
+device tree for the NVIDIA Tegra board.
+
+/{
+ compatible = "nvidia,harmony", "nvidia,tegra250";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupt-parent = <&intc>;
+
+ chosen { };
+ aliases { };
+
+ memory {
+ device_type = "memory";
+ reg = <0x00000000 0x40000000>;
+ };
+
+ soc {
+ compatible = "nvidia,tegra250-soc", "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ intc: interrupt-controller@50041000 {
+ compatible = "nvidia,tegra250-gic";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ reg = <0x50041000 0x1000>, < 0x50040100 0x0100 >;
+ };
+
+ serial@70006300 {
+ compatible = "nvidia,tegra250-uart";
+ reg = <0x70006300 0x100>;
+ interrupts = <122>;
+ };
+
+ i2s-1: i2s@70002800 {
+ compatible = "nvidia,tegra250-i2s";
+ reg = <0x70002800 0x100>;
+ interrupts = <77>;
+ codec = <&wm8903>;
+ };
+
+ i2c@7000c000 {
+ compatible = "nvidia,tegra250-i2c";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ reg = <0x7000c000 0x100>;
+ interrupts = <70>;
+
+ wm8903: codec@1a {
+ compatible = "wlf,wm8903";
+ reg = <0x1a>;
+ interrupts = <347>;
+ };
+ };
+ };
+
+ sound {
+ compatible = "nvidia,harmony-sound";
+ i2s-controller = <&i2s-1>;
+ i2s-codec = <&wm8903>;
+ };
+};
+
+At .init_machine() time, Tegra board support code will need to look at
+this DT and decide which nodes to create platform_devices for.
+However, looking at the tree, it is not immediately obvious what kind
+of device each node represents, or even if a node represents a device
+at all. The /chosen, /aliases, and /memory nodes are informational
+nodes that don't describe devices (although arguably memory could be
+considered a device). The children of the /soc node are memory mapped
+devices, but the codec@1a is an i2c device, and the sound node
+represents not a device, but rather how other devices are connected
+together to create the audio subsystem. I know what each device is
+because I'm familiar with the board design, but how does the kernel
+know what to do with each node?
+
+The trick is that the kernel starts at the root of the tree and looks
+for nodes that have a 'compatible' property. First, it is generally
+assumed that any node with a 'compatible' property represents a device
+of some kind, and second, it can be assumed that any node at the root
+of the tree is either directly attached to the processor bus, or is a
+miscellaneous system device that cannot be described any other way.
+For each of these nodes, Linux allocates and registers a
+platform_device, which in turn may get bound to a platform_driver.
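A toy model of that walk (all toy_* structures are invented here; the real logic lives in drivers/of/platform.c) might look like:

```c
#include <stddef.h>

/* Toy device tree node: just enough structure for the walk below. */
struct toy_node {
	const char *name;
	const char *compatible;		/* NULL when the node has none */
	const struct toy_node *child;	/* first child */
	const struct toy_node *sibling;	/* next sibling */
};

/*
 * Count the nodes a platform_device would be created for when only the
 * root's direct children are considered: any node with a 'compatible'
 * property, per the rule described above.
 */
int toy_count_root_devices(const struct toy_node *root)
{
	int count = 0;
	const struct toy_node *n;

	for (n = root->child; n; n = n->sibling)
		if (n->compatible)
			count++;
	return count;
}
```

Applied to the Tegra example, /chosen and /memory have no compatible property and are skipped, while /soc and /sound each get a platform_device.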
+
+Why is using a platform_device for these nodes a safe assumption?
+Well, for the way that Linux models devices, just about all bus_types
+assume that their devices are children of a bus controller. For
+example, each i2c_client is a child of an i2c_master. Each spi_device
+is a child of an spi bus. Similarly for USB, PCI, MDIO, etc. The
+same hierarchy is also found in the DT, where i2c device nodes only
+ever appear as children of an i2c bus node. Ditto for spi, mdio, usb,
+etc. The only devices which do not require a specific type of parent
+device are platform_devices (and amba_devices, but more on that
+later), which will happily live at the base of the Linux /sys/devices
+tree. Therefore, if a DT node is at the root of the tree, then it
+probably really is best registered as a platform_device.
+
+Linux board support code calls of_platform_populate(NULL, NULL, NULL)
+to kick off discovery of devices at the root of the tree. The
+parameters are all NULL because when starting from the root of the
+tree, there is no need to provide a starting node (the first NULL), a
+match table (the second NULL; more on that shortly), or a parent
+struct device (the last NULL). For a board that only needs to register devices,
+.init_machine() can be completely empty except for the
+of_platform_populate() call.
+
+In the Tegra example, this accounts for the /soc and /sound nodes, but
+what about the children of the soc node? Shouldn't they be registered
+as platform devices too? For Linux DT support, the generic behaviour
+is for child devices to be registered by the parent's device driver at
+driver .probe() time. So, an i2c bus device driver will register an
+i2c_client for each child node, an spi bus driver will register
+its spi_device children, and similarly for other bus_types.
+According to that model, a driver could be written that binds to the
+soc node and simply registers platform_devices for each of its
+children. The board support code would allocate and register an soc
+device, an soc device driver would bind to the soc device, and
+register platform_devices for /soc/interrupt-controller, /soc/serial,
+/soc/i2s, and /soc/i2c in its .probe() hook. Easy, right? Although
+it is a lot of mucking about for just registering platform devices.
+
+It turns out that registering children of certain platform_devices as
+more platform_devices is a common pattern, and the device tree support
+code reflects that. The second argument to of_platform_populate() is
+an of_device_id table, and any node that matches an entry in that
+table will also get its child nodes registered. In the Tegra case,
+the code can look something like this:
+
+static struct of_device_id harmony_bus_ids[] __initdata = {
+ { .compatible = "simple-bus", },
+ {}
+};
+
+static void __init harmony_init_machine(void)
+{
+ /* ... */
+ of_platform_populate(NULL, harmony_bus_ids, NULL);
+}
+
+"simple-bus" is defined in the ePAPR 1.0 specification as a compatible
+value meaning a simple memory-mapped bus, so the of_platform_populate() code
+could be written to just assume simple-bus compatible nodes will
+always be traversed. However, we pass it in as an argument so that
+board support code can always override the default behaviour.
+
+<h2>Appendix A: AMBA devices</h2>
+
+ARM Primecells are a certain kind of device attached to the ARM AMBA
+bus which include some support for hardware detection and power
+management. In Linux, struct amba_device and the amba_bus_type are
+used to represent Primecell devices. However, the fiddly bit is that
+not all devices on an AMBA bus are Primecells, and for Linux it is
+typical for both amba_device and platform_device instances to be
+siblings on the same bus segment.
+
+When using the DT, this creates problems for of_platform_populate()
+because it must decide whether to register each node as either a
+platform_device or an amba_device. This unfortunately complicates the
+device creation model a little bit, but the solution turns out not to
+be too invasive. If a node is compatible with "arm,primecell", then
+of_platform_populate() will register it as an amba_device instead of a
+platform_device.
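A sketch of that decision as a standalone function (toy code; the classifier name and enum are invented, and the check is keyed on the standard "arm,primecell" compatible value for Primecells):

```c
#include <string.h>

enum toy_dev_type { TOY_PLATFORM_DEV, TOY_AMBA_DEV };

/*
 * Classify a node by its NULL-terminated compatible list: Primecells
 * become amba_devices, everything else a platform_device.
 */
enum toy_dev_type toy_classify(const char *const *compatible)
{
	for (int i = 0; compatible[i]; i++)
		if (!strcmp(compatible[i], "arm,primecell"))
			return TOY_AMBA_DEV;
	return TOY_PLATFORM_DEV;
}
```

A PL011 UART node with compatible = "arm,pl011", "arm,primecell" would be classified as an AMBA device; the Tegra serial node above would remain a platform_device.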
Signed-off-by: Daniel Lezcano <daniel.lezcano(a)linaro.org>
---
Makefile | 2 +-
clocks.c | 62 +-------------------------------------------------------------
utils.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
utils.h | 22 ++++++++++++++++++++++
4 files changed, 79 insertions(+), 62 deletions(-)
create mode 100644 utils.c
create mode 100644 utils.h
diff --git a/Makefile b/Makefile
index d88b8ff..1a53121 100644
--- a/Makefile
+++ b/Makefile
@@ -4,7 +4,7 @@ MANDIR=/usr/share/man/man8
CFLAGS?=-O1 -g -Wall -Wshadow
CC?=gcc
-OBJS = powerdebug.o sensor.o clocks.o regulator.o display.o tree.o
+OBJS = powerdebug.o sensor.o clocks.o regulator.o display.o tree.o utils.o
default: powerdebug
diff --git a/clocks.c b/clocks.c
index 603ebe4..4d78910 100644
--- a/clocks.c
+++ b/clocks.c
@@ -28,6 +28,7 @@
#include "powerdebug.h"
#include "clocks.h"
#include "tree.h"
+#include "utils.h"
struct clock_info {
int flags;
@@ -75,67 +76,6 @@ static struct clock_info *clock_alloc(void)
return ci;
}
-/*
- * This functions is a helper to read a specific file content and store
- * the content inside a variable pointer passed as parameter, the format
- * parameter gives the variable type to be read from the file.
- *
- * @path : directory path containing the file
- * @name : name of the file to be read
- * @format : the format of the format
- * @value : a pointer to a variable to store the content of the file
- * Returns 0 on success, -1 otherwise
- */
-int file_read_value(const char *path, const char *name,
- const char *format, void *value)
-{
- FILE *file;
- char *rpath;
- int ret;
-
- ret = asprintf(&rpath, "%s/%s", path, name);
- if (ret < 0)
- return ret;
-
- file = fopen(rpath, "r");
- if (!file) {
- ret = -1;
- goto out_free;
- }
-
- ret = fscanf(file, format, value) == EOF ? -1 : 0;
-
- fclose(file);
-out_free:
- free(rpath);
- return ret;
-}
-
-static int file_read_from_format(const char *file, int *value,
- const char *format)
-{
- FILE *f;
- int ret;
-
- f = fopen(file, "r");
- if (!f)
- return -1;
- ret = fscanf(f, format, value);
- fclose(f);
-
- return !ret ? -1 : 0;
-}
-
-static inline int file_read_int(const char *file, int *value)
-{
- return file_read_from_format(file, value, "%d");
-}
-
-static inline int file_read_hex(const char *file, int *value)
-{
- return file_read_from_format(file, value, "%x");
-}
-
static inline const char *clock_rate(int *rate)
{
int r;
diff --git a/utils.c b/utils.c
new file mode 100644
index 0000000..e47c58e
--- /dev/null
+++ b/utils.c
@@ -0,0 +1,55 @@
+/*******************************************************************************
+ * Copyright (C) 2011, Linaro Limited.
+ *
+ * This file is part of PowerDebug.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Daniel Lezcano <daniel.lezcano(a)linaro.org> (IBM Corporation)
+ * - initial API and implementation
+ *******************************************************************************/
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#undef _GNU_SOURCE
+#include <stdlib.h>
+
+/*
+ * This function is a helper to read a specific file content and store
+ * the content inside a variable pointer passed as a parameter; the format
+ * parameter gives the variable type to be read from the file.
+ *
+ * @path : directory path containing the file
+ * @name : name of the file to be read
+ * @format : the scanf(3) format describing the value to be read
+ * @value : a pointer to a variable to store the content of the file
+ * Returns 0 on success, -1 otherwise
+ */
+int file_read_value(const char *path, const char *name,
+ const char *format, void *value)
+{
+ FILE *file;
+ char *rpath;
+ int ret;
+
+ ret = asprintf(&rpath, "%s/%s", path, name);
+ if (ret < 0)
+ return ret;
+
+ file = fopen(rpath, "r");
+ if (!file) {
+ ret = -1;
+ goto out_free;
+ }
+
+ ret = fscanf(file, format, value) == EOF ? -1 : 0;
+
+ fclose(file);
+out_free:
+ free(rpath);
+ return ret;
+}
diff --git a/utils.h b/utils.h
new file mode 100644
index 0000000..d4ac65a
--- /dev/null
+++ b/utils.h
@@ -0,0 +1,22 @@
+/*******************************************************************************
+ * Copyright (C) 2011, Linaro Limited.
+ *
+ * This file is part of PowerDebug.
+ *
+ * All rights reserved. This program and the accompanying materials
+ * are made available under the terms of the Eclipse Public License v1.0
+ * which accompanies this distribution, and is available at
+ * http://www.eclipse.org/legal/epl-v10.html
+ *
+ * Contributors:
+ * Daniel Lezcano <daniel.lezcano(a)linaro.org> (IBM Corporation)
+ * - initial API and implementation
+ *******************************************************************************/
+#ifndef __UTILS_H
+#define __UTILS_H
+
+extern int file_read_value(const char *path, const char *name,
+ const char *format, void *value);
+
+
+#endif
--
1.7.1
Hello,
This is my first attempt with the Linaro kernel build and I am having no
success in getting it to build.
My aim is to get a kernel Debian package built using the upstream changes.
I am following the instructions given at
https://wiki.linaro.org/Resources/HowTo/PackageYourOwnKernel.
I get quite a number of errors. I have attached one of them to the mail as
well.
Can someone let me know if this is the right wiki to follow for upstream
kernel packaging? Also, I don't find the arch/arm/configs/omap3_defconfig
configuration file shown in one of the steps.
Can I know how to get the same?
Is there a different approach to get the kernel debian package ?
Thanks in advance!!!
Regards,
Deepti.
Hi,
I am trying to run panda android leb using the instructions given on
wiki. Seeing a crash while formatting the SDcard using lmc. Below are
the crash logs, seems like a known issue. Any pointers on what needs to
be fixed?
Are you 100% sure, on selecting [/dev/sdc] (y/n)? y
Checking that no-one is using this disk right now ...
OK
Warning: bad partition start (earliest 1318913)
Warning: partition 1 does not end at a cylinder boundary
If you created or changed a DOS partition, /dev/foo7, say, then use
dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512
count=1
(See fdisk(8).)
Traceback (most recent call last):
File "./linaro-image-tools-0.4.8/linaro-android-media-create", line
141, in <module>
args.should_align_boot_part)
File
"/play/linaro-android-leb/linaro-image-tools-0.4.8/linaro_image_tools/media_create/partitions.py", line 54, in setup_android_partitions
get_android_partitions_for_media (media, board_config)
File
"/play/linaro-android-leb/linaro-image-tools-0.4.8/linaro_image_tools/media_create/partitions.py", line 267, in get_android_partitions_for_media
media.path, 1 + board_config.mmc_part_offset)
File
"/play/linaro-android-leb/linaro-image-tools-0.4.8/linaro_image_tools/media_create/partitions.py", line 312, in _get_device_file_for_partition_number
device_path = _get_udisks_device_path(dev_file)
File
"/play/linaro-android-leb/linaro-image-tools-0.4.8/linaro_image_tools/media_create/partitions.py", line 327, in _get_udisks_device_path
return udisks.get_dbus_method('FindDeviceByDeviceFile')(device)
File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 68, in
__call__
return self._proxy_method(*args, **keywords)
File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in
__call__
**keywords)
File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 620, in
call_blocking
message, timeout)
dbus.exceptions.DBusException: org.freedesktop.UDisks.Error.Failed: No
such device
--
Thanks
Amit Mahajan
Hello.
Over the past two weeks I've been prototyping a new tool for managing
releases of Linaro validation deliverables. Some of the problems are
unique (we target older releases, we are upstream) some are shared
(building lots of packages together, making sure they all work,
reproducing builds).
TLDR; Skip to the bottom of the message where I list the features.
I had some prior experience in this task and I wanted to see how the old
solutions would map to the new environment. As we target Ubuntu Lucid
aka latest LTS and most of us develop on the latest Ubuntu I wanted to
ensure that all of our releases can be installed and would work
correctly on Lucid. Earlier in the cycle we decided that the delivery
method would be a PPA with multiple .debs, some of which may be
backports required for Lucid. This is unlike using pip to pull in all
the things we are interested in from various places (possibly using a
requirements file).
So having those requirements my goals were to build a tool that would be
useful for those two tasks:
1) Daily development helper as a sort of CI tool.
2) Monthly release helper
Both tasks have different requirements but involve similar actions.
Those actions are:
*) Figuring out what packages to work on (aka project definition)
*) Getting the appropriate source/revision/tag from launchpad
*) Building a source packages for the target distribution
*) Building a binary package in pbuilder
- this is where we also run all the unit tests that tell us if
something is broken.
The tricky part is where we need to build test and release *multiple*
source packages from *multiple* branches and target *multiple* target
distributions.
For some concrete example. LAVA today is composed of the following core
packages:
- lava-server
- lava-dashboard
- lava-scheduler
- lava-dispatcher
- lava-tool
- lava-test (-tool)
- lava-dashboard-tool
- lava-scheduler-tool (upcoming)
We also have a set of support libraries:
- versiontools
- linaro-django-xmlrpc
- linaro-django-pagination
- linaro-dashboard-bundle
- django-restricted-resource
- django-debian
- linaro-json
We also have some libraries that are required for testing:
- python-django-testproject
- python-django-testscenarios
- python-django-testinvariants (upcoming)
We also have some backports (for both production and testing):
- python-simplejson
- python-django 1.2 (soon 1.3)
- python-django-south
- python-testtools
- python-testscenarios
- python-fixtures
Now that's a lot of stuff to build and release manually. Granted not
everything is changing but at the very least the first group (lava-*)
will see a lot of activity each month.
Now the way I see it, each month, this list needs to be released.
Possibly some of them will be unchanged. In that case there is no need
to actually upload new packages to a PPA. Still we need to ensure that
all of them build and work on all Lucid, Maverick and so on, all up to
the latest version of Ubuntu.
Since we want to build Debian packages I decided to use bzr builder to
create source packages from a recipe files. Recipes are a simple (2-3
lines at most) text files that say how to assemble source branches to
get Debian packaging. The key feature of builder is its ability to
create derivative packages for a particular distribution. Recipe files
are also used by launchpad for building source packages. In the future
this tool could actually push/pull the recipes to launchpad directly.
To build binary packages I used pbuilder. The configuration is a little
more complex than simple raw pbuilder or pbuilder-dist. The
configuration I made turns it into a ppa-like builder that can feed from
its own packages. Each time you build a package it will land in a small
apt repository and can be used to feed another build. This is a critical
feature as all of our packages depend on something not available in the
main archive.
The trick is to know the right sequence of builds that would satisfy
build-dependencies. I did not find any existing tool to do this and
after a brief discussion with doko it seems there are no such tools
available. In general the problem can be simplified to a subset of real
dependencies (without conflicts, virtual packages and stuff like that)
and resolved automatically. I did not want to implement that, as our package
list can be managed manually. A special sequence file contains a list of
"things" to build, in order, to satisfy dependencies.
So getting everything together, lava-ci does the following:
* FEATURES *
$ ./lava-ci project help
Usage: lava-ci project [COMMAND]
Summary: Manipulate and release your project
Available commands:
- init - Prepare lava-ci to work with a particular project
- help - This message (default)
$ ./lava-ci distro help
Usage: lava-ci distro [COMMAND]
Summary: Manipulate pbuilder environments
Available commands:
- help - This message
- show - Display list of distributions and their status (default)
- enable - Enable one or more distributions
- disable - Disable one or more distributions
- update - Update one or more distributions
$ ./lava-ci package help
Usage: lava-ci package [COMMAND]
Summary: Manipulate source and binary packages
Available commands:
- srcpkg - Create a source package from a recipe
- help - This message
- show - Display list of packages and their status (default)
- wipe - Remove all source and binary packages
- sequence - Build a sequence of packages following recipe/sequence
- binpkg - Create binary packages from a source package
A subset of the README file:
lava-ci - CI and release tools for the LAVA project (and others).
The purpose of this tool is to facilitate teamwork and monthly releases by
automating the act of creating and testing a release from source
repositories all the way to the binary packages for various platforms.
Workflow example:
*) Prepare recipe files (bzr help builder) for each maintained source
package and put them in a bzr repository. Each recipe file *MUST* be
named like the source package it builds.
*) Use `lava-ci project init lp:BRANCH-WITH-RECIPES`. This will create
a .lava-ci directory and a local checkout of your recipes in the
current directory.
*) Use `lava-ci distro` to list the distributions you wish to target.
For each one you are interested in do `lava-ci distro enable
$distro`. For example to enable lucid and maverick you could use
`lava-ci distro enable lucid maverick`.
*) Use `lava-ci package` to list available "packages" (based on
recipes). You can add/edit more just by creating additional
*.recipe files.
*) Use `lava-ci package srcpkg $package` to build source packages for
the selected recipe. There will be one source package per
distribution. You can check `lava-ci package [show]` to see what
source packages are already available.
*) Use `lava-ci package binpkg $package` to build binary packages for
the selected recipe. Again there will be multiple builds, one for
each distribution. Each build will result in a number of
additional Debian packages being produced. You can use those
packages as build-dependencies of other packages you maintain,
similar to a Launchpad PPA.
*) Write down a sequence of build operations to perform and save it
as `recipe/sequence`. This file should contain one word per line
- the name of the source package to build. The order will depend on
the build-dependencies among your packages. To test that building
the whole sequence works do `lava-ci package wipe` followed by
`lava-ci package sequence`. If the build succeeds you can tag your
recipes as a working configuration and make a release.
Best regards
Zygmunt Krynicki
Hi,
I spent some more time today fixing IRC channels and it prompted me to
write-up a wiki page on how to create channels, including the specific
commands.
https://wiki.linaro.org/GettingInvolved/IRC/channelsetup
Team Leads in particular, please have a look and make sure it's understandable.
Thanks,
Joey
I just read Fathi's email about the new bots. I didn't want to hijack the
existing thread, but I have been considering setting up a new official
Linaro irc channel #linaro-multimedia with the meeting bot, etc.
As per discussions with Ilias, I would also like to set up a new service, an
IRC proxy (bip or ?) for all the multimedia team members. A stored
scrollback log might fix timezone problems we have. Would this proxy be
something that we could host on a Linaro server?
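For what it's worth, bip's stored-backlog feature is the part that would
address the timezone gap; a minimal bip.conf could look roughly like the
sketch below (every value here is a placeholder, not a vetted Linaro
configuration):

```
# /etc/bip.conf sketch -- placeholder values throughout
port = 7778;
log = true;
backlog = true;          # replay channel scrollback on reconnect
backlog_lines = 0;       # 0 = replay everything since last detach

network { name = "linaro"; server { host = "irc.linaro.org"; }; };

user {
    name = "mmwg-member";            # placeholder account name
    password = "<hash>";             # generated with the bipmkpw tool
    connection {
        name = "linaro";
        network = "linaro";
        nick = "mmwg-member";
        channel { name = "#linaro-multimedia"; };
    };
};
```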
I am fully aware that we are trying to prevent "walled-gardens", but I
believe we have a very valid case why we need an irc channel specific to
multimedia. We have tried mailing lists and they have not worked. We would
use this channel for technical discussions across timezones: quick access
to a multithreaded, store-and-forward conversation with a log. We would also be
able to use this channel for meetings when the main linaro-meeting channel
was already being used, hence the need for the meeting bot. We would still
use the #linaro channel whenever possible.
These are all exploratory ideas at this point. I copied the -dev list so I
could get some feedback - Comments? Suggestions?
--
Kurt Taylor (irc krtaylor)
Linaro Multimedia Team Lead
Hi fellow ARM fans,
The Linaro Developer Platform team organises an ARM porting Jam every week
(Wednesday, 14:00 - 18:00 UTC). The idea is to gather all developers together to
fix userspace portability issues across the board. The list of bugs
being worked on is at launchpad:
https://bugs.launchpad.net/ubuntu/+bugs?field.tag=arm-porting-queue&orderby…
Interested in making the software in Ubuntu run better on ARM? Join us on
the #linaro channel on irc.linaro.org today!
Cheers,
Riku
I'd like to thank everyone who attended the Linaro multimedia work group
mini-summit last week, especially on such short notice. It was great to get
to meet everyone face-to-face. I think we made excellent progress on the
technical challenges for the MMWG team for the next cycle.
I have attached a consolidated and partially formatted log of the minutes
and actions from the meeting. I think they are fairly complete, please let
me know if there is anything missed that needs to be mentioned. You can also
access the individual logs from the event agenda:
http://wiki.linaro.org/Events/2011-06-MMWG
We will be working on the actions from this meeting and may be contacting you
for assistance. The resulting plan will be initially reviewed at the public
plan review: http://wiki.linaro.org/Cycles/1111/PublicPlanReview
The next time we will meet to fully discuss MMWG work topics will be at the
Linaro Developer Summit in Orlando, as part of the Ubuntu Developer Summit.
Please join us if you can. More information on LDS/UDS is available here:
http://uds.ubuntu.com/ I hope to see you there!
Regards,
--
Kurt Taylor (irc krtaylor)
Linaro Multimedia Team Lead