On Tue, Feb 5, 2013 at 2:27 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Hello,
We've hosted a CDF meeting at the FOSDEM on Sunday morning. Here's a summary of the discussions.
I would like to start with a big thank you to UrLab, the ULB university hacker space, for providing us with a meeting room.
The meeting would of course not have been successful without the wide range of participants, so I also want to thank all the people who woke up on Sunday morning to attend the meeting :-)
(The CC list is pretty long, please let me know - by private e-mail in order not to spam the list - if you would like not to receive future CDF-related e-mails directly)
- Abbreviations
DBI - Display Bus Interface, a parallel video control and data bus that transmits data using parallel data, read/write, chip select and address signals, similarly to 8051-style microcontroller parallel busses. This is a mixed video control and data bus.
DPI - Display Pixel Interface, a parallel video data bus that transmits data using parallel data, h/v sync and clock signals. This is a video data bus only.
DSI - Display Serial Interface, a serial video control and data bus that transmits data using one or more differential serial lines. This is a mixed video control and data bus.
DT - Device Tree, a representation of a hardware system as a tree of physical devices with associated properties.
SFI - Simple Firmware Interface, a lightweight method for firmware to export static tables to the operating system. Those tables can contain display device topology information.
VBT - Video BIOS Table, a block of data residing in the video BIOS that can contain display device topology information.
- Goals
The meeting started with a brief discussion about the CDF goals.
Tomi Valkeinen and Tomasz Figa have sent RFC patches to show their views of what CDF could/should be. Many others have provided very valuable feedback. Given the early development stage, the proposals were sometimes contradictory and focused on different areas of interest. We have thus started the meeting with a discussion about what CDF should try to achieve, and what it shouldn't.
CDF has two main purposes. The original goal was to support display panels in a platform- and subsystem-independent way. While mostly useful for embedded systems, the emergence of platforms such as Intel Medfield and ARM-based PCs that blend the embedded and PC worlds makes panel support useful for the PC world as well.
The second purpose is to provide a cross-subsystem interface to support video encoders. The idea originally came from a generalisation of the original RFC that supported panels only. While encoder support is considered as lower priority than display panel support by developers focussed on display controller drivers (Intel, Renesas, ST Ericsson, TI), companies that produce video encoders (Analog Devices, and likely others) don't share that point of view and would like to provide a single encoder driver that can be used in both KMS and V4L2 drivers.
Both display panels and encoders are thus the target of a lot of attention, depending on the audience. As long as neither of them is forgotten in CDF, the overall agreement was that focussing on panels first is acceptable. Care shall be taken in that case to avoid any architecture that would make encoder support difficult or impossible.
- Subsystems
Display panels are used in conjunction with FBDEV and KMS drivers. To the audience's knowledge there is no V4L2 driver that needs to explicitly handle display panels. Even though at least one V4L2 output driver (omap_vout) can output video to a display panel, it does so in conjunction with the KMS and/or FBDEV APIs that handle panel configuration. Panels are thus not exposed to V4L2 drivers.
Encoders, on the other hand, are widely used in the V4L2 subsystem. Many V4L2 devices output video in either analog (Composite, S-Video, VGA) or digital (DVI, HDMI) form.
Display panel drivers don't need to be shared with the V4L2 subsystem. Furthermore, as the general opinion during the meeting was that the FBDEV subsystem should be considered as legacy and deprecated in the future, restricting panel support to KMS hasn't been considered by anyone as an issue. KMS will thus be the main target of display panel support in CDF, and FBDEV will be supported if that doesn't bring any drawback from an architecture point of view.
Encoder drivers need to be shared with the V4L2 subsystem. Similarly to panel drivers, excluding FBDEV support from CDF isn't considered as an issue.
- KMS Extensions
The usefulness of V4L2 for output devices was questioned, and the possibility of using KMS for complex video devices usually associated with V4L2 was raised. The TI DaVinci 8xxx family is an example of chips that could benefit from KMS support.
The KMS API is lacking support for deep-pipelining ("framebuffers" that are sourced from a data stream instead of a memory buffer) today. Extending the KMS API with deep-pipelining support was considered as a sensible goal that would mostly require the creation of a new KMS source object. Exposing the topology of the whole device would then be handled by the Media Controller API.
Given that no evidence of this KMS extension being ready in a reasonable time frame exists, sharing encoder drivers with the V4L2 subsystem hasn't been seriously questioned.
- Discovery and Initialization
As CDF will split support for complete display devices across different drivers, the question of physical device discovery and initialization caused concern among the audience.
Topology and connectivity information can come from a wide variety of sources. Embedded platforms typically provide that information in platform data supplied by board code or through the device tree. PC platforms usually store the information in the firmware exposed through ACPI, SFI, VBT or other interfaces. Pluggable devices (PCI being the most common case) can also store the information in on-board non-volatile memory or hardcode it in drivers.
When using the device tree, display entity information is bundled with the display entity's DT node. The associated driver shall thus extract the information from the DT node itself. In all other cases the display entity driver shall not parse data from the information source directly, but shall instead receive a platform data structure filled with data parsed by the display controller driver. In the most complex case a machine driver, similar to ASoC machine drivers, might be needed, in which case platform data could be provided by that machine driver.
Display entity drivers are encouraged to internally fill a platform data structure from their DT node to reuse the same code path for both platform data- and DT-based initialization.
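As a rough illustration of that pattern (purely a sketch, not part of any agreed API - the foo-panel name, the DT properties and the platform data layout are made up for the example), a driver could do something like:

  #include <linux/errno.h>
  #include <linux/module.h>
  #include <linux/of.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>

  /* Hypothetical platform data for the example panel. */
  struct foo_panel_pdata {
          u32 width_mm;
          u32 height_mm;
  };

  static int foo_panel_parse_dt(struct device *dev,
                                struct foo_panel_pdata *pdata)
  {
          struct device_node *np = dev->of_node;

          if (of_property_read_u32(np, "width-mm", &pdata->width_mm) ||
              of_property_read_u32(np, "height-mm", &pdata->height_mm))
                  return -EINVAL;

          return 0;
  }

  static int foo_panel_probe(struct platform_device *pdev)
  {
          struct foo_panel_pdata *pdata = pdev->dev.platform_data;
          int ret;

          if (!pdata) {
                  /* DT case: fill the same structure from the DT node. */
                  pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
                  if (!pdata)
                          return -ENOMEM;

                  ret = foo_panel_parse_dt(&pdev->dev, pdata);
                  if (ret)
                          return ret;
          }

          /* From this point on both code paths are identical. */
          return 0;
  }

This keeps the platform data structure as the single internal representation, regardless of where the information originally came from.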
- Bus Model
Display panels are connected to a video bus that transmits video data and optionally to a control bus. Those two busses can be separate physical interfaces or combined into a single physical interface.
The Linux device model represents the system as a tree of devices (not to be confused with the device tree, abbreviated as DT). The tree is organized around control busses, with every device being a child of its control bus master. For instance an I2C device will be a child of its I2C controller device, which can itself be a child of its parent PCI device.
Display panels will be represented as Linux devices. They will have a single parent from the Linux device model point of view, but will be potentially connected to multiple physical busses. CDF thus needs to define what bus to select as the Linux parent bus.
In theory any physical bus that the device is attached to can be selected as the parent bus. However, selecting a video data bus would depart from the traditional Linux device model that uses control busses only. This caused concern among several people who argued that not presenting the device to the kernel as attached to its control bus would bring issues in embedded systems. Unlike on PC systems where the control bus master is usually the same physical device as the data bus master, embedded systems are made of a potentially complex assembly of completely unrelated devices. Not representing an I2C-controlled panel as a child of its I2C master in DT was thus frowned upon, even though no clear agreement was reached on the subject.
Panels can be divided into three categories based on their bus model.
- No control bus
Many panels don't offer any control interface. They are usually referred to as 'dumb panels' as they directly display the data received on their video bus without any configurable option. Panels in this category often use DPI as their video bus, but other options such as DSI (using the DSI video mode only) are possible.
Panels with no control bus can be represented in the device model as platform devices, or as being attached to their video bus. In the latter case we would need Linux busses for pure video data interfaces such as DPI or VGA. Nobody was particularly enthusiastic about this idea. Dumb panels will thus likely be represented as platform devices.
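For instance, board code could register a dumb DPI panel roughly as follows (again only a sketch, with a made-up foo-dpi-panel name and platform data):

  #include <linux/err.h>
  #include <linux/init.h>
  #include <linux/platform_device.h>

  /* Hypothetical platform data for a dumb DPI panel. */
  struct foo_dpi_panel_pdata {
          unsigned int width_mm;
          unsigned int height_mm;
  };

  static const struct foo_dpi_panel_pdata board_panel_pdata = {
          .width_mm  = 154,
          .height_mm = 90,
  };

  static int __init board_panel_init(void)
  {
          struct platform_device *pdev;

          /*
           * No control bus: the panel is simply registered as a platform
           * device, with its parameters passed as platform data.
           */
          pdev = platform_device_register_data(NULL, "foo-dpi-panel", -1,
                                               &board_panel_pdata,
                                               sizeof(board_panel_pdata));

          return IS_ERR(pdev) ? PTR_ERR(pdev) : 0;
  }
  device_initcall(board_panel_init);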
- Separate video and control busses
The typical case is a panel connected to an I2C or SPI bus that receives data through a DPI video interface or DSI video mode interface.
Using a mixed control and video bus (such as DSI or DBI) for control only, with a different bus for video data, is possible in theory but very unlikely in practice (although the creativity of hardware developers should never be underestimated).
Display panels that use a control bus supported by the Linux kernel should likely be represented as children of their control bus master. Other options are possible as mentioned above but were received without enthusiasm by most embedded kernel developers.
When the control bus isn't supported by the kernel, a new bus type can be developed, or the panel can be represented as a platform device. The right option will likely vary depending on the control bus.
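As a sketch of the child-of-control-bus option, an I2C-controlled panel would simply be a regular I2C client driver (foo-panel is a placeholder name; the CDF-specific registration is left out since that API is still under discussion):

  #include <linux/i2c.h>
  #include <linux/module.h>

  static int foo_panel_probe(struct i2c_client *client,
                             const struct i2c_device_id *id)
  {
          /*
           * The panel is a child of its I2C master. The video data bus
           * (e.g. a DPI source) would be looked up separately, for
           * instance through platform data or a DT phandle.
           */
          return 0;
  }

  static int foo_panel_remove(struct i2c_client *client)
  {
          return 0;
  }

  static const struct i2c_device_id foo_panel_id[] = {
          { "foo-panel", 0 },
          { }
  };
  MODULE_DEVICE_TABLE(i2c, foo_panel_id);

  static struct i2c_driver foo_panel_driver = {
          .driver = {
                  .name = "foo-panel",
          },
          .probe    = foo_panel_probe,
          .remove   = foo_panel_remove,
          .id_table = foo_panel_id,
  };
  module_i2c_driver(foo_panel_driver);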
- Combined video and control busses
When the two busses are combined in a single physical bus the panel device will obviously be represented as a child of that single physical bus.
In such cases the control bus could expose video bus control methods. This would remove the need for a video source as proposed by Tomi Valkeinen in his CDF model. However, if the bus can be used for video data transfer in combination with a different control bus, a video source corresponding to the data bus will be needed.
No decision has been taken on whether to use a video source in addition to the control bus in the combined busses case. Experimentation will be needed, and the right solution might depend on the bus type.
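To make the discussion more concrete, the kind of abstraction being debated looks roughly like the video source from Tomi's RFC; the structure below is purely illustrative and not an agreed-upon interface:

  /* Purely illustrative; the real CDF interface is still being designed. */
  struct video_source;

  struct video_source_ops {
          int  (*enable)(struct video_source *src);
          void (*disable)(struct video_source *src);
          int  (*set_stream)(struct video_source *src, bool enable);
  };

  struct video_source {
          struct device *dev;
          const struct video_source_ops *ops;
  };

In the combined-bus case the control bus driver could implement these operations itself, which is what would make a separate video source object unnecessary.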
- Multiple control busses
One panel was mentioned as being connected to a DSI bus and an I2C bus. The DSI bus is used for both control and video, and the I2C bus for control only. Configuring the panel requires sending commands through both DSI and I2C. The opinion on such panels was a large *sigh* followed by a "this should be handled by the device core, let's ask Greg KH".
- Miscellaneous
- If the OMAP3 DSS driver is used as a model for the DSI support implementation, Daniel Vetter requested the DSI bus lock semaphore to be killed as it prevents lockdep from working correctly (reference needed ;-)).
- Do we need to support chaining several encoders? We can come up with several theoretical use cases, some of them probably exist in real hardware, but the details are still a bit fuzzy.
So, a part which is completely omitted in this thread is how to handle suspend/resume ordering. If you have multiple encoders which need to be turned on/off in a given order at suspend/resume, how do you handle that given the current scheme where they are just separate platform drivers in drivers/video?
This problem occurs with drm/exynos in current 3.8 kernels for example. On that platform, the DP driver and the FIMD driver will suspend/resume in random order, and therefore fail to resume half the time. Is there something which could be done in CDF to address that?
Stéphane
On 02/12/2013 11:45 PM, Stéphane Marchesin wrote:
> - Do we need to support chaining several encoders? We can come up with several theoretical use cases, some of them probably exist in real hardware, but the details are still a bit fuzzy.
>
> So, a part which is completely omitted in this thread is how to handle suspend/resume ordering. If you have multiple encoders which need to be turned on/off in a given order at suspend/resume, how do you handle that given the current scheme where they are just separate platform drivers in drivers/video?
>
> This problem occurs with drm/exynos in current 3.8 kernels for example. On that platform, the DP driver and the FIMD driver will suspend/resume in random order, and therefore fail to resume half the time. Is there something which could be done in CDF to address that?
My idea here is in two parts. First, hide the chaining within the CDF driver, so it is always the first CDF driver that is responsible for the rest of the chain. Second, I'm looking at using the dev->parent and bus relationship to describe this dependency. Then power usually works out fine, since children can be forced to be suspended before their parent (the "bus" host).
/BR /Marcus
On 2013-02-13 00:45, Stéphane Marchesin wrote:
> So, a part which is completely omitted in this thread is how to handle suspend/resume ordering. If you have multiple encoders which need to be turned on/off in a given order at suspend/resume, how do you handle that given the current scheme where they are just separate platform drivers in drivers/video?
>
> This problem occurs with drm/exynos in current 3.8 kernels for example. On that platform, the DP driver and the FIMD driver will suspend/resume in random order, and therefore fail to resume half the time. Is there something which could be done in CDF to address that?
I don't think we have a perfect solution for this, but I think we can handle this by using PM notifiers, PM_SUSPEND_PREPARE and PM_POST_SUSPEND.
The code that manages the whole chain should register to those notifiers, and disable or enable the display devices accordingly. This way the devices are enabled and disabled in the right order, and also (hopefully) so that the control busses are operational.
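A minimal sketch of that approach (the foo_display_chain_*() helpers are hypothetical placeholders for whatever code manages the chain):

  #include <linux/notifier.h>
  #include <linux/suspend.h>

  /* Hypothetical helpers that walk the display chain in the right order. */
  static void foo_display_chain_disable(void) { /* panel first, source last */ }
  static void foo_display_chain_enable(void)  { /* source first, panel last */ }

  static int display_chain_pm_notify(struct notifier_block *nb,
                                     unsigned long action, void *data)
  {
          switch (action) {
          case PM_SUSPEND_PREPARE:
                  /* The control busses are still fully operational here. */
                  foo_display_chain_disable();
                  break;
          case PM_POST_SUSPEND:
                  foo_display_chain_enable();
                  break;
          }

          return NOTIFY_DONE;
  }

  static struct notifier_block display_chain_pm_nb = {
          .notifier_call = display_chain_pm_notify,
  };

  /* Registered once by whatever driver manages the whole chain. */
  static int display_chain_pm_init(void)
  {
          return register_pm_notifier(&display_chain_pm_nb);
  }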
Tomi