On 11/22/21 4:16 PM, Paul Cercueil wrote:
> Hi Lars,
>
> On Mon, Nov 22 2021 at 16:08:51 +0100, Lars-Peter Clausen
> <lars(a)metafoo.de> wrote:
>> On 11/21/21 9:08 PM, Paul Cercueil wrote:
>>>
>>>
>>> On Sun, Nov 21 2021 at 19:49:03 +0100, Lars-Peter Clausen
>>> <lars(a)metafoo.de> wrote:
>>>> On 11/21/21 6:52 PM, Paul Cercueil wrote:
>>>>> Hi Lars,
>>>>>
>>>>> On Sun, Nov 21 2021 at 17:23:35 +0100, Lars-Peter Clausen
>>>>> <lars(a)metafoo.de> wrote:
>>>>>> On 11/15/21 3:19 PM, Paul Cercueil wrote:
>>>>>>> The buffer-dma code was using two queues, incoming and outgoing, to
>>>>>>> manage the state of the blocks in use.
>>>>>>>
>>>>>>> While this totally works, it adds some complexity to the code,
>>>>>>> especially since the code only manages 2 blocks. It is much
>>>>>>> easier to
>>>>>>> just check each block's state manually, and keep a counter for
>>>>>>> the next
>>>>>>> block to dequeue.
>>>>>>>
>>>>>>> Since the new DMABUF based API wouldn't use these incoming and
>>>>>>> outgoing
>>>>>>> queues anyway, getting rid of them now makes the upcoming changes
>>>>>>> simpler.
>>>>>>>
>>>>>>> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
>>>>>> The outgoing queue is going to be replaced by fences, but I think
>>>>>> we need to keep the incoming queue.
>>>>>
>>>>> Blocks are always accessed in sequential order, so we now have a
>>>>> "queue->next_dequeue" that cycles between the buffers
>>>>> allocated for fileio.
>>>>>
>>>>>>> [...]
>>>>>>> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
>>>>>>> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue
>>>>>>> *queue,
>>>>>>> struct iio_dma_buffer_block *block)
>>>>>>> {
>>>>>>> - if (block->state == IIO_BLOCK_STATE_DEAD) {
>>>>>>> + if (block->state == IIO_BLOCK_STATE_DEAD)
>>>>>>> iio_buffer_block_put(block);
>>>>>>> - } else if (queue->active) {
>>>>>>> + else if (queue->active)
>>>>>>> iio_dma_buffer_submit_block(queue, block);
>>>>>>> - } else {
>>>>>>> + else
>>>>>>> block->state = IIO_BLOCK_STATE_QUEUED;
>>>>>>> - list_add_tail(&block->head, &queue->incoming);
>>>>>> If iio_dma_buffer_enqueue() is called with a dmabuf and the
>>>>>> buffer is not active, it will be marked as queued, but we
>>>>>> don't actually keep a reference to it anywhere. It will
>>>>>> never be submitted to the DMA, and it will never be
>>>>>> signaled as completed.
>>>>>
>>>>> We do keep a reference to the buffers, in the queue->fileio.blocks
>>>>> array. When the buffer is enabled, all the blocks in that
>>>>> array that are in the "queued" state will be submitted to the
>>>>> DMA.
>>>>>
>>>> But not when used in combination with the DMA buf changes later in
>>>> this series.
>>>>
>>>
>>> That's still the case after the DMABUF changes of the series. Or can
>>> you point out exactly what you think is broken?
>>>
>> When you allocate a DMABUF with the allocate IOCTL and then submit it
>> with the enqueue IOCTL before the buffer is enabled, it will end up
>> marked as queued, but not actually queued anywhere.
>>
>
> Ok, it works for me because I never enqueue blocks before enabling the
> buffer. I can add a requirement that blocks must be enqueued only
> after the buffer is enabled.
I don't think that is a good idea. That way you would potentially drop
data at the beginning of your stream when the DMA isn't ready yet.
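(For illustration, a minimal sketch of the pre-enable queueing being discussed
here -- essentially the list_add_tail() path the patch removes, where blocks
enqueued while the buffer is inactive sit on queue->incoming until
iio_dma_buffer_enable() submits them; treat this as a sketch, not the final
code:)

	static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
					   struct iio_dma_buffer_block *block)
	{
		if (block->state == IIO_BLOCK_STATE_DEAD) {
			iio_buffer_block_put(block);
		} else if (queue->active) {
			iio_dma_buffer_submit_block(queue, block);
		} else {
			/* Not active yet: remember the block so that
			 * iio_dma_buffer_enable() can submit it later. */
			block->state = IIO_BLOCK_STATE_QUEUED;
			list_add_tail(&block->head, &queue->incoming);
		}
	}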
On 11/21/21 9:08 PM, Paul Cercueil wrote:
>
>
> On Sun, Nov 21 2021 at 19:49:03 +0100, Lars-Peter Clausen
> <lars(a)metafoo.de> wrote:
>> On 11/21/21 6:52 PM, Paul Cercueil wrote:
>>> Hi Lars,
>>>
>>> On Sun, Nov 21 2021 at 17:23:35 +0100, Lars-Peter Clausen
>>> <lars(a)metafoo.de> wrote:
>>>> On 11/15/21 3:19 PM, Paul Cercueil wrote:
>>>>> The buffer-dma code was using two queues, incoming and outgoing, to
>>>>> manage the state of the blocks in use.
>>>>>
>>>>> While this totally works, it adds some complexity to the code,
>>>>> especially since the code only manages 2 blocks. It is much easier to
>>>>> just check each block's state manually, and keep a counter for the
>>>>> next
>>>>> block to dequeue.
>>>>>
>>>>> Since the new DMABUF based API wouldn't use these incoming and
>>>>> outgoing
>>>>> queues anyway, getting rid of them now makes the upcoming changes
>>>>> simpler.
>>>>>
>>>>> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
>>>> The outgoing queue is going to be replaced by fences, but I think
>>>> we need to keep the incoming queue.
>>>
>>> Blocks are always accessed in sequential order, so we now have a
>>> "queue->next_dequeue" that cycles between the buffers allocated for
>>> fileio.
>>>
>>>>> [...]
>>>>> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
>>>>> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue
>>>>> *queue,
>>>>> struct iio_dma_buffer_block *block)
>>>>> {
>>>>> - if (block->state == IIO_BLOCK_STATE_DEAD) {
>>>>> + if (block->state == IIO_BLOCK_STATE_DEAD)
>>>>> iio_buffer_block_put(block);
>>>>> - } else if (queue->active) {
>>>>> + else if (queue->active)
>>>>> iio_dma_buffer_submit_block(queue, block);
>>>>> - } else {
>>>>> + else
>>>>> block->state = IIO_BLOCK_STATE_QUEUED;
>>>>> - list_add_tail(&block->head, &queue->incoming);
>>>> If iio_dma_buffer_enqueue() is called with a dmabuf and the buffer
>>>> is not active, it will be marked as queued, but we don't actually
>>>> keep a reference to it anywhere. It will never be submitted to
>>>> the DMA, and it will never be signaled as completed.
>>>
>>> We do keep a reference to the buffers, in the queue->fileio.blocks
>>> array. When the buffer is enabled, all the blocks in that array
>>> that are in the "queued" state will be submitted to the DMA.
>>>
>> But not when used in combination with the DMA buf changes later in
>> this series.
>>
>
> That's still the case after the DMABUF changes of the series. Or can
> you point out exactly what you think is broken?
>
When you allocate a DMABUF with the allocate IOCTL and then submit it
with the enqueue IOCTL before the buffer is enabled, it will end up
marked as queued, but not actually queued anywhere.
On 19.11.2021 16:32, Akhil R wrote:
> - i2c_dev->rst = devm_reset_control_get_exclusive(i2c_dev->dev, "i2c");
> - if (IS_ERR(i2c_dev->rst)) {
> - dev_err_probe(i2c_dev->dev, PTR_ERR(i2c_dev->rst),
> - "failed to get reset control\n");
> - return PTR_ERR(i2c_dev->rst);
> - }
> -
> tegra_i2c_parse_dt(i2c_dev);
>
> - err = tegra_i2c_init_clocks(i2c_dev);
> - if (err)
> - return err;
> + if (!has_acpi_companion(&pdev->dev)) {
> + i2c_dev->rst = devm_reset_control_get_exclusive(i2c_dev->dev, "i2c");
> + if (IS_ERR(i2c_dev->rst)) {
> + dev_err_probe(i2c_dev->dev, PTR_ERR(i2c_dev->rst),
> + "failed to get reset control\n");
> + return PTR_ERR(i2c_dev->rst);
> + }
> +
> + err = tegra_i2c_init_clocks(i2c_dev);
> + if (err)
> + return err;
> + }
What about factoring out the reset initialization into a separate function and writing it like this:
static int tegra_i2c_init_reset(struct tegra_i2c_dev *i2c_dev)
{
	if (has_acpi_companion(i2c_dev->dev))
		return 0;

	i2c_dev->rst = devm_reset_control_get_exclusive(i2c_dev->dev, "i2c");
	if (IS_ERR(i2c_dev->rst))
		return dev_err_probe(i2c_dev->dev, PTR_ERR(i2c_dev->rst),
				     "failed to get reset control\n");

	return 0;
}
And then change tegra_i2c_init_clocks() to:
static int tegra_i2c_init_clocks(struct tegra_i2c_dev *i2c_dev)
{
	int err;

	if (has_acpi_companion(i2c_dev->dev))
		return 0;
	...
}
This will make the reset and clock initialization look more consistent.
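(For illustration, a sketch of how the probe path could then call both
helpers; the surrounding tegra_i2c_probe() structure is assumed here:)

	/* In tegra_i2c_probe(), both init paths then read the same way: */
	err = tegra_i2c_init_reset(i2c_dev);
	if (err)
		return err;

	err = tegra_i2c_init_clocks(i2c_dev);
	if (err)
		return err;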
On 19.11.2021 17:48, Andy Shevchenko wrote:
>> + if (i2c_dev->nclocks == 0)
>> + return;
> Why? Make clocks optional.
This check shouldn't be needed because both clk_disable() and
clk_bulk_unprepare() should handle NULL/zero clocks without problems.
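(A minimal sketch of the release path, assuming field and function names
along the lines of the quoted driver; with zero parsed clocks,
clk_bulk_unprepare() iterates zero times and clk_disable(NULL) returns
early, so the explicit count check adds nothing:)

	static void tegra_i2c_release_clocks(struct tegra_i2c_dev *i2c_dev)
	{
		clk_disable(i2c_dev->div_clk);                         /* NULL-safe */
		clk_bulk_unprepare(i2c_dev->nclocks, i2c_dev->clocks); /* zero-count-safe */
	}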
On 11/21/21 6:52 PM, Paul Cercueil wrote:
> Hi Lars,
>
> On Sun, Nov 21 2021 at 17:23:35 +0100, Lars-Peter Clausen
> <lars(a)metafoo.de> wrote:
>> On 11/15/21 3:19 PM, Paul Cercueil wrote:
>>> The buffer-dma code was using two queues, incoming and outgoing, to
>>> manage the state of the blocks in use.
>>>
>>> While this totally works, it adds some complexity to the code,
>>> especially since the code only manages 2 blocks. It is much easier to
>>> just check each block's state manually, and keep a counter for the next
>>> block to dequeue.
>>>
>>> Since the new DMABUF based API wouldn't use these incoming and outgoing
>>> queues anyway, getting rid of them now makes the upcoming changes
>>> simpler.
>>>
>>> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
>> The outgoing queue is going to be replaced by fences, but I think we
>> need to keep the incoming queue.
>
> Blocks are always accessed in sequential order, so we now have a
> "queue->next_dequeue" that cycles between the buffers allocated for
> fileio.
>
>>> [...]
>>> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
>>> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue
>>> *queue,
>>> struct iio_dma_buffer_block *block)
>>> {
>>> - if (block->state == IIO_BLOCK_STATE_DEAD) {
>>> + if (block->state == IIO_BLOCK_STATE_DEAD)
>>> iio_buffer_block_put(block);
>>> - } else if (queue->active) {
>>> + else if (queue->active)
>>> iio_dma_buffer_submit_block(queue, block);
>>> - } else {
>>> + else
>>> block->state = IIO_BLOCK_STATE_QUEUED;
>>> - list_add_tail(&block->head, &queue->incoming);
>> If iio_dma_buffer_enqueue() is called with a dmabuf and the buffer is
>> not active, it will be marked as queued, but we don't actually keep a
>> reference to it anywhere. It will never be submitted to the DMA, and
>> it will never be signaled as completed.
>
> We do keep a reference to the buffers, in the queue->fileio.blocks
> array. When the buffer is enabled, all the blocks in that array that
> are in the "queued" state will be submitted to the DMA.
>
But not when used in combination with the DMA buf changes later in this
series.
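(For context, a rough sketch of the counter-based dequeue Paul mentions
above; the exact shape of queue->fileio here is an assumption based on the
quoted discussion:)

	/* Fileio blocks are consumed in order; next_dequeue simply cycles
	 * through the blocks allocated for fileio. */
	block = queue->fileio.blocks[queue->fileio.next_dequeue];
	queue->fileio.next_dequeue = (queue->fileio.next_dequeue + 1)
				   % ARRAY_SIZE(queue->fileio.blocks);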
On 11/15/21 3:19 PM, Paul Cercueil wrote:
> The buffer-dma code was using two queues, incoming and outgoing, to
> manage the state of the blocks in use.
>
> While this totally works, it adds some complexity to the code,
> especially since the code only manages 2 blocks. It is much easier to
> just check each block's state manually, and keep a counter for the next
> block to dequeue.
>
> Since the new DMABUF based API wouldn't use these incoming and outgoing
> queues anyway, getting rid of them now makes the upcoming changes
> simpler.
>
> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
The outgoing queue is going to be replaced by fences, but I think we
need to keep the incoming queue.
> [...]
> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
> struct iio_dma_buffer_block *block)
> {
> - if (block->state == IIO_BLOCK_STATE_DEAD) {
> + if (block->state == IIO_BLOCK_STATE_DEAD)
> iio_buffer_block_put(block);
> - } else if (queue->active) {
> + else if (queue->active)
> iio_dma_buffer_submit_block(queue, block);
> - } else {
> + else
> block->state = IIO_BLOCK_STATE_QUEUED;
> - list_add_tail(&block->head, &queue->incoming);
If iio_dma_buffer_enqueue() is called with a dmabuf and the buffer is
not active, it will be marked as queued, but we don't actually keep a
reference to it anywhere. It will never be submitted to the DMA, and it
will never be signaled as completed.
On 16.11.21 at 17:31, Laurent Pinchart wrote:
> On Tue, Nov 16, 2021 at 05:02:25PM +0100, Daniel Vetter wrote:
>> On Mon, Nov 15, 2021 at 02:57:37PM +0000, Paul Cercueil wrote:
>>>> On Mon, Nov 15 2021 at 15:37:16 +0100, Daniel Vetter wrote:
>>>> On Mon, Nov 15, 2021 at 02:19:10PM +0000, Paul Cercueil wrote:
>>>>> Hi Jonathan,
>>>>>
>>>>> This patchset introduces a new userspace interface based on DMABUF
>>>>> objects, to complement the existing fileio based API.
>>>>>
>>>>> The advantage of this DMABUF based interface vs. the fileio
>>>>> interface, is that it avoids an extra copy of the data between the
>>>>> kernel and userspace. This is particularly useful for high-speed
>>>>> devices which produce several megabytes or even gigabytes of data per
>>>>> second.
>>>>>
>>>>> The first few patches [01/15] to [03/15] are not really related, but
>>>>> they help reduce the size of the patches that introduce the new API.
>>>>>
>>>>> Patches [04/15] to [06/15] enable write() support in the buffer-dma
>>>>> implementation of the buffer API, to continue the work done by
>>>>> Mihail Chindris.
>>>>>
>>>>> Patches [07/15] to [12/15] introduce the new DMABUF based API.
>>>>>
>>>>> Patches [13/15] and [14/15] add support for cyclic buffers, only through
>>>>> the new API. A cyclic buffer will be repeated on the output until the
>>>>> buffer is disabled.
>>>>>
>>>>> Patch [15/15] adds documentation about the new API.
>>>>>
>>>>> For now, the API allows you to alloc DMABUF objects and mmap() them
>>>>> to read or write the samples. It does not yet allow importing DMABUFs
>>>>> parented to other subsystems, but that should eventually be possible
>>>>> once it's wired.
>>>>>
>>>>> This patchset is inspired by the "mmap interface" that was previously
>>>>> submitted by Alexandru Ardelean and Lars-Peter Clausen, so it would be
>>>>> great if I could get a review from you guys. Alexandru's commit was
>>>>> signed with his @analog.com address but he doesn't work at ADI anymore,
>>>>> so I believe I'll need him to sign with a new email.
>>>> Why dma-buf? dma-buf looks like something super generic and useful, until
>>>> you realize that there's a metric ton of gpu/accelerator baggage piled in.
>>>> So unless buffer sharing with a gpu/video/accel/whatever device is the
> And cameras (maybe they're included in "whatever"?).
>
>>>> goal here, and it's just for a convenient way to get at buffer handles,
>>>> this doesn't sound like a good idea.
>>> Good question. The first reason is that a somewhat similar API was intended
>>> before[1], but refused upstream as it was kind of re-inventing the wheel.
>>>
>>> The second reason, is that we want to be able to share buffers too, not with
>>> gpu/video but with the network (zctap) and in the future with USB
>>> (functionFS) too.
>>>
>>> [1]: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kern…
>> Hm, is that code already merged upstream?
>>
>> I know that dma-buf looks really generic, but like I said if there's no
>> need ever to interface with any of the gpu buffer sharing it might be
>> better to use something else (like get_user_pages maybe, would that work?).
> Not GUP please. That brings another set of issues, especially when
> dealing with DMA, we've suffered enough from the USERPTR support in V4L2
> to avoid repeating this. Let's make dma-buf better instead.
Yeah, when comparing GUP and DMA-buf, the latter is clearly the lesser evil.
DMA-buf indeed has some design issues we need to work on, especially
around the async operations and synchronization. But I still think those
are solvable.
GUP on the other hand has some hard fundamental problems which you can
only solve completely if the hardware is capable of fast and reliable
recoverable page faults.
Regards,
Christian.
On Mon, Nov 15, 2021 at 02:57:37PM +0000, Paul Cercueil wrote:
> Hi Daniel,
>
> On Mon, Nov 15 2021 at 15:37:16 +0100, Daniel Vetter <daniel(a)ffwll.ch>
> wrote:
> > On Mon, Nov 15, 2021 at 02:19:10PM +0000, Paul Cercueil wrote:
> > > Hi Jonathan,
> > >
> > > This patchset introduces a new userspace interface based on DMABUF
> > > objects, to complement the existing fileio based API.
> > >
> > > The advantage of this DMABUF based interface vs. the fileio
> > > interface, is that it avoids an extra copy of the data between the
> > > kernel and userspace. This is particularly useful for high-speed
> > > devices which produce several megabytes or even gigabytes of data
> > > per
> > > second.
> > >
> > > The first few patches [01/15] to [03/15] are not really related, but
> > > they help reduce the size of the patches that introduce the new API.
> > >
> > > Patches [04/15] to [06/15] enable write() support in the buffer-dma
> > > implementation of the buffer API, to continue the work done by
> > > Mihail Chindris.
> > >
> > > Patches [07/15] to [12/15] introduce the new DMABUF based API.
> > >
> > > Patches [13/15] and [14/15] add support for cyclic buffers, only
> > > through
> > > the new API. A cyclic buffer will be repeated on the output until
> > > the
> > > buffer is disabled.
> > >
> > > Patch [15/15] adds documentation about the new API.
> > >
> > > For now, the API allows you to alloc DMABUF objects and mmap() them
> > > to read or write the samples. It does not yet allow importing DMABUFs
> > > parented to other subsystems, but that should eventually be possible
> > > once it's wired.
> > >
> > > This patchset is inspired by the "mmap interface" that was
> > > previously
> > > submitted by Alexandru Ardelean and Lars-Peter Clausen, so it would
> > > be
> > > great if I could get a review from you guys. Alexandru's commit was
> > > signed with his @analog.com address but he doesn't work at ADI
> > > anymore,
> > > so I believe I'll need him to sign with a new email.
> >
> > Why dma-buf? dma-buf looks like something super generic and useful,
> > until
> > you realize that there's a metric ton of gpu/accelerator baggage piled
> > in.
> > So unless buffer sharing with a gpu/video/accel/whatever device is the
> > goal here, and it's just for a convenient way to get at buffer handles,
> > this doesn't sound like a good idea.
>
> Good question. The first reason is that a somewhat similar API was intended
> before[1], but refused upstream as it was kind of re-inventing the wheel.
>
> The second reason, is that we want to be able to share buffers too, not with
> gpu/video but with the network (zctap) and in the future with USB
> (functionFS) too.
>
> [1]: https://lore.kernel.org/linux-iio/20210217073638.21681-1-alexandru.ardelean…
Hm, is that code already merged upstream?
I know that dma-buf looks really generic, but like I said if there's no
need ever to interface with any of the gpu buffer sharing it might be
better to use something else (like get_user_pages maybe, would that work?).
> > Also if the idea is to do this with gpus/accelerators then I'd really like
> > to
> > see the full thing, since most likely at that point you also want
> > dma_fence. And once we talk dma_fence things get truly horrible from a
> > locking pov :-( Or well, just highly constrained and I get to review
> > what
> > iio is doing with these buffers to make sure it all fits.
>
> There is some dma_fence action in patch #10, which is enough for the
> userspace apps to use the API.
>
> What "horribleness" are we talking about here? It doesn't look that scary to
> me, but I certainly don't have the complete picture.
You need to annotate all the code involved in signalling that dma_fence
using dma_fence_begin/end_signalling, and then enable full lockdep and
everything.
You can safely assume you'll find bugs, because we even have bugs about
this in gpu drivers (where that annotation isn't fully rolled out yet).
The tl;dr is that you can't allocate memory in there. And a pile of other
restrictions, but not being able to allocate memory (well, GFP_ATOMIC is
ok, but that can fail) is a very serious restriction.
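(For reference, a minimal sketch of the begin/end signalling annotation
described above; the driver-side work and the fence field are hypothetical:)

	bool cookie;

	cookie = dma_fence_begin_signalling();
	/*
	 * Everything up to dma_fence_end_signalling() is part of the fence
	 * signalling critical section: lockdep will complain about memory
	 * allocations (other than GFP_ATOMIC) and about taking locks that
	 * might depend on such allocations.
	 */
	complete_dma_transfer(block);      /* hypothetical driver work */
	dma_fence_signal(&block->fence);   /* hypothetical fence field */
	dma_fence_end_signalling(cookie);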
-Daniel
>
> Cheers,
> -Paul
>
> > Cheers, Daniel
> >
> > >
> > > Cheers,
> > > -Paul
> > >
> > > Alexandru Ardelean (1):
> > > iio: buffer-dma: split iio_dma_buffer_fileio_free() function
> > >
> > > Paul Cercueil (14):
> > > iio: buffer-dma: Get rid of incoming/outgoing queues
> > > iio: buffer-dma: Remove unused iio_buffer_block struct
> > > iio: buffer-dma: Use round_down() instead of rounddown()
> > > iio: buffer-dma: Enable buffer write support
> > > iio: buffer-dmaengine: Support specifying buffer direction
> > > iio: buffer-dmaengine: Enable write support
> > > iio: core: Add new DMABUF interface infrastructure
> > > iio: buffer-dma: Use DMABUFs instead of custom solution
> > > iio: buffer-dma: Implement new DMABUF based userspace API
> > > iio: buffer-dma: Boost performance using write-combine cache setting
> > > iio: buffer-dmaengine: Support new DMABUF based userspace API
> > > iio: core: Add support for cyclic buffers
> > > iio: buffer-dmaengine: Add support for cyclic buffers
> > > Documentation: iio: Document high-speed DMABUF based API
> > >
> > > Documentation/driver-api/dma-buf.rst | 2 +
> > > Documentation/iio/dmabuf_api.rst | 94 +++
> > > Documentation/iio/index.rst | 2 +
> > > drivers/iio/adc/adi-axi-adc.c | 3 +-
> > > drivers/iio/buffer/industrialio-buffer-dma.c | 670 ++++++++++++++----
> > > .../buffer/industrialio-buffer-dmaengine.c | 42 +-
> > > drivers/iio/industrialio-buffer.c | 49 ++
> > > include/linux/iio/buffer-dma.h | 43 +-
> > > include/linux/iio/buffer-dmaengine.h | 5 +-
> > > include/linux/iio/buffer_impl.h | 8 +
> > > include/uapi/linux/iio/buffer.h | 30 +
> > > 11 files changed, 783 insertions(+), 165 deletions(-)
> > > create mode 100644 Documentation/iio/dmabuf_api.rst
> > >
> > > --
> > > 2.33.0
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
>
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch