On Fri, Dec 10, 2021 at 08:29:24PM +0900, Shunsuke Mie wrote:
> Hi Jason,
> Thank you for replying.
>
> On Wed, Dec 8, 2021 at 2:14, Jason Gunthorpe <jgg(a)ziepe.ca> wrote:
> >
> > On Fri, Dec 03, 2021 at 12:51:44PM +0900, Shunsuke Mie wrote:
> > > Hi maintainers,
> > >
> > > Could you please review this patch series?
> >
> > Why is it RFC?
> >
> > I'm confused why this is useful?
> >
> > This can't do copy from MMIO memory, so it shouldn't be compatible
> > with things like Gaudi - does something prevent this?
> I think that if an exporter of the dma-buf supports vmap, the CPU is able to
> access the MMIO memory.
>
> Is that wrong? If it is, there is no advantage to these changes.
I don't know what the dmabuf folks did, but yes, it is wrong.
IOMEM must be touched using only special accessors, some platforms
crash if you don't do this. Even x86 will crash if you touch it with
something like an XMM optimized memcpy.
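To make the distinction concrete, here is a minimal sketch (not from the
patch under review; the function and parameter names are made up) of the
accessor rule:

#include <linux/io.h>
#include <linux/string.h>

/* Sketch: CPU copies out of MMIO must go through the io accessors. */
static void copy_from_bar(void *dst, const void __iomem *src, size_t len)
{
        /* OK: memcpy_fromio() uses MMIO-safe access patterns. */
        memcpy_fromio(dst, src, len);

        /*
         * NOT OK: memcpy(dst, (__force const void *)src, len); the
         * compiler/libc may use vectorized (e.g. XMM) accesses that
         * fault on MMIO on many platforms.
         */
}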
Christian? If the vmap succeeds what rules must the caller use to
access the memory?
Jason
09.12.2021 18:05, Akhil R wrote:
> Add support for SMBus Alert and SMBus block read functions to
> i2c-tegra driver
>
> Akhil R (2):
> dt-bindings: i2c: tegra: Add SMBus feature properties
> i2c: tegra: Add SMBus block read and SMBus alert functions
>
> .../devicetree/bindings/i2c/nvidia,tegra20-i2c.txt | 4 ++
> drivers/i2c/busses/i2c-tegra.c | 54 +++++++++++++++++++++-
> 2 files changed, 57 insertions(+), 1 deletion(-)
>
How was this tested? This series must include the DT patch. If there is
no real user upstream for this feature, then I don't think we should
bother with it at all.
On Thu, Dec 09, 2021 at 08:35:20PM +0530, Akhil R wrote:
> Tegra I2C can use a GPIO as an SMBus alert. Document its usage.
>
> Signed-off-by: Akhil R <akhilrajeev(a)nvidia.com>
> ---
> Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt b/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt
> index 3f2f990..71ee79b 100644
> --- a/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt
> +++ b/Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt
> @@ -70,6 +70,10 @@ Required properties:
> - rx
> - tx
>
> +optional properties:
> +- smbalert-gpio: Must contain an entry for the gpio to be used as smbus alert.
> + It will be used only if optional smbus-alert property is present.
There's already a standard way to do this with interrupts, and GPIOs can
usually be used as interrupts.
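For illustration only, a rough sketch of what the interrupt-based variant
could look like in the driver, assuming the generic "smbus_alert"
interrupt name handled by the I2C core and that i2c_dev->dev /
i2c_dev->adapter match the existing driver layout:

#include <linux/of_irq.h>
#include <linux/i2c-smbus.h>

/* Sketch: use the standard "smbus_alert" interrupt instead of a GPIO. */
static int tegra_i2c_setup_smbalert(struct tegra_i2c_dev *i2c_dev)
{
        struct i2c_smbus_alert_setup setup = { };
        struct i2c_client *ara;
        int irq;

        irq = of_irq_get_byname(i2c_dev->dev->of_node, "smbus_alert");
        if (irq <= 0)
                return irq ? irq : -ENOENT;

        setup.irq = irq;
        /* Register the Alert Response Address client with the I2C core. */
        ara = i2c_new_smbus_alert_device(&i2c_dev->adapter, &setup);
        return PTR_ERR_OR_ZERO(ara);
}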
> +
> Example:
>
> i2c@7000c000 {
> --
> 2.7.4
>
>
On 13.12.21 at 12:18, Shunsuke Mie wrote:
> On Fri, Dec 10, 2021 at 22:29, Christian König <christian.koenig(a)amd.com> wrote:
>> On 10.12.21 at 14:26, Jason Gunthorpe wrote:
>>> On Fri, Dec 10, 2021 at 01:47:37PM +0100, Christian König wrote:
>>>> On 10.12.21 at 13:42, Jason Gunthorpe wrote:
>>>>> On Fri, Dec 10, 2021 at 08:29:24PM +0900, Shunsuke Mie wrote:
>>>>>> Hi Jason,
>>>>>> Thank you for replying.
>>>>>>
>>>>>> On Wed, Dec 8, 2021 at 2:14, Jason Gunthorpe <jgg(a)ziepe.ca> wrote:
>>>>>>> On Fri, Dec 03, 2021 at 12:51:44PM +0900, Shunsuke Mie wrote:
>>>>>>>> Hi maintainers,
>>>>>>>>
>>>>>>>> Could you please review this patch series?
>>>>>>> Why is it RFC?
>>>>>>>
>>>>>>> I'm confused why this is useful?
>>>>>>>
>>>>>>> This can't do copy from MMIO memory, so it shouldn't be compatible
>>>>>>> with things like Gaudi - does something prevent this?
>>>>>> I think that if an exporter of the dma-buf supports vmap, the CPU is able to
>>>>>> access the MMIO memory.
>>>>>>
>>>>>> Is that wrong? If it is, there is no advantage to these changes.
>>>>> I don't know what the dmabuf folks did, but yes, it is wrong.
>>>>>
>>>>> IOMEM must be touched using only special accessors, some platforms
>>>>> crash if you don't do this. Even x86 will crash if you touch it with
>>>>> something like an XMM optimized memcpy.
>>>>>
>>>>> Christian? If the vmap succeeds what rules must the caller use to
>>>>> access the memory?
>>>> See dma-buf-map.h and especially struct dma_buf_map.
>>>>
>>>> MMIO memory is perfectly supported here and actually the most common case.
>>> Okay that looks sane, but this rxe RFC seems to ignore this
>>> completely. It stuffs the vaddr directly into a umem which goes to all
>>> manner of places in the driver.
>>>
>>> ??
>> Well, yes that can go boom pretty quickly.
> Sorry, I was wrong. The dma_buf_map handles both iomem and vaddr regions, but
> this RFC only supports vaddr. The advantage of the partial support is that we
> can use a vaddr dma-buf in RXE without changing the rxe data copy
> implementation.
Well, that is most likely not a good idea.
For example, buffers for GPU drivers can be placed in both MMIO memory
and system memory.
If you don't want to provoke random failures, you *MUST* be able to
handle both if you want to use this.
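A minimal sketch of what handling both cases could look like for a
consumer of the vmap (names are illustrative, assuming the struct
dma_buf_map layout from dma-buf-map.h):

#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/string.h>

/* Sketch: copy out of a vmapped dma-buf, honouring the mapping type. */
static void copy_from_dmabuf_vmap(void *dst, const struct dma_buf_map *map,
                                  size_t offset, size_t len)
{
        if (map->is_iomem)
                memcpy_fromio(dst, map->vaddr_iomem + offset, len);
        else
                memcpy(dst, map->vaddr + offset, len);
}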
Regards,
Christian.
>
> An example of a dma-buf pointing to a vaddr: some GPU drivers use system RAM
> as VRAM, and we can get a dma-buf for such a region that refers to vaddr
> memory. Specifically, GPU drivers using gpu/drm/drm_gem_cma_helper.c are one
> such example.
>
>> Not sure what they want to use this for.
> I'd like to use RDMA with RXE for that memory region.
>
> Best,
> Shunsuke
>> Christian.
>>
>>> Jason
09.12.2021 18:33, Andy Shevchenko wrote:
> On Thu, Dec 9, 2021 at 5:30 PM Dmitry Osipenko <digetx(a)gmail.com> wrote:
>> 09.12.2021 18:05, Akhil R wrote:
>>> +static int tegra_i2c_setup_smbalert(struct tegra_i2c_dev *i2c_dev)
>>> +{
>>> + struct tegra_i2c_smbalert *smbalert = &i2c_dev->smbalert;
>>> + struct gpio_desc *alert_gpiod;
>>> + struct i2c_client *ara;
>>> +
>>> + alert_gpiod = devm_gpiod_get(i2c_dev->dev, "smbalert", GPIOD_IN);
>>> + if (IS_ERR(alert_gpiod))
>>> + return PTR_ERR(alert_gpiod);
>>> +
>>> + smbalert->alert_data.irq = gpiod_to_irq(alert_gpiod);
>>> + if (smbalert->alert_data.irq <= 0)
>>> + return smbalert->alert_data.irq;
>>
>> 0 is the error condition.
>
> I'm not sure what you implied here. gpiod_to_irq() returns 0 only on
> architectures where a valid vIRQ 0 might be possible, but this is not
> the case (at least I have never heard of such) for GPIO controllers on
> those platforms. So, looking at the above code, I can say that the '='
> part is redundant.
>
Yes, removal of the '=' should be enough here.
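I.e., the check in the patch would become something like:

        smbalert->alert_data.irq = gpiod_to_irq(alert_gpiod);
        if (smbalert->alert_data.irq < 0)
                return smbalert->alert_data.irq;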
Once more an entire week behind on mails, but this looked interesting
enough.
On Fri, Dec 03, 2021 at 03:18:01PM +0100, Thomas Hellström wrote:
> On Fri, 2021-12-03 at 14:08 +0100, Christian König wrote:
> > On 01.12.21 at 13:16, Thomas Hellström (Intel) wrote:
> > >
> > > On 12/1/21 12:25, Christian König wrote:
> > > > And why do you use dma_fence_chain to generate a timeline for
> > > > TTM?
> > > > That should come naturally because all the moves must be ordered.
> > >
> > > Oh, in this case because we're looking at adding stuff at the end of
> > > migration (like coalescing object shared fences and / or async unbind
> > > fences), which may not complete in order.
> >
> > Well, that's ok as well. My question is why does this single dma_fence
> > then show up in the dma_fence_chain representing the whole
> > migration?
>
> What we'd like to happen during eviction is that we
>
> 1) await any exclusive- or moving fences, then schedule the migration
> blit. The blit manages its own GPU ptes. Results in a single fence.
> 2) Schedule unbind of any gpu vmas, resulting possibly in multiple
> fences.
This sounds like over-optimizing for nothing. We only really care about
pipelining moves on dgpu, and on dgpu we only care about modern userspace
(because even GL moves in that direction).
And modern means that usually even write access only sets a read
fence, because in vk/compute we only set write fences for objects which
need implicit sync, and _only_ when actually needed.
So ignoring read fences for moves "because it's only reads" is actually
busted.
I think for buffer moves we should document and enforce (in review) the
rule that you have to wait for all fences, otherwise boom. Same as
before freeing backing storage, really. Otherwise there are just too
many gaps and surprises.
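As a sketch of that rule, assuming the dma_resv_wait_timeout() helper in
its current form (names are illustrative):

#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Sketch: before moving backing storage, wait for *all* fences on the
 * object's reservation object, not just the exclusive one. */
static int wait_before_move(struct dma_resv *resv)
{
        long ret;

        ret = dma_resv_wait_timeout(resv, true /* wait_all */,
                                    true /* interruptible */,
                                    MAX_SCHEDULE_TIMEOUT);
        return ret < 0 ? ret : 0;
}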
And yes with Christian's rework of dma_resv this will change, and we'll
allow multiple write fences (because that's what amdgpu encoded into their
uapi). Still means that you cannot move a buffer without waiting for read
fences (or kernel fences or anything really).
The other thing is this entire spinlock recursion topic for dma_fence, and
I'm deeply unhappy about the truckload of tricks i915 plays and hence in
favour of avoiding recursion in this area as much as possible.
If we really can't avoid it, then irq_work to get a new clean context gets
the job done. Making this messy and extra work is imo a feature; lock
nesting of same-level locks is just not a good & robust engineering idea.
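For reference, the irq_work escape hatch amounts to roughly this sketch
(all names are made up):

#include <linux/irq_work.h>

static struct irq_work retire_work;

/* Runs in a fresh hardirq context, outside the fence locks. */
static void retire_work_func(struct irq_work *work)
{
        /* processing that must not nest under same-level fence locks */
}

static void init_retire(void)
{
        init_irq_work(&retire_work, retire_work_func);
}

/* Called from a dma_fence callback where recursion must be avoided. */
static void fence_signalled_cb(void)
{
        irq_work_queue(&retire_work);
}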
/me back to being completely buried
I do hope I can find some more time to review a few more of Christian's
patches this week though :-/
Cheers, Daniel
> 3) Most, but not all, of the remaining resv shared fences will have been
> finished in 2). We can't easily tell which, so we have a couple of shared
> fences left.
> 4) Add all fences resulting from 1) 2) and 3) into the per-memory-type
> dma-fence-chain.
> 5) hand the resulting dma-fence-chain representing the end of migration
> over to ttm's resource manager.
>
> Now this means we have a dma-fence-chain disguised as a dma-fence out
> in the wild, and it could in theory reappear as a 3) fence for another
> migration unless a very careful audit is done, or as an input to the
> dma-fence-array used for that single dependency.
>
> >
> > That somehow doesn't seem to make sense, because each individual step
> > of the migration needs to wait for those dependencies as well, even
> > when it runs in parallel.
> >
> > > But that's not really the point, the point was that an (at least to
> > > me) seemingly harmless usage pattern, be it real or fictitious, ends
> > > up giving you severe internal- or cross-driver headaches.
> >
> > Yeah, we probably should document that better. But in general I don't
> > see much reason to allow mixing containers. The dma_fence_array and
> > dma_fence_chain objects have some distinct use cases, and using them
> > to build up larger dependency structures sounds really questionable.
>
> Yes, I tend to agree to some extent here. Perhaps add warnings when
> adding a chain or array as an input to an array and when accidentally
> joining chains, and provide helpers for flattening if needed.
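A sketch of such a warning, using the existing container casts (the
helper name is made up):

#include <linux/bug.h>
#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>

/* Sketch: catch accidental nesting of fence containers. */
static void warn_on_nested_container(struct dma_fence *fence)
{
        WARN_ON(to_dma_fence_chain(fence));
        WARN_ON(to_dma_fence_array(fence));
}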
>
> /Thomas
>
>
> >
> > Christian.
> >
> > >
> > > /Thomas
> > >
> > >
> > > >
> > > > Regards,
> > > > Christian.
> > > >
> > > >
> >
>
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch