Back from vacation.
On Thu, Jan 28, 2021 at 04:46:55PM +0100, Christian König wrote:
On 28.01.21 at 16:39, Felix Kuehling wrote:
On 2021-01-28 at 2:39 a.m., Christian König wrote:
On 27.01.21 at 23:00, Felix Kuehling wrote:
On 2021-01-27 at 7:16 a.m., Christian König wrote:
On 27.01.21 at 13:11, Maarten Lankhorst wrote:
On 27-01-2021 at 01:22, Felix Kuehling wrote:
> On 2021-01-21 at 2:40 p.m., Daniel Vetter wrote:
> > Recently there was a fairly long thread about recoverable hardware page
> > faults, how they can deadlock, and what to do about that.
> >
> > While the discussion is still fresh I figured good time to try and
> > document the conclusions a bit.
> >
> > References: https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flore.kerne...
> >
> > Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom@intel.com>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > Cc: Jerome Glisse <jglisse@redhat.com>
> > Cc: Felix Kuehling <felix.kuehling@amd.com>
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: linux-media@vger.kernel.org
> > Cc: linaro-mm-sig@lists.linaro.org
> > --
> > I'll be away next week, but figured I'll type this up quickly for some
> > comments and to check whether I got this all roughly right.
> >
> > Critique very much wanted on this, so that we can make sure hw which
> > can't preempt (with pagefaults pending) like gfx10 has a clear path to
> > support page faults in upstream. So anything I missed, got wrong or
> > like that would be good.
> > -Daniel
> > ---
> >  Documentation/driver-api/dma-buf.rst | 66 ++++++++++++++++++++++++++++
> >  1 file changed, 66 insertions(+)
> >
> > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> > index a2133d69872c..e924c1e4f7a3 100644
> > --- a/Documentation/driver-api/dma-buf.rst
> > +++ b/Documentation/driver-api/dma-buf.rst
> > @@ -257,3 +257,69 @@ fences in the kernel. This means:
> >      userspace is allowed to use userspace fencing or long running compute
> >      workloads. This also means no implicit fencing for shared buffers in these
> >      cases.
> > +
> > +Recoverable Hardware Page Faults Implications
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Modern hardware supports recoverable page faults, which has a lot of
> > +implications for DMA fences.
> > +
> > +First, a pending page fault obviously holds up the work that's running on the
> > +accelerator and a memory allocation is usually required to resolve the fault.
> > +But memory allocations are not allowed to gate completion of DMA fences, which
> > +means any workload using recoverable page faults cannot use DMA fences for
> > +synchronization. Synchronization fences controlled by userspace must be used
> > +instead.
> > +
> > +On GPUs this poses a problem, because current desktop compositor protocols on
> > +Linux rely on DMA fences, which means without an entirely new userspace stack
> > +built on top of userspace fences, they cannot benefit from recoverable page
> > +faults. The exception is when page faults are only used as migration hints and
> > +never to on-demand fill a memory request. For now this means recoverable page
> > +faults on GPUs are limited to pure compute workloads.
> > +
> > +Furthermore GPUs usually have shared resources between the 3D rendering and
> > +compute side, like compute units or command submission engines. If both a 3D
> > +job with a DMA fence and a compute workload using recoverable page faults are
> > +pending they could deadlock:
> > +
> > +- The 3D workload might need to wait for the compute job to finish and release
> > +  hardware resources first.
> > +
> > +- The compute workload might be stuck in a page fault, because the memory
> > +  allocation is waiting for the DMA fence of the 3D workload to complete.
> > +
> > +There are a few ways to prevent this problem:
> > +
> > +- Compute workloads can always be preempted, even when a page fault is pending
> > +  and not yet repaired. Not all hardware supports this.
> > +
> > +- DMA fence workloads and workloads which need page fault handling have
> > +  independent hardware resources to guarantee forward progress. This could be
> > +  achieved e.g. through dedicated engines and minimal compute unit
> > +  reservations for DMA fence workloads.
> > +
> > +- The reservation approach could be further refined by only reserving the
> > +  hardware resources for DMA fence workloads when they are in-flight. This must
> > +  cover the time from when the DMA fence is visible to other threads up to the
> > +  moment when the fence is completed through dma_fence_signal().
> > +
> > +- As a last resort, if the hardware provides no useful reservation mechanics,
> > +  all workloads must be flushed from the GPU when switching between jobs
> > +  requiring DMA fences or jobs requiring page fault handling: This means all DMA
> > +  fences must complete before a compute job with page fault handling can be
> > +  inserted into the scheduler queue. And vice versa, before a DMA fence can be
> > +  made visible anywhere in the system, all compute workloads must be preempted
> > +  to guarantee all pending GPU page faults are flushed.
>
> I thought of another possible workaround:
>
> * Partition the memory. Servicing of page faults will use a separate
>   memory pool that can always be allocated from without waiting for
>   fences. This includes memory for page tables and memory for migrating
>   data to. You may steal memory from other processes that can page
>   fault, so no fence waiting is necessary. Being able to steal memory at
>   any time also means there are basically no out-of-memory situations
>   you need to worry about. Even page tables (except the root page
>   directory of each process) can be stolen in the worst case.

I think 'overcommit' would be a nice way to describe this. But I'm not sure how easy this is to implement in practice. You would basically need to create your own memory manager for this.
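A minimal sketch of what such a dedicated pool could look like, assuming a carve-out reserved at driver load and managed with the kernel's gen_pool allocator; every driver-side name here (fault_pool, gpu_fault_pool_init, gpu_fault_alloc, steal_from_faultable_process) is made up for illustration:

  #include <linux/genalloc.h>

  static struct gen_pool *fault_pool;

  /* Hypothetical: evict memory belonging to another faultable process,
   * which can simply fault it back in later, so no dma_fence wait is
   * ever needed. */
  unsigned long steal_from_faultable_process(size_t size);

  int gpu_fault_pool_init(unsigned long base, size_t size)
  {
          /* Carve-out reserved at driver load, never handed to TTM. */
          fault_pool = gen_pool_create(PAGE_SHIFT, -1);
          if (!fault_pool)
                  return -ENOMEM;
          return gen_pool_add(fault_pool, base, size, -1);
  }

  /* Called while servicing a retry fault; must never block on a dma_fence. */
  unsigned long gpu_fault_alloc(size_t size)
  {
          unsigned long addr = gen_pool_alloc(fault_pool, size);

          if (!addr)
                  addr = steal_from_faultable_process(size);
          return addr;
  }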
Yeah, when we discussed this at Intel we came across this one too, but for the practical reasons laid out below this one is going to be very hard.
Some more of the things I dug out when looking into whether this is feasible are below.
Well, you would need a completely separate pool for both device and system memory.
E.g. on boot we say we steal X GB system memory only for HMM.
Why? The GPU driver doesn't need to allocate system memory for HMM. Migrations to system memory are handled by the kernel's handle_mm_fault and page allocator and swap logic.
And that one depends on dma_fence completion because you can easily need to wait for an MMU notifier callback.
I see, the GFX MMU notifier for userptrs in amdgpu currently waits for fences. For the KFD MMU notifier I am planning to fix this by causing GPU page faults instead of preempting the queues. Can we limit userptrs in amdgpu to engines that can page fault? Basically make it illegal to attach userptr BOs to graphics CS BO lists, so they can only be used in user mode command submissions, which can page fault. Then the GFX MMU notifier could invalidate PTEs and would not have to wait for fences.
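A rough sketch of what that restriction could look like in the CS path (simplified, not existing amdgpu code; amdgpu_ttm_tt_get_usermm() is the real helper amdgpu uses to detect userptr BOs, the surrounding check is made up):

  static int cs_reject_userptr_bos(struct amdgpu_bo **bos, unsigned int count)
  {
          unsigned int i;

          for (i = 0; i < count; i++) {
                  /* Userptr BOs may only be used by faultable user mode
                   * queues, so the MMU notifier can just invalidate PTEs
                   * instead of waiting for fences. */
                  if (amdgpu_ttm_tt_get_usermm(bos[i]->tbo.ttm))
                          return -EINVAL;
          }
          return 0;
  }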
It's not only the MMU notifier, the TTM shrinker I'm adding needs to wait for dma_fences as well.
And apart from that we can't limit userptrs since they are part of the UAPI and Vulkan/OpenGL.
So when I looked I noticed that ->mmap already has a GFP flag, but it seems largely defunct. It's in struct vm_fault.gfp_mask.
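For illustration, a minimal sketch of a driver .fault handler that would actually honour that mask instead of hardcoding GFP_KERNEL (drv_vm_fault() is a made-up name, error handling trimmed):

  static vm_fault_t drv_vm_fault(struct vm_fault *vmf)
  {
          struct page *page;

          /* Use the mask supplied by core MM for this fault instead of a
           * bare GFP_KERNEL, so the caller can restrict reclaim behaviour. */
          page = alloc_page(vmf->gfp_mask);
          if (!page)
                  return VM_FAULT_OOM;

          vmf->page = page;
          return 0;
  }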
We could also set a PF thread flag somehow to limit this.
But the real risk I'm seeing is that this means we're running the entire page fault handler from any fs/driver/whatever under a more limited memory allocation policy, and experience from other areas says that's very fragile and prone to blow up real bad. Other examples are the loopback block device (running file I/O under GFP_NOIO because it's a block device) or NFS, which runs the network stack under GFP_NOFS. I've chatted with some fs people, and they strongly recommend against this kind of magic "everything I call here has a limited memory allocation scope" trick.
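For reference, the scoped-allocation machinery that kind of trick is built on already exists as memalloc_noio_save()/memalloc_noio_restore() (and the NOFS variants) in <linux/sched/mm.h>; a hypothetical fault-servicing path using it would look roughly like this (struct gpu_fault and handle_gpu_fault() are made-up names):

  #include <linux/sched/mm.h>

  struct gpu_fault;
  void handle_gpu_fault(struct gpu_fault *fault);     /* hypothetical */

  static void service_gpu_fault_scoped(struct gpu_fault *fault)
  {
          unsigned int noio_flags;

          /* Everything allocated below, in any callee, is implicitly NOIO.
           * This implicit scope is exactly what fs developers warn becomes
           * fragile once the call chain gets deep. */
          noio_flags = memalloc_noio_save();
          handle_gpu_fault(fault);
          memalloc_noio_restore(noio_flags);
  }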
That's why I didn't bring it up, but I think for completeness I can mention this and explain why it's very hard to implement and probably not going to happen.
As Maarten wrote, if you want to go down this route you need a completely separate memory manager parallel to the kernel's.
Not really. I'm trying to make the GPU memory management more similar to what the kernel does for system memory.
I understood Maarten's comment as "I'm creating a new memory manager and not using TTM any more". This is true. The idea is that this portion of VRAM would be managed more like system memory.
I don't think that will fly. We can have the backing store which TTM uses for allocation shared with HMM.
But essentially TTM allocations need to be able to put pressure on HMM allocations, and the other way around.
Yeah, that's another reason why I think a full split isn't good: as soon as you run desktop stuff with a mixed workload we want the two worlds to press against each other and figure out a fair memory split. Also when we get into stuff like cgroups I don't think users want to manage these two worlds explicitly, especially if we want to keep the road open to transitioning Vulkan (and maybe also GL/libva) over to the explicit userspace fencing world.
I'll try and respin the patch with the suggestions from Christian and this thread addressed, and then resend the patch.
Cheers, Daniel
Regards, Christian.
Regards, Felix
Regards, Christian.
It doesn't depend on any fences, so it cannot deadlock with any GPU driver-managed memory. The GPU driver gets involved in the MMU notifier to invalidate device page tables. But that also doesn't need to wait for any fences.
And if the kernel runs out of pageable memory, you're in trouble anyway. The OOM killer will step in, nothing new there.
Regards, Felix
But from a design point of view, definitely a valid solution.
I think the restriction above makes it pretty much unusable.
But this looks good, those solutions are definitely the valid options we can choose from.
It's certainly worth noting, yes. And just to make sure that nobody has the idea to reserve only device memory.
Christian.
~Maarten