On Thu, 4 May 2023 13:27:22 +0200 Jonas Ådahl jadahl@gmail.com wrote:
On Thu, May 04, 2023 at 01:39:04PM +0300, Pekka Paalanen wrote:
On Thu, 4 May 2023 01:50:25 +0000 Zack Rusin zackr@vmware.com wrote:
On Wed, 2023-05-03 at 09:48 +0200, Javier Martinez Canillas wrote:
Zack Rusin zackr@vmware.com writes:
On Tue, 2023-05-02 at 11:32 +0200, Javier Martinez Canillas wrote:
AFAICT this is the only remaining thing to be addressed for this series?
No, there was more. Tbh I haven't had the time to think about whether the above makes sense to me. For example, I'm not sure whether having virtualized drivers expose "support universal planes" and then adding another plane which is not universal (the only "universal" plane on them being the default one) makes more sense than a flag that says "this driver requires a cursor in the cursor plane". There's certainly a huge difference in how userspace would be required to handle the two, and it's way uglier with two different cursor planes.

In other words, there are a lot of ways in which this could be cleaner in the kernel, but they all require significant changes to userspace that go way beyond "attach hotspot info to this plane". I'd like to avoid approaches where running with atomic KMS requires completely separate paths for virtualized drivers, because no one will ever support and maintain those.
It's not a trivial thing, because it's fundamentally hard to untangle the fact that virtualized drivers have been advertising universal plane support without ever actually supporting universal planes, especially since most new userspace checks for "universal planes" before exposing atomic KMS paths.
After some discussion on #dri-devel, your approach makes sense, and the only contention point is the name of the driver feature flag. The one you are using (DRIVER_VIRTUAL) seems too broad and generic (the fact that vkms is a virtual driver as well but won't set it is a good example).
Maybe something like DRIVER_CURSOR_HOTSPOT or DRIVER_CURSOR_COMMANDEERING would be more accurate and self-explanatory?
Sure, or even the more verbose DRIVER_NEEDS_CURSOR_PLANE_HOTSPOT, but it sounds like Pekka doesn't agree with this approach. As I mentioned in my response to him, I'd be happy with any approach that gets paravirtualized drivers working with atomic KMS, but atm I don't have enough time to create a new kernel subsystem or a new set of uAPIs for paravirtualized drivers and then port mutter/kwin to them.
It seems I have not been clear enough, apologies. Once more, in short:
Zack, I'm worried about this statement from you (copied from above):
I'd like to avoid approaches that mean running with atomic kms requires completely separate paths for virtualized drivers because no one will ever support and maintain it.
It feels like you are intentionally limiting your own design options out of fear that "no one will ever support it". I'm worried that, over the coming years, that will lead to a hard-to-use, hard-to-maintain patchwork of vague, undocumented, or simply too numerous little UAPI details.
Please, don't limit your designs. There are good reasons why nested KMS drivers behave fundamentally differently to most KMS hardware drivers. Userspace that does not or cannot take that into account is unavoidably crippled.
From a compositor side, there is a valid reason to minimize the uAPI difference between "nested virtual machine" code paths and "running on actual hardware" code paths, which is to let virtual machines with a viewer connected to KMS act as a testing environment, rather than a production environment. Running a production environment in a virtual machine doesn't really need to use KMS at all.
When using virtual machines for testing, I want to minimize the amount of differentiation between running on hardware and running in the VM, because otherwise the parts being tested are not the same.
I realize that hotspots and the cursor moving viewer-side contradict that to some degree, but still, from the point of view of graphical testing with a VM, one has to compromise: testing isn't just for the KMS layer, but for the DE and the distribution as a whole.
Right, I'm looking at this from the production use only point of view, and not as any kind of testing environment, not for compositor KMS driving bits at least. Using a virtualized driver for KMS testing seems so very... manual to me, and like you said, it's not representative of "real" behaviour.
As for the best choice for production use, KMS in the guest OS is attractive because it offers zero-copy direct scanout to the host hardware, given the right stack. OTOH, I think RDP has extensions that could enable that too, and if the endpoint is not a host hardware display, then KMS use in the guest is indeed not the best idea.
I don't recall any mention of actual use cases here recently. I agree the intended use makes a huge difference. Testing KMS userspace and production use are almost the opposite goals for virtualized drivers.
Thanks, pq