Hi Viresh,
Vacation is over now; I will take a look at what you have and add it to the demo library.
On 11/19/24 2:06 AM, Viresh Kumar wrote:
On 18-11-24, 16:27, Viresh Kumar wrote:
Hi everyone,
I hope all interested parties are part of the previously created list.
Based on our earlier discussions, where I was struggling to find a setup for testing virtio-msg with FFA (or anything more realistic than hacked-up MMIO), I decided to go ahead and use Bertrand's Xen patches with Linux-based host and guest VMs.
I now have a working setup with Bertrand's Xen patches, where we can do FFA-based virtio communication between guest and host kernels. I was able to reuse my earlier setup (the one where we tested I2C with MMIO traps). I have created another page [1] (similar to the previous one) for anyone curious to replicate it.
This would work just fine for Vsock as well, which I tested earlier with the backend from the vhost-device crate.
Hi Bill,
A few updates on this:
The virtio-msg-ffa driver supports async messages. I tried to move that code to the virtio-msg layer (so other transports could also use it) but dropped the idea for now, for the following reasons:
Right now both virtio-msg and virtio-msg-ffa need async message support (for virtio and transport-related messages respectively), and since every message travels over the transport anyway, ffa seems to be the right place to keep that code; both layers can then use it without duplication.
The only thing I am doing for now (for async support) is wait-for-completion. I thought about keeping the API in the core code (so other transports could use it too), but left it where it is as there isn't much going on; each transport can just use a completion as well.
Maybe later I will add some support in the core, once there is more code that needs it and the async handling has to become more complicated.
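For reference, here is a rough sketch of the wait-for-completion flow described above; the structure layout and the vmsg_ffa_xmit() helper are made up for illustration and are not the actual driver API.

    #include <linux/completion.h>

    /*
     * Illustrative sketch only, not the real driver code. One message
     * is in flight per device at a time, so a single completion is
     * enough and no message-id matching is needed.
     */
    struct vmsg_ffa_dev {
            struct completion resp_ready;
            void *resp;                     /* filled in by the RX handler */
    };

    static int vmsg_ffa_xmit(struct vmsg_ffa_dev *vdev, void *req); /* hypothetical */

    static int vmsg_ffa_send_sync(struct vmsg_ffa_dev *vdev, void *req,
                                  void **resp)
    {
            int ret;

            reinit_completion(&vdev->resp_ready);

            ret = vmsg_ffa_xmit(vdev, req);
            if (ret)
                    return ret;

            /* The FFA RX handler stores the reply and calls complete(). */
            wait_for_completion(&vdev->resp_ready);
            *resp = vdev->resp;
            return 0;
    }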
Also, the ffa async code for now assumes that only one message will be sent by the guest side per device at a time, as I don't see a case where the frontends send multiple messages in parallel. So we aren't required to match message-ids for now, and I haven't written that more complicated code yet. Do you see a case where we need to support multiple messages per device in parallel?
Yes, after coding my own I don't know that there is a ton of reason to centralize the async waiting code. The way things work now, I serialize each request per device. Multiple responses can be pending for different devices, but not for the same device.
With the way Linux is structured I don't see any way to improve this, and there is probably no reason to do so.
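Concretely, the serialization I mean looks something like this (a sketch only; the names and the vmsg_transport_send() helper are made up):

    #include <linux/completion.h>
    #include <linux/mutex.h>

    /*
     * Sketch only. The per-device mutex serializes requests for one
     * device, while requests for different devices can be outstanding
     * at the same time. Since at most one response is ever pending per
     * device, the RX path never has to match message-ids.
     */
    struct vmsg_dev {
            struct mutex req_lock;          /* one request per device */
            struct completion resp_ready;
    };

    static int vmsg_transport_send(struct vmsg_dev *dev, void *req); /* hypothetical */

    static int vmsg_request(struct vmsg_dev *dev, void *req)
    {
            int ret;

            mutex_lock(&dev->req_lock);
            reinit_completion(&dev->resp_ready);
            ret = vmsg_transport_send(dev, req);
            if (!ret)
                    wait_for_completion(&dev->resp_ready);
            mutex_unlock(&dev->req_lock);
            return ret;
    }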
In a greenfield implementation you could imagine sending N GET VQ messages and then N SET VQ messages, which would cut down on the number of round trips needed to set up a device. However, I suspect no one really cares about that.
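If anyone ever did care, the pipelined setup might look roughly like the sketch below; the GET_VQ/SET_VQ helpers, struct vq_info, and VMSG_MAX_VQS are purely illustrative and not from any existing spec or driver.

    #include <linux/errno.h>
    #include <linux/types.h>

    /*
     * Illustrative sketch: send all GET_VQ requests, collect the
     * replies, then send all SET_VQ requests, cutting the round trips
     * from 2*N to 2 for an N-queue device. Assumes a transport that
     * can queue several outstanding messages and match responses to
     * queue indices.
     */
    #define VMSG_MAX_VQS 8

    struct vq_info { u64 addr; u32 size; };     /* made-up payload */

    /* Hypothetical helpers, for illustration only. */
    static int vmsg_send_get_vq(struct vmsg_dev *dev, int idx);
    static int vmsg_recv_get_vq_resp(struct vmsg_dev *dev, int idx,
                                     struct vq_info *info);
    static int vmsg_send_set_vq(struct vmsg_dev *dev, int idx,
                                struct vq_info *info);
    static int vmsg_wait_all_set_vq_resps(struct vmsg_dev *dev, int nvqs);

    static int vmsg_setup_vqs_pipelined(struct vmsg_dev *dev, int nvqs)
    {
            struct vq_info info[VMSG_MAX_VQS];
            int i, ret;

            if (nvqs > VMSG_MAX_VQS)
                    return -EINVAL;

            for (i = 0; i < nvqs; i++) {
                    ret = vmsg_send_get_vq(dev, i);         /* no wait */
                    if (ret)
                            return ret;
            }
            for (i = 0; i < nvqs; i++) {
                    ret = vmsg_recv_get_vq_resp(dev, i, &info[i]);
                    if (ret)
                            return ret;
            }
            for (i = 0; i < nvqs; i++) {
                    ret = vmsg_send_set_vq(dev, i, &info[i]); /* no wait */
                    if (ret)
                            return ret;
            }
            /* Wait for all SET_VQ acks before declaring the device ready. */
            return vmsg_wait_all_set_vq_resps(dev, nvqs);
    }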
Bill
-- viresh