On Jun 21 2024, Alexei Starovoitov wrote:
On Fri, Jun 21, 2024 at 9:08 AM Benjamin Tissoires <bentiss@kernel.org> wrote:
On Jun 21 2024, Alexei Starovoitov wrote:
On Fri, Jun 21, 2024 at 1:56 AM Benjamin Tissoires <bentiss@kernel.org> wrote:
Same story as with hid_hw_raw_request:
This allows a bpf program to intercept, prevent, or change the behavior of hid_hw_output_report().
The intent is to solve a couple of use cases:
- firewalling a HID device: a firewall can monitor which processes open the hidraw nodes and then prevent or allow write operations on those nodes.
- changing the behavior of a device and emulating a new HID feature request
The hook is allowed to run as sleepable so it can itself call hid_hw_output_report(), which makes it possible to "convert" one request into another, or even to issue the request on a different HID device of the same physical device.
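For illustration only (not part of this patch), a hook implementation could look roughly like the sketch below; the SEC() names, the kfunc declaration and the "drop report ID 0x02" policy are assumptions modeled on the existing HID-BPF struct_ops programs:

/*
 * Hypothetical sketch: a sleepable struct_ops program filtering output
 * reports on the device it is attached to.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kfunc normally declared by the in-tree hid_bpf_helpers.h */
extern __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx,
			      unsigned int offset, const size_t __sz) __ksym;

SEC("struct_ops.s/hid_hw_output_report")	/* sleepable variant of the hook */
int BPF_PROG(filter_output_report, struct hid_bpf_ctx *hctx, __u64 source)
{
	/* look at the first byte of the report (the report ID) */
	__u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 1 /* size */);

	if (!data)
		return 0;	/* let the request go through */

	/* made-up policy: block output reports with report ID 0x02 */
	if (data[0] == 0x02)
		return -1;	/* non-zero short-circuits the dispatch loop */

	return 0;		/* fall through to the real hid_hw_output_report() */
}

SEC(".struct_ops.link")
struct hid_bpf_ops output_firewall = {
	.hid_hw_output_report = (void *)filter_output_report,
};

char _license[] SEC("license") = "GPL";

Userspace would then point the struct_ops map at a given device (e.g. through the hid_id field used by the other HID-BPF hooks) before attaching it.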
Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Here checkpatch complains:
  WARNING: use of RCU tasks trace is incorrect outside BPF or core RCU code
However, we are jumping into BPF code here, so I think this is correct, but I'd like the opinion of the BPF folks.
 drivers/hid/bpf/hid_bpf_dispatch.c   | 37 ++++++++++++++++++++++++++++++++----
 drivers/hid/bpf/hid_bpf_struct_ops.c |  1 +
 drivers/hid/hid-core.c               | 10 ++++++++--
 drivers/hid/hidraw.c                 |  2 +-
 include/linux/hid.h                  |  3 ++-
 include/linux/hid_bpf.h              | 24 ++++++++++++++++++++++-
 6 files changed, 68 insertions(+), 9 deletions(-)
diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c
index 8d6e08b7c42f..2a29a0625a3b 100644
--- a/drivers/hid/bpf/hid_bpf_dispatch.c
+++ b/drivers/hid/bpf/hid_bpf_dispatch.c
@@ -111,6 +111,38 @@ int dispatch_hid_bpf_raw_requests(struct hid_device *hdev,
 }
 EXPORT_SYMBOL_GPL(dispatch_hid_bpf_raw_requests);
+int dispatch_hid_bpf_output_report(struct hid_device *hdev,
+				   __u8 *buf, u32 size, __u64 source,
+				   bool from_bpf)
+{
+	struct hid_bpf_ctx_kern ctx_kern = {
+		.ctx = {
+			.hid = hdev,
+			.allocated_size = size,
+			.size = size,
+		},
+		.data = buf,
+		.from_bpf = from_bpf,
+	};
+	struct hid_bpf_ops *e;
+	int ret;
+
+	rcu_read_lock_trace();
+	list_for_each_entry_rcu(e, &hdev->bpf.prog_list, list) {
+		if (e->hid_hw_output_report) {
+			ret = e->hid_hw_output_report(&ctx_kern.ctx, source);
+			if (ret)
+				goto out;
+		}
+	}
+
+	ret = 0;
+out:
+	rcu_read_unlock_trace();
same question.
Regarding "What is this for?":
e->hid_hw_output_report might sleep, so using a plain rcu_read_lock() triggers warnings.
Ok, but just replacing rcu_read_lock() with rcu_read_lock_trace() doesn't fix it. RCU and RCU tasks trace are different flavors: if you're using call_rcu() to wait for a grace period before freeing an element of that list, things will go wrong.
If you really need RCU lifetimes here, use SRCU. It's a much better fit: srcu_read_lock() here, paired with call_srcu() on the update side.
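Roughly like this (untested sketch of the fragment replacing the rcu_read_lock_trace() section above, assuming a per-device srcu_struct, say hdev->bpf.srcu, that the patch would have to add and initialize):

	int idx;

	idx = srcu_read_lock(&hdev->bpf.srcu);
	list_for_each_entry_srcu(e, &hdev->bpf.prog_list, list,
				 srcu_read_lock_held(&hdev->bpf.srcu)) {
		if (e->hid_hw_output_report) {
			ret = e->hid_hw_output_report(&ctx_kern.ctx, source);
			if (ret)
				goto out;
		}
	}
	ret = 0;
out:
	srcu_read_unlock(&hdev->bpf.srcu, idx);

with the update side removing entries via list_del_rcu() and freeing them only after synchronize_srcu() or call_srcu() on the same srcu_struct.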
OK, thanks for the explanation.
I'll work on this for v2
Cheers,
Benjamin