----- On Dec 20, 2019, at 3:57 PM, Florian Weimer <fw@deneb.enyo.de> wrote:
* Mathieu Desnoyers:
----- On Dec 20, 2019, at 3:37 PM, Florian Weimer <fw@deneb.enyo.de> wrote:
* Mathieu Desnoyers:
diff --git a/include/uapi/linux/rseq.h b/include/uapi/linux/rseq.h
index 9a402fdb60e9..6f26b0b148a6 100644
--- a/include/uapi/linux/rseq.h
+++ b/include/uapi/linux/rseq.h
@@ -100,7 +100,9 @@ struct rseq {
 	 * instruction sequence block, as well as when the kernel detects that
 	 * it is preempting or delivering a signal outside of the range
 	 * targeted by the rseq_cs. Also needs to be set to NULL by user-space
-	 * before reclaiming memory that contains the targeted struct rseq_cs.
+	 * before reclaiming memory that contains the targeted struct rseq_cs
+	 * or reclaiming memory that contains the code referred to by the
+	 * start_ip and post_commit_offset fields of struct rseq_cs.
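For reference, the descriptor whose lifetime the comment above is about looks roughly like the sketch below (simplified from the struct rseq_cs definition in the same uapi header; the alignment attribute and the exact kernel integer types are elided). Both the descriptor itself and the code window it points at must remain mapped for as long as any thread's rseq_cs field may still point to the descriptor:

	/* Simplified sketch of the uapi struct rseq_cs (illustration only). */
	struct rseq_cs {
		uint32_t version;
		uint32_t flags;
		uint64_t start_ip;            /* address of the first instruction of the critical section */
		uint64_t post_commit_offset;  /* length of the critical section, in bytes, from start_ip */
		uint64_t abort_ip;            /* address the kernel branches to on abort */
	};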
Maybe mention that it's good practice to clear rseq_cs before returning from a function that contains a restartable sequence?
Unfortunately, clearing it is not free. Considering that rseq is meant to be used on very hot code paths, it would be preferable for applications to clear it only in the very infrequent case where the rseq_cs descriptor or the code it targets is about to vanish (e.g. dlclose or JIT reclaim), rather than requiring it to be cleared after every critical section. I am therefore reluctant to document the behavior you describe as a "good practice" for rseq.
You already have to write to rseq_cs before entering the critical section, right? Then you've already determined the address, and the cache line is already hot, so it really should be close to zero cost.
Considering that an rseq critical section as a whole executes in a fraction of a nanosecond on some architectures, the cost of an extra store may indeed be close to zero in absolute terms, but it still degrades performance significantly relative to the fast path.
I mean, you can still discard the advice, but you do so at your own peril …
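To make the cost being debated concrete, the sketch below shows the two stores in question; it is a hypothetical illustration only (the rseq_area layout, the rseq_abi TLS variable and the helper names are assumptions, not the real ABI). Every critical section already performs one store to publish its descriptor on entry; the "good practice" above adds a second store to clear it on exit:

	#include <stdint.h>

	struct rseq_cs;  /* descriptor described in the patch above */

	/* Hypothetical, simplified stand-in for the per-thread struct rseq
	 * area already registered with the kernel via the rseq() system call. */
	struct rseq_area {
		uint32_t cpu_id_start;
		uint32_t cpu_id;
		uint64_t rseq_cs;  /* address of the active struct rseq_cs, or 0 */
		uint32_t flags;
	};

	extern __thread struct rseq_area rseq_abi;  /* assumed registered area */

	/* Store every critical section already performs on entry: publish the
	 * descriptor so the kernel can detect preemption or signal delivery
	 * inside [start_ip, start_ip + post_commit_offset). */
	static inline void rseq_cs_publish(const struct rseq_cs *cs)
	{
		__atomic_store_n(&rseq_abi.rseq_cs, (uint64_t)(uintptr_t)cs,
				 __ATOMIC_RELAXED);
	}

	/* The extra store under discussion: clearing on exit leaves no dangling
	 * reference to the descriptor or its code, at the price of one more
	 * write on every pass through the fast path. */
	static inline void rseq_cs_clear(void)
	{
		__atomic_store_n(&rseq_abi.rseq_cs, 0, __ATOMIC_RELAXED);
	}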
I am also uncomfortable leaving this to the end user. One possibility would be to extend rseq or membarrier to add a kind of "rseq-clear" barrier, which would ensure that the kernel will have cleared the rseq_cs field for each thread belonging to the current process. glibc could then call this barrier before dlclose.
This is slightly different from another rseq-barrier that has been requested by Paul Turner: a way to ensure that all previously running rseq critical sections have completed or aborted.
AFAIU, the desiderata for the two use cases are as follows:
- rseq-barrier: guarantee that all prior rseq critical sections have completed or aborted, either for the current process or for a set of registered processes. This allows RCU-like algorithms to be built on top of rseq critical sections.

- rseq-clear: guarantee that the rseq_cs field has been cleared for each thread belonging to the current process before the barrier system call returns to the caller, aborting any rseq critical section currently running in those threads. The use case is to allow dlclose and JIT reclaim to remove any leftover reference to a struct rseq_cs descriptor, or to code, that is about to be reclaimed.
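If the rseq-clear variant were exposed through membarrier(2), the dlclose path could look roughly like the sketch below. This is a hypothetical illustration only: MEMBARRIER_CMD_CLEAR_RSEQ and its value are placeholders made up for this sketch, and unload_object() merely stands in for whatever glibc would do internally before unmapping an object:

	#include <dlfcn.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Placeholder for the proposed "rseq-clear" command; does not exist. */
	#define MEMBARRIER_CMD_CLEAR_RSEQ  (1 << 9)

	static int rseq_clear_barrier(void)
	{
		/* Ask the kernel to clear rseq_cs (aborting any critical section
		 * in flight) in every thread of the current process before
		 * returning to the caller. */
		return syscall(__NR_membarrier, MEMBARRIER_CMD_CLEAR_RSEQ, 0);
	}

	/* What the dynamic loader could do before unmapping a shared object. */
	static int unload_object(void *handle)
	{
		if (rseq_clear_barrier())
			return -1;  /* could not guarantee rseq_cs is cleared */
		return dlclose(handle);
	}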
Thoughts?
Thanks,
Mathieu