On Mon, Aug 28, 2023 at 10:37 AM Huacai Chen <chenhuacai@kernel.org> wrote:
On Mon, Aug 28, 2023 at 10:02 PM Paul E. McKenney <paulmck@kernel.org> wrote:
On Mon, Aug 28, 2023 at 01:33:48PM +0000, Joel Fernandes wrote:
On Mon, Aug 28, 2023 at 03:47:12AM -0700, Paul E. McKenney wrote:
On Sun, Aug 27, 2023 at 06:11:40PM -0400, Joel Fernandes wrote:
On Sun, Aug 27, 2023 at 1:51 AM Huacai Chen <chenhuacai@kernel.org> wrote: [..]
[Paul] The only way I know of to avoid these sorts of false positives is for the user to manually suppress all timeouts (perhaps using a kernel-boot parameter for your early-boot case), do the gdb work, and then unsuppress all stalls. Even that won't work for networking, because the other system's clock will be running throughout. In other words, from what I know now, there is no perfect solution. Therefore, there are sharp limits to the complexity of any solution that I will be willing to accept.

[Huacai] I think the simplest solution is (I hope Joel will not be angry):

[Joel] Not angry at all, just want to help. ;-) The problem is that the 300*HZ solution will also affect the VM workloads, which do a similar reset. Allow me a few days to see if I can take a shot at fixing it slightly differently. I am trying Paul's idea of setting jiffies at a later time; I think it is doable. The advantage of doing this is that it will make stall detection more robust in the face of these gaps in jiffies updates. And that solution does not even need us to rely on ktime (and all the issues that come with that).

[Joel] I wrote a patch similar to Paul's idea and sent it out for review, the advantage being that it is purely based on jiffies. Could you try it out and let me know?

[Huacai] If you can cc my gmail chenhuacai@gmail.com, that would be better.
[Joel] Sure, will do.
[Huacai] I have read your patch. Maybe the counter (nr_fqs_jiffies_stall) should be atomic_t and we should use an atomic operation to decrement its value, because rcu_gp_fqs() can run concurrently and we may miss the (nr_fqs == 1) condition.
[Joel] I don't think so. There is only one place where the RMW operation happens, and rcu_gp_fqs() is called only from the GP kthread, so a concurrent RMW (and hence a lost update) is not possible.
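For concreteness, the decrement under discussion has roughly the following shape (a simplified sketch based only on the names used in this thread, not the literal hunk from the patch; the exact placement within rcu_gp_fqs() is an assumption):

	/*
	 * Simplified sketch, not the actual patch hunk.  rcu_gp_fqs() is
	 * invoked only from the single RCU grace-period kthread, so this
	 * read-modify-write of rcu_state.nr_fqs_jiffies_stall never races
	 * with another instance of itself.
	 */
	static void rcu_gp_fqs(bool first_time)
	{
		int nr_fqs = READ_ONCE(rcu_state.nr_fqs_jiffies_stall);

		if (nr_fqs) {
			if (nr_fqs == 1)
				/* Jiffies should be ticking again; arm the stall timeout now. */
				WRITE_ONCE(rcu_state.jiffies_stall,
					   jiffies + rcu_jiffies_till_stall_check());
			WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, nr_fqs - 1);
		}
		/* ... usual force-quiescent-state processing ... */
	}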
[Paul] Huacai, is your concern that the gdb user might have created a script (for example, printing a variable or two, then automatically continuing), so that breakpoints could happen in quick succession, such that the second breakpoint might run concurrently with rcu_gp_fqs()?
If this can really happen, the point that Joel makes is a good one, namely that rcu_gp_fqs() is single-threaded and (absent rcutorture) runs only once every few jiffies. And gdb breakpoints, even with scripting, should also be rather rare. So if this is an issue, a global lock should do the trick, perhaps even one of the existing locks in the rcu_state structure. The result should then be just as performant/scalable and a lot simpler than use of atomics.
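If the counter ever did need to be written from more than one context, the lock-based alternative Paul describes could look something like the following (purely illustrative; the lock and helper names are made up here, and Paul's suggestion is to reuse an existing lock in the rcu_state structure rather than add a new one):

	/* Hypothetical illustration of the "just use a lock" alternative. */
	static DEFINE_RAW_SPINLOCK(nr_fqs_stall_lock);	/* made-up name */

	static void nr_fqs_countdown_locked(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&nr_fqs_stall_lock, flags);
		if (rcu_state.nr_fqs_jiffies_stall &&
		    !--rcu_state.nr_fqs_jiffies_stall)
			WRITE_ONCE(rcu_state.jiffies_stall,
				   jiffies + rcu_jiffies_till_stall_check());
		raw_spin_unlock_irqrestore(&nr_fqs_stall_lock, flags);
	}

Since rcu_gp_fqs() runs only a handful of times per grace period and gdb breakpoints are rare, such a lock would be essentially uncontended.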
[Joel] Thanks Paul and Huacai. I was also thinking that, in the event of such a concurrent breakpoint stalling jiffies updates while the GP kthread / rcu_gp_fqs() keeps chugging along, we could make the patch more robust for that situation as follows (diff on top of the previous patch [1]). Thoughts?
Also, if someone sets a breakpoint right after the "nr_fqs == 1" check, then they are kind of asking for it anyway, since the GP kthread getting stalled is an actual reason for RCU stalls (in fact, rcutorture even has a test mode for it :P), and as such the false positive may not be that false. ;-)
[Paul] That would indeed be asking for it. But then again, they might have set a breakpoint elsewhere that had the unintended side-effect of catching the RCU grace-period kthread right at that point.
If that isn't something we are worried about, your original is fine. If it is something we are worried about, I recommend learning from my RCU CPU stall warning experiences and just using a lock. ;-)
This sounds good to me.
[Huacai] I also think the original patch should be OK, but I have another question: what will happen if the current GP ends before nr_fqs_jiffies_stall reaches zero?
[Joel] Nothing should happen. Stall detection only happens while a GP is in progress, and if a new GP starts, it resets nr_fqs_jiffies_stall.
Or could you elaborate on your concern?
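For reference, a rough sketch of that reset at grace-period start (again only a sketch, assuming the counter is re-armed from the existing stall-check setup that runs when a new grace period begins; the initial count of 3 is chosen arbitrarily here):

	/*
	 * Sketch only: when a new grace period begins, re-arm the countdown
	 * so that jiffies_stall is set a few rcu_gp_fqs() passes later, by
	 * which time jiffies should be advancing normally again.
	 */
	static void record_gp_stall_check_time(void)
	{
		WRITE_ONCE(rcu_state.gp_start, jiffies);
		WRITE_ONCE(rcu_state.nr_fqs_jiffies_stall, 3);	/* count is illustrative */
		/* jiffies_stall itself is armed later, from rcu_gp_fqs(). */
	}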
Thanks.