

help / color / mirror / Atom feed

* Implement call_rcu_lazy() and miscellaneous fixes
@ 22:50 Joel Fernandes (Google)
  22:50 ` rcu: Introduce call_rcu_lazy() API implementation  Joel Fernandes (Google)
  22:50 ` rcu: shrinker for lazy rcu  Joel Fernandes (Google)
  22:50 ` context_tracking: Use arch_atomic_read() in __ct_state for KASAN  Joel Fernandes (Google)
  ` (7 subsequent siblings)

From: Joel Fernandes (Google) @ 22:50 UTC
Cc: linux-kernel, rushikesh.s.kadam, urezki, neeraj.iitr10, frederic,
    paulmck, rostedt, vineeth, Joel Fernandes (Google)

Please find the next improved version of call_rcu_lazy() attached. The main
difference from the previous version is that it is now using bypass lists, and
thus handles rcu_barrier() and hotplug situations, with some small changes to
those as well. I also don't see the TREE07 RCU stall from v1 anymore.

In the v1, we shared some numbers below (testing on v2 is in progress).

Rushikesh, feel free to pull these patches into your tree. Note that you will
need to pull the call_rcu_lazy() user patches from v1; I have dropped them in
this series, just to make the series focus on the feature code first.

Following are power savings we see on top of RCU_NOCB_CPU on an Intel platform.
The observation is that due to a 'trickle down' effect of RCU callbacks, the
system is very lightly loaded but constantly running a few RCU callbacks very
often. This confuses the power management hardware into thinking that the
system is active, for example, when the ChromeOS screen is off and the user is
not doing anything on the system.

Further, when the ChromeOS screen is ON but the system is idle or lightly
loaded, we can see that the display pipeline is constantly doing RCU callback
queuing due to open/close of file descriptors associated with graphics
buffers. This is attributed to the file_free_rcu() path, which this patch
series also touches.

This patch series adds a simple but effective, and lockless, implementation of
timer-based RCU lazy callback batching. On memory pressure, timeout, or the
queue growing too big, we initiate a flush of one or more per-CPU lists.
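
To make the batching idea concrete, below is a minimal, hypothetical sketch of
timer-based per-CPU batching. It is not the code in this series: the series
builds on the NOCB bypass lists and also flushes from a shrinker on memory
pressure, and every name below (struct lazy_batch, call_rcu_lazy_sketch(),
LAZY_FLUSH_JIFFIES, LAZY_QLEN_MAX) is made up for illustration; per-CPU timer
setup is omitted.

#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/timer.h>

#define LAZY_FLUSH_JIFFIES      (10 * HZ)       /* illustrative timeout */
#define LAZY_QLEN_MAX           1000            /* illustrative batch limit */

struct lazy_batch {
        struct rcu_head *head;          /* lockless singly-linked list */
        atomic_t len;                   /* number of batched callbacks */
        struct timer_list timer;        /* flush after LAZY_FLUSH_JIFFIES */
};
/* Assumes timer_setup(&lb->timer, lazy_batch_flush, 0) at init (not shown). */
static DEFINE_PER_CPU(struct lazy_batch, lazy_batch);

/* Timer callback: hand every batched callback to the normal call_rcu() path. */
static void lazy_batch_flush(struct timer_list *t)
{
        struct lazy_batch *lb = from_timer(lb, t, timer);
        struct rcu_head *rhp = xchg(&lb->head, NULL);

        while (rhp) {
                struct rcu_head *next = rhp->next;

                atomic_dec(&lb->len);
                call_rcu(rhp, rhp->func);
                rhp = next;
        }
}

/* Lockless push onto a per-CPU list; flush on overflow, else arm the timer. */
static void call_rcu_lazy_sketch(struct rcu_head *rhp, rcu_callback_t func)
{
        /* Pushing onto another CPU's list after migration is harmless here. */
        struct lazy_batch *lb = raw_cpu_ptr(&lazy_batch);
        struct rcu_head *old;

        rhp->func = func;
        do {
                old = READ_ONCE(lb->head);
                rhp->next = old;
        } while (cmpxchg(&lb->head, old, rhp) != old);

        if (atomic_inc_return(&lb->len) >= LAZY_QLEN_MAX)
                mod_timer(&lb->timer, jiffies);         /* flush ASAP */
        else if (!old)
                mod_timer(&lb->timer, jiffies + LAZY_FLUSH_JIFFIES);
}

The push-only list keeps the enqueue path lockless; only the timer (or an
overflow) hands the accumulated batch to the regular call_rcu() machinery.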

Similar results can be achieved by increasing jiffies_till_first_fqs; however,
that also has the effect of slowing down RCU, and I saw a noticeable slowdown
of the function graph tracer when increasing it.

One drawback of this series is that if another frequent RCU callback that is
not lazy creeps up in the future, then that will again hurt the power. However,
I believe identifying and fixing those is a more reasonable approach than
slowing down all of RCU.

Disclaimer: I have intentionally not CC'd other subsystem maintainers (like
net, fs) to keep noise low, and will CC them in the future after 1 or 2 rounds
of review.

The patches in this series are:

  rcu: Introduce call_rcu_lazy() API implementation
  rcu: shrinker for lazy rcu
  rcu/nocb: Add option to force all call_rcu() to lazy
  rcu/nocb: Wake up gp thread when flushing
  rcu/nocb: Rewrite deferred wake up logic to be more clean
  rcu/kfree: Fix kfree_rcu_shrink_count() return value
  rcuscale: Add test for using call_rcu_lazy() to emulate kfree_rcu()
  context_tracking: Use arch_atomic_read() in __ct_state for KASAN
  fs: Move call_rcu() to call_rcu_lazy() in some paths
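
For the fs conversion listed above, the per-call-site change is mechanical,
assuming call_rcu_lazy() keeps call_rcu()'s two-argument signature. The hunk
below is a hypothetical illustration only and is not copied from the actual
patch; the field and callback names are my best recollection of
fs/file_table.c:

        /* fs/file_table.c, illustrative only */
-       call_rcu(&f->f_u.fu_rcuhead, file_free_rcu);
+       call_rcu_lazy(&f->f_u.fu_rcuhead, file_free_rcu);

The intended users appear to be callbacks that merely free memory and can
therefore tolerate the extra grace-period latency of a lazily batched flush.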

* context_tracking: Use arch_atomic_read() in __ct_state for KASAN
@ 22:50 Joel Fernandes (Google)

Context tracking's __ct_state() function can be invoked from noinstr state.
This means that its use of atomic_read() causes KASAN to invoke the
non-noinstr __kasan_check_read() function from the noinstr function
__ct_state(). Someone tracing the __kasan_check_read() function could get a
nasty surprise.

This commit therefore replaces the __ct_state() function's use of
atomic_read() with arch_atomic_read(), which KASAN does not attempt to
instrument.

 include/linux/context_tracking_state.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
--- a/include/linux/context_tracking_state.h
+++ b/include/linux/context_tracking_state.h
@@ -49,7 +49,7 @@ DECLARE_PER_CPU(struct context_tracking, context_tracking);
 static __always_inline int __ct_state(void)
 {
-	return atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
+	return arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
 }
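
As background on why the one-line change helps (my summary, not text from the
patch): atomic_read() is the instrumented wrapper, which emits a KASAN/KCSAN
check before performing the raw access, while arch_atomic_read() is the raw
architecture implementation with no instrumentation. A contrived sketch of the
rule, with a made-up function name:

#include <linux/atomic.h>
#include <linux/compiler.h>

/* Hypothetical noinstr helper, for illustration only. */
static noinstr int example_read_state(atomic_t *state)
{
        /*
         * atomic_read(state) would expand to an instrumentation hook
         * (instrument_atomic_read()) followed by the raw read; that hook
         * is not noinstr, so noinstr code must use the arch_ variant
         * directly to avoid calling into instrumented code.
         */
        return arch_atomic_read(state);
}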
