Use lock_api trait for configurable PDI lock
#331
Merged
Hey, it's me again :).
Following our conversation on Matrix about realtime performance on Linux, I was able to tune the performance of my system by using blocking timers that inherit the PREEMPT_RT priority of my PDI loop thread, instead of smol's timers, which delegate timing to a non-RT helper thread.

Before:

After:

That looks way better... but wait, why is there still a spike!?
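(As an aside, the blocking-timer pattern mentioned above looks roughly like the sketch below. It's a minimal illustration rather than ethercrab's actual loop; `pdi_loop` and `tick` are made-up names.)

```rust
use std::time::{Duration, Instant};

/// Minimal sketch of a periodic tick driven by a blocking sleep on the
/// calling thread. Because the thread blocks in place, the wait runs at
/// the thread's own PREEMPT_RT priority instead of being delegated to an
/// async runtime's (non-RT) timer thread.
fn pdi_loop(cycle: Duration, mut tick: impl FnMut()) {
    let mut next_deadline = Instant::now() + cycle;

    loop {
        tick();

        // Sleep until the next absolute deadline so the period stays
        // stable even if a tick occasionally runs long.
        if let Some(remaining) = next_deadline.checked_duration_since(Instant::now()) {
            std::thread::sleep(remaining);
        }
        next_deadline += cycle;
    }
}
```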
Via debugging, I was able to determine that occasionally, `let mut pdi_lock = self.pdi.write();` in `subdevice_group/mod.rs` would take 2-20 ms to acquire the lock, despite running on a high-priority thread. We've run into the problem of priority inversion: my application-layer non-RT threads that want to read and write the PDI (theoretically an instantaneous operation) get preempted while still holding the lock, causing the PDI loop thread to hang.

There are various approaches to solving priority inversion, including OS-provided mutexes that upgrade/inherit the priority of any lower-priority thread currently holding the lock (e.g. `futex`), and RT-specific algorithms that do dependency analysis or simply yield on lock contention instead of spinning. Some available mutexes address this issue to various extents, with mixed results.
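To illustrate the OS-provided flavour: on Linux, a POSIX mutex can be given the priority-inheritance protocol through the `libc` crate. This is a minimal sketch of the mechanism, not code from this PR:

```rust
use std::mem::MaybeUninit;

/// Minimal sketch: initialise `mutex` (already at its final address, since
/// pthread mutexes must not be moved once initialised) with the
/// priority-inheritance protocol. While a high-priority thread is blocked
/// on the lock, the kernel boosts the current holder to that priority,
/// bounding the inversion window.
unsafe fn init_pi_mutex(mutex: *mut libc::pthread_mutex_t) {
    let mut attr = MaybeUninit::<libc::pthread_mutexattr_t>::uninit();
    assert_eq!(libc::pthread_mutexattr_init(attr.as_mut_ptr()), 0);
    assert_eq!(
        libc::pthread_mutexattr_setprotocol(attr.as_mut_ptr(), libc::PTHREAD_PRIO_INHERIT),
        0
    );
    assert_eq!(libc::pthread_mutex_init(mutex, attr.as_ptr()), 0);
    libc::pthread_mutexattr_destroy(attr.as_mut_ptr());
}
```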
For example, patching ethercrab to use `rtsc::pi::Mutex`, we get the following!

However, I would like to reduce the number of patches I need to maintain in a fork.
This PR takes advantage of `lock_api` to expose a user-configurable PDI lock in ethercrab that defaults to the current spinlock. This way, various locking implementations (RT, non-RT, critical-section, etc.) can easily be swapped out.
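As a sketch of what such a swap could look like (illustrative only; `YieldRwLock` is a made-up type, and the exact way ethercrab threads the generic parameter through is in the diff itself), a lock that yields to the scheduler instead of spinning can be supplied by implementing `lock_api::RawRwLock`:

```rust
use core::sync::atomic::{AtomicUsize, Ordering};
use lock_api::{GuardSend, RawRwLock};

/// One writer bit in the top position; the remaining bits count readers.
const WRITER: usize = 1 << (usize::BITS - 1);

/// Hypothetical lock that yields to the scheduler while contended,
/// instead of burning CPU in a spin loop.
pub struct YieldRwLock(AtomicUsize);

unsafe impl RawRwLock for YieldRwLock {
    const INIT: Self = YieldRwLock(AtomicUsize::new(0));
    type GuardMarker = GuardSend;

    fn lock_shared(&self) {
        while !self.try_lock_shared() {
            std::thread::yield_now();
        }
    }

    fn try_lock_shared(&self) -> bool {
        let state = self.0.load(Ordering::Relaxed);
        // Succeed only if no writer holds the lock and no one raced us.
        state & WRITER == 0
            && self
                .0
                .compare_exchange(state, state + 1, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
    }

    unsafe fn unlock_shared(&self) {
        self.0.fetch_sub(1, Ordering::Release);
    }

    fn lock_exclusive(&self) {
        while !self.try_lock_exclusive() {
            std::thread::yield_now();
        }
    }

    fn try_lock_exclusive(&self) -> bool {
        // Succeed only if there are no readers and no writer.
        self.0
            .compare_exchange(0, WRITER, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }

    unsafe fn unlock_exclusive(&self) {
        self.0.store(0, Ordering::Release);
    }
}

// Any `RawRwLock` implementor slots straight into lock_api's wrapper:
pub type PdiLock<T> = lock_api::RwLock<YieldRwLock, T>;
```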