Rt Preempt Subpatch Table
Latest revision as of 03:32, 28 October 2011
Here is a table of rt-preempt sub-patches and the kernel features they introduce and affect:
{| class="wikitable"
|-
! !! colspan="8" | PATCH !!
|-
! Kernel item !! p-smp !! p-cleanup !! add-lnr !! add-crs !! *fix-scheduling* !! *keventd* !! idle-thread-p-fix !! remove-bkl !!
|-
| spinlock_t || add break_lock || . || . || . || . || . || . || . ||
|-
| rwlock_t || add break_lock || . || . || . || . || . || . || . ||
|-
| _raw_read_trylock() || create || . || . || . || . || . || . || . ||
|-
| generic_raw_read_trylock() || create || . || . || . || . || . || . || . ||
|-
| cond_resched_lock() || check for break_lock || . || . || . || add calls to || . || . || . ||
|-
| &lt;spinlock routines&gt;() (read_lock(), spin_lock_irqsave(), spin_lock_irq(), spin_lock_bh(), read_lock_irqsave(), read_lock_irq(), read_lock_bh(), write_lock_irqsave(), write_lock_irq(), write_lock_bh()) || enable irqs, check for lock break requests || . || . || . || change some *_lock() to *_lock_irq()s || . || . || . ||
|-
| need_lockbreak() || . || create || . || . || . || . || . || . ||
|-
| cond_resched() || . || fix || . || . || add calls to || . || . || . ||
|-
| lock_need_resched() || . || . || create || . || add calls to || . || . || . ||
|-
| lock/unlock_kernel() || . || . || . || . || add calls to || . || . || . ||
|-
| cond_resched_softirq() || . || . || . || . || add calls to || . || . || . ||
|-
| filemap_sync() || . || . || . || . || create replacement with cond_resched() || . || . || . ||
|-
| __filemap_sync() || . || . || . || . || create? || . || . || . ||
|-
| unmap_vmas() || . || . || . || . || change ZAP_BLOCK_SIZE || . || . || . ||
|-
| helper_init() || . || . || . || . || . || create || . || . ||
|-
| rest_init() || . || . || . || . || . || . || enable preemption || . ||
|-
| start_kernel() || . || . || . || . || . || . || disable preemption || . ||
|-
| PREEMPT_BKL || . || . || . || . || . || . || . || create ||
|-
| _smp_processor_id() || . || . || . || . || . || . || . || create ||
|-
| current_cpu_data || . || . || . || . || . || . || . || replace with cpu_data[] ||
|-
| in_atomic() || . || . || . || . || . || . || . || alters ||
|-
| nmi_enter/exit(), irq_enter() || . || . || . || . || . || . || . || alter to use add/sub_preempt_count() ||
|-
| add/sub_preempt_count() || . || . || . || . || . || . || . || create ||
|-
| release_kernel_lock() || . || . || . || . || . || . || . || create ||
|-
| lock/unlock_kernel() || . || . || . || . || . || . || . || create sem version ||
|-
| DEBUG_PREEMPT || . || . || . || . || . || . || . || create ||
|-
| smp_processor_id() || . || . || . || . || . || . || . || add debug version ||
|-
| might_sleep() || . || . || . || . || . || . || . || . || created in rtp3
|}
== Sub-patch summaries ==
* preempt-smp - spin irq-nicely and request cross-CPU lock-breaks if needed
** add break_lock field to spinlock_t and rwlock_t, and add _raw_read_trylock() function
** generic_raw_read_trylock() needs an ARCH-optimized version
* preempt-cleanup - fixes some issues with cond_resched(), and adds need_lockbreak()
* add-lock_need_resched - self-explanatory
* sched-add-cond_resched_softirq - allows some softirqs-disabled codepaths to preempt
* *fix-scheduling*, break-latency* - add cond_resched() to lots of places
** add lock_kernel() and unlock_kernel() in a few places
** adjust some routines to allow rescheduling better (dependent on PREEMPT and SMP)
* fix-keventd-execution-dependency - schedule work differently during kevent initialization
* idle-thread-preemption-fix - disable preemption during bootup (until the idle thread is running)
* remove-the-bkl-by-turning-it-into-a-semaphore - self-explanatory
** adds _smp_processor_id() to help debug incorrect usage of this routine
** adds add/sub_preempt_count(), with debug aids to detect underflows
** adds DEBUG_PREEMPT to enable both of the above
** replaces current_cpu_data with cpu_data[_smp_processor_id()]
** adds PREEMPT_BKL, and changes lock/unlock_kernel() to use a semaphore
== Terminology ==
* voluntary preempt = add preemption points to PREEMPT kernels (mostly via added calls to cond_resched())
* might_sleep = debug check placed in functions that can sleep; it warns if such a function is called from an atomic context
* latency timing = system for keeping track of preemption request vs. actual preemption occurrence
** this is used to emit warnings when a user-defined threshold is exceeded, and is useful for debugging the preemption features of this patchset
* latency trace = system for logging preemption-related events