Wednesday, February 19, 2014

Linux Kernel : Spinlocks

SPINLOCKS
A spinlock is a lock that can be held by at most one thread of execution. If a thread of execution attempts to acquire a spinlock while it is already held (the lock is said to be contended), the thread busy-loops, spinning while it waits for the lock to become available. If the lock is not contended, the thread acquires it immediately and continues. The spinning prevents more than one thread of execution from entering the critical region at any one time. The same lock can be used in multiple locations, so all access to a given data structure, for example, can be protected and synchronized.

The fact that a contended spinlock causes threads to spin (essentially wasting processor time) while waiting for the lock to become available is salient. This behavior is the point of the spinlock, so it is not wise to hold a spinlock for a long time. This is the nature of the spinlock: a lightweight single-holder lock that should be held for short durations. An alternative behavior when the lock is contended is to put the current thread to sleep and wake it up when the lock becomes available; the processor can then go off and execute other code. This incurs some overhead, most notably the two context switches required to switch out of and back into the blocking thread, which is certainly a lot more code than the handful of lines used to implement a spinlock. Therefore, it is wise to hold a spinlock for less than the duration of two context switches. Because most of us have better things to do than measure context switches, just try to hold the lock for as little time as possible.

"Spinlocks waste processor time and hence cannot be held for long durations, whereas locks that sleep and incur a context switch instead carry the extra overhead of two context switches."

The basic usage of a spinlock looks like this:

spin_lock(&mr_lock);
/* critical region ... */
spin_unlock(&mr_lock);

The lock can be held simultaneously by at most one thread of execution. Consequently, only one thread is allowed in the critical region at a time. This provides the needed protection from concurrency on multiprocessing machines.

On uniprocessor machines, the lock itself does not exist; the spinlock functions instead serve to disable and enable kernel preemption. If the kernel is also non-preemptive, the spinlocks are not required, and so they are compiled away completely, as if they did not exist.

Warning: Spin Locks Are Not Recursive!
Unlike spin lock implementations in other operating systems and threading libraries, the Linux kernel’s spin locks are not recursive. This means that if you attempt to acquire a lock you already hold, you will spin, waiting for yourself to release the lock. But because you are busy spinning, you will never release the lock and you will deadlock. Be careful!

A variant of spinlock can be used in interrupt handlers, whereas semaphores/mutexes cannot, because they sleep when contended and sleeping in an interrupt handler is not allowed; an interrupt handler instead spins until the lock becomes available. It is important to understand that this variant must disable interrupts on the local CPU before the lock is used. Otherwise, an interrupt handler on the same CPU could interrupt the lock holder and spin forever, waiting for a lock that the interrupted code can never release. If an interrupt occurs on a different CPU at that moment and spins on the same lock, there is no problem: it does not prevent the lock holder on the other CPU from running and releasing the lock.

spin_lock_irqsave(&mr_lock, flags);
/* critical region ... */
spin_unlock_irqrestore(&mr_lock, flags);


Another variant of spinlock, which should generally be avoided:
If you always know beforehand that interrupts are initially enabled, there is no need to restore their previous state. You can unconditionally enable them on unlock.

In those cases, spin_lock_irq() and spin_unlock_irq() are optimal:


DEFINE_SPINLOCK(mr_lock);

spin_lock_irq(&mr_lock);
/* critical section ... */
spin_unlock_irq(&mr_lock);

As the kernel grows in size and complexity, it is increasingly hard to ensure that interrupts are always enabled in any given code path in the kernel. Use of spin_lock_irq() is therefore not recommended. If you do use it, you had better be positive that interrupts were originally enabled.


Note: On uniprocessor machines the lock does not exist for any variant of spinlock, and in a non-preemptive, uniprocessor environment the whole spinlock functionality is compiled away as if it did not exist.

Debugging Spinlocks
The configure option CONFIG_DEBUG_SPINLOCK enables a handful of debugging checks in the spinlock code. For example, with this option the spinlock code checks for the use of uninitialized spinlocks and for unlocking a lock that is not yet locked. When testing your code, you should always run with spinlock debugging enabled. For additional debugging of lock life cycles, enable CONFIG_DEBUG_LOCK_ALLOC.

Whenever you write kernel code, you should ask yourself these questions:

1. Is the data global? Can a thread of execution other than the current one access it?


2. Is the data shared between process context and interrupt context? Is it shared between two different interrupt handlers?


3. If a process is preempted while accessing this data, can the newly scheduled process access the same data?

4. Can the current process sleep (block) on anything? If it does, in what state does that leave any shared data?

5. What prevents the data from being freed out from under me?

6. What happens if this function is called again on another processor?


7. Given the preceding points, how am I going to ensure that my code is safe from concurrency?
