Mutual Exclusion

Mutual exclusion refers to the requirement that no two concurrent processes be in their critical section at the same time; it is a basic requirement of concurrency control and exists to prevent race conditions. Here, a critical section is a period during which a process accesses a shared resource, such as shared memory.

A simple example of why mutual exclusion matters in practice can be visualized using a singly linked list shared between multiple processes (see Figure 1). In such a list, a node is removed by changing the “next” pointer of its predecessor to point to its successor (e.g., if node i is being removed, the “next” pointer of node i − 1 is changed to point to node i + 1). Now suppose two processes simultaneously remove two adjacent nodes, i and i + 1, neither of which is the head or the tail. One process sets the “next” pointer of node i − 1 to node i + 1, while the other sets the “next” pointer of node i to node i + 2. Both removal operations appear to complete successfully, yet node i + 1 remains in the list: node i − 1 now points to node i + 1, skipping node i, and it was node i whose “next” pointer recorded the removal of node i + 1. This problem, normally called a race condition, can be avoided by requiring mutual exclusion, so that simultaneous updates to the same part of the list cannot occur.
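One way to avoid the race is to serialize the pointer updates. The sketch below is a minimal illustration in C, assuming a hypothetical node structure and a single POSIX mutex that guards the whole list; finer-grained schemes (per-node or hand-over-hand locking) exist, but the point here is simply the critical section around the pointer rewiring.

    #include <pthread.h>
    #include <stdlib.h>

    /* A minimal singly linked list node (names are illustrative only). */
    struct node {
        int value;
        struct node *next;
    };

    /* One mutex guards the whole list, so only one thread at a time can
     * be in the critical section that rewires "next" pointers.          */
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Remove the node that follows prev.  Without the lock, two threads
     * removing adjacent nodes could interleave exactly as described
     * above and leave a logically deleted node reachable in the list.   */
    void remove_after(struct node *prev)
    {
        pthread_mutex_lock(&list_lock);      /* enter critical section */
        struct node *victim = prev->next;
        if (victim != NULL) {
            prev->next = victim->next;       /* unlink the node */
            free(victim);
        }
        pthread_mutex_unlock(&list_lock);    /* leave critical section */
    }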

Some software solutions exist that use busy waiting to achieve mutual exclusion. Examples include the following:

  • Dekker’s algorithm;
  • Peterson’s algorithm;
  • Lamport’s bakery algorithm;
  • Szymanski’s algorithm;
  • Taubenfeld’s black-white bakery algorithm.

These algorithms do not work correctly on platforms that reorder memory operations, such as those with out-of-order execution. On such platforms, programmers must enforce strict ordering of the memory operations within a thread, for example with memory barriers.
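As an illustration, here is a sketch of Peterson’s algorithm for two threads, written in C. It assumes a C11 compiler with <stdatomic.h>; the default sequentially consistent ordering of the atomic operations supplies the strict memory ordering the algorithm relies on, and the function and variable names are illustrative only.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Peterson's algorithm for two threads with ids 0 and 1. */
    static atomic_bool flag[2];   /* flag[i]: thread i wants to enter        */
    static atomic_int  turn;      /* which thread must wait if both want in  */

    void peterson_lock(int self)
    {
        int other = 1 - self;
        atomic_store(&flag[self], true);   /* announce intent to enter        */
        atomic_store(&turn, other);        /* give priority to the other side */
        /* Busy-wait while the other thread wants in and has priority. */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                              /* spin */
    }

    void peterson_unlock(int self)
    {
        atomic_store(&flag[self], false);  /* leave the critical section */
    }

Thread 0 wraps its critical section in peterson_lock(0) / peterson_unlock(0), and thread 1 uses index 1. With plain non-atomic variables on a machine that reorders stores past loads, both threads could enter at once, which is exactly the failure mode described above.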

It is often preferable to use the synchronization facilities provided by an operating system’s multithreading library, which take advantage of hardware support where possible and fall back on software solutions where none exists. For example, when a thread tries to acquire a lock that is already held, the operating system can suspend the thread with a context switch and swap in another thread that is ready to run, or put the processor into a low-power state if no runnable thread exists. Most modern mutual exclusion methods therefore try to reduce latency and busy-waiting by using queuing and context switches. However, if the time spent suspending and later restoring a thread can be shown to always exceed the time the thread would otherwise spend waiting for the lock in a particular situation, then spinlocks are an acceptable solution for that situation only.
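For comparison, the following is a minimal sketch of that usual approach, using a POSIX mutex. The counter workload and names are illustrative only; the point is that a contended pthread_mutex_lock blocks the calling thread so the scheduler can run other work instead of spinning.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    /* Each worker increments the shared counter many times.  If the lock
     * is already held, pthread_mutex_lock suspends the thread instead of
     * busy-waiting, and the thread is woken when the lock becomes free.  */
    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&counter_lock);
            counter++;                          /* critical section */
            pthread_mutex_unlock(&counter_lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);     /* always 2000000 */
        return 0;
    }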

The solutions explained above can be used to build the synchronization primitives below (a short semaphore example follows the list):

  • locks;
  • readers–writer locks;
  • recursive locks;
  • semaphores;
  • monitors;
  • message passing;
  • tuple space.
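As one concrete example from the list above, the sketch below uses a POSIX counting semaphore to bound how many threads may use a shared resource at once; the MAX_CONNECTIONS limit and the worker body are hypothetical, chosen only for illustration.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define MAX_CONNECTIONS 4    /* hypothetical limit, for illustration */

    static sem_t slots;          /* counting semaphore: free "slots" */

    /* Each worker must acquire a slot before touching the shared resource,
     * so at most MAX_CONNECTIONS workers are inside at any moment.         */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        sem_wait(&slots);        /* P operation: acquire a slot   */
        printf("worker %ld using the shared resource\n", id);
        sem_post(&slots);        /* V operation: release the slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[8];
        sem_init(&slots, 0, MAX_CONNECTIONS);   /* start with 4 free slots */
        for (long i = 0; i < 8; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (int i = 0; i < 8; i++)
            pthread_join(threads[i], NULL);
        sem_destroy(&slots);
        return 0;
    }

Initializing the semaphore to 1 instead gives a binary semaphore, which behaves like a simple lock.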

Many forms of mutual exclusion have side-effects. For example, classic semaphores permit deadlocks, in which one process gets a semaphore, another process gets a second semaphore, and then both wait forever for the other semaphore to be released. Other common side-effects include starvation, in which a process never gets sufficient resources to run to completion; priority inversion, in which a higher priority thread waits for a lower-priority thread; and high latency, in which response to interrupts is not prompt.
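The deadlock just described is easy to reproduce. In the sketch below (names illustrative, and using POSIX mutexes rather than semaphores, though the pattern is identical), two threads acquire the same two locks in opposite orders, so running the program may hang. Acquiring the locks in a single global order removes the circular wait and hence the deadlock.

    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    /* Thread 1 takes A then B; thread 2 takes B then A.  If each thread
     * acquires its first lock before the other releases, both block forever. */
    static void *thread1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);   /* may wait forever for thread2 */
        /* ... critical section ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    static void *thread2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);
        pthread_mutex_lock(&lock_a);   /* may wait forever for thread1 */
        /* ... critical section ... */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);        /* may never return: the deadlock */
        pthread_join(t2, NULL);
        return 0;
    }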

Much research is aimed at eliminating the above effects, often with the goal of guaranteeing non-blocking progress, but no perfect scheme is known. Historically, blocking system calls put an entire process to sleep; until such calls became thread-safe, there was no proper mechanism for putting a single thread within a process to sleep (see polling).
