How Do Race Conditions Actually Happen?

Dec 14, 2024

4 minutes

In concurrent programming, the first challenge we usually notice is multiple threads accessing shared resources. Consider the simple one-line statement below:

counter++;

When we live in the wonderland of abstractions, this simple line of code might seem atomic—consisting of a single operation—but it's not. To understand why, we need to pull the curtains aside and go one level deeper into the abstraction layers by looking at the assembly code generated for this line.

Breaking Down counter++

At the assembly level, counter++ translates into something like this:

MOV EAX, [counter]   ; Load the value of counter into the register EAX
ADD EAX, 1           ; Increment the value in EAX by 1
MOV [counter], EAX   ; Store the updated value from EAX back into memory

When a single thread runs the code above, it executes each instruction sequentially, which won't lead to any problems. In a multithreaded environment, however, more than one thread may execute this sequence simultaneously, leading to a race condition. By simultaneously, we don't necessarily mean that two threads execute each instruction at the exact same instant; it's enough that their instructions interleave closely in time. Consider two threads, Thread A and Thread B, both executing counter++ concurrently. Here’s how a race condition can occur:

  1. Thread A loads the value of counter into its register EAX (let's assume the initial value of counter is 3).
  2. Thread B loads the value of counter into its own copy of EAX (each thread has its own register context, so it also reads 3).
  3. Thread A increments the value in its register, updating it to 4.
  4. Thread B increments the value in its register, also updating it to 4.
  5. Thread A writes the value 4 back into memory.
  6. Thread B writes the value 4 back into memory, overwriting Thread A's update.

The final value of counter in memory will then be 4, even though two increments were performed. If you're allowing multiple threads to change the value of counter, this result is almost certainly not what you want: one increment was lost. As mentioned before, this operation is not atomic; the CPU executes each instruction separately, and between any two of them the operating system's scheduler can switch context to another thread (or another core can run the same sequence in parallel). Therefore, if we want to retain all the increments and guarantee no loss of information, we need a strategy to avoid race conditions.
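
To see this in action, here is a minimal C++ sketch (the thread variables, function name, and iteration count are arbitrary choices for illustration) in which two threads increment a plain int with no synchronization. Run it a few times and the final value will often be smaller than the expected 200000, because increments get lost exactly as described above.

#include <iostream>
#include <thread>

int counter = 0;                      // shared, unsynchronized

void increment_many() {
    for (int i = 0; i < 100000; ++i) {
        counter++;                    // non-atomic read-modify-write
    }
}

int main() {
    std::thread a(increment_many);    // Thread A
    std::thread b(increment_many);    // Thread B
    a.join();
    b.join();

    // Expected 200000; lost updates typically leave a smaller value.
    std::cout << "counter = " << counter << "\n";
    return 0;
}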

Preventing Race Conditions

In simple terms, we need to make sure the entire sequence of operations—read, increment, write—runs atomically. Here are some common solutions:

  1. Locks: Using a mutex or other synchronization primitive to ensure only one thread can execute the critical section at a time.
    pthread_mutex_lock(&mutex);
    counter++;
    pthread_mutex_unlock(&mutex);
    
  2. Atomic Instructions: Modern CPUs provide atomic read-modify-write instructions, such as x86's LOCK-prefixed INC or XADD:
    LOCK INC [counter]   ; Atomically increment the value of counter
    
  3. High-Level Atomic Operations: Using atomic operations provided by your programming language or library, such as std::atomic in C++ or AtomicInteger in Java (a fuller sketch follows after this list).
    std::atomic<int> counter{0};
    counter++;
    
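Applying option 3 to the earlier sketch only requires changing the counter's type; here is a minimal illustration (the mutex approach from option 1 works the same way, with a std::mutex locked around the increment instead). With std::atomic, every increment is retained and the program always prints 200000.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> counter{0};          // atomic counter shared by both threads

void increment_many() {
    for (int i = 0; i < 100000; ++i) {
        counter++;                    // atomic read-modify-write (typically a LOCK-prefixed instruction on x86)
    }
}

int main() {
    std::thread a(increment_many);
    std::thread b(increment_many);
    a.join();
    b.join();

    // Always 200000: no increments are lost.
    std::cout << "counter = " << counter << "\n";
    return 0;
}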

Conclusion

We saw that even simple one-line operations in higher-level languages, such as counter++, are not atomic, and how they can lead to race conditions. At the assembly level, these simple operations break down into multiple instructions, and another thread can be scheduled between any two of them. By employing thread-safety strategies and atomic operations, we can ensure consistency and eliminate race conditions. I hope this article has given you a new perspective and a deeper understanding of the nature of race conditions.

Tags:
Concurrency
