John and Mary both walk into work. Let’s consider both John and Mary to be a “thread”. We’ll define a “thread” to simply be a sequence of consecutive actions. In this case, let’s say that John is a “thread”, and Mary is also a “thread”, because they can both execute actions in parallel. If John is working on a spreadsheet, Mary does not have to wait for John to finish before she can do her own tasks. They are both independent entities.

Now, John loves donuts. Fortunately for John, his company always stocks donuts in the company fridge. John's thought process is quite simple, and looks like this:

```
if John sees at least one donut in the fridge
then John washes his hands
then John grabs a donut and eats it
```

And this is fine and dandy. When John is alone, this course of logic always works. He sees a donut in the fridge, he washes his hands, and he eats it. It always works without fail.
However, one day, Mary walks into the room. And Mary also likes donuts. When John looks into the fridge and sees that there is exactly one donut left, he hurriedly goes to wash his hands. However, when he returns, he finds that the donut is gone! But that shouldn’t have been possible. John had checked that at least one donut existed, and it did, but now it’s suddenly gone.
So what happened? It turns out that Mary had taken the donut in the time John spent washing his hands. This is called a race condition, and in a real-world program, the equivalent bug would make your program return the wrong result or outright crash.
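John and Mary's donut race can be reproduced in a few lines of Python. This is a sketch using the standard `threading` module, with a `sleep` standing in for hand-washing:

```python
import threading
import time

fridge = {"donuts": 1}  # shared resource: one donut left

def eat_donut(name):
    if fridge["donuts"] >= 1:     # "I see a donut in the fridge"
        time.sleep(0.1)           # washing hands: the race window
        fridge["donuts"] -= 1     # "I grab the donut" -- but is it still there?
        print(name, "took a donut")

threads = [threading.Thread(target=eat_donut, args=(n,)) for n in ("John", "Mary")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Donuts left:", fridge["donuts"])  # -1: both threads passed the check
```

Both threads see one donut, both wash their hands, and both "take" it, leaving the fridge at minus one donut. The check and the grab together form a check-then-act pattern, and any pause between them is a window for another thread to sneak in.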

While it may be true that John’s logic always succeeds when he is alone (single-threaded), the same cannot be said when Mary is with him (multithreaded). This is because when objects or resources are shared between multiple threads, multiple different threads are able to modify the same resource at the same time. In this case, the fridge is the object/resource, and both John and Mary are able to modify the contents of the fridge at any time.
This is a huge problem, because now, functions that had been unit tested or proven to work on a single thread no longer work when multithreaded. As shown above, Mary can, at any given time, swipe the last remaining donut out of the fridge before John gets it. However, if John gets the donut first, then Mary won't be able to get the donut. There are two different possibilities, based on two different event interleavings. When the ordering of parallel instructions is important to get the right result, you have a race condition.
So how can John prevent Mary from modifying the contents of the fridge until he is done using it? The answer is fairly simple: when John accesses the fridge, he needs to make sure that he is the only one who has control of it until he is done with whatever he is doing. There are multiple ways to do this, but the most common is to use something called a "lock" (also called a mutex).
This new multithreaded system with locks has three simple rules:
1. Whoever controls the lock will have sole control over the resource for as long as they possess the lock.
2. A resource should only have one lock.
3. A thread that finds an object locked can either give up immediately (for example, by throwing an exception) or wait for the lock to be released. If it waits, the lock is then handed over according to some priority order, or a new owner is chosen arbitrarily.
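In code, a lock lets John hold the fridge for the whole check-wash-grab sequence. Here is a minimal sketch using Python's `threading.Lock`, with exactly one lock for the one fridge, per rule two:

```python
import threading
import time

fridge = {"donuts": 1}
fridge_lock = threading.Lock()    # rule two: exactly one lock for this resource

def eat_donut(name):
    with fridge_lock:             # rule one: sole control while the lock is held
        if fridge["donuts"] >= 1:
            time.sleep(0.1)       # washing hands, now safely inside the lock
            fridge["donuts"] -= 1
            print(name, "took the donut")
        else:
            print(name, "found the fridge empty")

threads = [threading.Thread(target=eat_donut, args=(n,)) for n in ("John", "Mary")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Donuts left:", fridge["donuts"])  # always 0, never -1
```

Whoever grabs the lock first gets the donut; the other thread waits, then finds the fridge empty. The count can never go negative.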
Let's look at why each of these rules holds.

Rule Number One

"Whoever controls the lock will have sole control over the resource for as long as they possess the lock."
The first rule is true because that is how a lock is defined. But for a more intuitive approach, you can think of the lock as “binding” itself to its owner. As long as the owner of the lock does not relinquish control, the object will forever be locked. This also means that if a thread controlling a locked object gets stuck in an infinite loop, that locked object will be locked forever.
Of course, while you could manually unlock the object, there is no way to determine in advance whether an object will be locked forever, because you can't (in general) determine whether a program is stuck in an infinite loop (see the halting problem). You could, however, add a timeout to the lock, although this does have its own set of problems that will not be discussed in this article.
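In Python, for example, `Lock.acquire` takes an optional timeout, so a thread can give up instead of waiting forever on a lock that may never be released. In this sketch, a ten-second sleep stands in for a thread that is stuck while holding the lock:

```python
import threading
import time

lock = threading.Lock()

def stuck_thread():
    with lock:
        time.sleep(10)        # simulates a thread that never lets go of the lock

threading.Thread(target=stuck_thread, daemon=True).start()
time.sleep(0.1)               # let the stuck thread grab the lock first

got_it = lock.acquire(timeout=0.5)   # wait at most half a second
if got_it:
    lock.release()
print("acquired the lock" if got_it else "gave up after the timeout")
```

After half a second, `acquire` returns `False` and the thread can report an error or retry, rather than hanging forever.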

Rule Number Two

“A resource should only have one lock.”
Suppose a specific resource were guarded by two locks. Then two different threads could each grab ownership of a different lock and access the resource at the same time, each one convinced it has exclusive access. Unfortunately, this is essentially no different from having no lock at all. We are back to the exact same problem that we initially started with!
This means that a resource must have one and only one lock, and only one owner for that lock at a time. Having two locks on the same resource makes it less safe, not more.
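To see the problem concretely, here is the donut race again, except each thread dutifully takes its own lock. Since the locks are different, both threads end up inside the critical section at once (a Python sketch):

```python
import threading
import time

fridge = {"donuts": 1}
john_lock = threading.Lock()   # one lock on the fridge...
mary_lock = threading.Lock()   # ...and a second lock on the same fridge

def eat_donut(name, lock):
    with lock:                     # each thread holds "its" lock the whole time
        if fridge["donuts"] >= 1:
            time.sleep(0.1)
            fridge["donuts"] -= 1  # yet both threads are in here together

threads = [threading.Thread(target=eat_donut, args=("John", john_lock)),
           threading.Thread(target=eat_donut, args=("Mary", mary_lock))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Donuts left:", fridge["donuts"])  # -1 again: two locks protected nothing
```

Even though every access to the fridge happens "under a lock", the race is exactly the same as with no locks at all.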

Rule Number Three

"A thread that finds an object locked can either give up immediately (for example, by throwing an exception) or wait for the lock to be released. If it waits, the lock is then handed over according to some priority order, or a new owner is chosen arbitrarily."
In general, it is bad practice to have your code throw an exception whenever it attempts to access a locked object. If fairness is important, you can maintain a queue for the locked resource, like a wait list, so that if three threads wanted to access an object, they would gain access in the order in which they requested the lock.
For example, the execution order might look like this:

1. Thread 1 calls for the object. It is not locked, so Thread 1 gains control of the lock.
2. Thread 2 calls for the object. It is locked. Thread 2 is put on the queue (wait list).
3. Thread 1 performs some operation using the object.
4. Thread 1 relinquishes control of the lock.
5. Thread 3 calls for the object. The lock is being released at this exact moment, but because there is a queue for this object, the lock goes to the first element in the queue, which is Thread 2. Thread 3 is now put on the queue.
6. Thread 2 performs some operation using the object.
7. Thread 2 relinquishes control of the lock.
8. Thread 3 automatically gains control of the lock, as Thread 3 is next on the queue.
9. Thread 3 performs some operation using the object.
10. Thread 3 relinquishes control of the lock.
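The wait-list behavior above can be sketched as a toy first-come-first-served lock in Python. `FairLock` is an illustrative name, not a standard library class; each waiter gets an `Event` as its ticket in the queue:

```python
import threading
import time
from collections import deque

class FairLock:
    """Toy FIFO lock: waiters are served in arrival order. A sketch, not production code."""
    def __init__(self):
        self._mutex = threading.Lock()  # protects the wait list itself
        self._queue = deque()           # the wait list from rule three
        self._held = False

    def acquire(self):
        with self._mutex:
            if not self._held and not self._queue:
                self._held = True       # lock was free: take it immediately
                return
            ticket = threading.Event()  # our place in line
            self._queue.append(ticket)
        ticket.wait()                   # block until the lock is handed to us

    def release(self):
        with self._mutex:
            if self._queue:
                self._queue.popleft().set()  # hand off to the next waiter in line
            else:
                self._held = False

order = []
lock = FairLock()
lock.acquire()                          # the main thread holds the lock first

def worker(i):
    lock.acquire()
    order.append(i)
    lock.release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
    time.sleep(0.05)                    # stagger arrivals so the queue order is 0, 1, 2
lock.release()                          # hand off to the first waiter
for t in threads:
    t.join()
print(order)                            # [0, 1, 2]: first come, first served
```

Note that `release` hands the lock directly to the next ticket in line, so a latecomer can never jump the queue the way Thread 3 tried to in step 5 above.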

As you can see, when multiple threads want access to a locked resource, access can be granted in first-come-first-served order. However, this does not necessarily mean that Thread 2 just has to twiddle its thumbs and do nothing. Thread 2 does not have to be blocked while it waits; it can do other computations until it gains access to the locked object.
This is important for when the desired operation could take a long time. For example, imagine if Thread 1 wanted access to an object controlling the webcam for your computer. Thread 1 could be using this object for a long time, and if Thread 2 and Thread 3 both sat idly doing nothing, we would experience a severe loss in performance.
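One way to keep busy while waiting, in Python, is a non-blocking acquire: `lock.acquire(blocking=False)` returns immediately instead of waiting, so the thread can do useful work between attempts. In this sketch, a short sleep stands in for a long webcam operation:

```python
import threading
import time

lock = threading.Lock()

def use_webcam():
    with lock:
        time.sleep(0.3)   # a long operation, like reading from the webcam

threading.Thread(target=use_webcam, daemon=True).start()
time.sleep(0.05)          # let the other thread grab the lock first

work_done = 0
while not lock.acquire(blocking=False):  # try the lock without waiting
    work_done += 1                       # do useful work instead of idling
    time.sleep(0.01)
try:
    print("got the lock after", work_done, "chunks of other work")
finally:
    lock.release()
```

Each failed attempt costs almost nothing, so the thread spends the wait doing real work instead of sitting idle.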
Alternatively, when the lock gets freed, it can be handed to an arbitrary waiting thread. By default, most locks are unfair: after being unlocked, ownership goes to whichever waiting thread the scheduler happens to wake first, not necessarily the one that has waited the longest. The reason for this is performance: a mutex that does not have to maintain an ordering of waiting threads can simply hand the lock to the first thread that grabs it.

Conclusion

In this article, you’ve learned that multithreading is difficult because if two threads access a resource at the same time, there’s no guarantee that you will get the correct behavior. Without locks, you’ll have to deal with race conditions, which are incredibly hard to debug.
To fix this problem, you simply have to use a lock, which will prevent other threads from accessing the resource. When a thread is done performing operations with a locked object, it relinquishes the lock, and the next thread gets control of the lock.
In the next Simply Explained post, I'll be talking about Futures, a programming construct used to obtain a value that does not yet exist (for example, because the object you wanted to use to get that value was locked). Futures allow you to write non-blocking code: after you create a Future, your thread can do other tasks until the Future fires a callback telling you that the value now exists.