Deadlock prevention algorithms
Name | Coffman condition addressed | Patented | Description |
---|---|---|---|
Banker's algorithm | Mutual Exclusion | N/A | The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra: a resource request is granted only if granting it leaves the system in a safe state, i.e. one in which every process can still run to completion (see the first sketch below the table). |
Preventing Recursive Locks | Mutual Exclusion | No | This prevents a single thread from entering the same lock more than once (see the second sketch below the table). |
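The heart of the Banker's algorithm is its safety check. The following is a minimal sketch, not a definitive implementation; the function name and the `available`/`allocation`/`need` variable names follow the usual textbook presentation and are assumptions made for this example.

```python
def is_safe_state(available, allocation, need):
    """Banker's algorithm safety check (sketch).

    available:  free units of each resource type
    allocation: allocation[i][j] = units of resource j currently held by process i
    need:       need[i][j] = units of resource j that process i may still request

    Returns True if some ordering lets every process run to completion.
    """
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish with what is currently free;
                # assume it does and returns everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)


# Two processes, one resource type: each holds 2 units and may still need 4 more.
print(is_safe_state([3], [[2], [2]], [[4], [4]]))  # False: neither process can finish
print(is_safe_state([4], [[2], [2]], [[4], [4]]))  # True: they can finish one at a time
```

A request is granted only if the state that would result from granting it still passes this check; otherwise the requesting process waits.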
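For the "Preventing Recursive Locks" entry, one way to picture the mechanism is a lock that remembers its owning thread and fails fast on re-entry instead of self-deadlocking. A minimal sketch follows; the class and method names are illustrative, not a standard API.

```python
import threading

class NonRecursiveLock:
    """A lock that refuses re-entry by the thread already holding it,
    turning a would-be self-deadlock into an immediate error."""

    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None  # identity of the thread currently holding the lock

    def acquire(self):
        me = threading.get_ident()
        if self._owner == me:
            # Acquiring again would block forever on our own lock; fail fast instead.
            raise RuntimeError("recursive lock acquisition detected")
        self._lock.acquire()
        self._owner = me

    def release(self):
        self._owner = None
        self._lock.release()
```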
The content below needs to be migrated into the table above.
Distributed deadlock
Distributed deadlocks can occur in distributed systems when distributed transactions or concurrency control is being used. Distributed deadlocks can be detected either by constructing a global wait-for graph from local wait-for graphs at a deadlock detector, or by a distributed algorithm such as edge chasing.
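As an illustration of the centralized approach, the sketch below merges per-site wait-for graphs and looks for a cycle, which indicates a deadlock. The dictionary representation (each waiting transaction mapped to the transactions it waits for) and the function names are assumptions made for this example.

```python
def merge_wait_for_graphs(local_graphs):
    """Combine the wait-for graphs reported by each site into one global graph."""
    global_graph = {}
    for graph in local_graphs:
        for waiter, holders in graph.items():
            global_graph.setdefault(waiter, set()).update(holders)
    return global_graph

def has_deadlock(graph):
    """A cycle in the global wait-for graph means a (distributed) deadlock."""
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if nxt in visiting:
                return True               # back edge: a cycle was found
            if nxt not in done and dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(node) for node in graph if node not in done)

# Site A reports that T1 waits for T2; site B reports that T2 waits for T1.
local_graphs = [{"T1": {"T2"}}, {"T2": {"T1"}}]
print(has_deadlock(merge_wait_for_graphs(local_graphs)))  # True
```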
In a commitment ordering-based distributed environment (including the strong strict two-phase locking (SS2PL, or rigorous) special case), distributed deadlocks are resolved automatically by the atomic commitment protocol (such as two-phase commit (2PC)), and no global wait-for graph or other resolution mechanism is needed. Similar automatic global deadlock resolution also occurs in environments that employ 2PL that is not SS2PL (and typically not CO; see Deadlocks in 2PL). However, 2PL that is not SS2PL is rarely used in practice.
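One way to picture why no separate resolution mechanism is needed: a participant caught in a distributed deadlock is blocked and never returns its vote, so the coordinator's vote timeout forces an abort, which releases that participant's locks and breaks the deadlock. The sketch below is a simplified, hypothetical coordinator; the `Participant` interface, its `prepare`/`finish` methods, and the timeout value are assumptions made for this illustration.

```python
import queue, threading

class Participant:
    """Hypothetical participant stub; a real one would prepare a local transaction."""
    def __init__(self, name, vote="yes"):
        self.name, self.vote = name, vote
    def prepare(self):
        return self.vote                      # a deadlocked participant would block here
    def finish(self, decision):
        print(f"{self.name}: {decision}")     # commit or abort, releasing its locks

def two_phase_commit(participants, vote_timeout=5.0):
    """Coordinator side of 2PC (sketch): commit only if every vote arrives in time."""
    votes = queue.Queue()
    for p in participants:
        threading.Thread(target=lambda p=p: votes.put(p.prepare()), daemon=True).start()
    try:
        decision = "commit" if all(
            votes.get(timeout=vote_timeout) == "yes" for _ in participants
        ) else "abort"
    except queue.Empty:
        decision = "abort"                    # a missing vote (e.g. a deadlocked site) aborts
    for p in participants:
        p.finish(decision)
    return decision

print(two_phase_commit([Participant("site A"), Participant("site B")]))  # commit
```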
Phantom deadlocks are deadlocks that are detected in a distributed system because of internal system delays, but that no longer actually exist at the time of detection.
Deadlock prevention
There are many ways to increase parallelism where recursive locks would otherwise cause deadlocks, but there is a price: performance/overhead, allowing data corruption, or both.
Some examples include: lock reference-counting and preemption (either using versioning or allowing data corruption when preemption occurs); wait-for graph (WFG) [1] algorithms, which track all cycles that cause deadlocks (including temporary deadlocks); and heuristic algorithms, which do not increase parallelism everywhere a deadlock is possible but instead compromise by resolving deadlocks in enough places that the trade-off between performance/overhead and parallelism is acceptable.
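As an illustration of the first example, lock reference-counting lets the owning thread re-acquire a lock it already holds by tracking a nesting count instead of blocking on itself; this is essentially what re-entrant locks such as Python's threading.RLock provide. A minimal sketch follows; the class and method names are illustrative, not a standard API.

```python
import threading

class RefCountingLock:
    """Lock reference counting (sketch): the owning thread may re-acquire the
    lock, and a counter tracks nesting depth so the lock is only released
    when the outermost acquire is undone."""

    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None
        self._count = 0

    def acquire(self):
        me = threading.get_ident()
        if self._owner == me:
            self._count += 1      # nested acquire: just bump the reference count
            return
        self._lock.acquire()
        self._owner, self._count = me, 1

    def release(self):
        assert self._owner == threading.get_ident(), "only the owner may release"
        self._count -= 1
        if self._count == 0:      # outermost release: actually free the lock
            self._owner = None
            self._lock.release()
```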