Distributed concurrency control is a strategy that spreads responsibility for concurrency control across an entire network. Concurrency refers to multiple users or processes working with the same data at the same time; concurrency control is the set of techniques that keeps shared files consistent while that happens. Once computers are networked together, concurrency becomes a central concern, because multiple users can have simultaneous access to any authorized files and folders on the system. Without concurrency control, these files could easily become inconsistent from one computer to the next as users change and manipulate data in real time, and everyone would quickly lose the ability to rely on network files as changes take place. Concurrency control keeps files consistent across the entire network, avoiding this problem.
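To see the problem concretely, consider a minimal Python sketch of what happens without any concurrency control: two users apply updates to the same shared value at the same time, and some of their changes are silently lost. The shared dictionary and iteration counts here are purely illustrative.

```python
import threading

# A shared "file" represented as a simple counter. Two users apply
# updates concurrently with no concurrency control at all.
shared = {"value": 0}

def apply_updates():
    for _ in range(100_000):
        # Read-modify-write with no lock: another thread can read the
        # same old value in between, so one of the two updates is lost.
        current = shared["value"]
        shared["value"] = current + 1

threads = [threading.Thread(target=apply_updates) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With proper concurrency control the result would always be 200000;
# without it, lost updates typically leave the total short.
print(shared["value"])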
The primary advantage of distributed concurrency control is that it spreads the workload of concurrency control across multiple computers, reducing the overhead on each. Without it, enforcing concurrency control could easily become a full-time job for a single computer, rendering that machine useless for anything else. With distributed concurrency control, each computer on the network helps to share the workload, ensuring that end users can still use their machines for other network tasks.
Strong strict two-phase locking is one of the most common types of distributed concurrency control. In strong strict two-phase locking, as soon as a network file is accessed, it is locked against both read and write operations by anyone else, and the lock is held until the transaction finishes. This means that only one user on the network can change a file at a time, making it impossible for the file to become inconsistent across the network. Once the end user commits the changes by saving the file or exiting it altogether, all of the locks are released at once, allowing another user on the system to work with the file.
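The discipline can be sketched in a few lines of Python. This is an illustration under simplifying assumptions, not a production lock manager: the LockManager and Transaction classes, the file names, and the exclusive-only locks are all invented for the example, but the shape matches the description above, in that every lock acquired during the transaction is held until commit and then released together.

```python
import threading

class LockManager:
    """Hands out one exclusive lock per file (illustrative only)."""
    def __init__(self):
        self._locks = {}                # file_id -> threading.Lock
        self._guard = threading.Lock()  # protects the lock table itself

    def acquire(self, file_id):
        with self._guard:
            lock = self._locks.setdefault(file_id, threading.Lock())
        lock.acquire()                  # blocks until the holder commits
        return lock

class Transaction:
    """Strong strict two-phase locking: locks are taken on first
    access and all released together only at commit time."""
    def __init__(self, manager):
        self._manager = manager
        self._held = {}                 # file_id -> lock we already hold

    def _lock(self, file_id):
        if file_id not in self._held:   # don't re-acquire our own lock
            self._held[file_id] = self._manager.acquire(file_id)

    def read(self, files, file_id):
        self._lock(file_id)
        return files[file_id]

    def write(self, files, file_id, value):
        self._lock(file_id)
        files[file_id] = value

    def commit(self):
        # The "shrinking phase" happens all at once, only now.
        for lock in self._held.values():
            lock.release()
        self._held.clear()

# Usage: the lock on "report.txt" is held from first access to commit.
files = {"report.txt": "v1"}
manager = LockManager()
txn = Transaction(manager)
txn.write(files, "report.txt", "v2")
txn.commit()
```

Because no lock is released before commit, no other transaction can ever see a file in a half-updated state, which is exactly the consistency guarantee described above.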
One of the biggest disadvantages of strong strict two-phase locking is the additional overhead it places on network resources. Every file locked by every user must be recorded by the network as locked, and that record must be kept in memory until the lock is released. In the aggregate, with hundreds of end users holding locks on hundreds of files at the same time, this bookkeeping can consume a significant portion of the network's memory, which can slow down networks running on inefficient or outdated hardware.
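A back-of-the-envelope Python sketch shows how this bookkeeping adds up. The user and file counts, and the fields kept per lock record, are assumptions chosen only to make the arithmetic concrete.

```python
import sys

class LockRecord:
    """One bookkeeping entry per held lock (fields are illustrative)."""
    __slots__ = ("file_id", "owner", "mode")
    def __init__(self, file_id, owner, mode):
        self.file_id, self.owner, self.mode = file_id, owner, mode

users = 300            # assumed number of active end users
files_per_user = 150   # assumed number of files each user has locked

table = [LockRecord(f"file-{u}-{f}", f"user-{u}", "exclusive")
         for u in range(users) for f in range(files_per_user)]

per_record = sys.getsizeof(table[0])  # object header only; strings add more
total_mb = len(table) * per_record / 1_000_000
print(f"{len(table):,} lock records at ~{per_record} bytes each, "
      f"roughly {total_mb:.1f} MB before counting the strings they reference")
```

The totals are modest on modern hardware, but the table grows with every additional user and every additional locked file, which is why the cost becomes noticeable on older or resource-constrained systems.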