Database locking intro for non-DBAs
In databases, locking is an integral part of ensuring data integrity and preventing conflicts between concurrent transactions. Locks prevent multiple transactions from accessing the same data at the same time, which could otherwise cause issues such as lost updates, dirty reads and inconsistent results.
Why do we need locks?
One commonly used term in the RDBMS (relational database management system) world is ACID, which stands for Atomicity, Consistency, Isolation, and Durability.
- Atomicity refers to the concept of all operations in a transaction being treated as a single, indivisible unit.
- Consistency ensures that the database remains in a consistent state before and after a transaction.
- Isolation refers to the ability to execute transactions concurrently without interfering with one another.
- Durability ensures that once a transaction has been committed, it will remain committed even in the event of a system failure.
In order to achieve these goals, databases use various types of locks.
- Row-level locks are used to prevent a transaction from modifying the same record as another transaction, such as when running DML statements like UPDATE, INSERT or DELETE.
- Table-level locks lock an entire table. This might be done explicitly by a DBA when running a maintenance job, or could be due to certain DDL statements such as ALTER TABLE or TRUNCATE TABLE.
- Latches are lightweight locks used to protect internal data structures. These are designed to be acquired and released very quickly but can still be a source of contention on busy systems.
Different locking modes can also be used to control how transactions interact with one another. For example, a shared lock allows multiple transactions to read the same data simultaneously, while an exclusive lock prevents other transactions from accessing the data at all.
Performance impact of locking
Despite the benefits of locking, it is important to be aware of the performance impact:
Long duration locks - In some cases a session might block one or more other sessions for a long period of time, leaving those users frustrated. For example, if one user starts a transaction to update records but then goes for their lunch break before committing or rolling back the change, other users won’t be able to update the same records until the first user returns to complete their transaction.
Short duration locks - You could also have a scenario where many sessions are blocked only for short durations, but when you add that time up over a period it can amount to a significant amount of wasted time. For example, session A could block B for a few milliseconds, then a second later B blocks C for a few milliseconds, then the next second C blocks A, and so on. Having a lot of sessions locking like this limits the scalability of your application, and the problem gets worse the more sessions you have competing with each other.
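This "many short waits add up" effect is easy to simulate. The sketch below uses plain Python threads standing in for database sessions (the names and timings are purely illustrative): eight sessions each hold a single lock for about 2 ms at a time, and the individually tiny waits accumulate into a measurable total.

```python
import threading
import time

# One lock standing in for a hot row or latch that every session needs.
lock = threading.Lock()
wait_times = []  # how long each lock acquisition had to wait

def session(work_items: int) -> None:
    for _ in range(work_items):
        start = time.perf_counter()
        with lock:  # short critical section, held for ~2 ms
            wait_times.append(time.perf_counter() - start)
            time.sleep(0.002)  # 2 ms of "work" while holding the lock

# Eight concurrent "sessions", each doing ten pieces of work.
threads = [threading.Thread(target=session, args=(10,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"total time spent waiting for the lock: {sum(wait_times) * 1000:.0f} ms")
```

No single wait here is long enough to annoy anyone, yet the total waiting time grows with the number of competing sessions, which is exactly the scalability limit described above.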
Deadlocks - There is also the potential for deadlocks, which occur when two transactions are each waiting for the other to release a lock. In this case the database engine will normally detect the deadlock and pick one of the sessions as the victim to kill, which will impact the user experience.
How to monitor database locking
Overall, locking is a crucial concept in the database world that helps to ensure data integrity and prevent conflicts between concurrent transactions. However, you should have a monitoring tool like DBmarlin which can provide the information you need to assess whether locking is impacting your database performance.
With DBmarlin you can see how much time sessions spent waiting for various types of locks to be released, and which SQL statements were involved. With v3.2 you can also see blocking trees (currently supported for Oracle and SQL Server), showing the session holding the lock and what it was doing.
New locking and blocking features in DBmarlin v3.2