Optimistic vs. Pessimistic Concurrency In .NET: Complete Guide with Examples

Introduction

Concurrency control is crucial when multiple users or processes might try to modify the same data simultaneously. In .NET applications using Entity Framework Core, we have two primary strategies for managing concurrency: optimistic and pessimistic. This article explores both approaches with practical, runnable examples that demonstrate real-world implementation.

Optimistic Concurrency

Optimistic concurrency is a strategy for managing simultaneous access to shared data that assumes conflicts are rare. Instead of locking the resource, it allows multiple processes to read and attempt to update it freely. When a process tries to save changes, the system checks whether the data was modified by someone else in the meantime—often using a version number or timestamp. If a conflict is detected, the update is rejected or retried, preventing silent overwrites. This approach improves performance in low-contention environments but requires careful conflict detection and resolution to avoid data inconsistencies.

1. RowVersion Approach

In .NET, optimistic concurrency can be implemented using a row version: a special column (often a byte[] mapped to SQL Server's rowversion/timestamp type, or a GUID) that tracks the state of a database row. This version column is used to detect whether the data has been changed by another process or user between the time it was read and the time it is updated.

The rowversion type is a SQL Server-specific feature whose value is managed by the database, not by application code. The details of setting up an automatically updating concurrency token differ across databases, and some databases (e.g., SQLite) don't support them at all.
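
If you prefer data annotations over the fluent IsRowVersion() call used in the example below, the same column can be marked with the [Timestamp] attribute. The following is a minimal sketch of just the entity, assuming SQL Server's rowversion type:

using System.ComponentModel.DataAnnotations;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }

    // [Timestamp] configures this property as a rowversion column and as a
    // concurrency token; SQL Server regenerates the value on every update.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}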

How It Works

  1. Reading the Data: When an entity is fetched from the database, EF Core also retrieves the value of its row version column (usually a byte[] marked with [Timestamp]).

  2. Making Changes: The application modifies the entity in memory as needed.

  3. Saving Changes: When SaveChanges() is called, EF Core generates an UPDATE statement that includes the entity’s original version value in the WHERE clause alongside its ID. This ensures that the update only proceeds if the row hasn’t changed.

  4. Database Updates the Row Version: If the update succeeds, the database automatically generates a new incremented binary value for the row version column. This happens within the database engine (not in EF Core), ensuring the version always reflects the most recent state of the row.

  5. Conflict Detection: If another process has modified the row and the version has changed, the UPDATE affects zero rows, and EF Core throws a DbUpdateConcurrencyException — signaling a concurrency conflict.

  6. Handling the Conflict: You can catch the exception in your code and handle it appropriately: retry the operation, merge changes, discard your update, or inform the user.

Examples

using Microsoft.EntityFrameworkCore;
using System;
using System.Threading.Tasks;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    // Automatic version tracking
    public byte[] RowVersion { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=ConcurrencyDemo;Trusted_Connection=True");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>()
            .Property(p => p.RowVersion)
             // Configure as row version
            .IsRowVersion();
    }
}

class Program
{
    static async Task Main(string[] args)
    {
        await InitializeDatabase();
        var productId = 1;

        // User 1 loads product
        using var user1Context = new AppDbContext();
        var user1Product = await user1Context.Products.FindAsync(productId);

        // User 2 loads same product
        using var user2Context = new AppDbContext();
        var user2Product = await user2Context.Products.FindAsync(productId);

        // User 1 updates first
        user1Product.Price += 10;
        await user1Context.SaveChangesAsync();
        Console.WriteLine("User 1 updated successfully");

        // User 2 tries to update (will fail)
        try
        {
            user2Product.Price -= 5;
            await user2Context.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            Console.WriteLine("User 2: Conflict detected! Reloading data...");
            await user2Context.Entry(user2Product).ReloadAsync();
            Console.WriteLine($"Current price: {user2Product.Price}");
        }
    }

    static async Task InitializeDatabase()
    {
        using var context = new AppDbContext();
        await context.Database.EnsureDeletedAsync();
        await context.Database.EnsureCreatedAsync();
        context.Products.Add(new Product { Name = "Laptop", Price = 999.99m });
        await context.SaveChangesAsync();
    }
}
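
Step 6 above mentions retrying as one way to handle a conflict. The program shown here simply reloads the data; the sketch below extends it with a retry loop that re-reads the entity and re-applies the change until it saves cleanly. It assumes the Product and AppDbContext types from the example above; the method name, discount operation, and retry count are illustrative.

// Hedged sketch of a retry loop for optimistic concurrency conflicts.
static async Task ApplyDiscountWithRetryAsync(int productId, decimal discount, int maxRetries = 3)
{
    for (var attempt = 1; attempt <= maxRetries; attempt++)
    {
        using var context = new AppDbContext();
        var product = await context.Products.FindAsync(productId);
        product.Price -= discount;

        try
        {
            await context.SaveChangesAsync();
            return; // saved without a conflict
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another user changed the row since we read it; the next
            // iteration re-reads the fresh RowVersion and tries again.
            if (attempt == maxRetries)
                throw; // give up after the configured number of attempts
        }
    }
}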

2. ConcurrencyToken Approach

In EF Core, optimistic concurrency can be implemented using a ConcurrencyToken which is a property that the framework uses to detect conflicting updates. Unlike the [Timestamp] or RowVersion approach, this method lets the application manage the token value explicitly, rather than relying on the database to generate or increment it.

Common choices for the concurrency token include int or long (for manual versioning), string (for hashes or GUIDs), DateTime (for last-modified timestamps), and Guid (to track changes with unique IDs). These values must be updated manually in the application code before saving. This approach gives you more control over when and how concurrency is checked.

This strategy is useful in databases like SQLite, which lack native support for auto-updating row versions and also gives finer control in databases like SQL Server, allowing developers to decide exactly when and how the token should be updated.
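
As a sketch of the annotation-based alternative, a Guid token can be declared with the [ConcurrencyCheck] attribute instead of the fluent IsConcurrencyToken() call used in the example below. The Order entity and Version property here are illustrative, not part of the example:

using System;
using System.ComponentModel.DataAnnotations;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }

    // [ConcurrencyCheck] tells EF Core to include the original Version value
    // in the WHERE clause of every UPDATE for this entity.
    [ConcurrencyCheck]
    public Guid Version { get; set; }
}

// The application rotates the token itself before saving, for example:
// order.Status = "Shipped";
// order.Version = Guid.NewGuid();   // a new value marks the row as changed
// await context.SaveChangesAsync();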

How It Works

  1. Reading the Data: EF Core reads the entity from the database and stores the current value of the ConcurrencyToken property (e.g., an int, string, Guid, or DateTime).

  2. Making Changes: The application modifies the entity in memory as needed. You must also update the concurrency token manually in your code—typically by incrementing a version number, updating a timestamp, or generating a new Guid. EF Core does not handle this automatically.

  3. Saving Changes: When SaveChanges() is called, EF Core generates an UPDATE statement that includes the original concurrency token value in the WHERE clause. This ensures the update only succeeds if no other changes occurred since the entity was read.

  4. Conflict Detection: If another process has updated the row and changed the token in the meantime, the UPDATE affects zero rows. EF Core detects this and throws a DbUpdateConcurrencyException.

  5. Handling the Conflict: You can catch the exception and decide how to respond—retry the operation, reload the entity, merge changes, or inform the user of the conflict.

Example

using Microsoft.EntityFrameworkCore;
using System;
using System.Threading.Tasks;

public class Document
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    // Concurrency token
    public DateTime LastModified { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Document> Documents { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=DocDB;Trusted_Connection=True");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Document>(entity =>
        {
            // Application-managed token: EF Core checks it in the WHERE clause,
            // but the code is responsible for setting a new value before saving.
            entity.Property(d => d.LastModified)
                .IsConcurrencyToken();
        });
    }
}

class Program
{
    static async Task Main(string[] args)
    {
        await InitializeDB();
        var docId = 1;

        // User A loads document
        using var userAContext = new AppDbContext();
        var userADoc = await userAContext.Documents.FindAsync(docId);

        // User B loads same document
        using var userBContext = new AppDbContext();
        var userBDoc = await userBContext.Documents.FindAsync(docId);

        // User A updates first
        userADoc.Content += "\nUser A edit";
        userADoc.LastModified = DateTime.UtcNow;
        await userAContext.SaveChangesAsync();

        // User B tries to update
        try
        {
            userBDoc.Content += "\nUser B edit";
            await userBContext.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            Console.WriteLine("Conflict detected! User B's changes rejected.");
            // Reload the entity so we see User A's committed values instead of the stale copy.
            await userBContext.Entry(userBDoc).ReloadAsync();
            Console.WriteLine($"Current LastModified: {userBDoc.LastModified}");
        }
    }

    static async Task InitializeDB()
    {
        using var context = new AppDbContext();
        await context.Database.EnsureDeletedAsync();
        await context.Database.EnsureCreatedAsync();
        context.Documents.Add(new Document
        {
            Title = "Project Plan",
            Content = "Initial content",
            LastModified = DateTime.UtcNow
        });
        await context.SaveChangesAsync();
    }
}
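
Step 5 of the list above also mentions merging changes. A common pattern after catching DbUpdateConcurrencyException is to compare the conflicting entries with the values currently stored in the database and decide which side wins. The helper below is a hedged sketch of a "database wins" strategy, assuming the AppDbContext from the example above; the method name and the chosen policy are assumptions, not part of the demo.

// Resolves a conflict by discarding local edits and adopting the database values.
static async Task SaveWithDatabaseWinsAsync(AppDbContext context)
{
    try
    {
        await context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException ex)
    {
        foreach (var entry in ex.Entries)
        {
            // The row as it currently exists in the database (null if it was deleted).
            var databaseValues = await entry.GetDatabaseValuesAsync();

            if (databaseValues == null)
            {
                Console.WriteLine("The document was deleted by another user.");
                continue;
            }

            // "Database wins": overwrite both the tracked and original values.
            entry.OriginalValues.SetValues(databaseValues);
            entry.CurrentValues.SetValues(databaseValues);
        }
    }
}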

Pessimistic Concurrency

Pessimistic concurrency is a strategy used to manage access to shared data in environments where multiple operations might try to modify the same resource at the same time. It works by assuming conflicts will happen, so it prevents them upfront by locking the resource. When a process locks a resource, other processes must wait until the lock is released before they can read or change it. This method ensures data integrity in high-contention scenarios but can lead to performance issues, such as slowdowns or deadlocks, if locks aren't managed properly.

1. Transaction Isolation Approach

In EF Core, pessimistic concurrency can be implemented using transaction isolation levels and explicit locking to prevent conflicts by blocking access to data during a transaction. Instead of assuming that conflicts are rare (as in optimistic concurrency), this approach uses DbContext.Database.BeginTransaction() with a specified isolation level, such as Serializable or RepeatableRead, to control how data is locked and accessed by concurrent operations. Additionally, you can use raw SQL queries with locking hints (like SELECT ... FOR UPDATE, or WITH (UPDLOCK) in SQL Server) to explicitly lock the rows being read, ensuring that no other process can modify them until the transaction completes. This guarantees consistency in high-contention scenarios but may reduce concurrency or cause deadlocks if not managed carefully.

How It Works

  1. Starting the Transaction

    • A transaction begins with a specified isolation level (e.g., Serializable, RepeatableRead).
    • The isolation level determines how locks are acquired (e.g., row-level vs. table-level) and how long they are held.
  2. Reading Data (Locking Behavior Depends on Isolation Level)

    • Under high isolation levels (e.g., Serializable), the database implicitly locks the rows read by SELECT queries to prevent concurrent modifications, without needing FOR UPDATE.
    • FOR UPDATE is not mandatory; it is only needed when you require explicit row locks at lower isolation levels (e.g., ReadCommitted).
    • In EF Core, you can use raw SQL (FromSqlRaw) with FOR UPDATE if fine-grained locking is needed.
  3. Modifying Data

    • Since the rows are locked (either implicitly by isolation level or explicitly via FOR UPDATE), no other transaction can modify them until the lock is released.
    • Changes are made in memory and tracked by EF Core.
  4. Saving Changes

    • SaveChanges() executes the UPDATE while locks are still active, ensuring no conflicts occur.
    • Unlike optimistic concurrency, no version checks are needed—locks prevent concurrent edits.
  5. Committing or Rolling Back

    • On Commit(), locks are released, and changes become permanent.
    • If the transaction fails or deadlocks, Rollback() releases locks, allowing other transactions to proceed.
  6. Handling Contention (Deadlocks/Timeouts)

    • If another transaction tries to access locked data, it may block, timeout, or deadlock.
    • Applications should implement retry logic or error handling for such cases; a minimal retry sketch follows this list.
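
The retry sketch referenced in step 6 might look like the helper below. It is an assumption-laden illustration for SQL Server: error number 1205 signals a deadlock victim and -2 a command timeout, and the Microsoft.Data.SqlClient provider (used by UseSqlServer in current EF Core versions) is assumed. The method name, backoff, and retry count are illustrative.

// Requires: using Microsoft.Data.SqlClient;
// Re-runs the transactional work if it loses a deadlock or times out.
static async Task RunWithDeadlockRetryAsync(Func<Task> work, int maxRetries = 3)
{
    for (var attempt = 1; attempt <= maxRetries; attempt++)
    {
        try
        {
            await work(); // e.g. the whole transfer transaction shown in the examples below
            return;
        }
        catch (Exception e)
        {
            // EF Core may wrap the provider error, so check the inner exception too.
            var sqlError = e as SqlException ?? e.InnerException as SqlException;
            var isTransient = sqlError != null && (sqlError.Number == 1205 || sqlError.Number == -2);

            if (!isTransient || attempt == maxRetries)
                throw; // not a deadlock/timeout, or out of retries

            // Short backoff so the competing transaction can finish first.
            await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
        }
    }
}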

Example (without explicit locking, relying on IsolationLevel.Serializable)

using Microsoft.EntityFrameworkCore;
using System;
using System.Threading.Tasks;

public class BankAccount
{
    public int Id { get; set; }
    public string AccountNumber { get; set; }
    public decimal Balance { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<BankAccount> Accounts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=BankDemo;Trusted_Connection=True");
}

class Program
{
    static async Task Main(string[] args)
    {
        await InitializeDatabase();
        var accountId = 1;

        // Transaction with Serializable isolation
        using var db = new AppDbContext();
        using var transaction = await db.Database
            .BeginTransactionAsync(System.Data.IsolationLevel.Serializable);

        try
        {
            var account = await db.Accounts.FindAsync(accountId);

            Console.WriteLine($"Current balance: {account.Balance}");
            Console.WriteLine("Processing transfer (5 second delay)...");
            await Task.Delay(5000);

            account.Balance -= 100;
            await db.SaveChangesAsync();
            await transaction.CommitAsync();

            Console.WriteLine("Transaction completed successfully");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Transaction failed: {ex.Message}");
            await transaction.RollbackAsync();
        }
    }

    static async Task InitializeDatabase()
    {
        using var context = new AppDbContext();
        await context.Database.EnsureDeletedAsync();
        await context.Database.EnsureCreatedAsync();
        context.Accounts.Add(new BankAccount
        {
            AccountNumber = "ACC123",
            Balance = 1000.00m
        });
        await context.SaveChangesAsync();
    }
}

Example (using UPDLOCK, SQL Server's equivalent of FOR UPDATE, with IsolationLevel.ReadCommitted)

using Microsoft.EntityFrameworkCore;
using System;
using System.Threading.Tasks;

public class BankAccount
{
    public int Id { get; set; }
    public string AccountNumber { get; set; }
    public decimal Balance { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<BankAccount> Accounts { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=BankDemo;Trusted_Connection=True");
}

class Program
{
    static async Task Main(string[] args)
    {
        await InitializeDatabase();
        var accountId = 1;

        // Using ReadCommitted isolation with explicit locking
        using var dbContext = new AppDbContext();
        using var transaction = await dbContext.Database
            .BeginTransactionAsync(System.Data.IsolationLevel.ReadCommitted);

        try
        {
            // Explicitly lock the row using UPDLOCK (SQL Server's equivalent of FOR UPDATE)
            var account = await dbContext.Accounts
                .FromSqlRaw("SELECT * FROM Accounts WITH (UPDLOCK, ROWLOCK) WHERE Id = {0}", accountId)
                .FirstOrDefaultAsync();

            Console.WriteLine($"Current balance: {account.Balance}");
            Console.WriteLine("Processing transfer (5 second delay)...");
            await Task.Delay(5000);

            account.Balance -= 100;
            await dbContext.SaveChangesAsync();
            await transaction.CommitAsync();

            Console.WriteLine("Transaction completed successfully");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Transaction failed: {ex.Message}");
            await transaction.RollbackAsync();
        }
    }

    static async Task InitializeDatabase()
    {
        using var context = new AppDbContext();
        await context.Database.EnsureDeletedAsync();
        await context.Database.EnsureCreatedAsync();
        context.Accounts.Add(new BankAccount
        {
            AccountNumber = "ACC123",
            Balance = 1000.00m
        });
        await context.SaveChangesAsync();
    }
}

2. Semaphore Approach

In multithreaded applications, pessimistic locking can be implemented using synchronization primitives like SemaphoreSlim or lock statements to enforce exclusive access to shared resources. Unlike database-level locks (which prevent concurrent DB access), this approach manages in-memory contention when multiple threads compete for the same application-level resource.

Example

using Microsoft.EntityFrameworkCore;
using System;
using System.Threading;
using System.Threading.Tasks;

public class Ticket
{
    public int Id { get; set; }
    public string Event { get; set; }
    public int Available { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Ticket> Tickets { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=TicketDemo;Trusted_Connection=True");
}

class Program
{
    private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1); // Only 1 thread at a time

    static async Task Main(string[] args)
    {
        await InitializeDatabase();
        var ticketId = 1;

        // Simulate 3 users trying to book tickets simultaneously
        var tasks = new[]
        {
            BookTickets(ticketId, "User 1", 2),
            BookTickets(ticketId, "User 2", 3),
            BookTickets(ticketId, "User 3", 1)
        };

        await Task.WhenAll(tasks);

        // Check final availability
        using var db = new AppDbContext();
        var final = await db.Tickets.FindAsync(ticketId);
        Console.WriteLine($"\nFinal availability: {final.Available}");
    }

    static async Task BookTickets(int ticketId, string user, int quantity)
    {
        await _semaphore.WaitAsync(); // Wait for semaphore
        try
        {
            using var db = new AppDbContext();
            var ticket = await db.Tickets.FindAsync(ticketId);

            Console.WriteLine($"{user} checking: {ticket.Available} available");

            if (ticket.Available >= quantity)
            {
                await Task.Delay(1000); // Simulate processing
                ticket.Available -= quantity;
                await db.SaveChangesAsync();
                Console.WriteLine($"{user} booked {quantity} tickets");
            }
            else
            {
                Console.WriteLine($"{user} failed: Not enough tickets");
            }
        }
        finally
        {
            _semaphore.Release(); // Release semaphore
        }
    }

    static async Task InitializeDatabase()
    {
        using var context = new AppDbContext();
        await context.Database.EnsureDeletedAsync();
        await context.Database.EnsureCreatedAsync();
        context.Tickets.Add(new Ticket { Event = "Concert", Available = 5 });
        await context.SaveChangesAsync();
    }
}

How It Works

  1. Acquiring the Lock

    • A semaphore (or lock in C#) acts as a gatekeeper, allowing only one thread to enter a critical section at a time.
    • Example:
      private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1); // Only 1 thread allowed
      
  2. Critical Section Execution

    • Threads must wait for the semaphore before accessing/modifying shared data.
    • Example:
      await _semaphore.WaitAsync(); // Blocks if another thread holds the lock
      try
      {
          // Modify shared resource (e.g., static list, cache, or file)
          _sharedList.Add(item);
      }
      finally
      {
          _semaphore.Release(); // Releases the lock
      }
      
  3. Conflict Prevention

    • Other threads attempting to acquire the lock are blocked until the current thread releases it.
    • Ensures atomicity for in-memory operations (similar to DB transactions).
  4. Deadlock Handling

    • Use try-finally to guarantee lock release, even if an exception occurs.
    • Avoid nested locks to prevent deadlocks.
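
Related to point 4, a semaphore wait can also be given a timeout so a thread doesn't block indefinitely if the lock is never released. A small sketch, reusing the _semaphore from the example above (the two-second timeout is an arbitrary assumption):

// Wait at most 2 seconds for the lock instead of blocking indefinitely.
if (await _semaphore.WaitAsync(TimeSpan.FromSeconds(2)))
{
    try
    {
        // Critical section: touch the shared resource here.
    }
    finally
    {
        _semaphore.Release();
    }
}
else
{
    Console.WriteLine("Could not acquire the lock in time; giving up or retrying later.");
}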

Comparison Table

Approach                      | Mechanism                         | Best For                     | Performance Impact
Optimistic (RowVersion)       | Version checking on save          | Web apps, read-heavy systems | Low
Optimistic (ConcurrencyToken) | Property value checking           | Selective field updates      | Medium
Transaction Isolation         | Database-level locking            | Financial transactions       | High
Semaphore                     | Application-level thread control  | In-memory coordination       | Medium

Handling Race Conditions in Monolith vs. Distributed Systems

In this article, we've covered two fundamental approaches to concurrency control—optimistic and pessimistic locking—which are primarily designed for monolithic applications where:

  • All data access goes through a single database
  • The application runs as a single process or a tightly coupled cluster
  • Lock state can be easily maintained in memory or a single database

These solutions work perfectly for traditional monolithic architectures because:

  1. Pessimistic locking relies on database-level locks that are immediately visible to all threads
  2. Optimistic concurrency depends on version checks that happen atomically within a single database transaction
  3. The entire state exists in a single consistency boundary

Why These Approaches Don't Scale to Distributed Systems

In distributed systems (microservices, cloud-native apps, or any multi-node, multi-instance architecture), traditional concurrency controls face significant challenges:

  1. No single source of truth - Data may be partitioned or replicated across nodes
  2. Network latency makes immediate lock propagation impossible
  3. Partial failures can leave locks hanging indefinitely
  4. Cross-service transactions break the atomicity guarantees

For distributed scenarios, we typically implement distributed locking patterns using solutions like:

  • Redis RedLock algorithm
  • ZooKeeper/etcd for coordination
  • Database-based solutions with expiration
  • Lease-based locking mechanisms

These approaches come with their own trade-offs around performance, fault tolerance, and complexity.

In the next blog post, "Distributed Concurrency Control: Complete Guide with Examples", we'll explore these patterns in depth.

For now, if you're building a monolithic .NET application with a single database, the optimistic and pessimistic approaches we've demonstrated will provide robust concurrency control for your needs.

Conclusion

When implementing concurrency control:

  1. Choose Optimistic Concurrency when:

    • Conflicts are rare
    • You want better performance
    • Your application is read-heavy
    • Using RowVersion for simplicity or ConcurrencyToken for granular control
  2. Choose Pessimistic Concurrency when:

    • Data conflicts must be prevented
    • Working with critical financial data
    • Implementing inventory systems
    • Using transactions for full isolation or explicit locks for specific records

Each approach has trade-offs between performance and data safety. Test your implementation under realistic load conditions to ensure it meets your application requirements. The provided runnable examples demonstrate all four approaches in complete, self-contained programs you can use as templates for your projects.