Choosing the Right Caching Strategy in .NET 9

Introduction

In today's world of high user expectations and modern cloud apps, performance and instant response times aren't just nice-to-haves; they're requirements. Caching is one of the most effective tools in your performance toolbox, letting you store frequently accessed data closer to your application in memory. This means fewer database calls, lower latency, faster response times, and smoother user experiences.

However, effective caching is more than just putting data in memory. Applications must choose the right caching strategy for their unique challenges: Should data be kept up to date everywhere? Does cache consistency matter more than speed? Can the cache stay reliable across dozens (or hundreds) of servers?

In this blog, we’ll walk through four popular caching strategies:

  • Read-Through
  • Cache-Aside
  • Write-Through
  • Write-Behind

But that isn't all. With .NET 9, a major step forward arrives: the built-in HybridCache. This new feature combines the speed of local in-memory caching with the consistency and scalability of distributed caches like Redis, all with minimal configuration. Hybrid caching delivers the best of both worlds and is particularly powerful for cloud-native, microservice, and large web applications. Let's dive in and discover how modern .NET makes high-performance, scalable caching easier than ever!

Why HybridCache? (What Hybrid Caching Solves)

Traditional caching in .NET apps is often a choice between:

  • In-Memory Cache (MemoryCache): Extremely fast (because it's local), but limited to a single server instance. When running behind a load balancer or scaling out, each instance has different cached data—leading to inconsistency.
  • Distributed Cache (e.g., Redis, NCache): Centralized in a networked store so all servers get the same data, but every access takes a network hop—slower than reading from in-memory.

Hybrid caching brings a new strategy:

  • Reads come from local memory if available (ultra-fast!).
  • If the local cache doesn't have the value, fallback is to distributed cache (shared and consistent).
  • When any instance writes or evicts a cache entry, other instances are notified (cache coherence!) so their local caches are updated or invalidated.

Imagine this as having a "superfast local shortcut" but always falling back to a "central reliable source" as needed.

With HybridCache:

  • Local RAM = fast, cheap, but not shared
  • Distributed cache = slower, but consistent everywhere
  • HybridCache = best of both
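
Conceptually, a hybrid read follows a "local first, distributed second" lookup order. The sketch below illustrates that flow; it is a simplified two-tier cache for intuition only, not the actual HybridCache internals (the TwoTierCache name is illustrative):

using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;

// Conceptual sketch of a two-tier read - not the actual HybridCache internals.
public class TwoTierCache
{
    private readonly IMemoryCache _memoryCache;
    private readonly IDistributedCache _distributedCache;

    public TwoTierCache(IMemoryCache memoryCache, IDistributedCache distributedCache)
    {
        _memoryCache = memoryCache;
        _distributedCache = distributedCache;
    }

    public async Task<string?> GetAsync(string key)
    {
        // 1. Local in-memory lookup first (no network hop).
        if (_memoryCache.TryGetValue(key, out string? local))
            return local;

        // 2. Fall back to the distributed cache (shared and consistent).
        var remote = await _distributedCache.GetStringAsync(key);
        if (remote != null)
            _memoryCache.Set(key, remote, TimeSpan.FromMinutes(1)); // 3. Repopulate the local tier

        return remote;
    }
}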

Setting Up Hybrid Caching in .NET 9

.NET 9 introduces a built-in HybridCache through the Microsoft.Extensions.Caching.Hybrid package. This new approach combines the blazing speed of in-memory caching (like MemoryCache) with the scalability and resilience of distributed caches (like Redis or NCache).

Here’s how you can get started with hybrid caching in a .NET 9 application.

✅ Step 1: Add the Required Package

First, make sure you have the right NuGet package installed:

dotnet add package Microsoft.Extensions.Caching.Hybrid
# if using Redis for distributed caches
dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis

✅ Step 2: Register the Hybrid Cache in Program.cs

You can configure the hybrid cache by combining MemoryCache and Redis (or another distributed cache). Here's an example using Redis:

builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new()
    {
        Expiration = TimeSpan.FromMinutes(15),
        LocalCacheExpiration = TimeSpan.FromMinutes(15)
    };
});

// if using Redis for distributed caches
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
});

By default, HybridCache registers an in-memory cache through the IMemoryCache interface. To add Redis as the secondary (distributed) cache, register it in dependency injection with builder.Services.AddStackExchangeRedisCache; HybridCache automatically uses any IDistributedCache implementation it finds.
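
Putting it together, a minimal Program.cs wiring might look like the following sketch (assuming a "Redis" connection string in your configuration):

var builder = WebApplication.CreateBuilder(args);

// Secondary (distributed) cache: HybridCache uses any registered IDistributedCache.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
});

// HybridCache: local in-memory tier plus the distributed tier above.
builder.Services.AddHybridCache(options =>
{
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        Expiration = TimeSpan.FromMinutes(15),          // distributed-tier lifetime
        LocalCacheExpiration = TimeSpan.FromMinutes(15) // in-memory tier lifetime
    };
});

var app = builder.Build();
app.Run();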

✅ Step 3: Inject and Use HybridCache

Now you can inject HybridCache in your services or controllers and use it just like any other cache service:

public class ProductService
{
    private readonly HybridCache  _cache;

    public ProductService(HybridCache cache)
    {
        _cache = cache;
    }

    public async Task<Product> GetProductAsync(int id)
    {
        string key = $"product:{id}";
        return await _cache.GetOrCreateAsync(
            key,
            async cancel => await FetchProductFromDatabase(id),
            new HybridCacheEntryOptions { Expiration = TimeSpan.FromMinutes(10) }
        );
    }
}

With this setup, you're now using hybrid caching: fast lookups from memory when available, fallback to Redis when needed, and automatic updates across layers. It's a powerful model, especially in microservice or cloud-native environments.

1. Cache-Aside (Lazy Loading)

The Cache-Aside pattern, also known as Lazy Loading, is a caching strategy where the application is responsible for loading data into the cache on demand. When data is requested, the application first checks the cache. If the data isn't found (a cache miss), the application retrieves it from the data source (like a database), stores it in the cache, and then returns it to the caller. The next request for the same data will be served directly from the cache, greatly improving performance for frequently accessed data.

How It Works Step-by-Step

  1. The application requests data from the cache using a key. (Step ①: Get(key))

  2. If the data is present in the cache (cache hit):

    • The cache immediately returns the cached data to the application. (Step ②/③: Return cached data)
  3. If the data is not found in the cache (cache miss):

    • The cache informs the application that the key was not found. (Step ②/③: Not found)
    • The application queries the database for the required data. (Step ④: Query data)
    • The database returns the requested data to the application. (Step ⑤: Return data from DB)
    • The application stores the newly retrieved data in the cache, optionally setting a TTL (time-to-live) for expiration. (Step ⑥: Set(key, data, TTL))
    • Finally, the application returns the data to the original requester. (Step ⑦: Return data to caller)

Cache-Aside Diagram

When Should You Use Cache-Aside?

Cache-Aside shines in scenarios with the following requirements:

  • Frequently Read, Rarely Updated Data: Ideal for data that is accessed often but doesn’t change much, like product catalogs or user profile data. The benefits of caching are maximized while the risk of serving outdated information is minimized.
  • Eventual Consistency is Acceptable: Since there can be a lag between when the database is updated and when the cache refreshes, this pattern works best if your system can tolerate short-lived discrepancies between cache and data source. For example, a minor delay in reflecting profile picture changes for a user's account is often acceptable.
  • High Availability Required: With cache-aside, your application always goes to the backing store if the cache is unavailable. This makes the system more robust: if the cache server goes down, you can still serve requests (at the cost of performance). You don’t lose data or face application crashes—just slower response times until the cache recovers.
  • Cache Memory is Limited: Cache-aside doesn’t preload data. Only data that’s requested is cached, so memory usage stays efficient even with large datasets.

Advantages

  • Simple to Implement – Easy to add to most .NET apps with minimal changes.
  • Reduces Database Load – Only queries the DB on cache misses.
  • Memory Efficient – Only caches data that's actually requested.
  • Works for Read-Heavy Workloads – Especially beneficial when reads far exceed writes.
  • Fault-Tolerant – If the cache fails, the system falls back to the database (graceful degradation).

Disadvantages

  • Cache Misses Add Latency – The first request for an item is slower.
  • Eventual Consistency – There is a delay between DB updates and cache refresh.
  • Stale Data Risk – If the database updates, the cache remains outdated until expiration (TTL). Common mitigations:
    • Shorten TTL – Reduces staleness, but increases DB load.
    • Manual Invalidation – Delete the cache key when the data is updated.
    • Combine with Write-Through – Update both cache and database during writes to avoid staleness.

Use Cases

  1. E-commerce product catalogs: Products that don't change often but are frequently viewed
  2. User profile information: User data that changes infrequently
  3. Content management systems: Articles, blog posts, or other content
  4. Reference data: Countries, currencies, or other static data

Real World Example in .NET

Suppose you have an e-commerce application where you frequently need to fetch product details. Using Cache-Aside with IMemoryCache:

public class ProductService
{
    private readonly IMemoryCache _cache;
    private readonly AppDbContext _dbContext;

    public ProductService(IMemoryCache cache, AppDbContext dbContext)
    {
        _cache = cache;
        _dbContext = dbContext;
    }

    // Cache-Aside Get
    public Product GetProduct(int id)
    {
        string cacheKey = $"product:{id}";
        if (!_cache.TryGetValue(cacheKey, out Product product))
        {
            // Cache miss - load from DB and cache it
            product = _dbContext.Products.Find(id);
            if (product != null)
                _cache.Set(cacheKey, product, TimeSpan.FromHours(1));
        }

        return product;
    }

   // Update product and invalidate cache (Option 1: Remove Cache Entry on Update)
   public void UpdateProduct(Product updatedProduct)
   {
       // 1. Update database
       _dbContext.Products.Update(updatedProduct);
       _dbContext.SaveChanges();

       // 2. Remove cached entry (next Get will repopulate)
       _cache.Remove($"product:{updatedProduct.Id}");
   }

   // Update product, refreshing the cache (Option 2: Update cache immediately - Write-Through hybrid)
   public void UpdateProductAndRefreshCache(Product updatedProduct)
   {
       // 1. Update database
       _dbContext.Products.Update(updatedProduct);
       _dbContext.SaveChanges();

       // 2. Update cache immediately (not just remove)
       _cache.Set($"product:{updatedProduct.Id}", updatedProduct,
           new MemoryCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(5) });
   }
}

This C# class ProductService illustrates the cache-aside (also known as lazy loading) caching strategy for handling product data:

1. Reading Data (GetProduct method)

  • When GetProduct is called, it first checks the cache using a cache key (e.g., "product:5").
  • If the product is in the cache (cache hit), it is returned immediately.
  • If the product is not in the cache (cache miss), it is loaded from the database via _dbContext.Products.Find(id).
  • After fetching from the database, it adds the product to the cache for quicker access in future requests.
  • This reduces database load for frequent reads and improves application performance.

2. Updating Data (UpdateProduct and UpdateProductAndRefreshCache methods)

  • When updating a product, it must be changed in both the database and cache to keep data consistent.

  • Option 1: Invalidate/Remove Cache on Update

    • The updated product is saved to the database.
    • The corresponding cache entry is removed (_cache.Remove(...)). The next read will fetch the latest data from the database and replace the cache entry.
    • This is a pure cache-aside approach, ensuring stale data is not served.
  • Option 2: Update Cache Immediately (Hybrid Write-Through)

    • After updating the database, the method immediately updates the cache with the new product data.
    • This reduces the window in which stale data may be served and keeps cache and database in sync.
    • This approach blends cache-aside with write-through behavior.

Why This Meets the Cache-Aside Pattern

  • Responsibility for caching is in the application logic (not handled automatically by the cache system itself).
  • Data is loaded into the cache only on demand (first read is a cache miss, after which cache is populated).
  • Update and invalidation logic is manual: the service takes care of removing or updating cache entries after database writes to prevent stale cache usage.
  • This approach is best for data that is read frequently but updated less often, and where short periods of stale data are acceptable (eventual consistency).

2. Read-Through Caching

The Read-Through pattern is a caching strategy where the cache itself automatically loads data from the database when a cache miss occurs. Unlike Cache-Aside where the application handles cache misses, in Read-Through the cache system takes responsibility for fetching missing data and populating itself.

How It Works Step-by-Step

  1. The client application requests data via the caching library, providing a specific key. (① Request data (by key))
  2. The caching library checks the cache (such as Redis) to see if the data for that key exists. (② Lookup key)
  3. If the data is found in the cache (cache hit):

    • The cache immediately returns the cached data to the caching library. (④ Return cached data)
    • The caching library relays this data back to the client application. (⑤ Return data to the client)
  4. If the data is NOT found in the cache (cache miss):

    • The cache notifies the caching library that the key is missing. (④ Key not found)
    • The caching library retrieves the data directly from the database. (⑤ Fetch data from the database)
    • Once the database responds, the caching library receives the fresh data. (⑥ Return data from DB)
    • The caching library stores this newly acquired data in the cache under the same key for future use. (⑦ Store data in cache)
    • Finally, the caching library returns the data to the client application. (⑧ Return data to the client)
  5. Automatic management: The caching library handles all cache reads, writes, and database fetches automatically. The application code simply requests data—the library transparently determines whether to serve it from cache or database.
  6. Faster future requests: For further requests of the same key, data is served directly from the cache (until it expires or is evicted), delivering optimal performance and reducing load on your database.

Read-Through Diagram

When Should You Use Read-Through?

Read-Through is a good fit in scenarios with the following requirements:

  • Centralized Cache Management Needed: Ideal when you want to encapsulate all cache logic in one layer and prevent writing manual cache-update logic in every part of your codebase, making application code simpler and more consistent across services.
  • Consistency and Simplicity: Promotes consistency and simplicity by centralizing cache miss handling.
  • Microservices Architecture: Particularly useful in distributed systems where multiple services should share the same cache behavior without duplicating logic.
  • Legacy System Modernization: Allows adding caching to existing systems with minimal changes to business logic.
  • Read-Heavy Scenarios: Best when the same data is frequently requested but changes infrequently, like reference data or product catalogs.
  • Eventual Consistency is Acceptable: Suitable for non-critical data that can tolerate brief staleness between database updates and cache refreshes.
  • High Availability Requirements: When your system must remain operational even during cache failures, read-through gracefully degrades to direct database access.

Advantages

  • Simpler Application Code – Miss handling and cache population logic are encapsulated, minimizing repetitive code across the application.
  • Centralized Miss Handling – Fallback to the database on cache miss is handled in one place, reducing duplication and maintenance effort.
  • Consistent Behavior – Ensures uniform cache population and retrieval strategies across different services or components.
  • Great for Distributed Systems – With a shared caching layer, multiple services can benefit from the same cached data and logic.
  • Reduces Database Load – Repeated reads hit the cache, not the database, preserving database resources.
  • Efficient for Read-Heavy Apps – Frequently requested data is served from cache, reducing query overhead.
  • Easy to Integrate – Read-through caching libraries can often be added with minimal changes to existing application logic.
  • Better Scalability – Because the cache itself manages populating data, it’s easier to horizontally scale your application and caching layer.

Disadvantages

  • Less Flexible – It’s more challenging to implement custom miss logic, handle complex queries, or fetch data from multiple sources.
  • Cache Misses Add Latency – The first request after a cache expiry (or a miss) is slower, since it involves a database call and cache population.
  • Eventual Consistency – Updates to the underlying database aren’t instantly visible in the cache, risking outdated information after changes.
  • Stale Data Risk – Cached values remain until they expire (TTL) or are explicitly invalidated, which may result in temporary delivery of outdated data.
    • Shorten TTL – Reduces staleness, but increases the DB load.
    • Manual Invalidation – Delete the cache key when the data is updated.
    • Combine with Write-Through – Update both cache and database during writes to avoid staleness.
  • Write Complexity – Cache invalidation or refresh after database updates must be handled separately, increasing the chance of inconsistency.
  • Potential for Bulk Cache Stampedes – If many requests simultaneously miss the cache (e.g., after an eviction or restart), they may flood the database, causing performance issues (known as the "thundering herd" problem); see the mitigation sketch after this list.
  • Storage Overhead – Cached data can consume significant memory, requiring careful management of cache size and eviction policies.
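
One common mitigation for the stampede problem is to ensure only one concurrent caller executes the database fetch for a given key while the others wait; libraries like FusionCache and the new HybridCache include this protection built in. Below is a minimal hand-rolled sketch (the StampedeSafeCache name is illustrative, and production code would typically use one lock per key rather than a single global lock):

using Microsoft.Extensions.Caching.Memory;

public class StampedeSafeCache
{
    private readonly IMemoryCache _cache;
    private readonly SemaphoreSlim _lock = new(1, 1); // kept simple: one lock for the whole cache

    public StampedeSafeCache(IMemoryCache cache) => _cache = cache;

    public async Task<T> GetOrCreateAsync<T>(string key, Func<Task<T>> factory, TimeSpan ttl)
    {
        if (_cache.TryGetValue(key, out T value))
            return value;

        await _lock.WaitAsync();
        try
        {
            // Re-check: another caller may have populated the entry while we waited.
            if (!_cache.TryGetValue(key, out value))
            {
                value = await factory(); // only one concurrent caller reaches the database
                _cache.Set(key, value, ttl);
            }
        }
        finally
        {
            _lock.Release();
        }

        return value;
    }
}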

Use Cases

  1. E-commerce platforms: For product details and pricing that change infrequently
  2. User profile systems: Where profile data is read often but updated rarely
  3. Content delivery networks: For static content that benefits from caching
  4. Configuration management: System settings that are read frequently
  5. Multiservice architectures: Where consistent caching behavior is crucial

Real World Examples in .NET

1. Using a Read-Through Cache Library (e.g., FusionCache, EasyCaching)

using ZiggyCreatures.Caching.Fusion;

public class ProductService
{
    private readonly IFusionCache _cache;
    private readonly AppDbContext _dbContext;

    public ProductService(IFusionCache cache, AppDbContext dbContext)
    {
        _cache = cache;
        _dbContext = dbContext;
    }

    public async Task<Product> GetProductAsync(int id)
    {
        return await _cache.GetOrSetAsync(
            $"product_{id}",
            async ct => await _dbContext.Products.FindAsync(id), // Cache fetches from the DB on a miss
            options => options.SetDuration(TimeSpan.FromMinutes(5))
        );
    }
}

How this demonstrates read-through:

  • Cache Integration: Uses IFusionCache which natively supports read-through patterns
  • The method GetProductAsync tries to retrieve a product from the cache using a key.
  • If the product is not found in the cache (cache miss), the cache library automatically fetches it from the database using the delegate you provide.
  • Once retrieved, the value is stored in the cache with the defined expiration.
  • This means cache population and database fallback are centralized and abstracted by the cache library itself.
  • As a result, the application code is straightforward, you only need to define how to fetch the data, not when to cache it.

2. Using .NET HybridCache (.NET 9)

// Requires the Microsoft.Extensions.Caching.Hybrid NuGet package (introduced with .NET 9)
public class ProductService
{
    private readonly HybridCache _cache;
    private readonly AppDbContext _dbContext;

    public ProductService(HybridCache cache, AppDbContext dbContext)
    {
        _cache = cache;
        _dbContext = dbContext;
    }

    public async Task<Product> GetProductAsync(int id)
    {
        return await _cache.GetOrCreateAsync(
            $"product_{id}",
            async cancel => await _dbContext.Products.FindAsync(id), // Read-through delegate
            new HybridCacheEntryOptions { Expiration = TimeSpan.FromMinutes(5) }
        );
    }
}

How this demonstrates read-through:

  • This example uses HybridCache, introduced with .NET 9, to implement the same strategy for asynchronous access.
  • When calling GetOrCreateAsync, the cache first checks for the value.
  • If the value is already in cache, it’s returned instantly.
  • On a cache miss, the supplied delegate runs to fetch the value from the database, which is then stored in the cache.
  • This is a true read-through pattern: database access and caching are handled together in a single line of code.

3. Manual Read-Through Implementation

public class ReadThroughCache
{
    private readonly IMemoryCache _memoryCache;

    public ReadThroughCache(IMemoryCache memoryCache) => _memoryCache = memoryCache;

    public T ReadThrough<T>(string key, Func<T> fetchData, TimeSpan expiry)
    {
        if (!_memoryCache.TryGetValue(key, out T value))
        {
            value = fetchData(); // Fetch from the DB on a cache miss
            _memoryCache.Set(key, value, expiry);
        }
        return value;
    }
}

How this demonstrates read-through:

  • This manual implementation is necessary because the default .NET memory cache does not support read-through behavior out of the box.
  • It checks IMemoryCache for an existing entry. On a miss, it executes the fetchData delegate (which should fetch from the database), caches the result, and returns it.
  • This essentially re-creates the read-through logic that specialized libraries handle for you.
  • With this approach, the developer is still responsible for providing the fetch logic and invoking this method, but it brings explicit read-through caching to any scenario.
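
Usage is then a single call; the product lookup below is illustrative (readThroughCache and dbContext are assumed to be injected):

// Illustrative usage: the lambda runs only on a cache miss.
var product = readThroughCache.ReadThrough(
    $"product_{id}",
    () => dbContext.Products.Find(id),
    TimeSpan.FromMinutes(5));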

Comparison with Cache-Aside

Aspect           | Cache-Aside               | Read-Through
Miss Handling    | Application manages       | Cache system manages
Code Complexity  | More in app code          | Less in app code
Consistency      | Same challenges           | Same challenges
Flexibility      | More flexible             | Less flexible
Best For         | Simple apps, ad-hoc needs | Complex systems, uniformity

Read-Through caching provides a more structured approach to caching that can significantly simplify application code while providing consistent behavior across distributed systems. While it requires more sophisticated cache infrastructure, the benefits in maintainability and consistency often outweigh the costs in enterprise scenarios.

3. Write-Through Caching

The Write-Through pattern guarantees strong consistency by updating both the cache and the database during every write operation. When the application updates data, it first writes the new value to the cache and then immediately writes the same value to the database. This process is synchronous: the operation isn't considered complete until both the cache and the database have been successfully updated. Because of this, any subsequent read will always retrieve the latest data directly from the cache, ensuring clients never receive stale information. This tight synchronization between cache and database eliminates the risk of cache inconsistency. However, this approach does introduce higher write latency, since every write must touch both the cache and the database before returning success to the caller. Despite this, it's an excellent choice for scenarios where data freshness and reliability are critical, as the cache always reflects the most up-to-date state of the database.

How Write-Through Caching Works (Step-by-Step)

  1. The client application issues a write operation (such as updating or inserting data). It first updates the cache with the new value for the specified key. (① Update cache (SET key))
  2. The cache confirms that the update was successful and acknowledges back to the application. (② Confirm cache update)
  3. The application then immediately updates the database with the same data, ensuring both systems stay in sync. (③ Update database)
  4. The database confirms that it has accepted and saved the update. (④ Confirm DB update)
  5. The application returns success to the user or calling logic, signaling the entire operation is complete only after both cache and database have been updated. (⑤ Return success)

Write-Through Diagram

When Should You Use Write-Through?

  • Data consistency is critical: Applications such as financial systems, inventory management, and order processing—where up-to-date information is essential—benefit from Write-Through’s guaranteed freshness.
  • Cache reliability matters: If your system must always return current data from the cache, not allowing stale information even briefly.
  • Reads are much more common than writes: Write-Through is especially effective when your application has high read volume but comparatively fewer writes, allowing you to maximize fast, cache-based retrievals.
  • You want to simplify cache management: Since the cache is always updated alongside the database, there’s no need for complex cache invalidation logic.
  • You need predictability: Synchronous updates ensure that once a write returns “success,” both your cache and database are fully in sync.

Advantages

  • Strong Consistency: Ensures the cache and database are always in sync, so clients never receive stale data.
  • Fresh Reads: Reads from the cache always return the most recent data, simplifying cache management.
  • No Stale Cache: Since data is written to the cache before the database, cache entries are always up to date.
  • Simple Invalidations: Eliminates the need for complex cache invalidation strategies, as the cache is updated with every write.
  • Fault Tolerant Reads: If the cache fails or is flushed, data is always available and correct in the database.

Disadvantages

  • Higher Write Latency: Every write requires updating both the cache and the database, increasing overall write time.
  • Increased System Load: Both systems (cache and DB) handle every write, which adds extra load, especially in write-heavy scenarios.
  • Not Ideal for Write-Heavy Workloads: Systems with high write volumes may experience bottlenecks due to synchronous updates.
  • Complexity with Distributed Writes: Multi-node or distributed databases/caches may require careful coordination to maintain consistency.

Use Cases

  • Financial applications: Banking systems, payment platforms, or any solution requiring up-to-the-moment balances, ledgers, or records.
  • E-commerce stock management: Online stores where product availability and inventory levels must reflect real-time changes to prevent overselling.
  • Order or reservation systems: Booking systems for travel, hotels, or tickets, where double-bookings must be prevented and users see live availability.
  • User profile or account management: Systems where updated user details must always be reflected immediately in further requests.
  • Read-heavy applications with high consistency requirements: Apps where reads vastly outnumber writes and clients expect only the latest data in every response.

Real World Examples in .NET

public class InventoryService
{
    private readonly IDistributedCache _cache;
    private readonly AppDbContext _db;

    public InventoryService(IDistributedCache cache, AppDbContext db)
    {
        _cache = cache;
        _db = db;
    }

    // Write-Through Update
    public async Task UpdateStockAsync(int productId, int newQuantity)
    {
        var cacheKey = $"inventory:{productId}";

        // 1. Update cache
        await _cache.SetStringAsync(
            cacheKey,
            newQuantity.ToString(),
            new DistributedCacheEntryOptions {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30)
            });

        // 2. Update database (guard against a missing product)
        var product = await _db.Products.FindAsync(productId);
        if (product != null)
        {
            product.Stock = newQuantity;
            await _db.SaveChangesAsync();
        }
    }

    // Consistent Read
    public async Task<int> GetStockAsync(int productId)
    {
        var cacheKey = $"inventory:{productId}";
        var cachedStock = await _cache.GetStringAsync(cacheKey);

        if (cachedStock != null)
            return int.Parse(cachedStock);

        // Fallback (should rarely occur)
        var product = await _db.Products.FindAsync(productId);
        return product?.Stock ?? 0;
    }
}

How this demonstrates write-through:

  • Synchronous Writes to Both Cache and Database: Whenever the stock level is updated through UpdateStockAsync, the new value is written to both the distributed cache and the database within the same operation. This ensures that the cache is always consistent with the database at the moment the update completes.
  • Cache Reflects Latest Data: The cache is updated first, so any further reads will immediately reflect the new stock value, eliminating the possibility of serving stale data.
  • Consistent Reads: The GetStockAsync method reads from the cache by default, ensuring fast access to up-to-date information. If the cache value is missing (for example, after expiration), the method gracefully falls back to the database, maintaining data consistency.
  • Simplicity and Reliability: By writing every update to both the cache and the database, this approach avoids the complexities of cache invalidation and reduces the risk of synchronization issues between the two data sources.

4. Write-Behind (Write-Back) Caching

The Write-Behind (Write-Back) caching strategy improves write performance by recording updates to the cache first and persisting them to the database asynchronously, after a delay or in batches. This dramatically reduces the perceived latency for the user, as operations return success once the cache has been updated, not when the database commit is complete. Write-behind caching can be achieved in two main ways:

  1. Caching Provider Managed (like Redis “write-behind” or “persistent cache” modes): Some advanced caching systems, such as Redis (when configured appropriately), support built-in mechanisms to asynchronously persist changes from memory to a backing store or database. In these setups, your application interacts only with the cache, and the provider ensures all changes are eventually saved to the database. This approach simplifies your code but places more responsibility and trust in the cache provider's durability and failure handling.
  2. Application Managed (Background Worker Approach): For cache systems that don’t natively support write-behind, or when you want more control, your application can implement its own background process (worker, thread, or hosted service). Here, every write is recorded in both the cache and a queue. A background task periodically collects updates from the queue and batches them to the database, achieving the same effect as provider-managed write-behind but with more control over batching, retries, and error handling.

Write-Behind Overview Diagram

How Write-Behind Caching Works (Step-by-Step)

Provider-Managed Write-Behind (e.g., Redis)

  1. The application issues a write operation (such as an update or insert), sending the new value to the cache. (① SET (update))
  2. The cache stores the data and adds it to its internal write-behind queue. Both the cache and the queue are persisted within the cache provider. (② Persist in cache & internal queue)
  3. The cache acknowledges success to the application only after both the cache and the queue have been successfully updated, allowing the application to proceed immediately. (③ ACK (success after both cache+queue are updated))
  4. The cache provider asynchronously flushes queued changes to the database in the background, typically in batches, without blocking the application. (④ Async flush from internal queue)
  5. Once the database confirms the write, the cache provider marks the operation as completed—this happens transparently in the background. (⑤ DB write ACK (background))

Provider-Managed Write-Behind Diagram

Application-Managed Write-Behind (Internal Background Worker)

In the Application-Managed Write-Behind approach, the background process that flushes the queue to the database is typically part of the application's runtime environment, not an external or separate service. The queue and background flush logic are managed internally (such as via a thread, task, or scheduler), but from an architectural perspective, they belong to the application itself.

  1. The application receives a write request (such as adding or updating data). It first writes the new value to the cache for fast access.
    (① Write to cache)
  2. The application writes the same data to its internal queue, ensuring the update is safely buffered for later database persistence.
    (② Write to queue)
  3. The application waits for both the cache and the queue to acknowledge that the data has been persisted in each.
    (③ Cache ACK & Queue ACK)
  4. Once both cache and queue have acknowledged, the application returns success to the user or calling logic, confirming the data is safely stored (though not yet in the database).
    (④ Return success)
  5. A background worker in the application periodically polls the queue for pending updates. When found, changes are batched for efficiency.
    (⑤ Poll queue)
  6. The background worker writes the batch to the database.
    (⑥ Batch writes to the database)
  7. Upon successful database write, the worker updates the queue to mark items as flushed or removes them.
    (⑦ DB confirmation to queue)

Application-Managed Write-Behind Diagram

Advantages

  • Reduced Write Latency: The application acknowledges success as soon as the cache is updated.
  • Lower Database Load: Database writes are batched, reducing the frequency and overhead of writes.
  • High Throughput: Supports very high write rates, suitable for logging, telemetry, or event collection.
  • Improved Scalability: Back-end database writes can be spread out and controlled.

Disadvantages

  • Potential Data Loss: If the cache fails before the data is flushed to the database, recent updates may be lost.
  • Eventual Consistency: The database will not reflect the latest state immediately after the write.
  • Complexity: Requires mechanisms for batching, retries, and error handling for background flushing.

Use Cases

  • Analytics & Logging: Systems where high-speed writes are essential and tolerance for a small risk of data loss is acceptable.
  • Shopping cart updates: Online shops where cart state can be eventually consistent and resilience is provided at the application layer.
  • Session state management: Applications tracking user state that can tolerate eventual persistence.
  • IoT telemetry: Collecting sensor data at high speed and persisting it in batches.

Real World Examples in .NET

1. Provider-Managed Write-Behind (e.g., Redis)

Scenario: You run an e-commerce platform. Your .NET InventoryService uses Redis as a distributed cache. You enable Redis’s write-behind (asynchronous persistence) to keep the relational database (e.g., SQL Server) in sync with changes made in the cache, without coupling the application to database write logic.

using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;

public class InventoryService
{
    private readonly IDistributedCache _cache; // e.g. backed by StackExchange.Redis
    public InventoryService(IDistributedCache cache)
    {
        _cache = cache;
    }

    public async Task UpdateStockAsync(string productId, int newStock)
    {
        var cacheKey = $"inventory:{productId}";
        var inventory = new { ProductId = productId, Stock = newStock };

        // This writes to Redis. Redis is configured for write-behind to DB.
        await _cache.SetStringAsync(cacheKey, JsonSerializer.Serialize(inventory));

        // No direct call to the database here.
    }
}

Redis is configured with write-behind persistence (e.g., using Redis Streams, Redis modules, or Redis Enterprise features), so all updates queued in Redis are eventually written to your main database in batches, out-of-band.

How this demonstrates Write-Behind:

  • All writes go through the cache and its internal queue, which updates the database asynchronously.
  • The application is unaware of the timing and mechanics of persisting to the database; the provider (like Redis) is responsible for this.
  • Guarantees fast writes and eventual consistency with the database.

2. Application-Managed Write-Behind (Internal Background Worker)

Scenario: Suppose you have an e-commerce application. The InventoryService handles immediate cache updates and enqueues write events. A separate background worker periodically flushes the latest updates from the queue to the database.

// InventoryService.cs
using System.Collections.Concurrent;
using Microsoft.Extensions.Caching.Memory;

public class InventoryService
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
    private readonly ConcurrentQueue<(string ProductId, int NewStock)> _queue = new();

    public void UpdateStock(string productId, int newStock)
    {
        var cacheKey = $"inventory:{productId}";

        // Write to cache
        _cache.Set(cacheKey, newStock);

        // Enqueue for background persistence
        _queue.Enqueue((productId, newStock));

        // Return success after both cache and queue are updated (write-through to cache & buffer)
    }

    public bool TryDequeueUpdate(out (string ProductId, int NewStock) update) =>
        _queue.TryDequeue(out update);
}

// DbWriteWorker.cs
using System.Threading;
using System.Threading.Tasks;

public class DbWriteWorker
{
    private readonly InventoryService _inventoryService;
    private readonly InventoryDbContext _dbContext;
    private readonly CancellationToken _token;

    public DbWriteWorker(InventoryService inventoryService, InventoryDbContext dbContext, CancellationToken token)
    {
        _inventoryService = inventoryService;
        _dbContext = dbContext;
        _token = token;
    }

    public async Task RunAsync()
    {
        while (!_token.IsCancellationRequested)
        {
            var updated = false;

            // Batch process all queued updates
            while (_inventoryService.TryDequeueUpdate(out var record))
            {
                var (productId, newStock) = record;
                var product = await _dbContext.Products.FindAsync(productId);
                if (product != null)
                {
                    product.Stock = newStock;
                    updated = true;
                }
            }

            if (updated)
                await _dbContext.SaveChangesAsync(_token);

            // Wait for the next interval
            await Task.Delay(TimeSpan.FromSeconds(5), _token);
        }
    }
}

How this demonstrates Write-Behind:

  • When a stock change occurs, the application writes through the cache and through the queue before acknowledging success to the client.
  • Data is always immediately available in the cache (the freshest data for reads).
  • The actual database write is deferred (write-behind) and performed in batches by the background worker, but the application ensures the change is at least committed to an internal durable queue before returning success, aligning with write-through then write-behind semantics.
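
In an ASP.NET Core host, this flush loop would typically run inside a BackgroundService so the runtime manages its lifetime and cancellation. A minimal sketch of that wiring, reusing InventoryService and InventoryDbContext from the example above (the DbWriteHostedService name is illustrative):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Minimal sketch: hosting the write-behind flush loop as a BackgroundService.
public class DbWriteHostedService : BackgroundService
{
    private readonly InventoryService _inventoryService;
    private readonly IServiceScopeFactory _scopeFactory;

    public DbWriteHostedService(InventoryService inventoryService, IServiceScopeFactory scopeFactory)
    {
        _inventoryService = inventoryService;
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Resolve a fresh DbContext per flush cycle.
            using var scope = _scopeFactory.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<InventoryDbContext>();

            var updated = false;
            while (_inventoryService.TryDequeueUpdate(out var record))
            {
                var product = await db.Products.FindAsync(new object[] { record.ProductId }, stoppingToken);
                if (product != null)
                {
                    product.Stock = record.NewStock;
                    updated = true;
                }
            }

            if (updated)
                await db.SaveChangesAsync(stoppingToken);

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

// Program.cs registration:
// builder.Services.AddSingleton<InventoryService>();
// builder.Services.AddHostedService<DbWriteHostedService>();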

Conclusion

Choosing the right caching strategy is essential for building fast, reliable, and scalable .NET applications. Each approach, whether cache-aside, read-through, write-through, or write-behind, offers unique trade-offs between speed, consistency, and complexity. Traditional strategies like cache-aside provide a simple way to boost read performance, while more advanced techniques such as write-behind and hybrid caching unlock even greater efficiency and resilience, especially in distributed or cloud-based environments.

With the introduction of HybridCache in .NET 9, developers now have access to a solution that seamlessly combines the ultra-fast performance of local in-memory caching with the consistency and scalability of distributed caches. This development reduces much of the configuration and complexity previously required to achieve both speed and consistency, empowering teams to deliver responsive, modern applications with confidence.

As you design your own systems, carefully consider your application's data access patterns, consistency needs, and scaling requirements to select the most appropriate caching approach. Leveraging the right strategy not only improves performance and user experience but also future-proofs your application architecture as your system grows.

Happy caching!