There are 2 caching levels available, transparently handled by FusionCache for you:

- Primary (Memory): a memory cache, used for very fast access to data in memory, with high data locality. You can give FusionCache any implementation of `IMemoryCache` or let FusionCache create one for you
- Secondary (Distributed): an optional distributed cache (any implementation of `IDistributedCache` will work). Since it's not strictly necessary and serves the purpose of easing a cold start or sharing data with other nodes, it is treated differently than the primary one. This means that any potential error happening on this level (remember the fallacies of distributed computing?) can be automatically handled by FusionCache so it doesn't impact the overall application, all while (optionally) logging every detail of it for further investigation
Everything is handled transparently for you.

Any implementation of the standard `IDistributedCache` interface will work (see below).

On top of this you also need to specify a serializer to use, by providing an implementation of the `IFusionCacheSerializer` interface: you can create your own or pick one of the existing ones, which natively support formats like Json, MessagePack and Protobuf (see below).
Basically it boils down to 2 possible setups:

- 1️⃣ MEMORY ONLY: if you don't set up a 2nd layer, FusionCache will act as a normal memory cache (`IMemoryCache`)
- 2️⃣ MEMORY + DISTRIBUTED: if you also set up a 2nd layer, FusionCache will automatically coordinate the 2 layers (`IMemoryCache` + `IDistributedCache`), gracefully handling all edge cases to get a smooth experience
Of course in both cases you will also have at your disposal the added ability to enable extra features, like fail-safe, advanced timeouts and so on.
Finally, if needed you can also specify a different `Duration` specific to the distributed cache via the `DistributedCacheDuration` option, so that updates to the distributed cache can be picked up more frequently, in case you don't want to use a backplane for some reason.
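As a sketch, this is what a different distributed duration might look like (the option lives on `FusionCacheEntryOptions`; the durations here are just example values):

```csharp
// A minimal sketch: the memory cache keeps entries for 1 minute, while the
// distributed cache keeps them for 5 minutes, so a node that missed an update
// can still pick up fresher data from the 2nd level without a backplane
var cache = new FusionCache(new FusionCacheOptions());

cache.DefaultEntryOptions = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(1),                 // memory (1st level) duration
    DistributedCacheDuration = TimeSpan.FromMinutes(5)  // distributed (2nd level) duration
};
```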
There are a variety of already existing `IDistributedCache` implementations available, just pick one:

| Package Name | Description | License |
|---|---|---|
| Microsoft.Extensions.Caching.StackExchangeRedis | The official Microsoft implementation for Redis | MIT |
| Microsoft.Extensions.Caching.SqlServer | The official Microsoft implementation for SqlServer | MIT |
| Microsoft.Extensions.Caching.Cosmos | The official Microsoft implementation for Cosmos DB | MIT |
| MongoDbCache | An implementation for MongoDB | MIT |
| MarkCBB.Extensions.Caching.MongoDB | Another implementation for MongoDB | Apache v2 |
| EnyimMemcachedCore | An implementation for Memcached | Apache v2 |
| NeoSmart.Caching.Sqlite | An implementation for SQLite | MIT |
| Microsoft.Extensions.Caching.Memory | An in-memory implementation | MIT |
As for an implementation of `IFusionCacheSerializer`, pick one of these:

| Package Name | Description | License |
|---|---|---|
| ZiggyCreatures.FusionCache.Serialization.NewtonsoftJson | A serializer based on Newtonsoft Json.NET | MIT |
| ZiggyCreatures.FusionCache.Serialization.SystemTextJson | A serializer based on the new System.Text.Json | MIT |
| ZiggyCreatures.FusionCache.Serialization.NeueccMessagePack | A MessagePack serializer, based on the most used MessagePack serializer on .NET | MIT |
| ZiggyCreatures.FusionCache.Serialization.ProtoBufNet | A Protobuf serializer, based on protobuf-net, one of the most used Protobuf serializers on .NET | MIT |
| ZiggyCreatures.FusionCache.Serialization.CysharpMemoryPack | A serializer based on MemoryPack, the uber fast new serializer by Neuecc | MIT |
| ZiggyCreatures.FusionCache.Serialization.ServiceStackJson | A serializer based on the ServiceStack JSON serializer | MIT |
As an example let's use FusionCache with Redis as a distributed cache and Newtonsoft Json.NET as the serializer:
```
PM> Install-Package ZiggyCreatures.FusionCache
PM> Install-Package ZiggyCreatures.FusionCache.Serialization.NewtonsoftJson
PM> Install-Package Microsoft.Extensions.Caching.StackExchangeRedis
```
Then, to create and setup the cache manually, do this:
```csharp
// INSTANTIATE A REDIS DISTRIBUTED CACHE
var redis = new RedisCache(new RedisCacheOptions() { Configuration = "CONNECTION STRING" });

// INSTANTIATE THE FUSION CACHE SERIALIZER
var serializer = new FusionCacheNewtonsoftJsonSerializer();

// INSTANTIATE FUSION CACHE
var cache = new FusionCache(new FusionCacheOptions());

// SETUP THE DISTRIBUTED 2ND LAYER
cache.SetupDistributedCache(redis, serializer);
```
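From here on, normal cache calls work against both layers transparently. A hypothetical usage sketch (`Product` and `FetchProductAsync` are example names, not part of FusionCache):

```csharp
// On a miss in BOTH layers the factory runs, and the result is saved to
// memory AND to Redis, so other nodes (or this one after a restart) can
// reuse it without calling the factory again
var product = await cache.GetOrSetAsync<Product>(
    "product:123",
    async _ => await FetchProductAsync(123),
    options => options.SetDuration(TimeSpan.FromMinutes(2))
);
```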
If instead you prefer a DI (Dependency Injection) approach you can do this:
```csharp
// REGISTER REDIS AS A DISTRIBUTED CACHE
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "CONNECTION STRING";
});

// REGISTER THE FUSION CACHE SERIALIZER
services.AddFusionCacheNewtonsoftJsonSerializer();

// REGISTER FUSION CACHE
services.AddFusionCache();
```

and FusionCache will automatically discover the registered `IDistributedCache` implementation and, if there's also a valid implementation of `IFusionCacheSerializer`, will pick up both and start using them.
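To consume the cache you then just take a dependency on `IFusionCache` in your own classes. A minimal sketch (the `ProductService` class, `Product` type and `LoadFromDatabaseAsync` method are hypothetical):

```csharp
// A hypothetical consumer: IFusionCache is injected by the DI container
public class ProductService
{
    private readonly IFusionCache _cache;

    public ProductService(IFusionCache cache)
    {
        _cache = cache;
    }

    public async Task<Product?> GetProductAsync(int id)
    {
        // Transparently checks memory first, then the distributed cache,
        // and only then falls back to the factory
        return await _cache.GetOrSetAsync<Product?>(
            $"product:{id}",
            async _ => await LoadFromDatabaseAsync(id)
        );
    }
}
```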
In certain situations we may want some of the benefits of a 2nd level, like better cold starts (when the memory cache is initially empty), but at the same time we don't want a separate actual distributed cache to manage, or we simply cannot have one. A good example of that may be a mobile app, where everything should be self-contained.

In those situations we may want a distributed cache that is "not really distributed": something like an implementation of `IDistributedCache` that reads and writes directly to one or more local files. Makes sense, right?

Yes, kinda, but there is more to it.
We should also think about the details, about all the things it should handle for real-world usage:

- read and write data in a persistent way to local files (so the cached data will survive restarts)
- prevent data corruption when writing to disk
- support some form of compression, to avoid wasting too much space on disk
- support concurrent access without deadlocks, starvation and whatnot
- be fast and resource-optimized, consuming as few cpu cycles and as little memory as possible
- and probably something more that I'm forgetting
That's a lot to do... but wait a sec, isn't that exactly what a database is?
Yes, yes it is!
Of course I'm not suggesting to install (and manage) a local MySql/SqlServer/PostgreSQL instance or something, that would be hard to do in most cases, impossible in others and frankly overkill.
So, what should we use?
In case you didn't know it yet, SQLite is an incredible piece of software:
- it's one of the highest quality software ever produced
- it's used in production on billions of devices, with a higher instance count than all the other database engines, combined
- it's fully tested, with millions of test cases, 100% test coverage, fuzz tests and more, way more (the link is a good read, I suggest taking a look at it)
- it's very robust and fully transactional, no worries about data corruption
- it's fast, like really really fast. Like, 35% faster than direct file I/O!
- has a very small footprint
- the license is as free and open as it can get
Ok so SQLite is the best, how can we use it as the 2nd level?
Luckily someone in the community created an implementation of `IDistributedCache` based on SQLite, and released it as the NeoSmart.Caching.Sqlite NuGet package (GitHub repo here).
The package:

- supports both the sync and async models natively, meaning it's not doing async-over-sync or vice versa, but a real double implementation (like FusionCache does), which is very nice and makes the best use of the underlying system resources
- uses a pooling mechanism, which means lower memory allocations, since existing objects are reused instead of being created anew every time and, consequently, less cpu usage in the long run because of less pressure on the GC (Garbage Collector)
- supports `CancellationToken`s, meaning that it will gracefully handle cancellations in case it's needed, like for example a mobile app pause/shutdown event or similar
So, we simply use that package as an implementation of `IDistributedCache` and we are good to go!
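Wiring it up follows the same DI pattern as the Redis example above. A minimal sketch, assuming the package's `AddSqliteCache` extension and its `CachePath` option (verify the exact names against the repo's README):

```csharp
// REGISTER THE SQLITE-BACKED IDistributedCache
// (AddSqliteCache / CachePath are assumed from the NeoSmart.Caching.Sqlite
// package; check its README for the exact API)
services.AddSqliteCache(options =>
{
    options.CachePath = "cache.db";
});

// REGISTER THE FUSION CACHE SERIALIZER
services.AddFusionCacheSystemTextJsonSerializer();

// REGISTER FUSION CACHE: it will discover the SQLite IDistributedCache
services.AddFusionCache();
```

This gives you the cold-start benefits of a 2nd level while everything stays in a single local file, which is exactly what a self-contained mobile or desktop app needs.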
Oh, and give that repo a star ⭐ and share it!