What is caching?
Caching means storing frequently accessed information closer to where it is needed, so that subsequent accesses are faster.
Caching is so ubiquitous that you have probably benefited from it without even knowing. Your computer, phone and other mobile devices use caching at many levels, in the hard drive, CPU, GPU, browser, operating system and so on, to improve performance. Beyond your own devices, databases, web servers, DNS and more also use caches for the same reason: to improve performance by keeping frequently requested items closest to where they are accessed, in a faster, often more volatile storage system. Cache storage is also often more expensive than regular storage, as it usually lives in very fast access memory or on solid state drives. Hence, choosing what you store in the cache, and how much of it, is something you must consider before deciding to use one.
Types of data storage in your Computer
As I said earlier, there are several layers of caches in your computer which you probably don't realise or worry about, as your computer hardware and operating system just deal with them. To give you an idea of the different levels of storage, let me list some of them below, ordered by speed:
- CPU Registers
- L1 cache
- L2 cache
- Memory (RAM - random access memory)
Those first three actually live inside your CPU. Your CPU has its own personal storage! And it is blazingly fast, faster than your RAM. This is necessary because, if you haven't noticed already, processor technology has improved so much in the last decade alone that CPUs have become much faster than memory. You do not want your CPU sitting around waiting for data to be returned from the RAM module, so it caches instructions and data in its own storage. Of course there is not a huge amount of space in there, but there is just enough to keep your CPU from idling.
Why not put everything in cache, to make everything faster?
A cache is not a replacement for regular storage. It is complementary and is meant to be used sparingly, for the most frequently used data in particular, to improve performance, specifically to reduce the latency or I/O overhead one would incur when fetching data from a traditional data source.
But caching also means that you now have two sets of data to maintain!
- The snapshot of the original data that you have in the cache
- The original data which could be changed by your application.
What if your actual data got updated? Who would tell the cache that it needs to be refreshed?
Thus caching does introduce additional challenges, which can be cleverly managed or mitigated by choosing only the right type of data to store in the cache. But because of these factors, you don't want to store everything in cache.
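To make the "two sets of data" problem concrete, here is a small, self-contained C# sketch. The dictionaries, keys and the Read helper are all illustrative stand-ins, not any particular caching library: one dictionary plays the role of the real data store, the other the cache, and the example shows how the cached snapshot goes stale until someone explicitly invalidates it.

```csharp
using System;
using System.Collections.Concurrent;

// A toy sketch of the "two copies" problem: one dictionary stands in for the
// real data store, another for the cache. All names here are illustrative.
var store = new ConcurrentDictionary<string, string>();  // the source of truth
var cache = new ConcurrentDictionary<string, string>();  // the snapshot

store["user:1"] = "Alice";

// Cache-aside read: check the cache first, fall back to the store on a miss.
string Read(string key) => cache.GetOrAdd(key, k => store[k]);

Console.WriteLine(Read("user:1")); // Alice (cache miss, filled from the store)

store["user:1"] = "Alicia";        // the original data changes...
Console.WriteLine(Read("user:1")); // Alice (stale! nobody told the cache)

cache.TryRemove("user:1", out _);  // explicit invalidation
Console.WriteLine(Read("user:1")); // Alicia (fresh again)
```

The last three lines are exactly the question asked above: who tells the cache it needs to be refreshed? Here it is the code doing the update; in real systems it is an expiry policy, an invalidation message, or both.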
Web applications and Caching
Web applications are usually built on HTTP, which already comes with its own caching specification.
Responses from a web server can be cached to improve performance, reducing the work the server has to do when a subsequent request comes along asking for the same data. I found a video online that goes into the details of this really well; sharing it here:
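As a taste of what this looks like from an ASP.NET Core application, here is a hedged sketch using the ResponseCache attribute, which emits a Cache-Control header so clients and proxies can reuse the response. The controller name and route are hypothetical; only the attribute and its parameters come from the framework.

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class CatalogController : ControllerBase  // hypothetical controller
{
    // Emits "Cache-Control: public, max-age=60", telling clients and
    // intermediaries they may reuse this response for 60 seconds.
    [HttpGet]
    [ResponseCache(Duration = 60, Location = ResponseCacheLocation.Any)]
    public IActionResult Get() => Ok(new[] { "item1", "item2" });
}
```

Note this only sets the HTTP headers; caching the response on the server side as well requires the response caching middleware.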
Types of Caches for applications
When your application serves some relatively static data that rarely or never changes, you could choose to store that data in a cache. Depending on your application's architecture, and on whether the cached information has to be shared by multiple applications, you can decide whether to cache it in the application's own memory or in a shared, distributed caching technology.
In-memory vs Distributed
Before I go ahead, I would like to clarify a term:
- Sticky sessions
When your website is always served by a single server, this term really doesn't mean much, because every request just hits that same server and gets served by it.
However, imagine you have a website that sits behind a load balancer, with multiple server processes behind it serving client requests. The load balancer, as the name suggests, is meant to spread the load across the real servers behind it, and it determines which request goes to which server. Remember that HTTP is a stateless protocol: every request that comes in is treated as fresh and new, independent of the ones before it.
When a load balancer has sticky sessions switched on, requests from a given client will always go to the same underlying host rather than being load balanced or directed to another server. This is sometimes necessary for applications that rely on server-side session management.
Caching in ASP .Net Core
IMemoryCache and IDistributedCache are two interfaces available to developers to interact with whatever caching they decide to use.
- IMemoryCache as the name suggests is for interacting with an in-memory cache. It exposes some simple methods to create a record in the cache and to retrieve values from it.
- You register the implementation using services.AddMemoryCache() and then rely on the dependency injection framework to inject it wherever you need caching.
- It also allows you to provide MemoryCacheEntryOptions to control when cached content expires; you can choose an absolute expiry, a sliding expiry or a token-based expiry.
- You can find the full details with examples on Microsoft docs, which I do not want to repeat here, as people have already invested time and effort giving enough detail there. Having recently used the docs to introduce caching in my system, I found them really useful.
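To give a flavour of the API described above, here is a small, self-contained sketch. In a web app you would call services.AddMemoryCache() and let dependency injection hand you the IMemoryCache; here the cache is constructed directly so the example stands alone. The key and value are illustrative.

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

// Constructing the cache directly, instead of via DI, keeps the sketch runnable.
IMemoryCache cache = new MemoryCache(new MemoryCacheOptions());

var options = new MemoryCacheEntryOptions()
    .SetAbsoluteExpiration(TimeSpan.FromHours(1))     // evict no later than one hour from now
    .SetSlidingExpiration(TimeSpan.FromMinutes(10));  // evict if unused for ten minutes

cache.Set("greeting", "hello", options);

if (cache.TryGetValue("greeting", out string? greeting))
{
    Console.WriteLine(greeting); // hello
}
```

Combining an absolute and a sliding expiry like this is a common pattern: the sliding window keeps hot entries alive, while the absolute deadline caps how stale anything can get.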
When you have to cache information that is to be shared by multiple application servers or services, a distributed cache is the way to go. An added advantage is that when you redeploy or restart your application server, the cached data isn't lost (unless, of course, your chosen expiry policy has removed it).
I have previously used a distributed Redis cache in my application. I think now it is known as Azure Cache for Redis.
Here again, I believe the best place to read more about this is the docs.
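For completeness, here is a hedged sketch of wiring up a Redis-backed IDistributedCache, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package. The connection string and the GreetingService class are placeholders; only the registration call and the IDistributedCache methods come from the framework.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// In Program.cs: register the Redis-backed IDistributedCache implementation.
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
});

// Consuming code depends only on IDistributedCache, so swapping the backing
// store (Redis, SQL Server, in-memory) doesn't change it.
public class GreetingService  // hypothetical service
{
    private readonly IDistributedCache _cache;
    public GreetingService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetGreetingAsync()
    {
        var cached = await _cache.GetStringAsync("greeting");
        if (cached is not null) return cached;

        var value = "hello";  // stand-in for a real database lookup
        await _cache.SetStringAsync("greeting", value, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
        });
        return value;
    }
}
```

Because IDistributedCache stores byte arrays under the hood, the string helpers shown here (GetStringAsync/SetStringAsync) are the convenient path for simple values; complex objects need serialisation first.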