Cache – the concepts

August 27, 2006

In the early days of the Internet, when memory was expensive, CPUs were less powerful, dreams were big and budgets were small, caching was perhaps the biggest buzzword in software architecture. Today we have CPUs that are at least 10x more powerful, memory is at least 10 times cheaper, and hardware and software can be scaled many times over (who would have imagined Windows supporting 64 processors?). Yet caching is still here, and is supported out of the box in some web programming languages!!
I came across this question in the MSDN architecture forum, which triggered this note.

There are three distinct kinds of scenarios, and they need different kinds of caching and cache expiry.
1) Most Frequently Used: Imagine that you are building an application like Amazon.co.uk: you have millions of items in the catalogue, and you can fetch details about any of them at runtime from the database. We know that book descriptions are not going to change very often, so we do not need to hit the database every time we show the details of a book or other item. However, the sheer size of the database is so large that we cannot (no, not even now) contemplate keeping all of it in the memory of the application server. The solution here is to keep the most frequently used items in the cache and remove the rest. The cache expiry algorithm here is to remove the least recently used item from the cache. These kinds of caches are called LRU caches, for “least recently used”. It is funny how they are named after the mechanism of expiry and not after how elements are cached.
If we think a bit more, we will realize how old this concept is. Virtual memory, in computer architecture and memory management, uses this very idea; a quick look at the Wikipedia entry for virtual memory shows that the concept has been around since 1959! Phew!
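
To make the idea concrete, here is a minimal sketch of an LRU cache in Python. The `loader` callback and the commented-out `book_catalogue.load` call are hypothetical stand-ins for the real database lookup; a production cache would also need thread safety and error handling.

```python
from collections import OrderedDict

class LRUCache:
    """Keeps at most `capacity` items; evicts the least recently used one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key, loader):
        if key in self.items:
            # Mark as most recently used by moving it to the end.
            self.items.move_to_end(key)
            return self.items[key]
        value = loader(key)          # e.g. a database lookup
        self.items[key] = value
        if len(self.items) > self.capacity:
            # Evict the least recently used entry (the oldest one).
            self.items.popitem(last=False)
        return value

# Hypothetical usage: book_catalogue.load(isbn) stands in for the real database call.
# cache = LRUCache(capacity=10_000)
# details = cache.get(isbn, book_catalogue.load)
```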

2) Time to Live: There is plenty of data for which resources are abundant and we can cache and manage everything in memory. If you have subscribed to RSS feeds via any of the readers, like http://www.live.com/ or www.google.co.uk/ig, you know that the data you see when you launch these pages may not be the latest. These readers do not go to the source very frequently for an update; they fetch the feeds and keep them in memory for about two hours, or whatever time you specify. This kind of caching is called “Time to Live”, or TTL, again because the item in memory has a certain time to live before it expires. You can see this kind of cache on eBay (you will notice that while viewing a list of items, the bid price does not always reflect what is happening in the live auction, especially for items which are ending soon).
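
A minimal sketch of a TTL cache along the same lines; the `fetch_feed` name and the two-hour window are illustrative assumptions matching the RSS example above, not any particular reader's implementation.

```python
import time

class TTLCache:
    """Caches values for a fixed number of seconds before re-fetching."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.items = {}   # key -> (value, fetched_at)

    def get(self, key, loader):
        entry = self.items.get(key)
        if entry is not None:
            value, fetched_at = entry
            if time.monotonic() - fetched_at < self.ttl:
                return value              # still "alive"
        # Expired or missing: go back to the source.
        value = loader(key)
        self.items[key] = (value, time.monotonic())
        return value

# Hypothetical usage: fetch_feed(url) stands in for the real RSS fetch.
# feeds = TTLCache(ttl_seconds=2 * 60 * 60)   # roughly the two-hour window above
# items = feeds.get("http://example.com/rss", fetch_feed)
```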

3) Event based: Here we cache the data until an external event forces the cache to be expired. Some online travel sites cache hotel availability information until they get an error while booking saying that there are no rooms available; that error is the trigger for cache expiry. Similarly, B2C news sites have an explicit “publishing” action which clears the cache and lets users see the latest articles.
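
A sketch of the same idea, where values sit in the cache until something external calls `invalidate`. The `load_availability` function and the booking-failure flow in the comments are hypothetical, mirroring the travel-site example.

```python
class EventInvalidatedCache:
    """Caches values until an external event explicitly invalidates them."""

    def __init__(self):
        self.items = {}

    def get(self, key, loader):
        if key not in self.items:
            self.items[key] = loader(key)
        return self.items[key]

    def invalidate(self, key):
        # Called when the external event fires, e.g. a "no rooms available"
        # error from the booking system, or an editor hitting "publish".
        self.items.pop(key, None)

# Hypothetical usage:
# availability = EventInvalidatedCache()
# rooms = availability.get(hotel_id, load_availability)
# ... a booking fails with "no rooms available" ...
# availability.invalidate(hotel_id)
```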

In all three scenarios, the defining characteristics of the cache are in fact the characteristics of its expiry.

The next thing to consider is the granularity of the cache.
Many times, information arrives in packets, and hence the cache should also expire in packets.
Let's say you have an application showing the departure boards of all London airports. It is likely that you will get a feed every “x” minutes from each airport, and when you get it, you want to expire the status of all flights for that airport and reload them. It might be too much effort to compare individual flights and update them selectively; in some cases that processing may outweigh the benefit of the cache in the first place.
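
One simple way to get that packet-level expiry is to key the cache by group as well as by item, so a whole group can be thrown away in one call when its feed arrives. This is only a sketch; the airport code "LHR" and the shape of the feed are made up for illustration.

```python
from collections import defaultdict

class GroupedCache:
    """Caches items under a group key so a whole group can be expired at once."""

    def __init__(self):
        self.groups = defaultdict(dict)   # group -> {key: value}

    def put(self, group, key, value):
        self.groups[group][key] = value

    def get(self, group, key):
        return self.groups[group].get(key)

    def expire_group(self, group):
        # Drop every cached item for this group in one shot, rather than
        # diffing individual entries against the new feed.
        self.groups.pop(group, None)

# Hypothetical usage with a departures feed:
# board = GroupedCache()
# board.expire_group("LHR")                 # the Heathrow feed just arrived
# for flight in new_feed:                   # reload the whole airport
#     board.put("LHR", flight["number"], flight)
```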

It is usually easy to identify the parameters for caching, and for each different combination of values we can assign a different “cache set”. For example, search results depend on the query typed by the user; they may also depend on the language option and any other structured search field the user has selected. Caching in that case will have all the structured search options as parameters, and if any of them changes, we cannot use the cached value and have to run the query again.
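
In code, that amounts to building the cache key from every parameter that can change the result. A possible sketch (the `run_search` and `results_cache` names are hypothetical):

```python
import json

def search_cache_key(query, language, filters):
    """Builds a cache key from every parameter that affects the result.

    If any of these values changes, the key changes, and the cached
    result set cannot be reused; a fresh query has to run instead.
    """
    # sort_keys makes the key stable regardless of filter ordering.
    return json.dumps(
        {"q": query, "lang": language, "filters": filters},
        sort_keys=True,
    )

# Hypothetical usage:
# key = search_cache_key("caching", "en-GB", {"category": "books"})
# results = results_cache.get(key, run_search)   # run_search is the real query
```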

I will talk more about cache implementation, access and expiry in my next post, Cache Implementation.
