I thought this issue was long dead and buried. Then I saw a post in the ASP.NET forums – Calvin wondering how to improve the speed of his site for users in different parts of the world (and whether replication is the only option). Here is my reply, reproduced.


Cameron Purdy from Tangosol.com commented:

Your examples are definitely “traditional” caching. We do see that a lot (e.g. hotel chains, as you said). There is a more contemporary approach to caching, which is to actually keep the data up-to-date in memory (real-time “single system image”), as opposed to having LRU or some other way to get rid of stale data. For a good example, see: http://wiki.tangosol.com/display/COH32UG/Cluster+your+objects+and+your+data Peace.

It is a very relevant comment. Cameron suggests that instead of using a database as a persistent, synchronized cache, we could use an in-memory replicated cache like his company's product, Coherence.

This concept is not new – the concepts of travelling objects and external caches have been there for some time. Though I haven't tried Coherence myself, there is a possibility that Coherence has solved many of the stumbling blocks of the past.

Traditionally, we have considered and possibly used in-memory databases for some of our caching requirements. (Remember TimesTen? It is now owned by Oracle.) These databases were either too expensive or too flaky (using non-standard queries, providing no guarantees on the data, etc.). Then mainstream databases also started providing ways to pin data in memory – for significantly faster performance – and that became our preferred choice.

App servers also started providing session sharing – though its performance cost forced us to stop putting data in the session and to put it in databases instead.

There were also programming frameworks like Jini (and more recently JavaSpaces, a part of Jini) which enabled networked objects – available to all nodes in the Jini cluster. Jini never got much adoption.

Neither have cluster replication products.

I believe the reason for this is that most of us design for a single server, and IF usage becomes high enough to warrant big clusters and performant caches, we then re-engineer parts of the application or hardware to enable clustering (for example, organizations prefer to introduce a NAS in the datacenter rather than change the software to use databases instead of file systems when clustering is introduced).

With products like Coherence, there could be a case where, by paying a 5K licensing fee, we save 1-2 CPUs on the database, justifying the cost. But I would be wary of this technology for the following reasons.

Today, the architecture of choice on the app layer seems to be a large number of small blade servers clustered behind a hardware load balancer. We don't know how such technologies will perform for multi-way replication across that many nodes. For vertically scaled applications (running on, let's say, 2-3 machines with 4-8 processors each), such products have a very high chance of working. The second issue I have is whether the updates are ACID or BASE (Basically Available, Soft state, Eventually consistent – a concept which never really picked up) – that is, do they leave some margin for stale data? And if not – if used in "transactional" mode – do they really give better performance?

The onus is on the cache vendors to convince us.

I am sure they will – and if the products are as good as claimed, they should become popular. They seem to have a good client list already.

However, I would be worried about the longevity of such products. When this demand becomes mainstream, commercial app server vendors will have to provide the same functionality to survive – whether they OEM Cameron's product or build something of their own.

Let the brutal theory of survival of those "most responsive to change" decide.

In my previous post on cache implementation I talked about where to keep the cache. Now I will talk about how to access the cache and how to expire it.

Before we go into those topics, it is very important to consider the tolerance for stale data, since this determines how far we can optimize.
Let's say we cannot tolerate stale data at all – say the application sells a hotel room priced in Thai Baht but charged in Sterling. Depending on the rate I get from my bank and the date of stay, I get a price.
So every time anyone requests a room, I have to look up the price, something like:
PriceInGBP = PriceInTHB * (SELECT ExchangeRate WHERE FromCurrency = 'THB' AND ToCurrency = 'GBP' AND ValidityFromDate <= '12-Jan' AND ValidityToDate >= '12-Jan' AND IsLive = True)
Since I am selling for a future date, I have to look up the rate valid on that date.
Now, if I cannot tolerate stale data, the best I can do is to put a trigger on the exchange rate table that updates an ExchangeRateVersionNumber table. The version number is updated on any change anywhere in the exchange rate table.
Thus my application logic changes to:
If CachedExchangeRateVersionNumber = (SELECT * FROM ExchangeRateVersionNumber), use the cached exchange rate; else run the query above to fetch the exchange rate.
Here the query on the database is much cheaper, so the load on the database is much smaller, aiding scalability.
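The version-number check above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the table names (ExchangeRate, ExchangeRateVersionNumber) follow the example above, and the trigger that bumps the version on every write is assumed to exist on the database side.

```python
import sqlite3

# Hypothetical schema mirroring the example: a rates table plus a one-row
# version table (a real deployment would bump the version via a trigger).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ExchangeRate (fromCurrency TEXT, toCurrency TEXT, rate REAL);
    CREATE TABLE ExchangeRateVersionNumber (version INTEGER);
    INSERT INTO ExchangeRate VALUES ('THB', 'GBP', 0.0155);
    INSERT INTO ExchangeRateVersionNumber VALUES (1);
""")

_cache = {"version": None, "rates": {}}

def get_rate(from_ccy, to_ccy):
    # One cheap query on every call: has anything in the rate table changed?
    (db_version,) = conn.execute(
        "SELECT version FROM ExchangeRateVersionNumber").fetchone()
    if _cache["version"] != db_version:
        # Stale (or empty) cache: reload all rates and remember the version.
        _cache["rates"] = {
            (f, t): r for f, t, r in conn.execute(
                "SELECT fromCurrency, toCurrency, rate FROM ExchangeRate")
        }
        _cache["version"] = db_version
    return _cache["rates"][(from_ccy, to_ccy)]
```

The expensive query against the full exchange rate table now runs only when the version actually changes; every other request pays for a one-row lookup.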


Cache – Implementation

August 27, 2006

In my previous post on cache concepts I indicated that caches are defined by how we want to expire them. I also talked about the granularity of caches.

In this post, I will look at where to keep the cache.

When it comes to cache implementation, the considerations are: where should we keep the cache, how do we access it, and how do we keep it updated. Accessing and updating the cache will form the topics of subsequent posts.

Where to keep the cache.

We all agree that the closer the cache is to the consumer, the more optimal it is.
Hence, for a browser-based application, the most optimal place is the user's browser. This can be done by serving simple GET URLs and specifying appropriate meta tags and HTTP headers indicating a cache expiry time.
The problem is that this is unpredictable – whether browsers cache at all, and whether they actually expire the cache when we want them to, depends on the browser and the user's settings. It is, however, favorable for items which are not subject to change (typically images do not change even if the text of an article does). The second problem is that this cache is per user, not shared across users.
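As a sketch of the headers involved: the standard way to invite browsers (and proxies) to cache a response is a `Cache-Control: max-age` directive, with a legacy `Expires` date for older clients. The helper name below is my own; the header names and formats are standard HTTP.

```python
from datetime import datetime, timedelta, timezone

def cache_headers(max_age_seconds):
    """Build response headers asking browsers/proxies to cache a resource.

    Whether clients honor these is ultimately up to them - which is
    exactly the unpredictability described above.
    """
    expires = datetime.now(timezone.utc) + timedelta(seconds=max_age_seconds)
    return {
        "Cache-Control": f"public, max-age={max_age_seconds}",
        # Legacy absolute-date form, in the required HTTP date format.
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }
```

For rarely-changing assets like images, a long `max-age` is usually safe; for HTML that changes with the data behind it, a short one (or none) is the conservative choice.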

The second closest place is proxy servers. Proxy servers have the same advantages and problems as browsers, so I will not go into them. Typically, we want to defeat the browser and proxy caches – I will probably write a separate post on that.


Cache – the concepts

August 27, 2006

In the early days of the Internet, when memory was expensive, CPUs were less powerful, dreams were big and budgets were small – caching was perhaps the biggest buzzword in software architecture. Today, CPUs are at least 10x more powerful, memory is at least 10 times cheaper, and hardware and software can be scaled many times over (who would have imagined Windows supporting 64 processors?) – yet cache is still there, and is supported out of the box in some web programming languages!
I came across this question in the MSDN architecture forum – which triggered this note.

There are three distinct kinds of scenarios which need different kinds of caching and cache expiry.
1) Most Frequently Used: Imagine that you are building an application like Amazon.co.uk – you have millions of items in the catalogue, and you can fetch details about all of them at runtime from the database. Here, we know that the book descriptions are not going to change very often, so we do not need to hit the database every time we show details of a book or other item. However, the sheer size of the database is so large that we cannot (no, not even now) contemplate keeping all of it in the memory of the application server. The solution is to keep the most frequently used items in the cache and remove the rest. The cache expiry algorithm here removes the Least Recently Used item from the cache, so this kind of cache is called an LRU cache. It is funny how it is named after the mechanism of expiry, not after how elements are cached.
If we think a bit more, we will realize how old this concept is. Computer architecture – memory management – virtual memory uses this very concept; a quick look at Wikipedia for virtual memory shows that it has been around since 1959! Phew!
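An LRU cache is simple enough to sketch. The version below is a minimal illustration (bounded capacity, evict-on-overflow), not tied to any particular product; Python's `OrderedDict` makes the recency bookkeeping one line.

```python
from collections import OrderedDict

class LRUCache:
    """A minimal LRU cache: on overflow, evict the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key, default=None):
        if key not in self._items:
            return default
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used
```

In the Amazon-style scenario, the key would be the item ID and the value the product description; the capacity is whatever fraction of the catalogue fits in the app server's memory.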

2) Time to Live: There is much data for which resources are abundant and we can cache and manage all of it in memory. If you have subscribed to RSS feeds via any of the readers – like http://www.live.com/ or www.google.co.uk/ig – you know that the data you see when you launch these pages may not be the latest. These readers do not go to the source very frequently for updates; they fetch the feeds and keep them in memory for about 2 hours, or whatever time you specify. This kind of caching is named "Time to Live" or TTL – again because the item in memory has a certain time to live before it expires. You can see this kind of cache on eBay (you will notice that, while viewing a list of items, the bid price does not always reflect what is happening inside the auction – especially for items which are ending soon).
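A TTL cache can be sketched just as briefly. This is an illustrative shape, not any particular reader's implementation: each entry carries an expiry timestamp, and a read past that timestamp forces a trip back to the source.

```python
import time

class TTLCache:
    """Entries live for ttl_seconds; after that, a fresh fetch is forced."""

    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch   # function that retrieves the fresh value
        self._items = {}     # key -> (value, expiry_timestamp)

    def get(self, key):
        value, expires_at = self._items.get(key, (None, 0.0))
        if time.monotonic() >= expires_at:
            value = self.fetch(key)  # expired or missing: go to the source
            self._items[key] = (value, time.monotonic() + self.ttl)
        return value
```

For the RSS example, `fetch` would download the feed, and `ttl_seconds` would be the two hours (or whatever the user specified) that the reader is allowed to show slightly stale content.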

3) Event based: Here we cache the data until an external event forces the cache to expire. Some online travel sites cache hotel availability information until they get an error while booking, saying there are no rooms available. That error is the trigger for cache expiry. Similarly, B2C news sites have an explicit "publish" action which clears the cache and lets users see the latest articles.
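The event-based flavour differs from the other two only in who decides when an entry dies: nothing expires on its own, and some external event calls an explicit invalidation. A minimal sketch (the class and method names are mine):

```python
class EventCache:
    """Cache entries until an external event explicitly invalidates them."""

    def __init__(self, fetch):
        self.fetch = fetch   # function that retrieves the fresh value
        self._items = {}

    def get(self, key):
        if key not in self._items:
            self._items[key] = self.fetch(key)
        return self._items[key]

    def invalidate(self, key):
        # Called when the event fires - e.g. a "no rooms left" booking
        # error, or an editor hitting "publish".
        self._items.pop(key, None)
```

The travel-site example maps directly: availability is served from the cache until a failed booking calls `invalidate`, after which the next read goes back to the source.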

In all three scenarios, the characteristics of cache are in fact the characteristics of cache expiry.

The next thing to consider is the Granularity of cache.
Many times, information comes in packets, and hence the cache should also expire in packets.
Let's say you have an application showing the departure boards of all London airports. It is likely that you will get a feed every "x" minutes from each airport, and when you do, you want to expire the status of all flights of that airport and reload them. It might be too much effort to compare individual flights and change them selectively; in some cases, that processing may outweigh the benefit of the cache in the first place.
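Packet-granular expiry can be sketched as a cache keyed per airport, where an incoming feed replaces that airport's whole flight list in one assignment instead of diffing individual flights. The function names are illustrative only:

```python
# Cache keyed at the granularity the data arrives in: one entry per airport.
departures = {}  # airport code -> list of (flight, status) tuples

def on_feed_received(airport, flights):
    # Expire and reload everything for this airport in one operation -
    # no per-flight comparison, which the text above notes can cost more
    # than the cache saves.
    departures[airport] = flights

def board(airport):
    return departures.get(airport, [])
```

The design choice is simply to make the cache key match the unit in which the data naturally arrives, so expiry is a single cheap replacement.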

It is usually easy to identify the parameters for caching – and for each distinct combination of values of those parameters, we can assign a different "cache set". For example, search results depend on the query parameters typed by the user. They may also depend on the language option and any other structured search fields selected by the user. So caching in that case will have all the structured search options as parameters – and if any of them changes, we cannot use the cached value and have to run the query.
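Deriving the "cache set" from the parameters can be sketched as building a key from every structured search option, so that any change to any option yields a different key and forces a fresh query. The helper names are my own:

```python
# Sketch: the cache key is the full set of search parameters, normalized
# by sorting so that the same parameters always yield the same key.
def cache_key(**params):
    return tuple(sorted(params.items()))

results_cache = {}

def search(run_query, **params):
    key = cache_key(**params)
    if key not in results_cache:
        results_cache[key] = run_query(**params)  # cache miss: hit the database
    return results_cache[key]
```

The sorting step matters: without it, the same search submitted with parameters in a different order would miss the cache and needlessly re-run the query.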

I will talk more about caching in my next posts, on cache implementation and on cache access and expiry.