Caching

What is Caching?

Caching is a common term in computing. Anyone who has been out shopping for a computer has probably heard the word “cache.” You might have heard terms like L1 and L2 caches and wondered what they mean. You may have heard it from friends giving advice, saying something like, “Don’t buy that processor chip, it doesn’t have any cache in it!” Well, caching is a very important computer-science technique that appears on every computer in a variety of forms: there are memory caches, hardware and software disk caches, page caches, and more. Even a computer’s virtual memory is a form of caching. In this article, I will discuss everything you need to know about caching.

To put it simply, caching is a technique built on your computer’s memory subsystem. A cache is a small, fast memory store that keeps frequently used data close at hand, accelerating the computer while keeping its price low.

For you to really understand what caching is, let’s use the example of a librarian. Imagine a librarian at a library desk; his job is to help you find the books you need, right? Now let’s say you looked all over and can’t find a book, so you go ask the librarian. That’s the regular setup of most libraries. Now let’s look at how the librarian would work without a cache. A customer arrives and requests a single book, let’s say Oliver Twist. The librarian gets up and brings the book out from wherever it is in the library. Once the customer is done with the book, the librarian walks all the way back to the shelf and returns it. Then the next customer walks in and asks for the same book. Without a cache, the librarian has to make a complete round trip to fetch every book, even very popular ones that are requested frequently. This is exhausting and makes the librarian slower and less efficient.

Now let’s add a cache to this example by giving the librarian a bag that holds about 10 books (in computer lingo, you would call the bag a 10-book cache). He can now put some of the more frequently requested books into the bag without making round trips to the shelves. If a new customer asks for any of those popular books, he can just whip a copy out of the bag, effectively cutting his work time by more than half. The 10-book cache lets him keep up to 10 books within arm’s reach, and depending on demand, he can adjust which books he keeps there to reduce his work time and dramatically increase his performance.
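In code, the librarian’s bag looks a lot like a small fixed-capacity cache that evicts the least recently requested book when it fills up. Here is a minimal Python sketch; the function fetch_from_shelves is a hypothetical stand-in for the slow trip to the shelves.

```python
from collections import OrderedDict

def fetch_from_shelves(title):
    """Hypothetical stand-in for the slow round trip to the shelves."""
    return f"contents of {title}"

class BookCache:
    """The librarian's bag: a fixed-capacity, least-recently-used cache."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.bag = OrderedDict()  # title -> book, least recently used first

    def get(self, title):
        if title in self.bag:                 # hit: hand it over from the bag
            self.bag.move_to_end(title)
            return self.bag[title]
        book = fetch_from_shelves(title)      # miss: make the round trip
        self.bag[title] = book
        if len(self.bag) > self.capacity:     # bag is full: put the least
            self.bag.popitem(last=False)      # recently requested book back
        return book
```

With this in place, the first request for Oliver Twist costs a full round trip, but every later request is served straight from the bag.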

There is also an issue that can arise from using caches. Let’s say the librarian has a 20-book cache. With that many books in the bag, he may no longer remember exactly what is inside, so he has to search the bag every time a customer asks for a book. If the book turns out not to be in the cache, the librarian has paid for the search and still has to make the full round trip, increasing his response time and his workload. This lookup overhead is one of the classic trade-offs in cache design: engineers have to minimize the cost of cache searches so that misses don’t erase the gains from hits.
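You can put a number on this trade-off with the standard effective-access-time idea: every request pays the lookup cost, and misses additionally pay the fetch cost. The timings below are made-up illustrative values, not measurements.

```python
# Effective access time = lookup + miss_rate * fetch.
lookup = 1.0   # time to search the cache, paid on every request (made up)
fetch = 20.0   # time for the round trip to the backing store (made up)

for miss_rate in (0.0, 0.2, 1.0):
    effective = lookup + miss_rate * fetch
    print(f"miss rate {miss_rate:>4.0%}: effective time {effective:4.1f}")

# At a 100% miss rate the cache only adds overhead (21.0 vs. 20.0 with
# no cache at all), which is exactly the librarian's problem above.
```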

Benefits of Caching

A cache can dramatically improve a system’s performance and response time, and a faster, larger cache brings bigger gains in speed. There are also a few more benefits to having a fast cache; here are some of them.

Application Performance

A cache, as mentioned above, significantly increases performance. Because cache memory is faster than disk (magnetic or SSD), reading data from an in-memory cache is extremely fast (sub-millisecond). This means that data access will be significantly faster, and the application’s overall performance will improve.
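In application code, this speedup usually comes from a cache-aside pattern along these lines; query_database here is a hypothetical stand-in for a real database call, with a sleep simulating disk and network latency.

```python
import time

cache = {}  # in-memory store: lookups take microseconds

def query_database(key):
    """Hypothetical stand-in for a disk-backed database read."""
    time.sleep(0.01)  # simulate ~10 ms of disk/network latency
    return f"row for {key}"

def get(key):
    if key in cache:                 # fast path: served from memory
        return cache[key]
    value = query_database(key)      # slow path: hit the database once
    cache[key] = value               # populate so later reads are fast
    return value
```

Only the first read of each key pays the database latency; repeat reads come back in well under a millisecond.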

Reduce Database Cost

With just a simple cache, you can get hundreds of thousands of IOPS (input/output operations per second), potentially replacing a number of database instances and driving the total cost down. Caches are especially useful when the primary database charges per unit of throughput; in cases like this, the savings from using a cache can amount to dozens of percentage points.
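A back-of-the-envelope calculation makes the point; the price and hit rate below are entirely hypothetical, and a real estimate would also subtract the cost of running the cache itself.

```python
reads_per_month = 100_000_000
price_per_million_reads = 0.25  # assumed database read pricing, in dollars
hit_rate = 0.80                 # assumed fraction of reads the cache absorbs

without_cache = reads_per_month / 1_000_000 * price_per_million_reads
with_cache = without_cache * (1 - hit_rate)
print(f"read cost without cache: ${without_cache:.2f}")  # $25.00
print(f"read cost with cache:    ${with_cache:.2f}")     # $5.00
```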

Reduce the Load on the Backend

As the cache redirects the bulk of the read/write load from the backend database to the in-memory layer, it automatically reduces the load on your database. This protects the database from degraded performance under load, or even from crashing during traffic spikes.
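Reads are offloaded with the cache-aside pattern shown earlier; for writes, one common option is a write-through cache, which updates the in-memory layer in the same step as the database so that reads never need to fall back to the backend. A minimal sketch, with a plain dict standing in for the real database:

```python
cache = {}     # in-memory layer
database = {}  # hypothetical stand-in for the backend database

def write_through(key, value):
    """Write to the database and update the cache in the same step."""
    database[key] = value
    cache[key] = value  # keeps the cache fresh, so reads stay in memory

def read(key):
    if key in cache:    # the backend only ever sees cold misses
        return cache[key]
    value = database.get(key)
    cache[key] = value
    return value
```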

Predictable Performance

A very common issue with modern applications is dealing with spikes in usage. For example, social media apps receive a huge spike in traffic during national or worldwide events like elections, and shopping apps experience the same during Black Friday and similar sales. These spikes increase the load on the database and result in higher latencies, making the application’s overall performance unpredictable. With a high-throughput in-memory cache, this issue can be mitigated: the cache absorbs the repeated reads, so latency stays steady even at the peak.

Eliminate Database Hotspots

In many applications, certain pages or accounts are viewed far more frequently than others; a popular product or a celebrity’s profile are common examples. These frequently visited pages can create hot spots in your database and may force you to overprovision database resources based on the throughput requirements of the most frequently used data. By storing these common keys in an in-memory cache, you avoid the need to overprovision and get fast, predictable performance for the most commonly accessed data.
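In Python, a hot-key cache can be as simple as the standard library’s functools.lru_cache; the profile lookup below is a hypothetical stand-in for the database read behind a profile page.

```python
from functools import lru_cache

def query_profile_from_database(user_id):
    """Hypothetical stand-in for the database read behind a profile page."""
    return {"id": user_id, "name": f"user-{user_id}"}

@lru_cache(maxsize=10_000)  # keep the ~10,000 hottest profiles in memory
def get_profile(user_id):
    return query_profile_from_database(user_id)

# A celebrity profile requested millions of times hits the database once;
# every later request is answered from memory, so the hot spot never forms.
```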

Increase Read Throughput (IOPS)

In addition to lower latency, in-memory systems also offer much higher request rates (IOPS) relative to a comparable disk-based database. A single instance used as a distributed side-cache can serve hundreds of thousands of requests per second.
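As a sketch of what a distributed side-cache looks like in practice, here is a read path using Redis via the redis-py client. It assumes a Redis server on localhost:6379, and the key scheme and load_user_from_database function are made up for illustration.

```python
import json

import redis  # the redis-py client; assumes a server on localhost:6379

r = redis.Redis(host="localhost", port=6379)

def load_user_from_database(user_id):
    """Hypothetical stand-in for a read from the primary database."""
    return {"id": user_id}

def get_user(user_id):
    key = f"user:{user_id}"              # hypothetical key scheme
    cached = r.get(key)
    if cached is not None:               # served by the side-cache
        return json.loads(cached)
    user = load_user_from_database(user_id)
    r.setex(key, 300, json.dumps(user))  # cache for five minutes
    return user
```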
