Caching Strategies for Web Applications

#databases #performance

3 min read

I’ve been considering opportunities for improving the performance of the app I’ve been developing at work. One of the obvious ways is by making better use of caching strategies on the frontend and server. The app uses the NextJS framework, which helpfully takes care of a lot of caching setup for you, so there’s not a lot of manual configuration required. On top of that, because we also use Vercel to deploy our app, caching at a CDN / Edge level is also taken care of for us.

Even though I didn’t have to do a lot of the caching setup, I wanted to be sure that I had really considered all of the obvious caching strategies available to see if there was anything I could tweak. This post is therefore just a summary of my findings.

1. Browser cache

The browser cache is a storage area on the user's computer used to store static assets (e.g. images, stylesheets and JavaScript files). The HTTP cache is managed automatically by the browser based on response headers, and cached assets can be reused across multiple pages of a website. Scripts can also store data explicitly, for example via the Web Storage API or the Cache API. Cached files can persist between sessions, so if a user visits a website again, the assets can be retrieved locally instead of being downloaded again.
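For assets fetched over HTTP this caching happens automatically, but responses can also be stored explicitly from script using the Cache API. A minimal sketch (the cache name and asset path are made up):

```typescript
// Runs in the browser (e.g. inside a service worker).
// "static-v1" and "/styles/main.css" are hypothetical names.
async function cacheStylesheet(): Promise<void> {
  const cache = await caches.open("static-v1");
  await cache.add("/styles/main.css"); // fetches the asset and stores the response

  const cached = await cache.match("/styles/main.css");
  if (cached) {
    console.log("Served from the cache:", cached.url);
  }
}
```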

Note: There’s a difference between a browser’s localStorage and sessionStorage. The main difference is that data stored in localStorage has no expiration time, whereas sessionStorage data gets cleared when the page session ends. IndexedDB can also be used - see this post.
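As a quick illustration of the difference (the keys and values here are made up):

```typescript
// Persists across browser sessions until explicitly cleared
localStorage.setItem("theme", "dark");
const theme = localStorage.getItem("theme"); // still "dark" after a restart

// Cleared when the page session ends (e.g. the tab is closed)
sessionStorage.setItem("draftComment", "Great post!");
const draft = sessionStorage.getItem("draftComment");
```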

Note: Cache-Control is an HTTP header used to specify caching policies in client requests and server responses. It should be set appropriately so that assets aren’t re-fetched unnecessarily. Examples of policies that can be set include whether a resource can be cached at all and how long it is cached for before it expires.
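In a Next.js app, one place these headers can be configured is next.config.js. A sketch, assuming static images are served from an /images path:

```typescript
// next.config.js (hypothetical example)
module.exports = {
  async headers() {
    return [
      {
        // Anything under /images is cached for a year and treated as immutable
        source: "/images/:path*",
        headers: [
          {
            key: "Cache-Control",
            value: "public, max-age=31536000, immutable",
          },
        ],
      },
    ];
  },
};
```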

2. Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a network of servers distributed across the globe, used to deliver static assets to users based on their location. Loading times are reduced because assets are served from the CDN server located closest to the user.

Note: A CDN is an example of a distributed caching system whose caching policy takes advantage of geographic location.
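On Vercel, the CDN / Edge cache can be instructed via the same Cache-Control header, using the s-maxage directive (which applies to shared caches rather than the browser). A sketch of a hypothetical API route:

```typescript
import type { NextApiRequest, NextApiResponse } from "next";

// The Edge cache keeps the response for 60 seconds, then serves a stale copy
// for up to 5 minutes while fetching a fresh one in the background.
export default function handler(req: NextApiRequest, res: NextApiResponse) {
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=300"
  );
  res.status(200).json({ generatedAt: new Date().toISOString() });
}
```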

3. Cache server

A proxy server sits between a client and a backend server and handles the traffic between them. A cache server is a type of proxy server specifically designed to store and serve cached data. This can be beneficial for websites with lots of traffic, as it reduces load on the main server.

Memcached is an example of a distributed in-memory caching server. Being distributed allows multiple application servers to read from and write to the same cache. Redis is another in-memory distributed caching solution, which is also a database (i.e. capable of processing and querying data, in addition to storing more complex data structures). Amazon ElastiCache offers a fully managed service for running Memcached and Redis.
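A common way to use such a cache is the cache-aside pattern: check the cache first, fall back to the database on a miss, then write the result back with an expiry. A minimal sketch using the ioredis client (the User type and database call are hypothetical):

```typescript
import Redis from "ioredis";

interface User {
  id: string;
  name: string;
}

const redis = new Redis(); // assumes a Redis instance on localhost:6379

// Hypothetical stand-in for a real database query
async function fetchUserFromDatabase(id: string): Promise<User> {
  return { id, name: "Julia" };
}

async function getUser(id: string): Promise<User> {
  const key = `user:${id}`;

  // 1. Try the cache first
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached) as User;

  // 2. On a miss, hit the database and cache the result with a TTL
  const user = await fetchUserFromDatabase(id);
  await redis.set(key, JSON.stringify(user), "EX", 300); // expire after 5 minutes
  return user;
}
```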

4. Database cache

A database cache stores frequently used data within the database system itself (typically in memory), so that it can be served without re-reading from disk or re-executing expensive queries each time it is needed.

Examples of how this might be done include the use of buffer pools (an area of memory allocated to caching data read from disk) and materialised views (pre-computing query results and storing them in database tables).
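As an illustration of the materialised view approach, here is a sketch against PostgreSQL using the pg client (the orders table and its columns are made up):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection details are read from environment variables

async function main() {
  // Pre-compute an expensive aggregation and store the results in the database
  await pool.query(`
    CREATE MATERIALIZED VIEW IF NOT EXISTS customer_totals AS
    SELECT customer_id, SUM(total) AS lifetime_total
    FROM orders
    GROUP BY customer_id
  `);

  // Reads now hit the pre-computed results instead of re-running the aggregation
  const { rows } = await pool.query("SELECT * FROM customer_totals LIMIT 10");
  console.log(rows);

  // Refresh periodically (or after bulk writes) to keep the cached results current
  await pool.query("REFRESH MATERIALIZED VIEW customer_totals");
}

main().finally(() => pool.end());
```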
