We’ve published several articles on caching and its role in optimizing web performance in the past. In this article, we will focus specifically on Varnish, an HTTP accelerator, otherwise known as a caching HTTP reverse proxy.
HTTP accelerators use techniques such as caching, prefetching, compression, and TCP acceleration to reduce the number of requests being served by the web server.
Accelerators are primarily of two types: web client accelerators, which sit close to the user, and web server accelerators, which sit in front of the origin servers.
Varnish is an example of a web server accelerator: it serves as a reverse proxy and is installed in front of web/application servers. It reduces the number of requests that reach the web/application server by caching the responses the server returns, which cuts bandwidth usage and lowers the server’s load.
Varnish, when installed in front of a web server, receives the requests made by clients and attempts to answer them from its cache (the Varnish cache).
If Varnish cannot answer a request from its cache, it forwards the request to the backend, receives the response from the backend, stores it in its cache, and then delivers it to the client who made the request.
The diagram above explains the caching process with Varnish in a simple manner. The client (a browser, in this case) makes HTTP requests assuming it is communicating with the web server.
The HTTP request is received by Varnish, which sits directly in front of the web server. If this is the first time the resource has been requested, Varnish forwards the request to the web server (Apache or Nginx) and caches the response it receives, so that Varnish can answer the same request in the future without reaching out to the web server.
One very important item to point out here is that Varnish allows caching and accelerates web pages without the need of modifying any of your code or backend.
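For readers curious what this flow looks like in VCL terms, here is a minimal sketch. Varnish’s built-in VCL already implements the hit/miss behavior described above, so none of this is required; the backend address, the method check, and the 120-second TTL are illustrative assumptions only.

```vcl
vcl 4.1;

# Hypothetical backend for illustration; adjust host/port to your setup.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Only attempt to cache idempotent requests; everything else
    # bypasses the cache and goes straight to the backend.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
}

sub vcl_backend_response {
    # On a cache miss, store the backend response for 2 minutes
    # (illustrative TTL) so repeat requests are served from cache.
    set beresp.ttl = 120s;
}
```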
Version 1.0 of Varnish was released in 2006, and it has come a long way since then; websites like StackOverflow, Drupal, Wikipedia, Reddit, Facebook, Twitter, and twitch.tv use it.
One of Varnish’s most widely used and most powerful features is customization. We all love customization, right? Be it our homes, cars, motorbikes, clothes, or even the tools we use. Varnish supports customization through a powerful configuration language called VCL (Varnish Configuration Language). VCL controls the behavior of the cache and lets you decide how requests are cached, or not cached, by Varnish.
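As an illustration of what VCL customization can look like, the sketch below bypasses the cache for an admin area and strips a tracking cookie so it does not fragment the cache. The backend address, the /admin path, and the cookie name are assumptions for the example, not part of any standard setup.

```vcl
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Never cache the (hypothetical) admin area.
    if (req.url ~ "^/admin") {
        return (pass);
    }
    # Drop a hypothetical analytics cookie so it does not prevent
    # otherwise identical requests from sharing a cache entry.
    if (req.http.Cookie) {
        set req.http.Cookie = regsuball(req.http.Cookie, "_tracking=[^;]+(; )?", "");
        if (req.http.Cookie == "") {
            unset req.http.Cookie;
        }
    }
}
```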
Since Varnish is an HTTP Accelerator, it is very important to understand how Varnish works with HTTP. Let’s have a look at the points below:
You can read more about the HTTP headers mentioned above in this article.
Now, let’s look at a common class of Varnish errors: 503s, and what they mean.
By default, Varnish limits the length of a backend response header to 8192 bytes (the “http_resp_hdr_len” parameter). If your backend emits very long headers, such as the cache-tag headers produced by a Content Management System like Drupal, Magento, or WordPress, and a header exceeds this limit, you may end up being greeted with the 503 Backend fetch failed error.
To fix such errors, increase the value of the “http_resp_hdr_len” runtime parameter, for example by starting varnishd with -p http_resp_hdr_len=65536 (65536 is an illustrative value; size it to fit your longest header).
Please note that if the value of “http_resp_hdr_len” exceeds 32768 bytes, you will also have to increase the total response size limit via the “http_resp_size” parameter, which must be larger than “http_resp_hdr_len”.
This error generally occurs when there is a problem with a backend/origin server, or when multiple backends are queried for the information but all of them fail to provide it.
Some of the common reasons you may end up seeing the error:
Checks:
The First Byte timeout error in Varnish simply means that Varnish did not receive any response (including an error response) from the backend within the specified timeout. Varnish ships with sensible defaults for most of its parameters; these values may be changed as per your requirements.
Backend timeouts in Varnish are of multiple types:
One simple way of handling timeout-related issues is to increase the timeouts (specified in seconds) by overriding the defaults in your VCL file (for example, user.vcl).
Sample:
backend default {
.host = "127.0.0.1";
.port = "8080";
.connect_timeout = 2s; # Wait a maximum of 2 seconds to establish the TCP connection to the backend (Apache, Nginx, etc.)
.first_byte_timeout = 2s; # Wait a maximum of 2 seconds for the first byte of the response from the backend
.between_bytes_timeout = 2s; # Wait a maximum of 2 seconds between consecutive bytes received from the backend
}
Please note that in some cases you may still end up seeing 503 timeout errors, even after increasing the timeout thresholds. This is mostly seen on Web Servers running Apache (Varnish and Apache running on the same server) where the “KeepAlive” setting needs to be turned off.
When talking about CDNs, Akamai and Fastly are two names you cannot ignore, and the same is true when you are talking about Varnish and CDNs. One of the main reasons CDNs came into the picture was to get content as close to the end user as possible. Though CDNs today may not be limited to just caching, getting content closer to the user remains an integral requirement for many businesses when they opt for a CDN.
CDNs make websites fast. Varnish is an HTTP accelerator. Both are powerful tools that can speed up any website. Now imagine the result if the two worked together. In simple terms, most CDNs work with Varnish the same way they work with origin servers: if the origin serves assets to a CDN from the Varnish cache, the CDN treats Varnish just like any other origin and caches those assets.
Fastly uses a customized version of Varnish, optimized for large-scale deployments. You can read more about how Akamai and Fastly work with Varnish here:
Catchpoint not only lets you see the difference Varnish can make to the load time of your website, but it also lets you capture important metrics from the HTTP response headers, such as Cache Hit, Cache Miss, Via, or any other custom header you may be passing. You can chart the data over time to compare performance and spot trends that reveal actionable insights.
Some of the key metrics which you can monitor using Catchpoint are:
Here are some visualizations charted using Catchpoint (using the Insights feature).
1. Comparative performance analysis of Response and Webpage Response times for Cache-Hit vs Cache-Miss
2. Location comparative performance of Cache-Hit vs. Cache-Miss
3. Capturing & Charting X-Varnish HTTP Header (Custom Metric using Catchpoint)
Note: The X-Varnish HTTP header makes it possible to find the correct log entries for a request. For a cache hit, X-Varnish contains both the ID of the current request and the ID of the request that populated the cache; in this scatterplot, you can see the two values separated by a comma. The X-Varnish header allows better debugging.
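If you want an explicit hit/miss signal alongside X-Varnish, a common pattern is to set a custom header in vcl_deliver, which a monitoring tool such as Catchpoint can then capture. This is a sketch: the X-Cache header name is a widespread convention, not anything built into Varnish, and the backend address is an assumption.

```vcl
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_deliver {
    # obj.hits counts how many times this cached object has been
    # delivered; anything above zero means the response came from cache.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
```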