HAProxy’s load-balancing algorithm for static content delivery with Varnish

HAProxy’s load-balancing algorithms

HAProxy supports many load-balancing algorithms which can be used in many different types of use cases.
That said, cache servers, which most of the time deliver the static content of your web applications, may require specific load-balancing algorithms.

HAProxy stands in front of your cache server for some good reasons:

  • SSL offloading (read PHK’s feeling about SSL, Varnish and HAProxy); a minimal sketch follows this list
  • HTTP content switching capabilities
  • advanced load-balancing algorithms
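As a quick illustration of the SSL offloading point, here is a minimal sketch, assuming HAProxy 1.5+ built with OpenSSL; the bind address and certificate path are hypothetical:

frontend ft_public_ssl
 bind 10.0.0.3:443 ssl crt /etc/haproxy/site.pem   # hypothetical combined certificate + key PEM
 default_backend bk_static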

The main purpose of this article is to show how HAProxy can be used to aggregate the memory storage of several Varnish servers, in some kind of “JBOD” mode (“Just a Bunch Of Disks”, applied to caches here).
The examples delivered here aim to optimize the resources of the caches, mainly their memory, in order to improve the HIT rate. This will also improve your application response time and make your site top ranked on Google 🙂

Content Switching in HAProxy

This has been covered many times on this blog.
As a quick introduction for readers who are not familiar with HAProxy, let’s explain how it works.

Clients get connected to HAProxy through a frontend. HAProxy then routes traffic to a backend (server farm), where a load-balancing algorithm is used to choose a server.
A frontend can point to multiple backends, and the choice of a backend is made through ACLs and use_backend rules.
ACLs can be formed using fetches. A fetch is a directive which instructs HAProxy where to get content from.

Enough theory, let’s work through a practical example: splitting static and dynamic traffic using the following rules:

  • Static content is hosted on domain names starting with ‘static.’ and ‘images.’
  • Static content file extensions are ‘.jpg’ ‘.png’ ‘.gif’ ‘.css’ ‘.js’
  • Static content can match any of the rules above
  • anything which is not static is considered dynamic

The configuration snippet below should be integrated into the HAProxy frontend. It implements the rules above to split the traffic. The Varnish servers will stand in the bk_static farm.

frontend ft_public
 <frontend settings>
 acl static_domain  hdr_beg(Host)      -i static. images.
 acl static_content path_end          -i .jpg .png .gif .css .js
 use_backend bk_static if static_domain or static_content
 default_backend bk_dynamic
   
backend bk_static
 <parameters related to static content delivery>

The configuration above creates two named ACLs, ‘static_domain’ and ‘static_content’, which are used by the use_backend rule to route the traffic to the Varnish servers.

HAProxy and hash-based load-balancing algorithms


Later in this article, we’ll make heavy use of HAProxy’s hash-based load-balancing algorithms.
So here is some information (non-exhaustive, it would deserve a long blog article of its own) which will be useful for people wanting to understand what happens deep inside HAProxy.

The following parameters are taken into account when mapping a hash to a server:

  • number of servers in the farm
  • weight of each server in the farm
  • status of the servers (UP or DOWN)

If any of the parameters above changes, the whole hash mapping changes too, hence requests may hit another server. This may have a negative impact on the response time of the application (during a short period of time).
Fortunately, HAProxy allows ‘consistent’ hashing, which means that only the traffic related to the change will be impacted.
That’s why you’ll see a lot of hash-type consistent directives in the configuration samples below.
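To make the difference concrete, here is a minimal sketch (server addresses are illustrative): the first backend uses the default map-based hashing, the second one its consistent variant.

backend bk_static_mapbased
  balance uri
  # default map-based hashing: changing the number, weight or status of
  # the servers remaps most requests onto different servers
  server varnish1 10.0.1.201:80 check
  server varnish2 10.0.1.202:80 check

backend bk_static_consistent
  balance uri
  # consistent hashing: only the requests mapped to the changed server move
  hash-type consistent
  server varnish1 10.0.1.201:80 check
  server varnish2 10.0.1.202:80 check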

Load-balancing Varnish cache servers

Now, let’s focus on the magic we can add in the bk_static server farm.

Hashing the URL

HAProxy can hash the URL to pick a server. With this load-balancing algorithm, we guarantee that a single URL will always hit the same Varnish server.

Hashing the URL path only


In the example below, HAProxy hashes the URL path, which spans from the first slash ‘/’ character up to the question mark ‘?’:

backend bk_static
  balance uri
  hash-type consistent

Hashing the whole URL, including the query string


In some cases, the URL may carry variables in its query string, which means we must include the query string in the hash:

backend bk_static
  balance uri whole
  hash-type consistent

Query string parameter hash


That said, in some cases (APIs, etc…), hashing the whole URL is not enough. We may want to hash on a particular query string parameter only.
This applies well to cases where the client can forge the URL itself, so the parameters may arrive in a random order.
The configuration below tells HAProxy to apply the hash to the query string parameter named ‘id’ (e.g.: /image.php?width=512&id=12&height=256):

backend bk_static
  balance url_param id
  hash-type consistent

Hash on an HTTP header


HAProxy can apply the hash to a specific HTTP header field.
The example below applies it to the Host header. This can be useful for people hosting many domain names with a few pages each, like dedicated user pages.

backend bk_static
  balance hdr(Host)
  hash-type consistent

Compose your own hash: concatenation of Host header and URL


HAProxy has become more and more flexible over time, and we can take advantage of this flexibility in its configuration.
Imagine that, in your Varnish configuration, the storage hash key is based on the concatenation of the Host header and the URI; then you may want to apply the same algorithm to HAProxy’s load-balancing, to optimize your caches.

The configuration below creates a new HTTP header field named X-LB which contains the Host header (converted to lowercase) concatenated with the request URI (also converted to lowercase).

backend bk_static
  http-request set-header X-LB %[req.hdr(Host),lower]%[req.uri,lower]
  balance hdr(X-LB)
  hash-type consistent

Conclusion


HAProxy and Varnish work very well together. Each one can benefit from the performance and flexibility of the other.

HAProxy, Varnish and the single hostname website

As explained in a previous article, HAProxy and Varnish are two great open source software packages which aim to improve the performance, resilience and scalability of web applications.
We also saw that they are not competitors. Instead, they can work well together, each one bringing the other its own features, making any web infrastructure more agile and robust at the same time.

In the current article, I’m going to explain how to use both of them on a web application hosted on a single domain name.

Main advantages of each software


As a reminder, here are the main features of each product.

HAProxy


HAProxy‘s main features:

  • Real load-balancer with smart persistence
  • Request queueing
  • Transparent proxy

Varnish


Varnish‘s main features:

  • Cache server with stale content delivery
  • Content compression
  • Edge Side Includes

Common features


HAProxy and Varnish both have the features below:

  • Content switching
  • URL rewriting
  • DDOS protection

So if we need any of them, we could use either HAProxy or Varnish.

Why a single domain

In web application, there are two types of content: static and dynamic.

By dynamic, I mean content which is generated on the fly and which is dedicated to a single user based on their current browsing session. Anything which is not in this category can be considered static, even a page generated by PHP whose content changes every minute or every few seconds (like the WordPress or Drupal CMS). I call these pages “pseudo-static”.

The biggest strength of Varnish is that it can cache static objects and deliver them on behalf of the server, offloading most of the traffic from it.



An object is identified by a Host header and its URL. When you have a single domain name, you have a single Host header for all your requests: static, pseudo-static or dynamic.

You can’t split your traffic: every request must arrive at a single type of device: the LB, the cache, etc…

A good practice to split dynamic and static content is to use one domain name per type of object: www.domain.tld for dynamic content and static.domain.tld for static content. That way, you could forward dynamic traffic to the LB and static traffic directly to the caches.



Now, I guess you understand that the naming of the web application’s hosts can have an impact on the platform you’re going to build.

In the current article, I’ll only focus on applications using a single domain name. We’ll see how we can route traffic to the right product despite the limitation of the single domain name.



Don’t worry, I’ll write another article later about the fun we could have when building a platform for an application hosted on multiple domain names.

Available architectures

If we summarize the “web application” as a single brick called “APPSERVER“, we have two main architectures available:

  1. CLIENT ==> HAPROXY ==> VARNISH ==> APPSERVER
  2. CLIENT ==> VARNISH ==> HAPROXY ==> APPSERVER

Pros and cons of HAProxy in front of Varnish


Pros:

  • Use HAProxy’s smart load-balancing algorithms such as uri and url_param to make Varnish caching more efficient and improve the hit rate
  • Make the Varnish layer scalable, since it is load-balanced
  • Protect Varnish during its startup ramp-up (related to thread pool creation)
  • HAProxy can protect against DDOS and slowloris
  • Varnish can be used as a WAF

Cons:

  • no easy way to do application-layer persistence
  • HAProxy’s queueing system can hardly protect the application hidden behind Varnish
  • The client IP must be forwarded in the X-Forwarded-For header (or any header you want)

Pros and cons of Varnish in front of HAProxy


Pros:

  • Smart layer 7 persistence with HAProxy
  • HAProxy layer scalable (with persistence preserved) since load-balanced by Varnish
  • APPSERVER protection through HAProxy request queueing
  • Varnish can be used as a WAF
  • HAProxy can use the client IP address (provided by Varnish in an HTTP header) to do transparent proxying (getting connected to the APPSERVER with the client IP); a minimal sketch follows this list
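A minimal sketch of that last point, using the same directive as the full configuration later in this article (the source address is illustrative):

backend bk_appsrv
  # connect to the app servers using the client IP extracted from
  # the X-Forwarded-For header added by Varnish
  source 10.0.1.1 usesrc hdr_ip(X-Forwarded-For)
  server s1 10.0.1.101:80 check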

Cons:

  • HAProxy can’t protect against DDOS; Varnish has to do it
  • Cache size must be big enough to store all objects
  • Varnish layer not scalable

Finally, which is the best architecture?


There is no need to choose which of the two architectures above is the least bad for you.

It would be better to build a platform where there are no negative points.

The Architecture


The diagram below shows the architecture we’re going to work on.
[Diagram: haproxy_varnish]
Legend:

  • H: HAProxy Load-Balancers (could be ALOHA Load-Balancers or any home-made setup)
  • V: Varnish servers
  • S: Web application servers, whatever the product used here (Tomcat, JBoss, etc…)
  • C: Client or end user

Main roles of each layer:

  • HAProxy: Layer 7 traffic routing, first row of protection against DDOS (SYN flood, slowloris, etc…), application request flow optimization
  • Varnish: Caching, compression. Could be used later as a WAF to protect the application
  • Server: hosts the application and the static content
  • Client: browses and uses the web application

Traffic flow


Basically, the client sends all its requests to HAProxy; then HAProxy, based on the URL or file extension, takes a routing decision:

  • If the request looks like it is for a (pseudo-)static object, then forward it to Varnish.
    If Varnish misses the object, it will use HAProxy to get the content from the server.
  • Send all the other requests to the appserver. If we’ve done our job properly, there should be only dynamic traffic here.

I don’t want to use Varnish as the default option in the flow, because dynamic content could get cached, which could lead to one user’s personal information being sent to everybody.

Furthermore, in case of massive misses or requests purposely built to bypass the caches, I don’t want the servers to be hammered by Varnish, so HAProxy protects them with a tight traffic regulation between Varnish and the appservers.
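In HAProxy terms, this regulation is simply a low per-server maxconn on the backend used by Varnish on a miss; a simplified sketch of what the full configuration below does:

backend bk_appsrv_static
  # per-server maxconn: excess requests coming from Varnish are queued
  # inside HAProxy instead of hammering the application servers
  server s1 10.0.1.101:80 check maxconn 50
  server s2 10.0.1.102:80 check maxconn 50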

Dynamic traffic flow


The diagram below shows how the request requiring dynamic content should be ideally routed through the platform:
[Diagram: haproxy_varnish_dynamic_flow]
Legend:

  1. The client sends its request to HAProxy
  2. HAProxy chooses a server based on cookie persistence, or on the load-balancing algorithm if there is no cookie.
    The server processes the request and sends the response back to HAProxy, which forwards it to the client

Static traffic flow


The diagram below shows how the request requiring static content should be ideally routed through the platform:
[Diagram: haproxy_varnish_static_flow]

  1. The client sends its request to HAProxy, which sees it is asking for static content
  2. HAProxy forwards the request to Varnish. If Varnish has the object in cache (a HIT), it sends it directly back to HAProxy.
  3. If Varnish doesn’t have the object in cache, or if the cached copy has expired, then Varnish forwards the request to HAProxy
  4. HAProxy randomly chooses a server. The response goes back to the client through Varnish.

In case of a MISS, the flow looks heavy 🙂 I chose to do it that way to use HAProxy’s traffic regulation features to prevent Varnish from flooding the servers. Furthermore, since Varnish sees only static content, its HIT rate is over 98%, so the overhead is very low and the protection is improved.

Pros of such architecture

  • Use smart load-balancing algorithms such as uri and url_param to make Varnish caching more efficient and improve the hit rate
  • Make the Varnish layer scalable, since it is load-balanced
  • Startup protection for Varnish and APPSERVER, allowing server reboot or farm expansion even under heavy load
  • HAProxy can protect against DDOS and slowloris
  • Smart layer 7 persistence with HAProxy
  • APPSERVER protection through HAProxy request queueing
  • HAProxy can use the client IP address to do Transparent proxying (getting connected on APPSERVER with the client ip)
  • Cache farm failure detection and routing to application servers (worst case management)
  • Can load-balance any type of TCP based protocol hosted on APPSERVER

Cons of such architecture


To be totally fair, there are a few “non-blocking” issues:

  • HAProxy layer is hardly scalable (one must use two crossed virtual IPs declared in the DNS)
  • Varnish can’t be used as a WAF, since it will only see static traffic passing through. This can be changed very easily though

Configuration

HAProxy Configuration

# On the Aloha, the global section is already set up for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin
  log 10.0.1.10 local3

# default options
defaults
  option http-server-close
  mode http
  log global
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# HAProxy's stats
listen stats
  bind 10.0.1.3:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy\ Statistics
  stats auth    admin:admin

# main frontend dedicated to end users
frontend ft_web
  bind 10.0.0.3:80
  acl static_content path_end .jpg .gif .png .css .js .htm .html
  acl pseudo_static path_end .php
  acl dynamic_dir   path_beg /dynamic/
  acl image_php path_beg /images.php
  acl varnish_available nbsrv(bk_varnish_uri) ge 1
  # Caches health detection + routing decision
  use_backend bk_varnish_uri if varnish_available static_content
  use_backend bk_varnish_uri if varnish_available pseudo_static !dynamic_dir
  use_backend bk_varnish_url_param if varnish_available image_php
  # dynamic content or all caches are unavailable
  default_backend bk_appsrv

# appsrv backend for dynamic content
backend bk_appsrv
  balance roundrobin
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  # Transparent proxying using the client IP from the TCP connection
  source 10.0.1.1 usesrc clientip
  server s1 10.0.1.101:80 cookie s1 check maxconn 250
  server s2 10.0.1.102:80 cookie s2 check maxconn 250

# static backend with balance based on the uri, including the query string
# to avoid caching an object on several caches
backend bk_varnish_uri
  balance uri # in latest HAProxy version, one can add 'whole' keyword
  # Varnish must tell it's ready to accept traffic
  option httpchk HEAD /varnishcheck
  http-check expect status 200
  # client IP information
  option forwardfor
  # avoid request redistribution when the number of caches changes (crash or start up)
  hash-type consistent
  server varnish1 10.0.1.201:80 check maxconn 1000
  server varnish2 10.0.1.202:80 check maxconn 1000

# cache backend with balance based on the value of the URL parameter called "id"
# to avoid caching an object on several caches
backend bk_varnish_url_param
  balance url_param id
  # client IP information
  option forwardfor
  # avoid request redistribution when the number of caches changes (crash or start up)
  hash-type consistent
  server varnish1 10.0.1.201:80 maxconn 1000 track bk_varnish_uri/varnish1
  server varnish2 10.0.1.202:80 maxconn 1000 track bk_varnish_uri/varnish2

# frontend used by Varnish servers when updating their cache
frontend ft_web_static
  bind 10.0.1.3:80
  monitor-uri /haproxycheck
  # Tells Varnish to stop asking for static content when all servers are dead;
  # Varnish will then deliver stale content from its cache
  monitor fail if { nbsrv(bk_appsrv_static) eq 0 }
  default_backend bk_appsrv_static

# appsrv backend used by Varnish to update their cache
backend bk_appsrv_static
  balance roundrobin
  # anything different than a status code 200 on the URL /staticcheck.txt
  # must be considered as an error
  option httpchk HEAD /staticcheck.txt
  http-check expect status 200
  # Transparent proxying using the client IP provided by X-Forwarded-For header
  source 10.0.1.1 usesrc hdr_ip(X-Forwarded-For)
  server s1 10.0.1.101:80 check maxconn 50 slowstart 10s
  server s2 10.0.1.102:80 check maxconn 50 slowstart 10s

Varnish Configuration

backend bk_appsrv_static {
        .host = "10.0.1.3";
        .port = "80";
        .connect_timeout = 3s;
        .first_byte_timeout = 10s;
        .between_bytes_timeout = 5s;
        .probe = {
                .url = "/haproxycheck";
                .expected_response = 200;
                .timeout = 1s;
                .interval = 3s;
                .window = 2;
                .threshold = 2;
                .initial = 2;
        }
}

acl purge {
        "localhost";
}

sub vcl_recv {
### Default options

        # Health Checking
        if (req.url == "/varnishcheck") {
                error 751 "health check OK!";
        }

        # Set default backend
        set req.backend = bk_appsrv_static;

        # grace period (stale content delivery while revalidating)
        set req.grace = 30s;

        # Purge request
        if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                }
                return (lookup);
        }

        # Accept-Encoding header clean-up
        if (req.http.Accept-Encoding) {
                # use gzip when possible, otherwise use deflate
                if (req.http.Accept-Encoding ~ "gzip") {
                        set req.http.Accept-Encoding = "gzip";
                } elsif (req.http.Accept-Encoding ~ "deflate") {
                        set req.http.Accept-Encoding = "deflate";
                } else {
                        # unknown algorithm, remove accept-encoding header
                        unset req.http.Accept-Encoding;
                }

                # Microsoft Internet Explorer 6 is well known to be buggy with compression of css / js
                if (req.url ~ "\.(css|js)" && req.http.User-Agent ~ "MSIE 6") {
                        remove req.http.Accept-Encoding;
                }
        }

### Per host/application configuration
        # bk_appsrv_static
        # Stale content delivery
        if (req.backend.healthy) {
                set req.grace = 30s;
        } else {
                set req.grace = 1d;
        }

        # Cookie ignored in these static pages
        unset req.http.cookie;

### Common options
        # Static objects are first looked up in the cache
        if (req.url ~ "\.(png|gif|jpg|swf|css|js)(\?.*)?$") {
                return (lookup);
        }

        # if we arrive here, we look for the object in the cache
        return (lookup);
}

sub vcl_hash {
        hash_data(req.url);
        if (req.http.host) {
                hash_data(req.http.host);
        } else {
                hash_data(server.ip);
        }
        return (hash);
}

sub vcl_hit {
        # Purge
        if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
        }

        return (deliver);
}

sub vcl_miss {
        # Purge
        if (req.request == "PURGE") {
                error 404 "Not in cache.";
        }

        return (fetch);
}

sub vcl_fetch {
        # Stale content delivery
        set beresp.grace = 1d;

        # Hide Server information
        unset beresp.http.Server;

        # Store compressed objects in memory
        # They would be uncompressed on the fly by Varnish if the client doesn't support compression
        if (beresp.http.content-type ~ "(text|application)") {
                set beresp.do_gzip = true;
        }

        # remove any cookie on static or pseudo-static objects
        unset beresp.http.set-cookie;

        return (deliver);
}

sub vcl_deliver {
        unset resp.http.via;
        unset resp.http.x-varnish;

        # could be useful to know if the object was in cache or not
        if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
        } else {
                set resp.http.X-Cache = "MISS";
        }

        return (deliver);
}

sub vcl_error {
        # Health check
        if (obj.status == 751) {
                set obj.status = 200;
                return (deliver);
        }
}
  


Smart content switching for news website

Purpose

Build a scalable architecture for a news website using the following components:

  • load balancer with content switching capability
  • cache server
  • application server

Definition

  • Content switching: the ability to route traffic based on the content of the HTTP request: URI, parameters, headers, etc…
    HAProxy is a good example of an open source reverse-proxy load-balancer with content switching capability.
  • cache server: a server able to quickly deliver static content.
    Squid, Varnish and Apache Traffic Server are open source reverse-proxy caches.
  • application server: the server which builds the pages for your news website.
    This can be Apache+PHP, Tomcat+Java, IIS+ASP.NET, etc…

Target Network Diagram

Architecture

Context

All the traffic passes through the Aloha load-balancer.
HAProxy, the layer 7 load-balancer included in the Aloha, does the content switching to route requests either to the cache servers or to the application servers.
If a cache server misses an object, it gets it from the application servers.
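That miss path can be handled the same way as in the previous article: a minimal sketch, where the caches would use a dedicated HAProxy frontend as their origin (the frontend name and VIP address are hypothetical):

frontend ORIGIN
	# entry point the cache servers use to fetch content on a miss
	bind 1.1.1.5:80
	default_backend APPLICATION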

HAProxy configuration

Service configuration:

frontend public
	bind :80
	acl DYN path_beg /user
	acl DYN path_beg /profile
	acl DYN method POST
	use_backend APPLICATION if DYN
	default_backend CACHE

The content switching is achieved by the few lines beginning with the keyword acl.
If the URI starts with /user or /profile, or if the method is POST, then the traffic will be routed to the APPLICATION server pool; otherwise the CACHE pool will be used.

Application pool configuration:

backend APPLICATION
	balance roundrobin
	cookie PHPSESSID prefix
	option httpchk /health
	http-check expect string GOOD
	server APP1 1.1.1.1:80 cookie app1 check
	server APP2 1.1.1.2:80 cookie app2 check

We maintain backend server persistence using the cookie sent by the application server, named PHPSESSID in this example. You can change this cookie name to the one provided by your application, like JSESSIONID, ASP.NET_SessionId or anything else.
Note the health check URL: /health. The script executed on the backend should check server health (database availability, CPU usage, memory usage, etc…) and return GOOD if everything looks fine and WRONG if not. With this, HAProxy will consider a server ready only if it returns GOOD.

Cache pool configuration

backend CACHE
	balance uri
	hash-type consistent
	option httpchk /
	http-check expect string KEYWORD
	reqidel ^Accept-Encoding unless { hdr_sub(Accept-Encoding) gzip }
	reqirep ^Accept-Encoding:\ .*gzip.* Accept-Encoding:\ gzip
	server CACHE1 1.1.1.3:80 check
	server CACHE2 1.1.1.4:80 check

Here, we balance requests according to the URI. The purpose of this metric is to “force” a single URL to always be retrieved from the same server.
The main benefits are:

  1. fewer objects stored in the caches’ memory
  2. fewer requests to the application servers for static and pseudo-static content

In order to lower the impact of the Vary: header on content encoding, we added the two reqidel / reqirep lines to normalize the Accept-Encoding header a bit.
