
Scalable WAF protection with HAProxy and Apache with modsecurity

Greetings to Thomas Heil, from our German partner Olanis, for his help with the Apache and modsecurity configuration.

What is a Web Application Firewall (WAF)?

Years ago, it was common to protect networks using a firewall: well-known devices which filter traffic at layers 3 and 4.
Since then, the weaknesses have moved from the network stack up to the application layer, making the old firewall useless and obsolete (for protection purposes, I mean). We then used to deploy IDS or IPS appliances, which tried to match attacks at the packet level. These products are usually very hard to tune.
Then the Web Application Firewall arrived: a firewall aware of the layer 7 protocol, so it can make smarter decisions when blocking requests (or responses).
This evolution was needed because attacks became more sophisticated: SQL injection, cross-site scripting, etc…

One of the most famous opensource WAFs is mod_security, which has long been available as a module for the Apache webserver and IIS, and which has recently been announced for nginx too.
A very good alternative is naxsi, a module for nginx, still young but very promising.

In today’s article, I’ll focus on modsecurity for Apache. In a later article, I’ll build the same platform with naxsi and nginx.

Scalable WAF platform


The main problem with WAFs is that they require a lot of resources to analyse each request’s headers and body (they can even be configured to analyze the responses). If you want to be able to protect all your incoming traffic, then you must think about scalability.
In the present article, I’m going to explain how to build a reliable and scalable platform where WAF capacity won’t be an issue, and where WAF maintenance can even be done during business hours.
Here are the basic goals to achieve:

  • Web Application Firewall: achieved by Apache and modsecurity
  • High-availability: application server and WAF monitoring, achieved by HAProxy
  • Scalability: ability to adapt capacity to the upcoming volume of traffic, achieved by HAProxy

It would be good if the platform also achieved the following advanced features:

  • DDOS protection: protection against blind and brutal attacks, slowloris included, achieved by HAProxy
  • Content-Switching: ability to route only dynamic requests to the WAF, achieved by HAProxy
  • Reliability: ability to detect capacity overusage, achieved by HAProxy
  • Performance: deliver response as fast as possible, achieved by the whole platform

Web platform with WAF Diagram

The diagram below shows the platform with the HAProxy frontends (prefixed by ft_) and backends (prefixed by bk_). Each farm is composed of 2 servers.

As you can see, at first sight, all the traffic seems to go to the WAFs, then come back through HAProxy before being routed to the web servers. This is the basic configuration, meeting the basic requirements: Web Application Firewall, High-Availability and Scalability.

Platform installation


As load-balancer, I’m going to use our well-known ALOHA 🙂
The web servers are standard Debian boxes with Apache and PHP; the application running on top of them is dokuwiki. I have no procedure for this one, it is very straightforward!
The WAFs run on CentOS 6.3 x86_64, using modsecurity 2.5.8. The installation procedure is outside the scope of this article, so I documented it on my personal wiki.
All of these servers are virtualized on my laptop using KVM, so NO, I won’t run performance benchmarks, it would be ridiculous!

Configuration

WAF configuration


Basic configuration here, no tuning at all. The purpose is not to explain how to configure a WAF, sorry.

Apache Configuration


Modification to the file /etc/httpd/conf/httpd.conf:

Listen 192.168.10.15:81
[...]
LoadModule security2_module modules/mod_security2.so
LoadModule unique_id_module modules/mod_unique_id.so
[...]
NameVirtualHost 192.168.10.15:81
[...]
<IfModule mod_security2.c>
        SecPcreMatchLimit 1000000
        SecPcreMatchLimitRecursion 1000000
        SecDataDir logs/
</IfModule>
<VirtualHost 192.168.10.15:81>
        ServerName *
        AddDefaultCharset UTF-8

        <IfModule mod_security2.c>
                Include modsecurity.d/modsecurity_crs_10_setup.conf
                Include modsecurity.d/aloha.conf
                Include modsecurity.d/rules/*.conf

                SecRuleEngine On
                SecRequestBodyAccess On
                SecResponseBodyAccess On
        </IfModule>

        ProxyPreserveHost On
        ProxyRequests off
        ProxyVia Off
        ProxyPass / http://192.168.10.2:81/
        ProxyPassReverse / http://192.168.10.2:81/
</VirtualHost>

Basically, we just turned Apache into a reverse-proxy which accepts traffic for any server name and applies the modsecurity rules before routing traffic back to the HAProxy frontend dedicated to the web servers.
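
Once the file is modified, check the syntax and reload Apache with the standard CentOS commands:

apachectl -t && service httpd reload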

Client IP


HAProxy works as a reverse proxy, so it uses its own IP address to connect to the WAF servers. You therefore have to install mod_rpaf to restore the client IP in the WAF, for both tracking and logging.
To install mod_rpaf, follow these instructions: apache mod_rpaf installation.
Concerning its configuration, we’ll do it as below, edit the file /etc/httpd/conf.d/mod_rpaf.conf:

LoadModule rpaf_module modules/mod_rpaf-2.0.so

<IfModule rpaf_module>
        RPAFenable On
        RPAFproxy_ips 192.168.10.1 192.168.10.3
        RPAFheader X-Client-IP
</IfModule>

modsecurity custom rules

In the Apache configuration, there is a directive which tells modsecurity to load a file called aloha.conf. The purpose of this file is to tell modsecurity to deny the health check requests coming from HAProxy without logging them.
HAProxy will consider the WAF as operational only if it gets a 403 response to this request (see the HAProxy configuration below).
Content of the file /etc/httpd/modsecurity.d/aloha.conf:

SecRule REQUEST_FILENAME "/waf_health_check" "nolog,deny"
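
You can quickly verify that a WAF node answers the health check as expected by querying it directly (assuming you can reach the WAF from your workstation):

curl -I http://192.168.10.15:81/waf_health_check
# expected: HTTP/1.1 403 Forbidden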

Load-Balancer (HAProxy) configuration for basic usage


The configuration below is the first shot we take when deploying such a platform; it is basic, simple and straightforward:

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0
  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check
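
Before moving on, here is a quick smoke test of the whole chain (addresses as above; the attack URL is just an illustrative probe, and the exact rule that fires depends on the CRS rules you loaded):

# legitimate request: should come back from dokuwiki through the WAF
curl -I http://192.168.10.2/
# obvious SQL injection probe (decodes to 1' or '1'='1): should get a 403 from modsecurity
curl -i "http://192.168.10.2/?id=1%27%20or%20%271%27=%271"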

Advanced Load-Balancing (HAProxy) configuration


We’re now going to improve the platform a bit. The picture below shows which type of protection is achieved by the load-balancer and which by the WAF:

The configuration below adds a few more features:

  • DDOS protection on the frontend
  • abuser or attacker detection in bk_waf, with blocking on the public interface (ft_waf)
  • bypassing the WAF farm in case of overusage or unavailability

This allows us to meet the advanced requirements: DDOS protection, Content-Switching, Reliability and Performance.

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 10000

  # DDOS protection
  # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
  # Monitor the number of requests sent by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s),http_err_rate(10s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { sc1_get_gpc0 gt 0 }
  # Abuser means more than 100 reqs/10s
  acl abuse sc1_http_req_rate(ft_waf) ge 100
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  acl static path_beg /static/ /dokuwiki/images/
  acl no_waf nbsrv(bk_waf) eq 0
  acl waf_max_capacity queue(bk_waf) ge 1
  # bypass WAF farm if no WAF available
  use_backend bk_web if no_waf
  # bypass WAF farm for static content when the farm reaches its capacity
  use_backend bk_web if static waf_max_capacity
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0

  # If the source IP generated 10 or more http errors over the defined period,
  # flag the IP as abuser on the frontend
  acl abuse sc1_http_err_rate(ft_waf) ge 10
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  # get connected on the application server using the user ip
  # provided in the X-Client-IP header setup by ft_waf frontend
  source 0.0.0.0 usesrc hdr_ip(X-Client-IP)
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check

Detecting attacks


On the load-balancer


The ft_waf frontend stick-table tracks two pieces of information: http_req_rate and http_err_rate, which are respectively the HTTP request rate and the HTTP error rate generated by a single IP address.
HAProxy automatically blocks an IP which has generated more than 100 requests over a period of 10s, or 10 errors (the 403 responses from WAF detections included) in 10s. The user remains blocked for 1 minute as long as he keeps on abusing.
Of course, you can set the values above to whatever you need: it is fully flexible.

To know the status of IPs in your load-balancer, just run the command below:

echo show table ft_waf | socat /var/run/haproxy.stat - 
# table: ft_waf, type: ip, size:1048576, used:1
0xc33304: key=192.168.10.254 use=0 exp=4555 gpc0=0 http_req_rate(10000)=1 http_err_rate(10000)=1

Note: The ALOHA Load-balancer does not provide watch, but you can monitor the content of the table live with the command below:

while true ; do echo show table ft_waf | socat /var/run/haproxy.stat - ; sleep 2 ; clear ; done
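
And if you need to un-blacklist an IP by hand before its entry expires, the stats socket accepts a clear command (same socket path as above):

# list only the flagged abusers
echo "show table ft_waf data.gpc0 gt 0" | socat /var/run/haproxy.stat -
# remove a single entry from the table
echo "clear table ft_waf key 192.168.10.254" | socat /var/run/haproxy.stat -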

On the Waf


I have not set up anything particular for WAF logging, so every error appears in /var/log/httpd/error_log. E.g.:

[Fri Oct 12 10:48:21 2012] [error] [client 192.168.10.254] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?:(?:[\\;\\|\\`]\\W*?\\bcc|\\b(wget|curl))\\b|\\/cc(?:[\\'\"\\|\\;\\`\\-\\s]|$))" at REQUEST_FILENAME. [file "/etc/httpd/modsecurity.d/rules/modsecurity_crs_40_generic_attacks.conf"] [line "25"] [id "950907"] [rev "2.2.5"] [msg "System Command Injection"] [data "/cc-"] [severity "CRITICAL"] [tag "WEB_ATTACK/COMMAND_INJECTION"] [tag "WASCTC/WASC-31"] [tag "OWASP_TOP_10/A1"] [tag "PCI/6.5.2"] [hostname "mywiki"] [uri "/dokuwiki/lib/images/license/button/cc-by-sa.png"] [unique_id "UHfZVcCoCg8AAApVAzsAAAAA"]

Seems to be a false positive 🙂

Conclusion


Today, we saw it’s easy to build a scalable and well-performing WAF platform in front of any web application.
The WAF is able to tell HAProxy which IPs to blacklist automatically (through error rate monitoring), which is convenient since the attacker won’t bother the WAF for a certain amount of time 😉
The platform can detect WAF farm availability and bypass the farm in case of total failure; we even saw it is possible to bypass the WAF for static content if the farm is running out of capacity. The purpose is to deliver a good end-user experience without compromising security too much.
Note that it is possible to route all the static content to the web servers (or a static farm) directly, whatever the status of the WAF farm.
This makes me say that the platform is fully scalable and flexible.
Also, bear in mind to monitor your WAF logs: as shown in the example above, there was a false positive preventing an image from being loaded from dokuwiki.


Application Delivery Controller and ecommerce websites

Synopsis

Today, almost every ecommerce website uses a load-balancer or an application delivery controller in front of it, in order to improve its availability and reliability.
In today’s article, I’ll explain how we can take advantage of an ADC’s layer 7 features to improve an ecommerce website’s performance and give the best experience to end users, in order to increase revenue.
The points we can work on are:

  • Network optimization
  • Traffic regulation
  • Overusage protection
  • User “tagging” based on cart content
  • User “tagging” based on purchase phase
  • Blackout prevention
  • SEO optimization
  • Partner slowness protection

Note: the list is not exhaustive and the given examples are very simple. My purpose is not to create a very complicated configuration, but to give the reader clues on how to take advantage of our products.


Note 2: I won’t discuss static content; there is already an article with a lot of details about it on this blog.


As usual, the configuration examples below apply to our ALOHA ADC appliance, but should work as well with HAProxy 1.5.

Network optimization

Client-side network latency has a negative impact on websites: the slower the user’s connectivity, the longer the connection stays open on the web server while the client downloads the object. This can last much longer if the client and server use HTTP keepalives.
Basically, this is what happens with basic layer 4 load-balancers like LVS or some other appliance vendors, where the TCP connection is established between the client and the server directly.
Since HAProxy works as an HTTP reverse-proxy, it breaks the TCP connection and enables TCP buffering between both connections. This means HAProxy reads the response at the speed of the server and delivers it at the speed of the client.
Slow clients with high latency no longer have any impact on the application servers, because HAProxy “hides” them behind its own low latency to the server.
Another good point is that you can enable HTTP keepalives on the client side and disable them on the server side: a client can re-use a connection to download several objects, with no impact on server resources.
TCP buffering does not require any configuration, while client-side HTTP keep-alive is enabled by the line option http-server-close.
The configuration is pretty simple:

# default options
defaults
  option http-server-close
  mode http
  log 10.0.0.1 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  default_backend bk_appsrv

# application server farm
backend bk_appsrv
  balance roundrobin
  # app servers must say if everything is fine on their side and 
  # they are ready to process traffic
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check
  server s2 10.0.1.102:80 cookie s2 check

Traffic Regulation


Any server has a maximum capacity. The more requests it handles, the slower it is at processing each of them. And if it has too many requests to process, it can even crash and obviously won’t be able to answer anybody!
HAProxy can regulate request streams to the servers in order to prevent them from crashing or even from slowing down. Note that, when well set up, this allows you to use your servers at their maximum capacity without ever getting into trouble.
Basically, HAProxy is able to manage request queues.
You can configure traffic regulation with the fullconn and maxconn parameters in the backend, and with the minconn and maxconn parameters on the server line.
Let’s update our server lines above with a simple maxconn parameter:

  server s1 10.0.1.101:80 cookie s1 check maxconn 250
  server s2 10.0.1.102:80 cookie s2 check maxconn 250

Note: there would be many, many things to say about queueing and the HAProxy parameters cited above, but this is not the purpose of the current article; still, a minimal sketch is shown below.
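
For the sake of illustration, here is a minimal sketch of dynamic queueing using minconn and fullconn (the values are arbitrary): each server’s effective connection limit grows from minconn up to maxconn as the backend load approaches fullconn, so the servers are loaded gently when traffic is low.

backend bk_appsrv
  # when the backend handles fullconn concurrent sessions,
  # each server is allowed its full maxconn
  fullconn 800
  server s1 10.0.1.101:80 cookie s1 check minconn 100 maxconn 250
  server s2 10.0.1.102:80 cookie s2 check minconn 100 maxconn 250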

Over usage protection

By over usage, I mean being able to handle an unexpected flow of users while classifying them in 2 categories:

  1. Those who have already been identified by the website and are using it
  2. Those who have just arrived and want to use it

The difference between both types of users can be made through the ecommerce CMS’s cookie: identified users own a cookie while brand new users don’t.
If you know your server farm has the capacity to manage 10000 users, then you don’t want to allow more than this number until you expand the farm.
Here is the configuration to protect against over-usage (the application cookie is “MYCOOK”):

# default options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not have a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  default_backend bk_appsrv

# appsrv backend for dynamic content
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie value would be cleared from the table if not used for 10 mn
  stick-table type string len 32 size 10K expire 10m nopurge
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  # app servers must say if everything is fine on their side and 
  # they are ready to process traffic
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 250
  server s2 10.0.1.102:80 cookie s2 check maxconn 250

# sorry page management
backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000
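
A quick way to check the behavior once the table is full (addresses as above, cookie value purely illustrative): a request without the cookie should land on the sorry servers, while a request presenting MYCOOK keeps reaching the application:

# brand new user: no cookie, goes to bk_sorrypage when at capacity
curl -I http://10.0.0.3/
# identified user: owns a MYCOOK cookie, still reaches bk_appsrv
curl -I -H "Cookie: MYCOOK=abcdef0123456789" http://10.0.0.3/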

User tagging based on cart content

When your architecture has enough capacity, you don’t need to classify users. But imagine your platform runs out of capacity: you then want to reserve resources for users who have no articles in their cart. That way, the website looks very fast to them, and hopefully these users will buy some articles.
Just configure your ecommerce application to set a cookie with some information about the cart: the number of articles, the total value, etc…
In the example below, we’ll consider that the application creates a cookie named CART, with the number of articles as its value.
Based on the information provided by this cookie, we’ll take routing decisions and choose different farms with different capacities.

# default options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl empty_cart hdr_sub(Cookie) CART=0
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not own a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user have something in the cart, move it to a farm with less resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity !empty_cart 
  default_backend bk_appsrv_empty_cart

# Default farm when everything goes well
backend bk_appsrv_empty_cart
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users which have something in their cart
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10  mn
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_empty_cart/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_empty_cart/s2 maxconn 50

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

User tagging based on purchase phase

The synopsis of this chapter is the same as the previous one: being able to classify users and to reserve resources.
But this time, we’ll identify users based on the phase they are in. Basically, we’ll consider two phases:

  1. browsing phase, when people add articles in the cart
  2. purchasing phase, when people have finished filling up the cart and start providing billing, delivery and payment information

In order to classify users, we’ll use the URL path: it starts with /purchase/ when the user is in the purchasing phase. Any other URL is considered browsing.
Based on the requested URL, we’ll take routing decisions and choose different farms with different capacities.

# defaults options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl purchase_phase path_beg /purchase/
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not own a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user is in the purchase phase, move it to a farm with less resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity purchase_phase 
  default_backend bk_appsrv_browse

# Default farm when everything goes well
backend bk_appsrv_browse
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users in the purchase phase
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10  mn
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_browse/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_browse/s2 maxconn 50

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

Blackout prevention

A website blackout is the worst thing that can happen: something has crashed and the application does not work anymore, or none of the servers are reachable.
When such a thing occurs, it is common to get 503 errors or a blank page after 30 seconds.
In both cases, end users are left with a negative feeling about the website. At the very least, an excuse page with an estimated recovery date would be appreciated. HAProxy makes it possible to communicate with end users even when none of the servers are available.
The configuration below shows how to do it:

# defaults options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl purchase_phase path_beg /purchase/
  acl no_appsrv nbsrv(bk_appsrv_browse) eq 0
  acl no_sorrysrv nbsrv(bk_sorrypage) eq 0
  # worst case management
  use_backend bk_worst_case_management if no_appsrv no_sorrysrv
  # use sorry servers if available
  use_backend bk_sorrypage if no_appsrv !no_sorrysrv
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not own a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user is in the purchase phase, move it to a farm with less resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity purchase_phase 
  default_backend bk_appsrv_browse

# Default farm when everything goes well
backend bk_appsrv_browse
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users in the purchase phase
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10  mn
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_browse/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_browse/s2 maxconn 50

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

backend bk_worst_case_management
  errorfile 503 /etc/haproxy/errors/503.txt

And the content of the file /etc/haproxy/errors/503.txt could look like:

HTTP/1.0 200 OK
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Content-Length: 246

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Maintenance</title>
</head>
<body>
<h1>Maintenance</h1>
We're sorry, ecommerce.com is currently under maintenance and will come back soon.
</body>
</html>

SEO optimization

Most search engines now take page response time into account.
The configuration below routes search engine bots to a dedicated server and, if it’s not available, forwards them to the default farm. Bots are identified by their User-Agent header.

# defaults options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl purchase_phase path_beg /purchase/
  acl bot hdr_sub(User-Agent) -i googlebot bingbot slurp
  acl no_appsrv nbsrv(bk_appsrv_browse) eq 0
  acl no_sorrysrv nbsrv(bk_sorrypage) eq 0
  acl no_seosrv nbsrv(bk_seo) eq 0
  # worst case management
  use_backend bk_worst_case_management if no_appsrv no_sorrysrv
  # use sorry servers if available
  use_backend bk_sorrypage if no_appsrv !no_sorrysrv
  # redirect bots
  use_backend bk_seo if bot !no_seosrv
  use_backend bk_appsrv if bot no_seosrv
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not own a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user is in the purchase phase, move it to a farm with less resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity purchase_phase 
  default_backend bk_appsrv_browse

# Default farm when everything goes well
backend bk_appsrv_browse
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users in the purchase phase
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10  mn
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_browse/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_browse/s2 maxconn 50

# Reserve resources for search engine bots
backend bk_seo
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  server s3 10.0.1.103:80 check

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

backend bk_worst_case_management
  errorfile 503 /etc/haproxy/errors/503.txt
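
You can verify the bot routing by faking a search engine User-Agent (any string containing one of the patterns above will match, case-insensitively):

# should be routed to bk_seo
curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://10.0.0.3/
# a regular browser goes through the normal decision chain
curl -I -A "Mozilla/5.0" http://10.0.0.3/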

Partner slowness protection

Some ecommerce websites rely on partners for some products or services. Unfortunately, if a partner’s webservice slows down, then our own application slows down too. Even worse, we may see sessions piling up and servers crashing due to lack of resources…
In order to prevent this, just configure your appservers to go through HAProxy to reach your partners’ webservices. HAProxy can shut down a session if a partner is too slow to answer. If the partner complains that you don’t send them enough deals, just tell them to improve their platform, maybe using an ADC like HAProxy / the ALOHA Load-Balancer 😉

frontend ft_partner1
  bind 10.0.0.3:8001
  use_backend bk_partner1

backend bk_partner1
  # the partner has 2 seconds to answer each requests
  timeout server 2s
  # you can add a maxconn here if you're not supposed to open 
  # too many connections on the partner application
  server partner1 1.2.3.4:80 check
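
On the application side, the only change is the partner endpoint, which now points to the local HAProxy listener instead of the partner directly (the variable name and URLs below are purely illustrative):

# before: the appserver called the partner directly
#PARTNER1_WEBSERVICE=http://webservice.partner1.example/
# after: go through HAProxy, which enforces the 2s server timeout
PARTNER1_WEBSERVICE=http://10.0.0.3:8001/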


When marketing spends money that doesn’t go into innovation

Load-balancing is not easy!

I usually have a great deal of respect for our competitors, whom I would rather call peers, and at least two of whom I have even privately helped in the past. Being the author of the most widespread open source load-balancer, which gave birth to the Aloha, I am very well placed to know how difficult this task is. So I feel a certain admiration for anyone who successfully takes this path, and especially for those who manage to enrich their products over the long term, something even harder than recreating one from scratch.

That is why I sometimes remind my colleagues that we must never mock our peers when very unpleasant things happen to them, such as leaving a root SSH key lying around on appliances, because this happens even to the biggest players, because we are not immune to a similar blunder despite all the care put into each new release, and because faced with such a situation we would be just as embarrassed.

However, there is one who does not seem to know these rules of good conduct, probably because they do not know the value of the research and development work put into their competitors’ products. I will not name them, as that would amount to giving them free advertising, which is their only specialty.

Indeed, ever since this competitor received $16M in funding, they have kept running down our products, as well as those of a few other peers, to our partners, in a completely gratuitous and baseless way (what English speakers call “FUD”), just to try to get a foothold.

I think their primary motivation probably comes from their bitterness at seeing their product systematically eliminated by our customers in comparative tests, on the criteria of ease of integration, performance and stability. This is the most likely cause, given that this competitor goes after several vendors at once, and that we ourselves meet peers in the field offering quality products, such as Cisco and F5 for hardware, or load-balancer.org for software. And of course there is also this aggressive competitor whom I am not naming.

It is true that it cannot be pleasant for them to lose the tests every time, but when we lose a test against a peer, which happens to us as it does to everyone, we strive to improve our product on the criterion that failed us, with the goal of winning the next time, instead of investing heavily in disinformation campaigns about the winner.

In my opinion, this competitor does not know where to start improving their solution, which explains why they attack several competitors at the same time. I think it would raise the level of the debate a little to give them a few basics for improving their solutions, and a slightly better chance of getting a foothold.

For a start, they would save time by taking a look at how our product works and drawing inspiration from it. I am really not in the habit of copying the competition, and I much prefer innovation. But for them it should not be a problem, since they already chose the same hardware as our mid-range, except that they changed its color, preferring the color of cuckolds. The poor souls did not know that good hardware is not everything, and that the software, as well as the quality of the integration, matter enormously (otherwise all VMs would be the same). That is how they ended up with their range shifted compared to ours: they systematically need the bigger box to reach a comparable level of performance (I am talking about real performance, measured in the field with the customers’ applications and the customers’ test methodologies, not the datasheet figures found on their website, which customers do not care about).

And yes, gentlemen, you should also look a little at the software side, the true know-how of a vendor. Using a generic desktop-oriented Linux distribution to build a network-optimized appliance was not very clever to begin with, but skipping system tuning altogether is sheer laziness. Instead of wasting your time looking for optimization manuals on the Internet, just take the parameters straight from our Aloha: you know they are good, and you will save time. Some settings probably will not exist on your side, since we constantly add features to better meet our customers’ needs, but it will already be a good start. Do not count on us to wait for you, though: while you copy us, we innovate, and we will always keep that head start :-). But at least you will look less ridiculous in pre-sales, and you will avoid embarrassing your partners in front of the customer with a product that still does not work after 6 hours spent on a simple test.

I am deliberately publishing this article in French. It will give them a little translation exercise, which will be useful for establishing themselves in France, where customers are very demanding about the use of the French language, which their support still does not speak, by the way.

Ah, one last point: I invite all readers of this article to search for “Exceliance” on Google, for example by clicking on this link: http://www.google.com/search?q=exceliance

You will notice that our favorite competitor even went as far as paying for Google AdWords so that their ads show up when you search for our name; they must really hold a grudge against us! They are the only one putting so much effort into trying to overshadow us, as if it were absolutely strategic for them. You will not see that from A10, Brocade, F5 or Cisco (nor from Exceliance, of course): these products each have their strengths in the field and do not need to resort to such methods to exist. Do click on their ad link: it will make them happy, it costs them a little with each click, and it will give you the chance to admire their fine products :-).


HAProxy, Varnish and the single hostname website

As explained in a previous article, HAProxy and Varnish are two great opensource software packages which aim to improve the performance, resilience and scalability of web applications.
We also saw that these two pieces of software are not competitors. Instead, they can work together nicely, each one bringing the other its own features, making any web infrastructure more agile and robust at the same time.

In the current article, I’m going to explain how to use both of them for a web application hosted on a single domain name.

Main advantages of each soft


As a reminder, here are the main features each product owns.

HAProxy


HAProxy‘s main features:

  • Real load-balancer with smart persistence
  • Request queueing
  • Transparent proxy

Varnish


Varnish‘s main features:

  • Cache server with stale content delivery
  • Content compression
  • Edge Side Includes

Common features


HAProxy and Varnish both have the features below:

  • Content switching
  • URL rewriting
  • DDOS protection

So if we need any of them, we could use either HAProxy or Varnish.

Why a single domain

In web application, there are two types of content: static and dynamic.

By dynamic, I mean content which is generated on the fly and which is dedicated to a single user based on their current browsing of the application. Anything which is not in this category can be considered static, even a page generated by PHP whose content changes every minute or every few seconds (like the CMS WordPress or Drupal). I call these pages “pseudo-static”.

The biggest strength of Varnish is that it can cache static objects, delivering them on behalf of the server, offloading most of the traffic from the server.



An object is identified by its Host header and its URL. When you have a single domain name, you have a single Host header for all your requests: static, pseudo-static or dynamic.

You can’t split your traffic: every request must arrive on a single type of device: the LB, the cache, etc…

A good practice for splitting dynamic and static content is to use one domain name per type of object: www.domain.tld for dynamic content and static.domain.tld for static content. That way, you could forward dynamic traffic to the LB and static traffic directly to the caches, as sketched below.
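
In DNS terms, the split would look like this minimal zone file excerpt (addresses are illustrative):

; domain.tld zone
www      IN  A  192.0.2.10   ; VIP of the load-balancer, dynamic content
static   IN  A  192.0.2.20   ; VIP of the cache layer, static content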



Now, I guess you understand that the way the web application’s hostnames are organized can have an impact on the platform you’re going to build.

In the current article, I’ll only focus on applications using a single domain name. We’ll see how we can route traffic to the right product despite the limitation of the single domain name.



Don’t worry, I’ll write another article later about the fun we can have when building a platform for an application hosted on multiple domain names.

Available architectures

Considering I summarize the “web application” as a single brick called “APPSERVER“, we have 2 main architectures available:

  1. CLIENT ==> HAPROXY ==> VARNISH ==> APPSERVER
  2. CLIENT ==> VARNISH ==> HAPROXY ==> APPSERVER

Pro and cons of HAProxy in front of Varnish


Pros:

  • Use HAProxy‘s smart load-balancing algorithms such as uri or url_param to make Varnish caching more efficient and improve the hit rate
  • Make the Varnish layer scalable, since it is load-balanced
  • Protect Varnish during its startup ramp-up (related to thread pool creation)
  • HAProxy can protect against DDOS and slowloris
  • Varnish can be used as a WAF

Cons:

  • no easy way to do application-layer persistence
  • HAProxy’s queueing system can hardly protect the application hidden behind Varnish
  • The client IP has to be forwarded in the X-Forwarded-For header (or any header you want)

Pro and cons of Varnish in front of HAProxy


Pros:

  • Smart layer 7 persistence with HAProxy
  • The HAProxy layer is scalable (with persistence preserved), since it is load-balanced by Varnish
  • APPSERVER protection through HAProxy request queueing
  • Varnish can be used as a WAF
  • HAProxy can use the client IP address (provided by Varnish in an HTTP header) to do transparent proxying (connecting to the APPSERVER with the client IP)

Cons:

  • HAProxy can’t protect against DDOS; Varnish has to do it
  • The cache size must be big enough to store all objects
  • The Varnish layer is not scalable

Finally, which is the best architecture??


There is no need to choose which of the two architectures above is the least bad for you.

It is better to build a platform where there are no negative points.

The Architecture


The diagram below shows the architecture we’re going to work on.
[diagram: haproxy_varnish]
Legend:

  • H: HAProxy Load-Balancers (could be ALOHA Load-Balancer or any home made)
  • V: Varnish servers
  • S: Web application servers, whatever product is used here (tomcat, jboss, etc…)
  • C: Client or end user

Main roles of each layers:

  • HAProxy: layer 7 traffic routing, first line of protection against DDOS (SYN flood, slowloris, etc…), application request flow optimization
  • Varnish: Caching, compression. Could be used later as a WAF to protect the application
  • Server: hosts the application and the static content
  • Client: browse and use the web application

traffic flow


Basically, the client sends all its requests to HAProxy; then HAProxy, based on the URL or file extension, takes a routing decision:

  • If the request looks like it is for a (pseudo-)static object, it is forwarded to Varnish.
    If Varnish misses the object, it uses HAProxy to get the content from the server.
  • All the other requests are sent to the appservers. If we’ve done our job properly, only dynamic traffic should arrive here.

I don’t want to use Varnish as the default option in the flow, because dynamic content could get cached, which could lead to somebody’s personal information being sent to everybody…

Furthermore, in case of massive misses, or of requests purposely built to bypass the caches, I don’t want the servers to be hammered by Varnish, so HAProxy protects them with tight traffic regulation between Varnish and the appservers.

Dynamic traffic flow


The diagram below shows how the request requiring dynamic content should be ideally routed through the platform:
[diagram: haproxy_varnish_dynamic_flow]
Legend:

  1. The client sends its request to HAProxy
  2. HAProxy chooses a server based on cookie persistence, or on the load-balancing algorithm if there is no cookie
  3. The server processes the request and sends the response back to HAProxy, which forwards it to the client

Static traffic flow


The diagram below shows how the request requiring static content should be ideally routed through the platform:
[diagram: haproxy_varnish_static_flow]

  1. The client sends its request to HAProxy, which sees that it asks for static content
  2. HAProxy forwards the request to Varnish. If Varnish has the object in cache (a HIT), it sends it back to HAProxy, which forwards it to the client
  3. If Varnish doesn’t have the object in cache, or if the cached copy has expired (a MISS), Varnish forwards the request to the dedicated HAProxy frontend
  4. HAProxy chooses a server (roundrobin). The response goes back to the client through Varnish

In case of a MISS, the flow looks heavy 🙂 I chose to do it that way in order to use HAProxy’s traffic regulation features to prevent Varnish from flooding the servers. Furthermore, since Varnish sees only static content, its HIT rate is over 98%, so the overhead is very low and the protection is improved.

Pros of such architecture

  • Use smart load-balancing algorithms such as uri or url_param to make Varnish caching more efficient and improve the hit rate
  • Make the Varnish layer scalable, since it is load-balanced
  • Startup protection for Varnish and APPSERVER, allowing server reboots or farm expansion even under heavy load
  • HAProxy can protect against DDOS and slowloris
  • Smart layer 7 persistence with HAProxy
  • APPSERVER protection through HAProxy request queueing
  • HAProxy can use the client IP address to do transparent proxying (connecting to the APPSERVER with the client IP)
  • Cache farm failure detection and routing to application servers (worst case management)
  • Can load-balance any type of TCP based protocol hosted on APPSERVER

Cons of such architecture


To be totally fair, there are a few “non-blocking” issues:

  • The HAProxy layer is hardly scalable (one must use 2 crossed virtual IPs declared in the DNS)
  • Varnish can’t be used as a WAF, since it only sees static traffic passing through. This could be changed very easily though

Configuration

HAProxy Configuration

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin
  log 10.0.1.10 local3

# default options
defaults
  option http-server-close
  mode http
  log global
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# HAProxy's stats
listen stats
  bind 10.0.1.3:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy\ Statistics
  stats auth    admin:admin

# main frontend dedicated to end users
frontend ft_web
  bind 10.0.0.3:80
  acl static_content path_end .jpg .gif .png .css .js .htm .html
  acl pseudo_static path_end .php
  acl dynamic_path path_beg /dynamic/
  acl image_php path_beg /images.php
  acl varnish_available nbsrv(bk_varnish_uri) ge 1
  # Caches health detection + routing decision
  use_backend bk_varnish_uri if varnish_available static_content
  use_backend bk_varnish_uri if varnish_available pseudo_static !dynamic_path
  use_backend bk_varnish_url_param if varnish_available image_php
  # dynamic content or all caches are unavailable
  default_backend bk_appsrv

# appsrv backend for dynamic content
backend bk_appsrv
  balance roundrobin
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  # Transparent proxying using the client IP from the TCP connection
  source 10.0.1.1 usesrc clientip
  server s1 10.0.1.101:80 cookie s1 check maxconn 250
  server s2 10.0.1.102:80 cookie s2 check maxconn 250

# static backend with balance based on the uri, including the query string
# to avoid caching an object on several caches
backend bk_varnish_uri
  balance uri # in latest HAProxy version, one can add 'whole' keyword
  # Varnish must tell it's ready to accept traffic
  option httpchk HEAD /varnishcheck
  http-check expect status 200
  # client IP information
  option forwardfor
  # avoid request redistribution when the number of caches changes (crash or start up)
  hash-type consistent
  server varnish1 10.0.1.201:80 check maxconn 1000
  server varnish2 10.0.1.202:80 check maxconn 1000

# cache backend with balance based on the value of the URL parameter called "id"
# to avoid caching an object on several caches
backend bk_varnish_url_param
  balance url_param id
  # client IP information
  option forwardfor
  # avoid request redistribution when the number of caches changes (crash or start up)
  hash-type consistent
  server varnish1 10.0.1.201:80 maxconn 1000 track bk_varnish_uri/varnish1
  server varnish2 10.0.1.202:80 maxconn 1000 track bk_varnish_uri/varnish2

# frontend used by Varnish servers when updating their cache
frontend ft_web_static
  bind 10.0.1.3:80
  monitor-uri /haproxycheck
  # Tells Varnish to stop asking for static content when servers are dead
  # Varnish would deliver staled content
  monitor fail if nbsrv(bk_appsrv_static) eq 0
  default_backend bk_appsrv_static

# appsrv backend used by Varnish to update their cache
backend bk_appsrv_static
  balance roundrobin
  # anything different than a status code 200 on the URL /staticcheck.txt
  # must be considered as an error
  option httpchk HEAD /staticcheck.txt
  http-check expect status 200
  # Transparent proxying using the client IP provided by X-Forwarded-For header
  source 10.0.1.1 usesrc hdr_ip(X-Forwarded-For)
  server s1 10.0.1.101:80 check maxconn 50 slowstart 10s
  server s2 10.0.1.102:80 check maxconn 50 slowstart 10s

Varnish Configuration

backend bk_appsrv_static {
        .host = "10.0.1.3";
        .port = "80";
        .connect_timeout = 3s;
        .first_byte_timeout = 10s;
        .between_bytes_timeout = 5s;
        .probe = {
                .url = "/haproxycheck";
                .expected_response = 200;
                .timeout = 1s;
                .interval = 3s;
                .window = 2;
                .threshold = 2;
                .initial = 2;
        }
}

acl purge {
        "localhost";
}

sub vcl_recv {
### Default options

        # Health Checking
        if (req.url == "/varnishcheck") {
                error 751 "health check OK!";
        }

        # Set default backend
        set req.backend = bk_appsrv_static;

        # grace period (stale content delivery while revalidating)
        set req.grace = 30s;

        # Purge request
        if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                }
                return (lookup);
        }

        # Accept-Encoding header clean-up
        if (req.http.Accept-Encoding) {
                # use gzip when possible, otherwise use deflate
                if (req.http.Accept-Encoding ~ "gzip") {
                        set req.http.Accept-Encoding = "gzip";
                } elsif (req.http.Accept-Encoding ~ "deflate") {
                        set req.http.Accept-Encoding = "deflate";
                } else {
                        # unknown algorithm, remove accept-encoding header
                        unset req.http.Accept-Encoding;
                }

                # Microsoft Internet Explorer 6 is well known to be buggy with compression and css / js
                if (req.url ~ "\.(css|js)" && req.http.User-Agent ~ "MSIE 6") {
                        remove req.http.Accept-Encoding;
                }
        }

### Per host/application configuration
        # bk_appsrv_static
        # Stale content delivery
        if (req.backend.healthy) {
                set req.grace = 30s;
        } else {
                set req.grace = 1d;
        }

        # Cookie ignored in these static pages
        unset req.http.cookie;

### Common options
         # Static objects are first looked up in the cache
        if (req.url ~ "\.(png|gif|jpg|swf|css|js)(\?.*|)$") {
                return (lookup);
        }

        # if we arrive here, we look for the object in the cache
        return (lookup);
}

sub vcl_hash {
        hash_data(req.url);
        if (req.http.host) {
                hash_data(req.http.host);
        } else {
                hash_data(server.ip);
        }
        return (hash);
}

sub vcl_hit {
        # Purge
        if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
        }

        return (deliver);
}

sub vcl_miss {
        # Purge
        if (req.request == "PURGE") {
                error 404 "Not in cache.";
        }

        return (fetch);
}

sub vcl_fetch {
        # Stale content delivery
        set beresp.grace = 1d;

        # Hide Server information
        unset beresp.http.Server;

        # Store compressed objects in memory
        # They would be uncompressed on the fly by Varnish if the client doesn't support compression
        if (beresp.http.content-type ~ "(text|application)") {
                set beresp.do_gzip = true;
        }

        # remove any cookie on static or pseudo-static objects
        unset beresp.http.set-cookie;

        return (deliver);
}

sub vcl_deliver {
        unset resp.http.via;
        unset resp.http.x-varnish;

        # could be useful to know if the object was in cache or not
        if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
        } else {
                set resp.http.X-Cache = "MISS";
        }

        return (deliver);
}

sub vcl_error {
        # Health check
        if (obj.status == 751) {
                set obj.status = 200;
                return (deliver);
        }
}
  


HAProxy and Varnish comparison

In the opensource world, there are some very smart products which are often used to build high performance, reliable and scalable architectures.
HAProxy and Varnish are both in this category.

Since we can’t really compare a reverse-proxy cache and a reverse-proxy load-balancer, I’m just going to focus on what both products have in common, as well as on the advantages of each of them.
The list is not exhaustive and only covers the most used / most interesting features, so feel free to add a comment if you want me to complete it.

Common points between HAProxy and Varnish


Before comparing the differences, we can summarize the points in common:

  • reverse-proxy mode
  • advanced HTTP features
  • no SSL offloading
  • client-side HTTP 1.1 with keepalive
  • tunnel mode available
  • high performance
  • basic load-balancing
  • server health checking
  • IPv6 ready
  • Management socket (CLI)
  • Professional services and training available

Features available in HAProxy and not in Varnish


The features below are available in HAProxy, but aren’t in Varnish:

  • advanced load-balancer
  • multiple persistence methods
  • DOS and DDOS mitigation
  • Advanced and custom logging
  • Web interface
  • Server / application protection through queue management, slow start, etc…
  • SNI content switching (see the sketch after this list)
  • Named ACLs
  • Full HTTP 1.1 support on the server side, except keep-alive
  • Can work at TCP level with any L7 protocol
  • Proxy protocol for both client and server
  • powerful log analyzer tool (halog)
  • <private joke> 2002 website design </private joke>
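
A minimal sketch of the SNI content switching and named ACLs mentioned above (hypothetical names and addresses; HAProxy stays in TCP mode here since it does not offload SSL, and reads the SNI straight from the TLS ClientHello):

frontend ft_ssl
  bind 10.0.0.1:443
  mode tcp
  # wait for the TLS ClientHello before taking the routing decision
  tcp-request inspect-delay 5s
  tcp-request content accept if { req_ssl_hello_type 1 }
  # named ACL matching the SNI sent by the client
  acl app1_sni req_ssl_sni -i app1.domain.tld
  use_backend bk_ssl_app1 if app1_sni
  default_backend bk_ssl_default

backend bk_ssl_app1
  mode tcp
  server app1 10.0.0.11:443 check

backend bk_ssl_default
  mode tcp
  server def1 10.0.0.10:443 check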

Features available in Varnish and not in HAProxy


The features below are available in Varnish, but aren’t in HAProxy:

  • caching
  • grace mode (stale content delivery)
  • saint mode (manages origin server errors)
  • modular software (with a lot of modules available)
  • intuitive VCL configuration language
  • HTTP 1.1 on server side
  • TCP connection re-use
  • Edge side includes (ESI)
  • a few command line tools for stats (varnishstat, varnishhist, etc…)
  • powerful live traffic analyzer (varnishlog)
  • <private joke> 2012 website design </private joke>

Conclusion


Even if HAProxy can do TCP proxying, it is often used in front of web applications, exactly where we find Varnish :).
They complement each other very well: Varnish makes the website faster by taking over the delivery of static objects, while HAProxy ensures smooth load-balancing with smart persistence and DDOS mitigation.

Basically, HAProxy and Varnish complement each other very well; despite being “competitors” on a few features, each of them has its own domain of expertise where it shines: HAProxy is a reverse-proxy load-balancer and Varnish is a reverse-proxy cache.

To be honest, when we at HAProxy Technologies work on infrastructures where an Aloha Load-Balancer or HAProxy is deployed, we often see Varnish deployed too. And when it is not the case, we often recommend that the customer deploy one if we feel it would improve the website’s performance.
Recently, I had a discussion with Ruben and Kristian when they came to Paris, and they told me that they also often see an HAProxy on the infrastructures where Varnish is deployed.

So the real question is: since Varnish and HAProxy are a bit similar but complement each other so well, how can we use them together?
The answer could be very long, so stay tuned, I’ll try to answer this question in an upcoming article.


Hypervisors virtual network performance comparison from a Virtualized load-balancer point of view

Introduction

At HAProxy Technologies, we edit and sell a Load-Balancer appliance called ALOHA (stands for Application Layer Optimisation and High-Availability).
A few months ago, we managed to make it run on the most common hypervisors available:

  • VMware (ESX, vSphere)
  • Citrix XenServer
  • HyperV
  • Xen OpenSource
  • KVM

<ADVERTISEMENT> So whatever your hypervisor is, you can run an Aloha on top of it 🙂 </ADVERTISEMENT>

Since a Load-Balancer appliance is Network IO intensive, we thought it was a good opportunity to bench each Hypervisor from a virtual network performance point of view.

Well, more and more companies use Virtualization in their infrastructures, so we guessed that a lot of people would be interested in the results of this bench; that’s why we decided to publish them on our blog.

Things to bear in mind about virtualization

One of the interesting features of Virtualization is the ability to consolidate several servers onto a single piece of hardware.
As a consequence, the resources (CPU, memory, disk and network IOs) are shared between several virtual machines.
Another thing to take into account is that the Hypervisor is a new “layer” between the hardware and the OS inside the VM, which means it may have an impact on performance.

Purpose of benchmarking Hypervisors

First of all: WE ARE TOTALLY NEUTRAL AND HAVE NO INTEREST SAYING GOOD OR BAD THINGS ABOUT ANY HYPERVISORS.

Our main goal here is to check if each Hypervisor performs well enough to allow us to sell our Virtual Appliance on top of it.
From the tests we’ll run, we want to be able to measure the impact of each Hypervisor on Virtual Machine performance.

Benchmark platform and procedure

To run these tests, we use the same physical server for all Hypervisors, simply swapping the hard-drive in order to run each hypervisor independently.

Hypervisor Hardware summarized below:

  • CPU quad core i7 @3.4GHz
  • 16G of memory
  • Network card 1G copper e1000e

NOTE: we benched some other network cards and got UGLY results (cf. the conclusion).
NOTE: there is a single VM running on the hypervisor: The Aloha.

The Aloha Virtual Appliance used is the Aloha VA 4.2.5 with 1G of memory and 2 vCPUs.
The client and WWW servers are physical machines plugged into the same LAN as the Hypervisor.
The client tool is inject and the web server behind the Aloha VA is httpterm.
So basically, the only thing that will change during these tests is the Hypervisor.

The Aloha is configured in reverse-proxy mode (using HAProxy) between the client and the server, load-balancing and analyzing HTTP requests.
We focused mainly on virtual networking performance: number of HTTP connections per second and associated bandwidth.
We ran the benchmark with different object sizes: 0, 1K, 2K, 4K, 8K, 16K, 32K, 48K, 64K.
NOTE: by “HTTP connection”, we mean a single HTTP request with its response over a single TCP connection, like in HTTP/1.0.

Basically, the 0K object test is used to get the number of connections per second the VA can handle, while the 64K object test is used to measure the maximum bandwidth.

NOTE: the maximum bandwidth will be 1G anyway, since we’re limited by the physical NIC; as a rough order of magnitude (ignoring protocol overhead), 64K objects on a 1Gbps link cap out around 1900 requests per second (10^9 / 8 / 65536).

We are going to bench Network IOs only, since this is the intensive usage a load-balancer makes of the system.
We won’t bench disk IOs…

Tested Hypervisors


We benched a native Aloha against the Aloha VA running on each of the Hypervisors listed below:

  • HyperV
  • RHEV (KVM based)
  • vSphere 5.0
  • Xen 4.1 on Ubuntu 11.10
  • XenServer 6.0

Benchmark results


Raw server performance (native tests, without any hypervisor)

For the first test, we run the Aloha on the server itself without any Hypervisor.
That way, we’ll have some figures on the capacity of the server itself. We’ll use those numbers later in the article to compare the impact of each Hypervisor on performance.

[Graph: raw performance of the native Aloha, without any hypervisor]

Microsoft HyperV


We tested HyperV on a Windows 2008 r2 server.
For this hypervisor, 2 network cards are available:

  1. Legacy network adapter: emulates the network layer through the tulip driver.
    ==> With this driver, we got around 1.5K requests per second, which is really poor…
  2. Network adapter: requires the hv_netvsc driver, supplied by Microsoft as open source and included in the Linux Kernel since 2.6.32.
    ==> this is the driver we used for the tests

[Graph: HyperV performance]

RHEV 3.0 Beta (KVM based)

RHEV is Red Hat’s Hypervisor, based on KVM.
For the Virtualization of the Network Layer, RHEV uses the virtio drivers.
Note that RHEV was still in Beta version when running this test.

VMware vSphere 5

There are 3 types of network cards available for vSphere 5.0:

  1. Intel e1000: e1000 driver, emulates the network layer into the VM
  2. VMXNET 2: allows network layer virtualization
  3. VMXNET 3: allows network layer virtualization

The best results were obtained with the vmxnet2 driver.

Note: we tested neither vSphere 4 nor ESX 3.5.

[Graph: vSphere 5.0 performance]

Xen OpenSource 4.1 on Ubuntu 11.10

Since CentOS 6.0 does not provide Xen OpenSource in its official repositories, we decided to use the latest (Oneiric Ocelot) Ubuntu server distribution, with Xen 4.1 on top of it.
Xen provides two network interfaces:

  1. an emulated one, based on the 8139too driver
  2. a virtualized network layer, using the xen-vnif driver

Of course, the results are much better with xen-vnif, so this is the driver we used for the tests.

[Graph: Xen OpenSource 4.1 performance]

Citrix XenServer 6.0


The network driver used for XenServer is the same one as for Xen OpenSource: xen-vnif.

[Graph: Citrix XenServer 6.0 performance]

Hypervisors comparison


HTTP connections per second


The graph below summarizes the HTTP connections per second capacity of each Hypervisor.
It shows the Hypervisor overhead: compare the light blue line, which represents the server capacity without any Hypervisor, to each hypervisor’s line.

[Graph: HTTP connections per second, hypervisors compared to native]

Bandwidth usage


The graph below summarizes the bandwidth each Hypervisor can deliver.
Again, the Hypervisor overhead appears when comparing the light blue line, which represents the server capacity without any Hypervisor, to each hypervisor’s line.

[Graph: bandwidth usage, hypervisors compared to native]

Performance loss


Well, comparing Hypervisors to each other is nice, but remember, we wanted to know how much performance is lost in the hypervisor layer.
The graph below shows, as a percentage, the loss generated by each hypervisor compared to the native Aloha.
The higher the percentage, the worse for the hypervisor…

[Graph: performance loss of each hypervisor, relative to native]

Conclusion

  • the Hypervisor layer has a non-negligible impact on the networking performance of a Virtualized Load-Balancer running in reverse-proxy mode.
    But I guess it would be the same for any VM which is Network IO intensive
  • The shorter the connections, the bigger the impact.
    For very long connections like TSE, IMAP, etc… virtualization might make sense
  • vSphere seems ahead of its competitors from a performance point of view.
  • HyperV and Citrix XenServer show interesting performance.
  • RHEV (KVM) and Xen OpenSource can still improve their performance, unless this is related to our procedure.
  • Even if the hardware layer is no longer accessed directly by the VM, it still has a huge impact on performance.
    E.g., on vSphere, we could not go higher than 20K connections per second with a Realtek NIC in the server…
    With an Intel e1000e NIC, we got up to 55K connections per second…
    So, even with virtualization, hardware counts!


Web traffic limitation

Synopsis

For different reasons, we may want to limit the number of connections or the number of requests we allow to a web farm.
For example:

  • give more capacity to authenticated users compared to anonymous ones
  • limit web farm users per virtualhost
  • protect your website from spiders
  • etc…

Basically, we’ll manage two web farms: one with as much capacity as we need, and another one to which we’ll redirect the users we want to slow down.
The routing decision can be taken using a header, a cookie, a part of the URL, the source IP address, etc… (a couple of variants are sketched after the configuration below).

Configuration

The configuration below would do the job.

There are only two webservers in the farm, but we want to slow down some virtual hosts or some old and almost never used applications, in order to protect the regular traffic and leave it more capacity.

You can play with the inspect-delay time to be more or less aggressive: the longer the delay, the slower the limited traffic.

frontend www
  bind :80
  mode http
  acl spiderbots hdr_cnt(User-Agent) eq 0
  acl personnal hdr(Host) www.personnalwebsite.tld www.oldname.tld
  acl oldies path_beg /old /foo /bar
  use_backend limited_www if spiderbots or personnal or oldies
  default_backend www

backend www
 mode http
 server be1  192.168.0.1:80 check maxconn 100
 server be2  192.168.0.2:80 check maxconn 100

backend limited_www
 mode http
 acl too_fast be_sess_rate gt 10
 acl too_many be_conn gt 10
 tcp-request inspect-delay 3s
 # accept immediately unless the backend is both too fast and too loaded
 tcp-request content accept if ! too_fast or ! too_many
 # otherwise, hold the request until the inspect-delay expires
 tcp-request content accept if WAIT_END
 server be1  192.168.0.1:80 check maxconn 100
 server be2  192.168.0.2:80 check maxconn 100
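
As mentioned in the synopsis, the same routing decision could be taken on a cookie or on the client source IP; hypothetical additions to the frontend above (the cookie name and the network are made up for the example, and the cook fetch assumes a recent HAProxy):

  # users carrying a hypothetical "profile=free" cookie
  acl slow_cookie cook(profile) -m str free
  # a hypothetical network we want to slow down
  acl slow_net src 203.0.113.0/24
  use_backend limited_www if slow_cookie or slow_net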

Results

With the configuration above, an apache bench run can reach up to 3600 req/s on the regular farm but only 9 req/s on the limited one.
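
Going a bit further, a recent HAProxy (1.5 or above) could take a similar decision per client rather than per farm, by tracking each source IP in a stick-table; a minimal sketch (the thresholds and names are illustrative, this is not part of the setup above):

frontend www_per_client
  bind :80
  mode http
  # track each source IP and store its HTTP request rate over 10s
  stick-table type ip size 100k expire 30s store http_req_rate(10s)
  http-request track-sc0 src
  # clients above 100 requests per 10s land on the limited farm
  use_backend limited_www if { sc0_http_req_rate gt 100 }
  default_backend www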
