HAProxy and container IP changes in Docker

HAProxy and Docker containers

Docker is a nice tool to handle containers: it allows building and running your apps in a simple and efficient way.

When used in production together with HAProxy, devops teams face a big challenge: how to follow up on a container IP change when a container is restarted?

This blog article aims at giving a first answer to this question.

The version of docker used for this article is 1.8.1 (very important, since docker’s default behavior has changed in 1.9.0…)

HAProxy, webapp and docker diagram

The diagram below shows how docker runs on my laptop:

  • A docker0 network interface, with IP
  • all containers run in the subnet
  +-----------------------docker-----------------------+
  |                                                    | docker0:
  |  +---------+  +---------+       +----------+       |
  |  | HAProxy |  | appsrv1 |       | rsyslogd |       |
  |  +---------+  +---------+       +----------+       |
  |                                                    |
  +----------------------------------------------------+

The IP address associated with docker0 will be used to expose some services.

In this article, we’ll start up 3 containers:

  1. rsyslogd: where HAProxy will send all its logs
  2. appsrv1: our application server, which may be restarted at any time
  3. haproxy: our load-balancer, which must follow appsrv1's IP

Building and running the lab


First, we need the rsyslogd and haproxy containers. They can be built from their respective Dockerfiles. Then run:

docker build -t blog:haproxy_dns ~/tmp/haproxy/blog/haproxy_docker_dns_link/blog_haproxy_dns/
docker build -t blog:rsyslogd ~/tmp/haproxy/blog/haproxy_docker_dns_link/blog_rsyslogd/

I consider the appsrv container to be yours: it's your application.

Starting up our lab

Docker assigns IPs to containers in the order they are started up, incrementing the last byte for each new container.

To make it simpler, let's restart docker first, so our container IPs are predictable:

sudo /etc/rc.d/docker restart

Then let’s start up our rsyslogd container:

docker run --detach --name rsyslogd --hostname=rsyslogd \
	--publish= \

And let’s attach a terminal to it:

docker attach rsyslogd

Now, run appsrv container as appsrv1:

docker run --detach --name appsrv1 --hostname=appsrv1 demo:appsrv

And finally, let’s start HAProxy, with a docker link to appsrv1:

docker run --detach --name haproxy --hostname=haproxy \
	--link appsrv1:appsrv1 \

Docker links, /etc/hosts file updates and DNS

When using the "--link" option, docker creates a new entry in the container's /etc/hosts file with the IP address and name provided by the "link" directive.
Docker will also update this file when the remote container's (here appsrv1) IP address changes (i.e. when the container is restarted).

If you're familiar with HAProxy, you already know it doesn't do any file system I/O at run time. Furthermore, HAProxy doesn't use the /etc/hosts file directly. The glibc might use it when HAProxy asks for DNS resolution while parsing the configuration file (read below for DNS resolution at runtime).

That said, if appsrv1's IP changes, the /etc/hosts file is updated accordingly, but HAProxy is not aware of the change and the application may fail.
A quick solution would be to reload the HAProxy process in its container, to force it to take the new IP into account.

A more reliable solution is to use HAProxy 1.6's DNS resolution capability to follow the IP change. With this purpose in mind, we added 2 tools into our HAProxy container:

  1. dnsmasq: a tiny piece of software which can act as a DNS server, using the /etc/hosts file as its database
  2. inotify-tools: watches for changes on the /etc/hosts file and forces dnsmasq to reload it when necessary (a minimal sketch of such a watcher follows this list)
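
Below is a minimal sketch of what the watcher inside the HAProxy container can look like, assuming inotifywait (from inotify-tools) is available and dnsmasq reads /etc/hosts; the actual script shipped in the image may differ:

#!/bin/sh
# Reload dnsmasq whenever docker rewrites /etc/hosts.
while inotifywait -e modify,move,create /etc/hosts; do
    # dnsmasq re-reads /etc/hosts on SIGHUP
    kill -HUP "$(pidof dnsmasq)"
done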

I guess you got it by now:

  • when appsrv1 is restarted, docker gives it a new IP
  • docker then writes this new IP address into every /etc/hosts file that needs it (those set up by "--link" directives)
  • once the file is updated, inotify-tools detects the change and triggers a dnsmasq reload
  • HAProxy periodically (the interval is configurable) queries the DNS server and quickly gets the new IP address from dnsmasq (a minimal HAProxy-side configuration sketch follows this list)
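
For reference, the HAProxy side of this setup boils down to a resolvers section pointing at dnsmasq and a server line using it. The snippet below is only a sketch, assuming dnsmasq listens on 127.0.0.1:53 inside the HAProxy container and the application listens on port 80:

resolvers docker
 nameserver dnsmasq 127.0.0.1:53
 resolve_retries 3
 timeout retry    1s
 hold valid       10s

backend b_myapp
 server appsrv1 appsrv1:80 check resolvers docker resolve-prefer ipv4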

Docker container restart and HAProxy follow-up in action

At this stage, we should have a terminal attached to the rsyslogd container and we should be able to see HAProxy logging. Let's give it a try:


Nov 17 09:29:09 haproxy[10]: [17/Nov/2015:09:29:09.729] f_myapp b_myapp/appsrv1 0/0/0/1/1 200 858 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"

Now, let’s consider your dev team delivered a new version of your application, so you build it and need to restart its running container:

docker restart appsrv1

and voilĂ :

==> /var/log/haproxy/events <==
Nov 17 09:29:29 haproxy[10]: b_myapp/appsrv1 changed its IP from to by docker/dnsmasq.
Nov 17 09:29:29 haproxy[10]: b_myapp/appsrv1 changed its IP from to by docker/dnsmasq.

Let’s test the application again:


Nov 17 09:29:31 haproxy[10]: [17/Nov/2015:09:29:31.013] f_myapp b_myapp/appsrv1 0/0/0/0/0 200 858 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"


There are a few limitations in this mechanism:

  • it's painful to maintain "--link" directives when you have tens or hundreds of containers or more…
  • the host computer, and computers on the host's network, can't easily access our containers, because we don't know their IPs and their hostnames are resolved in the HAProxy container only
  • if we want to add more application servers to HAProxy's farm, we still need to restart HAProxy's container (and update the configuration accordingly)

To fix some of the issues above, we can dedicate a container to performing DNS resolution within our docker world and delivering responses to any running container or host on the network. We'll cover that in an upcoming blog article.


What’s new in HAProxy 1.6

[ANNOUNCE] HAProxy 1.6.0 released

Yesterday, the 13th of October, Willy announced the release of HAProxy 1.6.0, after 16 months of development!
The first piece of good news is that the release cycle is getting a bit faster, and we aim to keep it that way.
A total of 1156 commits from 59 people have been made since the release of 1.5.0, 16 months ago.

Please find here the official announcement: [ANNOUNCE] haproxy-1.6.0 now released!.

In his mail, Willy detailed all the features that have been added to this release. The purpose of this blog article is to highlight a few of them, providing the benefits and some configuration examples.

NOTE: Most of the features below were already backported and integrated into our HAProxy Enterprise and ALOHA products.

HAProxy Enterprise is our Open Source version of HAProxy, based on the HAProxy community stable branch, into which we backport many features from the dev branch and which we package to make the most stable, reliable, advanced and secure version of HAProxy. It also comes with third-party software to fill the gap between a simple HAProxy process and a full load-balancer (VRRP, syslog, SNMP, Route Health Injection, etc.). Cherry on the cake: we provide "enterprise" support on top of it.

NOTE 2: the list of new features introduced here is not exhaustive. The examples are written in a quick and dirty way to show you how to get started with each feature. Don't run these examples in production :)

It's 2015, let's use quotes in the configuration file

Those who have used HAProxy for a long time will be happy to know that the '\ ' (backslash-space) sequence is just an old painful memory as of 1.6 :)
We can now write:

reqirep "^Host: www.(.*)" "Host: foobar\1"


option httpchk GET / "HTTP/1.1\r\nHost: www.domain.com\r\nConnection: close"

Lua Scripting

Maybe the biggest change that occurred is the integration of Lua.
Quote from Lua’s website: “Lua is a powerful, fast, lightweight, embeddable scripting language.“.

Basically, everyone now has the ability to extend HAProxy by writing and running their own Lua scripts. No need to write C code, maintain patches, etc…
If some Lua snippets become very popular, we may write the equivalent feature in C and make it available in HAProxy mainline.

One of the biggest challenges Thierry faced when integrating Lua was giving it the ability to process Lua code and manage network sockets in a non-blocking way.

HAProxy requires Lua 5.3 or above.

With Lua, we can add new functions to the following HAProxy elements:

  • Service
  • Action
  • Sample-fetch
  • Converter

Compiling HAProxy and Lua

Installing Lua 5.3

cd /usr/src
curl -R -O http://www.lua.org/ftp/lua-5.3.0.tar.gz
tar zxf lua-5.3.0.tar.gz
cd lua-5.3.0
make linux
sudo make INSTALL_TOP=/opt/lua53 install

The Lua 5.3 library and include files are now installed in /opt/lua53.

Compiling HAProxy with Lua support

make TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1 USE_LUA=1 LUA_LIB=/opt/lua53/lib/ LUA_INC=/opt/lua53/include/

HAProxy/Lua simple Hello world example

A simple Hello world! in Lua could be written like this:

  • The Lua code, in a file called hello_world.lua:

    core.register_service("hello_world", "tcp", function(applet)
       applet:send("hello world\n")
    end)

  • The HAProxy configuration (the bind address below is just an example):

    global
       lua-load hello_world.lua

    listen proxy
       bind 127.0.0.1:10001
       tcp-request content use-service lua.hello_world
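
If you load this configuration (for instance with haproxy -f hello_world.cfg) and connect to the example port used above, you should get the greeting back:

echo | nc 127.0.0.1 10001
hello world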

More with Lua

Please read the doc, ask your questions on HAProxy‘s ML (no registration needed): haproxy@formilux.org.
Of course, we’ll write more articles later on this blog about Lua integration.


Captures

HAProxy's running context is very important when writing configuration. In HAProxy, each context is isolated, i.e. you can't use a request header when processing the response.
With HAProxy 1.6, this becomes possible: you can declare capture slots, store data in them and use them at any time during a session.

defaults
 mode http

frontend f_myapp
 bind :9001
 declare capture request len 32 # id=0 to store Host header
 declare capture request len 64 # id=1 to store User-Agent header
 http-request capture req.hdr(Host) id 0
 http-request capture req.hdr(User-Agent) id 1
 default_backend b_myapp

backend b_myapp
 http-response set-header Your-Host %[capture.req.hdr(0)]
 http-response set-header Your-User-Agent %[capture.req.hdr(1)]
 server s1 check

Two new headers are inserted in the response:
Your-User-Agent: curl/7.44.0
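
A quick way to see them, assuming the frontend above is reachable locally on port 9001, is to dump the response headers with curl; both the Your-Host and Your-User-Agent headers should show up:

curl -s -o /dev/null -D - http://127.0.0.1:9001/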

Multiprocess, peers and stick-tables

In 1.5, we introduced "peers" to synchronize stick-table content between HAProxy servers. This feature was not compatible with multi-process mode.
In 1.6, we can now synchronize the content of a table as long as it is stuck to a single process. This allows creating configurations for massive SSL processing pointing to a single backend bound to a single process, where we can use stick-tables and synchronize their content.

peers article
 peer itchy

global
 pidfile /tmp/haproxy.pid
 nbproc 3

defaults
 mode http

frontend f_scalessl
 bind-process 1,2
 bind :9001 ssl crt /home/bassmann/haproxy/ssl/server.pem
 default_backend bk_lo

backend bk_lo
 bind-process 1,2
 server f_myapp unix@/tmp/f_myapp send-proxy-v2

frontend f_myapp
 bind-process 3
 bind unix@/tmp/f_myapp accept-proxy
 default_backend b_myapp

backend b_myapp
 bind-process 3
 stick-table type ip size 10k peers article
 stick on src
 server s1 check



Log-tag

It is now possible to set a syslog tag per process, frontend or backend. The purpose is to ease the job of syslog servers when classifying logs.
If no log-tag is provided, the default value is the program name.

Example applied to the configuration snippet right above:

frontend f_scalessl
 log-tag SSL

frontend f_myapp
 log-tag CLEAR

New log format variables

New log format variables have appeared (a sample log-format line follows the list):

  • %HM: HTTP method (ex: POST)
  • %HP: HTTP request URI without query string (path)
  • %HQ: HTTP request URI query string (ex: ?bar=baz)
  • %HU: HTTP request URI (ex: /foo?bar=baz)
  • %HV: HTTP version (ex: HTTP/1.0)
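
For instance, they can be combined with pre-existing variables in a custom log-format line; the snippet below is only a sketch to adapt to your own needs:

frontend f_myapp
 bind :9001
 mode http
 log global
 log-format "%ci:%cp [%t] %ft %b/%s %ST %B %HM %HP%HQ %HV"
 default_backend b_myapp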

Server IP resolution using DNS at runtime

In 1.5 and before, HAProxy performed DNS resolution when parsing the configuration, in a synchronous mode, using the glibc (hence the /etc/resolv.conf file).
Now, HAProxy can perform DNS resolution at runtime, in an asynchronous way, and update server IPs on the fly. This is very convenient in environments like Docker or Amazon Web Services where server IPs can change at any time.
Here is a configuration example applied to docker. A dnsmasq instance is used as an interface between the /etc/hosts file (where docker stores server IPs) and HAProxy:

resolvers docker
 nameserver dnsmasq

defaults
 mode http
 log global
 option httplog

frontend f_myapp
 bind :80
 default_backend b_myapp

backend b_myapp
 server s1 nginx1:80 check resolvers docker resolve-prefer ipv4

Then, let’s restart s1 with the command “docker restart nginx1” and let’s have a look at the magic in the logs:
(...) haproxy[15]: b_myapp/nginx1 changed its IP from to by docker/dnsmasq.

HTTP rules

New HTTP rules have appeared:

  • http-request: capture, set-method, set-uri, set-map, set-var, track-scX, sc-inc-gpc0, sc-set-gpt0, silent-drop
  • http-response: capture, set-map, set-var, sc-inc-gpc0, sc-set-gpt0, silent-drop, redirect


Variables

We often used HTTP header fields to store temporary data in HAProxy. With 1.6, we can now define variables.
A variable is available within a scope: session, transaction (request or response), request, response.
A variable name is prefixed by its scope (sess, txn, req, res), a dot '.' and a tag composed of 'a-z', 'A-Z', '0-9' and '_' characters.

Let's rewrite the capture example using variables:

global
 # variables memory consumption, in bytes
 tune.vars.global-max-size 1048576
 tune.vars.reqres-max-size     512
 tune.vars.sess-max-size      2048
 tune.vars.txn-max-size        256

defaults
 mode http

frontend f_myapp
 bind :9001
 http-request set-var(txn.host) req.hdr(Host)
 http-request set-var(txn.ua) req.hdr(User-Agent)
 default_backend b_myapp

backend b_myapp
 http-response set-header Your-Host %[var(txn.host)]
 http-response set-header Your-User-Agent %[var(txn.ua)]
 server s1 check


Email alerts

Now, HAProxy can send emails when a server's state changes (mainly when it goes DOWN), so your sysadmins/devops won't sleep anymore :). Previously, it could only log such events.

mailers mymailers
 mailer smtp1 
 mailer smtp2 
backend mybackend 
 mode tcp 
 balance roundrobin
 email-alert mailers mymailers
 email-alert from test1@horms.org
 email-alert to test2@horms.org
 server srv1
 server srv2

Processing of HTTP request body

Up to and including 1.5, HAProxy could process only HTTP request headers. It can now access the request body.
Simply enable the statement below in your frontend or backend to give HAProxy this ability:

option http-buffer-request

Protection against slow-POST attacks

Slow-POST attacks are similar to Slowloris attacks, except that the HTTP headers are sent quickly while the request body is sent very slowly.
Once enabled, the timeout http-request parameter also applies to the POSTed data.

Fetch methods

A few new fetch methods now exist to play with the body: req.body, req.body_param, req.body_len, req.body_size, etc…

Short example to detect “SELECT *” string in a request POST body, and of course how to deny it:

defaults
 mode http

frontend f_mywaf
 bind :9001
 option http-buffer-request
 http-request deny if { req.body -m reg "SELECT \*" }
 default_backend b_myapp

backend b_myapp
 server s1 check

New converters

1.5 introduced converters, but only a very few of them were available.
1.6 adds many more. The list is too long to include here, but let's name the most important ones: json, in_table, field, reg_sub, table_* (to access counters from stick-tables), etc… A quick illustration follows.
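
For instance, the field converter splits a string on a delimiter; the line below is only a sketch (the X-Host-First-Label header name is made up for the example):

 # send the first label of the Host header (e.g. 'www' for 'www.example.com') to the server
 http-request set-header X-Host-First-Label %[req.hdr(Host),field(1,.)]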

Device Identification

Through our company, some customers asked us to integrate into HAProxy the ability to detect device types and characteristics and to report them to the backend server.
We got contributions from two companies that are experts in this domain: 51Degrees and DeviceAtlas.
You can now load their libraries in HAProxy in order to fully qualify a client's capabilities and set up headers your application server can rely on to adapt the content delivered to the client, or let a Varnish cache server use them to cache multiple flavors of the same object based on client capabilities.

More on this blog later on how to integrate each product.

Seamless server states

Prior to 1.6, when reloaded, HAProxy considered all servers UP until their first check was performed.
Since 1.6, we can dump the server states into a flat file right before performing the reload and let the new process know where the states are stored. That way, the old and new processes own exactly the same server states (hence "seamless").
The following information is reported:

  • server IP address when resolved by DNS
  • operational state (UP/DOWN/…)
  • administrative state (MAINT/DRAIN/…)
  • Weight (including slowstart relative weight)
  • health check status
  • rise / fall current counter
  • check state (ENABLED/PAUSED/…)
  • agent check state (ENABLED/PAUSED/…)

The states can be applied globally (to every server found in the file) or per backend.
Simple HAProxy configuration:

global
 stats socket /tmp/socket
 server-state-file /tmp/server_state

backend bk
 load-server-state-from-file global
 server s1 check weight 11
 server s2 check weight 12

Before reloading HAProxy, we save the server states using the following command:

socat /tmp/socket - <<< "show servers state" > /tmp/server_state

Here is the content of /tmp/server_state file:

# <field names skipped for the blog article>
1 bk 1 s1 2 0 11 11 4 6 3 4 6 0 0
1 bk 2 s2 2 0 12 12 4 6 3 4 6 0 0

Now, let's proceed with the reload as usual.
Of course, the best option is to export the server states using the init script; a sketch of such a wrapper is shown below.
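
Something along these lines would do it (a sketch only; paths and the reload command depend on your distribution and init script):

#!/bin/bash
# dump the current server states where server-state-file expects them...
socat /tmp/socket - <<< "show servers state" > /tmp/server_state
# ...then trigger the usual soft reload
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)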

External check

HAProxy can run an external script to perform complicated health checks.
Just be aware of the security concerns when enabling this feature!!



backend b_myapp
 external-check path "/usr/bin:/bin"
 external-check command /bin/true
 server s1 check


SSL / TLS

NOTE: some of the features introduced below may need a recent OpenSSL library.

Detection of ECDSA-able clients

This has already been documented by Nenad on this blog: Serving ECC and RSA certificates on same IP with HAProxy

SSL certificate forgery on the fly

Since 1.6, HAProxy can forge SSL certificates on the fly!
Yes, you can use HAProxy with your company’s CA to inspect content.

Support of Certificate Transparency (RFC6962) TLS extension

When loading PEM files, HAProxy also checks for the presence of a file at the same path suffixed by ".sctl". If such a file is found, support for the Certificate Transparency (RFC 6962) TLS extension is enabled.
The file must contain a valid Signed Certificate Timestamp List, as described in the RFC. The file is parsed to check basic syntax, but no signatures are verified.

TLS Tickets key load through stats socket

A new stats socket command is available to update TLS ticket keys at runtime. The new key is used for encryption/decryption while the old ones are used for decryption only. An example is shown below.
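
For instance, with a ticket-keys file declared on a bind line (e.g. bind :443 ssl crt ... tls-ticket-keys /etc/haproxy/tls_ticket_keys, the path being just an example) and a stats socket available at /tmp/socket, a new 48-byte key can be pushed like this:

echo "set ssl tls-key /etc/haproxy/tls_ticket_keys $(openssl rand -base64 48)" | socat stdio /tmp/socket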

Server side SNI

Many application servers now take advantage of the Server Name Indication (SNI) TLS extension.

backend b_myapp_ssl
 mode http
 server s1 check ssl sni req.hdr(Host)

Peers v2

The peers protocol, used to synchronize data from stick-tables between two HAProxy servers, can now synchronize more than just the sample and the server-id.
It can synchronize all the tracked counters. Note that each node pushes its local counters to its peers, so this must be used for safe reloads and server failover only.
Don't expect 10 HAProxy servers to sync and aggregate counters in real time.

That said, the protocol has been extended to support different data types, so we may see more features relying on it soon 😉

HTTP connection sharing

On the road to HTTP/2, HAProxy must be able to support connection pooling.
And on the road to connection pools, we now have the ability to share a server-side connection between multiple clients.
The server-side connection may be used by multiple clients, until the owner (the client-side connection) of this session dies. Then new connections may be established.

The new keyword is http-reuse and it offers different levels of connection sharing (a short configuration example follows the list):

  • never: no connections are shared
  • safe: the first request of each client is sent over its own connection, subsequent requests may use other connections. It works like regular HTTP keep-alive
  • aggressive: send requests to connections that have proven to reliably support connection reuse (no quick connection close after a response has been sent)
  • always: send requests to established connections, whatever happens. If the server was closing the connection in the meantime, the request is lost and the client must resend it.
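
As a quick sketch (the backend name and server address are just examples), enabling the safe mode looks like this:

backend b_myapp
 http-reuse safe
 server s1 192.168.0.10:80 check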

Get rid of 408 in logs

Simply use the new option
option http-ignore-probes


Serving ECC and RSA certificates on same IP with HAProxy

ECC and RSA certificates and HTTPS

To keep this practical, we will not go into the theory of ECC or RSA certificates. Let's just mention that ECC certificates can provide as much security as RSA with a much lower key size, meaning much lower computation requirements on the server side. Sadly, many clients do not support ciphers based on ECC, so to maintain compatibility as well as provide good performance, we need to be able to detect which type of certificate the client supports, in order to serve it correctly.

The above is usually achieved with analyzing the cipher suites sent by the client in the ClientHello message at the start of the SSL handshake, but we’ve opted for a much simpler approach that works very well with all modern browsers (clients).


First, you will need to obtain both RSA and ECC certificates for your web site. Depending on the registrar you are using, check their documentation. Once the certificates have been issued, make sure you download the appropriate intermediate certificates and create the bundle files for HAProxy to read.

To be able to use the required sample fetch, you will need at least HAProxy 1.6-dev3 (not yet released as of this writing) or you can clone the latest HAProxy from the git repository. The feature was introduced in commit 5fc7d7e.


We will use chaining in order to achieve the desired functionality. You can use abstract sockets on Linux to get even more performance, but note the drawbacks described in the HAProxy documentation.

 frontend ssl-relay
 mode tcp
 use_backend ssl-ecc if { req.ssl_ec_ext 1 }
 default_backend ssl-rsa

 backend ssl-ecc
 mode tcp
 server ecc unix@/var/run/haproxy_ssl_ecc.sock send-proxy-v2

 backend ssl-rsa
 mode tcp
 server rsa unix@/var/run/haproxy_ssl_rsa.sock send-proxy-v2

 listen all-ssl
 bind unix@/var/run/haproxy_ssl_ecc.sock accept-proxy ssl crt /usr/local/haproxy/ecc.www.foo.com.pem user nobody
 bind unix@/var/run/haproxy_ssl_rsa.sock accept-proxy ssl crt /usr/local/haproxy/www.foo.com.pem user nobody
 mode http
 server backend_1 check

The whole configuration revolves around the newly implemented sample fetch: req.ssl_ec_ext. This fetch detects the presence of the Supported Elliptic Curves Extension inside the ClientHello message. This extension is defined in RFC 4492 and, according to it, it SHOULD be sent with every ClientHello message by a client supporting ECC. We have observed that all modern clients send it correctly.

If the extension is detected, the client is sent through a unix socket to the frontend that will serve an ECC certificate. If not, a regular RSA certificate will be served.


We will provide full HAProxy benchmarks in the near future, but for the sake of comparison, let's look at the difference on an E5-2680v3 CPU with OpenSSL 1.0.2.

                  sign       verify     sign/s   verify/s
256-bit ECDSA:    0.0000s    0.0001s    24453.3    9866.9
2048-bit RSA:     0.000682s  0.000028s   1466.4   35225.1

As you can see, looking at the sign/s we are getting over 15 times the performance with ECDSA256 compared to RSA2048.

HAProxy and HTTP Strict Transport Security (HSTS) header in HTTP redirects


SSL everywhere is on its way.
Unfortunately, many applications were written for HTTP only and switching to HTTPS is not an easy and straightforward path. Read more here about the impact of TLS offloading (when a third-party tool performs TLS in front of your web application servers).

A mechanism called HTTP Strict Transport Security (HSTS) has been introduced through the RFC 6797.

HSTS's main purpose is to let the application server instruct the client that it must connect to the application only over a ciphered and secured HTTPS connection.
This means, of course, that both the client and the server must support it…
That way, the application cookie is protected on its way from the client's browser to the remote TLS endpoint (either the load-balancer or the application server). No cookie hijacking is possible on the wire.

HAProxy configuration for Strict-Transport-Security HTTP header

HSTS header insertion in server responses

To insert the header in every server response, you can use the following HAProxy directive, in HAProxy 1.5:

 # 16000000 seconds: a bit more than 6 months
 http-response set-header Strict-Transport-Security max-age=16000000;\ includeSubDomains;\ preload;

With the upcoming HAProxy 1.6, and thanks to William’s work, we can now get rid of these ugly backslashes:

 # 16000000 seconds: a bit more than 6 months
 http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"

Inserting HSTS header in HTTP redirects

When HAProxy has to perform HTTP redirects, it does so at the moment of the client request, through the http-request rules.
Since we want to insert a header into the response, we could use the http-response rules. Unfortunately, these rules are only evaluated when HAProxy gets traffic back from a backend server.
Here is the trick: we perform the http-request redirect rule in a dedicated frontend, to which we route the traffic that must be redirected. That way, our application backend or frontend sees the redirect as a server response and can perform the HSTS insertion.

A simple configuration snippet is usually easier to explain:

frontend fe_myapp
 bind :443 ssl crt /path/to/my/cert.pem
 bind :80
 use_backend be_dummy if !{ ssl_fc }
 default_backend be_myapp

backend be_myapp
 server s1

backend be_dummy
 http-response set-header Strict-Transport-Security max-age=16000000;\ includeSubDomains;\ preload;
 server haproxy_fe_dummy_ssl_redirect

frontend fe_dummy
 # bind this frontend to the address the be_dummy server above points to
 http-request redirect scheme https
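
To verify the behaviour once everything is in place (assuming HAProxy listens locally on port 80), dump the headers of the redirect response; both the Location and the Strict-Transport-Security headers should appear:

curl -sI http://127.0.0.1/ | grep -Ei 'location|strict-transport-security'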


Packetshield: when your load-balancer protects you against DDOS attacks!

DDOS attacks

Some time ago, we published on this blog an article explaining how to use a load-balancer as a first line of defense against application-layer DDOS attacks: http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/.

Unfortunately, attacks nowadays come through several vectors, notably network and application ones.
Their goal is always the same: find a weak point in the architecture and saturate it:

  1. a firewall: number of sessions or packets per second
  2. a load-balancer: number of established sessions
  3. an application server: number of requests per second
  4. etc…

In the previous article, we saw how to protect the application servers using the load-balancer, which covers the application part of the architecture.

To protect the head of the architecture, the firewall, you usually have to invest a lot of money in expensive equipment.
Furthermore, in some cases, when the ALOHA was connected directly to the Internet, it could only moderately protect itself against "network" attacks.

That's why HAProxy Technologies developed its own protection solution against network-level DDOS attacks. This tool is called Packetshield.

Network-level attacks are quite simple for the attacker to carry out, but they can be devastating for the target: once a device is saturated, it blocks all traffic.
A device can be saturated in different ways:

  • too many packets per second
  • too many open sessions (TCP and/or UDP)
  • too large a volume of incoming traffic

(non-exhaustive list)

NOTE: for this last case, you have to go through an external traffic "scrubbing" service, which then forwards the legitimate traffic back to your architecture.

Packetshield: protection against network DDOS attacks

The ALOHA, equipped with Packetshield, sits at the entry point of the infrastructure.
Packetshield acts as a stateful firewall, analyzing the network flows passing through it. Only traffic considered legitimate is allowed through.
Several mechanisms come into play during the protection:

  • packet analysis: invalid packets are automatically blocked
  • flow filtering: the administrator can define what is allowed and what is not
  • analysis of allowed flows: Packetshield sorts out what is legitimate and what is not

Unlike existing solutions which work in stateless mode and generate many false positives, Packetshield guarantees that no valid session will be blocked.

Packetshield availability

Packetshield has been available on the ALOHA physical appliance since firmware 7.0.
Protection of the gigabit interface (up to 1 million packets per second) is included in the ALOHA for free.
Our existing customers just have to update their appliance firmware to benefit from it. Any new customer buying one of our appliances will also get this protection, at no extra cost.

Packetshield is also available in a 10G version (14 million packets processed per second) on all our latest-generation ALB5100 appliances, whatever the license chosen, from 8K to 64K.

Deployment modes

Thanks to its stateful design, Packetshield works optimally in the following deployments:

Packetshield can also work in stateless mode, without its advanced features. In this mode, Packetshield is compatible with the following deployment types:

For more information

The documentation on configuring Packetshield is available at this address: http://haproxy.com/doc/aloha/7.0/packet_flood_protection/.

For more information, drop an email to contact (at) haproxy (dot) com.


HAProxy’s load-balancing algorithm for static content delivery with Varnish

HAProxy’s load-balancing algorithms

HAProxy supports many load-balancing algorithms which may be used in many different types of use cases.
That said, cache servers, which most of the time deliver the static content of your web applications, may require some specific load-balancing algorithms.

HAProxy stands in front of your cache server for some good reasons:

  • SSL offloading (read PHK’s feeling about SSL, Varnish and HAProxy)
  • HTTP content switching capabilities
  • advanced load-balancing algorithms

The main purpose of this article is to show how HAProxy can be used to aggregate Varnish servers' memory storage in a sort of "JBOD" mode (like "Just a Bunch Of Disks").
The main purpose of the examples given here is to optimize the resources on the cache servers, mainly memory, in order to improve the HIT rate. This will also improve your application response time and make your site top ranked on google :)

Content Switching in HAProxy

This has been covered many times on this blog.
As a quick introduction for readers who are not familiar with HAProxy, let’s explain how it works.

Clients get connected to HAProxy through a frontend. Then HAProxy routes the traffic to a backend (server farm), where a load-balancing algorithm is used to choose a server.
A frontend can point to multiple backends, and the choice of a backend is made through ACLs and use_backend rules.
ACLs can be formed using fetches. A fetch is a directive which instructs HAProxy where to get content from.

Enough theory, let’s make a practical example: splitting static and dynamic traffic using the following rules:

  • Static content is hosted on domain names starting by ‘static.’ and ‘images.’
  • Static content files extensions are ‘.jpg’ ‘.png’ ‘.gif’ ‘.css’ ‘.js’
  • Static content can match any of the rule above
  • anything which is not static is considered as dynamic

The configuration snippet below should be integrated into the HAProxy frontend. It matches the rules above to do the traffic splitting. The Varnish servers will stand in the bk_static farm.

frontend ft_public
 <frontend settings>
 acl static_domain  req.hdr_beg(Host) -i static. images.
 acl static_content path_end          -i .jpg .png .gif .css .js
 use_backend bk_static if static_domain or static_content
 default_backend bk_dynamic
backend bk_static
 <parameters related to static content delivery>

The configuration above creates 2 named ACLs, 'static_domain' and 'static_content', which are used by the use_backend rule to route the traffic to the Varnish servers.

HAProxy and hash-based load-balancing algorithms

Later in this article, we'll heavily use the hash-based load-balancing algorithms from HAProxy.
So here is some information (non-exhaustive, it would deserve a long blog article of its own) which will be useful for people wanting to understand what happens deep inside HAProxy.

The following parameters are taken into account when computing a hash algorithm:

  • number of servers in the farm
  • weight of each server in the farm
  • status of the servers (UP or DOWN)

If any of the parameters above changes, the whole hash computation changes as well, hence requests may hit another server. This may have a negative impact on the response time of the application (for a short period of time).
Fortunately, HAProxy allows 'consistent' hashing, which means that only the traffic related to the change will be impacted.
That’s why you’ll see a lot of hash-type consistent directives in the configuration samples below.

Load-Balancing varnish cache server

Now, let’s focus on the magic we can add in the bk_static server farm.

Hashing the URL

HAProxy can hash the URL to pick up a server. With this load-balancing algorithm, we guarantee that a single URL will always hit the same Varnish server.

hashing the URL path only

In the example below, HAProxy hashes the URL path, which is from the first slash ‘/’ character up to the question mark ‘?’:

backend bk_static
  balance uri
  hash-type consistent

hashing the whole url, including the query string

In some cases, the query string may contain some variables in the query string, which means we must include the query string in the hash:

backend bk_static
  balance uri whole
  hash-type consistent

Query string parameter hash

That said, in some cases (APIs, etc…), hashing the whole URL is not enough. We may want to hash only a particular query string parameter.
This applies well in cases where the client itself can forge the URL and the parameters may be randomly ordered.
The configuration below tells HAProxy to apply the hash to the query string parameter named 'id' (e.g. /image.php?width=512&id=12&height=256):

backend bk_static
  balance url_param id
  hash-type consistent

hash on a HTTP header

HAProxy can apply the hash to a specific HTTP header field.
The example below applies it on the Host header. This can be used for people hosting many domain names with a few pages each, like user-dedicated pages.

backend bk_static
  balance hdr(Host)
  hash-type consistent

Compose your own hash: concatenation of Host header and URL

Nowadays, HAProxy becomes more and more flexible, and we can use this flexibility in its configuration.
Imagine that in your Varnish configuration you have a storage hash key based on the concatenation of the Host header and the URI; then you may want to apply the same load-balancing algorithm in HAProxy, to optimize your caches.

The configuration below creates a new HTTP header field named X-LB which contains the Host header (converted to lowercase) concatenated with the request URI (converted to lowercase too).

backend bk_static
  http-request set-header X-LB %[req.hdr(Host),lower]%[req.uri,lower]
  balance hdr(X-LB)
  hash-type consistent


HAProxy and Varnish work very well together. Each piece of software benefits from the performance and flexibility of the other.


Microsoft Remote Desktop Services (RDS) Load-Balancing

Microsoft Remote Desktop services (RDS)

Remote Desktop Services, formerly Terminal Services, is a technology from Microsoft that allows users to remotely access a session-based desktop, a virtual machine-based desktop or applications hosted in a datacenter, from their corporate network or from the internet.

Multiple RDS servers can be used in a farm, hence we need to balance the load across them.
To achieve this purpose, we have different options:
* using a connection broker
* using a load-balancer with the connection broker
* using a load-balancer without the connection broker

Of course, our load-balancer of choice is HAProxy!
In this blog article, we’re going to focus only on the case where a load-balancer is used.

The main issue when load-balancing multiple Remote Desktop Services servers is to ensure the continuity of a user's session in case of a network outage.

This article will focus on session high availability for an optimal end-user experience.

HAProxy with a connection broker

The connection broker's (formerly Session Broker) main purpose is to reconnect a user to his existing session. Since Windows 2008, the connection broker also provides a load-balancing mechanism.

So, why use a load-balancer if the connection broker can already load-balance?

The answer is simple: security. Since HAProxy is a reverse proxy, it breaks the TCP connection between the client and the server. HAProxy can be deployed in a DMZ to give users coming from the internet access to a RDS farm deployed in the VLAN dedicated to servers.

HAProxy configuration

Note: this configuration works for the ALOHA 6.0 and above and HAPEE (HAProxy Enterprise Edition) 1.5 and above.

frontend ft_rdp
  mode tcp
  bind name rdp
  timeout client 1h
  log global
  option tcplog
  tcp-request inspect-delay 2s
  tcp-request content accept if RDP_COOKIE
  default_backend bk_rdp

backend bk_rdp
  mode tcp
  balance leastconn
  persist rdp-cookie
  timeout server 1h
  timeout connect 4s
  log global
  option tcplog
  option tcp-check
  tcp-check connect port 3389 ssl
  default-server inter 3s rise 2 fall 3
  server srv01 weight 10 check
  server srv02 weight 10 check

HAProxy without a connection broker

HAProxy can be used on its own to perform session load-balancing and resumption. For this purpose, it needs a stick-table where the user-server association is stored.
A peers section is added to the configuration so we can share the session persistence information between a cluster of ALOHAs or HAPEE servers.

peers aloha
 peer aloha1
 peer aloha2

frontend ft_rdp
  mode tcp
  bind name rdp
  timeout client 1h
  log global
  option tcplog
  tcp-request inspect-delay 2s
  tcp-request content accept if RDP_COOKIE
  default_backend bk_rdp

backend bk_rdp
  mode tcp
  balance leastconn
  timeout server 1h
  timeout connect 4s
  log global
  option tcplog
  stick-table type string len 32 size 10k expire 8h peers aloha
  stick on rdp_cookie(mstshash)
  option tcp-check
  tcp-check connect port 3389 ssl
  default-server inter 3s rise 2 fall 3
  server srv01 weight 10 check
  server srv02 weight 10 check

To know the user-server association, we can simply read the content of the stick-table:

echo show table bk_rdp | socat /var/run/haproxy.stat -
# table: bk_rdp, type: string, size:10240, used:5
0x21c7eac: key=Administrator use=0 exp=83332288 server_id=1
0x21c7eac: key=test-001 use=0 exp=83332288 server_id=2

We can easily read the login used by the user, the expiration date (in milliseconds) and the server ID used for the session.