Category Archives: performance

ALOHA Pocket is coming…

Well, this project is not really a secret anymore and people have started to ask about it, so let me present the beast:

This is the ALOHA Pocket. Probably the smallest load balancer you have ever seen from any vendor. It is a full-featured ALOHA with layer 4/7 load balancing, SSL, VRRP, the complete web interface with templates, the logs… It consumes less than a watt (0.75W to be precise) and is powered over USB. It can run for about ten hours on a single 2200mAh battery. Still, it achieves more than a thousand connections per second and forwards 70 Mbps between its two ports. Yes, this is more than what some applications we’ve seen in the field deliver on huge servers consuming 1000 times this power and running with 4000 times its amount of RAM. This is made possible by our highly optimized, lightweight products, which are so energy efficient and need so few resources that they can run on almost anything (and of course, they shine when running on powerful hardware).

Obviously nobody wants to run their production on this, it would not look serious! But we found that it’s the ideal format to bring your machine everywhere: for demos, for tests, to develop in the train, or even just to tease friends. And it’s so cool that I have several of them on my desk and others in my bag, and I’m using them all day for various tests. While using it, I found it so much more convenient than a VM for explaining high availability that we realized it’s the format of choice for students discovering load balancing and high availability. Another nice thing is that since it has two ports, it’s perfect for plugging between your PC and the LAN to observe the HTTP communications between your browser and the application you’re developing.

So we decided to prepare one hundred of them that we’ll offer to students and interns working on a load balancing project, in exchange for their promise to blog about their project’s progress. If they need, we can even send them a cluster of two. And who knows, maybe among these, someone will have a great idea and develop a worldwide successful project, and then we’ll be very proud to have provided the initial spark that made it possible. And if it helps students build a career around load balancing, we’ll be quite proud to have transmitted this passion as well!

We still have a few things to complete before it can go wild, such as a bit of documentation to explain how to get started with it. But if you think you’re going to work on a load balancing project, or are joining a company as an intern and will be doing some stuff with web servers, this can be the perfect way to discover this amazing new world and to design solutions that resist real failures caused by pulling a cable, not just the clean “power down” button pressed in a VM. Start thinking about it now so you can reserve one (or a pair) when we launch in the upcoming weeks. Conversely, if you absolutely want one, you just have to find a load balancing project to work on 🙂

In any case, don’t wait too long to think about your project, because stock is limited; if there is too much demand, we’ll have to be selective about the projects we support for the last ones.

Stay tuned!

Serving ECC and RSA certificates on the same IP with HAProxy

ECC and RSA certificates and HTTPS

To keep this practical, we will not go into the theory of ECC or RSA certificates. Let’s just mention that ECC certificates can provide as much security as RSA with much smaller key sizes, meaning much lower computation requirements on the server side. Sadly, many clients do not support ECC-based cipher suites, so to maintain compatibility as well as provide good performance, we need to detect which type of certificate the client supports in order to serve the right one.

The above is usually achieved by analyzing the cipher suites sent by the client in the ClientHello message at the start of the SSL handshake, but we’ve opted for a much simpler approach that works very well with all modern browsers (clients).

Prerequisites

First you will need to obtain both RSA and ECC certificates for your web site; check the documentation of the certificate authority you are using. Once the certificates have been issued, make sure you download the appropriate intermediate certificates and create the bundle files for HAProxy to read.
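For example (a sketch; the file names are hypothetical, adapt them to what your CA delivered), HAProxy expects each bundle to contain the server certificate, then the intermediate(s), then the private key, concatenated into a single PEM file:

```
# hypothetical input file names; output names match the config below
cat www.foo.com.ecc.crt ecc-intermediate.crt www.foo.com.ecc.key > ecc.www.foo.com.pem
cat www.foo.com.rsa.crt rsa-intermediate.crt www.foo.com.rsa.key > www.foo.com.pem
```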

To be able to use the required sample fetch, you will need at least HAProxy 1.6-dev3 (not yet released as of this writing), or you can clone the latest HAProxy from the git repository. The feature was introduced in commit 5fc7d7e.

Configuration

We will use chaining to achieve the desired functionality. You can use abstract sockets on Linux to get even more performance, but note the drawbacks described in the HAProxy documentation.

 frontend ssl-relay
 mode tcp
 bind 0.0.0.0:443
 use_backend ssl-ecc if { req.ssl_ec_ext 1 }
 default_backend ssl-rsa

 backend ssl-ecc
 mode tcp
 server ecc unix@/var/run/haproxy_ssl_ecc.sock send-proxy-v2

 backend ssl-rsa
 mode tcp
 server rsa unix@/var/run/haproxy_ssl_rsa.sock send-proxy-v2

 listen all-ssl
 bind unix@/var/run/haproxy_ssl_ecc.sock accept-proxy ssl crt /usr/local/haproxy/ecc.www.foo.com.pem user nobody
 bind unix@/var/run/haproxy_ssl_rsa.sock accept-proxy ssl crt /usr/local/haproxy/www.foo.com.pem user nobody
 mode http
 server backend_1 192.168.1.1:8000 check

The whole configuration revolves around the newly implemented sample fetch req.ssl_ec_ext. This fetch detects the presence of the Supported Elliptic Curves extension inside the ClientHello message. The extension is defined in RFC 4492 and, according to it, SHOULD be sent with every ClientHello message by any client supporting ECC. We have observed that all modern clients send it correctly.

If the extension is detected, the connection is forwarded over a unix socket to the listener that serves the ECC certificate. If not, the regular RSA certificate is served.
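As mentioned in the configuration notes above, Linux abstract namespace sockets can replace the filesystem sockets for a bit more performance. A sketch of the equivalent lines (same caveats as in the HAProxy documentation; only the ECC pair is shown):

```
 backend ssl-ecc
 mode tcp
 server ecc abns@haproxy_ssl_ecc send-proxy-v2

 listen all-ssl
 bind abns@haproxy_ssl_ecc accept-proxy ssl crt /usr/local/haproxy/ecc.www.foo.com.pem user nobody
```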

Benchmark

We will publish full HAProxy benchmarks in the near future, but for the sake of comparison, here is the difference measured on an E5-2680 v3 CPU with OpenSSL 1.0.2.

256-bit ECDSA:
                   sign    verify    sign/s  verify/s
                0.0000s   0.0001s   24453.3    9866.9

2048-bit RSA:
                   sign    verify    sign/s  verify/s
              0.000682s 0.000028s    1466.4   35225.1

As you can see, looking at sign/s we are getting over 16 times the signing performance with ECDSA P-256 compared to RSA 2048.

Web application name to backend mapping in HAProxy

Synopsis

Let’s take a web application platform that many HTTP Host headers point to.
This platform hosts many backends, and HAProxy is used to perform content switching based on the Host header to route HTTP traffic to each backend.

HAProxy map


HAProxy 1.5 introduced a cool feature: converters. One converter type is the map.
Long story short: a map translates an input value into an output value.

A map is stored in a flat file which is loaded by HAProxy on startup. It is composed of two columns: on the left the input string, on the right the output string:

in out

Basically, if you call the map above and pass it the string in, it will return out.
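For illustration, here is a minimal (hypothetical) fragment applying the two-line map above with the map converter; the file path and header names are made up, and the second argument is the fallback value returned when the input is not found:

```
frontend ft_demo
 bind :8080
 mode http
 # look up the lowercased Host header in the map,
 # fall back to "out" when there is no match
 http-request set-header X-Mapped %[req.hdr(host),lower,map(/etc/haproxy/simple.map,out)]
```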

Mapping

Now, the interesting part of the article 🙂

As stated in the introduction, we want to map hundreds of Host headers to tens of backends.

The old way of mapping: acl and use_backend rules

Before maps, we had to use acls and use_backend rules, like below:

frontend ft_allapps
 [...]
 use_backend bk_app1 if { hdr(Host) -i app1.domain1.com app1.domain2.com }
 use_backend bk_app2 if { hdr(Host) -i app2.domain1.com app2.domain2.com }
 default_backend bk_default

You need one use_backend rule per application.

This works nicely for a few backends and a few domain names, but this type of configuration is hardly scalable…

The new way of mapping: one map and one use_backend rule

Now we can use a map to achieve the same purpose.

First, let’s create a map file called domain2backend.map with the following content: on the left, the domain name; on the right, the backend name:

#domainname  backendname
app1.domain1.com bk_app1
app1.domain2.com bk_app1
app2.domain1.com bk_app2
app2.domain2.com bk_app2

And now, HAProxy configuration:

frontend ft_allapps
 [...]
 use_backend %[req.hdr(host),lower,map_dom(/etc/hapee-1.5/domain2backend.map,bk_default)]

Here is what HAProxy will do:

  1. req.hdr(host) ==> fetch the Host header from the HTTP request
  2. lower ==> convert the string into lowercase
  3. map_dom(/etc/hapee-1.5/domain2backend.map) ==> look up the lowercase Host header in the map and return the backend name if found; if not found, return the default backend name (bk_default)
  4. route traffic to the backend name returned by the map

Now, adding a new content-switching rule just means adding one new line in the map file (and reloading HAProxy). No regexes; map data is stored in a tree, so lookup time is very low compared to matching many strings in many ACLs across many use_backend rules.
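Side note: if a stats socket is configured at level admin (the socket path below is hypothetical), map entries can even be changed at runtime, without a reload, for example with socat:

```
echo "add map /etc/hapee-1.5/domain2backend.map app3.domain1.com bk_app3" | socat stdio /var/run/haproxy.sock
echo "show map /etc/hapee-1.5/domain2backend.map" | socat stdio /var/run/haproxy.sock
```

Keep in mind these runtime changes live only in the running process; update the map file too so they survive a reload.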

simple is beautiful!!!

HAProxy map content auto update


If you are an HAPEE user (this will soon be available on the ALOHA as well), you can use the lb-update module to download the map content automatically.
Add the following statement to your configuration:

dynamic-update
 update id domain2backend.map url https://10.0.0.1/domain2backend.map delay 60s timeout 5s retries 3 map


HAProxy, high mysql request rate and TCP source port exhaustion

Synopsis


At HAProxy Technologies, we provide professional services around HAProxy: this covers HAProxy itself, of course, but also the underlying OS tuning, and advice and recommendations about the architecture; sometimes we also help customers troubleshoot application-layer issues.
We don’t fix issues for the customer, but using information provided by HAProxy, we are able to narrow the investigation area, saving the customer’s time and money.
The story I’m relating today comes from one of these engagements.

One of our customers is a hosting company which hosts some very busy PHP / MySQL websites. They successfully use HAProxy in front of their application servers.
They used to have a single MySQL server, which was a single point of failure and had to handle several thousand requests per second.
Sometimes they had issues with this DB: the clients (hence the web servers) would hang when using it.

So they decided to use MySQL replication and build an active/passive cluster. They also decided to split reads (SELECT queries) and writes (DELETE, INSERT, UPDATE queries) at the application level.
Then they were able to move the MySQL servers behind HAProxy.

Enough for the introduction 🙂 Today’s article discusses HAProxy and MySQL at high request rates, and an error some of you may already have encountered: TCP source port exhaustion (the famous high number of sockets in TIME_WAIT).

Diagram


So basically, we have here a standard web platform where HAProxy load-balances MySQL:
(diagram: haproxy_mysql_replication — HAProxy in front of a MySQL master and its replication slaves)

The MySQL master server receives all WRITE requests, while READ requests are load-balanced by weight (the slaves have a higher weight than the master) across all the MySQL servers.

MySQL scalability

One way of scaling MySQL is to use replication: one MySQL server is designated as the master and manages all the write operations (DELETE, INSERT, UPDATE, etc.); for each operation, it notifies all the MySQL slave servers. The slaves can then be used for reads only, offloading these requests from the master.
IMPORTANT NOTE: replication allows scaling of the read part only; if your application requires scaling writes, then this is not the method for you.

Of course, one MySQL slave server can be promoted to master when the master fails! This also provides MySQL high availability.

So, where is the problem?

This type of platform works very well for the majority of websites. The problem occurs when you start having a high request rate. By high, I mean several thousand per second.

TCP source port exhaustion

HAProxy works as a reverse proxy and so uses its own IP address to connect to the servers.
Any system has around 64K TCP source ports available to connect to a remote IP:port. Once a combination of “source IP:port => dst IP:port” is in use, it can’t be re-used.
First lesson: you can’t have more than 64K open connections from an HAProxy box to a single remote IP:port pair. I think only people load-balancing MS Exchange RPC services or SharePoint with NTLM may one day reach this limit…
(Well, it is possible to work around this limit using some tricks we’ll explain later in this article.)

Why does TCP source port exhaustion occur with MySQL clients?


As I said, the MySQL request rate was a few thousand per second, so we never came close to this limit of 64K simultaneous open connections to the remote service…
What’s up then?
Well, there is an issue with the MySQL client library: when a client sends its “QUIT” sequence, it performs a few internal operations and then immediately shuts down the TCP connection, without waiting for the server to close it first. A basic tcpdump will show this easily.
Note that you won’t be able to reproduce the issue on a loopback interface, because the server answers fast enough… You must use a LAN connection and two different servers.

Basically, here is the sequence currently performed by a MySQL client:

Mysql Client ==> "QUIT" sequence ==> Mysql Server
Mysql Client ==>       FIN       ==> MySQL Server
Mysql Client <==     FIN ACK     <== MySQL Server
Mysql Client ==>       ACK       ==> MySQL Server

This leaves the client’s source port stuck in TIME_WAIT for twice the MSL (Maximum Segment Lifetime), which means 2 minutes.
Note: this type of close has no negative impact when the connection is made over a UNIX socket.

Explanation of the issue (put much better than I could explain it myself):
“There is no way for the person who sent the first FIN to get an ACK back for that last ACK. You might want to reread that now. The person that initially closed the connection enters the TIME_WAIT state; in case the other person didn’t really get the ACK and thinks the connection is still open. Typically, this lasts one to two minutes.” (Source)

Since a source port stays unavailable to the system for 2 minutes, above roughly 533 MySQL requests per second you’re in danger of TCP source port exhaustion: 64000 (available ports) / 120 (seconds in 2 minutes) = 533.333.
This TCP port exhaustion appears on the MySQL client server itself, but also on the HAProxy box, since it forwards the client traffic to the servers… And because many web servers sit in front of it, it happens much faster on the HAProxy box!
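The threshold above is easy to double-check with shell arithmetic (integer division):

```shell
# 64000 usable source ports, each blocked for 120 s (2 x MSL) after close
# => maximum sustainable connection rate towards one destination IP:port
echo $((64000 / 120))
```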

Remember: at traffic spikes, my customer had a few thousand requests per second….

How to avoid TCP source port exhaustion?


Here comes THE question!
First, a “clean” sequence would be:

Mysql Client ==> "QUIT" sequence ==> Mysql Server
Mysql Client <==       FIN       <== MySQL Server
Mysql Client ==>     FIN ACK     ==> MySQL Server
Mysql Client <==       ACK       <== MySQL Server

Actually, this sequence happens when both the MySQL client and server are hosted on the same box and use the loopback interface; that’s why I said earlier that to reproduce the issue you must add latency between the client and the server, hence use two boxes over the LAN.
So, until MySQL rewrites the client code to follow the sequence above, there won’t be any improvement from that side.

Increasing source port range


By default, on a Linux box, you have around 28K source ports available (for a single destination IP:port):

$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768    61000

In order to get 64K source ports, just run:

$ sudo sysctl net.ipv4.ip_local_port_range="1025 65000"

And don’t forget to update your /etc/sysctl.conf file!!!
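That is, something like the following lines (a sketch) in /etc/sysctl.conf, applied with sysctl -p:

```
# /etc/sysctl.conf
net.ipv4.ip_local_port_range = 1025 65000
```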

Note: this should definitely be applied on the web servers as well….

Allow usage of source port in TIME_WAIT


A few sysctls can be used to tell the kernel to reuse connections in TIME_WAIT faster:

net.ipv4.tcp_tw_reuse
net.ipv4.tcp_tw_recycle

tw_reuse can be used safely, but be careful with tw_recycle: it can have side effects. Clients behind the same NAT may fail to connect when it is enabled. So only use it if your HAProxy box is fully dedicated to your MySQL setup.

Anyway, these sysctls were already properly set up (value = 1) on both the HAProxy and web servers.

Note: tw_reuse should definitely be applied on the web servers as well….

Using multiple IPs to get connected to a single server


In the HAProxy configuration, you can specify on the server line the source IP address used to connect to a server, so just add more server lines with different IPs.
In the example below, the IPs 10.0.0.100 and 10.0.0.101 are configured on the HAProxy box:

[...]
  server mysql1     10.0.0.1:3306 check source 10.0.0.100
  server mysql1_bis 10.0.0.1:3306 check source 10.0.0.101
[...]

This allows us to use up to 128K source TCP ports…
The kernel is responsible for assigning a new TCP source port when HAProxy requests one. Despite improving things a bit, we still reached source port exhaustion… We could not get over 80K connections in TIME_WAIT with 4 source IPs…

Let HAProxy manage TCP source ports


You can let HAProxy decide which source port to use when opening a new TCP connection, instead of the kernel. For this task, HAProxy’s built-in port allocator is more efficient than the kernel’s generic one.

Let’s update the configuration above:

[...]
  server mysql1     10.0.0.1:3306 check source 10.0.0.100:1025-65000
  server mysql1_bis 10.0.0.1:3306 check source 10.0.0.101:1025-65000
[...]

We managed to get over 170K connections in TIME_WAIT with 4 source IPs… and no source port exhaustion anymore!

Use a memcache


Fortunately, this customer’s developers are skilled and write flexible code 🙂
So they moved some requests from the MySQL DB to a memcache, opening far fewer connections.

Use MySQL persistent connections


This could prevent fine-grained load-balancing on the read-only farm, but it would be very efficient for the MySQL master server.

Conclusion

  • If you see SOCKERR information messages in HAProxy logs (mainly on health checks), you may be running out of TCP source ports.
  • Have skilled developers who write flexible code, where moving from one DB to another is made easy.
  • This kind of issue can only happen with protocols or applications where the client closes the connection first.
  • This issue can’t happen with HAProxy in HTTP mode, since it lets the server close the connection first.


HAProxy log customization

Synopsis

One of the strengths of HAProxy is its logging system: it is very verbose and provides a lot of information.
The HAProxy HTTP log line is briefly explained in an HAProxy Technologies memo. It’s a must-have document when you have to analyze HAProxy‘s log lines to troubleshoot an issue.
Another interesting tool is halog. It is available in HAProxy‘s sources, in the contrib directory. I’ll write an article about it later. To get an idea of how to use it, have a look at the HAProxy Technologies howto on halog and HTTP analysis.

Why customize HAProxy’s logs?


There may be several reasons why one would want to customize HAProxy’s logs:

  • the default log format is too complicated
  • there is too much information in the default log format
  • there is not enough information in the default log format
  • third-party log analyzers can hardly understand the default HAProxy log format
  • logs generated by HAProxy must be compliant with an existing format from an existing appliance in the architecture
  • … add your own reason here …

That’s why, at HAProxy Technologies, we felt the need to let our users create their own HAProxy log format.
As with compression in HAProxy, the job was done by William Lallemand.

HAProxy log format customization

Configuration directive

The directive which allows you to generate a home-made log format is simply called log-format.

Variables

The log-format directive understands variables.
A variable follows the rules below:

  • it is preceded by a percent character: ‘%’
  • it can take arguments in braces ‘{}’; multiple arguments are separated by commas ‘,’ within the braces
  • flags may be added or removed by prefixing them with a ‘+’ or ‘-’ sign
  • spaces ‘ ’ must be escaped (a space is considered a separator)

Currently available flags:

  • Q: quote a string
  • X: hexadecimal representation (IPs, Ports, %Ts, %rt, %pid)

Currently available variables:

  +---+------+-----------------------------------------------+-------------+
  | R | var  | field name (8.2.2 and 8.2.3 for description)  | type        |
  +---+------+-----------------------------------------------+-------------+
  |   | %o   | special variable, apply flags on all next var |             |
  +---+------+-----------------------------------------------+-------------+
  |   | %B   | bytes_read                                    | numeric     |
  |   | %Ci  | client_ip                                     | IP          |
  |   | %Cp  | client_port                                   | numeric     |
  |   | %Bi  | backend_source_ip                             | IP          |
  |   | %Bp  | backend_source_port                           | numeric     |
  |   | %Fi  | frontend_ip                                   | IP          |
  |   | %Fp  | frontend_port                                 | numeric     |
  |   | %H   | hostname                                      | string      |
  |   | %ID  | unique-id                                     | string      |
  |   | %Si  | server_IP                                     | IP          |
  |   | %Sp  | server_port                                   | numeric     |
  |   | %T   | gmt_date_time                                 | date        |
  |   | %Tc  | Tc                                            | numeric     |
  | H | %Tq  | Tq                                            | numeric     |
  | H | %Tr  | Tr                                            | numeric     |
  |   | %Ts  | timestamp                                     | numeric     |
  |   | %Tt  | Tt                                            | numeric     |
  |   | %Tw  | Tw                                            | numeric     |
  |   | %ac  | actconn                                       | numeric     |
  |   | %b   | backend_name                                  | string      |
  |   | %bc  | beconn                                        | numeric     |
  |   | %bq  | backend_queue                                 | numeric     |
  | H | %cc  | captured_request_cookie                       | string      |
  | H | %rt  | http_request_counter                          | numeric     |
  | H | %cs  | captured_response_cookie                      | string      |
  |   | %f   | frontend_name                                 | string      |
  |   | %ft  | frontend_name_transport ('~' suffix for SSL)  | string      |
  |   | %fc  | feconn                                        | numeric     |
  | H | %hr  | captured_request_headers default style        | string      |
  | H | %hrl | captured_request_headers CLF style            | string list |
  | H | %hs  | captured_response_headers default style       | string      |
  | H | %hsl | captured_response_headers CLF style           | string list |
  |   | %ms  | accept date milliseconds                      | numeric     |
  |   | %pid | PID                                           | numeric     |
  | H | %r   | http_request                                  | string      |
  |   | %rc  | retries                                       | numeric     |
  |   | %s   | server_name                                   | string      |
  |   | %sc  | srv_conn                                      | numeric     |
  |   | %sq  | srv_queue                                     | numeric     |
  | S | %sslc| ssl_ciphers (ex: AES-SHA)                     | string      |
  | S | %sslv| ssl_version (ex: TLSv1)                       | string      |
  | H | %st  | status_code                                   | numeric     |
  |   | %t   | date_time                                     | date        |
  |   | %ts  | termination_state                             | string      |
  | H | %tsc | termination_state with cookie status          | string      |
  +---+------+-----------------------------------------------+-------------+

    R = Restrictions : H = mode http only ; S = SSL only

Log format examples

Default log format

  • TCP log format
    log-format %Ci:%Cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
    
  • HTTP log format
    log-format %Ci:%Cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %st\ %B\ %cc\ %cs\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
    
  • CLF log format
    log-format %{+Q}o\ %{-Q}Ci\ -\ -\ [%T]\ %r\ %st\ %B\ ""\ ""\ %Cp\ %ms\ %ft\ %b\ %s\ %Tq\ %Tw\ %Tc\ %Tr\ %Tt\ %tsc\ %ac\ %fc\ %bc\ %sc\ %rc\ %sq\ %bq\ %cc\ %cs\ %hrl\ %hsl
    

Home made formats

  • Logging the HTTP Host header, the URL, the status code, the number of bytes read from the server and the server response time
    capture request header Host len 32
    log-format %hr\ %r\ %st\ %B\ %Tr
    
  • SSL log format with the HAProxy path (frontend, backend and server name), client information (source IP and port), SSL information (protocol version and negotiated cipher), and the connection termination state, including a few literal strings:
    log-format frontend:%f\ %b/%s\ client_ip:%Ci\ client_port:%Cp\ SSL_version:%sslv\ SSL_cipher:%sslc\ %ts
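As one more illustration built only from the variables in the table above (a sketch; pick whatever keys your analyzer expects), a key=value format is often the easiest for third-party tools to parse:

```
log-format client=%Ci:%Cp\ frontend=%ft\ backend=%b/%s\ status=%st\ bytes=%B\ response_time=%Tr\ state=%tsc
```

Note that %st, %Tr and %tsc are HTTP-mode variables, per the restrictions column of the table.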


HAProxy and gzip compression

Synopsis

Compression is a technique to reduce object size in order to reduce delivery time for objects over the HTTP protocol.
Until now, HAProxy did not include such a feature. But the guys at HAProxy Technologies worked hard on it (mainly David Du Colombier and @wlallemand).
HAProxy can now be considered a new option to compress HTTP streams, alongside nginx, apache or IIS, which already do it.

Note that this is in early beta, so use it with care.

Compilation


Get the latest HAProxy git version by running a “git pull” in your HAProxy git directory.
If you don’t already have such a directory, then run:

git clone http://git.1wt.eu/git/haproxy.git

Once your HAProxy sources are updated, then you can compile HAProxy:

make TARGET=linux26 USE_ZLIB=yes

Configuration

Here is a very simple test configuration:

listen ft_web
 option http-server-close
 mode http
 bind 127.0.0.1:8090 name http
 default_backend bk_web

backend bk_web
 option http-server-close
 mode http
 compression algo gzip
 compression type text/html text/plain text/css
 server localhost 127.0.0.1:80
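If your servers also compress responses themselves, recent snapshots additionally provide the compression offload directive, which tells HAProxy to strip the Accept-Encoding header it forwards so that compression happens only in HAProxy (check your version’s documentation before relying on it). A sketch of the same backend with it enabled:

```
backend bk_web
 option http-server-close
 mode http
 compression algo gzip
 compression type text/html text/plain text/css
 # make HAProxy the only compression point
 compression offload
 server localhost 127.0.0.1:80
```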

Compression test

On my localhost, I have an Apache with compression disabled and a style.css object whose size is 16302 bytes.

Download without compression requested

curl -o/dev/null -D - "http://127.0.0.1:8090/style.css" 
HTTP/1.1 200 OK
Date: Fri, 26 Oct 2012 08:55:42 GMT
Server: Apache/2.2.16 (Debian)
Last-Modified: Sun, 11 Mar 2012 17:01:39 GMT
ETag: "a35d6-3fae-4bafa944542c0"
Accept-Ranges: bytes
Content-Length: 16302
Content-Type: text/css

100 16302  100 16302    0     0  5722k      0 --:--:-- --:--:-- --:--:-- 7959k

Download with compression requested

 curl -o/dev/null -D - "http://127.0.0.1:8090/style.css" -H "Accept-Encoding: gzip"
HTTP/1.1 200 OK
Date: Fri, 26 Oct 2012 08:56:28 GMT
Server: Apache/2.2.16 (Debian)
Last-Modified: Sun, 11 Mar 2012 17:01:39 GMT
ETag: "a35d6-3fae-4bafa944542c0"
Accept-Ranges: bytes
Content-Type: text/css
Transfer-Encoding: chunked
Content-Encoding: gzip

100  4036    0  4036    0     0  1169k      0 --:--:-- --:--:-- --:--:-- 1970k

In this example, the object size went from 16302 bytes down to 4036 bytes, roughly a 75% reduction.

Have fun !!!!


Application Delivery Controller and ecommerce websites

Synopsis

Today, almost any ecommerce website uses a load-balancer or an application delivery controller in front of it, in order to improve its availability and reliability.
In today’s article, I’ll explain how we can take advantage of an ADC’s layer 7 features to improve an ecommerce website’s performance and give the best experience to end users, in order to increase revenue.
The points on which we can work are:

  • Network optimization
  • Traffic regulation
  • Overusage protection
  • User “tagging” based on cart content
  • User “tagging” based on purchase phase
  • Blackout prevention
  • SEO optimization
  • Partner slowness protection

Note: the list is not exhaustive and the given examples are very simple. My purpose is not to create a very complicated configuration, but to give readers clues on how they can take advantage of our products.


Note 2: I won’t discuss static content; there is already an article with a lot of details about it on this blog.


As usual, the configuration examples below apply to our ALOHA ADC appliance, but should work as well with HAProxy 1.5.

Network optimization

Client-side network latency has a negative impact on websites: the slower the user’s connectivity, the longer the connection remains open on the web server while the client downloads the object. This can last much longer if the client and server use HTTP keepalives.
Basically, this is what happens with basic layer 4 load-balancers like LVS or some other appliance vendors, where the TCP connection is established between the client and the server directly.
Since HAProxy works as an HTTP reverse proxy, it breaks the TCP connection in two and enables TCP buffering between both connections: it reads the response at the speed of the server and delivers it at the speed of the client.
Slow clients with high latency no longer have an impact on the application servers, because HAProxy hides that latency from them.
Another good point is that you can enable HTTP keepalives on the client side and disable them on the server side: a client can re-use a connection to download several objects, with no impact on server resources.
TCP buffering does not require any configuration, while client-side-only HTTP keep-alive is enabled by the line option http-server-close.
The configuration is pretty simple:

# default options
defaults
  option http-server-close
  mode http
  log 10.0.0.1 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  default_backend bk_appsrv

# application server farm
backend bk_appsrv
  balance roundrobin
  # app servers must say if everything is fine on their side and 
  # they are ready to process traffic
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check
  server s2 10.0.1.102:80 cookie s2 check

Traffic Regulation


Any server has a maximum capacity. The more requests it handles, the slower it becomes at processing each of them. And if it has too many requests to process, it can even crash and obviously won’t be able to answer anybody!
HAProxy can regulate request streams to the servers in order to prevent them from crashing or even slowing down. Note that, when well set up, this lets you use your servers at maximum capacity without ever getting into trouble.
Basically, HAProxy is able to manage request queues.
You can configure traffic regulation with the fullconn and maxconn parameters in the backend and with the minconn and maxconn parameters on the server line.
Let’s update our server line description above with a simple maxconn parameter:

  server s1 10.0.1.101:80 cookie s1 check maxconn 250
  server s2 10.0.1.102:80 cookie s2 check maxconn 250

Note: there would be many, many things to say about queueing and the HAProxy parameters cited above, but that is not the purpose of this article.
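For readers who want to dig a bit further anyway, minconn and fullconn let the effective per-server connection limit grow dynamically with the backend load, and timeout queue bounds how long a request may wait in the queue before getting an error. A minimal sketch (the numbers are illustrative, not recommendations):

```
backend bk_appsrv
  balance roundrobin
  # requests queued longer than this get a 503 instead of waiting forever
  timeout queue 30s
  # when the backend approaches 500 concurrent connections, each server
  # accepts up to its maxconn (250); under lighter load, the effective
  # per-server limit shrinks toward minconn (50)
  fullconn 500
  server s1 10.0.1.101:80 cookie s1 check minconn 50 maxconn 250
  server s2 10.0.1.102:80 cookie s2 check minconn 50 maxconn 250
```

This way the servers are kept lightly loaded in normal conditions, while still being allowed to use their full capacity during traffic peaks.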

Over usage protection

By over usage, I mean an unexpected flow of users that you want to handle by classifying users into 2 categories:

  1. Those who have already been identified by the website and are using it
  2. Those who have just arrived and want to use it

The difference between the two types of users can be made through the ecommerce CMS cookie: identified users own a cookie while brand new users don't.
If you know your server farm has the capacity to handle 10000 users, then you don't want to allow more than this number until you expand the farm.
Here is the configuration to protect against over usage (the application cookie is "MYCOOK"):

# default options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not have a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  default_backend bk_appsrv

# appsrv backend for dynamic content
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie value would be cleared from the table if not used for 10 minutes
  stick-table type string len 32 size 10K expire 10m nopurge
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  # app servers must say if everything is fine on their side and 
  # they are ready to process traffic
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 250
  server s2 10.0.1.102:80 cookie s2 check maxconn 250

# sorry page management
backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

User tagging based on cart content

When your architecture has enough capacity, you don't need to classify users. But if your platform runs out of capacity, you want to be able to reserve resources for users who have no article in their cart: that way the website stays very fast for them, and hopefully these users will buy some articles.
Just configure your ecommerce application to set up a cookie with some information about the cart: the number of articles, the total value, etc.
In the example below, we'll consider that the application creates a cookie named CART whose value is the number of articles.
Based on the information provided by this cookie, we'll take a routing decision and choose between farms with different capacities.

# default options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl empty_cart hdr_sub(Cookie) CART=0
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not have a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user has something in the cart, move them to a farm with fewer resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity !empty_cart 
  default_backend bk_appsrv_empty_cart

# Default farm when everything goes well
backend bk_appsrv_empty_cart
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users which have something in their cart
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10 minutes
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_empty_cart/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_empty_cart/s2 maxconn 50

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

User tagging based on purchase phase

The purpose of this chapter is the same as the previous one: being able to classify users and to reserve resources.
But this time, we'll identify users based on the phase they are in. Basically, we'll consider two phases:

  1. browsing phase, when people add articles in the cart
  2. purchasing phase, when people have finished filling up the cart and start providing billing, delivery and payment information

In order to classify users, we'll use the URL path: it starts with /purchase/ when the user is in the purchasing phase. Any other URL is considered browsing.
Based on the requested URL, we'll take a routing decision and choose between farms with different capacities.

# defaults options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl purchase_phase path_beg /purchase/
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not have a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user is in the purchase phase, move them to a farm with fewer resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity purchase_phase 
  default_backend bk_appsrv_browse

# Default farm when everything goes well
backend bk_appsrv_browse
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users in the purchase phase
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10 minutes
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_browse/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_browse/s2 maxconn 50

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

Blackout prevention

A website blackout is the worst thing that could happen: something has crashed and the application does not work anymore, or none of the servers are reachable.
When this occurs, it is common to get 503 errors or a blank page after 30 seconds.
In both cases, end users get a negative feeling about the website. At the very least, an excuse page with an estimated recovery date would be appreciated. HAProxy allows you to communicate with end users even when none of the servers are available.
The configuration below shows how to do it:

# defaults options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl purchase_phase path_beg /purchase/
  acl no_appsrv nbsrv(bk_appsrv_browse) eq 0
  acl no_sorrysrv nbsrv(bk_sorrypage) eq 0
  # worst case management
  use_backend bk_worst_case_management if no_appsrv no_sorrysrv
  # use sorry servers if available
  use_backend bk_sorrypage if no_appsrv !no_sorrysrv
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not have a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user is in the purchase phase, move them to a farm with fewer resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity purchase_phase 
  default_backend bk_appsrv_browse

# Default farm when everything goes well
backend bk_appsrv_browse
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users in the purchase phase
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10 minutes
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_browse/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_browse/s2 maxconn 50

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

backend bk_worst_case_management
  errorfile 503 /etc/haproxy/errors/503.txt

And the content of the file /etc/haproxy/errors/503.txt could look like:

HTTP/1.0 200 OK
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Content-Length: 246

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Maintenance</title>
</head>
<body>
<h1>Maintenance</h1>
We're sorry, ecommerce.com is currently under maintenance and will come back soon.
</body>
</html>

SEO optimization

Most search engines now take page response time into account.
The configuration below redirects search engine bots to a dedicated server, and if it's not available, forwards them to the default farm. Bots are identified by their User-Agent header.

# defaults options
defaults
  option http-server-close
  mode http
  log 10.0.0.2 local2
  option httplog
  timeout connect 5s
  timeout client 20s
  timeout server 15s
  timeout check 1s
  timeout http-keep-alive 1s
  timeout http-request 10s  # slowloris protection
  default-server inter 3s fall 2 rise 2 slowstart 60s

# main frontend
frontend ft_web
  bind 10.0.0.3:80
  # update the number below to the number of people you want to allow
  acl maxcapacity table_cnt(bk_appsrv) ge 10000
  acl knownuser hdr_sub(Cookie) MYCOOK
  acl purchase_phase path_beg /purchase/
  acl bot hdr_sub(User-Agent) -i googlebot bingbot slurp
  acl no_appsrv nbsrv(bk_appsrv_browse) eq 0
  acl no_sorrysrv nbsrv(bk_sorrypage) eq 0
  acl no_seosrv nbsrv(bk_seo) eq 0
  # worst case management
  use_backend bk_worst_case_management if no_appsrv no_sorrysrv
  # use sorry servers if available
  use_backend bk_sorrypage if no_appsrv !no_sorrysrv
  # redirect bots
  use_backend bk_seo if bot !no_seosrv
  use_backend bk_appsrv if bot no_seosrv
  # route any unknown user to the sorry page if we reached the maximum number
  # of allowed users and the request does not have a cookie
  use_backend bk_sorrypage if maxcapacity !knownuser
  # Once the user is in the purchase phase, move them to a farm with fewer resources
  # only when there are too many users on the website
  use_backend bk_appsrv if maxcapacity purchase_phase 
  default_backend bk_appsrv_browse

# Default farm when everything goes well
backend bk_appsrv_browse
  balance roundrobin
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK) table bk_appsrv
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK) table bk_appsrv
  # app servers must say if everything is fine on their side
  # and they can process requests
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 check maxconn 200
  server s2 10.0.1.102:80 cookie s2 check maxconn 200

# Reserve resources for the few users in the purchase phase
backend bk_appsrv
  balance roundrobin
  # define a stick-table with at most 10K entries
  # cookie would be cleared from the table if not used for 10 minutes
  stick-table type string len 32 size 10K expire 10m nopurge
  # create the entry in the table when the server generates the cookie
  stick store-response set-cookie(MYCOOK)
  # Reset the TTL in the stick table each time a request comes in
  stick store-request cookie(MYCOOK)
  cookie SERVERID insert indirect nocache
  server s1 10.0.1.101:80 cookie s1 track bk_appsrv_browse/s1 maxconn 50
  server s2 10.0.1.102:80 cookie s2 track bk_appsrv_browse/s2 maxconn 50

# Reserve resources for search engine bots
backend bk_seo
  option httpchk GET /appcheck
  http-check expect rstring [oO][kK]
  server s3 10.0.1.103:80 check

backend bk_sorrypage
  balance roundrobin
  server s1 10.0.1.103:80 check maxconn 1000
  server s2 10.0.1.104:80 check maxconn 1000

backend bk_worst_case_management
  errorfile 503 /etc/haproxy/errors/503.txt

Partner slowness protection

Some ecommerce websites rely on partners for certain products or services. Unfortunately, if a partner's webservice slows down, then our own application slows down too. Even worse, we may see sessions piling up and servers crashing due to lack of resources…
In order to prevent this, just configure your app servers to reach your partners' webservices through HAProxy. HAProxy can shut down a session if a partner is too slow to answer. If the partner complains that you don't send them enough deals, just tell them to improve their platform, maybe using an ADC like HAProxy / ALOHA Load-Balancer 😉

frontend ft_partner1
  bind 10.0.0.3:8001
  use_backend bk_partner1

backend bk_partner1
  # the partner has 2 seconds to answer each request
  timeout server 2s
  # you can add a maxconn here if you're not supposed to open 
  # too many connections on the partner application
  server partner1 1.2.3.4:80 check
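If you deal with several partners, a dedicated frontend/backend pair per partner keeps their timeouts and connection limits independent. A hedged sketch (names, ports and addresses are illustrative):

```
frontend ft_partner2
  bind 10.0.0.3:8002
  use_backend bk_partner2

backend bk_partner2
  # this partner is allowed a bit more time, but fewer concurrent connections
  timeout server 5s
  server partner2 5.6.7.8:80 check maxconn 20
```

The application then calls HAProxy's local addresses instead of the partners' URLs, and HAProxy enforces each partner's limits transparently.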
