Send users to the same backend for both HTTP and HTTPS


Your application uses both HTTP and HTTPS, depending on the page.
SSL encryption is handled directly by your backend servers.
You want your users to be connected to the same backend for both protocols.


This configuration has to be applied on the Layer7 (HAProxy) tab of the Aloha.

Whichever protocol is used on the first request, the client IP will be associated with a backend and inserted into a stick table.

global
	stats socket ./haproxy.stats level admin

frontend ft_http
	bind :80
	mode http
	default_backend bk_http

frontend ft_https
	bind :443
	mode tcp
	default_backend bk_https

backend bk_http
	mode http
	balance roundrobin
	stick on src table bk_https
	default-server inter 1s
	# example addresses; adjust to your servers
	server s1 192.168.10.11:80 check id 1
	server s2 192.168.10.12:80 check id 2

backend bk_https
	mode tcp
	balance roundrobin
	stick-table type ip size 200k expire 30m
	stick on src
	default-server inter 1s
	# example addresses; adjust to your servers
	server s1 192.168.10.11:443 check id 1
	server s2 192.168.10.12:443 check id 2


echo "show table bk_https" | socat unix-connect:./haproxy.stats stdio
# table: bk_https, type: ip, size:204800, used:2
0x1fea474: key= use=0 exp=1764443 server_id=1
0x2014a24: key= use=0 exp=1798278 server_id=2
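The sharing logic above can be sketched as a toy model (plain Python for illustration, not HAProxy internals): both pools consult one shared {client IP → server} map, and round-robin only assigns a server on a client's first visit.

```python
import itertools

# Toy model of the shared stick table: the HTTP and HTTPS pools consult
# the same {client_ip: server} map, and round-robin only assigns a server
# on a client's first visit.
servers = ["s1", "s2"]
rr = itertools.cycle(servers)
stick_table = {}

def pick_server(client_ip):
    if client_ip not in stick_table:
        stick_table[client_ip] = next(rr)   # first visit: round-robin choice
    return stick_table[client_ip]           # later visits: stick to it

first = pick_server("203.0.113.7")   # e.g. first request arrives over HTTPS
again = pick_server("203.0.113.7")   # a later request over plain HTTP
assert first == again                # same backend for both protocols
```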


Protect your web server against slowloris


Slowloris is a script that opens TCP connections and sends HTTP headers very slowly, forcing web servers to keep connections open.
Slowloris' purpose is to take all of one server's resources for itself, preventing any regular browser from using the service.
It is a layer 7 DoS attack.


When using an Aloha, it's easy to protect your web platform from such attacks with HAProxy.
The configuration below shows how to turn the Aloha load balancer into a shield for your website.

defaults
	mode http
	maxconn 19500        # Should be slightly smaller than global.maxconn.
	timeout client 60s   # Client and server timeout must match the longest
	timeout server 60s   # time we may wait for a response from the server.
	timeout queue  60s   # Don't queue requests too long if saturated.
	timeout connect 4s   # There's no reason to change this one.
	timeout http-request 5s	# A complete request may never take that long.
	# Uncomment the following one to protect against nkiller2. But warning!
	# some slow clients might sometimes receive truncated data if last
	# segment is lost and never retransmitted :
	# option nolinger
	option httpclose
	option abortonclose
	balance roundrobin
	option forwardfor    # set the client's IP in X-Forwarded-For.
	retries 2

frontend public
	bind :80 # or any other IP:port combination we listen to.
	default_backend apache

backend apache
	# set the maxconn parameter below to match Apache's MaxClients minus
	# one or two connections so that you can still directly connect to it.
	# example address; adjust to your server
	server srv 192.168.10.11:80 maxconn 248
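The key protection here is timeout http-request: the client gets a fixed budget to send complete request headers. A minimal sketch of the idea in plain Python (a toy single-connection server, not HAProxy code; the timeout here is per read rather than a total budget, which is enough to show the principle):

```python
import socket
import threading

HEADER_TIMEOUT = 0.5  # seconds (the HAProxy example above uses 5s)

def serve_one(listener, results):
    conn, _ = listener.accept()
    conn.settimeout(HEADER_TIMEOUT)
    data = b""
    try:
        while b"\r\n\r\n" not in data:   # wait for the end of the headers
            chunk = conn.recv(1024)
            if not chunk:
                break
            data += chunk
        results.append("complete")
    except socket.timeout:
        results.append("timed out")      # slowloris-style client is cut off
    finally:
        conn.close()

listener = socket.create_server(("127.0.0.1", 0))
results = []
server = threading.Thread(target=serve_one, args=(listener, results))
server.start()

# A slowloris-style client: sends one header line, then stalls forever.
client = socket.create_connection(listener.getsockname())
client.sendall(b"GET / HTTP/1.1\r\n")
server.join()                            # server gives up after the timeout
client.close()
listener.close()
```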



Maintain affinity based on SSL session ID


When load balancing HTTPS, we can't see the HTTP protocol since everything is encrypted, so it is hard to maintain persistence in such conditions.

The Aloha load balancer allows you to maintain HTTPS sessions based on the SSL session ID.
That way, even though you can't see the protocol, you can maintain affinity between a user and a backend server.

This is much better than source-IP affinity, since many users can share the same IP address and would generate extra load on a single backend.
Furthermore, the session can be followed even if the client changes its IP address.


The configuration below shows how to maintain affinity on the SSL session ID and store it in a stick-table.
We take advantage of HAProxy ACLs to validate the protocol.

# Learn SSL session ID from both request and response and create affinity.
backend https
	mode tcp
	balance roundrobin

	# maximum SSL session ID length is 32 bytes.
	stick-table type binary len 32 size 30k expire 30m

	acl clienthello req_ssl_hello_type 1
	acl serverhello rep_ssl_hello_type 2

	# use tcp-request content rules to detect the SSL client and server hellos.
	tcp-request inspect-delay 5s
	tcp-request content accept if clienthello

	# no timeout on response inspect delay by default.
	tcp-response content accept if serverhello

	# SSL session ID (SSLID) may be present on a client or server hello.
	# Its length is coded on 1 byte at offset 43 and its value starts
	# at offset 44.
	# Match and learn on request if client hello.
	stick on payload_lv(43,1) if clienthello

	# Learn on response if server hello.
	stick store-response payload_lv(43,1) if serverhello

	# example addresses; adjust to your servers
	server s1 192.168.10.11:443
	server s2 192.168.10.12:443
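The offsets 43 and 44 used above come from the fixed layout of an SSLv3/TLS hello: 5 bytes of record header, 4 of handshake header, 2 of version and 32 of random precede the 1-byte session-ID length. This can be checked in Python against a hand-built synthetic ClientHello (illustration only; cipher suites, compression and extensions are omitted, so this is not a complete TLS message, just enough bytes to show the offsets):

```python
session_id = bytes(range(16))                 # hypothetical 16-byte session ID
hello_body = (
    b"\x03\x03"                               # client_version: TLS 1.2
    + bytes(32)                               # 32-byte random
    + bytes([len(session_id)]) + session_id   # session_id_length + session_id
)
handshake = b"\x01" + len(hello_body).to_bytes(3, "big") + hello_body
#            ^ handshake type 1 = ClientHello (what req_ssl_hello_type 1 matches)
record = b"\x16\x03\x01" + len(handshake).to_bytes(2, "big") + handshake
#         ^ record type 22 = handshake, then protocol version and length

# 5 (record header) + 4 (handshake header) + 2 (version) + 32 (random) = 43
length = record[43]                # the 1-byte length read by payload_lv(43,1)
assert length == len(session_id)
assert record[44:44 + length] == session_id   # value starts at offset 44
```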


Implement HTTP keepalive without killing your Apache server


This howto explains how you can use your Aloha load balancer to implement HTTP keepalive while saving resources on your servers at the same time.

What is HTTP KeepAlive?

In early versions of the HTTP protocol, a client sent each request over a new TCP connection to the server, got the content through this connection and finally closed it.
This method works well when web pages contain only a few objects.
More objects mean more time waiting for each TCP connection to be set up and torn down.

HTTP 1.0 introduced the header Connection: Keep-Alive.
Clients and servers sent each other this header to tell the other side to keep the connection open.
By default, there was no keepalive.

HTTP 1.1 considers every connection to be kept alive by default.
If one side doesn't want the connection to stay open, the client or the server has to send the header Connection: Close.

By using HTTP keepalive, you reduce web page load time.
On the other hand, too many TCP connections kept open on an Apache server can make it consume too much memory and CPU.

Using HAProxy to implement HTTP Keepalive

You can use HAProxy to provide HTTP keepalive on the client side while not doing it on the server side.
The purpose is to deliver objects quickly to the client without the overhead of TCP handshake latency, while releasing memory and CPU on the server side by closing server connections.
Note: since HAProxy and the server are on the same LAN, the handshake latency there is negligible.

You can enable HTTP keepalive on the client side by enabling option http-server-close:

frontend public
	bind :80
	option http-server-close
	default_backend apache

backend apache
	option http-server-close
	# example address; adjust to your server
	server srv 192.168.10.11:80

Note: this also works with other HTTP server software such as IIS, Tomcat, etc.
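The client-side effect can be checked with a short Python experiment (standard library only, a stand-in for any keepalive-capable server): two requests sent over one http.client connection reach the server on the same TCP connection, i.e. from the same client port.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

seen_ports = []   # client source port observed by the server, per request

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1: keepalive is the default
    def do_GET(self):
        seen_ports.append(self.client_address[1])
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):    # keep the output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
for _ in range(2):                   # two requests over the same connection
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
server.shutdown()

# Both requests arrived from the same client port: one TCP connection,
# reused thanks to keepalive.
assert seen_ports[0] == seen_ports[1]
```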


How to play with maxconn to avoid server slowness or crash

Using the Aloha load balancer and HAProxy, it is easy to protect any application or web server against unexpected load spikes.


The response time of a web server is directly related to the number of requests it has to handle at the same time, and the relationship is not linear: response time grows more like an exponential as concurrency increases.
The graph below shows a server's response time compared to the number of simultaneous users browsing the website:

Simultaneous connections limiting

Simultaneous connections limiting is basically a number (the limit) that the load balancer treats as the maximum number of requests to send to a backend server at the same time.
Since HAProxy provides this feature, so does the Aloha load balancer.

Smart handling of requests peak with HAProxy

The idea is to prevent too many requests from being forwarded to an application server, by setting a limit on simultaneous requests for each server of the backend.

Fortunately, HAProxy does not reject requests over the limit, unlike some other load balancers.

HAProxy uses a queueing system and waits for the backend server to become available to answer. This mechanism adds a small delay for requests in the queue, but it has a few advantages:

  • no client request is rejected
  • every request is served faster than it would be by an overloaded backend server
  • the delay is still acceptable (a few ms in queue)
  • your server won’t crash because of the spike

Simultaneous request limiting occurs on the server side: HAProxy limits the number of concurrent requests to the server regardless of what happens on the client side.
HAProxy will never refuse a client connection until the underlying server runs out of capacity.

Concrete numbers

If you read the graph above carefully, you can see that the more requests your server has to process at the same time, the longer each request takes to process.
The table below summarizes the time spent by our example server processing 250 requests with different simultaneous request limits:

Number of requests   Simultaneous requests limit   Average time per request (ms)   Longest response time (ms)
250                  10                            9                               225
250                  20                            9                               112
250                  30                            9                               75
250                  50                            25                              125
250                  100                           100                             250
250                  150                           225                             305
250                  250                           625                             625

It’s up to the website owner to know what will be the best limit to setup on HAProxy.
You can approximate it by using HTTP benchmark tools and by comparing average response time to constant number of request you send to your backend server.

From the example above, we can see we would get the best out of this backend server by setting the limit to 30.
Setting the limit too low would mean requests queue for longer, while setting it too high would be counter-productive, slowing down every request once the server's capacity is exceeded.
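Most rows of the table follow a rough "wave" model: with limit L, the 250 requests are served in 250/L successive waves, each lasting the average per-request time measured at that concurrency. This is only an approximation, not HAProxy's actual scheduling (the limit-150 row deviates because waves overlap there), but it reproduces several rows exactly:

```python
# Measured average per-request times (ms) from the table, per limit value.
measured_avg_ms = {10: 9, 50: 25, 100: 100, 250: 625}

def longest_response_ms(total_requests, limit):
    waves = total_requests / limit          # successive "waves" of size limit
    return waves * measured_avg_ms[limit]   # the last wave finishes last

for limit in sorted(measured_avg_ms):
    print(limit, longest_response_ms(250, limit))
# -> 225.0, 125.0, 250.0 and 625.0 ms: the table's "longest" column
```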

HAProxy simultaneous requests limiting configuration

The simultaneous request limit is configured with the maxconn keyword on the server line definition.

frontend APPLI1
	bind :80
	mode http
	option http-server-close
	default_backend APPLI1

backend APPLI1
	balance roundrobin
	mode http
	server server1 srv1:80 maxconn 30
	server server2 srv2:80 maxconn 30


Smart content switching for news website


Build a scalable architecture for a news website using the components below:

  • load balancer with content switching capability
  • cache server
  • application server


  • Content switching: the ability to route traffic based on the content of the HTTP request: URI, parameters, headers, etc.
    HAProxy is a good example of an open-source reverse-proxy load balancer with content switching capability.
  • Cache server: a server able to deliver static content quickly.
    Squid, Varnish and Apache Traffic Server are open-source reverse-proxy caches.
  • Application server: the server which builds the pages for your news website.
    This can be Apache+PHP, Tomcat+Java, IIS+ASP.NET, etc.

Target Network Diagram



All the traffic passes through the Aloha load balancer.
HAProxy, the layer 7 load balancer included in the Aloha, does the content switching to route requests either to the cache servers or to the application servers.
If a cache server misses an object, it fetches it from the application servers.

HAProxy configuration

Service configuration:

frontend public
	bind :80
	acl DYN path_beg /user
	acl DYN path_beg /profile
	acl DYN method POST
	use_backend APPLICATION if DYN
	default_backend CACHE

The content switching is achieved by the few lines beginning with the acl keyword.
If a URI starts with /user or /profile, or if the method is POST, the traffic is routed to the APPLICATION server pool; otherwise the CACHE pool is used.

Application pool configuration:

backend APPLICATION
	balance roundrobin
	cookie PHPSESSID prefix
	option httpchk /health
	http-check expect string GOOD
	# example addresses; adjust to your servers
	server APP1 192.168.10.21:80 cookie app1 check
	server APP2 192.168.10.22:80 cookie app2 check

We maintain backend server persistence using the cookie sent by the application server, named PHPSESSID in this example. You can change this cookie name to the one provided by your application, such as JSESSIONID, ASP.NET_SessionId or anything else.
Note the health check URL: /health. The script executed on the backend should check the server's health (database availability, CPU usage, memory usage, etc.) and return GOOD if everything looks fine and WRONG if not. With this, HAProxy considers the server ready only when it returns GOOD.
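As a sketch, such a health endpoint could look like this (plain Python standard library; checks_pass() is a hypothetical placeholder for your real database/CPU/memory checks):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def checks_pass():
    # Hypothetical placeholder: a real script would test database
    # availability, CPU usage, memory usage, and so on.
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def respond(self):
        body = b"GOOD" if checks_pass() else b"WRONG"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    do_GET = respond
    do_OPTIONS = respond   # "option httpchk <uri>" sends OPTIONS by default
    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/health" % server.server_address[1]
body = urllib.request.urlopen(url).read()
server.shutdown()
assert body == b"GOOD"   # what "http-check expect string GOOD" matches
```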

Cache pool configuration

backend CACHE
	balance uri
	hash-type consistent
	option httpchk /
	http-check expect string KEYWORD
	reqidel ^Accept-Encoding unless { hdr_sub(Accept-Encoding) gzip }
	reqirep ^Accept-Encoding:\ .*gzip.* Accept-Encoding:\ gzip
	# example addresses; adjust to your servers
	server CACHE1 192.168.10.31:80 check
	server CACHE2 192.168.10.32:80 check

Here, we balance requests according to the URI. The purpose of this metric is to “force” a single URL to always be fetched from the same cache server.
Main benefits are :

  1. fewer objects in the caches' memory
  2. fewer requests to the application servers for static and pseudo-static content

In order to limit the impact of the Vary: header on content encoding, we added the two reqidel / reqirep lines to normalize the Accept-Encoding header a bit.
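The effect of that normalization can be sketched in Python (a toy stand-in for the reqidel/reqirep pair, not HAProxy's parser): many client Accept-Encoding variants collapse into at most one cacheable variant.

```python
def normalize(accept_encoding):
    """Toy stand-in for the reqidel/reqirep pair: drop the header unless it
    mentions gzip, otherwise collapse it to exactly "gzip"."""
    if accept_encoding and "gzip" in accept_encoding.lower():
        return "gzip"
    return None   # header deleted

variants = ["gzip, deflate, br", "deflate, gzip;q=1.0", "identity", None]
print([normalize(v) for v in variants])
# -> ['gzip', 'gzip', None, None]: one cacheable variant instead of many
```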


Layer7 IPv6 configuration


Use the Aloha as an IPv6-to-IPv4 gateway without modifying anything on your current platform.

Target Network Diagram


The website is available through IPv4 on the service IP. The IPv4 router NATs the public IPv4 address to this service IP.
For IPv6, the website hostname resolves directly to 2001::2254, which is the IPv6 service IP hosting the service. The router simply routes the traffic to the Aloha.
All IPv6 traffic is automatically translated to IPv4 by the Aloha: there is nothing to change on your servers, and they don't even need to be IPv6 compliant.


Aloha 1 network configuration

On the GUI, click on Services > network > eth0 setup icon, then update the configuration as below:

service network eth0
    vrrp id 254
    vrrp garp 30
    vrrp prio 100
    vrrp no-address
    vrrp address 2001::2254
    vrrp address
    ip6  address 2001::2201/96
    ip address
    mtu 1500

Click on [OK], then [Close].

Once the configuration has been updated, you need to reload the services:

  • Network: Click on Services > eth0 reload icon
  • VRRP: Click on Services > vrrp reload icon

Aloha 2 network configuration

On the GUI, click on Services > network > eth0 setup icon, then update the configuration as below:

service network eth0
    vrrp id 254
    vrrp garp 30
    vrrp prio 99
    vrrp no-address
    vrrp address 2001::2254
    vrrp address
    ip6  address 2001::2202/96
    ip address
    mtu 1500

Click on [OK], then [Close].

Once the configuration has been updated, you need to reload the services:

  • Network: Click on Services > eth0 reload icon
  • VRRP: Click on Services > vrrp reload icon

Layer 7 (HAProxy) configuration

This configuration is common to both Aloha load balancers.
Add the bind on the IPv6 service address in the corresponding frontend section:

frontend ft_myappli
    bind 2001::2254:80
    mode http
    log global
    option httplog
    maxconn 1000
    timeout client 25s
    default_backend bk_myappli

Click on [OK], then [Apply].