Emulating Active/Passive application clustering with HAProxy

Synopsis

HAProxy is a load balancer; that is a fact. It is used to route traffic to servers, primarily to ensure application reliability.

Most of the time, sessions are stored locally on a server, which means that if you want to split client traffic across multiple servers, you have to ensure each user can be redirected to the server which manages his session (if that server is available, of course). HAProxy can do this in many ways: we call it persistence.
Thanks to persistence, we usually say that any application can be load-balanced… which is true in 99% of cases. In very rare cases, the application can’t be load-balanced: there might be a lock somewhere in the code, or some other good reason…
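
For the record, here is a minimal sketch of one common persistence method, cookie insertion (the backend name, cookie name and addresses below are arbitrary examples, not the setup discussed in this article):

backend bk_website
 balance roundrobin
 # insert a SERVERID cookie so each client keeps reaching the server
 # which owns his session
 cookie SERVERID insert indirect nocache
 server s1 10.0.0.1:80 check cookie s1
 server s2 10.0.0.2:80 check cookie s2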

In such cases, to ensure high availability, we build “active/passive” clusters, where only one node can be active at a time.
HAProxy can be used in different ways to emulate an active/passive clustering mode, and this is the purpose of today’s article.

Bear in mind that by “active/passive”, I mean that 100% of the users must be forwarded to the same server. And if a failover occurs, they must all follow it to the new server at the same time!

Diagram

Let’s use one HAProxy instance with a couple of servers, s1 and s2.
At startup, s1 is the master and s2 is used as a backup:

  -------------
  |  HAProxy  |
  -------------
   |         `
   |active    ` backup
   |           `
 ------       ------
 | s1 |       | s2 |
 ------       ------

Configuration

Automatic failover and failback

The configuration below makes HAProxy use s1 when available, and otherwise fail over to s2 if it is available:

defaults
 mode http
 option http-server-close
 timeout client 20s
 timeout server 20s
 timeout connect 4s

frontend ft_app
 bind 10.0.0.100:80 name app
 default_backend bk_app

backend bk_app
 server s1 10.0.0.1:80 check
 server s2 10.0.0.2:80 check backup

The most important keyword above is “backup” on the s2 configuration line.
Unfortunately, as soon as s1 comes back up, all the traffic will fail back to it, which can be acceptable for web applications, but not for an active/passive setup.

Automatic failover without failback

The configuration below makes HAProxy use s1 when available, and otherwise fail over to s2 if it is available.
Once a failover has occurred, no failback will be processed automatically, thanks to the stick table:

peers LB
 peer LB1 10.0.0.98:1234
 peer LB2 10.0.0.99:1234

defaults
 mode http
 option http-server-close
 timeout client 20s
 timeout server 20s
 timeout connect 4s

frontend ft_app
 bind 10.0.0.100:80 name app
 default_backend bk_app

backend bk_app
 # single-entry stick-table, shared with the other load balancer through
 # the "LB" peers section, remembering which server currently owns the traffic
 stick-table type ip size 1 peers LB
 # stick on the destination IP (the frontend address), so every client maps
 # to the same entry, hence to the same server
 stick on dst
 server s1 10.0.0.1:80 check
 server s2 10.0.0.2:80 check backup
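
Note that this configuration is meant to be deployed identically on both load balancers (LB1 and LB2 above). Each instance picks its own entry in the peers section through its local peer name, which defaults to the machine’s hostname and can also be forced at startup with -L (the configuration path below is just an example):

haproxy -f /etc/haproxy/haproxy.cfg -L LB1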

The stick table maintains persistence based on the destination IP address (10.0.0.100 in this case):

show table bk_app
# table: bk_app, type: ip, size:20480, used:1
0x869154: key=10.0.0.100 use=0 exp=0 server_id=1
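
The output above comes from HAProxy’s runtime interface. Assuming a stats socket is enabled in the global section, for example:

global
 stats socket /var/run/haproxy.sock level admin

the table can then be queried from a shell on the load balancer (socat is just one way to talk to the socket):

echo "show table bk_app" | socat stdio /var/run/haproxy.sock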

With such a configuration, you can trigger a failback by disabling s2 for a few seconds.
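
For example, assuming the stats socket shown above and that client requests keep coming in while you do it, a manual failback to s1 could look like this:

# put s2 in maintenance: the next requests go to s1 and refresh the stick-table entry
echo "disable server bk_app/s2" | socat stdio /var/run/haproxy.sock
# once "show table bk_app" reports server_id=1 again, bring s2 back as a backup
echo "enable server bk_app/s2" | socat stdio /var/run/haproxy.sock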


About Baptiste Assmann

Aloha Product Manager

9 Responses to Emulating Active/Passive application clustering with HAProxy

  1. wtarreau says:

    Hi Baptiste,

    just a few comments:
    – you don’t need 20k entries since you stick on “dst”
    – you can even stick on “always_true” which has only one value, and have a single entry in your stick-table.

    • markruys says:

      You might not need 20k entries, but in a lot of scenarios your backend server will be configured for multiple IP addresses (e.g. in the case of SSL certificates). So I would define the stick table with, let’s say, at least 100 entries.

      • The dst IP which is stored is the IP of the frontend.
        This is the ‘dst’ from the client connection’s point of view.
        So definitely, the table size should be 1.

        If you have 100 IPs on your frontend, then this article won’t help you. I should write a new one for such a scenario!

        Baptiste

  2. markruys says:

    Isn’t it simpler to use a very high rise instead of peers? E.g.:

     server s1 10.0.0.1:80 check rise 9999999

    • Hi Markus,

      Your solution does not answer the issue at all…
      Furthermore, your server will be back up after 115 days (if inter is 1s).

      Baptiste

      • markruys says:

        Hi Baptiste,

        I appreciate posts like these and your other blogs, very informative. Apparently I missed your point, as I was under the assumption you needed a mechanism to prevent automatic failback in an active/passive setting. I agree that adding ‘rise 999999’ does fall back to the master after over 100 days, but well, it gives you plenty of time for a manual failback… Source: http://serverfault.com/questions/220681/prevent-haproxy-from-toggeling-back-from-fallback-to-master

        Mark

      • Don’t trust everything you can read on the Internet!
        Thanks to HAProxy’s flexibility, a single goal can be achieved in many ways.
        That said, in a production environment, the only reliable way is, from my point of view, the one in the blog.

        Baptiste

      • markruys says:

        > With such a configuration, you can trigger a failback by disabling s2 for a few seconds.
        I have thought a bit more about your solution, and I find a ‘rise 9999999’ safer than using a ‘stick on dst’ table. This is because I never, ever want to fail back automatically: I have to be sure that stuff like DB replication has recovered first. I’m afraid that in a temporarily unstable network, the backup server will be flagged as down and hence a failback will commence…

      • stick on dst already avoids automatic failback! This is why we use this solution!

        Baptiste
