Tag Archives: layer7

HAProxy and sslv3 poodle vulnerability

SSLv3 poodle vulnerability

Yesterday, Google security researchers disclosed a new vulnerability in the SSL protocol.
Fortunately, this vulnerability only affects an old version of the protocol: SSLv3 (a 15-year-old protocol).
An attacker can force a browser to downgrade the protocol version used to cipher traffic to SSLv3 in order to exploit the POODLE vulnerability and access data in clear text.

Some reading about SSLv3 Poodle vulnerability:
* http://googleonlinesecurity.blogspot.fr/2014/10/this-poodle-bites-exploiting-ssl-30.html
* https://www.imperialviolet.org/2014/10/14/poodle.html
* https://www.poodletest.com/

Today’s article explains how to use HAProxy to simply disable SSLv3, or to prevent SSLv3 users from reaching your applications and show them a message instead.

Disable SSLv3 in HAProxy

In SSL offloading mode

In this mode, HAProxy is the SSL endpoint of the connection.
It’s a simple keyword on the frontend bind directive:

  bind 10.0.0.1:443 ssl crt /path/to/cert.pem no-sslv3

In SSL forward mode


In this mode, HAProxy forwards the SSL traffic to the server without deciphering it.
We must set up an ACL to match the SSL protocol version, then we can reject the connection. This must be added in a **frontend** section:

  bind 10.0.0.1:443
  tcp-request inspect-delay 2s
  acl sslv3 req.ssl_ver 3
  tcp-request content reject if sslv3
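
For context, here is what a complete pass-through frontend could look like (the names, IPs and backend are illustrative assumptions; since the traffic stays ciphered, the frontend runs in mode tcp):

```
frontend ft_ssl_passthrough
  bind 10.0.0.1:443
  mode tcp
  # give the client up to 2s to send its ClientHello
  tcp-request inspect-delay 2s
  acl sslv3 req.ssl_ver 3
  tcp-request content reject if sslv3
  default_backend bk_ssl_servers
```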

Communicate a message to users

Denying SSLv3 is a good start, but a better approach is to also educate the users who still rely on this protocol.
The configuration below shows how to redirect users to a specific page when they try to reach your application over an SSLv3 connection. Of course, HAProxy itself must still accept SSLv3:

frontend ft_www
  bind 10.0.0.1:443 ssl crt /path/to/cert.pem
  acl sslv3 ssl_fc_protocol SSLv3
# first rule after all your 'http-request deny' and
# before all the redirect, rewrite, etc....
  http-request allow if sslv3
[...]
# first content switching rule
  use_backend bk_sslv3 if sslv3

backend bk_sslv3
  mode http
  errorfile 503 /etc/haproxy/pages/sslv3.http

And the content of the file /etc/haproxy/pages/sslv3.http:

HTTP/1.0 200 OK
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html>
<head>
<title>SSLv3 spotted</title>
</head>
<body>
<h1>SSLv3 spotted</h1>
SSLv3 forbidden for your safety:<BR>
http://googleonlinesecurity.blogspot.fr/2014/10/this-poodle-bites-exploiting-ssl-30.html<BR>
<BR>
If you want to browse this website, you should upgrade your browser.
</body>
</html>

Links

Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer

Transparent Proxy

HAProxy works as a reverse-proxy, which means it maintains 2 connections when allowing a client to cross it:
  – 1 connection between HAProxy and the client
  – 1 connection between HAProxy and the server

HAProxy then manipulates buffers between these two connections.
One of the drawbacks of this mode is that HAProxy lets the kernel establish the connection to the server. The kernel uses a local IP address to do this.
Because of this, HAProxy “hides” the client IP behind its own: this can be an issue in some cases.
Here comes the transparent proxy mode: HAProxy can be configured to spoof the client IP address when establishing the TCP connection to the server. That way, the server thinks the connection comes directly from the client (of course, the server must answer back through HAProxy and not to the client, otherwise it can’t work: the client would get an acknowledgment from the server’s IP while it established the connection to HAProxy’s IP).

Transparent binding

By default, when we want HAProxy to receive traffic, we have to tell it to bind an IP address and a port.
The IP address must exist on the operating system (unless you have set the sysctl net.ipv4.ip_nonlocal_bind) and the OS must announce its availability to the other devices on the network through the ARP protocol.
Well, in some cases we want HAProxy to be able to catch traffic on the fly without configuring any IP address, VRRP or whatever…
This is where transparent binding comes in: HAProxy can be configured to catch traffic on the fly even if the destination IP address is not configured on the server.
These IP addresses will never be pingable, but they’ll deliver the services configured in HAProxy.

HAProxy and the Linux Kernel

Unfortunately, HAProxy can’t do transparent binding or proxying alone. It relies on a properly compiled and tuned Linux kernel and operating system.
Below, I’ll explain how to do this in a standard Linux distribution.
Here is the checklist to meet:
  1. appropriate HAProxy compilation option
  2. appropriate Linux Kernel compilation option
  3. sysctl settings
  4. iptables rules
  5. ip route rules
  6. HAProxy configuration

HAProxy compilation requirements


First of all, HAProxy must be compiled with the option TPROXY enabled.
It is enabled by default when you use the target LINUX26 or LINUX2628.

Linux Kernel requirements

You have to ensure your kernel has been compiled with the following options:
  – CONFIG_NETFILTER_TPROXY
  – CONFIG_NETFILTER_XT_TARGET_TPROXY

Of course, iptables must be enabled as well in your kernel 🙂

sysctl settings

The following sysctls must be enabled:
  – net.ipv4.ip_forward
  – net.ipv4.ip_nonlocal_bind
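
To make these settings persistent across reboots, they can be added to /etc/sysctl.conf (then reloaded with sysctl -p):

```
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
```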

iptables rules


You must setup the following iptables rules:

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

The purpose is to mark packets which match a socket bound locally (by HAProxy).

IP route rules


Then, tell the operating system to forward packets marked by iptables to the loopback interface, where HAProxy can catch them:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

HAProxy configuration


Finally, you can configure HAProxy.
  * Transparent binding can be configured like this:

[...]
frontend ft_application
  bind 1.1.1.1:80 transparent
[...]

  * Transparent proxying can be configured like this:

[...]
backend bk_application
  source 0.0.0.0 usesrc clientip
[...]

Transparent mode in the ALOHA Load-Balancer


Now, the same steps in the ALOHA Load-balancer, which is an HAProxy based load-balancing appliance:
  1-5. not required, the ALOHA kernel is deeply tuned for this purpose
  6. HAProxy configuration

LB Admin tab (AKA click mode)

  * Transparent binding can be configured like this, when editing a Frontend listener:
frontend_listener_transparent

  * Transparent proxying can be configured like this when editing a farm:
backend_transparent

LB Layer 7 tab (vi in a browser mode)


  * Transparent binding can be configured like this:

[...]
frontend ft_application
  bind 1.1.1.1:80 transparent
[...]

  * Transparent proxying can be configured like this:

[...]
backend bk_application
  source 0.0.0.0 usesrc clientip
[...]

Links

Client IP persistence OR source IP hash load-balancing?

Client or Source IP ???


Well, these are roughly the same! It depends on people, environment, products, etc… I may use both terms in this article, but be aware that both of them point to the IP address used to connect to the service being load-balanced.

Load-Balancing and Stickiness


Load-Balancing is the ability to spread requests among a pool of servers which deliver the same service. By definition, it means that any request can be sent to any server in the pool.
Some applications require stickiness between a client and a server: all the requests from a client must be sent to the same server. Otherwise, the application session may be broken, which may have a negative impact on the client.

Source IP stickiness


There are many ways to stick a user to a server, which has already been discussed on this blog (read load balancing, affinity, persistence, sticky sessions: what you need to know) (and many other articles may follow).

That said, sometimes, the only information we can “rely” on to perform stickiness is the client (or source) IP address.
Note this is not optimal because:
  * many clients can be “hidden” behind a single IP address (Firewall, proxy, etc…)
  * a client can change its IP address during the session
  * a client can use multiple IP addresses
  * etc…

Performing source IP affinity


There are two ways of performing source IP affinity:
  1. Using a dedicated load-balancing algorithm: a hash on the source IP
  2. Using a stick table in memory (and a roundrobin load-balancing algorithm)

Actually, the main purpose of this article was to introduce both methods which are quite often misunderstood, and to show pros and cons of each, so people can make the right decision when configuring their Load-Balancer.

Source IP hash load-balancing algorithm


This algorithm is deterministic: as long as no element involved in the hash computation changes, the result stays the same. Two pieces of equipment can apply the same hash, hence load-balance the same way, making a load-balancer failover transparent.
A hash function is applied on the source IP address of the incoming request. The hash must take into account the number of servers and each server’s weight.
The following events can make the hash change and so may redirect traffic differently over the time:
  * a server in the pool goes down
  * a server in the pool goes up
  * a server weight change

The main issue with the source IP hash load-balancing algorithm is that each change can redirect EVERYBODY to a different server!!!
That’s why some good load-balancers have implemented a consistent hashing method, which ensures that if a server fails, for example, only the clients connected to this server are redirected.
The counterpart of consistent hashing is that it doesn’t provide a perfectly even distribution, and so, in a farm of 4 servers, some may receive more clients than others.
Note that when a failed server comes back, its “stuck” users (determined by the hash) will be redirected back to it.
There is no overhead in terms of CPU or memory when using such an algorithm.
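
To see the difference concretely, here is a small Python sketch (an illustration of the idea, not HAProxy's actual implementation) comparing a naive modulo hash with a consistent-hash ring: removing one server remaps most clients with the naive hash, but only that server's clients with the ring.

```python
import bisect
import hashlib

def h(key):
    # Stable hash helper (md5 used purely for illustration)
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def naive(ip, servers):
    # Naive hashing: the chosen index depends on the number of servers
    return servers[h(ip) % len(servers)]

class Ring:
    """Minimal consistent-hash ring with virtual nodes."""
    def __init__(self, servers, vnodes=100):
        self.ring = sorted((h(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def lookup(self, ip):
        # First vnode clockwise from the client's hash (wraps around)
        i = bisect.bisect(self.keys, h(ip)) % len(self.ring)
        return self.ring[i][1]

clients = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]
servers = ["srv1", "srv2", "srv3", "srv4"]

# Mapping before and after "srv4" fails
ring_before, ring_after = Ring(servers), Ring(servers[:3])
moved_ring = sum(ring_before.lookup(c) != ring_after.lookup(c) for c in clients)
moved_naive = sum(naive(c, servers) != naive(c, servers[:3]) for c in clients)

print(moved_naive, moved_ring)  # the ring moves far fewer clients
```

With the naive hash, roughly three quarters of the clients change server when one server disappears; with the ring, only the clients that were on the failed server move.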

Configuration example in HAProxy or in the ALOHA Load-Balancer:

balance source
hash-type consistent

Source IP persistence using a stick-table


The load-balancer creates a table in memory to store the source IP address and the server affected from the pool.
We can rely on any non-deterministic load-balancing algorithm, such as roundrobin or leastconn (it usually depends on the type of application you’re load-balancing).
Once a client is stuck to a server, it stays stuck until the entry in the table expires OR the server fails.
There is a memory overhead to store the stickiness information. In HAProxy, this overhead is pretty low: about 40MB for 1,000,000 IPv4 addresses.
One main advantage of using a stick table is that when a failed server comes back, no existing sessions are redirected to it; only new incoming IPs can reach it. So there is no impact on users.
It is also possible to synchronize tables in memory between multiple HAProxy or ALOHA Load-Balancers, making a load-balancer failover transparent.
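
As a sketch, such synchronization goes through a peers section referenced by the stick-table (the peer names, IPs and port below are assumptions; each peer name must match the local hostname or the name passed to haproxy with -L):

```
peers lb_replication
  peer lb1 10.0.0.8:10000
  peer lb2 10.0.0.9:10000

backend bk_app
  balance roundrobin
  stick-table type ip size 1m expire 1h peers lb_replication
  stick on src
  server srv1 10.0.0.13:80 check
  server srv2 10.0.0.14:80 check
```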

Configuration example in HAProxy or in the ALOHA Load-Balancer:

stick-table type ip size 1m expire 1h
stick on src

Links

IIS 6.0 appsession cookie and PCI compliance

Synopsis

You’re using HAProxy or the ALOHA Load-Balancer to load-balance IIS 6.0 web applications and you want them to successfully pass a PCI compliance test.
One of the prerequisites is to force the cookie to be “HttpOnly”, in order to tell the browser to use this cookie for HTTP requests only and “protect” it from local javascript access (used to steal session information).
Unfortunately, IIS 6.0 is not able to set up such cookies. That’s why HAProxy can be used to update the cookie on the fly, when set by the application server.

Rewriting appsession Cookie with HAProxy

Place the configuration line below in your backend configuration:

rspirep ^Set-Cookie:\ (appsession.*)    Set-Cookie:\ \1;\ HttpOnly

Now, your application is “more” secure… Well, at least, you can successfully pass the PCI compliance tests!
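
The effect of that rewrite can be illustrated in a few lines of Python (a sketch of the regex logic, not of HAProxy internals):

```python
import re

def add_httponly(header_line):
    # Mimics the rspirep rule: append HttpOnly to appsession cookies
    return re.sub(r"^Set-Cookie: (appsession.*)",
                  r"Set-Cookie: \1; HttpOnly", header_line)

print(add_httponly("Set-Cookie: appsession=ABC123; path=/"))
# Set-Cookie: appsession=ABC123; path=/; HttpOnly
```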

Links

SSL offloading impact on web applications

SSL Offloading

Nowadays, it is common (and convenient) to use the Load-Balancer’s SSL capabilities to cipher/decipher traffic between the clients and the web application platform.
Performing SSL at the Load-Balancer layer is called “SSL offloading“, because you offload this process from your application servers.

Note that SSL offloading is also “marketingly” called SSL acceleration. The only acceleration performed is moving SSL processing from the application servers, which have another heavy task to perform, to another device in the network…
So no acceleration at all: despite some people claiming they accelerate SSL in a dedicated ASIC, they just offload it.

The diagram below shows what happens on an architecture with SSL offloading. We can clearly see that the traffic is encrypted up to the Load-Balancer, then it’s in clear between the Load-Balancer and the application server:
ssl offloading diagram

Benefits of SSL offloading


Offloading SSL from the application servers to the Load-Balancers has many benefits:
  * it offloads a heavy task from your application servers, letting them focus on the application itself
  * it saves resources on your application servers
  * the Load-Balancers have access to clear HTTP traffic and can perform advanced features such as reverse-proxying, cookie persistence, traffic regulation, etc…
  * when using an ALOHA Load-Balancer (or HAProxy), there are many more features available in the SSL stack than in any web application server

It seems there should only be positive impact when offloading SSL.

Counterparts of SSL offloading


As we saw on the diagram above, the load-balancer hides three important pieces of information from the client connection:
  1. protocol scheme: the client was using HTTPS while the connection to the application server is made over HTTP
  2. connection type: was it ciphered or not? Actually, this is linked to the protocol scheme
  3. destination port: on the load-balancer, the port is usually 443, while on the application server it is usually 80 (this may vary)

Most web applications use 301/302 responses with a Location header to redirect the client to a page (most of the time after a login or a POST), and they also require an application cookie.

So basically, the worst impact is that your application server may not know the client connection information and may not be able to craft the right responses: this can break the application entirely, preventing it from working.
Fortunately, the ALOHA Load-Balancer or HAProxy can help in such a situation!

Basically, the application could return such responses to the client:

Location: http://www.domain.com/login.jsp

And the client will leave the secured connection…

Tracking issues with the load-balancer


In order to know whether your application handles SSL offloading well, you can configure your ALOHA Load-Balancer to log some server response headers.
Add the configuration below in your application frontend section:

capture response header Location   len 32
capture response header Set-Cookie len 32

Now, in the traffic log, you’ll clearly see whether the application builds its responses for HTTP or HTTPS.

NOTE: some other headers may be needed, depending on your application.

Web Applications and SSL offloading


Here we are 🙂

There are 3 possible answers to such a situation, detailed below.

Client connection information provided in HTTP header by the Load-Balancer


First of all, the Load-Balancer can provide the client-side connection information to the application server through an HTTP header.
The configuration below shows how to insert a header called X-Forwarded-Proto containing the scheme used by the client.
It has to be added in your backend section:

http-request set-header X-Forwarded-Proto https if  { ssl_fc }
http-request set-header X-Forwarded-Proto http  if !{ ssl_fc }

Now, the ALOHA Load-Balancer will insert the following header when the connection is made over SSL:

X-Forwarded-Proto: https

and when performed over clear HTTP:

X-Forwarded-Proto: http

It’s up to your application to use this information to generate the right responses: this may require some code update.
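
On the application side, the change is typically small. A hypothetical helper (names are illustrative) that builds redirect URLs from the forwarded scheme could look like this:

```python
def redirect_url(headers, host, path):
    # Trust the load-balancer's X-Forwarded-Proto header when present,
    # otherwise fall back to plain http
    scheme = headers.get("X-Forwarded-Proto", "http")
    return f"{scheme}://{host}{path}"

print(redirect_url({"X-Forwarded-Proto": "https"}, "www.domain.com", "/login.jsp"))
# https://www.domain.com/login.jsp
```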

HTTP header rewriting on the fly


Another way is to rewrite, on the fly, the response set up by the server.
The following configuration line matches the Location header and translates it from http to https if needed, rewriting Location: http://www.domain.com:80/url accordingly:

rspirep ^Location:\ http://(.*):80(.*)  Location:\ https://\1:443\2   if  { ssl_fc }

NOTE: the example above applies to HTTP redirections (Location header), but it can be applied to Set-Cookie as well (or any other header your application may use).
NOTE2: you can only rewrite the response HTTP headers, not the body. So this is not compatible with applications that set up hard-coded links in the HTML content.
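
The rewrite logic itself can be illustrated in Python (a sketch of what the regex does, not of HAProxy internals):

```python
import re

def rewrite_location(header_line):
    # Translate http://host:80/... to https://host:443/... in a Location header
    return re.sub(r"^Location: http://(.*):80(.*)",
                  r"Location: https://\1:443\2", header_line)

print(rewrite_location("Location: http://www.domain.com:80/login.jsp"))
# Location: https://www.domain.com:443/login.jsp
```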

SSL Bridging


In some cases, the application is not compatible at all with SSL offloading (even with the tricks above) and we must use a ciphered connection to the server, while still needing to perform cookie-based persistence, content switching, etc…
This is called SSL bridging, and can also be seen as a man-in-the-middle setup.

In the ALOHA, there is nothing special to do: just configure SSL offloading as usual and add the keyword ssl to your server directive.

Using this method, you can choose a light cipher and key between the Load-Balancer and the server, and still use the Load-Balancer’s advanced SSL features with a stronger key and cipher on the client side.
The application server won’t notice anything and the Load-Balancer can still perform Layer 7 processing.
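
A minimal sketch of such a bridged backend (server names and IPs are assumptions; verify none disables certificate verification and is shown only to keep the example short):

```
backend bk_app_bridged
  mode http
  # layer 7 persistence still works: HAProxy sees clear HTTP on both sides
  cookie SRV insert indirect nocache
  server app1 10.0.0.11:443 ssl verify none check cookie a1
  server app2 10.0.0.12:443 ssl verify none check cookie a2
```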

Links

Microsoft Exchange 2013 architectures

Introduction to Microsoft Exchange 2013

There are 2 types of server in Exchange 2013:
  * Mailbox server
  * Client Access server

Definitions from Microsoft Technet website:
  * The Client Access server provides authentication, limited redirection, and proxy services, and offers all the usual client access protocols: HTTP, POP and IMAP, and SMTP. The Client Access server, a thin and stateless server, doesn’t do any data rendering. There’s never anything queued or stored on the Client Access server.
  * The Mailbox server includes all the traditional server components found in Exchange 2010: the Client Access protocols, the Transport service, the Mailbox databases, and Unified Messaging (the Client Access server redirects SIP traffic generated from incoming calls to the Mailbox server). The Mailbox server handles all activity for the active mailboxes on that server.

High availability in Exchange 2013


Mailbox server high availability is achieved by the creation of a DAG (Database Availability Group). All the redundancy is achieved within the DAG, and the Client Access server retrieves the user session through the DAG automatically when a failover occurs.

Client Access servers must be Load-Balanced to achieve high-availability. There are many ways to load-balance them:
  * DNS round-robin (come on, we’re in 2013, stop mentioning this option!!!!)
  * NLB (Microsoft doesn’t recommend it)
  * Load-Balancers (also known as Hardware Load-Balancers or HLB): this solution is recommended since it brings smart load-balancing and advanced health checking.

Performance in Exchange 2013


The key to performance in Exchange 2013 is obviously the Mailbox server. If it slows down, all the users are affected. So be careful when designing your Exchange platform, and try to create multiple DAGs with a single master per DAG. If one Mailbox server fails or is in maintenance, you’ll be in degraded mode (which does not mean degraded performance), where one Mailbox server is master of 2 DAGs. Disk IO may also matter here 😉

Concerning the Client Access servers, it’s easy to temporarily disable one of them if it slows down and route connections to the other Client Access server(s).

Exchange 2013 Architecture


Your architecture should meet your requirements:
  * if you don’t need high availability or performance, then a single server with both the Client Access and Mailbox roles is sufficient
  * if you need high availability and have a moderate number of users, a couple of servers, each hosting both the Client Access and Mailbox roles, is enough: a DAG ensures high availability of the Mailbox servers and a Load-Balancer ensures high availability of the Client Access servers
  * if you need high availability and performance for a huge number of connections, then use multiple Mailbox servers, multiple DAGs, multiple Client Access servers and a couple of Load-Balancers

Note that the path from the first architecture to the third one is doable with almost no effort, thanks to Exchange 2013.

Client Access server Load-Balancing in Exchange 2013


Using a Load-Balancer to balance user connections to the Client Access servers allows multiple types of architectures. The current article mainly focuses on these architectures: we do provide load-balancers 😉
If you need help from Exchange architects, we have partners who can help you with the installation, setup and tuning of the whole architecture.

Load-Balancing Client Access servers in Exchange 2013

First of all, bear in mind that Client Access servers in Exchange 2013 are stateless. This is very important from a load-balancing point of view, because it makes the configuration much simpler and more scalable.
Basically, I can see 3 main types of architectures for Client Access server load-balancing in Exchange 2013:
  1. Reverse-proxy mode (also known as source NAT)
  2. Layer 4 NAT mode (destination NAT)
  3. Layer 4 DSR mode (Direct Server Return, also known as Gateway)

Each type of architecture has pros and cons and may have impact on your global infrastructure. In the next chapter, I’ll provide details for each of them.

To illustrate the article, I’ll use a simple Exchange 2013 configuration:
  * 2 Exchange 2013 servers, each hosting both the Client Access and Mailbox roles
  * 1 ALOHA Load-Balancer
  * 1 client (laptop)
  * all the Exchange services hosted on a single hostname (e.g. mail.domain.com) (this could work as well with one hostname per service)

Reverse-proxy mode (also known as source NAT)


Diagram
The diagram below shows how things work with a Load-balancer in Reverse-proxy:
  1. the client establishes a connection onto the Load-Balancer
  2. the Load-Balancer chooses a Client Access server and establishes the connection on it
  3. the client and the CAS server talk to each other through the Load-Balancer
Basically, the Load-Balancer breaks the TCP connection between the client and the server: there are 2 distinct TCP connections established.

exchange_2013_reverse_proxy

Advantages of Reverse-proxy mode
  * not intrusive: no infrastructure changes neither server changes
  * ability to perform SSL offloading and layer 7 persistence
  * client, Client Access servers and load-balancers can be located anywhere in the infrastructure (same subnet or different subnet)
  * can be used in a DMZ
  * A single interface is required on the Load-Balancer

Limitations of Reverse-proxy mode
  * the servers don’t see the client IPs (only the load-balancer’s)
  * maximum of ~65K concurrent connections to the Client Access farm (without configuration tricks)

Configuration
To perform this type of configuration, you can use the LB Admin tab and do it through the GUI or just copy/paste the following lines into the LB Layer 7 tab:

######## Default values for all entries till next defaults section
defaults
  option  dontlognull             # Do not log connections with no requests
  option  redispatch              # Try another server in case of connection failure
  option  contstats               # Enable continuous traffic statistics updates
  retries 3                       # Try to connect up to 3 times in case of failure 
  backlog 10000                   # Size of SYN backlog queue
  mode tcp                        #alctl: protocol analyser
  log global                      #alctl: log activation
  option tcplog                   #alctl: log format
  timeout client  300s            #alctl: client inactivity timeout
  timeout server  300s            #alctl: server inactivity timeout
  timeout connect   5s            # 5 seconds max to connect or to stay in queue
  default-server inter 3s rise 2 fall 3   #alctl: default check parameters

frontend ft_exchange
  bind 10.0.0.9:443 name https    #alctl: listener https configuration.
  maxconn 10000                   #alctl: max connections (dep. on ALOHA capacity)
  default_backend bk_exchange     #alctl: default farm to use

backend bk_exchange
  balance roundrobin              #alctl: load balancing algorithm
  server exchange1 10.0.0.13:443 check
  server exchange2 10.0.0.14:443 check

(update IPs for your infrastructure and update the DNS name with the IP configured on the frontend bind line.)

Layer 4 NAT mode (destination NAT)


Diagram
The diagram below shows how things work with a Load-balancer in Layer 4 NAT mode:
  1. the client establishes a connection to the Client Access server through the Load-Balancer
  2. the client and the CAS server talk to each other through the Load-Balancer
Basically, the Load-Balancer acts as a router between the client and the server: a single TCP connection is established directly between the client and the Client Access server, and the load-balancer just forwards packets between the two.
exchange_2013_layer4_nat

Advantages of Layer 4 NAT mode
  * no connection limit
  * Client Access servers can see the client IP address

Limitations of Layer 4 NAT mode
  * infrastructure intrusive: the server default gateway must be the load-balancer
  * clients and Client Access servers can’t be in the same subnet
  * no SSL acceleration, no advanced persistence
  * Must use 2 interfaces on the Load-Balancer (or VLAN interface)

Configuration
To perform this type of configuration, you can use the LB Admin tab and do it through the GUI or just copy/paste the following lines into the LB Layer 4 tab:

director exchange 10.0.1.9:443 TCP
  balance roundrobin                               #alctl: load balancing algorithm
  mode nat                                         #alctl: forwarding mode
  check interval 10 timeout 2                      #alctl: check parameters
  option tcpcheck                                  #alctl: adv check parameters
  server exchange1 10.0.0.13:443 weight 10 check   #alctl: server exchange1
  server exchange2 10.0.0.14:443 weight 10 check   #alctl: server exchange2

(update IPs for your infrastructure and update the DNS name with the IP configured on the frontend bind line.)

Layer 4 DSR mode (Direct Server Return, also known as Gateway)


Diagram
The diagram below shows how things work with a Load-balancer in Layer 4 DSR mode:
  1. the client establishes a connection to the Client Access server through the Load-Balancer
  2. the Client Access server acknowledges the connection directly to the client, bypassing the Load-Balancer on the way back. The Client Access server must have the Load-Balancer’s Virtual IP configured on a loopback interface.
  3. the client and the CAS server keep on talking the same way: requests go through the load-balancer and responses go directly from the server to the client.
Basically, the Load-Balancer acts as a router between the client and the server: a single TCP connection is established directly between the client and the Client Access server.
The client and the server can be in the same subnet, as in the diagram, or in two different subnets. In the latter case, the server would use its default gateway (there is no need to route the return traffic through the load-balancer).
exchange_2013_layer4_dsr

Advantages of Layer 4 DSR mode
  * no connection limit
  * Client Access servers can see the client IP address
  * clients and Client Access servers can be in the same subnet
  * A single interface is required on the Load-Balancer

Limitations of Layer 4 DSR mode
  * infrastructure intrusive: the Load-Balancer Virtual IP must be configured on the Client Access server (Loopback).
  * no SSL acceleration, no advanced persistence

Configuration
To perform this type of configuration, you can use the LB Admin tab and do it through the web form, or just copy/paste the following lines into the LB Layer 4 tab:

director exchange 10.0.0.9:443 TCP
  balance roundrobin                               #alctl: load balancing algorithm
  mode gateway                                     #alctl: forwarding mode
  check interval 10 timeout 2                      #alctl: check parameters
  option tcpcheck                                  #alctl: adv check parameters
  server exchange1 10.0.0.13:443 weight 10 check   #alctl: server exchange1
  server exchange2 10.0.0.14:443 weight 10 check   #alctl: server exchange2

Related links

Links

Microsoft Exchange 2013 load-balancing with HAProxy

Introduction to Microsoft Exchange server 2013

Note: I’ll introduce exchange from a Load-Balancing point of view. For a detailed information about exchange history and new features, please read the pages linked in the Related links at the bottom of this article.

Exchange is the name of the Microsoft software which provides a business-class mail / calendar / contact platform. It’s a venerable product, starting with version 4.0 back in 1996…
Each new version of Exchange Server brings new features, both expanding Exchange’s perimeter and making it easier to deploy and administrate.

Exchange 2007


Introduction of Outlook Anywhere, AKA RPC over HTTP: it allows remote users to connect to an Exchange 2007 platform using the HTTPS protocol.

Exchange 2010


For example, Exchange 2010 introduced CAS arrays, making client-side services highly available and scalable. The DAG also brings mail database high availability. All the client access services required persistence: a user had to be stuck to a single CAS server.
Exchange 2010 also introduced a “layer” between the MAPI RPC clients and the mailbox servers (through the CAS servers), making the failover of a database transparent.

Exchange 2013


Exchange 2013 improves further on the changes brought by Exchange 2010: the CAS servers are now stateless and independent from each other (no arrays anymore), so no persistence is required anymore.
In Exchange 2013, the raw TCP MAPI RPC services have disappeared and have definitively been replaced by Outlook Anywhere (RPC over HTTP).
Last but not least, SSL offloading does not seem to be allowed for now.

Load-Balancing Microsoft Exchange 2013

First of all, I’m pleased to announce that HAProxy and the ALOHA Load-Balancer are both able to load-balance Exchange 2013 (as well as 2010).

Exchange 2013 Services


As explained in the introduction, the table below summarizes the TCP ports and services involved in an Exchange 2013 platform:

  TCP Port      Protocol         CAS Service name (abbreviation)
  443           HTTPS            Autodiscover (AS)
                                 Exchange ActiveSync (EAS)
                                 Exchange Control Panel (ECP)
                                 Offline Address Book (OAB)
                                 Outlook Anywhere (OA)
                                 Outlook Web App (OWA)
  110 and 995   POP3 / POP3s     POP3
  143 and 993   IMAP4 / IMAP4s   IMAP4

Diagram

There are two main types of architecture doable:
1. All the services are hosted on a single host name
2. Each service owns its own host name

Exchange 2013 and the Single host name diagram


exchange_2013_single_hostname

Exchange 2013 and the Multiple host name diagram


exchange_2013_multiple_hostnames

Configuration

There are two types of configuration with the ALOHA:
  * Layer 4 mode: the load-balancer acts as a router; infrastructure intrusive, able to manage millions of connections
  * Layer 7 mode: the load-balancer acts as a reverse-proxy; non-intrusive implementation (source NAT), able to manage thousands of connections and to perform SSL offloading, DDOS protection, advanced persistence, etc…

The present article describes the layer 7 configuration, even though we’re going to use it at layer 4 (mode tcp).

Note that it’s up to you to update your DNS configuration to make the hostname point to your Load-Balancer service Virtual IP.

Template:
Use the configurations below as templates and simply change the IP addresses:
– bind lines: your client-facing service IPs
– server lines: the IPs of your CAS servers (add as many lines as you need)
Once updated, just copy/paste the whole configuration, including the defaults section, to the bottom of your ALOHA Layer 7 configuration.

Load-Balancing Exchange 2013 services hosted on a Single host name

######## Default values for all entries till next defaults section
defaults
  option  dontlognull             # Do not log connections with no requests
  option  redispatch              # Try another server in case of connection failure
  option  contstats               # Enable continuous traffic statistics updates
  retries 3                       # Try to connect up to 3 times in case of failure 
  timeout connect 5s              # 5 seconds max to connect or to stay in queue
  timeout http-keep-alive 1s      # 1 second max for the client to post next request
  timeout http-request 15s        # 15 seconds max for the client to send a request
  timeout queue 30s               # 30 seconds max queued on load balancer
  timeout tarpit 1m               # tarpit hold time
  backlog 10000                   # Size of SYN backlog queue

  balance roundrobin                      #alctl: load balancing algorithm
  mode tcp                                #alctl: protocol analyser
  option tcplog                           #alctl: log format
  log global                              #alctl: log activation
  timeout client 300s                     #alctl: client inactivity timeout
  timeout server 300s                     #alctl: server inactivity timeout
  default-server inter 3s rise 2 fall 3   #alctl: default check parameters

frontend ft_exchange_tcp
  bind 10.0.0.9:443 name https          #alctl: listener https configuration.
  maxconn 10000                         #alctl: connection max (depends on capacity)
  default_backend bk_exchange_tcp       #alctl: default farm to use

backend bk_exchange_tcp
  server cas1 10.0.0.15:443 maxconn 10000 check    #alctl: server cas1 configuration.
  server cas2 10.0.0.16:443 maxconn 10000 check    #alctl: server cas2 configuration.

And the result (LB Admin tab):
– Virtual Service:
aloha_exchange2013_single_domain_virtual_services
– Server Farm:
aloha_exchange2013_single_domain_server_farm

Load-Balancing Exchange 2013 services hosted on Multiple host names

######## Default values for all entries till next defaults section
defaults
  option  dontlognull             # Do not log connections with no requests
  option  redispatch              # Try another server in case of connection failure
  option  contstats               # Enable continuous traffic statistics updates
  retries 3                       # Try to connect up to 3 times in case of failure 
  timeout connect 5s              # 5 seconds max to connect or to stay in queue
  timeout http-keep-alive 1s      # 1 second max for the client to post next request
  timeout http-request 15s        # 15 seconds max for the client to send a request
  timeout queue 30s               # 30 seconds max queued on load balancer
  timeout tarpit 1m               # tarpit hold time
  backlog 10000                   # Size of SYN backlog queue

  balance roundrobin                      #alctl: load balancing algorithm
  mode tcp                                #alctl: protocol analyser
  option tcplog                           #alctl: log format
  log global                              #alctl: log activation
  timeout client 300s                     #alctl: client inactivity timeout
  timeout server 300s                     #alctl: server inactivity timeout
  default-server inter 3s rise 2 fall 3   #alctl: default check parameters

frontend ft_exchange_tcp
  bind 10.0.0.5:443  name as        #alctl: listener: autodiscover service
  bind 10.0.0.6:443  name eas       #alctl: listener: Exchange ActiveSync service
  bind 10.0.0.7:443  name ecp       #alctl: listener: Exchange Control Panel service
  bind 10.0.0.8:443  name ews       #alctl: listener: Exchange Web Service service
  bind 10.0.0.9:443  name oa        #alctl: listener: Outlook Anywhere service
  maxconn 10000                     #alctl: connection max (depends on capacity)
  default_backend bk_exchange_tcp   #alctl: default farm to use

backend bk_exchange_tcp
  server cas1 10.0.0.15:443 maxconn 10000 check   #alctl: server cas1 configuration.
  server cas2 10.0.0.16:443 maxconn 10000 check   #alctl: server cas2 configuration.

And the result (LB Admin tab):
– Virtual Service:
aloha_exchange2013_multiple_domain_virtual_services
– Server Farm:
aloha_exchange2013_multiple_domain_server_farm
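As a side note, if dedicating one IP address per host name is not practical, a single mode tcp listener could route on the TLS SNI extension instead, using HAProxy’s req.ssl_sni fetch. A sketch, assuming SNI-capable clients (the example.com host name is an assumption):

```
frontend ft_exchange_sni
  bind 10.0.0.5:443 name all
  tcp-request inspect-delay 5s
  # wait for a TLS ClientHello before routing
  tcp-request content accept if { req.ssl_hello_type 1 }
  use_backend bk_exchange_tcp if { req.ssl_sni -i autodiscover.example.com }
  default_backend bk_exchange_tcp
```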

Conclusion


This is a very basic and straightforward configuration. It could be made much more complete, with per-service timeouts, better health checking, DDOS protection, etc…
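For example, better health checking could rely on the per-protocol healthcheck.htm pages that Exchange 2013 Managed Availability exposes on each CAS. A sketch for the backend above, assuming a recent enough HAProxy for check-ssl (1.5+); verify none skips certificate validation for the check, so adjust it to your policy:

```
backend bk_exchange_tcp
  # probe the OWA health page served by Managed Availability
  option httpchk GET /owa/healthcheck.htm
  server cas1 10.0.0.15:443 maxconn 10000 check check-ssl verify none
  server cas2 10.0.0.16:443 maxconn 10000 check check-ssl verify none
```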
I may write later articles about Exchange 2013 Load-Balancing with our products.

Related links

Exchange 2013 installation steps
Exchange 2013 first configuration
Microsoft Exchange Server (Wikipedia)
Microsoft Exchange official webpage
