Category Archives: Aloha

Howto transparent proxying and binding with HAProxy and ALOHA Load-Balancer

Transparent Proxy

HAProxy works as a reverse proxy, which means it maintains 2 connections when allowing a client to cross it:
  – 1 connection between HAProxy and the client
  – 1 connection between HAProxy and the server

HAProxy then manipulates buffers between these two connections.
One drawback of this mode is that HAProxy lets the kernel establish the connection to the server, and the kernel uses a local IP address to do it.
Because of this, HAProxy “hides” the client IP behind its own: this can be an issue in some cases.
Here comes the transparent proxy mode: HAProxy can be configured to spoof the client IP address when establishing the TCP connection to the server. That way, the server thinks the connection comes directly from the client (of course, the server must answer back through HAProxy and not to the client directly, otherwise it can’t work: the client would get an acknowledgement from the server’s IP while it established the connection on HAProxy’s IP).

Transparent binding

By default, when you want HAProxy to receive traffic, you have to tell it to bind an IP address and a port.
The IP address must exist on the operating system (unless you have set the sysctl net.ipv4.ip_nonlocal_bind) and the OS must announce its availability to the other devices on the network through the ARP protocol.
Well, in some cases we want HAProxy to be able to catch traffic on the fly without configuring any IP address, VRRP or anything else…
This is where transparent binding comes in: HAProxy can be configured to catch traffic on the fly even if the destination IP address is not configured on the server.
These IP addresses will never be pingable, but they will deliver the services configured in HAProxy.

HAProxy and the Linux Kernel

Unfortunately, HAProxy can’t do transparent binding or proxying alone: it relies on a Linux kernel compiled and tuned for this purpose.
Below, I’ll explain how to do this on a standard Linux distribution.
Here is the check list to meet:
  1. appropriate HAProxy compilation option
  2. appropriate Linux Kernel compilation option
  3. sysctl settings
  4. iptables rules
  5. ip route rules
  6. HAProxy configuration

HAProxy compilation requirements


First of all, HAProxy must be compiled with the option TPROXY enabled.
It is enabled by default when you use the target LINUX26 or LINUX2628.

Linux Kernel requirements

You have to ensure your kernel has been compiled with the following options:
  – CONFIG_NETFILTER_TPROXY
  – CONFIG_NETFILTER_XT_TARGET_TPROXY

Of course, iptables must be enabled in your kernel as well :-)

sysctl settings

The following sysctls must be enabled:
  – net.ipv4.ip_forward
  – net.ipv4.ip_nonlocal_bind
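To make these settings persistent across reboots, the usual approach is to set them in /etc/sysctl.conf and reload with sysctl -p (file location may vary slightly per distribution):

```
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
```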

iptables rules


You must setup the following iptables rules:

iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

The purpose is to mark packets which match a socket bound locally (by HAProxy).

IP route rules


Then, tell the Operating System to forward packets marked by iptables to the loopback where HAProxy can catch them:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

HAProxy configuration


Finally, you can configure HAProxy.
  * Transparent binding can be configured like this:

[...]
frontend ft_application
  bind 1.1.1.1:80 transparent
[...]

  * Transparent proxying can be configured like this:

[...]
backend bk_application
  source 0.0.0.0 usesrc clientip
[...]
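Putting both pieces together, a minimal sketch of a fully transparent setup could look like the following (the frontend address, backend name and server address are illustrative):

```
frontend ft_application
  bind 1.1.1.1:80 transparent
  default_backend bk_application

backend bk_application
  source 0.0.0.0 usesrc clientip
  server srv1 10.0.0.11:80 check
```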

Transparent mode in the ALOHA Load-Balancer


Now, the same steps in the ALOHA Load-Balancer, which is an HAProxy-based load-balancing appliance:
  1-5. not required: the ALOHA kernel is already deeply tuned for this purpose
  6. HAProxy configuration

LB Admin tab (AKA click mode)

  * Transparent binding can be configured like this, when editing a Frontend listener:
[screenshot: frontend listener with the transparent binding option]

  * Transparent proxying can be configured like this when editing a farm:
[screenshot: farm with the transparent proxy option]

LB Layer 7 tab (vi in a browser mode)


  * Transparent binding can be configured like this:

[...]
frontend ft_application
  bind 1.1.1.1:80 transparent
[...]

  * Transparent proxying can be configured like this:

[...]
backend bk_application
  source 0.0.0.0 usesrc clientip
[...]


SSL Client certificate information in HTTP headers and logs

HAProxy and SSL

HAProxy has many nice SSL features, even though SSL support was introduced in it only recently.

One of those features is client-side certificate management, which has already been discussed on this blog.
One thing was missing from that article, since HAProxy did not have the feature when I first wrote it: the capability of inserting client certificate information into HTTP headers and reporting it in the log line as well.

Fortunately, the devs at HAProxy Technologies keep on improving HAProxy, and it is now available (well, it has been for some time now, but I did not have time to write the article until today).

OpenSSL commands to generate SSL certificates

Well, just take the script from the HAProxy Technologies GitHub, follow the instructions and you’ll have an environment set up in a few seconds.
Here is the script: https://github.com/exceliance/haproxy/tree/master/blog/ssl_client_certificate_management_at_application_level

Configuration

The configuration below shows a frontend and a backend with SSL offloading and with insertion of client certificate information into HTTP headers. As you can see, this is pretty straightforward.

frontend ft_www
 bind 127.0.0.1:8080 name http
 bind 127.0.0.1:8081 name https ssl crt ./server.pem ca-file ./ca.crt verify required
 log-format "%ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]} %{+Q}r"
 http-request set-header X-SSL                  %[ssl_fc]
 http-request set-header X-SSL-Client-Verify    %[ssl_c_verify]
 http-request set-header X-SSL-Client-DN        %{+Q}[ssl_c_s_dn]
 http-request set-header X-SSL-Client-CN        %{+Q}[ssl_c_s_dn(cn)]
 http-request set-header X-SSL-Issuer           %{+Q}[ssl_c_i_dn]
 http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore]
 http-request set-header X-SSL-Client-NotAfter  %{+Q}[ssl_c_notafter]
 default_backend bk_www

backend bk_www
 cookie SRVID insert nocache
 server server1 127.0.0.1:8088 maxconn 1

To observe the result, I just faked a server using netcat and observed the headers sent by HAProxy:

X-SSL: 1
X-SSL-Client-Verify: 0
X-SSL-Client-DN: "/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=client1/emailAddress=ba@haproxy.com"
X-SSL-Client-CN: "client1"
X-SSL-Issuer: "/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=haproxy.com/emailAddress=ba@haproxy.com"
X-SSL-Client-NotBefore: "130613144555Z"
X-SSL-Client-NotAfter: "140613144555Z"

And the associated log line which has been generated:

Jun 13 18:09:49 localhost haproxy[32385]: 127.0.0.1:38849 [13/Jun/2013:18:09:45.277] ft_www~ bk_www/server1 
1643/0/1/-1/4645 504 194 - - sHNN 0/0/0/0/0 0/0 
{0,"/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=client1/emailAddress=ba@haproxy.com",
"/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=haproxy.com/emailAddress=ba@haproxy.com"} "GET /" 

NOTE: I have inserted a few CRLF to make it easily readable.

Now, my HAProxy can deliver the following information to my web server:
  * ssl_fc: did the client use a secured connection (1) or not (0)
  * ssl_c_verify: the status code of the verification of the client certificate (0 means success)
  * ssl_c_s_dn: the full Distinguished Name of the certificate presented by the client
  * ssl_c_s_dn(cn): same as above, but extracting only the Common Name
  * ssl_c_i_dn: the full Distinguished Name of the issuer of the certificate presented by the client
  * ssl_c_notbefore: the start date of the certificate presented by the client, as a formatted string YYMMDDhhmmss
  * ssl_c_notafter: the end date of the certificate presented by the client, as a formatted string YYMMDDhhmmss
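Beyond forwarding this information, the same fetches can gate access at the edge. For example, if the bind line used verify optional instead of verify required (so that the handshake completes even with a bad certificate), a sketch rejecting clients whose certificate failed verification could be:

```
# with "verify optional" on the bind line, deny requests whose
# client certificate did not verify successfully (ssl_c_verify != 0)
http-request deny unless { ssl_c_verify 0 }
```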


Client IP persistence OR source IP hash load-balancing?

Client or Source IP ???


Well, these are roughly the same! It depends on people, environments, products, etc. I may use both terms in this article, but be aware that both point to the IP address used to connect to the service being load-balanced.

Load-Balancing and Stickiness


Load-Balancing is the ability to spread requests among a pool of servers delivering the same service. By definition, it means that any request can be sent to any server in the pool.
Some applications require stickiness between a client and a server: all the requests from a client must be sent to the same server. Otherwise, the application session may be broken, which may have a negative impact on the client.

Source IP stickiness


There are many ways to stick a user to a server, which has already been discussed on this blog (read load balancing, affinity, persistence, sticky sessions: what you need to know), and more articles may follow.

That said, sometimes, the only information we can “rely” on to perform stickiness is the client (or source) IP address.
Note this is not optimal because:
  * many clients can be “hidden” behind a single IP address (Firewall, proxy, etc…)
  * a client can change its IP address during the session
  * a client can use multiple IP addresses
  * etc…

Performing source IP affinity


There are two ways of performing source IP affinity:
  1. Using a dedicated load-balancing algorithm: a hash on the source IP
  2. Using a stick table in memory (and a roundrobin load-balancing algorithm)

Actually, the main purpose of this article is to introduce both methods, which are quite often misunderstood, and to show the pros and cons of each, so people can make the right decision when configuring their Load-Balancer.

Source IP hash load-balancing algorithm


This algorithm is deterministic: as long as no element involved in the hash computation changes, the result stays the same. Two devices can apply the same hash and hence load-balance the same way, making load-balancer failover transparent.
A hash function is applied to the source IP address of the incoming request. The hash must take into account the number of servers and each server’s weight.
The following events can make the hash change and so may redirect traffic differently over the time:
  * a server in the pool goes down
  * a server in the pool goes up
  * a server weight change

The main issue with the source IP hash load-balancing algorithm is that each change can redirect EVERYBODY to a different server!
That’s why some good load-balancers implement a consistent hashing method, which ensures that if a server fails, for example, only the clients stuck to that server are redirected.
The counterpart of consistent hashing is that it doesn’t provide a perfectly even distribution: in a farm of 4 servers, some may receive more clients than others.
Note that when a failed server comes back, the users “stuck” to it (as determined by the hash) will be redirected back to it.
There is no overhead in terms of CPU or memory when using such an algorithm.
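To see why a plain (non-consistent) hash reshuffles clients when the server count changes, here is a toy shell sketch. This is NOT HAProxy’s actual hash function, just an illustration using cksum and a modulo; the IPs and server names are made up:

```shell
# Toy source-IP hashing: server = hash(ip) mod N
pick() {
  # CRC of the IP string, then modulo the number of servers
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo "srv$(( h % $2 ))"
}

# Compare the mapping with 4 servers vs 3 servers: most IPs move
for ip in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6; do
  printf '%s: N=4 -> %s, N=3 -> %s\n' "$ip" "$(pick "$ip" 4)" "$(pick "$ip" 3)"
done
```

Running it shows how many source IPs land on a different server once N changes, which is exactly what consistent hashing is designed to limit.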

Configuration example in HAProxy or in the ALOHA Load-Balancer:

balance source
hash-type consistent

Source IP persistence using a stick-table


A table in memory is created by the Load-balancer to store the source IP address and the server assigned to it from the pool.
We can rely on any non-deterministic load-balancing algorithm, such as roundrobin or leastconn (it usually depends on the type of application you’re load-balancing).
Once a client is stuck to a server, it stays stuck until the entry in the table expires OR the server fails.
There is a memory overhead, to store the stickiness information. In HAProxy, the overhead is pretty low: about 40MB for 1,000,000 IPv4 addresses.
One main advantage of using a stick table is that when a failed server comes back, no existing session is redirected to it: only new incoming IPs can reach it, so there is no impact on users.
It is also possible to synchronize the tables in memory between multiple HAProxy or ALOHA Load-Balancers, making a load-balancer failover transparent.

Configuration example in HAProxy or in the ALOHA Load-Balancer:

stick-table type ip size 1m expire 1h
stick on src
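In context, these two lines live inside a backend; a minimal sketch (backend name, algorithm and server addresses are illustrative):

```
backend bk_application
  balance roundrobin
  stick-table type ip size 1m expire 1h
  stick on src
  server srv1 10.0.0.11:80 check
  server srv2 10.0.0.12:80 check
```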


Microsoft Remote Desktop Services (RDS) Load-Balancing and protection

RDS, RDP, TSE, remoteapp

Whatever you call it, it’s the remote desktop protocol from Microsoft, which has been renamed several times during the product’s life.
Basically, it allows users to connect to a remote server and run an application or a full desktop remotely.

It’s more and more fashionable nowadays, because it’s easier for IT teams to patch and upgrade a single server hosting many remote desktops than to maintain many laptops individually.

VDI or Virtual Desktop Infrastructure


A Virtual Desktop Infrastructure is an architecture purposely built to deliver desktops to end users. It can use different types of products, such as Citrix or VMWare View.
Of course, you could build yours using Microsoft RDS along with the ALOHA Load-Balancer, to improve the scalability and security of such a platform.

Diagram

In a previous article, we already demonstrated how well suited the ALOHA Load-Balancer is to smartly load-balance Microsoft RDS.
Let’s start again from the same platform (I know, I’m a lazy engineer!):
[diagram: RDP infrastructure]

Windows Broker


When using the ALOHA Load-Balancer, you don’t need a Windows broker anymore: the ALOHA is able to direct a new user to the least loaded server, as well as resume a broken session.
When an ALOHA outage occurs and a failover from the master to the slave happens, all the sessions are resumed: the ALOHAs can share session affinity within a cluster, ensuring nobody notices the issue.

Making an RDS platform more secure


It’s easy to load-balance the RDS protocol using the mstshash cookie, since Microsoft documented its protocol: it allows advanced persistence. Basically, each user owns a dedicated mstshash cookie value.

That said, being able to detect weird behavior or service abusers would be even better.

Actually, the ALOHA Load-Balancer allows you to monitor each source IP address and, even better, each mstshash cookie value.
For any such object, we can monitor the connection rate over a period of time, the number of current connections, the bytes in and out (over a period of time), etc.

Now, it’s up to you to protect your VDI infrastructure using the ALOHA Load-Balancer. For example:
  * counting the number of active connections per user (mstshash cookie)
  * counting the number of connection attempts over the last 5 minutes per user

The above will allow you to detect the following problems / misbehaviors on a VDI platform:
  * a user abnormally connected too many times
  * a user trying to connect too quickly, or maybe somebody trying to discover someone else’s password :-)
  * etc.

Configuration

The configuration below explains how to create a highly-available VDI infrastructure with advanced persistence and improved security:

# for High-Availability purpose
peers aloha
  peer aloha1 10.0.0.16:1024
  peer aloha2 10.0.0.17:1024

# RDP / TSE configuration
frontend ft_rdp
  bind 10.0.0.18:3389 name rdp
  mode tcp
  timeout client 1h
  option tcplog
  option log global
  option tcpka
  # Protection 1
  # wait up to 5s for the mstshash cookie, and reject the client if none
  tcp-request inspect-delay 5s
  tcp-request content accept if RDP_COOKIE

  default_backend bk_rdp

backend bk_rdp
  mode tcp
  balance roundrobin
  # Options
  timeout server 1h
  timeout connect 4s
  option redispatch
  option tcpka
  option tcplog
  option log global
  # sticky persistence + monitoring based on mstshash Cookie
  #    established connection
  #    connection tries over the last minute
  stick-table type string len 32 size 10k expire 1d peers aloha store conn_cur,conn_rate(1m)
  stick on rdp_cookie(mstshash)
  # track each cookie value in the table, so the sc1 counters used below are populated
  tcp-request content track-sc1 rdp_cookie(mstshash) table bk_rdp

  # Protection 2
  # Each user is supposed to get a single active connection at a time, block the second one
  tcp-request content reject if { sc1_conn_cur ge 2 }

  # Protection 3
  # if a user tried to get connected at least 10 times over the last minute, 
  # it could be a brute force
  tcp-request content reject if { sc1_conn_rate ge 10 }

  # Server farm
  server tse1 10.0.0.23:3389 weight 10 check inter 2s rise 2 fall 3
  server tse2 10.0.0.24:3389 weight 10 check inter 2s rise 2 fall 3
  server tse3 10.0.0.25:3389 weight 10 check inter 2s rise 2 fall 3
  server tse4 10.0.0.26:3389 weight 10 check inter 2s rise 2 fall 3

Troubleshooting RDS connection issues with the ALOHA


The command below, when run from your ALOHA Load-Balancer CLI, will print out (and refresh every second) the content of the stick-table, with the following information:
  * table name, table size and number of current entries in the table
  * mstshash cookie value as sent by RDS client: field key
  * affected server id: field server_id
  * monitored counter and values, fields conn_rate and conn_cur

while true ; do clear ; echo show table bk_rdp | socat /tmp/haproxy.sock - ; sleep 1 ; done

stick-table output in a normal case


Below, an example of the output when everything goes well:

# table: bk_rdp, type: string, size:20480, used:2
0x912f84: key=Administrator use=1 exp=0 server_id=0 conn_rate(60000)=1 conn_cur=1
0x91ba14: key=XLC\Admin use=1 exp=0 server_id=1 conn_rate(60000)=1 conn_cur=1

stick-table output tracking a RDS abuser

Below, an example of the output when somebody tries many connections to discover someone’s password:

# table: bk_rdp, type: string, size:20480, used:2
0x912f84: key=Administrator use=1 exp=0 server_id=0 conn_rate(60000)=1 conn_cur=1
0x91ba14: key=XLC\Admin use=1 exp=0 server_id=1 conn_rate(60000)=1 conn_cur=1
0x91ca12: key=XLC\user1 use=1 exp=0 server_id=2 conn_rate(60000)=15 conn_cur=0

NOTE: in this case, the client has been blocked and the user user1 from the domain XLC can’t connect for 1 minute (the conn_rate monitoring period). I agree, this type of protection can lead to DOSing your own users…
We can easily configure the load-balancer to track the source IP connection rate as well: if a single source IP tries to connect too often to our RDS farm, then we can block it before it can access the service (this type of attack would only succeed if the user is not already connected on the VDI platform).
Combining both the source IP and the mstshash cookie allows us to improve security on the VDI platform.

Securing a RDS platform combining both mstshash cookie and source IP address


The configuration below improves the protection by combining the source IP address and the RDS cookie.
The purpose is to block an abuser at a lower layer (IP), to prevent him from making the Load-Balancer blacklist a regular user, even temporarily.

# for High-Availability purpose
peers aloha
  peer aloha1 10.0.0.16:1024
  peer aloha2 10.0.0.17:1024

# RDP / TSE configuration
frontend ft_rdp
  mode tcp
  bind 10.0.0.18:3389 name rdp
  timeout client 1h
  option tcpka
  log global
  option tcplog
  default_backend bk_rdp

backend bk_rdp
  mode tcp
  balance roundrobin

  # Options
  timeout server 1h
  timeout connect 4s
  option redispatch
  option tcpka
  log global
  option tcplog

  # sticky persistence + monitoring based on mstshash Cookie
  #    established connection
  #    connection tries over the last minute
  stick-table type string len 32 size 10k expire 1d peers aloha store conn_cur,conn_rate(1m)
  stick on rdp_cookie(mstshash)

  # Protection 1
  # wait up to 5s for the mstshash cookie, and reject the client if none
  tcp-request inspect-delay 5s
  tcp-request content accept if RDP_COOKIE
  tcp-request content track-sc1 rdp_cookie(mstshash) table bk_rdp
  tcp-request content track-sc2 src                  table sourceip

# IP layer protection
  #  Protection 2
  #   each single IP can have up to 2 connections on the VDI infrastructure
  tcp-request content reject if { sc2_conn_cur ge 2 }

  #  Protection 3
  #   each single IP can try up to 10 connections in a single minute
  tcp-request content reject if { sc2_conn_rate ge 10 }

# RDS / TSE application layer protection
  #  Protection 4
  #   Each user is supposed to get a single active connection at a time, block the second one
  tcp-request content reject if { sc1_conn_cur ge 2 }

  #  Protection 5
  #   if a user tried to get connected at least 10 times over the last minute, 
  #   it could be a brute force
  tcp-request content reject if { sc1_conn_rate ge 10 }

  # Server farm
  server tse1 10.0.0.23:3389 weight 10 check inter 2s rise 2 fall 3
  server tse2 10.0.0.24:3389 weight 10 check inter 2s rise 2 fall 3
  server tse3 10.0.0.25:3389 weight 10 check inter 2s rise 2 fall 3
  server tse4 10.0.0.26:3389 weight 10 check inter 2s rise 2 fall 3

backend sourceip
 stick-table type ip size 20k store conn_cur,conn_rate(1m)

Note that if you have remote sites using your VDI infrastructure, it is also possible to manage a source IP whitelist. The same goes for the mstshash cookies: you can allow some users (like Administrator) to connect multiple times to the VDI infrastructure.

Source IP and mstshash cookie combined protection


To observe the content of both tables (still refreshed every second), just run the command below on your ALOHA CLI:

while true ; do clear ; echo show table bk_rdp | socat /tmp/haproxy.sock - ; echo "=====" ; echo show table sourceip | socat /tmp/haproxy.sock - ; sleep 1 ; done

And the result:

# table: bk_rdp, type: string, size:20480, used:2
0x23f1424: key=Administrator use=1 exp=0 server_id=0 conn_rate(60000)=0 conn_cur=1
0x2402904: key=XLC\Admin use=1 exp=0 server_id=1 conn_rate(60000)=1 conn_cur=1

=====
# table: sourceip, type: ip, size:20480, used:2
0x23f9f70: key=10.0.4.215 use=2 exp=0 conn_rate(60000)=2 conn_cur=2
0x23f1500: key=10.0.3.20 use=1 exp=0 conn_rate(60000)=0 conn_cur=1

Important Note about Microsoft RDS clients


As Mathew Levett stated on LoadBalancer.org’s blog (here is the article: http://blog.loadbalancer.org/microsoft-drops-support-for-mstshash-cookies/), the latest versions of Microsoft RDS clients don’t send the full user information in the mstshash cookie. Worse, sometimes the client doesn’t even send the cookie at all!

In the example we saw above, there were 2 users connected on the infrastructure: the Administrator user was connected from a Linux client using the xfreerdp tool, and we saw the cookie properly in the stick table. The second user, Admin from the Windows domain XLC, had his credentials truncated in the stick table (value: XLC\Admin); he was connected using the Windows 2012 Server Enterprise RDS client.

As lb.org said, hopefully Microsoft will fix this issue. Or maybe there are alternative RDS clients for Windows…

To work around this issue, we can take advantage of the ALOHA: we can fall back to persistence based on the source IP address when there is no mstshash cookie in the client connection information.
Just add the line below, right after stick on rdp_cookie(mstshash):

  stick on src

With such a configuration, the ALOHA Load-Balancer will stick on the mstshash cookie if present, and will fall back to the source IP address otherwise.

Conclusion


Virtual Desktop Infrastructure is a fashionable way of providing end users with access to a company’s applications.
If you don’t use the right product to build your RDS / TSE / RemoteApp infrastructure, you may run into trouble at some point.

The ALOHA Load-Balancer, thanks to its flexible configuration, has many advantages here.

You can freely give the ALOHA Load-Balancer a try: it is available for download from our website, allowing up to 10 connections with no time limitation: Download the ALOHA Load-Balancer


IIS 6.0 appsession cookie and PCI compliance

Synopsis

You’re using HAProxy or the ALOHA Load-Balancer to load-balance IIS 6.0 web applications and you want them to successfully pass PCI compliance tests.
One of the prerequisites is to force the cookie to be “HttpOnly”, in order to tell the browser to use this cookie for HTTP requests only, and to “protect” it from local JavaScript access (used to steal session information).
Unfortunately, IIS 6.0 is not able to set up such cookies. That’s why HAProxy can be used to update the cookie on the fly when it is set by the application server.

Rewriting appsession Cookie with HAProxy

Place the configuration line below in your backend configuration:

rspirep ^Set-Cookie:\ (appsession.*)    Set-Cookie:\ \1;\ HttpOnly

Now, your application is “more” secure… Well, at least, you can successfully pass the PCI compliance tests!


SSL offloading impact on web applications

SSL Offloading

Nowadays, it is common (and convenient) to use the Load-Balancer’s SSL capabilities to encrypt and decrypt traffic between the clients and the web application platform.
Performing SSL at the Load-Balancer layer is called “SSL offloading”, because you offload this processing from your application servers.

Note that SSL offloading is also “marketingly” called SSL acceleration. The only acceleration performed is moving SSL processing from the application servers, which have another heavy task to perform, to another device on the network…
So no acceleration at all: despite some people claiming they accelerate SSL in a dedicated ASIC, they just offload it.

The diagram below shows what happens on an architecture with SSL offloading. We can clearly see that the traffic is encrypted up to the Load-Balancer, then in clear text between the Load-Balancer and the application server:
[diagram: SSL offloading]

Benefits of SSL offloading


Offloading SSL from the application servers to the Load-Balancers has many benefits:
  * it offloads a heavy task from your application servers, letting them focus on the application itself
  * it saves resources on your application servers
  * the Load-Balancers have access to clear HTTP traffic and can perform advanced features such as reverse-proxying, cookie persistence, traffic regulation, etc.
  * when using an ALOHA Load-Balancer (or HAProxy), there are many more features available in the SSL stack than in any web application server

It seems there should only be positive impact when offloading SSL.

Counterparts of SSL offloading


As we saw in the diagram above, the load-balancer hides three important pieces of information from the client connection:
  1. protocol scheme: the client was using HTTPS while the connection to the application server is made over HTTP
  2. connection type: was it ciphered or not? Actually, this is tied to the protocol scheme
  3. destination port: on the load-balancer it is usually 443, while on the application server it is usually 80 (this may vary)

Most web applications use 301/302 responses with a Location header to redirect the client to a page (most of the time after a login or a POST), and also require an application cookie.

So basically, the worst impact is that your application server may not know the client connection information and may not be able to send the right responses: this can totally break the application and prevent it from working.
Fortunately, the ALOHA Load-Balancer or HAProxy can help in such a situation!

Basically, the application could return such responses to the client:

Location: http://www.domain.com/login.jsp

And the client would leave the secured connection…

Tracking issues with the load-balancer


In order to know whether your application supports SSL offloading well, you can configure your ALOHA Load-Balancer to log some server response headers.
Add the configuration below to your application frontend section:

capture response header Location   len 32
capture response header Set-Cookie len 32

Now, in the traffic log, you’ll clearly see whether the application generates responses for HTTP or HTTPS.

NOTE: some other headers may be needed, depending on your application.

Web Applications and SSL offloading


Here we are :-)

There are three possible answers to this situation, detailed below.

Client connection information provided in HTTP header by the Load-Balancer


First of all, the Load-Balancer can provide client-side connection information to the application server through an HTTP header.
The configuration below shows how to insert a header called X-Forwarded-Proto containing the scheme used by the client.
To be added in your backend section:

http-request set-header X-Forwarded-Proto https if  { ssl_fc }
http-request set-header X-Forwarded-Proto http  if !{ ssl_fc }

Now, the ALOHA Load-Balancer will insert the following header when the connection is made over SSL:

X-Forwarded-Proto: https

and when performed over clear HTTP:

X-Forwarded-Proto: http

It’s up to your application to use this information to send the right responses: this may require some code changes.

HTTP header rewriting on the fly


Another way is to rewrite, on the fly, the responses set up by the server.
The following configuration line matches a Location header such as Location: http://www.domain.com:80/url and translates it from http to https when needed:

rspirep ^Location:\ http://(.*):80(.*)  Location:\ https://\1:443\2   if  { ssl_fc }

NOTE: the example above applies to HTTP redirections (Location header), but it can be applied to Set-Cookie as well (or any other header your application may use).
NOTE2: you can only rewrite the response HTTP headers, not the body. So this is not compatible with applications writing hard-coded links into the HTML content.
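As an example of the Set-Cookie case, here is a hedged sketch that appends the Secure flag to cookies when the client connection was made over SSL (the exact regex depends on what your application actually emits):

```
rspirep ^Set-Cookie:\ (.*)  Set-Cookie:\ \1;\ Secure  if { ssl_fc }
```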

SSL Bridging


In some cases, the application is not compatible with SSL offloading at all (even with the tricks above) and we must use a ciphered connection to the server, while still needing to perform cookie-based persistence, content switching, etc.
This is called SSL bridging, and it can also be seen as a man in the middle.

In the ALOHA, there is nothing special to do: just configure SSL offloading as usual and add the keyword ssl to your server directive.

Using this method, you can choose a lighter cipher and key between the Load-Balancer and the server, while still using the Load-Balancer’s advanced SSL features with a stronger key and cipher on the client side.
The application server won’t notice anything and the Load-Balancer can still perform Layer 7 processing.
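A minimal sketch of such a bridging server line (the backend name and address are illustrative; depending on your HAProxy version you may also need verify none or a ca-file on the server line):

```
backend bk_application
  # re-encrypt traffic towards the server: HAProxy acts as an SSL client
  server srv1 10.0.0.11:443 ssl verify none check
```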


Mitigating the SSL Beast attack using the ALOHA Load-Balancer / HAProxy

The BEAST attack on SSL isn’t new, but we had not yet published an article explaining how to mitigate it with the ALOHA or HAProxy.
First of all, to mitigate this attack, you must use the Load-Balancer as the SSL endpoint; then just append the following parameters to your HAProxy SSL frontend bind line:
  * For the ALOHA Load-Balancer:

bind 10.0.0.9:443 name https ssl crt domain ciphers RC4:HIGH:!aNULL:!MD5

  * For HAProxy OpenSource:

bind 10.0.0.9:443 name https ssl crt /path/to/domain.pem ciphers RC4:HIGH:!aNULL:!MD5

As you may have understood, the most important part is the ciphers RC4:HIGH:!aNULL:!MD5 directive, which forces the cipher used during the connection to be one strong enough to resist the attack.
