Category Archives: security

Scalable WAF protection with HAProxy and Apache with modsecurity

Greetings to Thomas Heil, from our German partner Olanis, for his help with the Apache and modsecurity configuration.

What is a Web Application Firewall (WAF)?

Years ago, it was common to protect networks using a firewall: well-known devices which filter traffic at layers 3 and 4…
Since then, the flaws have moved from the network stack up to the application layer, making the old firewall useless for protection purposes. We then started to deploy IDS or IPS, which try to match attacks at the packet level. These products are usually very hard to tune.
Then the Web Application Firewall arrived: a firewall aware of the layer 7 protocol, and thus more efficient when deciding to block requests (or responses).
This evolution was needed because attacks became more sophisticated: SQL injection, cross-site scripting, etc…

One of the most famous open-source WAFs is ModSecurity, which has long been available as a module for the Apache webserver and IIS, and which has recently been announced for nginx too.
A very good alternative is naxsi, a module for nginx, still young but very promising.

In today’s article, I’ll focus on modsecurity for Apache. In a future article, I’ll build the same platform with naxsi and nginx.

Scalable WAF platform


The main problem with WAFs is that they require a lot of resources to analyse each request’s headers and body (they can even be configured to analyze the response). If you want to be able to protect all your incoming traffic, then you must think scalability.
In the present article I’m going to explain how to build a reliable and scalable platform where WAF capacity won’t be an issue. I could add “and where WAF maintenance can be done during business hours”.
Here are the basic requirements to achieve:

  • Web Application Firewall: achieved by Apache and modsecurity
  • High-availability: application server and WAF monitoring, achieved by HAProxy
  • Scalability: ability to adapt capacity to the incoming volume of traffic, achieved by HAProxy

It would be good if the platform also achieved the following advanced features:

  • DDOS protection: protection against blind and brutal attacks, slowloris protection, achieved by HAProxy
  • Content-Switching: ability to route only dynamic requests to the WAF, achieved by HAProxy
  • Reliability: ability to detect capacity overusage, achieved by HAProxy
  • Performance: deliver response as fast as possible, achieved by the whole platform

Web platform with WAF Diagram

The diagram below shows the platform with HAProxy frontends (prefixed by ft_) and backends (prefixed by bk_). Each farm is composed of 2 servers.

As you can see, all the traffic goes through the WAFs first, then comes back into HAProxy before being routed to the web servers. This is the basic configuration, meeting the basic requirements: Web Application Firewall, High-Availability, Scalability.

Platform installation


As load-balancer, I’m going to use our well known ALOHA 🙂
The web servers are standard Debian boxes with Apache and PHP; the application running on top of them is dokuwiki. I have no procedure for this one, it is very straightforward!
The WAFs run on CentOS 6.3 x86_64, using modsecurity 2.5.8. The installation procedure is outside the scope of this article, so I documented it on my personal wiki.
All of these servers are virtualized on my laptop using KVM, so NO, I won’t run performance benchmarks, it would be ridiculous!

Configuration

WAF configuration


Basic configuration here, no tuning at all. The purpose is not to explain how to configure a WAF, sorry.

Apache Configuration


Modification to the file /etc/httpd/conf/httpd.conf:

Listen 192.168.10.15:81
[...]
LoadModule security2_module modules/mod_security2.so
LoadModule unique_id_module modules/mod_unique_id.so
[...]
NameVirtualHost 192.168.10.15:81
[...]
<IfModule mod_security2.c>
        SecPcreMatchLimit 1000000
        SecPcreMatchLimitRecursion 1000000
        SecDataDir logs/
</IfModule>
<VirtualHost 192.168.10.15:81>
        ServerName *
        AddDefaultCharset UTF-8

        <IfModule mod_security2.c>
                Include modsecurity.d/modsecurity_crs_10_setup.conf
                Include modsecurity.d/aloha.conf
                Include modsecurity.d/rules/*.conf

                SecRuleEngine On
                SecRequestBodyAccess On
                SecResponseBodyAccess On
        </IfModule>

        ProxyPreserveHost On
        ProxyRequests off
        ProxyVia Off
        ProxyPass / http://192.168.10.2:81/
        ProxyPassReverse / http://192.168.10.2:81/
</VirtualHost>

Basically, we just turned Apache into a reverse-proxy, accepting traffic for any server name and applying modsecurity rules before routing traffic back to the HAProxy frontend dedicated to the web servers.

Client IP


HAProxy works as a reverse proxy and so will use its own IP address to connect to the WAF servers. You therefore have to install mod_rpaf to get the real client IP in the WAF, for both tracking and logging.
To install mod_rpaf, follow these instructions: apache mod_rpaf installation.
Concerning its configuration, we’ll do it as below; edit the file /etc/httpd/conf.d/mod_rpaf.conf:

LoadModule rpaf_module modules/mod_rpaf-2.0.so

<IfModule rpaf_module>
        RPAFenable On
        RPAFproxy_ips 192.168.10.1 192.168.10.3
        RPAFheader X-Client-IP
</IfModule>

modsecurity custom rules

In the Apache configuration there is a directive which tells modsecurity to load a file called aloha.conf. The purpose of this file is to tell modsecurity to deny the health check requests from HAProxy without logging them.
HAProxy will consider the WAF as operational only if it gets a 403 response to this request (see the HAProxy configuration below).
Content of the file /etc/httpd/modsecurity.d/aloha.conf:

SecRule REQUEST_FILENAME "/waf_health_check" "nolog,deny"
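
You can check this rule by hand by sending the health check request yourself; the WAF should answer a 403 (the address and port are the ones used in this lab):

$ curl -I http://192.168.10.15:81/waf_health_check
HTTP/1.1 403 Forbidden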

Load-Balancer (HAProxy) configuration for basic usage


The configuration below is the first shot we deploy on such a platform; it is basic, simple and straightforward:

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0
  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check
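
At this point, you can already check that the WAF does its job. A quick sketch with curl (the payload is arbitrary and the exact status depends on the CRS rules you loaded):

# a blatant SQL injection attempt should be denied by modsecurity
$ curl -s -o /dev/null -w "%{http_code}\n" "http://192.168.10.2/?id=1%20union%20select%201"
403
# a regular request should reach the web servers
$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.10.2/
200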

Advanced Load-Balancing (HAProxy) configuration


We’re now going to improve the platform a bit. The picture below shows which type of protection is achieved by the load-balancer and which by the WAF:

The configuration below adds a few more features:

  • DDOS protection on the frontend
  • abuser or attacker detection in bk_waf and blocking on the public interface (ft_waf)
  • bypassing the WAF farm in case of overusage or unavailability

This allows us to meet the advanced requirements: DDOS protection, Content-Switching, Reliability, Performance.

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 10000

  # DDOS protection
  # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
  # Monitors the number of requests sent by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s),http_err_rate(10s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { sc1_get_gpc0 gt 0 }
  # Abuser means more than 100 reqs/10s
  acl abuse sc1_http_req_rate(ft_waf) ge 100
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  acl static path_beg /static/ /dokuwiki/images/
  acl no_waf nbsrv(bk_waf) eq 0
  acl waf_max_capacity queue(bk_waf) ge 1
  # bypass WAF farm if no WAF available
  use_backend bk_web if no_waf
  # bypass WAF farm if it reaches its capacity
  use_backend bk_web if static waf_max_capacity
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0

  # If the source IP generated 10 or more http errors over the defined period,
  # flag the IP as abuser on the frontend
  acl abuse sc1_http_err_rate(ft_waf) ge 10
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  # get connected on the application server using the client ip
  # provided in the X-Client-IP header added by the bk_waf backend
  source 0.0.0.0 usesrc hdr_ip(X-Client-IP)
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check

Detecting attacks


On the load-balancer


The ft_waf frontend stick table tracks two pieces of information: http_req_rate and http_err_rate, which are respectively the HTTP request rate and the HTTP error rate generated by a single IP address.
HAProxy automatically blocks an IP which has generated more than 100 requests over a period of 10s, or 10 errors (the 403 responses triggered by WAF detection included) in 10s. The user is blocked for 1 minute, and stays blocked as long as he keeps on abusing.
Of course, you can set the values above to whatever you need: it is fully flexible.

To know the status of IPs in your load-balancer, just run the command below:

echo show table ft_waf | socat /var/run/haproxy.stat - 
# table: ft_waf, type: ip, size:1048576, used:1
0xc33304: key=192.168.10.254 use=0 exp=4555 gpc0=0 http_req_rate(10000)=1 http_err_rate(10000)=1

Note: The ALOHA Load-balancer does not provide watch, but you can monitor the content of the table live with the command below:

while true ; do echo show table ft_waf | socat /var/run/haproxy.stat - ; sleep 2 ; clear ; done
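
If you need to release a blocked IP before its entry expires, the stats socket can remove it from the table; for example, with the key shown above:

echo "clear table ft_waf key 192.168.10.254" | socat /var/run/haproxy.stat -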

On the WAF


I have not set up anything particular for WAF logging, so every error appears in /var/log/httpd/error_log. E.g.:

[Fri Oct 12 10:48:21 2012] [error] [client 192.168.10.254] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?:(?:[\\;\\|\\`]\\W*?\\bcc|\\b(wget|curl))\\b|\\/cc(?:[\\'\"\\|\\;\\`\\-\\s]|$))" at REQUEST_FILENAME. [file "/etc/httpd/modsecurity.d/rules/modsecurity_crs_40_generic_attacks.conf"] [line "25"] [id "950907"] [rev "2.2.5"] [msg "System Command Injection"] [data "/cc-"] [severity "CRITICAL"] [tag "WEB_ATTACK/COMMAND_INJECTION"] [tag "WASCTC/WASC-31"] [tag "OWASP_TOP_10/A1"] [tag "PCI/6.5.2"] [hostname "mywiki"] [uri "/dokuwiki/lib/images/license/button/cc-by-sa.png"] [unique_id "UHfZVcCoCg8AAApVAzsAAAAA"]

Seems to be a false positive 🙂

Conclusion


Today, we saw it’s easy to build a scalable and well performing WAF platform in front of our web application.
The WAF is able to tell HAProxy which IPs to automatically blacklist (through error rate monitoring), which is convenient since the attacker won’t bother the WAF for a certain amount of time 😉
The platform can detect WAF farm availability and bypass it in case of total failure; we even saw it is possible to bypass the WAF for static content if the farm is running out of capacity. The purpose is to deliver a good end-user experience without lowering security too much.
Note that it is possible to route all the static content to the web servers (or a static farm) directly, whatever the status of the WAF farm.
This makes me say that the platform is fully scalable and flexible.
Also, bear in mind to monitor your WAF logs; as shown in the example above, there was a false positive preventing an image from being loaded from dokuwiki.


SSL Client certificate management at application level

HAProxy and SSL

The history of SSL in HAProxy is very short: around one month ago, we announced the ability for HAProxy to offload SSL from the servers.
HAProxy’s SSL stack comes with some advanced features, like the TLS SNI extension.


Well, since yesterday afternoon (Tuesday the 2nd), HAProxy can also offload the client certificate management from the server, with some advanced features. This is the purpose of today’s article.
Again, all the dev is provided by HAProxy Technologies.
For the people using the ALOHA Load-Balancer, these features will be included in the next release without GUI integration (which will come later).
Concerning HAProxy, just git clone the latest version or wait for HAProxy-1.5-dev13. When compiling, don’t forget the USE_OPENSSL=yes flag.
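
For example, a minimal build sketch (the repository URL is the usual HAProxy git mirror, and the TARGET value is an assumption to adapt to your system):

$ git clone http://git.haproxy.org/git/haproxy.git
$ cd haproxy
$ make TARGET=linux2628 USE_OPENSSL=yes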

Introduction


Why client certificates?


The main purpose of using client-side certificates is to increase the level of authentication.
Since HAProxy often sits in front of web platforms, it is the right place to do this authentication: it can do all the certificate checking before allowing the user to pass through, process SSL on behalf of the servers and apply any standard features.
The main purpose of the article is to introduce the new HAProxy features related to SSL client certificates.

Basically, we’ll see how to protect access to our application with client-side certificates and how to properly redirect users to the right page when there is an issue with their certificates.

SSL Client certificate generation: thanks nginx!


Well, we’ll have to create a CA, a server certificate and clients certificates!
Nathan, an nginx user, has written a very nice and well documented article on how to generate a CA and client certificates here: http://blog.nategood.com/client-side-certificate-authentication-in-ngi.

So I won’t rewrite the whole procedure here, just follow Nathan’s instructions to create your own CA and generate a few client certificates. All you need is available here:
https://github.com/exceliance/haproxy/blob/master/blog/ssl_client_certificate_management_at_application_level/

Other stuff

In this article, we’ll use a few client certificates: client1, client2 and client_expired (whose certificate has … expired!).
On the GitHub link above, you’ll also find how to generate the PEM file required by HAProxy.
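
For the record, the PEM file HAProxy expects on the bind line is simply the certificate and its private key concatenated; a minimal sketch, assuming the server certificate and key are named server.crt and server.key (server.pem is the file used in the configurations below):

$ cat server.crt server.key > server.pem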

Phase 1: Client Certificate mandatory

In the configuration below, only users with a client certificate are allowed to connect to the application. This is achieved by the keywords “verify required”.

frontend ft_ssltests
 mode http
 bind 192.168.10.1:443 ssl crt ./server.pem ca-file ./ca.crt verify required
 default_backend bk_ssltests

backend bk_ssltests
 mode http
 server s1 192.168.10.101:80 check
 server s2 192.168.10.102:80 check

If the client does not provide any certificate, HAProxy shuts the connection during the SSL handshake. It’s up to the user’s software to report the right error…
Testing:

  • Connection with a certificate is allowed:
    $ openssl s_client -connect 192.168.10.1:443 -cert ./client1.crt -key ./client1.key
  • Connection without a certificate is refused:
    $ openssl s_client -connect 192.168.10.1:443
    [...]ssl handshake failure[...]
  • Connection with an expired certificate is refused too:
    $ openssl s_client -connect 192.168.10.1:443 -cert ./client_expired.crt -key ./client_expired.key 
    [...]ssl handshake failure[...]

Phase 2: Client Certificate optional

In the configuration below, all users, with or without a certificate, are allowed to connect. This is achieved by the keyword “verify optional”.
We can route users to different farms, based on whether the certificate is present or not:

frontend ssltests
 mode http
 bind 192.168.10.1:443 ssl crt ./server.pem ca-file ./ca.crt verify optional
 use_backend sharepoint if { ssl_fc_has_crt }     # check if the certificate has been provided and give access to the application
 default_backend webmail

backend sharepoint
 mode http
 server srv1 192.168.10.101:80 check
 server srv2 192.168.10.102:80 check

backend webmail
 mode http
 server srv3 192.168.10.103:80 check
 server srv4 192.168.10.104:80 check
  • If the client does not provide any certificate, then HAProxy routes him to the webmail.
  • If the client provides a certificate, then HAProxy routes him to the application (sharepoint in our example).
  • If the client provides an expired certificate, then HAProxy refuses the connection, as in phase 1.

Phase 3: Client Certificate optional and managing expired certificates

In the configuration below, all users, with or without a certificate, are allowed to connect. This is achieved by the keyword “verify optional”.
The option “crt-ignore-err 10” tells HAProxy to ignore certificate error 10, which matches an expired certificate.
We can route users to different farms based on the presence of the certificate or not, and we can propose a dedicated page to users whose certificate has expired, with a procedure on how to renew it or ask for a new one.

frontend ssltests
 mode http
 bind 192.168.10.1:443 ssl crt ./server.pem ca-file ./ca.crt verify optional crt-ignore-err 10
 use_backend static if { ssl_c_verify 10 }  # if the certificate has expired, route the user to a less sensitive server to print a help page
 use_backend sharepoint if { ssl_fc_has_crt }        # check if the certificate has been provided and give access to the application
 default_backend webmail

backend static
 mode http
 option http-server-close
 redirect location /certexpired.html if { ssl_c_verify 10 } ! { path /certexpired.html }
 server srv5 192.168.10.105:80 check
 server srv6 192.168.10.106:80 check

backend sharepoint
 mode http
 server srv1 192.168.10.101:80 check
 server srv2 192.168.10.102:80 check

backend webmail
 mode http
 server srv3 192.168.10.103:80 check
 server srv4 192.168.10.104:80 check
  • If the client does not provide any certificate, then HAProxy routes him to the webmail.
  • If the client provides a certificate, then HAProxy routes him to the application (sharepoint in our example).
  • If the client provides an expired certificate, then HAProxy routes him to a static (non-sensitive) server which forces the user onto a page explaining that the certificate has expired and how to renew it (it’s up to the admin to write this page).

Phase 4: Client Certificate optional, managing expired certificates and a revocation list

In the configuration below, all users, with or without a certificate, are allowed to connect. This is achieved by the keyword “verify optional”.
The option “crt-ignore-err all” tells HAProxy to ignore any client certificate error.
The option “crl-file ./ca_crl.pem” tells HAProxy to check that the client has not been revoked, against the Certificate Revocation List provided as argument.
We can route users to different farms based on the presence of the certificate or not, and we can propose a dedicated page to users whose certificate has expired, with a procedure on how to renew it or ask for a new one. We can also present a dedicated page to users whose certificate has been revoked.

frontend ssltests
 mode http
 bind 192.168.10.1:443 ssl crt ./server.pem ca-file ./ca.crt verify optional crt-ignore-err all crl-file ./ca_crl.pem
 use_backend static unless { ssl_c_verify 0 }  # if there is an error with the certificate, then route the user to a less sensitive farm
 use_backend sharepoint if { ssl_fc_has_crt }           # check if the certificate has been provided and give access to the application
 default_backend webmail

backend static
 mode http
 option http-server-close
 redirect location /certexpired.html if { ssl_c_verify 10 } ! { path /certexpired.html } # SSL error 10 means "certificate expired"
 redirect location /certrevoked.html if { ssl_c_verify 23 } ! { path /certrevoked.html } # SSL error 23 means "certificate revoked"
 redirect location /othererrors.html unless { ssl_c_verify 0 } ! { path /othererrors.html }
 server srv5 192.168.10.105:80 check
 server srv6 192.168.10.106:80 check

backend sharepoint
 mode http
 server srv1 192.168.10.101:80 check
 server srv2 192.168.10.102:80 check

backend webmail
 mode http
 server srv3 192.168.10.103:80 check
 server srv4 192.168.10.104:80 check
  • If the client does not provide any certificate, then HAProxy routes him to the webmail.
  • If the client provides a certificate, then HAProxy routes him to the application (sharepoint in our example).
  • If the client provides an expired certificate, then HAProxy routes him to a static (non-sensitive) server which forces the user onto a page explaining that the certificate has expired and how to renew it (it’s up to the admin to write this page).
  • If the client provides a revoked certificate, then HAProxy routes him to a static (non-sensitive) server which forces the user onto a page explaining that the certificate has been revoked (it’s up to the admin to write this page).
  • For any other error related to the client certificate, HAProxy routes the user to a static (non-sensitive) server which forces the user onto a page explaining that there has been an error and how to contact the support (it’s up to the admin to write this page).

Phase 5: same as phase 4, but with multiple CAs, Cert error in header and some ACLs

In the configuration below, all users, with or without a certificate, are allowed to connect. This is achieved by the keyword “verify optional”.
The option “crt-ignore-err all” tells HAProxy to ignore any client certificate error.
The option “crl-file ./ca_crl.pem” tells HAProxy to check that the client has not been revoked, against the Certificate Revocation List provided as argument.
The file ca.pem contains 2 CAs: ca and ca2.
We can route users to different farms based on the presence of the certificate or not, and we can propose a dedicated page to users whose certificate has expired, with a procedure on how to renew it or ask for a new one. We can also present a dedicated page to users whose certificate has been revoked.

frontend ssltests
 mode http
 bind 192.168.10.1:443 ssl crt ./server.pem ca-file ./ca.pem verify optional crt-ignore-err all crl-file ./ca_crl.pem
 use_backend static unless { ssl_c_verify 0 }  # if there is an error with the certificate, then route the user to a less sensitive farm
 use_backend sharepoint if { ssl_fc_has_crt }           # check if the certificate has been provided and give access to the application
 default_backend webmail

backend static
 mode http
 option http-server-close
 acl url_expired path /certexpired.html
 acl url_revoked path /certrevoked.html
 acl url_othererrors path /othererrors.html
 acl cert_expired ssl_c_verify 10
 acl cert_revoked ssl_c_verify 23
 reqadd X-Ssl-Error: 10 if cert_expired
 reqadd X-Ssl-Error: 23 if cert_revoked
 reqadd X-Ssl-Error: other if ! cert_expired ! cert_revoked
 redirect location /certexpired.html if cert_expired ! url_expired
 redirect location /certrevoked.html if cert_revoked ! url_revoked
 redirect location /othererrors.html if ! cert_expired ! cert_revoked ! url_othererrors
 server srv5 192.168.10.105:80 check
 server srv6 192.168.10.106:80 check

backend sharepoint
 mode http
 server srv1 192.168.10.101:80 check
 server srv2 192.168.10.102:80 check

backend webmail
 mode http
 server srv3 192.168.10.103:80 check
 server srv4 192.168.10.104:80 check
  • If the client does not provide any certificate, then HAProxy routes him to the webmail.
  • If the client provides a certificate, then HAProxy routes him to the application (sharepoint in our example).
  • If the client provides an expired certificate, then HAProxy routes him to a static (non-sensitive) server which forces the user onto a page explaining that the certificate has expired and how to renew it (it’s up to the admin to write this page).
  • If the client provides a revoked certificate, then HAProxy routes him to a static (non-sensitive) server which forces the user onto a page explaining that the certificate has been revoked (it’s up to the admin to write this page).
  • For any other error related to the client certificate, HAProxy routes the user to a static (non-sensitive) server which forces the user onto a page explaining that there has been an error and how to contact the support (it’s up to the admin to write this page).

Coming soon…


Later, we’ll improve HAProxy client certificate management with:

  • client certificate information in HTTP header
  • ACLs to match the client information provided in the certificate and take classic decision (routing, blocking, etc….)
  • Persistence based on the information provided by the certificate (stick tables)
  • Ability to use a client certificate to get connected (and authenticated) on a server

Don’t hesitate to send us your wishes!


HTTP request flood mitigation

In a recent article, we saw how we can use a load-balancer as a first row of defense against DDOS.

The purpose of the present article is to provide a configuration to protect your applications against HTTP request floods.

The configuration below allows only 10 requests per source IP over a period of 10s for the dynamic part of the website.
If a user goes above this limit, he gets blacklisted until he stops sending requests for 10 seconds.
HAProxy returns a 403 for requests over an established connection and refuses any new connection from this user.

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s
  timeout server 10s
  timeout client 30s

# On Aloha, you don't need to set up the stats page, the GUI already provides
# all the necessary information
listen stats
  bind 0.0.0.0:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

frontend ft_web
  bind 0.0.0.0:8080

  # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
  # Monitors the number of requests sent by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
  tcp-request connection track-sc1 src
  # refuses a new connection from an abuser
  tcp-request content reject if { src_get_gpc0 gt 0 }
  # returns a 403 for requests in an established connection
  http-request deny if { src_get_gpc0 gt 0 }

  # Split static and dynamic traffic since these requests have different impacts on the servers
  use_backend bk_web_static if { path_end .jpg .png .gif .css .js }

  default_backend bk_web

# Dynamic part of the application
backend bk_web
  balance roundrobin
  cookie MYSRV insert indirect nocache

  # If the source IP sent 10 or more http requests over the defined period,
  # flag the IP as abuser on the frontend
  acl abuse src_http_req_rate(ft_web) ge 10
  acl flag_abuser src_inc_gpc0(ft_web) ge 0
  # Returns a 403 to the abuser
  http-request deny if abuse flag_abuser

  server srv1 192.168.1.2:80 check cookie srv1 maxconn 100
  server srv2 192.168.1.3:80 check cookie srv2 maxconn 100

# Static objects
backend bk_web_static
  balance roundrobin
  server srv1 192.168.1.2:80 check maxconn 1000
  server srv2 192.168.1.3:80 check maxconn 1000


Use GeoIP database within HAProxy

Introduction

Sometimes we need to know the country of the user using the application, for different purposes:

  • Automatically select the most appropriate language
  • Send a 302 to redirect the user to the closest POP from his location
  • Allow only a single country to browse the site, for legal reason
  • Block some countries we don’t do business with and which are the source of most web attacks

IP based location


To achieve this, the most “reliable” information we have about a user is his IP address.
Well, it is not as reliable as we could hope:

  • not “reliable” because it’s easy to use a proxy installed in a foreign country to fake your IP address…
  • not “reliable” because GeoIP databases are not accurate.
  • not “reliable” because GeoIP databases rely on information provided by the ISP.
  • not “reliable” because any subnet can be routed from anywhere on earth.

When an ISP requests a new subnet from its local RIR, it has to declare the country where the subnet will be used.
This country is supplied as a two-letter code, normalized by ISO 3166.

The whois tool can be used to find out the country code of an IP address:

whois 1.1.1.1
[...]
country:        AU
[...]

geolocation definition

Well, it’s quite easy to understand: geolocation is the process of linking a third party to a geographical location. In simpler words: knowing the country of a client IP address.
On the Internet, such a database is called a GeoIP database.

geolocation databases


There are a few GeoIP databases available on the Internet; most of them use IP ranges to link an IP address to its country code.
An IP range is simply a couple of IP addresses representing the first and the last address of the range.
NOTE: It might correspond to a real subnet, but in most cases, it doesn’t ;).

For example:

"1.1.2.0","1.1.63.255","16843264","16859135","CN","China"

What’s the issue with HAProxy then???


HAProxy can only use the CIDR notation, with real subnets.
It means we’ll have to turn the IP ranges into CIDR notation. This is not an easy task, since you must split each IP range into multiple subnets…
Once done, we’ll be able to configure HAProxy to use them in ACLs and do anything an ACL can do.

For example, the range above translates into the following subnets:

1.1.2.0/23 "CN"
1.1.4.0/22 "CN"
1.1.8.0/21 "CN"
1.1.16.0/20 "CN"
1.1.32.0/19 "CN"

Now you can understand why GeoIP databases use IP ranges: it takes fewer lines 🙂

iprange tool

To ease this job, Willy released a tool called iprange in the HAProxy sources’ contrib directory.
You can see it here, in HAProxy’s git: http://www.haproxy.org/git/?p=haproxy.git;a=tree;f=contrib/iprange
It can be used to extract CIDR subnets from an IP range.

iprange installation


Just download both Makefile and iprange.c, then run make:

make
gcc -s -O3 -o iprange iprange.c

Not too complicated 🙂

iprange usage


iprange takes a single input format, composed of 3 columns separated by commas:

  1. first IP
  2. last IP
  3. country code

For example:

"1.1.2.0","1.1.63.255","CN"

NOTE: in the examples below, we’ll work with the Maxmind country code lite database.

The database looks like:

$ head GeoIPCountryWhois.csv
"1.0.0.0","1.0.0.255","16777216","16777471","AU","Australia"
"1.0.1.0","1.0.3.255","16777472","16778239","CN","China"
"1.0.4.0","1.0.7.255","16778240","16779263","AU","Australia"
"1.0.8.0","1.0.15.255","16779264","16781311","CN","China"
"1.0.16.0","1.0.31.255","16781312","16785407","JP","Japan"
"1.0.32.0","1.0.63.255","16785408","16793599","CN","China"
"1.0.64.0","1.0.127.255","16793600","16809983","JP","Japan"
"1.0.128.0","1.0.255.255","16809984","16842751","TH","Thailand"
"1.1.0.0","1.1.0.255","16842752","16843007","CN","China"
"1.1.1.0","1.1.1.255","16843008","16843263","AU","Australia"

In order to make it compatible with the iprange tool, use cut:

$ cut -d, -f1,2,5 GeoIPCountryWhois.csv | head
"1.0.0.0","1.0.0.255","AU"
"1.0.1.0","1.0.3.255","CN"
"1.0.4.0","1.0.7.255","AU"
"1.0.8.0","1.0.15.255","CN"
"1.0.16.0","1.0.31.255","JP"
"1.0.32.0","1.0.63.255","CN"
"1.0.64.0","1.0.127.255","JP"
"1.0.128.0","1.0.255.255","TH"
"1.1.0.0","1.1.0.255","CN"
"1.1.1.0","1.1.1.255","AU"

Now, you can use it with iprange:

$ cut -d, -f1,2,5 GeoIPCountryWhois.csv | head | ./iprange 
1.0.0.0/24 "AU"
1.0.1.0/24 "CN"
1.0.2.0/23 "CN"
1.0.4.0/22 "AU"
1.0.8.0/21 "CN"
1.0.16.0/20 "JP"
1.0.32.0/19 "CN"
1.0.64.0/18 "JP"
1.0.128.0/17 "TH"
1.1.0.0/24 "CN"
1.1.1.0/24 "AU"

Country codes and HAProxy ACLs


Now we’re ready to turn IP ranges into subnets associated with a country code.
We still need to be able to use them in HAProxy.
The easiest way is to write all the subnets for a given country code into a single file.

$ cut -d, -f1,2,5 GeoIPCountryWhois.csv | ./iprange | sed 's/"//g' \
  | awk -F' ' '{ print $1 >> $2".subnets" }'

And the result is nice:

$ ls *.subnets
A1.subnets  AX.subnets  BW.subnets  CX.subnets  FJ.subnets  GR.subnets  IR.subnets  LA.subnets  ML.subnets  NF.subnets  PR.subnets  SI.subnets  TK.subnets  VE.subnets
A2.subnets  AZ.subnets  BY.subnets  CY.subnets  FK.subnets  GS.subnets  IS.subnets  LB.subnets  MM.subnets  NG.subnets  PS.subnets  SJ.subnets  TL.subnets  VG.subnets
AD.subnets  BA.subnets  BZ.subnets  CZ.subnets  FM.subnets  GT.subnets  IT.subnets  LC.subnets  MN.subnets  NI.subnets  PT.subnets  SK.subnets  TM.subnets  VI.subnets
AE.subnets  BB.subnets  CA.subnets  DE.subnets  FO.subnets  GU.subnets  JE.subnets  LI.subnets  MO.subnets  NL.subnets  PW.subnets  SL.subnets  TN.subnets  VN.subnets
AF.subnets  BD.subnets  CC.subnets  DJ.subnets  FR.subnets  GW.subnets  JM.subnets  LK.subnets  MP.subnets  NO.subnets  PY.subnets  SM.subnets  TO.subnets  VU.subnets
AG.subnets  BE.subnets  CD.subnets  DK.subnets  GA.subnets  GY.subnets  JO.subnets  LR.subnets  MQ.subnets  NP.subnets  QA.subnets  SN.subnets  TR.subnets  WF.subnets
AI.subnets  BF.subnets  CF.subnets  DM.subnets  GB.subnets  HK.subnets  JP.subnets  LS.subnets  MR.subnets  NR.subnets  RE.subnets  SO.subnets  TT.subnets  WS.subnets
AL.subnets  BG.subnets  CG.subnets  DO.subnets  GD.subnets  HN.subnets  KE.subnets  LT.subnets  MS.subnets  NU.subnets  RO.subnets  SR.subnets  TV.subnets  YE.subnets
AM.subnets  BH.subnets  CH.subnets  DZ.subnets  GE.subnets  HR.subnets  KG.subnets  LU.subnets  MT.subnets  NZ.subnets  RS.subnets  ST.subnets  TW.subnets  YT.subnets
AN.subnets  BI.subnets  CI.subnets  EC.subnets  GF.subnets  HT.subnets  KH.subnets  LV.subnets  MU.subnets  OM.subnets  RU.subnets  SV.subnets  TZ.subnets  ZA.subnets
AO.subnets  BJ.subnets  CK.subnets  EE.subnets  GG.subnets  HU.subnets  KI.subnets  LY.subnets  MV.subnets  PA.subnets  RW.subnets  SY.subnets  UA.subnets  ZM.subnets
AP.subnets  BM.subnets  CL.subnets  EG.subnets  GH.subnets  ID.subnets  KM.subnets  MA.subnets  MW.subnets  PE.subnets  SA.subnets  SZ.subnets  UG.subnets  ZW.subnets
AQ.subnets  BN.subnets  CM.subnets  EH.subnets  GI.subnets  IE.subnets  KN.subnets  MC.subnets  MX.subnets  PF.subnets  SB.subnets  TC.subnets  UM.subnets
AR.subnets  BO.subnets  CN.subnets  ER.subnets  GL.subnets  IL.subnets  KP.subnets  MD.subnets  MY.subnets  PG.subnets  SC.subnets  TD.subnets  US.subnets
AS.subnets  BR.subnets  CO.subnets  ES.subnets  GM.subnets  IM.subnets  KR.subnets  ME.subnets  MZ.subnets  PH.subnets  SD.subnets  TF.subnets  UY.subnets
AT.subnets  BS.subnets  CR.subnets  ET.subnets  GN.subnets  IN.subnets  KW.subnets  MG.subnets  NA.subnets  PK.subnets  SE.subnets  TG.subnets  UZ.subnets
AU.subnets  BT.subnets  CU.subnets  EU.subnets  GP.subnets  IO.subnets  KY.subnets  MH.subnets  NC.subnets  PL.subnets  SG.subnets  TH.subnets  VA.subnets
AW.subnets  BV.subnets  CV.subnets  FI.subnets  GQ.subnets  IQ.subnets  KZ.subnets  MK.subnets  NE.subnets  PM.subnets  SH.subnets  TJ.subnets  VC.subnets

Which makes subnets available for 246 countries!

For example, the subnets associated with Australia (AU) are:

$ cat AU.subnets 
1.0.0.0/24
1.0.4.0/22
1.1.1.0/24
[...]

The bash loop below prepares the ACLs in a file called haproxy.cfg:

$ for f in `ls *.subnets` ; do echo $f | \
  awk -F'.' '{ print "acl "$1" src -f "$0 >> "haproxy.cfg" }' ; done
$ head haproxy.cfg 
acl A1 src -f A1.subnets
acl A2 src -f A2.subnets
acl AD src -f AD.subnets
acl AE src -f AE.subnets
acl AF src -f AF.subnets
acl AG src -f AG.subnets
acl AI src -f AI.subnets
acl AL src -f AL.subnets
acl AM src -f AM.subnets
acl AN src -f AN.subnets

That makes a lot of countries 🙂

Continent codes and HAProxy ACLs


Fortunately, we can summarize them by continent. Copy and paste into a file the country code to continent mapping from the maxmind website: http://www.maxmind.com/app/country_continent.

The script below will create files named after each continent, containing the country codes belonging to it:

$ for c in `fgrep -v '-' country_continents.txt | sort -t',' -k 2` ; \
  do echo $c | awk -F',' '{ print $1 >> $2".continent" }' ; done

We now have 7 new files:

$ ls *.continent
AF.continent  AN.continent  AS.continent  EU.continent  
NA.continent  OC.continent  SA.continent

Let’s have a look at the countries in South America:

$ cat SA.continent 
AR
BO
BR
CL
CO
EC
FK
GF
GY
PE
PY
SR
UY
VE

Let’s aggregate subnets for each country in a continent into a single file:

$ for f in `ls *.continent` ; do for c in $(cat $f) ; \
  do cat ${c}.subnets >> ${f%%.*}.subnets ; done ; done

Now we can generate the HAProxy configuration file to use them:

$ for c in AF AN AS EU NA OC SA ; do \
  echo acl $c src -f $c.subnets >> "haproxy.conf" ; done

Usage in HAProxy


Coming soon: an article giving some examples on how to use the files generated above to improve the performance and security of your platforms.
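
In the meantime, here is a minimal sketch of what such a configuration could look like, assuming the generated files were copied to /etc/haproxy/geoip/ (the path and the backend names are assumptions):

frontend ft_web
  bind 0.0.0.0:80
  # match the client IP against the generated subnet files
  acl src_cn src -f /etc/haproxy/geoip/CN.subnets
  acl src_eu src -f /etc/haproxy/geoip/EU.subnets
  # block a country we don't do business with
  tcp-request connection reject if src_cn
  # route European users to a dedicated farm
  use_backend bk_web_eu if src_eu
  default_backend bk_web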


Use a load-balancer as a first row of defense against DDOS

We’ve seen more and more DOS and DDOS attacks recently. Some of them were very big, requiring thousands of computers…
But in most cases these attacks are made by just a few computers aiming to make a service or website unavailable, either by sending it too many requests or by taking all its available resources, preventing regular users from using the service.
Some attacks target known vulnerabilities of widely used applications.

In the present article, we’ll explain how to take advantage of an application delivery controller to protect your website and application against DOS, DDOS and vulnerability scans.

Why use a load-balancer for such protection, since a firewall and a Web Application Firewall (aka WAF) could already do the job?
Well, the firewall is not aware of the application layer, although it is useful to protect against SYN flood attacks. That’s why application layer firewalls appeared recently: Web Application Firewalls, also known as WAFs.
Since the load balancer stands in front of the platform, it can be a good partner for the WAF: it filters out the 99% of attacks which are run by script kiddies, and the WAF can then happily clean up the remaining ones.
And maybe you don’t need a WAF at all and you simply want to take advantage of your Aloha and save some money ;).

Note that you need an application layer load-balancer, like the Aloha or the open-source HAProxy, for this to be efficient.

TCP syn flood attacks


SYN flood attacks consist in sending as many TCP SYN packets as possible to a single server, trying to saturate it or, at least, to saturate its uplink bandwidth.

If you’re using the Aloha load-balancer, you’re already protected against this kind of attack: the Aloha includes mechanisms to protect you.
The TCP SYN flood attack mitigation capacity may vary depending on your Aloha box.

If you’re running your own LB based on HAProxy or HAPEE, you should have a look at the sysctls below (edit /etc/sysctl.conf or play with the sysctl command):

# Protection SYN flood
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_max_syn_backlog = 1024 

Note: If the attack is very big and saturates your internet bandwidth, the only solution is to ask your internet access provider to null route the attackers’ IPs on its core network.

Slowloris like attacks


In this kind of attack, the clients send their requests very slowly to the server: header by header, or even worse, character by character, waiting a long time between each of them.
The server has to wait until the end of the request to process it and send back its response.
The purpose of this attack is to prevent regular users from using the service, since the attacker uses up all the available resources with very slow queries.

In order to protect your website against this kind of attack, just set up the HAProxy option “timeout http-request”.
You can set it to 5s, which is long enough.
It gives a client five seconds to send its whole HTTP request; otherwise, HAProxy shuts the connection with an error.

For example:

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s
  timeout server 10s
  timeout client 30s

listen stats
  bind 0.0.0.0:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

frontend ft_web
  bind 0.0.0.0:8080

  # Split static and dynamic traffic since these requests have different impacts on the servers
  use_backend bk_web_static if { path_end .jpg .png .gif .css .js }

  default_backend bk_web

# Dynamic part of the application
backend bk_web
  balance roundrobin
  cookie MYSRV insert indirect nocache
  server srv1 192.168.1.2:80 check cookie srv1 maxconn 100
  server srv2 192.168.1.3:80 check cookie srv2 maxconn 100

# Static objects
backend bk_web_static
  balance roundrobin
  server srv1 192.168.1.2:80 check maxconn 1000
  server srv2 192.168.1.3:80 check maxconn 1000

To test this configuration, simply open a telnet to the frontend port and wait for 5 seconds:

telnet 127.0.0.1 8080
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
HTTP/1.0 408 Request Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.

Connection closed by foreign host.

Unfair users, AKA abusers


By unfair users, I mean users (or scripts) which have an abnormal behavior on your website:

  • too many connections opened
  • new connection rate too high
  • http request rate too high
  • bandwidth usage too high
  • client not respecting RFCs (e.g. for SMTP)

How does a regular browser work?


Before trying to protect your website from weird behavior, we have to define what a “normal” behavior is!
This paragraph gives the main lines of how a browser works; there may be some differences between browsers.

So, when one wants to browse a website, we use a browser: Chrome, Firefox, Internet Explorer and Opera are the most famous ones.
After typing the website name in the URL bar, the browser looks up the IP address of the website.
Then it establishes a TCP connection to the server, downloads the main page, analyzes its content and follows the links in the HTML code to get the objects required to build the page: javascript, css, images, etc…
To get these objects, it may open up to 6 or 7 TCP connections per domain name.
Once it has finished downloading the objects, it aggregates everything and renders the page.

Limiting the number of connections per users


As seen before, a browser opens 5 to 7 TCP connections to a website when it wants to download objects, and they are opened quite quickly.
One can consider that somebody having more than 10 connections opened is not a regular user.
The configuration below shows how to set up this limitation in the Aloha and HAProxy:

This configuration also applies to any kind of TCP based application.

The most important lines are the stick-table definition and the tcp-request rules in the ft_web frontend.

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s
  timeout server 10s
  timeout client 30s

listen stats
  bind 0.0.0.0:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

frontend ft_web
  bind 0.0.0.0:8080

  # Table definition  
  stick-table type ip size 100k expire 30s store conn_cur

  # Allow clean known IPs to bypass the filter
  tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
  # Reject the new connection if the client already has 10 connections opened
  tcp-request connection reject if { src_conn_cur ge 10 }
  tcp-request connection track-sc1 src

  # Split static and dynamic traffic since these requests have different impacts on the servers
  use_backend bk_web_static if { path_end .jpg .png .gif .css .js }

  default_backend bk_web

# Dynamic part of the application
backend bk_web
  balance roundrobin
  cookie MYSRV insert indirect nocache
  server srv1 192.168.1.2:80 check cookie srv1 maxconn 100
  server srv2 192.168.1.3:80 check cookie srv2 maxconn 100

# Static objects
backend bk_web_static
  balance roundrobin
  server srv1 192.168.1.2:80 check maxconn 1000
  server srv2 192.168.1.3:80 check maxconn 1000
  • NOTE: if several domain names point to your frontend, then you may want to increase the conn_cur limit (remember, a browser opens 5 to 7 TCP connections per domain name).
  • NOTE2: if several users are hidden behind the same IP (NAT or proxy), this configuration may have a negative impact on them. You can whitelist these IPs (see the file format below).
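
The whitelist referenced above is a plain ACL pattern file: one IP address or CIDR subnet per line, empty lines and lines starting with '#' being ignored. For example, /etc/haproxy/whitelist.lst could contain (the addresses are examples):

# corporate proxy
192.168.0.254
# partner NAT gateway
10.11.12.0/24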

Testing the configuration

Run ApacheBench to open 10 connections and issue requests on these connections:

ab -n 50000000 -c 10 http://127.0.0.1:8080/

Watch the table content on the haproxy stats socket:

echo "show table ft_web" | socat unix:./haproxy.stats -
# table: ft_web, type: ip, size:102400, used:1
0x7afa34: key=127.0.0.1 use=10 exp=29994 conn_cur=10

Let’s try to open an eleventh connection using telnet:

telnet 127.0.0.1 8080
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.

Basically, already established connections can keep on working, while new ones can’t be established.

Limiting the connection rate per user


In the previous chapter, we’ve seen how to protect ourselves from somebody who wants to open more than X connections at the same time.
Well, this is good, but something which may kill performance would be to allow somebody to open and close a lot of TCP connections over a short period of time.
As we’ve seen previously, a browser opens up to 7 TCP connections in a very short period of time (a few seconds). One can consider that somebody opening 10 or more connections over a period of 3 seconds is not a regular user.
The configuration below shows how to set up this limitation in the Aloha and HAProxy:

This configuration also applies to any kind of TCP based application.

The most important lines are the stick-table definition and the tcp-request rules in the ft_web frontend.

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s
  timeout server 10s
  timeout client 30s

listen stats
  bind 0.0.0.0:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

frontend ft_web
  bind 0.0.0.0:8080

  # Table definition  
  stick-table type ip size 100k expire 30s store conn_rate(3s)

  # Allow clean known IPs to bypass the filter
  tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
  # Reject the new connection if the client has opened too many connections recently
  tcp-request connection reject if { src_conn_rate ge 10 }
  tcp-request connection track-sc1 src

  # Split static and dynamic traffic since these requests have different impacts on the servers
  use_backend bk_web_static if { path_end .jpg .png .gif .css .js }

  default_backend bk_web

# Dynamic part of the application
backend bk_web
  balance roundrobin
  cookie MYSRV insert indirect nocache
  server srv1 192.168.1.2:80 check cookie srv1 maxconn 100
  server srv2 192.168.1.3:80 check cookie srv2 maxconn 100

# Static objects
backend bk_web_static
  balance roundrobin
  server srv1 192.168.1.2:80 check maxconn 1000
  server srv2 192.168.1.3:80 check maxconn 1000
  • NOTE2: if several users are hidden behind the same IP (NAT or proxy), this configuration may have a negative impact on them. You can whitelist these IPs.

Testing the configuration


Run 10 requests with ApacheBench; everything should be fine:

ab -n 10 -c 1 -r http://127.0.0.1:8080/

Using socat we can watch this traffic in the stick-table:

# table: ft_web, type: ip, size:102400, used:1
0x11faa3c: key=127.0.0.1 use=0 exp=28395 conn_rate(3000)=10

Run a telnet to open an eleventh connection, and it gets closed:

telnet 127.0.0.1 8080
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.

Limiting the HTTP request rate


Even if, in the previous examples, we were using HTTP as the protocol, we based our protection on layer 4 information: the number or the opening rate of TCP connections.
An attacker could respect the connection limits we set by emulating the behavior of a regular browser.
Now, let’s go deeper and see what we can do at the HTTP protocol level.

The configuration below tracks HTTP request rate per user on the backend side, blocking abusers on the frontend side if the backend detects abuse.

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s
  timeout server 10s
  timeout client 30s

listen stats
  bind 0.0.0.0:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

frontend ft_web
  bind 0.0.0.0:8080

  # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
  # Monitors the number of requests sent by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { src_get_gpc0 gt 0 }

  # Split static and dynamic traffic since these requests have different impacts on the servers
  use_backend bk_web_static if { path_end .jpg .png .gif .css .js }

  default_backend bk_web

# Dynamic part of the application
backend bk_web
  balance roundrobin
  cookie MYSRV insert indirect nocache

  # If the source IP sent 10 or more http requests over the defined period,
  # flag the IP as abuser on the frontend
  acl abuse src_http_req_rate(ft_web) ge 10
  acl flag_abuser src_inc_gpc0(ft_web)
  tcp-request content reject if abuse flag_abuser

  server srv1 192.168.1.2:80 check cookie srv1 maxconn 100
  server srv2 192.168.1.3:80 check cookie srv2 maxconn 100

# Static objects
backend bk_web_static
  balance roundrobin
  server srv1 192.168.1.2:80 check maxconn 1000
  server srv2 192.168.1.3:80 check maxconn 1000
  • NOTE: if several users are hidden behind the same IP (NAT or proxy), this configuration may have a negative impact on them. You can whitelist these IPs.

Testing the configuration

Run 10 requests with ApacheBench; everything should be fine:

ab -n 10 -c 1 -r http://127.0.0.1:8080/

Using socat we can watch this traffic in the stick-table:

# table: ft_web, type: ip, size:1048576, used:1
0xbebbb0: key=127.0.0.1 use=0 exp=8169 gpc0=1 http_req_rate(10000)=10

Run a telnet to make an eleventh request, and the connection gets closed:

telnet 127.0.0.1 8080
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.

Detecting vulnerability scans

Vulnerability scans generate different kinds of errors which can be tracked by the Aloha and HAProxy:

  • invalid and truncated requests
  • denied or tarpitted requests
  • failed authentications
  • 4xx error pages

HAProxy is able to monitor an error rate per user and can then take decisions based on it.

# On Aloha, the global section is already setup for you
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s
  timeout connect 5s
  timeout server 10s
  timeout client 30s

listen stats
  bind 0.0.0.0:8880
  stats enable
  stats hide-version
  stats uri     /
  stats realm   HAProxy Statistics
  stats auth    admin:admin

frontend ft_web
  bind 0.0.0.0:8080

  # Use General Purpose Counter 0 in SC1 as a global abuse counter
  # Monitors the number of errors generated by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 10s store gpc0,http_err_rate(10s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { src_get_gpc0 gt 0 }

  # Split static and dynamic traffic since these requests have different impacts on the servers
  use_backend bk_web_static if { path_end .jpg .png .gif .css .js }

  default_backend bk_web

# Dynamic part of the application
backend bk_web
  balance roundrobin
  cookie MYSRV insert indirect nocache

  # If the source IP generated 10 or more http errors over the defined period,
  # flag the IP as abuser on the frontend
  acl abuse src_http_err_rate(ft_web) ge 10
  acl flag_abuser src_inc_gpc0(ft_web)
  tcp-request content reject if abuse flag_abuser

  server srv1 192.168.1.2:80 check cookie srv1 maxconn 100
  server srv2 192.168.1.3:80 check cookie srv2 maxconn 100

# Static objects
backend bk_web_static
  balance roundrobin
  server srv1 192.168.1.2:80 check maxconn 1000
  server srv2 192.168.1.3:80 check maxconn 1000

Testing the configuration

Run ApacheBench, pointing it at a purposely wrong URL:

ab -n 10 http://127.0.0.1:8080/dlskfjlkdsjlkfdsj

Watch the table content on the haproxy stats socket:

echo "show table ft_web" | socat unix:./haproxy.stats -
# table: ft_web, type: ip, size:1048576, used:1
0x8a9770: key=127.0.0.1 use=0 exp=5866 gpc0=1 http_err_rate(10000)=11

Let’s try to run the same ab command again, and this time we get the error:

apr_socket_recv: Connection reset by peer (104)

which means that HAProxy has blocked the IP address.

Notes

  • We could combine the configuration examples above to improve protection. This will be described later in another article.
  • The numbers provided in the examples may be different for your application and architecture. Bench your configuration properly before applying it in production.


Fight spam with early talking detection

Synopsis

A good way to improve efficiency against spammers is to use early talking detection. This may interest you if:

  1. you own an SMTP relay platform and you want to improve its efficiency at fighting spam
  2. your current MTA has no early talking detection feature and you want to add one
  3. you want to offload the early talking detection feature from your SMTP server to another device in your architecture

What is early talking detection?

The picture below shows the SMTP “hello phase”, in 4 steps:
[diagram: smtp_helo_phase]

  • Step 1: the client gets connected to the SMTP server
  • Step 2: the server acknowledges the SMTP connection with a “220 service ready” message
  • Step 3: the client sends its identity (basically its domain name)
  • Step 4: the server welcomes the client, and the client is now allowed to send a mail

Usually, spammers have no time to waste, so they open the TCP connection and immediately send the HELO packet to the server.
The Aloha can hold the connection on the client side and monitor it for a few seconds.
Then there are 2 options:

  1. If the client “speaks” first, it means it’s a spammer, so the connection can be closed safely. No need to bother the server with it.
  2. If the client waits for the SMTP banner during the observation period, it looks like a regular user, so we may accept its connection and let the client and server speak together.

This way of listening on the connection to detect whether the client talks first is called early talking detection.

Diagram

In order to use the Aloha in front of a public SMTP relay platform, it’s recommended to configure it in reverse-proxy transparent mode, also known as Destination NAT.
That way, the SMTP servers will be aware of the client IP address.
In that mode, the default gateway of the servers must be the Aloha, or you must configure Policy Based Routing to redirect traffic from the SMTP servers’ source port 25 to the Aloha, as sketched below.
[diagram: smtp_diagram]
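
As a sketch, on a Linux SMTP server this Policy Based Routing could look like the following (the Aloha address and the routing table number are assumptions):

# send traffic sourced from port 25 back through the Aloha (192.168.0.1 here)
ip route add default via 192.168.0.1 table 100
iptables -t mangle -A OUTPUT -p tcp --sport 25 -j MARK --set-mark 1
ip rule add fwmark 1 table 100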

Configuration

In the Aloha, use the Layer 7 (HAProxy) load-balancing tab and apply the configuration below:

frontend ft_smtp
  mode tcp
  bind :25
  source 0.0.0.0 usesrc clientip
  log global
  option tcplog
# reject SMTP connection if client speaks first before 30s
  tcp-request inspect-delay 30s
  acl content_present req_len gt 0
  tcp-request content reject if content_present
  default_backend bk_smtp

backend bk_smtp
  mode tcp
  balance roundrobin
  log global
  option tcplog
# SMTP health check
  option smtpchk HELO mydomain.com
  default-server inter 3s rise 2 fall 3
  server smtp1 10.0.0.1:25 check
  server smtp2 10.0.0.2:25 check
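
You can check the behavior with telnet: connect to the frontend and type anything before the banner shows up; the connection should get closed (a sketch, assuming the frontend is reachable locally):

telnet 127.0.0.1 25
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
HELO spammer.example
Connection closed by foreign host.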

Updates:
(1) renamed and updated the article to describe early talking detection rather than grey listing

Protect Apache against “apache killer” script

What is Apache killer?

Apache killer is a script which exploits an Apache vulnerability.
Basically, it makes Apache fill up the /tmp directory, which makes the webserver unstable.

Who is concerned?

Anybody running a website on Apache.
See the Apache announcement.

How can Aloha Load-Balancer help you?

First, let’s have a look at the diagram below:

The Aloha can clean up your Range headers, limit the connection rate of malicious clients and even emulate the success of the attack.

Protect against Range header

Basically, the attack consists in sending a lot of Range headers to the webserver.
So, if a “client” sends more than 10 Range headers, we can consider this an attack and we can clean them up.
Just add the two lines below to your Layer 7 (HAProxy) backend configuration to protect your Apache web servers:

backend bk_http
[...]
  # Detect an ApacheKiller-like Attack
  acl weirdrangehdr hdr_cnt(Range) gt 10
  # Clean up the request
  reqidel ^Range if weirdrangehdr
[...]
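
To check the rule, you can replay a request carrying more than 10 Range headers and verify that it is still served normally, the headers being stripped on the fly (a sketch; the URL is an assumption and each -H flag adds one Range header):

curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1/ \
  -H "Range: bytes=0-1" -H "Range: bytes=1-2" -H "Range: bytes=2-3" \
  -H "Range: bytes=3-4" -H "Range: bytes=4-5" -H "Range: bytes=5-6" \
  -H "Range: bytes=6-7" -H "Range: bytes=7-8" -H "Range: bytes=8-9" \
  -H "Range: bytes=9-10" -H "Range: bytes=10-11"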

Protect against service abuser

Since this kind of attack is usually combined with a DOS, you can blacklist the bad guys with the configuration below.
It limits users to 10 connections over a 5s period, then holds their connections for 10s before answering a 503 HTTP response.

You should adjust the values below to your website’s traffic.

frontend ft_http
[...]
  option http-server-close

  # Setup stick table
  stick-table type ip size 1k expire 30s store gpc0
  # Match sources already flagged as attackers
  acl MARKED src_get_gpc0(ft_http) gt 0
  # tarpit the flagged attackers
  use_backend bk_tarpit if MARKED
  # If not flagged, track the connection
  tcp-request connection track-sc1 src if ! MARKED

  default_backend bk_http
[...]

backend bk_http
[...]
  # Table to track connection rate
  stick-table type ip size 1k expire 30s store conn_rate(5s)
  # Track request
  tcp-request content track-sc2 src
  # Mark as abuser if more than 10 connections
  acl ABUSER sc2_conn_rate gt 10
  acl MARKED_AS_ABUSER sc1_inc_gpc0
  # Reject connections considered as abuse
  tcp-request content reject if ABUSER MARKED_AS_ABUSER
[...]

# Slow down attackers
backend bk_tarpit
  mode http
  # hold the connection for 10s before answering
  timeout tarpit 10s
  # Emulate a 503 error
  errorfile 500 /etc/errors/500_tarpit.txt
  # slowdown any request coming up to here
  reqitarpit .

Open a shell on your Aloha Load-Balancer, then:

  • create the directory /etc/errors/
  • create the file 500_tarpit.txt with the content below.

500_tarpit.txt:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Content-Length: 310

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">;
<html xmlns="http://www.w3.org/1999/xhtml">;
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Error</title>
</head>
<body><h1>Something went wrong</h1></body>
</html>

Don’t forget to save your configuration with the command:

config save
