Tag Archives: nginx

What is a slow POST Attack and how to turn HAProxy into your first line of Defense?

One of the biggest security challenges that companies face in today’s modern climate is the slow POST attack. Unlike a more traditional volumetric “Denial-of-Service” attack, which floods the target’s bandwidth, slow POST attacks target a server’s logical resources – connection slots and worker threads – making them particularly powerful when executed.

What is a slow POST Attack?


In a slow POST attack, an attacker begins by sending a legitimate HTTP POST header to a Web server, exactly as they would under normal circumstances. The header specifies the exact size of the message body that will follow. However, that message body is then sent at an alarmingly low rate – sometimes as slow as 1 byte every two minutes. Because the request is technically valid, the targeted server keeps the connection open and dutifully waits for the body to complete – which, as you would expect, can take quite a while. The issue is that if an attacker establishes hundreds or even thousands of these slow POSTs simultaneously, they quickly use up all server resources and make legitimate connections impossible.
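To make the mechanics concrete, here is a minimal sketch of the traffic pattern, using nothing but printf and nc (the host, port and URL are placeholders – only ever point this at a server you own, for testing):

# announce a 1 MB body up front...
{ printf 'POST /form HTTP/1.1\r\nHost: target.example\r\nContent-Length: 1000000\r\n\r\n'
  # ...then trickle it one byte every two minutes
  while true; do printf 'a'; sleep 120; done
} | nc target.example 80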

How can HAProxy protect against slow POST attacks?


Because slow POST attacks can be incredibly powerful, it’s always important to have a tool in place that identifies these issues while they’re still in their nascent stages, before they become much larger, more serious problems down the road. Because HAProxy was designed as an application delivery controller to manage Web application high availability and performance, it is already in an ideal position to stop these attacks in their tracks.

HAProxy Configuration Example

Because of HAProxy’s structure and configuration flexibility, many professionals use it as a security tool. Case in point: the following configuration example easily helps protect your servers against slow POST attacks, preventing attackers from clogging resources and ultimately harming not only your equipment but your entire organization.

frontend ft_myapp
 [...]
 option http-buffer-request
 timeout http-request 10s

The option http-buffer-request directive instructs HAProxy to wait for the whole request body before forwarding anything to a server, and timeout http-request 10s defines how much time HAProxy gives a client to send the complete request (headers and body). A client trickling its POST body will hit this timeout and be rejected with a 408 before it ever ties up a backend server.
As you can see, with just a few simple modifications, HAProxy can quickly and effortlessly remove slow POST attacks from the list of things you have to worry about on a daily basis with regards to your mission-critical business applications and APIs.
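You can check the protection from a shell by simulating a slow client; a minimal sketch using curl’s --limit-rate option (the URL and payload file are placeholders):

# throttle the upload to ~1 byte/s: HAProxy should answer 408
# once timeout http-request (10s here) expires
curl -s -o /dev/null -w '%{http_code}\n' --limit-rate 1 \
  -X POST --data-binary @large_file http://myapp.example/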

Thanks to its usefulness as a security tool, a reverse proxy and more, in addition to its intended role as a load balancer, it’s easy to see why HAProxy is used daily by some of the largest sites on the Internet, including Reddit, Tumblr and GitHub.



binary health check with HAProxy 1.5: php-fpm/fastcgi probe example

Application layer health checking

Health checking is the ability to probe a server to ensure the service is up and running.
It is one of the core features of any load-balancer.

One can probe servers and services at different layers of the OSI model:
  * ARP check (not available in HAProxy)
  * ICMP (ping) check (not available in HAProxy)
  * TCP (handshake) check
  * Application (HTTP, MySql, SMTP, POP, etc…) check

The more representative of the application’s status a check is, the better.

This means that the best way to check a service is to “speak” the protocol itself.
Unfortunately, it is impossible to write one check per protocol: there are too many of them, and some are proprietary and/or binary.
That’s why HAProxy 1.5 now embeds a new health checking method called “tcp-check”. It is a simple “send/expect” probing method where HAProxy sends arbitrary strings and matches server responses against a string or a regex.
Several send and expect rules can be chained in a row to determine the status of a server.
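As a quick illustration, here is what such a sequence looks like – the classic redis PING/PONG probe shipped as an example in the HAProxy documentation (the server address is a placeholder):

backend bk_redis
 mode tcp
 option tcp-check
 tcp-check send PING\r\n
 tcp-check expect string +PONG
 tcp-check send QUIT\r\n
 tcp-check expect string +OK
 server redis1 192.168.0.10:6379 check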

I’ve already explained how to check a redis server and how to balance traffic only to the redis master server of a cluster.

Today’s article introduces a widely deployed binary protocol: fastcgi, as used by php-fpm.

fastcgi binary ping/pong health check

fastcgi is a binary protocol: unlike text protocols such as HTTP or SMTP, the data on the wire is not human-readable.
php-fpm relies on this protocol to execute PHP code. It is common to use HAProxy to load-balance many php-fpm servers for resiliency and scalability.

php-fpm Configuration

Enable a dedicated URL to probe and set up the expected response in your php-fpm configuration:

ping.path = /ping
ping.response = pong

This means HAProxy has to send a fastcgi request on /ping and expect the server to respond with “pong”.
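You can verify the endpoint by hand before involving HAProxy; one way is the cgi-fcgi utility from the FastCGI development kit (assuming php-fpm listens on 127.0.0.1:9000):

SCRIPT_NAME=/ping SCRIPT_FILENAME=/ping REQUEST_METHOD=GET \
  cgi-fcgi -bind -connect 127.0.0.1:9000
# the response body should end with: pong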

HAProxy health checking for php-fpm

Add the following tcp-check sequence to your php-fpm backend to probe the /ping URL and ensure the server answers with “pong”.
The comments at the end of the lines describe the FastCGI protocol fields.

option tcp-check
 # FCGI_BEGIN_REQUEST record
 tcp-check send-binary   01 # version
 tcp-check send-binary   01 # type: FCGI_BEGIN_REQUEST
 tcp-check send-binary 0001 # request id
 tcp-check send-binary 0008 # content length
 tcp-check send-binary   00 # padding length
 tcp-check send-binary   00 # reserved
 tcp-check send-binary 0001 # role: FCGI_RESPONDER
 tcp-check send-binary 0000 # flags (1 byte) + start of reserved bytes
 tcp-check send-binary 0000 # reserved
 tcp-check send-binary 0000 # reserved
 # FCGI_PARAMS record
 tcp-check send-binary   01 # version
 tcp-check send-binary   04 # type: FCGI_PARAMS
 tcp-check send-binary 0001 # request id
 tcp-check send-binary 0045 # content length
 tcp-check send-binary   03 # padding length: 3 bytes, so (content + padding) % 8 == 0
 tcp-check send-binary   00 # reserved
 tcp-check send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET
 tcp-check send-binary 0b055343524950545f4e414d452f70696e67   # SCRIPT_NAME = /ping
 tcp-check send-binary 0f055343524950545f46494c454e414d452f70696e67 # SCRIPT_FILENAME = /ping
 tcp-check send-binary 040455534552524F4F54 # USER = ROOT
 tcp-check send-binary 000000 # padding
 # FCGI_PARAMS record (an empty record marks the end of the parameters)
 tcp-check send-binary   01 # version
 tcp-check send-binary   04 # type: FCGI_PARAMS
 tcp-check send-binary 0001 # request id
 tcp-check send-binary 0000 # content length
 tcp-check send-binary   00 # padding length
 tcp-check send-binary   00 # reserved

 tcp-check expect binary 706f6e67 # pong

Note that the whole send string could be written on a single line.
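For instance, the first record above collapses into one rule – the hex string is simply the concatenation of all its fields:

 tcp-check send-binary 01010001000800000001000000000000 # the whole FCGI_BEGIN_REQUEST record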

Any protocol???

If one day you have to build the same type of configuration for a protocol nobody knows, simply capture the network traffic of a “hello” sequence using tcpdump.
Then send the captured TCP payload using the tcp-check send-binary command and configure the appropriate expect rule.
And it will work!
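A capture session could look like this (the interface name and port are placeholders to adapt to your setup):

# print full packets with their payload in hex + ASCII
tcpdump -i eth0 -s0 -X 'tcp port 1234'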


Configuring HAProxy and Nginx for SPDY

Introduction to SPDY / HTTP-bis

SPDY is a protocol designed by Google which aims to fix the weaknesses of HTTP/1.1 and to adapt this 14-year-old protocol to today’s Internet devices and requirements.
Back in 1999, when HTTP/1.1 was designed, there were no mobile devices, web pages were composed of HTML with a few images, almost no JavaScript and no CSS, and ISPs delivered Internet access over very slow connections.
HTTP/2.0 has to address today’s and tomorrow’s needs, when many more devices browse the Internet over very different types of connections (from very slow ones with high packet loss to very fast ones) and more and more people want multimedia content and interactivity. Websites have become “web applications”.

SPDY is not HTTP/2.0! But HTTP/2.0 will use SPDY as its foundation.

Note that as long as HTTP/2.0 has not been officially released, ALL articles written about SPDY, NPN, ALPN, etc. may become outdated at some point.
Sadly, this is true for the present article, BUT I’ll try to keep it up to date 🙂
As an example, NPN is a TLS extension that was designed to allow a client and a server to negotiate the protocol used at the layer above (in our case, the application layer).
This NPN TLS extension is soon going to be superseded by an official RFC called “Transport Layer Security (TLS) Application Layer Protocol Negotiation Extension” (shortened to ALPN).

Also, we’re already at the third version of the SPDY protocol (3.1 to be accurate). It changes quite often.

HAProxy and SPDY

As Willy explained on HAProxy‘s mailing list, HAProxy won’t implement SPDY.
But as soon as HTTP/2.0 is officially released, development will focus on it.
The main driver for this is the problem mentioned in the introduction: the drafts are updated so often that tracking them would be a waste of time.

That said, HAProxy can still be used to:
* load-balance SPDY web servers
* detect SPDY protocol through NPN (or more recent ALPN) TLS extension

Diagram

The picture below shows how HAProxy can be used to detect and split HTTP and SPDY traffic over an HTTPs connection:
[Diagram: HAProxy splits incoming HTTPS traffic, routing SPDY-capable clients to a SPDY farm and the others to an HTTP farm]

Basically, HAProxy uses the NPN (and later the ALPN) TLS extension to figure out whether the client can browse the website using SPDY. If so, the connection is forwarded to the SPDY farm (here hosted on nginx); otherwise, the connection is forwarded to the HTTP server farm (also hosted on nginx).

Configuration

HAProxy configuration example for NPN and SPDY


Below is the HAProxy configuration used to detect and split HTTP and SPDY traffic to two different farms:

defaults
 mode tcp
 log global
 option tcplog
 timeout connect           4s
 timeout server          300s
 timeout client          300s

frontend ft_spdy
 bind 64.210.140.163:443 name https ssl crt STAR_haproxylab_net.pem npn spdy/2

# tcp log format + SSL information (TLS version, cipher in use, SNI, NPN)
 log-format %ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq {%sslv/%sslc/%[ssl_fc_sni]/%[ssl_fc_npn]}

# acls: npn
 acl npn_spdy           ssl_fc_npn -i spdy/2

# spdy redirection
 use_backend bk_spdy      if npn_spdy

 default_backend bk_http

backend bk_spdy
 option httpchk HEAD /healthcheck
 server nginx 127.0.0.1:8082 maxconn 100 check port 8081

backend bk_http
 option httpchk HEAD /healthcheck
 http-request set-header Spdy no
 server nginx 127.0.0.1:8081 maxconn 100 check

NGINX configuration example for SPDY


And the corresponding nginx configuration:
Note that I use a Lua script to check whether the “Spdy: no” HTTP header is present:
  * if present, the connection reached nginx through HAProxy’s HTTP backend
  * if not present, the connection was made over SPDY

   server {
      listen 127.0.0.1:8081;
      listen 127.0.0.1:8082 spdy;
      root /var/www/htdocs/spdy/;

        location = /healthcheck {
          access_log off;
          return 200;
        }

        location / {
             default_type 'text/plain';
             content_by_lua '
               local c = ngx.req.get_headers()["Spdy"];
               if c then
                 ngx.say("Currently browsing over HTTP")
               else
                 ngx.say("Currently browsing over spdy")
               end
                return
             ';
        }

   }

Testing


This setup is in production.
Simply browse https://spdy.haproxylab.net/ to give it a try.
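You can also check the negotiation from the command line with openssl’s s_client, which can advertise protocols via NPN (available in OpenSSL 1.0.1 and above); the protocol list here is just an example:

openssl s_client -connect spdy.haproxylab.net:443 -nextprotoneg spdy/2,http/1.1 < /dev/null | grep 'Next protocol'
# a line such as "Next protocol: (1) spdy/2" means you would be routed to the SPDY farm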

Related links

  • SPDY whitepaper: http://www.chromium.org/spdy/spdy-whitepaper
  • SPDY project page: http://www.chromium.org/spdy
  • IETF Transport Layer Security (TLS) Application Layer Protocol Negotiation Extension: http://tools.ietf.org/html/draft-friedl-tls-applayerprotoneg-02
  • IETF HTTP/1.1 RFC: http://www.ietf.org/rfc/rfc2616.txt


high performance WAF platform with Naxsi and HAProxy

Synopsis

I’ve already described WAFs in a previous article, where I spoke about WAF scalability with Apache and modsecurity.
One of the main issues with Apache and modsecurity is performance. To address this issue, an alternative exists: Naxsi, a Web Application Firewall module for nginx.

So using Naxsi and HAProxy as a load-balancer, we’re able to build a platform which meets the following requirements:

  • Web Application Firewall: achieved by nginx and Naxsi
  • High-availability: application server and WAF monitoring, achieved by HAProxy
  • Scalability: ability to adapt capacity to the upcoming volume of traffic, achieved by HAProxy
  • DDoS protection: protection against blind and brutal attacks, slowloris protection; achieved by HAProxy
  • Content-Switching: ability to route only dynamic requests to the WAF; achieved by HAProxy
  • Reliability: ability to detect capacity overuse; achieved by HAProxy
  • Performance: deliver response as fast as possible, achieved by the whole platform

The LAB platform is composed of 6 boxes:

  • 2 ALOHA Load-Balancers (could be replaced by HAProxy 1.5-dev)
  • 2 WAF servers: CentOS 6.0, nginx and Naxsi
  • 2 Web servers: Debian + apache + PHP + dokuwiki

Nginx and Naxsi installation on CentOS 6

The purpose of this article is not to detail such a procedure, so please read this wiki article, which summarizes how to install nginx and Naxsi on CentOS 6.0.

Diagram

The diagram below shows the platform with HAProxy frontends (prefixed with ft_) and backends (prefixed with bk_). Each farm is composed of 2 servers.

Configuration

Nginx and Naxsi


Configure nginx as a reverse-proxy: it listens as a server of the bk_waf farm and forwards traffic to ft_web. In the meantime, Naxsi analyzes the requests.

server {
 proxy_set_header Proxy-Connection "";
 listen       192.168.10.15:81;
 access_log  /var/log/nginx/naxsi_access.log;
 error_log  /var/log/nginx/naxsi_error.log debug;

 location / {
  include    /etc/nginx/test.rules;
  proxy_pass http://192.168.10.2:81/;
 }

 error_page 403 /403.html;
 location = /403.html {
  root /opt/nginx/html;
  internal;
 }

 location /RequestDenied {
  return 403;
 }
}
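A quick way to confirm the rules are firing is to send an obviously malicious request straight to the WAF (the address and port match the listen directive above; which rule actually triggers depends on the ruleset you loaded):

# a blatant SQL injection attempt: naxsi should deny it with a 403
curl -s -o /dev/null -w '%{http_code}\n' 'http://192.168.10.15:81/?q=1+union+select+1'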

HAProxy Load-Balancer configuration


The configuration below enables the following advanced features:

  • DDoS protection on the frontend
  • abuser or attacker detection in bk_waf and blocking on the public interface (ft_waf)
  • bypassing the WAF farm when it is saturated or unavailable

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 10000

  # DDOS protection
  # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
  # Monitor the number of requests sent by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s),http_err_rate(10s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { sc1_get_gpc0 gt 0 }
  # An abuser is a source IP sending more than 100 requests per 10s
  acl abuse sc1_http_req_rate(ft_waf) ge 100
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  acl static path_beg /static/ /dokuwiki/images/
  acl no_waf nbsrv(bk_waf) eq 0
  acl waf_max_capacity queue(bk_waf) ge 1
  # bypass WAF farm if no WAF available
  use_backend bk_web if no_waf
  # bypass WAF farm if it reaches its capacity
  use_backend bk_web if static waf_max_capacity
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0

  # If the source IP generated 10 or more http errors over the defined period,
  # flag the IP as an abuser on the frontend
  acl abuse sc1_http_err_rate(ft_waf) ge 10
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  # get connected on the application server using the user ip
  # provided in the X-Client-IP header setup by ft_waf frontend
  source 0.0.0.0 usesrc hdr_ip(X-Client-IP)
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check

Detecting attacks


On the load-balancer


The ft_waf frontend stick table tracks two pieces of information: http_req_rate and http_err_rate, which are respectively the HTTP request rate and the HTTP error rate generated by a single IP address.
HAProxy automatically blocks an IP which has generated more than 100 requests over a period of 10s, or 10 errors (the WAF’s 403 responses included) in 10s. The user is blocked for 1 minute, and remains blocked as long as he keeps on abusing.
Of course, you can set the values above to whatever you need: it is fully flexible.
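If you ever need to unblock an IP before its entry expires, the stats socket also accepts a clear table command; for example (the IP shown is just an example):

echo "clear table ft_waf key 192.168.10.254" | socat /var/run/haproxy.stat -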

To know the status of IPs in your load-balancer, just run the command below:

echo show table ft_waf | socat /var/run/haproxy.stat - 
# table: ft_waf, type: ip, size:1048576, used:1
0xc33304: key=192.168.10.254 use=0 exp=4555 gpc0=0 http_req_rate(10000)=1 http_err_rate(10000)=1

Note: the ALOHA Load-balancer does not provide the watch tool, but you can monitor the content of the table live with the command below:

while true ; do echo show table ft_waf | socat /var/run/haproxy.stat - ; sleep 2 ; clear ; done

On the WAF


Every Naxsi error log entry appears in /var/log/nginx/naxsi_error.log, e.g.:

2012/10/16 13:40:13 [error] 10556#0: *10293 NAXSI_FMT: ip=192.168.10.254&server=192.168.10.15&uri=/testphp.vulnweb.com/artists.php&total_processed=3195&total_blocked=2&zone0=ARGS&id0=1000&var_name0=artist, client: 192.168.10.254, server: , request: "GET /testphp.vulnweb.com/artists.php?artist=0+div+1+union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A1%2C2%2Ccurrent_user HTTP/1.1", host: "192.168.10.15:81"

A Naxsi log line is less obvious to read than a modsecurity one. The rule which matched is provided by the idX=... arguments.
There were no false positives during the test; I had to craft a request on purpose to make Naxsi match it 🙂 .
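To help decode it, here is the NAXSI_FMT part of the log line above, broken down field by field:

ip=192.168.10.254        # client address
server=192.168.10.15     # nginx server that handled the request
uri=/testphp.vulnweb.com/artists.php
total_processed=3195     # requests analyzed by naxsi so far
total_blocked=2          # requests blocked so far
zone0=ARGS               # zone where the match occurred (the query string)
id0=1000                 # id of the naxsi rule that matched
var_name0=artist         # name of the offending variable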

Conclusion


Today, we saw it’s easy to build a scalable, well-performing WAF platform in front of any web application.
The WAF is able to tell HAProxy which IPs to automatically blacklist (through error rate monitoring), which is convenient since the attacker won’t bother the WAF for a certain amount of time 😉
The platform can detect WAF farm availability and bypass it in case of total failure, and we even saw it is possible to bypass the WAF for static content if the farm is running out of capacity. The purpose is to deliver a good end-user experience without weakening security too much.
Note that it is possible to route all the static content to the web servers (or a static farm) directly, whatever the status of the WAF farm.
This makes the platform fully scalable and flexible.
Thanks to HAProxy, the architecture is very flexible: I could switch my Apache + modsecurity setup to nginx + Naxsi with no issues at all 🙂 The same could be done with any third-party WAF appliance.
Note that I did not try any of Naxsi’s advanced features, such as the learning mode or the UI.
