Tag Archives: apache

How to write Apache ProxyPass* rules in HAProxy

Apache mod_proxy

The Apache web server is a widely deployed modular web server.
One of its modules is mod_proxy. It turns the web server into a proxy / reverse proxy server with load-balancing capabilities.

At HAProxy Technologies, we only use HAProxy :). Heh, what else ???
During some deployments, customers ask us to migrate their Apache mod_proxy configuration to HAProxy.
This article explains how to translate the ProxyPass-related rules.

ProxyPass, ProxyPassReverse, etc…

Apache mod_proxy defines a few directives which let it forward traffic to a remote server.
They are listed below with a short description.

Note: I know the directives introduced in this article can be used in much more complicated ways.

ProxyPass

ProxyPass maps a local server URL to a remote server URL.
It applies to traffic going from the client to the server.

For example:

ProxyPass /mirror/foo/ http://backend.example.com/

This causes the external URL http://example.com/mirror/foo/bar to be translated and forwarded to the remote server as http://backend.example.com/bar.

This directive makes Apache update the URL and headers so that the external traffic matches the internal naming.

ProxyPassReverse

ProxyPassReverse adjusts the URL in HTTP response headers sent by a reverse-proxied server. It only updates the Location, Content-Location and URI headers.
It applies to traffic going from the server to the client.

For example:

ProxyPassReverse /mirror/foo/ http://backend.example.com/

This directive makes Apache rewrite the internal URLs found in server responses so that they match the external URLs.

ProxyPassReverseCookieDomain

ProxyPassReverseCookieDomain adjusts the Domain attribute of the Set-Cookie header sent by the server so that it matches the external domain name.

Its usage is pretty simple. For example:

ProxyPassReverseCookieDomain internal-domain public-domain

ProxyPassReverseCookiePath

ProxyPassReverseCookiePath adjusts the Path attribute of the Set-Cookie header sent by the server so that it matches the external path.

Its usage is pretty simple. For example:

ProxyPassReverseCookiePath internal-path public-path

Configure ProxyPass and ProxyPassReverse in HAProxy

Below is an example HAProxy configuration that works the same way as the Apache ProxyPass and ProxyPassReverse setup above. The rules should be added in the backend section, while the frontend ensures that only traffic matching this external URL is routed to that backend.

frontend ft_global
 acl host_dom.com    req.hdr(Host) dom.com
 acl path_mirror_foo path -m beg   /mirror/foo/
 use_backend bk_myapp if host_dom.com path_mirror_foo

backend bk_myapp
[...]
# external URL                  => internal URL
# http://dom.com/mirror/foo/bar => http://bk.dom.com/bar

 # ProxyPass /mirror/foo/ http://bk.dom.com/
 http-request set-header Host bk.dom.com
 reqirep  ^([^ :]*)\ /mirror/foo/(.*)     \1\ /\2

 # ProxyPassReverse /mirror/foo/ http://bk.dom.com/
 # Note: both absolute (http://bk.dom.com/...) and relative Location values are handled
 acl hdr_location res.hdr(Location) -m found
 rspirep ^Location:\ (https?://bk.dom.com(:[0-9]+)?)?(/.*) Location:\ /mirror/foo\3 if hdr_location

 # ProxyPassReverseCookieDomain bk.dom.com dom.com
 acl hdr_set_cookie_dom res.hdr(Set-Cookie) -m sub Domain=bk.dom.com
 rspirep ^(Set-Cookie:.*)\ Domain=bk.dom.com(.*) \1\ Domain=dom.com\2 if hdr_set_cookie_dom

 # ProxyPassReverseCookiePath / /mirror/foo/
 acl hdr_set_cookie_path res.hdr(Set-Cookie) -m sub Path=
 rspirep ^(Set-Cookie:.*)\ Path=(.*) \1\ Path=/mirror/foo\2 if hdr_set_cookie_path

Notes:
  * http to https redirect rules should be handled by HAProxy itself and not by the application server (to avoid redirect loops); a sketch is shown below
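
For reference, such a redirect can be declared directly in the frontend. Here is a minimal sketch, assuming HAProxy 1.5 terminates the SSL itself (the redirect scheme keyword needs a fairly recent 1.5 version):

frontend ft_global
 # send any clear-text request to its HTTPS equivalent
 redirect scheme https code 301 if !{ ssl_fc }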


SSL Client certificate information in HTTP headers and logs

HAProxy and SSL

HAProxy has many nice features where SSL is concerned, even though SSL support was only introduced recently.

One of these features is client-side certificate management, which has already been discussed on this blog.
One thing was missing from that article, since HAProxy did not have the feature when I first wrote it: the ability to insert client certificate information into HTTP headers and to report it in the log line as well.

Fortunately, the developers at HAProxy Technologies keep improving HAProxy and this feature is now available (well, it has been for some time, but I did not have time to write the article until now).

OpenSSL commands to generate SSL certificates

Well, just take the script from the HAProxy Technologies GitHub repository, follow the instructions and you'll have an environment set up in a few seconds.
Here is the script: https://github.com/exceliance/haproxy/tree/master/blog/ssl_client_certificate_management_at_application_level

Configuration

The configuration below shows a frontend and a backend with SSL offloading and with insertion of client certificate information into HTTP headers. As you can see, this is pretty straightforward.

frontend ft_www
 bind 127.0.0.1:8080 name http
 bind 127.0.0.1:8081 name https ssl crt ./server.pem ca-file ./ca.crt verify required
 log-format %ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]} %{+Q}r
 http-request set-header X-SSL                  %[ssl_fc]
 http-request set-header X-SSL-Client-Verify    %[ssl_c_verify]
 http-request set-header X-SSL-Client-DN        %{+Q}[ssl_c_s_dn]
 http-request set-header X-SSL-Client-CN        %{+Q}[ssl_c_s_dn(cn)]
 http-request set-header X-SSL-Issuer           %{+Q}[ssl_c_i_dn]
 http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore]
 http-request set-header X-SSL-Client-NotAfter  %{+Q}[ssl_c_notafter]
 default_backend bk_www

backend bk_www
 cookie SRVID insert nocache
 server server1 127.0.0.1:8088 maxconn 1

To observe the result, I just faked a server using netcat and looked at the headers sent by HAProxy.
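
A throwaway listener can be started on the backend port like this (a quick sketch using the traditional netcat syntax; BSD netcat would use "nc -l 8088"):

nc -l -p 8088

The headers received on the fake server were: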

X-SSL: 1
X-SSL-Client-Verify: 0
X-SSL-Client-DN: "/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=client1/emailAddress=ba@haproxy.com"
X-SSL-Client-CN: "client1"
X-SSL-Issuer: "/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=haproxy.com/emailAddress=ba@haproxy.com"
X-SSL-Client-NotBefore: "130613144555Z"
X-SSL-Client-NotAfter: "140613144555Z"

And the associated log line which has been generated:

Jun 13 18:09:49 localhost haproxy[32385]: 127.0.0.1:38849 [13/Jun/2013:18:09:45.277] ft_www~ bk_www/server1 
1643/0/1/-1/4645 504 194 - - sHNN 0/0/0/0/0 0/0 
{0,"/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=client1/emailAddress=ba@haproxy.com",
"/C=FR/ST=Ile de France/L=Jouy en Josas/O=haproxy.com/CN=haproxy.com/emailAddress=ba@haproxy.com"} "GET /" 

NOTE: I have inserted a few CRLF to make it easily readable.

Now, my HAProxy can deliver the following information to my web server:
  * ssl_fc: did the client use a secured connection (1) or not (0)
  * ssl_c_verify: the status code of the TLS/SSL client connection
  * ssl_c_s_dn: returns the full Distinguished Name of the certificate presented by the client
  * ssl_c_s_dn(cn): same as above, but extracts only the Common Name
  * ssl_c_i_dn: full distinguished name of the issuer of the certificate presented by the client
  * ssl_c_notbefore: start date presented by the client as a formatted string YYMMDDhhmmss
  * ssl_c_notafter: end date presented by the client as a formatted string YYMMDDhhmmss
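
These fetches can also be used directly in ACLs on the frontend, for example to accept only one specific certificate CN. Here is a minimal sketch for ft_www, reusing the "client1" CN from the test certificate above; adapt it to your own PKI:

 # only allow clients whose certificate CN is "client1"
 acl allowed_cn ssl_c_s_dn(cn) -m str client1
 http-request deny unless allowed_cn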


Apache cdorked backdoor detection

Apache Cdorked.A backdoor

This is a pretty recent attack, which uses cPanel to replace the Apache httpd binary with a compromised one embedding a backdoor.

A few articles with more details are available here:
  * http://www.welivesecurity.com/2013/04/26/linuxcdorked-new-apache-backdoor-in-the-wild-serves-blackhole/
  * http://blog.sucuri.net/2013/04/apache-binary-backdoors-on-cpanel-based-servers.html

It seems there are a few ways to detect whether your server has been compromised:
  1. requests with “GET_BACK;” encoded in the query string may arrive
  2. an unexpected Cookie (SECID in that case) may be sent to the server
  3. the server may answer some unexpected Location headers

Configuration

The HAProxy configuration below provides a few hints on how you can detect whether you have been infected by the backdoor and how you can try to protect the users of your services.

I assume the website hostname is “www.domain.tld” and that static content is delivered by “static.domain.tld”.

The configuration below can be added in the Frontend section:

# We want to capture and log the cookies sent by the client
 capture request header Cookie len 128
# We want to capture and log the Location header sent by the server
 capture response header Location len 128

# block any request with GET_BACK; string encoded
 http-request deny if { url_sub 4745545f4241434b3b } 
# block any request with a weird cookie
 http-request deny if { cook_cnt(SECID) ge 1 }

# block a response whose Location header points to an unknown domain
# (the negative lookahead requires an HAProxy build with PCRE support)
 rspdeny ^Location:\ https?://(?!(www|static)\.domain\.tld)

Note that with such a backdoor, you may first have to monitor your logs (detection phase) to know whether you have been affected. Then you can update your configuration to block the attack (protection phase) and, of course, you should replace the compromised Apache binary.
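
During the detection phase, the captured headers make the indicators easy to spot in the HAProxy logs. A quick sketch, assuming the logs end up in /var/log/haproxy.log (adjust to your syslog setup):

# look for the hex-encoded "GET_BACK;" string or the SECID cookie
grep -E '4745545f4241434b3b|SECID=' /var/log/haproxy.log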


HAProxy and gzip compression

Synopsis

Compression is a technique used to reduce object size in order to reduce delivery time for objects over the HTTP protocol.
Until now, HAProxy did not include such a feature. But the guys at HAProxy Technologies worked hard on it (mainly David Du Colombier and @wlallemand).
HAProxy can now be considered a new option to compress HTTP streams, alongside nginx, Apache or IIS which already do it.

Note that this is in early beta, so use it with care.

Compilation


Get the latest HAProxy git version by running a “git pull” in your HAProxy git directory.
If you don’t already have such a directory, then run:

git clone http://git.1wt.eu/git/haproxy.git

Once your HAProxy sources are updated, then you can compile HAProxy:

make TARGET=linux26 USE_ZLIB=yes
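
You may also want to confirm that the resulting binary was built with zlib support; recent versions report it in their build options (the exact wording varies between versions):

./haproxy -vv | grep -i zlib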

Configuration

This is a very simple test configuration:

listen ft_web
 option http-server-close
 mode http
 bind 127.0.0.1:8090 name http
 default_backend bk_web

backend bk_web
 option http-server-close
 mode http
 compression algo gzip
 compression type text/html text/plain text/css
 server localhost 127.0.0.1:80

Compression test

On my localhost, I have an Apache with compression disabled and a style.css object whose size is 16302 bytes.

Download without compression requested

curl -o/dev/null -D - "http://127.0.0.1:8090/style.css" 
HTTP/1.1 200 OK
Date: Fri, 26 Oct 2012 08:55:42 GMT
Server: Apache/2.2.16 (Debian)
Last-Modified: Sun, 11 Mar 2012 17:01:39 GMT
ETag: "a35d6-3fae-4bafa944542c0"
Accept-Ranges: bytes
Content-Length: 16302
Content-Type: text/css

100 16302  100 16302    0     0  5722k      0 --:--:-- --:--:-- --:--:-- 7959k

Download with compression requested

 curl -o/dev/null -D - "http://127.0.0.1:8090/style.css" -H "Accept-Encoding: gzip"
HTTP/1.1 200 OK
Date: Fri, 26 Oct 2012 08:56:28 GMT
Server: Apache/2.2.16 (Debian)
Last-Modified: Sun, 11 Mar 2012 17:01:39 GMT
ETag: "a35d6-3fae-4bafa944542c0"
Accept-Ranges: bytes
Content-Type: text/css
Transfer-Encoding: chunked
Content-Encoding: gzip

100  4036    0  4036    0     0  1169k      0 --:--:-- --:--:-- --:--:-- 1970k

In this example, the object size went from 16302 bytes to 4036 bytes, roughly a 75% reduction.

Have fun !!!!


Scalable WAF protection with HAProxy and Apache with modsecurity

Greetings to Thomas Heil, from our German partner Olanis, for his assistance with Apache and modsecurity configuration.

What is a Web Application Firewall (WAF)?

Years ago, it was common to protect networks using a firewall… well known devices which filter traffic at layers 3 and 4…
Since then, the flaws have moved from the network stack to the application layer, making the old firewall useless and obsolete (for protection purposes, I mean). We then used to deploy IDS or IPS, which try to match attacks at the packet level. These products are usually very hard to tune.
Then the Web Application Firewall arrived: a firewall aware of the layer 7 protocol, in order to be more efficient when deciding to block requests (or responses).
This became necessary because attacks got more complicated, like SQL injection, cross-site scripting, etc…

One of the most famous open source WAFs is mod_security, which has long been available as a module for the Apache web server and IIS, and which has recently been announced for nginx too.
A very good alternative is naxsi, a module for nginx, still young but very promising.

In today’s article, I’ll focus on modsecurity for Apache. In a future article, I’ll build the same platform with naxsi and nginx.

Scalable WAF platform


The main problem with WAFs is that they require a lot of resources to analyse each request’s headers and body (they can even be configured to analyse the response). If you want to be able to protect all your incoming traffic, then you must think about scalability.
In the present article I’m going to explain how to build a reliable and scalable platform where WAF capacity won’t be an issue. I could add “and where WAF maintenance can be done during business hours”.
Here are the basic goals to achieve:

  • Web Application Firewall: achieved by Apache and modsecurity
  • High-availability: application server and WAF monitoring, achieved by HAProxy
  • Scalability: ability to adapt capacity to the upcoming volume of traffic, achieved by HAProxy

It would be good if the platform also achieved the following advanced features:

  • DDOS protection: blind and brutal attacks protection, slowloris protection, achieved by HAProxy
  • Content-Switching: ability to route only dynamic requests to the WAF, achieved by HAProxy
  • Reliability: ability to detect capacity overusage, this is achieved by HAProxy
  • Performance: deliver response as fast as possible, achieved by the whole platform

Web platform with WAF Diagram

The diagram below shows the platform with HAProxy frontends (prefixed by ft_) and backends (prefixed by bk_). Each farm is composed of 2 servers.

As you can see, all the traffic goes to the WAFs first, then comes back through HAProxy before being routed to the web servers. This is the basic configuration, meeting the following basic requirements: Web Application Firewall, High-Availability, Scalability.

Platform installation


As the load-balancer, I’m going to use our well known ALOHA 🙂
The web servers are standard Debian boxes with Apache and PHP; the application running on top of them is dokuwiki. I have no procedure for this one, it is very straightforward!
The WAFs run on CentOS 6.3 x86_64, using modsecurity 2.5.8. The installation procedure is outside the scope of this article, so I documented it on my personal wiki.
All of these servers are virtualized on my laptop using KVM, so NO, I won’t run performance benchmarks, it would be ridiculous!

Configuration

WAF configuration


Basic configuration here, no tuning at all. The purpose is not to explain how to configure a WAF, sorry.

Apache Configuration


Modification to the file /etc/httpd/conf/httpd.conf:

Listen 192.168.10.15:81
[...]
LoadModule security2_module modules/mod_security2.so
LoadModule unique_id_module modules/mod_unique_id.so
[...]
NameVirtualHost 192.168.10.15:81
[...]
<IfModule mod_security2.c>
        SecPcreMatchLimit 1000000
        SecPcreMatchLimitRecursion 1000000
        SecDataDir logs/
</IfModule>
<VirtualHost 192.168.10.15:81>
        ServerName *
        AddDefaultCharset UTF-8

        <IfModule mod_security2.c>
                Include modsecurity.d/modsecurity_crs_10_setup.conf
                Include modsecurity.d/aloha.conf
                Include modsecurity.d/rules/*.conf

                SecRuleEngine On
                SecRequestBodyAccess On
                SecResponseBodyAccess On
        </IfModule>

        ProxyPreserveHost On
        ProxyRequests off
        ProxyVia Off
        ProxyPass / http://192.168.10.2:81/
        ProxyPassReverse / http://192.168.10.2:81/
</VirtualHost>

Basically, we just turned Apache into a reverse proxy which accepts traffic for any server name and applies the modsecurity rules before routing traffic back to the HAProxy frontend dedicated to the web servers.

Client IP


HAProxy works as a reverse proxy and so uses its own IP address to connect to the WAF servers. You therefore have to install mod_rpaf so that the WAF sees the real client IP for both tracking and logging.
To install mod_rpaf, follow these instructions: apache mod_rpaf installation.
Concerning its configuration, we’ll do it as below; edit the file /etc/httpd/conf.d/mod_rpaf.conf:

LoadModule rpaf_module modules/mod_rpaf-2.0.so

<IfModule rpaf_module>
        RPAFenable On
        RPAFproxy_ips 192.168.10.1 192.168.10.3
        RPAFheader X-Client-IP
</IfModule>

modsecurity custom rules

In the Apache configuration, there is a directive which tells modsecurity to load a file called aloha.conf. The purpose of this file is to tell modsecurity to deny the health check requests coming from HAProxy without logging them.
HAProxy will consider the WAF operational only if it gets a 403 response to this request (see the HAProxy configuration below).
Content of the file /etc/httpd/modsecurity.d/aloha.conf:

SecRule REQUEST_FILENAME "/waf_health_check" "nolog,deny"

Load-Balancer (HAProxy) configuration for basic usage


The configuration below is the first shot we do when deploying such a platform; it is basic, simple and straightforward:

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0
  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check

Advanced Load-Balancing (HAProxy) configuration


We’re going now to improve a bit the platform. The picture below shows which type of protection is achieved by the load-balancer and the WAF:

The configuration below adds a few more features:

  • DDOS protection on the frontend
  • abuser or attacker detection in bk_waf and blocking on the public interface (ft_waf)
  • Bypassing WAF when overusage or unavailable

Which will allow to meet the advanced requirements: DDOS protection, Content-Switching, Reliability, Performance.

######## Default values for all entries till next defaults section
defaults
  option  http-server-close
  option  dontlognull
  option  redispatch
  option  contstats
  retries 3
  timeout connect 5s
  timeout http-keep-alive 1s
  # Slowloris protection
  timeout http-request 15s
  timeout queue 30s
  timeout tarpit 1m          # tarpit hold time
  backlog 10000

# public frontend where users get connected to
frontend ft_waf
  bind 192.168.10.2:80 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 10000

  # DDOS protection
  # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
  # Monitor the number of requests sent by an IP over a period of 10 seconds
  stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s),http_err_rate(10s)
  tcp-request connection track-sc1 src
  tcp-request connection reject if { sc1_get_gpc0 gt 0 }
  # Abuser means more than 100 reqs/10s
  acl abuse sc1_http_req_rate(ft_waf) ge 100
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  acl static path_beg /static/ /dokuwiki/images/
  acl no_waf nbsrv(bk_waf) eq 0
  acl waf_max_capacity queue(bk_waf) ge 1
  # bypass WAF farm if no WAF available
  use_backend bk_web if no_waf
  # bypass WAF farm if it reaches its capacity
  use_backend bk_web if static waf_max_capacity
  default_backend bk_waf

# WAF farm where users' traffic is routed first
backend bk_waf
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor header X-Client-IP
  option httpchk HEAD /waf_health_check HTTP/1.0

  # If the source IP generated 10 or more http errors over the defined period,
  # flag the IP as an abuser on the frontend
  acl abuse sc1_http_err_rate(ft_waf) ge 10
  acl flag_abuser sc1_inc_gpc0(ft_waf)
  tcp-request content reject if abuse flag_abuser

  # Specific WAF checking: a DENY means everything is OK
  http-check expect status 403
  timeout server 25s
  default-server inter 3s rise 2 fall 3
  server waf1 192.168.10.15:81 maxconn 100 weight 10 check
  server waf2 192.168.10.16:81 maxconn 100 weight 10 check

# Traffic secured by the WAF arrives here
frontend ft_web
  bind 192.168.10.2:81 name http
  mode http
  log global
  option httplog
  timeout client 25s
  maxconn 1000
  # route health check requests to a specific backend to avoid graph pollution in ALOHA GUI
  use_backend bk_waf_health_check if { path /waf_health_check }
  default_backend bk_web

# application server farm
backend bk_web
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk HEAD /
  # get connected on the application server using the user ip
  # provided in the X-Client-IP header setup by ft_waf frontend
  source 0.0.0.0 usesrc hdr_ip(X-Client-IP)
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 cookie server1 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 cookie server2 check

# backend dedicated to WAF checking (to avoid graph pollution)
backend bk_waf_health_check
  balance roundrobin
  mode http
  log global
  option httplog
  option forwardfor
  default-server inter 3s rise 2 fall 3
  timeout server 25s
  server server1 192.168.10.11:80 maxconn 100 weight 10 check
  server server2 192.168.10.12:80 maxconn 100 weight 10 check

Detecting attacks


On the load-balancer


The ft_waf frontend stick table tracks two pieces of information: http_req_rate and http_err_rate, which are respectively the HTTP request rate and the HTTP error rate generated by a single IP address.
HAProxy automatically blocks an IP which has generated 100 requests or more over a period of 10s, or 10 errors (the WAF’s 403 detection responses included) in 10s. The user stays blocked for 1 minute, as long as he keeps on abusing.
Of course, you can set the above values to whatever you need: it is fully flexible.

To know the status of IPs in your load-balancer, just run the command below:

echo show table ft_waf | socat /var/run/haproxy.stat - 
# table: ft_waf, type: ip, size:1048576, used:1
0xc33304: key=192.168.10.254 use=0 exp=4555 gpc0=0 http_req_rate(10000)=1 http_err_rate(10000)=1
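
If you need to unblock an IP before its entry expires, recent versions let you remove it from the table through the same admin socket. A sketch, reusing the socket path from above:

echo clear table ft_waf key 192.168.10.254 | socat /var/run/haproxy.stat -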

Note: The ALOHA Load-Balancer does not provide watch, but you can monitor the content of the table live with the command below:

while true ; do echo show table ft_waf | socat /var/run/haproxy.stat - ; sleep 2 ; clear ; done

On the Waf


I have not set up anything particular for WAF logging, so every error appears in /var/log/httpd/error_log. For example:

[Fri Oct 12 10:48:21 2012] [error] [client 192.168.10.254] ModSecurity: Access denied with code 403 (phase 2). Pattern match "(?:(?:[\\;\\|\\`]\\W*?\\bcc|\\b(wget|curl))\\b|\\/cc(?:[\\'\"\\|\\;\\`\\-\\s]|$))" at REQUEST_FILENAME. [file "/etc/httpd/modsecurity.d/rules/modsecurity_crs_40_generic_attacks.conf"] [line "25"] [id "950907"] [rev "2.2.5"] [msg "System Command Injection"] [data "/cc-"] [severity "CRITICAL"] [tag "WEB_ATTACK/COMMAND_INJECTION"] [tag "WASCTC/WASC-31"] [tag "OWASP_TOP_10/A1"] [tag "PCI/6.5.2"] [hostname "mywiki"] [uri "/dokuwiki/lib/images/license/button/cc-by-sa.png"] [unique_id "UHfZVcCoCg8AAApVAzsAAAAA"]

Seems to be a false positive 🙂
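
When a rule really is a false positive for your application, modsecurity lets you disable it by id. A sketch using the id reported in the log line above; note that SecRuleRemoveById must appear in a file included after the CRS rules it disables:

SecRuleRemoveById 950907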

Conclusion


Today, we saw that it’s easy to build a scalable and well performing WAF platform in front of our web application.
The WAF is able to tell HAProxy which IPs to automatically blacklist (through error rate monitoring), which is convenient since the attacker won’t bother the WAF for a certain amount of time 😉
The platform can detect WAF farm availability and bypass it in case of total failure; we even saw that it is possible to bypass the WAF for static content when the farm runs out of capacity. The purpose is to deliver a good end-user experience without lowering security too much.
Note that it is possible to route all the static content directly to the web servers (or to a static farm), whatever the status of the WAF farm.
This makes me say that the platform is fully scalable and flexible.
Also, remember to monitor your WAF logs: as shown in the example above, a false positive prevented an image from being loaded from dokuwiki.


Protect Apache against “apache killer” script

What is Apache killer?

Apache killer is a script which exploits an Apache vulnerability in byte-range handling (CVE-2011-3192).
Basically, it makes Apache exhaust its memory and CPU, which makes the web server unstable.

Who is concerned?

Anybody running a website on Apache.
See the Apache announcement.

How can Aloha Load-Balancer help you?

First, let’s have a look at the diagram below:

The Aloha can clean up the Range headers, limit the connection rate from malicious clients and even emulate the success of the attack.

Protect against Range header

Basically, the attack consists of sending a lot of Range headers to the web server.
So, if a “client” sends more than 10 Range headers, we can consider this an attack and clean them up.
Just add the two lines below to your Layer 7 (HAProxy) backend configuration to protect your Apache web servers:

backend bk_http
[...]
  # Detect an ApacheKiller-like Attack
  acl weirdrangehdr hdr_cnt(Range) gt 10
  # Clean up the request
  reqidel ^Range if weirdrangehdr
[...]

Protect against service abuser

Since this kind of attack is combined with a DoS, you can blacklist the bad guys with the configuration below.
It limits each source IP to 10 connections over a 5-second period, then holds the connection for 10s before answering with a 503 HTTP response.

You should adjust the values below to your website traffic.

frontend ft_http
[...]
  option http-server-close

  # Setup stick table
  stick-table type ip size 1k expire 30s store gpc0
  # Configure the DoS src
  acl MARKED src_get_gpc0(ft_http) gt 0
  # tarpit attackers if src_DoS
  use_backend bk_tarpit if MARKED
  # If not blocked, track the connection
  tcp-request connection track-sc1 src if ! MARKED

  default_backend bk_http
[...]

backend bk_http
[...]
  # Table to track connection rate
  stick-table type ip size 1k expire 30s store conn_rate(5s)
  # Track request
  tcp-request content track-sc2 src
  # Mark as abuser if more than 10 connection
  acl ABUSER sc2_conn_rate gt 10
  acl MARKED_AS_ABUSER sc1_inc_gpc0
  # Block connections considered as abusers
  tcp-request content reject if ABUSER MARKED_AS_ABUSER
[...]

# Slow down attackers
backend bk_tarpit
  mode http
  # hold the connection for 10s before answering
  timeout tarpit 10s
  # Emulate a 503 error
  errorfile 500 /etc/errors/500_tarpit.txt
  # slowdown any request coming up to here
  reqitarpit .

Open a shell on your Aloha Load-Balancer, then:

  • create the directory /etc/errors/
  • create the file 500_tarpit.txt with the content below.

500_tarpit.txt:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Content-Length: 310

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">;
<html xmlns="http://www.w3.org/1999/xhtml">;
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Error</title>
</head>
<body><h1>Something went wrong</h1></body>
</html>

Don’t forget to save your configuration with the command

config save


Implement HTTP keepalive without killing your apache server

Synopsis

This howto explains how you can use your Aloha Load-Balancer to implement HTTP keepalive towards clients while saving resources on your servers at the same time.

What is HTTP KeepAlive?

In early versions of the HTTP protocol, clients used to send each request over a new TCP connection to the server, get the content through this connection and finally close it.
This method works well when web pages contain only a few objects.
More objects means more time wasted setting up and closing each TCP connection.

HTTP 1.0 introduced the Connection: Keep-Alive header.
Clients and servers sent each other this header in order to tell the other side to keep the connection open.
By default, there was no keepalive.

HTTP 1.1 considers every connection to be kept alive by default.
If one side doesn’t want the connection to stay open, the client or the server has to send the header Connection: close.

By using HTTP keepalive, you reduce web page load time.
On the other hand, too many TCP connections kept open on an Apache server can make it consume too much memory and CPU.
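
This is also why, in the setup below, keepalive can simply stay disabled on the Apache side while HAProxy maintains it with the clients. A sketch of the corresponding stock httpd.conf setting (nothing specific to this article):

# httpd.conf: no server-side keepalive needed,
# HAProxy maintains keepalive with the clients instead
KeepAlive Off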

Using HAProxy to implement HTTP Keepalive

You can use HAProxy to add HTTP keepalive on the client side while not doing it on the server side.
The purpose is to deliver the objects quickly to the client, without the overhead of the TCP handshake latency, while releasing memory and CPU on the server side by closing the TCP connections.
Note: since HAProxy and the server are on the same LAN, the latency overhead on that side is negligible.

You can enable HTTP keepalive on the client side by enabling option http-server-close:

frontend public
	bind :80
	option http-server-close
	default_backend apache

backend apache
	option http-server-close
	server srv 127.0.0.1:8080
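
To check that the client side really is kept alive, curl can fetch the same URL twice in one run; in verbose mode it reports when it re-uses the existing connection. A quick sketch, assuming 192.168.0.1 is one of the load-balancer’s addresses (replace it with your own):

curl -v -o /dev/null -o /dev/null http://192.168.0.1/ http://192.168.0.1/
# look for "Re-using existing connection" in the verbose output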

PS: this of course also works with other HTTP server software like IIS, Tomcat, etc.
