How to scale out your server?

Is caching the right solution for you?

Written by Dobrica Pavlinusic dpavlin(at)rot13(dot)org on 2003-11-08
Updated on 2004-02-04 to include additional parts about real IP addresses and DMZ

In today's world, there are various reasons to scale out your server. The obvious one is the Slashdot effect.

However, our goal for scaling out was different: we needed a faster connection from our servers www.plivazdravlje.hr and www.plivamed.net to our users. The logical solution was to co-locate the server at one of the largest ISPs in Croatia, HTnet. However, that decision didn't come without its price: our sites are database driven (using PostgreSQL), with back-ends that can easily generate more than 300 KB of HTML for editors. That is not a problem while the back-ends are located on our LAN, but with co-location in sight, we had to make some wise decisions.

Database replication

The first logical solution was to deploy database replication. That way, editors could still use the back-end located on our LAN, while users would contact the server located at the ISP's facilities. While there are quite a few possible methods for PostgreSQL replication, none of them served us well. We needed asynchronous replication (over ssh) between two databases, both of which can receive updates and inserts. Our portal software is designed in such a way that we can minimize the number of tables which need to be updated on both the public-facing and the back-end server.

However, we still needed multi-master replication: a setup that allows two hosts to stay in sync while updates, inserts and deletes are performed on both of them. This is quite different from master-slave replication, in which data flows in just one direction.

Since there wasn't any ready-made solution, I started extending RServ, which is included in the PostgreSQL contrib directory, to include full multi-master replication capabilities. The project is called RServ improved, but at the moment of this writing it's still not ready for production use.

Apache comes to the rescue

Since database replication wasn't an option at the time, we considered installing a front-end proxy at the ISP's facilities which would contact the main server (which runs all dynamic content as well as the back-ends) on our LAN.

This solution would lower the load on our web server and, more importantly, speed up content delivery to our users (since HTML pages are just 20% of all the content delivered to our users for each page view).

We had the following architecture in mind:

  editors --> LAN --> master server --> leased line to ISP --> front-end server --> users

  master server:                      front-end server:
    apache                             apache
      php                                mod_rewrite
      mod_gzip                           mod_proxy
      mod_rpaf
    PostgreSQL
    back-ends
    cron jobs

       >>>> rsync update of static content >>>>

Our master server also serves quite a few virtual hosts (11 at the moment of this writing, just for those two sites), all of which use the same php code and database. This created a need to minimize changes to httpd.conf in order to ease management and avoid possible errors.

Installation of the front-end server

First, we installed apache and copied all static content to the front-end server. Then, we configured apache on the front-end to respond to all virtual hosts used at our site.

Then we started tweaking the configuration on the front-end server to support delivery of static content directly from disk (if available; content which still isn't replicated will be fetched from the master server and cached using mod_proxy) and fetching of dynamic content from the master server.

/etc/hosts

First, we added aliases for all virtual hosts to /etc/hosts, each with a unique name (usually just adding -up, for upstream host, to www), so that we can reference the master host as www-up.plivazdravlje.hr. Then we added the www-up names to the ServerAlias of each virtual host on the master.
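
A minimal sketch of what those /etc/hosts entries on the front-end might look like (the IP address below is purely illustrative; use whichever address reaches your master server):

```
# front-end /etc/hosts: unique -up aliases pointing at the master server
10.a.b.c    www-up.plivazdravlje.hr
10.a.b.c    www-up.plivamed.net
```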

Adding records to /etc/hosts has two benefits: we don't pollute the public DNS space for our domains with records needed just for this replication (which might even have security implications), and our proxying won't depend on the operation of DNS.

/etc/apache

We chose this directory for all configuration files because our servers run Debian GNU/Linux, and apache configuration files are usually stored there.

We created two data files: static-host2dir.txt, which maps each virtual host to the local directory holding its static content, and static-host2upstream.txt, which maps each virtual host to its upstream (master) URL.

Please note that in the first version of this article there was a slash (/) at the end of the upstream server entries. That created double slashes in requests, which seems to confuse some browsers.
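
A RewriteMap txt: file is just whitespace-separated key/value pairs, one mapping per line. A sketch of what the two files might contain (the directory paths are assumptions; note the missing trailing slash on the upstream URLs):

```
# /etc/apache/static-host2dir.txt: virtual host -> local static content directory
www.plivazdravlje.hr    /data/static/plivazdravlje
www.plivamed.net        /data/static/plivamed

# /etc/apache/static-host2upstream.txt: virtual host -> upstream master URL
www.plivazdravlje.hr    http://www-up.plivazdravlje.hr
www.plivamed.net        http://www-up.plivamed.net
```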

Then we created a configuration file, static.conf, which is included in each virtual host using Include /etc/apache/static.conf, thus minimizing changes to httpd.conf. Here is that file:

RewriteEngine	on


RewriteMap	lowercase	int:tolower
RewriteMap	host2dir	txt:/etc/apache/static-host2dir.txt
RewriteMap	host2upstream	txt:/etc/apache/static-host2upstream.txt

RewriteCond	${lowercase:%{HTTP_HOST}|NONE}	^(.+)$
RewriteCond	${host2dir:%1}			^(/.*)$
RewriteCond	%1/%{REQUEST_FILENAME}		-f
RewriteRule	^/(.*\.(bmp|gif|GIF|htm|html|jpe|jpeg|jpg|JPG|pdf|psd|sh|shs|swf|zip))$		%1/$1 [L]

RewriteCond	${lowercase:%{HTTP_HOST}|NONE}	^(.+)$
RewriteCond	${host2upstream:%1}		^(http://.*)$
RewriteRule	^/(.*)$	%1/$1 [P,L]
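
To make the rules concrete, here is how a request might flow through them (host names follow the earlier examples; the directory is an assumption):

```
# Request: GET /img/logo.gif  with  Host: WWW.PLIVAZDRAVLJE.HR
#
# 1. lowercase map:     WWW.PLIVAZDRAVLJE.HR -> www.plivazdravlje.hr
# 2. host2dir map:      www.plivazdravlje.hr -> /data/static/plivazdravlje
# 3. file test:         does /data/static/plivazdravlje/img/logo.gif exist?
#      yes -> serve it directly from local disk ([L], no proxying)
#      no  -> fall through to the second block:
# 4. host2upstream map: www.plivazdravlje.hr -> http://www-up.plivazdravlje.hr
#      -> proxy the request ([P,L]) to http://www-up.plivazdravlje.hr/img/logo.gif
```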

We also enabled the required modules (mod_rewrite and mod_proxy, in modules.conf or httpd.conf depending on the Apache version) and added options to httpd.conf so that mod_proxy caches on disk:

# mod_proxy disk cache (Apache 1.3); sizes are in KB, times in hours
CacheRoot "/data/proxy"
CacheSize 1024000               # maximum cache size, in KB (about 1 GB)
CacheGcInterval 4               # garbage-collect the cache every 4 hours
CacheMaxExpire 24               # no cached document lives longer than 24 hours
CacheLastModifiedFactor 0.1     # expiry = 0.1 * time since Last-Modified
CacheDefaultExpire 8            # hours, for responses without expiry information
CacheForceCompletion 100        # keep client-aborted fetches only if 100% done

Compress your content

mod_gzip for Apache is probably one of the best-kept secrets. It compresses content delivered to web browsers using gzip encoding, reducing it to 20% or less of its original length.

While the increased CPU usage required by mod_gzip might be an issue for some sites, since we were off-loading our master server anyway, we could deploy it without any problems. It reduced our home page from 40K to just 9K, saving precious seconds of download time for each user (and bandwidth on our leased line to the ISP).
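
The article doesn't reproduce our mod_gzip configuration; a minimal sketch of a typical Apache 1.3 setup (the module path and size threshold are assumptions):

```
LoadModule gzip_module /usr/lib/apache/1.3/mod_gzip.so

mod_gzip_on                 Yes
mod_gzip_dechunk            Yes
mod_gzip_minimum_file_size  500            # don't bother with tiny responses
mod_gzip_item_include       mime ^text/.*  # compress HTML, CSS, plain text
mod_gzip_item_exclude       mime ^image/.* # images are already compressed
```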

Going live

With all this configured, there were only two more things left to do: set up replication of static content from the master server to the front-end every 10 minutes using rsync, and change our DNS to point to the front-end server.
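
The exact cron job isn't reproduced here; a sketch of what it might look like on the front-end (the paths are assumptions, and www-up.plivazdravlje.hr is the master alias from /etc/hosts):

```
# front-end crontab: pull static content from the master every 10 minutes
*/10 * * * *  rsync -az --delete -e ssh www-up.plivazdravlje.hr:/data/static/ /data/static/
```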

We immediately saw improvements: the number of hits on the master server dropped to 1/6 of the usual number, and bandwidth usage over the leased line decreased to 1/3 of the earlier traffic.

What about IP addresses?

As you might guess, you will want to see the IP addresses of your clients on the back-end (master) server (for example, for site access statistics).
Current versions of mod_proxy add the X-Forwarded-For header correctly (this capability was added to mod_proxy sometime in 2002; before that it was a stand-alone module called mod_proxy_add_forward).

So, the first solution which comes to mind is to change your log format to something like:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %T %v %{X-Forwarded-For}i" xfull

and then write a perl script which will pre-process the log file and replace the first IP address with the last one (from X-Forwarded-For). While this works for site statistics, it doesn't let your dynamic site know which IP the user is coming from.
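
A quick sketch of that pre-processing step, shown here as an awk one-liner rather than a full perl script (the sample log line is invented, and it assumes X-Forwarded-For holds a single address):

```shell
# one line of the xfull access log: the first field is the proxy's IP,
# the last field is the logged X-Forwarded-For value
line='192.168.1.1 - - [08/Nov/2003:10:00:00 +0100] "GET / HTTP/1.0" 200 1234 "-" "Mozilla" 0 www.plivazdravlje.hr 62.1.2.3'

# replace the first field (proxy IP) with the last field (client IP)
echo "$line" | awk '{ $1 = $NF; print }'
# the first field of the output is now 62.1.2.3
```

On a real log you would run the same program over the whole file, e.g. awk '{ $1 = $NF; print }' access.log > access.fixed.log.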

Fortunately, there is a solution. There are at least two Apache modules that will replace the client's IP address with the one from the X-Forwarded-For header: mod_rpaf and mod_extract_forwarded. I used the reverse proxy add forward module (mod_rpaf), mostly because it was the first one I found using Google.

Installation is straightforward (but it includes compilation, so you will need the apache-dev package) and consists of adding the following configuration directives to httpd.conf:

LoadModule rpaf_module    /usr/lib/apache/1.3/mod_rpaf.so
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 192.168.x.y

After that, you will get the real IP addresses of your clients for connections coming from the proxy with IP 192.168.x.y. Simple, eh?

What if there is a DMZ between the front-end and back-end server?

This seems like an easy question. But I found that there are some pitfalls waiting for you if you are not careful. The real configuration of our network is a bit more complicated than the one outlined above:

  editors --> LAN --> master server --> DMZ --> leased line to ISP --> front-end server --> users

The DMZ allows rsync connections over ssh to the front-end server, but at the same time provides connectivity from the front-end to the master server.

The first solution was to use a TCP tunnel (we used rinetd for that) from the DMZ to the master server. Even with the advice about real IP addresses above, that worked well, provided that the machine running rinetd has IP address 192.168.x.y (just to be consistent with the example).
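
rinetd's configuration is just a list of "bindaddress bindport connectaddress connectport" lines; a sketch of what the forwarding rule might have looked like (addresses follow the placeholders used in this article):

```
# /etc/rinetd.conf on the DMZ machine (192.168.x.y):
# forward incoming HTTP to the master server
192.168.x.y  80  10.a.b.c  80
```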

However, since we didn't move all sites running on the master server to the co-location facility, it was logical that we wanted real IP addresses for those sites instead of the 192.168.x.y address which was visible to the master server.
The problem was that rinetd is a simple TCP tunnel, and it doesn't add the X-Forwarded-For header to HTTP requests (it doesn't know anything about HTTP anyway). The logical solution was to use mod_proxy once again, this time from the DMZ to the master server.

It was configured like this:

ProxyPass               /       http://10.a.b.c/
ProxyPassReverse        /       http://10.a.b.c/

NoCache		*.php

CustomLog	/dev/null none
ErrorLog	none

Here 10.a.b.c is the IP address of the master server. Then I wanted to add some local caching (for clients which access sites that are still only on the master server). However, that turned out to be a problem. mod_proxy will happily cache pages on disk, but it uses only the URL as the cache key. Since most of our sites use identically named files (for example, logo.gif) which look very different on different sites, we were getting sites mixed up.

After the initial shock, we found that sites which used the same elements got cached on a first-come, first-cached principle. So when a client requested http://site1.domain.com/logo.gif, mod_proxy forwarded that to http://10.a.b.c/logo.gif and stored the result in its cache. Since it also sent the Host: site1.domain.com header to the master server, it correctly got the logo.gif belonging to site1.domain.com.

However, when another client requested http://site2.domain.com/logo.gif, mod_proxy first examined its cache (again for the request http://10.a.b.c/logo.gif) and served the wrong picture!

The temporary solution is to disable disk caching on the machine in the DMZ. That way, it will always ask the master server for content and get the correct version. While Squid might be more clever about this, I haven't tested it yet. I know of one more apache proxy module that does the same thing: wodan. So, the lesson learned here for proxy designers is: never believe that the URL alone is enough to uniquely identify content in your cache. If at all possible, also use the Host: header!
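
The lesson can be made concrete in a few lines of shell (the host names are the hypothetical site1/site2 from the example above):

```shell
url="http://10.a.b.c/logo.gif"   # the upstream URL mod_proxy sees for both sites

# keyed by URL alone, both virtual hosts map to the same cache entry
key_site1="$url"
key_site2="$url"
[ "$key_site1" = "$key_site2" ] && echo "collision: one cache entry for two different logos"

# keyed by (Host, URL), the entries stay separate
key_site1="site1.domain.com $url"
key_site2="site2.domain.com $url"
[ "$key_site1" != "$key_site2" ] && echo "no collision"
```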

Future work

Future work includes finishing the master-master replication for PostgreSQL and setting up replication between the master and front-end server, so that we can fail over to the ISP if needed.
That will also eliminate all user-generated load on the master server, and the only traffic traveling through the leased line will be replication data (we will then need to take measurements to see whether the volume of that data is less or more than the total size of the compressed dynamic pages that are transferred now).

The practices presented here are quite easy to implement if your co-location facility offers apache with mod_rewrite and mod_proxy, even on a shared server. They work well for us, and they don't require any application changes (as opposed to replication). They are also quite easy to administer compared to adding separate configurations for each host to httpd.conf.