How to host a scalable WordPress setup yourself with W3 Total Cache, Varnish and Amazon CloudFront.
united-coders.com is not that big. We’re currently getting between 500 and 1,000 unique visits a day. As we’re coders we like to host ourselves. We were previously on a real root server, which was a bit of overkill, so we decided to try something smaller and cheaper. We’re currently hosting this site for around $7 per month with enough headroom for more traffic. We do get the occasional spike in the 5,000 range when we publish something that catches on, so that needs to be covered as well.
If you’re interested in the setup and a detailed description, you can find it on RichWP.
The Components
The Virtual Server (vServer)
A virtual server is a slice of a real server with a certain amount of resources dedicated to you. For us (speaking for united-coders.com), the smallest offering is sufficient: one CPU share (admittedly hard to measure) and a guaranteed 1GB of RAM (up to 2GB dynamically, but we only plan with what we will constantly have).
That currently sells for around $6 here. The vServer is our backbone: we set up our WordPress there and then added the other components once everything was running.
Caching
W3 Total Cache and Varnish form our local caching solution. We’re serving static pages only, so most of the content will be in the cache and the website will be fast. Invalidation and updates are handled by WordPress, so apart from the initial setup there isn’t much to worry about. You can do a bit of fine-tuning to suit your specific needs.
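To give an idea of what the Varnish side of that invalidation looks like, here is a minimal sketch of a PURGE handler in VCL. It assumes Varnish 3 syntax (the syntax differs between versions) and that the purge requests from WordPress/W3 Total Cache come from the same machine:

# only accept PURGE requests from the local WordPress installation
acl purge {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged";
    }
}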
Content Delivery Network (CDN)
To take this a step further, most of the files (images, CSS, JS, etc.) are hosted on a CDN. That means they are not fetched from the vServer but from the CDN, which uses servers closer to your location. This takes some stress off the vServer and also boosts speed.
How does the site end up in my browser?
When you make a request to united-coders.com, Varnish will serve the site if it’s in the cache. The cache lives in memory, which improves access times, as memory is a lot faster than disk, especially if you’re getting hits on lots of different pages of your domain.
If Varnish does not find the page, it is requested from the webserver and placed in the cache. For this to work, Varnish sits in front of the webserver (on port 80) and has the real webserver configured as a backend to fall back to when it’s missing something.
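As a rough sketch of that arrangement: Varnish answers on port 80 (that’s the -a flag in the start command further down) and the real webserver is moved to a local port, which is declared as the backend in the VCL. Port 8080 here is just an assumption, use whatever your webserver actually listens on:

# /etc/varnish/default.vcl -- the real webserver, moved off port 80
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}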
The html sent to your browser refers to files residing on the CDN, so the browser will fetch most of the static assets from there, which relieves our cache and webserver.
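In practice that simply means W3 Total Cache rewrites the asset URLs in the generated html. The hostname and file below are made-up placeholders, your CloudFront distribution gets its own:

<img src="http://d1234abcd.cloudfront.net/wp-content/uploads/2011/06/example.png" alt="example" />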
How much does it cost?
The virtual server is around $6 a month. That part of the bill stays the same every month.
On top of that, we pay Amazon for hosting the files we want served by the CDN. They reside in the Amazon Simple Storage Service (S3) and are accessed by the CDN. With our current traffic this is less than $1 per month. This part of the bill will vary depending on how much traffic you get and how much you decide to put on S3/CDN.
Pitfalls and Lessons Learned
Varnish cache size and response times
I have run into two problems so far.
Depending on the overall load of the vServer (the other guests using up a lot of CPU), Varnish responded too slowly to its management thread, which led to crashes. After some googling, a possible solution seemed to be increasing the cli_timeout, which in my case worked:
varnishadm -T 127.0.0.1:2000 param.set cli_timeout 60
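A parameter set with varnishadm is lost when Varnish restarts. If you want the change to be permanent, varnishd also accepts parameters on the command line via -p; here it is combined with the start command shown further down:

varnishd -f /etc/varnish/default.vcl -s malloc,0.6G -T 127.0.0.1:2000 -a 0.0.0.0:80 -p cli_timeout=60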
You can read some detailed information on tuning and setup on Kristian Lyngstøl’s Blog.
The second issue had to do with the flexible RAM assignment. This is more of a trial-and-error assumption based on the observed behavior: the cache would fill up beyond the guaranteed 1GB of RAM, and when the dynamically allocated RAM was taken away from our vServer (reduced back to the guaranteed 1GB), the cache crashed. So I decided to limit the memory used to 256MB, which is more than plenty.
Here’s an example of how to manually start it with a 600MB memory cache:
varnishd -f /etc/varnish/default.vcl -s malloc,0.6G -T 127.0.0.1:2000 -a 0.0.0.0:80
The system now runs at around 800 MB of RAM.
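If you want to keep an eye on how much memory the cache itself actually grabs, varnishstat shows the storage counters. The grep pattern here is an assumption, the exact counter names differ a bit between Varnish versions:

# dump all counters once and filter for the malloc storage stats
varnishstat -1 | grep -i sma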
Invalidating files on CloudFront
One time the site kept serving an outdated file. Tracing the error from the source to the front (WordPress -> Varnish -> S3 -> CloudFront) showed that only CloudFront was serving the outdated file. In this case I had to manually invalidate the file in CloudFront.
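In our case this was a one-off manual step. If it happens more often, the same thing can be scripted with the AWS CLI; the distribution id and path below are placeholders:

# invalidate a single stale file on the CloudFront distribution
aws cloudfront create-invalidation --distribution-id E1234EXAMPLE --paths /wp-content/uploads/2011/06/stale-file.png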
Conclusion
After some issues in the beginning and during setup, the site is now operating smoothly. I did some load testing and performance looked good. Of course it cannot compete with our root server setup, but it costs only a fraction.
The processing power we get from our current hoster seems below average, a clear sign of an oversold machine. But caching covers most of it. In my tests the site loaded in a little over one second on average.

Nico Heid
