Practical Stress Testing with Siege

Written on April 28th, 2013.

This site runs on the Kirby CMS. Included within it is the Smartypants library — it enhances simple typography, like turning don't into don’t. It’s nifty, and it saves me some time and pre-processing. But, like all other “features”, I have to ask myself the money question: is this feature going to cost my server performance? Is it worth the extra code effort? After all, it’s not a huge deal for me to do this myself. What should I do?

The answer, of course, is to find solid data.

There’s just one issue with getting said data: other people’s data doesn’t always fit my case, and while there’s plenty of good benchmarking software out there, it tends to be complicated to install and learn. This is where Siege comes in.

Siege is a simple benchmarking app that runs in the terminal. You throw it a URL (and some optional configuration), and it throws numbers back at you; you’ll see sample output further down.


Now, before I get sexy with Siege, I need to figure out whether Smartypants is even an issue in the first place. I’ve talked about Cachegrind before — a part of Xdebug, it allows me to see where PHP spends its time in a script. If Smartypants were an issue, it’d show up as a time-consuming process.

Let’s take a look. I visited my homepage and passed the results to webgrind, a web front-end for Cachegrind output.


Okay, it clearly deserves attention: it has a higher “Cost” (i.e. it takes more time) than pretty much everything else in the script. It’s even slower than Kirby’s Markdown processing!

While this tells me that Smartypants needs attention, it doesn’t guarantee that it’s a problem in practice. A lot more happens on a server than PHP parsing, and I need a way to measure and compare whatever improvements I make.

Some people try to measure script execution times, but those vary too much, and don’t simulate a server under pressure.

Enter Siege

Installing Siege is easy. On Ubuntu, it’s pretty much just:

sudo apt-get install siege

If you’re on OS X or just want to install manually, you can build from source. In a nutshell: get the code from the Siege site, extract it, open the folder in a terminal, and do the usual ./configure && make && make install.
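For reference, a sketch of that manual build (the tarball URL is from memory; check the Siege site for the current release):

```shell
# Fetch and unpack the latest source tarball
curl -O http://download.joedog.org/siege/siege-latest.tar.gz
tar -xzf siege-latest.tar.gz
cd siege-*/

# The usual autotools dance
./configure
make
sudo make install

# Sanity check: should print the installed version
siege --version
```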

Now you can call up Siege straight from the terminal whenever you want!

Time to figure out just how badly Smartypants affects my site.

Launch the Siege!

I’ll start by SSHing into my server to start the test. Yes, the same server where the site is hosted — simply because I don’t want network latency to be an issue. I only want to see how well my code is performing.

From there, it’s a simple call to siege (with example.com standing in for my homepage’s URL):

siege -b -t1M http://example.com/

This will create 15 clients (by default) to bombard my site. Each one requests the page and, as soon as the response arrives, immediately requests it again, over and over; that’s benchmark mode, the -b flag. The -t1M flag ends the test after one minute. It’ll only hit my homepage, although there are options to feed Siege a list of URLs to hit instead. My site is mostly uniform, and I expect most visitors to stay on the homepage, so this shouldn’t be a problem.
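If you do want Siege to spread the load across several pages, it can read URLs from a file instead of taking one on the command line. A quick sketch (the file name and URLs are placeholders):

```shell
# One URL per line
cat > urls.txt <<'EOF'
http://example.com/
http://example.com/about
http://example.com/archive
EOF

# -f points Siege at the file; -i makes each simulated user pick
# a random URL from it on every request
siege -b -t1M -i -f urls.txt
```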

I flip Smartypants to on. One minute later, I get my results.

Transactions:                   2122 hits
Availability:                 100.00 %
Elapsed time:                  59.40 secs
Data transferred:              20.03 MB
Response time:                  0.42 secs
Transaction rate:              35.72 trans/sec
Throughput:                     0.34 MB/sec
Concurrency:                   14.94
Successful transactions:        2122
Failed transactions:               0
Longest transaction:            2.21
Shortest transaction:           0.09

Kirby with Smartypants On.

This is all really nifty data. I’m simulating 15 users constantly refreshing as fast as they can. During the span of one minute, they managed to load the page 2122 times.

The most important number is Response time. It shows how long, on average, the server took to reply to an HTTP request while under load (which is what this test simulates). 420 milliseconds is unacceptably long for me: it means the average user waits around 420ms before getting a response, and that sucks. We’ll need to take a look at that.

(Note, however, that this is in stressed conditions. The Shortest transaction is only 90ms, which is just fine.)

Glancing at the other numbers, we look fine. The Throughput looks low, but that’s because our bottleneck lies in code execution, not bandwidth. If I were using Siege from a remote server, then this would be affected by my bandwidth. I also don’t have any Failed transactions, which is a good thing.
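Incidentally, these numbers hang together nicely: by Little’s law, the transaction rate times the average response time should land close to the measured concurrency. A quick check with the figures above:

```shell
# 35.72 trans/sec * 0.42 secs per transaction ~ 15 in-flight requests,
# which matches the reported concurrency of 14.94
awk 'BEGIN { printf "%.1f\n", 35.72 * 0.42 }'
# prints 15.0
```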

Right. Let’s do the comparison. I’ll switch Smartypants off.

Transactions:                   3318 hits
Availability:                 100.00 %
Elapsed time:                  59.72 secs
Data transferred:              30.77 MB
Response time:                  0.27 secs
Transaction rate:              55.56 trans/sec
Throughput:                     0.52 MB/sec
Concurrency:                   14.97
Successful transactions:        3318
Failed transactions:               0
Longest transaction:            1.65
Shortest transaction:           0.07

Kirby with Smartypants Off.

Right off the bat, we can see that the server managed over 50% more transactions overall (3318 against 2122), and the response time dropped from 420ms to 270ms — a huge difference.

I now know that Smartypants accounts for a lot of my site’s processing. Well, Smartypants is basically just a function that runs on text — there’s no reason for it to run every time someone views a page, is there? It only needs to process text once.

Right! It’s time to turn on Caching.

Kirby offers simple output caching to the filesystem. It also lets me choose between manually clearing the cache whenever I change a page, or having Kirby detect stale caches itself. Clearly, letting Kirby update my cache is easier, but it also means more IO activity, since Kirby has to check for changes on every request.

I’ll turn the cache on, with automatic updating also on. I’ll leave Smartypants on as well, because it only gets parsed once anyway — when the cache is generated.

Transactions:                   3517 hits
Availability:                 100.00 %
Elapsed time:                  59.44 secs
Data transferred:              33.20 MB
Response time:                  0.25 secs
Transaction rate:              59.17 trans/sec
Throughput:                     0.56 MB/sec
Concurrency:                   14.93
Successful transactions:        3517
Failed transactions:               0
Longest transaction:            1.29
Shortest transaction:           0.05

Kirby, with Smartypants On, caching On, and auto-updating On.

Great! We’ve managed to return to non-Smartypants levels, with 250ms average response times under load, dipping to 50ms at a minimum.

That’s sexy.

Now, does the auto-updating feature actually slow anything down? Time to find out.

I’ll turn off auto-updating.

Transactions:                   5158 hits
Availability:                 100.00 %
Elapsed time:                  59.07 secs
Data transferred:              48.70 MB
Response time:                  0.17 secs
Transaction rate:              87.32 trans/sec
Throughput:                     0.82 MB/sec
Concurrency:                   14.94
Successful transactions:        5158
Failed transactions:               0
Longest transaction:            1.15
Shortest transaction:           0.04

Kirby, with Smartypants On, caching On, and auto-updating Off.


Better results across the board. The average response time dropped further, to 170ms, with a 40ms minimum. Compared with the original setup — Smartypants on with no caching — we’ve cut the average response time by more than half. It’s a huge improvement.
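To put a number on it, a throwaway one-liner using the average response times from the first and last runs:

```shell
# 0.42s uncached -> 0.17s fully cached: roughly a 60% drop
awk 'BEGIN { printf "%.0f%%\n", (0.42 - 0.17) / 0.42 * 100 }'
# prints 60%
```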

That’s the power of caching. The trick is that not all caching is this effective: some setups don’t show spectacular results, while others might improve responsiveness tenfold. Generally, the slower (or worse) the application code is, the more caching helps. Which… is kind of silly, really. (Note that Kirby with caching on performs about like Kirby without Smartypants — that shows how lightweight Kirby itself is. Smartypants is just an expensive operation, and it should always be cached.)

Wrapping Up

At this point, I can conclude that, for my site, Smartypants isn’t a big deal at all — as long as caching is on. Which makes sense. I also know that I should have the auto-updating mechanism off, unless I’m actively updating the site.

And that, ladies and gents, is why benchmarking your real site, using free and simple tools like Siege, is infinitely better than guessing “eh, caching looks good, I’ll just turn it on.”

A Word Of Warning

Siege is a rough benchmarking tool. It doesn’t run the whole gamut of tests that more elaborate systems provide, so it can’t replace more thorough testing.

Like all testing tools, Siege is best used in conjunction with others.

The “tweak and measure” method I use works well for big feature chunks. For more granular changes, especially tiny ones, you’ll want to dig deeper: look at how PHP executes your script, for instance with Cachegrind again. There’s a very real possibility that slow-downs are caused by interactions between different parts of your code, which you can then try fixing yourself.

Also — remember that benefits have to be substantial for them to be useful. My tests simulated fairly heavy loads, but will my site actually need to handle that? Maybe I could leave auto-updating on. A 10ms difference under low load isn’t that big. Remember to suit your testing, and your conclusions, to your needs.

Finally, keep in mind that Siege basically executes a small-scale Denial of Service attack. It’s a weak one, and easily blocked (since it comes from a single source), but it’s designed to hammer your CPU and filesystem thoroughly. Shared hosts, where you share said CPU cycles with other tenants, dislike this. This is why I test in very short bursts, and I watch my CPU usage to make sure I’m not maxing it out for everyone else. (In fact, I can browse and use my site normally while the siege is in progress — a fairly good sign that the test isn’t being too harsh.)
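For the curious, my monitoring setup is nothing fancy; it looks something like this, with example.com standing in for the real URL:

```shell
# Terminal 1: a short burst against the site
siege -b -t30S http://example.com/

# Terminal 2, on the server: print CPU stats once a second while it runs;
# if the "id" (idle) column sits near 0, you're starving your neighbours
vmstat 1
```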