
If you're building a web application and have high hopes that it will be used by lots of people, one of the most important things to do before you launch it is to test how it functions under extreme conditions.

The purpose of stress and load testing is simple - find out what happens to your application when it gets "hammered" by simulating lots of usage.

When done right, such testing can mimic real-world usage and will yield valuable data.

You'll find out how your application performs. You'll often be able to locate performance bottlenecks. And you can better determine what sort of scaling strategy makes sense for your application.

Fortunately, there are a number of open source testing tools that enable you to do all of this without spending an arm and a leg.
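Before diving into the tools, it helps to see how simple the core idea is: spin up a number of concurrent workers, have each one hammer a URL, and record the response times. The following is a minimal Python sketch of that idea (not one of the tools below, just an illustration; the target URL, worker count and request count are placeholder values, and you should only point it at a server you own):

    import threading
    import time
    import urllib.request

    TARGET = "http://localhost:8000/"  # placeholder: a test server you control
    WORKERS = 20                       # number of simulated concurrent users
    REQUESTS_PER_WORKER = 50           # requests each simulated user makes

    timings = []
    errors = []
    lock = threading.Lock()

    def worker():
        for _ in range(REQUESTS_PER_WORKER):
            start = time.time()
            try:
                urllib.request.urlopen(TARGET, timeout=10).read()
                with lock:
                    timings.append(time.time() - start)
            except Exception as exc:
                with lock:
                    errors.append(exc)

    threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if timings:
        print("ok: %d  errors: %d  avg: %.3fs  max: %.3fs"
              % (len(timings), len(errors),
                 sum(timings) / len(timings), max(timings)))
    else:
        print("no successful requests; errors: %d" % len(errors))

The tools below do essentially this, but far more capably: ramping load up gradually, following multi-step scenarios, handling cookies and authentication, and producing proper reports.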

curl-loader

This tool, written in C, can simulate the behavior of thousands, even tens of thousands, of clients and has support for authentication.

FunkLoad

FunkLoad is an advanced and versatile testing tool that can help discover bottlenecks, "expose bugs that do not surface in cursory testing," and determine how your application manages under heavy stress.

Because it mimics a real web browser, has advanced configuration options and provides detailed reports, it's one of my favorite testing tools.
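To give a sense of how it works: a FunkLoad test is an ordinary Python class, which the tool can first run as a single functional test and then replay with many concurrent virtual users to generate load. The sketch below is illustrative rather than copied from the FunkLoad documentation - the class name, test name and configuration key are placeholders following FunkLoad's usual conventions:

    from funkload.FunkLoadTestCase import FunkLoadTestCase

    class HomePage(FunkLoadTestCase):
        """Fetch the home page and check that it responds."""

        def setUp(self):
            # the server URL normally lives in the accompanying .conf file
            self.server_url = self.conf_get('main', 'url')

        def test_home(self):
            self.get(self.server_url + '/', description='Get home page')

The same test case can then be benched at increasing levels of concurrency, which is where the detailed reports come from.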

Hammerhead 2

Hammerhead 2 is a stress testing tool that can simulate many concurrent users. Its configuration functionality permits the creation of "scenarios" that enable you to simulate real usage.

JCrawler

JCrawler is a very nifty stress testing tool that works a bit differently from most of its peers. First, it acts like a crawler, so it can more easily provide widespread testing of a complex application without requiring lots of explicit configuration. Second, it allows the user to specify hits-per-second instead of "threads." And lastly, it supports applications that make use of HTTP redirects and cookies.

PushToTest TestMaker 5

The maker of this Java-based tool claims to have 160,000 users and describes TestMaker 5 as a "test automation tool" that can simulate "real-world production environments to test scalability under increasing levels of load."

Commercial services and support are available.

Pylot

A multi-threaded stress testing tool, Pylot allows the creation of "test cases" via an XML file. Requests can be configured, a nice GUI is offered for real-time monitoring purposes and useful reports are generated at the end of each test.
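As a rough sketch of what such a file can look like (the element names and URLs here are illustrative placeholders rather than taken from the Pylot documentation), each test case pairs a URL with optional verification text:

    <testcases>
        <case>
            <url>http://localhost:8000/</url>
            <verify>Welcome</verify>
        </case>
        <case>
            <url>http://localhost:8000/products</url>
        </case>
    </testcases>

Pylot then replays these cases with however many virtual users you configure, while the GUI charts throughput and response times as the run progresses.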


Published 21 July, 2008 by Patrick Oak


Comments (6)

Jon Bovard, Director of eCommerce at A well known Telco

Hmm, which of the above would be most suitable for testing visitors simply clicking around an ecommerce site? No baskets or registration processes.
Other question: how does your hosting company distinguish requests from these types of agents from, say, DDoS attacks or similar?

about 8 years ago


edward miller

A. Interesting list, but I'd think there would be more of these that work at the HTTP/S level. Isn't there a simple HTTP/S web interface with Perl that does most of this?

B. In response to Jon Bovard's questions, a true DDoS would require lots of CPUs, and the methods in test tools are generally too inefficient to be very good DDoS engines.

C. Also, to further Jon Bovard's point: what if I click around on an e-commerce website and need to maintain browser internal state, so that the test is coherent/stateful? Do any of those solutions do this?

-Edward Miller

about 8 years ago


Patrick Oak, Blogger at Econsultancy

Jon,

JCrawler might be suitable for that since it acts like a crawler and doesn't require you to configure usage scenarios explicitly.

I would NOT recommend using any of these tools on your live/production website if you plan to simulate heavy loads. These tools are best used on a local setup that has been created specifically for testing purposes (and that you can afford to have "crash").

If you have to use a server that is leased from a hosting company it would be a good idea to check with the host first if you think any problems may occur.

Edward,

A simple Perl "script" that runs on a web server (like Apache) would have some significant limitations.

In general, a tool written in a compiled language (like C) is better suited to this type of application, although some in the list are written in interpreted languages (FunkLoad, for example, is built with Python).

about 8 years ago


Jason

Sloppy (http://www.dallaway.com/sloppy/) is really useful as well for testing slow Internet connections.

about 8 years ago


Deri Jones, CEO at SciVisum.co.uk

Tools like these can be very useful.

But 'a little knowledge is a dangerous thing' - they can also end up being worse than doing nothing. That's true of all tools, I guess - but as we do a lot of load testing, we see it happen a lot.

The things that we most commonly see folk get wrong in their load testing:

i) false sense of security - a bunch of numbers are generated that 'look good'... but when the traffic peak comes, the site is all over the floor.

If, for example, all a load test reports is figures of 'concurrent users', then you're in this boat: that is an extremely poor measure of capacity, for obvious reasons. One user who hits the homepage and then disappears shows as one user in the session tables, as does a user who hits 20 pages and buys three products. So comparing a load test's 'concurrent users' with a historic peak day's concurrent users is comparing apples and oranges.

ii) hitting single URLs is meaningless: real users follow multi-page routes to achieve their goal, so you need to measure multi-page user journeys

iii) wrong choice of multi-page user journeys: missing out key journeys because of assumptions that 'they will be OK because of technical reason X'.
There's many a slip 'twixt cup and lip... and software always has bugs; it's just a question of how bad they are :<)

iv) measurement of simplistic user journeys gives meaningless data. A load test that has all virtual users buying, say, the same product... doesn't represent reality, and can end up giving much better or much worse figures than real users buying a range of different products.

v) No time to do a decent job.
Normally, load testing is the last thing to be done before going live with a new site or change: the schedule has usually slipped, and the staff allocated to do the testing have less time and may be pulled in to help fix last-minute coding issues. The result: a half-hearted attempt is made, with inconclusive findings.

vi) It takes effort, thought and time to write meaningful user journeys.
At each page, the script needs to check:
* is this the right page content I'm expecting?
* do I choose at random from the choices offered, or the 3rd one down, or what?
* have any of the range of possible error messages turned up within what is otherwise the right page?
* has the site wrongly 'jumped back' to an earlier page in the journey?
etc.

vii) lack of benchmarks: if you've only ever run the tool on your own site, and recently, you really don't have any way to benchmark your findings

viii) The tools are free, so the tech team's offer to management to do the load testing is not unwelcome! But the effort to do it right - know the tools and their limitations, know how to set up hardware and OS's correctly, know how to design and script meaningful journeys... that's all time and effort.

ix) expensive tools may not do a better job: they may be reassuringly expensive (a tip of the hat to Stella Artois), but they have scripting quirks and limitations, gotchas that are not obvious, and again require a bunch of time and effort to be used properly.

x) business managers are likely to 'blame the testers' if the site struggles after testing suggested 'it would be OK'.
Make sure you complement the load testing with ongoing 24/7 metrics of user journeys during your peak shopping traffic, so that if the site does struggle, you have hard evidence of when, for how long, which journeys, what percentage of users were impacted, etc. That way, any post mortem can be based on facts, not speculation and assumption.

xi) make sure that any decisions to either 'go with the site as it is' or 'spend some money to fix the problems' are made by the right level of business managers: it's unfair to delegate that to the tech team.

Phew, that's a lot of reasons not to go the DIY route... but in contrast we have on occasion seen clients do excellent work themselves. It usually happens when they have a particularly strong tech guy in house.

If you're willing to limit the in-house testing to just metrics of your homepage, that'll increase the probability of a meaningful number: albeit only the one.

But make sure that the business managers know in advance that's all you're targeting.

Deri

about 8 years ago


Atisha Mittal

Please let me know of a record-and-playback tool for functionality testing. I don't want to write any scripts, as the project is of a short duration.

Thanks in Advance.

over 3 years ago
