One of the things that keeps coming up in load testing for website performance is the comparison of simulated versus real users. People tend to fixate on pure concurrency numbers (the quantity) without assessing user profiles and throughput (the quality). It’s easy to crank up threads and feed the search functionality a collection of values, but that only loads the servers; it doesn’t necessarily bear any relation to the number of real users an application can sustain.
A Better Way to Test Website Performance
A better approach to load testing a website is to identify the relevant use cases and match them to the expected artifact throughput: the number of business artifacts (sales orders, support tickets, records) the system must produce per unit of time.
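One way to connect concurrency to artifact throughput is Little's Law (N = X × R): the number of concurrent users equals throughput multiplied by the time each user spends in the system. The sketch below is a hypothetical sizing helper; the function name and example figures are invented for illustration, not taken from the engagement described here.

```python
def users_needed(artifacts_per_hour: float, seconds_per_artifact: float) -> float:
    """Estimate concurrent users via Little's Law (N = X * R).

    artifacts_per_hour:   target throughput (e.g. sales orders/hour)
    seconds_per_artifact: average time one user takes to produce one
                          artifact, including think time
    """
    throughput_per_second = artifacts_per_hour / 3600.0
    return throughput_per_second * seconds_per_artifact

# Example: 1,800 sales orders/hour, ~120 s of work + think time per order
print(users_needed(1800, 120))  # -> 60.0
```

Working backwards like this keeps the thread count tied to a business number rather than an arbitrary concurrency target.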
To illustrate, we recently completed an SAP customer relationship management (CRM) performance engagement. The client had recently acquired additional assets and was restructuring its sales and customer service representative (CSR) infrastructure to accommodate them. Everyone was being brought onto the same corporate Wide Area Network (WAN) and integrated into the new CRM. The client therefore needed multiple points of presence (UK, CA, and the US) for the test itself. They had specific CSR ticket and sales order volumes that required testing across varying network architectures, and they needed the resulting records to funnel through an existing multi-tiered series of web services to their endpoint customers.
If we had taken a simple concurrency approach, we could have ramped up a few thousand threads against search and login, watched the servers load, and called it a day. Maybe they would have had a successful launch, and maybe they wouldn’t. That wasn’t good enough for us.
Instead, we looked at the sales/CSR volume they’d had in the past (both at the parent and the new subsidiaries). Then, we modeled test scripts to meet those numbers with their desired concurrency. Finally, we applied that targeted load from the various geographic entry points.
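Modeling scripts to hit a target volume at a desired concurrency usually comes down to pacing: how long each thread waits between iterations so the pool collectively produces the right number of artifacts. A minimal sketch of that arithmetic, with a hypothetical function name and illustrative figures rather than the client's actual numbers:

```python
def pacing_seconds(target_per_hour: float, threads: int) -> float:
    """Seconds between iteration starts for each thread, so that
    `threads` threads collectively produce `target_per_hour` artifacts.
    """
    per_thread_per_hour = target_per_hour / threads
    return 3600.0 / per_thread_per_hour

# Example: 900 CSR tickets/hour from 50 threads means each thread
# starts a new iteration every 200 seconds.
print(pacing_seconds(900, 50))  # -> 200.0
```

Most load tools express this directly (for example, a constant-throughput timer or pacing setting), but the underlying calculation is the same.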
We hit both the concurrency numbers and the sales order/CSR ticket volumes, exercised all the critical elements of their architecture, and scaled the load up until we found their actual bottlenecks. In this particular case, the limiting factor was the memory configuration on a couple of nodes.
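Scaling the load up until a bottleneck appears is typically done as a stepped ramp: run at the baseline target, then raise it by a fixed percentage per step while watching response times and error rates. A sketch of such a schedule, with a hypothetical function name and step size chosen purely for illustration:

```python
def step_ramp(baseline: int, step_pct: float, steps: int):
    """Yield successive load levels, growing the baseline target by
    step_pct at each step (e.g. 0.25 for +25% per step)."""
    load = float(baseline)
    for _ in range(steps):
        yield round(load)
        load *= 1.0 + step_pct

# Example: baseline of 100 concurrent users, +25% per step
print(list(step_ramp(100, 0.25, 5)))  # -> [100, 125, 156, 195, 244]
```

Because each level is held deliberately, the step at which a resource saturates (here, node memory) points directly at the limiting component.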
Bottlenecks are easy to find and fix if you can dial real load up and down at will, but elusive if you have to depend on production troubleshooting or overly simplistic tests. Relating thread counts to actual user counts is vital to real-world load testing for website performance.