From a simple form on a website to a whole new customer-relationship management (CRM) or enterprise resource planning (ERP) system, good testing is key to maximizing benefits and minimizing problems. And, as a business, you can save a lot of time and money, and avoid some hits to your reputation, by doing a better job of testing new implementations.

But, like most things, there’s doing it – and there’s doing it right.

Okay, so who tests?

The first testers in line for any system are typically the team involved in the deployment itself. Those who purchased, designed, or developed the system will test to be sure things work as expected at a very high level: 'Did the lights come on, and are they green?' If yes, then it's working.

All too often, though, these folks are the only testers, and Quality Assurance stops there…

For most systems, a group of users who are familiar with the functions of the system should help create test cases, because they understand the intended functionality. They will also be the best barometer of how easy the new system is to navigate and use, and of whether the functions they rely on day in and day out are easy to access.

But beyond this, you need external testers to really put the system through its paces: people who were not part of the system's design or purchase, and who are not experts in the old system.

Why is this? Because 'fresh eyes' will inherently boundary-test the system, since they have not learned its expected order of operations. The wrong buttons will be pushed, incorrect data types will be entered, and forms will be submitted in the wrong sequence. The trick, though, is to ensure these fresh eyes also have QA testing experience and can turn their findings into actionable bug reports that are instrumental in speedy fixes. A sketch of what that looks like follows.
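To make that concrete, here is a minimal sketch (in Python, using pytest) of what 'fresh eyes' findings look like once a tester formalizes them. The `submit_signup` handler and its `ValidationError` are hypothetical stand-ins for whatever your system actually exposes:

```python
# test_signup_boundaries.py -- a minimal sketch of "fresh eyes" boundary tests.
# `submit_signup` and `ValidationError` are hypothetical stand-ins for the
# system under test; substitute your own entry points.

import pytest


class ValidationError(Exception):
    """Raised by the (hypothetical) form handler on bad input."""


def submit_signup(name, age, email):
    """Hypothetical form handler, included only so this sketch runs."""
    if not isinstance(age, int) or not 0 < age < 130:
        raise ValidationError("age must be a realistic whole number")
    if "@" not in str(email):
        raise ValidationError("email looks invalid")
    return {"name": name, "age": age, "email": email}


# The wrong data types get entered...
@pytest.mark.parametrize("bad_age", ["twenty", -1, 0, 999, None, 3.5])
def test_rejects_unrealistic_ages(bad_age):
    with pytest.raises(ValidationError):
        submit_signup("Pat", bad_age, "pat@example.com")


# ...and malformed fields get submitted anyway.
def test_rejects_malformed_email():
    with pytest.raises(ValidationError):
        submit_signup("Pat", 30, "not-an-email")
```

The point is not these particular checks; it is that each 'wrong button' a fresh-eyed tester pushes gets pinned down as a repeatable test rather than a vague complaint.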

Sounds good! But what should we test?

Test for failure, not success. Testing only for success is the biggest mistake most businesses make with internal testing. If you test to make sure it all works, the most likely result is that it does – in that very specific scenario…

Consider this example: when your employees test a new phone system, they call the main number, enter their own extension, listen to the recording, and leave a message. All good, right?

But when customers use the new phone system, they dial the main number, don't know the right extension, fall into the company directory, misspell the person's name, and end up in an endless loop of recordings.

Not good.

Trained QA engineers can help make sure your test plans focus on trying things that should not work, and that the resulting bug reports clearly illustrate which function failed and how to make it fail. This lets the design and implementation folks ensure that the system responds correctly to real-world usage.
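As an illustration, here is how one finding from the phone-system example might be captured as a reproducible test. This is a sketch in Python/pytest; the `ivr_directory_search` stand-in is hypothetical and deliberately contains the bug, so the test fails in exactly the way a good bug report describes:

```python
# test_ivr_directory.py -- sketch: turning "endless loop of recordings"
# into a reproducible failing test. `ivr_directory_search` is a hypothetical
# stand-in that deliberately models the bug being reported.

MAX_MENU_HOPS = 5  # assumption: a caller should reach a person or voicemail
                   # within this many menu transitions


def ivr_directory_search(spelled_name, directory):
    """Hypothetical IVR directory: returns the menu states a caller visits."""
    states = ["main_menu", "directory"]
    if spelled_name in directory:
        states.append("voicemail")
    else:
        # The bug under test: a misspelled name just bounces the caller
        # between the directory and a "no match" recording.
        states += ["no_match_recording", "directory",
                   "no_match_recording", "directory", "no_match_recording"]
    return states


def test_misspelled_name_still_reaches_a_person():
    directory = {"SMITH"}
    # A customer misspells the name, as real callers will.
    states = ivr_directory_search("SMYTH", directory)
    # Test for failure: bad input must still lead somewhere useful.
    assert "voicemail" in states or "operator" in states, (
        "caller never reached a person: " + " -> ".join(states)
    )
    assert len(states) <= MAX_MENU_HOPS, "caller bounced through too many menus"
```

Run against the buggy stand-in, the test fails with a message that shows exactly which path the caller took – which is precisely the 'how to make it fail' detail a fix needs.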

So, where should I test?

To get the truest results, test in a range of environments. Test on the worst computer in your user pool, not the best. Test on the spotty networks your clients might have, not your rock-steady internal network connected directly to the servers. If the system has a mobile component, test on every device you can: smartphones of different makes and resolutions, different browsers, and different operating systems.
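One lightweight way to keep that matrix honest is to enumerate it in the test harness itself. Another sketch, again in pytest; the device, browser, and network profiles and the `run_checkout_flow` helper are hypothetical placeholders for a real browser grid or device farm:

```python
# test_environment_matrix.py -- sketch: enumerating the environments to cover.
# The profiles and `run_checkout_flow` are hypothetical placeholders for a
# real browser-grid / device-farm integration.

import itertools

import pytest

DEVICES = ["low_end_laptop", "android_phone", "iphone", "tablet"]
BROWSERS = ["chrome", "firefox", "safari"]
NETWORKS = ["office_lan", "home_wifi", "spotty_3g"]


def run_checkout_flow(device, browser, network):
    """Hypothetical placeholder: would launch the flow in that environment."""
    return {"device": device, "browser": browser, "network": network,
            "ok": True}


@pytest.mark.parametrize(
    "device,browser,network",
    list(itertools.product(DEVICES, BROWSERS, NETWORKS)),
)
def test_checkout_works_everywhere(device, browser, network):
    # Deliberately includes the worst combinations (old hardware, spotty 3G),
    # not just the developer's workstation on the office LAN.
    result = run_checkout_flow(device, browser, network)
    assert result["ok"], f"checkout failed on {device}/{browser}/{network}"
```

In practice you would prune impossible combinations (Safari on Android, say) and wire this to real devices, but even the list itself is valuable: it forces the team to state which environments they are and are not covering.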

No matter how hard you try, there will still be things you didn't anticipate or that internal testing didn't identify… But a thoughtful, organized testing plan, formulated by QA testing experts who do this sort of thing every day and spread across multiple use cases, hardware, and environments, will improve the results and minimize the time and money spent getting a new implementation up and running.