Some time ago I wandered the building here at iBeta to speak with some of our Test Engineers, like the folks who work in our Automation, Performance, and Security verticals. This was for the purpose of getting them to write a bit about how they do what they do for the blog.
Now, this was not easy; your typical capital ‘E’ Engineer doesn’t even speak the same language as everyone else, and they generally don’t care for the trivialities of ‘blogs’ or ‘social media’ either. So plying them with the promise of making them Internet Famous didn’t really work… Fortunately there is always pizza – the universal food group – and I was eventually successful in getting them to write a few things for us!
So, without further ado: “The Flow of an Automation Engagement”, by Joshua Kitchen
An automation engagement has five primary phases:
- Infrastructure
- Scripting
- Batching
- Integration
- Maintenance
There’s overlap from phase to phase and any particular phase doesn’t necessarily need to be “complete” to move to the next one, but this is the general flow.
Infrastructure is all about laying the groundwork to enable a successful automation effort. Minor gains can be made by individual testers handed automation tools, but to make fundamental changes to development culture it is necessary to lay out infrastructure.
Infrastructure can be described in four categories:
- Tool assets
- Personnel assets
- Physical assets
- Software assets
Tool assets are generally the first place organizations start in automation: buy or find a tool that meets the general anticipated needs, then hand it out to the test staff. The other three areas are a little more involved, because they require some additional, less obvious investment and some time before returns are realized.
Personnel assets are assigned to do the heavy lifting in automation projects; it is desirable to have everyone in the test group familiar with and accustomed to using the tool(s), but a dedicated position is strongly recommended. This gives the group some capability in handling more complicated scripting tasks and helps drive a central focus.
Physical and software assets are all about sandboxes for test tool development and final “production” test tool environments. Generally speaking, servers running VMs or Remote Desktops are a better investment than workstations for prolonged user interface automation; UI automation strategies usually end up with one session per instance, so multiple instances are required to speed up execution time.
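To make the one-session-per-instance point concrete, here is a minimal sketch of fanning suites out across a pool of execution hosts. The host names, suite names, and `run_suite` stub are all hypothetical; a real runner would launch a remote UI session instead of returning a placeholder result.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pool of execution hosts, each capable of one UI session at a time.
HOSTS = ["vm-test-01", "vm-test-02", "vm-test-03"]

# Hypothetical suites; with one host this would run 6 sessions back to back,
# with three hosts the wall-clock time roughly divides by three.
SUITES = ["login", "checkout", "search", "profile", "reports", "admin"]

def run_suite(host, suite):
    # Placeholder for launching one UI session on `host` and running `suite`.
    return (host, suite, "passed")

def run_all(suites, hosts):
    # Round-robin the suites over the hosts; the pool size caps concurrency
    # at one session per host.
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        futures = [pool.submit(run_suite, hosts[i % len(hosts)], suite)
                   for i, suite in enumerate(suites)]
        return [f.result() for f in futures]

results = run_all(SUITES, HOSTS)
```

The scheduling here is deliberately naive; the point is only that execution time scales with the number of instances you can run in parallel.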
It is relatively easy to create disposable single-use scripts that mirror the steps of a manual test case; however, a little central guidance on deciding where and how to automate saves a lot of time in the long term by eliminating poor automation candidates and promoting coding standards.
Formal scripting can begin once the infrastructure is partially defined, up, and running.
Scripting can be broken down into the following, most of which have little to do with the actual act of creating code:
- Use Case Selection
- Path Variations
- Data Variations
- Tool/Tool Language
- Version management
- Error Handling
We start by identifying manual tests as “candidates” for automation. A good automation candidate is one that is either seldom changed (e.g. core regression) or sees a great deal of short-term use between edits (e.g. data driven). Identify the basic path, profile the necessary variations and data requirements, then move to coding. Code to a standard or template, version, add error handling to support robust unattended execution, and add detailed reporting as needed to eliminate false positives and enhance error localization.
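A minimal sketch of those scripting habits, assuming a hypothetical order-submission path: each step logs itself before running so a failure is localized to a named step, and the same path is exercised with several data variations.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("regression")

def step(name):
    # Template decorator: log the step on entry, log the step name on
    # failure so an unattended run reports exactly where it broke.
    def wrap(fn):
        def inner(*args, **kwargs):
            log.info("STEP: %s", name)
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                log.error("FAILED at step '%s': %s", name, exc)
                raise
        return inner
    return wrap

@step("open application")
def open_app():
    return True  # placeholder for launching the application under test

@step("submit order")
def submit_order(qty):
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return {"qty": qty, "status": "accepted"}

# Data variations: one basic path, several inputs.
results = []
for qty in (1, 5, 100):
    open_app()
    results.append(submit_order(qty))
```

The decorator stands in for whatever standard or template the team adopts; the important part is that every script reports failures the same way.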
A batch consolidates a collection of scripts, interfaces with their error handling, and outputs concise reports. Initially, batches are executed in single instances (as in, for example, nightly test execution), but are eventually structured to run multi-instanced to complete comprehensive automated suites quickly.
- Batch Consolidation
- Distributed Execution
- Error Handling
- Reporting
Start with single batches, refine the batch execution to distribute over multiple hosts, add batch level error handling for unattended execution, and add the appropriate level of reporting into an easily readable report file or dashboard.
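A minimal sketch of a single batch, with hypothetical scripts: each script runs inside a trap so one failure cannot stop an unattended run, and the output is a concise pass/fail summary suitable for a report file or dashboard.

```python
# Hypothetical test scripts; one is deliberately broken to show
# that the batch survives a failure and records it.
def check_login():
    assert 2 + 2 == 4

def check_search():
    assert "widget".startswith("wid")

def check_checkout():
    raise RuntimeError("payment stub unavailable")

SCRIPTS = [check_login, check_search, check_checkout]

def run_batch(scripts):
    # Run every script, trap failures so the batch finishes unattended,
    # and return a concise summary per script for the report.
    report = []
    for script in scripts:
        try:
            script()
            report.append((script.__name__, "PASS", ""))
        except Exception as exc:
            report.append((script.__name__, "FAIL", str(exc)))
    return report

report = run_batch(SCRIPTS)
```

Distributing this over multiple hosts is then a matter of handing each host its own slice of `SCRIPTS` and merging the reports.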
The ultimate goal of this phase is integration with your build tools. Integration allows test suites to execute automatically as code is deployed, so they can provide real-time feedback on overall build health.
- First, integrate execution hosts with build tool
- Then integrate batch with build tool for activated execution
- Then integrate batch with build tool for automatic execution
- And finally refine batch/build tool automatic execution
Fundamentally, add execution hosts to the build tool, add the batched test scripts as an independent build, then integrate the batched scripts and execution hosts into the build(s) proper. You can refine your integration by mapping the scripts to related build content (if applicable), as this reduces the number of redundant tests run when code is uploaded to the build.
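The mapping refinement can be sketched as follows, with hypothetical source areas and script names: given the files changed in a build, select only the scripts mapped to those areas, so unrelated tests are not re-run.

```python
# Hypothetical mapping from source areas to the test scripts that cover them.
SCRIPT_MAP = {
    "billing/":   ["invoice_regression", "payment_smoke"],
    "search/":    ["search_regression"],
    "ui/common/": ["login_smoke", "nav_smoke"],
}

def select_scripts(changed_files, script_map):
    # Pick only scripts whose mapped area matches a changed path,
    # de-duplicated while keeping a stable order.
    selected = []
    for path in changed_files:
        for prefix, scripts in script_map.items():
            if path.startswith(prefix):
                for script in scripts:
                    if script not in selected:
                        selected.append(script)
    return selected

# A build touching billing and common UI code skips the search suite entirely.
picked = select_scripts(["billing/tax.py", "ui/common/header.css"], SCRIPT_MAP)
```

A real build tool would supply `changed_files` from its changeset; the prefix match here is the simplest possible mapping and teams typically grow it into something richer.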
Everything requires maintenance; one of the major pitfalls of automation is not allowing for it. Application changes, expanding to cover greater scope, or discontinuation due to obsolescence are all reasons your scripts will need to be edited – so plan accordingly.
- Refactoring
- Refinements
- Script Edits/Obsolescence
- New Use Cases
Essentially, when you’re in maintenance you’re refactoring all the previous phases, adding refinements, editing/obsoleting existing test scripts and adding new ones to accommodate changes to the application.