Generic Automation Deployment Flow

An automation engagement has five primary phases:

  • Infrastructure
  • Scripting
  • Batching
  • Integration
  • Maintenance

There’s overlap from phase to phase, and any particular phase doesn’t necessarily need to be “complete” before moving to the next one, but this is the general flow:

  Infrastructure → Scripting → Batching → Integration → Maintenance

Infrastructure

Infrastructure is all about laying the groundwork to enable a successful automation effort.  Individual testers given automation tools can make minor gains, but making fundamental changes to development culture requires laying out infrastructure.

Infrastructure can be described in four categories:

  • Tool assets
  • Personnel assets
  • Physical assets
  • Software assets

Tool assets are generally the first place organizations start in automation: buy or find a tool that meets the general anticipated needs, then hand it out to the test staff.  The other three areas are a little more involved, because they require some additional, non-obvious investment and some time before returns are realized.

Personnel assets are assigned to do the heavy lifting in automation projects; it is desirable to have everyone in the test group familiar with and accustomed to using the tool(s), but a dedicated position is strongly recommended.  This gives the group some capability in handling more complicated scripting tasks and helps drive a central focus.

Physical and software assets are all about sandboxes for doing test tool development and final “production” test tool environments.  Generally speaking, servers running VMs or Remote Desktops are a better investment than workstations for prolonged user interface automation; UI automation strategies usually end up running one session per instance, so multiple instances are required to reduce execution time.
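
The instance math above can be sketched briefly; this is a minimal illustration with invented numbers, assuming one UI session per VM or Remote Desktop instance:

```python
# Hypothetical sizing helper: with one UI session per instance, wall-clock
# time scales roughly with total script runtime divided by instance count.
# All numbers below are illustrative assumptions, not tool output.
import math

def instances_needed(total_script_minutes: float, target_window_minutes: float) -> int:
    """Estimate how many instances are needed to finish a suite in the window."""
    if target_window_minutes <= 0:
        raise ValueError("target window must be positive")
    return max(1, math.ceil(total_script_minutes / target_window_minutes))

# e.g. 16 hours of scripts squeezed into a 2-hour overnight window
print(instances_needed(16 * 60, 2 * 60))  # -> 8
```

The same arithmetic argues for servers over workstations: adding instances to a VM host is far cheaper than provisioning another physical desk.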

Scripting

It is relatively easy to create disposable single-use scripts that mirror the steps of a manual test case; however, a little central guidance on deciding where and how to automate saves a lot of time in the long term by eliminating poor automation candidates and promoting coding standards.

Once the infrastructure is partially defined, up, and running, formal scripting can begin.

Scripting can be broken down into the following, most of which have little to do with the actual act of creating code:

  • Use Case Selection
    • Steps/Path
    • Path Variations
    • Data Variations
  • Recording/Coding
    • Tool/Tool Language
    • Version management
  • Error Handling
  • Reporting

Start with identifying manual tests as “candidates” for automation.  A good automation candidate is one that is either seldom changed (e.g. core regression) or sees a great deal of short-term use between edits (e.g. data driven).  Identify the basic path, profile the necessary variations and data requirements, then move to coding.  Code to a standard or template, keep scripts under version control, add error handling to support robust unattended execution, and add detailed reporting as needed to eliminate false positives and improve error localization.
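
The candidate-selection rule above can be expressed as a small triage sketch; the fields and thresholds here are assumptions for illustration, not part of any specific tool:

```python
# Illustrative triage of manual tests as automation candidates.
# Thresholds (<= 1 edit/year, 10x runs-to-edits ratio) are invented examples.
from dataclasses import dataclass

@dataclass
class ManualTest:
    name: str
    edits_per_year: int   # how often the test itself changes
    runs_per_year: int    # how often it is executed

def is_candidate(test: ManualTest) -> bool:
    """Good candidates are seldom changed (core regression) or heavily
    run between edits (data driven), per the guidance above."""
    seldom_changed = test.edits_per_year <= 1
    heavy_use = test.runs_per_year >= 10 * max(test.edits_per_year, 1)
    return seldom_changed or heavy_use

tests = [
    ManualTest("login smoke", edits_per_year=0, runs_per_year=50),
    ManualTest("promo banner layout", edits_per_year=12, runs_per_year=12),
]
print([t.name for t in tests if is_candidate(t)])  # -> ['login smoke']
```

Even a rough filter like this keeps frequently rewritten tests (poor candidates) from consuming scripting effort.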

Batching

A batch consolidates a collection of scripts, interfaces with their error handling, and outputs concise reports.  Initially, batches are executed as single instances (for example, nightly test execution), but are eventually structured to run multi-instanced so comprehensive automated suites complete quickly.

  • Base
  • Structure/Multi-Batching
  • Error Handling
  • Reporting

Start with single batches, refine the batch execution to distribute over multiple hosts, add batch level error handling for unattended execution, and add the appropriate level of reporting into an easily readable report file or dashboard.
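
The batching steps above might be sketched roughly as follows; the hosts, script names, and runner function are hypothetical placeholders, not any particular tool’s API:

```python
# Minimal batch-runner sketch: distribute scripts round-robin across
# execution hosts, trap per-script failures so one error never aborts the
# batch, and collect a concise pass/fail report.
from itertools import cycle

def plan_batches(scripts, hosts):
    """Assign scripts to hosts round-robin for multi-instance execution."""
    assignment = {h: [] for h in hosts}
    for script, host in zip(scripts, cycle(hosts)):
        assignment[host].append(script)
    return assignment

def run_batch(assignment, run_script):
    """Run every script under batch-level error handling."""
    results = {}
    for host, scripts in assignment.items():
        for script in scripts:
            try:
                run_script(host, script)
                results[script] = "PASS"
            except Exception as exc:  # trap, record, and keep going
                results[script] = f"FAIL: {exc}"
    return results

plan = plan_batches(["t1", "t2", "t3"], ["vm-a", "vm-b"])
report = run_batch(plan, lambda host, script: None)  # stub runner: all pass
print(report)
```

A real runner would invoke the test tool on each host; the `results` dictionary is the raw material for the report file or dashboard mentioned above.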

Integration

The ultimate goal is integration with build tools, so test suites execute automatically as code is deployed and provide real-time feedback on overall build health.

  • Integrate execution hosts with build tool
  • Integrate batch with build tool for manually activated execution
  • Integrate batch with build tool for automatic execution
  • Refine batch/build tool automatic execution

Fundamentally, add execution hosts to the build tool, add the batched test scripts as an independent build, then integrate the batched scripts + execution hosts into the build(s) proper.  Refine by mapping the scripts to related build content (if applicable) to reduce the number of redundant tests run when code is uploaded to the build.
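
That refinement step can be sketched as a simple coverage map; the script names, file paths, and mapping below are invented for illustration:

```python
# Hedged sketch of refining build integration: map each batched script to
# the build content it exercises, then run only the scripts touched by a
# given change set instead of the whole suite.
SCRIPT_COVERAGE = {
    "checkout_regression": {"src/cart.py", "src/payment.py"},
    "search_regression":   {"src/search.py"},
    "login_smoke":         {"src/auth.py"},
}

def scripts_for_change(changed_files):
    """Select only the scripts whose covered files intersect the change."""
    changed = set(changed_files)
    return sorted(name for name, files in SCRIPT_COVERAGE.items()
                  if files & changed)

print(scripts_for_change(["src/payment.py"]))  # -> ['checkout_regression']
```

In practice the build tool would supply the changed-file list per commit, and an empty result could fall back to a periodic full run.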

Maintenance

Everything requires maintenance; one of the major pitfalls of automation is not allowing for it.  Scripts need to be edited to accommodate application changes, expanded to cover greater scope, or simply discontinued due to obsolescence.

  • Refinement
  • Obsolescence
  • New Use Cases

Essentially, when you’re in maintenance you’re refactoring all the previous phases: adding refinements, editing or retiring existing test scripts, and adding new ones to accommodate changes to the application.
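
One way to budget for maintenance is a periodic triage pass; this sketch flags scripts that haven’t passed recently as candidates for refinement or obsolescence (the 90-day threshold and script names are assumptions):

```python
# Illustrative maintenance triage: scripts without a recent passing run are
# flagged for review -- refine, rewrite, or retire. Threshold is an assumption.
from datetime import date, timedelta

def triage(scripts, today, stale_after_days=90):
    """Split scripts into healthy ones and maintenance candidates,
    based on the date of their last passing run."""
    stale_cutoff = today - timedelta(days=stale_after_days)
    healthy, needs_review = [], []
    for name, last_pass in scripts.items():
        (healthy if last_pass >= stale_cutoff else needs_review).append(name)
    return sorted(healthy), sorted(needs_review)

scripts = {
    "login_smoke": date(2024, 6, 1),
    "legacy_export": date(2023, 1, 15),
}
print(triage(scripts, today=date(2024, 6, 30)))
```

Running something like this on the batch report keeps the obsolescence decision deliberate instead of letting dead scripts silently inflate suite runtime.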