
Automation in QA: When is the Right Time?
Macie Hatamian
Test automation is a critical part of quality assurance. It accelerates delivery, tightens feedback loops, and makes continuous deployment safe. It also has real limits. Getting the mix right is the difference between testing as a matter of course, and testing as a strategic advantage.
The Pros of Test Automation
Test automation provides several key advantages:
- Speed & Efficiency. Automated tests execute much faster than manual testing. A regression suite that takes hours to complete manually can be run in minutes with automation, saving time and effort.
- Repeatability & Consistency. Automation ensures that tests are executed the same way every time, eliminating human errors caused by fatigue.
- Cost Savings in the Long Run. The up-front investment pays off over time by reducing repetitive manual testing efforts, especially in large projects with frequent releases.
- Better Coverage. Automated tests can validate a large number of scenarios in less time, leading to broader test coverage. This is crucial for regression testing, API testing, and performance testing.
- CI/CD Integration. New code changes are tested automatically before release, helping keep deployments safe.
The Cons of Test Automation
Despite its advantages, automation comes with some challenges:
- High Initial Investment. Setting up test automation requires time, expertise, and tools. Creating robust test scripts and maintaining them can be resource-intensive.
- Maintenance Overhead. Automated scripts can break when the application changes, requiring frequent updates. UI-heavy applications with frequent design changes can have particularly high overheads.
- Not a Replacement for Manual Testing. Automation is great for repetitive tasks, but it cannot replace human intuition, creativity, and exploratory testing.
- Risk of Flaky Tests or False Positives. Unstable or imprecise tests can produce unreliable results, erode trust in the suite, and cause more harm than good.
What’s Worth Automating?
Not all tests should be automated. The following types of tests are ideal candidates for automation:
- Regression Testing. Automating regression tests ensures that new code changes don’t break existing functionality. Regression testing is repetitive and automation significantly improves efficiency.
- Smoke and Sanity Testing. Smoke tests verify that the basic functionality of an application is working after a new deployment. Automating them helps detect major issues early.
- Performance and Load Testing. Simulating thousands of users manually is impractical. Automation tools like JMeter and Gatling help measure system performance under load.
- API Testing. APIs are less prone to UI-driven changes, making tests more stable. Automated API tests ensure that backend functionality works as expected.
- Data-Driven Testing. If a test requires multiple input combinations, automation allows efficient testing of different data sets without manual intervention.
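To make the data-driven case concrete, here is a minimal sketch in plain Python. The fare calculator and its cases are hypothetical, not Turnstone logic; the point is that one test body runs against a table of inputs and expected outputs, so adding coverage means adding a row, not a new test. (Frameworks like pytest offer the same pattern via parametrized tests.)

```python
# Hypothetical data-driven test: one assertion loop, many input combinations.
def parking_rate(hours: int) -> float:
    """Toy fare calculator: $2.50 per hour, capped at a $15.00 daily maximum."""
    return min(hours * 2.50, 15.00)

# Each tuple is one test case: (input hours, expected charge).
CASES = [
    (1, 2.50),    # single hour
    (4, 10.00),   # several hours, under the cap
    (8, 15.00),   # reaches the daily cap
    (12, 15.00),  # stays at the cap
]

def run_cases():
    """Run every case and return the list of failures (empty means all passed)."""
    failures = []
    for hours, expected in CASES:
        actual = parking_rate(hours)
        if actual != expected:
            failures.append((hours, expected, actual))
    return failures

if __name__ == "__main__":
    assert run_cases() == [], run_cases()
    print("all cases passed")
```

Extending coverage to new tariffs or edge cases is then a one-line change to `CASES`, which is exactly the maintenance profile that makes data-driven tests worth automating.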
Real-World Example: Automating Turnstone’s Zone Data Collection
At the core of Turnstone is a data pipeline that continuously ingests, normalizes, and organizes city parking data, then builds modelled outputs based on that input. Changes happen to upstream data sources regularly, and product code is similarly updated with regular UI and functional improvements.
Manually validating the parking data that appears on the platform’s frontend, across four cities, each with multiple permutations of date and time, took up to eight hours of focused testing.
We built a Selenium script to handle this task efficiently. Here’s how it works:
- Login and setup. The script opens Chrome, logs into Turnstone, and starts a loop to iterate through each city in order.
- Data extraction. Within each city, the script retrieves Active Transactions, Paid Occupancy, and Modeled Occupancy across multiple time ranges (last week, last month, and prior months), and immediately flags missing data for human review.
- Export and delivery. Results land in an Excel file, a run summary is generated (zones processed, anomalies found), and the file is emailed to the right recipients automatically. Those recipients can then look for more nuanced anomalies (an opportunity for future automation).
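The flagging-and-summary step of that workflow can be sketched in a simplified, browser-free form. The metric names come from the description above, but the city name, data shapes, and function names here are illustrative, and the Selenium navigation itself is omitted:

```python
# Illustrative sketch of the flagging and summary logic. The real script drives
# Chrome via Selenium to populate `extracted`; that part is elided here.
METRICS = ("Active Transactions", "Paid Occupancy", "Modeled Occupancy")
TIME_RANGES = ("last week", "last month", "prior months")

def flag_missing(extracted):
    """Return (city, metric, time_range) triples with no value, for human review.

    `extracted` maps city -> metric -> time range -> value (None if absent).
    """
    anomalies = []
    for city, by_metric in extracted.items():
        for metric in METRICS:
            for time_range in TIME_RANGES:
                if by_metric.get(metric, {}).get(time_range) is None:
                    anomalies.append((city, metric, time_range))
    return anomalies

def run_summary(extracted, anomalies):
    """One-line summary for the emailed report."""
    return f"{len(extracted)} cities processed, {len(anomalies)} anomalies found"
```

Keeping this logic separate from the browser-driving code also makes it testable on its own, without launching Chrome at all.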
With this automation in place, eight hours of manual testing was reduced to an hour of oversight, an 88% reduction that paid for itself within a quarter. The results are all shared, giving the team confidence in the suite and a basis for continual improvement. And the team was empowered to move on to harder problems.
What’s Not Worth Automating?
Some things are best left to manual testing:
- Exploratory Testing. Exploratory testing requires human creativity to find edge cases and usability issues that automated scripts cannot detect.
- Frequently Changing UI. If an application’s UI is constantly evolving, maintaining automation scripts becomes expensive and time-consuming.
- One-Time or Rarely Run Tests. Tests that are only executed once or very infrequently may not justify the effort required to automate them.
- Usability and UX Testing. User experience is subjective and requires human judgment. Automation does a poor job assessing how intuitive an application feels to real users.
Counter-Example: Visual Design Reviews
A prime example where automation doesn’t make sense is visual review on an evolving UI.
Fonts, colours, spacing, alignment, and the behaviour of dynamic elements like pop-ups and animations are exactly the kinds of details that matter to a finished product and that automated tools struggle to judge reliably. Frequent design updates make snapshot-based visual regression suites go stale quickly, which turns the automation itself into a maintenance burden, potentially wiping out any savings automation creates.
Instead of automation, visual reviews should be done through a combination of manual testing and design tools like Figma, Adobe XD, or screenshot comparison tools. Testers and designers who collaborate well can ensure the UI meets expectations faster and more consistently than automation.
Best Practices for Effective Test Automation
To maximize the benefits of test automation, consider the following best practices:
- Choose the Right Tools. Match the tool to the layer under test: for example, Selenium for browser UI, Cypress for frontend applications, Postman for APIs.
- Write Maintainable Scripts. Use modular, reusable functions to reduce maintenance effort.
- Prioritize Stable, High-Value Tests. Focus on automating the tests that provide the most ROI.
- Regularly Review and Update Tests. Keep scripts up to date with application changes so failures reflect real bugs, not stale scripts.
- Integrate with CI/CD. Run tests automatically in your pipeline to catch issues early.
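As one small illustration of the "modular, reusable functions" practice, a shared retry helper can replace the ad-hoc sleep-and-retry loops that tend to accumulate in UI suites and cause flakiness. This is a hypothetical sketch, not tied to any particular framework:

```python
# Hypothetical reusable helper: retry a flaky step instead of duplicating
# sleep-and-retry loops across every test.
import time

def with_retries(action, retries=3, delay=0.1, exceptions=(Exception,)):
    """Run `action` up to `retries` times, pausing `delay` seconds between
    attempts. Re-raises the last error if every attempt fails.

    Centralizing this logic keeps individual test steps short and makes the
    suite's retry policy consistent and tunable in one place.
    """
    last_error = None
    for _ in range(retries):
        try:
            return action()
        except exceptions as err:
            last_error = err
            time.sleep(delay)
    raise last_error
```

A test step then becomes a one-liner, e.g. `with_retries(lambda: click_submit_button())`, and tightening or loosening the retry policy is a single change rather than a sweep through the whole suite.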
Conclusion
Test automation is a powerful way to improve efficiency, reduce testing time, and increase software quality, but it’s essential to strike the right balance. Automation is great for repetitive, stable, and high-value tests, but a poor fit for visual reviews of a rapidly changing UI.
Our Turnstone script saved seven hours of testing per run. Keeping visual review manual avoids maintenance that costs as much as the tests themselves. QA teams who thoughtfully consider what’s worth automating and what isn’t turn testing into a strategic advantage, supporting faster, more reliable, high-quality software delivery.
Macie Hatamian is a senior quality assurance lead with experience in both manual and automation testing.