QA Automation Audit Checklist

A practical checklist to evaluate your current automation framework, CI/CD setup, API coverage, flaky tests, reporting, test data, and release readiness. Score yourself honestly — the gaps are where the biggest wins are.

Book a Free Consultation

Tip: use your browser's Print → Save as PDF to keep an offline copy.

01 · Framework architecture

  • There is clear test layering (page objects / screen actions / domain models).
  • New tests can be added without touching unrelated code.
  • Selectors are centralized, named meaningfully, and resilient.
  • Common waits, assertions, and helpers are abstracted (no copy-paste sleeps).
  • Framework is version-controlled in the same repository as the product (or clearly linked).
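The selector and layering points above can be sketched in a few lines of Python. This is a minimal illustration, not any specific tool's API: `Selectors`, `LoginPage`, and the recording driver are hypothetical stand-ins for whatever driver your framework wraps (Selenium, Playwright, etc.).

```python
class Selectors:
    """Central, meaningfully named selectors for the login screen.

    A markup change means one edit here, not a hunt through every test.
    """
    EMAIL = "[data-test=login-email]"
    PASSWORD = "[data-test=login-password]"
    SUBMIT = "[data-test=login-submit]"


class LoginPage:
    """Page object: tests call intent-level methods, never raw selectors."""

    def __init__(self, driver):
        self.driver = driver  # any object exposing fill() and click()

    def log_in(self, email: str, password: str) -> None:
        self.driver.fill(Selectors.EMAIL, email)
        self.driver.fill(Selectors.PASSWORD, password)
        self.driver.click(Selectors.SUBMIT)


class RecordingDriver:
    """Test double that records actions instead of touching a browser."""

    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))


driver = RecordingDriver()
LoginPage(driver).log_in("qa@example.com", "secret")
assert driver.actions[-1] == ("click", Selectors.SUBMIT)
```

Because tests depend only on `LoginPage.log_in`, adding a new login test touches no unrelated code, which is exactly the layering the checklist asks for.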

02 · Test reliability and flakiness

  • Flaky tests are tracked, not silenced or auto-retried into passing.
  • Root causes for the top 5 flaky tests are documented.
  • Timing waits use deterministic conditions (not arbitrary sleeps).
  • Test data resets cleanly between runs.
  • There is a flake budget or acceptable failure threshold defined.
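The "deterministic conditions" bullet usually reduces to one helper: poll a condition with a deadline instead of sleeping a fixed amount. A minimal Python sketch (most frameworks ship an equivalent, e.g. explicit waits in Selenium):

```python
import time


def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Replaces arbitrary sleep(N): the test proceeds the moment the condition
    holds, and fails with a clear TimeoutError instead of hanging or
    passing by luck.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")


# Usage: wait for a (simulated) async job to finish.
state = {"done": False}
state["done"] = True  # in real tests, a background process flips this
assert wait_until(lambda: state["done"]) is True
```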

03 · CI/CD execution

  • Tests run on every pull request, not just on release.
  • CI execution time is under 30 minutes for the critical path.
  • Parallelization or sharding is in place for slow test groups.
  • Pipeline gates merges on test results (or there is an explicit reason it doesn't).
  • Test failures show enough context (logs, screenshots, traces) to debug without re-running locally.
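One way to get sharding without a central coordinator is stable hash-based assignment: each CI job runs only the tests whose shard index matches its own. A sketch with hypothetical test names; real runners (pytest-xdist, Playwright's `--shard`, CI matrix jobs) provide equivalents.

```python
import zlib


def shard_for(test_id: str, num_shards: int) -> int:
    """Deterministically assign a test to a shard.

    CRC32 of the test id keeps the split stable across CI runs, so each
    parallel job can compute its own batch independently.
    """
    return zlib.crc32(test_id.encode()) % num_shards


tests = ["test_login", "test_checkout", "test_search", "test_profile"]
num_shards = 2
for shard in range(num_shards):
    batch = [t for t in tests if shard_for(t, num_shards) == shard]
    print(f"shard {shard}: {batch}")
```

Hash assignment can produce uneven shards; runners that balance by recorded test duration split more evenly, at the cost of needing timing data.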

04 · API test coverage

  • Critical API endpoints have automated tests separate from UI tests.
  • Tests cover positive, negative, and edge-case requests.
  • Schema or contract is validated where applicable.
  • API tests run in CI on every change to backend code.
  • Authentication, environment, and test data are externalized — not hard-coded.
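To make the schema/contract bullet concrete, here is a deliberately tiny stand-in for a real schema library (e.g. JSON Schema validators): each contract entry maps a required field to its expected type. `USER_CONTRACT` and the payload shape are illustrative assumptions.

```python
def check_contract(payload: dict, contract: dict) -> list:
    """Return a list of contract violations (empty list = payload conforms)."""
    errors = []
    for field, expected_type in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return errors


USER_CONTRACT = {"id": int, "email": str, "active": bool}

assert check_contract(
    {"id": 1, "email": "a@b.com", "active": True}, USER_CONTRACT
) == []
assert check_contract({"id": "1", "email": "a@b.com"}, USER_CONTRACT) == [
    "id: expected int, got str",
    "missing field: active",
]
```

In practice a published contract (OpenAPI, JSON Schema) is the better source of truth; the point is that the assertion names the violated field, not just "response invalid".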

05 · UI test coverage

  • Tests cover critical user journeys (signup, checkout, core workflows).
  • Coverage is mapped to features — you know which flows are NOT covered.
  • UI tests are isolated from backend setup (use API for arrange when possible).
  • Visual / accessibility / cross-browser checks are addressed where they matter.
  • Mobile viewports are tested if the product is used on mobile.
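"You know which flows are NOT covered" implies the mapping is computable, not tribal knowledge. A minimal sketch, with hypothetical flow and test names: record which journeys each test claims to cover, then take the set difference.

```python
CRITICAL_FLOWS = {"signup", "checkout", "search", "password_reset"}

# Which journeys each test claims to cover (illustrative names).
TEST_COVERAGE = {
    "test_signup_happy_path": {"signup"},
    "test_checkout_with_saved_card": {"checkout"},
    "test_search_basic": {"search"},
}


def uncovered_flows(flows, coverage):
    """Flows no test claims to cover, i.e. the explicit gap list."""
    covered = set().union(*coverage.values()) if coverage else set()
    return sorted(flows - covered)


print(uncovered_flows(CRITICAL_FLOWS, TEST_COVERAGE))  # ['password_reset']
```

The mapping can live as test markers or tags so the gap report regenerates from the code itself rather than a stale spreadsheet.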

06 · Test data strategy

  • Tests don't depend on shared mutable state.
  • There is a defined approach for creating/cleaning test data (factories, API setup, fixtures).
  • PII / production data is never used in test environments.
  • Test data setup is fast — tests don't spend more time arranging than asserting.
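The first two bullets often come down to a data factory: every test builds its own unique records and overrides only what it cares about. A minimal sketch; the `make_user` fields are hypothetical.

```python
import itertools

_seq = itertools.count(1)


def make_user(**overrides):
    """Build an isolated, unique test user.

    Each call gets a fresh email, so parallel tests never collide on
    shared state; overrides let a test state only the fields it is
    actually asserting on.
    """
    n = next(_seq)
    user = {
        "email": f"user{n}@test.example",
        "name": f"Test User {n}",
        "role": "member",
    }
    user.update(overrides)
    return user


admin = make_user(role="admin")
other = make_user()
assert admin["role"] == "admin"
assert admin["email"] != other["email"]  # no shared mutable state
```

In a real suite the factory would typically persist through the API (the "use API for arrange" point above), with cleanup or a per-run namespace handling teardown.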

07 · Environment configuration

  • Tests can run against multiple environments (local, staging, prod-mirror) by config.
  • Secrets are managed properly (env vars, vaults) — never in source.
  • Environment-specific behavior is parameterized, not branched in test code.
  • There is a dedicated, stable QA environment (or clear plan to get one).
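A sketch of the "multiple environments by config" and "parameterized, not branched" points: one variable selects a profile, tests read settings and never branch on environment names. The variable names, URLs, and profiles are illustrative assumptions.

```python
import os

# Hypothetical environment profiles; test code only ever reads config
# values, never checks "if env == 'staging'".
PROFILES = {
    "local":   {"base_url": "http://localhost:3000", "retries": 0},
    "staging": {"base_url": "https://staging.example.com", "retries": 2},
}


def load_config(env=None):
    """Resolve the target environment from an argument or TARGET_ENV."""
    env = env or os.environ.get("TARGET_ENV", "local")
    if env not in PROFILES:
        raise ValueError(f"unknown environment: {env!r}")
    config = dict(PROFILES[env])
    # Secrets come from the environment (or a vault), never from source.
    config["api_token"] = os.environ.get("API_TOKEN", "")
    return config


cfg = load_config("staging")
assert cfg["base_url"].startswith("https://")
```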

08 · Reporting and debugging

  • Failure output is actionable — points to the failing assertion or step, not 'something broke'.
  • Reports are accessible to non-QA stakeholders (PMs, engineering managers).
  • Trends over time are visible (pass rate, duration, flakiness).
  • Reports integrate with test management (Qase, TestRail, Xray) where used.
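"Trends over time" only needs a handful of numbers per run. A minimal sketch of the aggregation, assuming a hypothetical per-run record shape; real dashboards (CI analytics, test management tools) compute the same figures.

```python
def run_trends(runs):
    """Summarize pass rate and duration over recent CI runs.

    `runs` is a list of dicts: {"passed": int, "failed": int, "seconds": float}.
    Surfacing these over time is what makes slow drifts in flakiness or
    runtime visible to non-QA stakeholders.
    """
    total = sum(r["passed"] + r["failed"] for r in runs)
    passed = sum(r["passed"] for r in runs)
    return {
        "pass_rate": round(passed / total, 3) if total else None,
        "avg_duration_s": round(sum(r["seconds"] for r in runs) / len(runs), 1),
    }


history = [
    {"passed": 98, "failed": 2, "seconds": 620.0},
    {"passed": 95, "failed": 5, "seconds": 655.0},
]
summary = run_trends(history)
assert abs(summary["pass_rate"] - 0.965) < 1e-9
```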

09 · Performance testing readiness

  • Critical performance scenarios (login, search, checkout) have at least smoke load tests.
  • Baselines and SLAs are documented.
  • There is a plan for spike, stress, and endurance testing for high-risk launches.
  • Performance tests run in a stable, comparable environment.
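A documented baseline is only useful if something checks against it. A sketch of a p95-versus-SLA smoke check using the nearest-rank percentile method; the latency numbers and the 500 ms SLA are invented for illustration.

```python
import math


def p95(samples_ms):
    """95th-percentile latency via the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]


def check_sla(samples_ms, sla_ms):
    """Compare a run's p95 against the documented SLA.

    Running this on every release keeps performance regressions from
    being discovered only under real load.
    """
    observed = p95(samples_ms)
    return {"p95_ms": observed, "sla_ms": sla_ms, "ok": observed <= sla_ms}


# 20 simulated checkout latencies in milliseconds (illustrative data).
latencies = [180, 190, 200, 205, 210, 210, 215, 220, 225, 230,
             235, 240, 245, 250, 260, 270, 280, 300, 350, 900]
result = check_sla(latencies, sla_ms=500)
assert result["ok"] and result["p95_ms"] == 350
```

Note how p95 (350 ms) hides the single 900 ms outlier that the mean would have surfaced; baselines usually track several percentiles for this reason.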

10 · Ownership and maintainability

  • More than one engineer understands the framework.
  • Documentation (README, contribution guide) exists and is current.
  • Code review applies to test code as much as production code.
  • Tests are deleted when the feature is removed (no zombie tests).
  • Onboarding a new contributor takes hours, not weeks.

11 · AI-assisted QA opportunities

  • Test design is a bottleneck — AI assistance could accelerate it.
  • Coverage mapping is manual and incomplete — AI could help reason about gaps.
  • Engineers spend significant time scaffolding new tests — AI could draft them.
  • There's an AI experimentation policy or appetite — not a blanket prohibition.

© 2026 Aleksandar Stojanovic · www.automatewithalex.com