Quality Engineering vs Traditional QA
The shift from traditional quality assurance (QA) to quality engineering (QE) represents one of the most significant evolutions in software development over the past decade. While both aim to deliver working software, they differ fundamentally in approach, timing, ownership, and tooling. Understanding these differences is essential for teams looking to modernize their quality practices and keep pace with the demands of continuous delivery.
This lesson compares the two approaches in detail, explains the tools and practices that define modern quality engineering, and introduces the testing pyramid that guides how to allocate testing effort effectively.
Traditional QA: The Gatekeeper Model
Traditional quality assurance follows what might be called the "gatekeeper model." Quality is the responsibility of a separate QA team that operates as a quality gate near the end of the development process. Here is how it typically works:
- Separate team: The QA team is organizationally distinct from the development team. Testers may report to a different manager, sit in a different area, and have separate planning and prioritization.
- Testing after development: Development is considered "done" when the developer says it is done. The feature is then "thrown over the wall" to QA for testing. Testing happens after the fact, not during development.
- Manual test scripts: Testers follow written test scripts that describe step-by-step procedures for verifying functionality. These scripts are often maintained in spreadsheets or dedicated test management tools. Creating and updating them is time-consuming.
- Quality gate at the end: QA serves as a checkpoint before release. If testing reveals bugs, the feature goes back to development for fixes, and the cycle repeats. This creates a "ping-pong" dynamic between development and QA that can stretch delivery timelines significantly.
- Focus on functional testing: Traditional QA primarily asks "does it work as specified?" Non-functional aspects like accessibility, security, and performance often receive less attention or are handled by separate specialized teams (if they are handled at all).
- Release-based cadence: Testing is organized around releases. A "test phase" might last days or weeks before a scheduled release. This model makes it difficult to support continuous delivery.
Quality Engineering: The Embedded Model
Quality engineering replaces the gatekeeper model with an embedded model where quality is woven into every stage of development. The differences are substantial:
- Embedded in development: Quality engineers work alongside developers, not in a separate team. They participate in design discussions, contribute to code reviews, and pair with developers to write tests. In many modern teams, there is no separate QA team at all — developers own quality with guidance from quality engineering specialists.
- Automated from the start: Quality checks are automated and run continuously, not manually at the end. Every push to the repository triggers a suite of automated checks. Quality feedback arrives in minutes, not days.
- Everyone owns quality: Quality is not delegated to a separate team. Developers write tests. Designers consider accessibility. Product managers define quality criteria. The entire team is accountable for the quality of what ships.
- Continuous feedback loops: Rather than a single quality gate before release, quality engineering creates multiple fast feedback loops throughout development: pre-commit hooks, CI/CD pipeline checks, code review, staging environment tests, and production monitoring.
- Multi-dimensional quality: QE goes beyond "does it work?" to ask "is it accessible, secure, performant, discoverable, and maintainable?" Quality is measured across multiple dimensions simultaneously.
- Continuous delivery: Quality engineering supports deploying multiple times per day because automated checks provide rapid confidence in each change. There is no multi-day "test phase" because testing happens continuously.
Side-by-Side Comparison
The following comparison highlights the key differences between traditional QA and quality engineering across several dimensions:
When Quality Happens
- Traditional QA: After development, during a dedicated test phase
- Quality Engineering: Continuously, at every stage from requirements to production monitoring
Who Owns Quality
- Traditional QA: The QA team owns quality; developers write code, QA tests it
- Quality Engineering: The entire team owns quality; developers write tests, designers ensure accessibility, PMs define quality criteria
Automation Level
- Traditional QA: Primarily manual testing with some automation (often an afterthought)
- Quality Engineering: Automation-first, with manual testing reserved for exploratory and usability testing
Feedback Speed
- Traditional QA: Days or weeks between writing code and receiving quality feedback
- Quality Engineering: Minutes to hours; automated checks run on every push
Scope of Quality
- Traditional QA: Primarily functional correctness ("does it work?")
- Quality Engineering: Multi-dimensional: accessibility, security, performance, SEO, code quality, and functional correctness
Deployment Frequency
- Traditional QA: Release-based (weekly, biweekly, or monthly deployments)
- Quality Engineering: Continuous (multiple deployments per day)
The Quality Engineering Toolchain
Modern quality engineering relies on a rich ecosystem of tools that automate quality checks across dimensions. Understanding this toolchain is essential for implementing QE practices effectively:
CI/CD Pipelines
The foundation of quality engineering automation is the CI/CD pipeline. Every push to the repository triggers a series of automated jobs that build, test, and validate the code.
- GitHub Actions: GitHub's built-in CI/CD platform. Excellent integration with pull requests. Free for public repositories, with a generous free tier for private ones.
- Jenkins: The long-standing open-source CI/CD server. Highly customizable with thousands of plugins. Requires self-hosting but offers maximum flexibility.
- GitLab CI/CD: GitLab's integrated pipeline system. Tightly coupled with GitLab's code hosting and review features.
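Because every check in a pipeline ultimately boils down to running commands and inspecting exit codes, a quality-gate step can be a small script. Here is a minimal sketch in TypeScript, assuming a Node project; the specific tools invoked (Vitest, ESLint, pa11y-ci) are illustrative choices, not requirements:

```typescript
// quality-gate.ts: an illustrative pipeline step for a Node project.
// Runs each quality check, reports every failure, and fails the build
// via a nonzero exit code if any check failed.
import { execSync } from "node:child_process";

const checks: Array<[name: string, command: string]> = [
  ["unit tests", "npx vitest run"],
  ["lint", "npx eslint ."],
  ["accessibility", "npx pa11y-ci"],
];

let failed = false;
for (const [name, command] of checks) {
  try {
    console.log(`Running ${name}...`);
    execSync(command, { stdio: "inherit" });
  } catch {
    console.error(`Check failed: ${name}`);
    failed = true; // keep going so one run surfaces all failing checks
  }
}

process.exit(failed ? 1 : 0); // the exit code is what marks the build red or green
```

Any of the platforms above would invoke a script like this as one job step; the platform's role is to trigger it on every push and surface the result on the pull request.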
Automated Accessibility Testing
Accessibility testing tools check web content against WCAG (Web Content Accessibility Guidelines) standards. Automated testing cannot certify full WCAG conformance — these tools typically detect only about 30% of issues in real-world audits. Manual testing with assistive technologies such as screen readers and keyboard-only navigation is required to achieve genuine conformance. That said, automated tools identify common problems that affect a large number of users and provide a valuable first line of defense.
- Pa11y: Open-source command-line accessibility testing tool. Can be integrated into CI/CD pipelines to run on every build. Tests against WCAG 2.1 AA standards by default; a scripted example follows this list.
- axe-core: The accessibility testing engine by Deque. Available as a browser extension (axe DevTools), a JavaScript library, and a CI/CD integration. Widely regarded as having high accuracy with very few false positives.
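Pa11y also exposes a Node API, which makes the CI integration mentioned above straightforward to script. A minimal sketch, assuming the `pa11y` npm package and a reachable URL:

```typescript
// Audit a single page with Pa11y and fail the process if issues are found.
import pa11y from "pa11y";

const results = await pa11y("https://example.com", { standard: "WCAG2AA" });

for (const issue of results.issues) {
  console.log(`${issue.code}: ${issue.message}\n  at ${issue.selector}`);
}

// A nonzero exit fails the CI step when any issue is detected.
process.exit(results.issues.length > 0 ? 1 : 0);
```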
Security Scanning
Security tools scan code and dependencies for vulnerabilities, exposed secrets, and configuration issues.
- Gitleaks: Scans git repositories for secrets (API keys, passwords, tokens) that should not be committed. Fast and effective for preventing secret exposure; a hook sketch follows this list.
- OWASP ZAP: Open-source web application security scanner. Can perform automated security testing against running applications. Covers many of the OWASP Top 10 vulnerability categories.
- Dependabot / Snyk: Dependency scanning tools that check your project's dependencies for known security vulnerabilities. They can automatically create pull requests to update vulnerable dependencies.
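As an example of wiring Gitleaks into the workflow, here is a sketch of a pre-push script that shells out to the CLI. It assumes `gitleaks` (v8) is installed and on the PATH, and relies on its default behavior of exiting nonzero when leaks are found:

```typescript
// Block a push when Gitleaks finds potential secrets in the repository.
import { spawnSync } from "node:child_process";

const scan = spawnSync("gitleaks", ["detect", "--source", ".", "--redact"], {
  stdio: "inherit", // stream Gitleaks' own report to the console
});

if (scan.status !== 0) {
  console.error("Potential secrets detected; push blocked.");
  process.exit(1);
}
console.log("No secrets found.");
```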
Code Review Automation
Code review is a critical quality checkpoint. Modern tools augment human reviewers with automated analysis.
- CodeRabbit: AI-powered code review tool that analyzes pull requests for bugs, security vulnerabilities, performance issues, and best practice violations. It reviews code within seconds of a PR being opened, providing feedback that complements human review.
- SonarQube: Comprehensive static analysis platform that checks code quality, code smells, bugs, and security vulnerabilities. Supports dozens of programming languages.
Performance Testing
- Lighthouse: Google's automated auditing tool for web performance, accessibility, SEO, and best practices. Available as a browser extension, command-line tool, and CI/CD integration (Lighthouse CI). A Node API sketch follows this list.
- WebPageTest: Detailed performance analysis tool that tests from real browsers in real locations. Excellent for understanding real-world performance characteristics.
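Lighthouse can also be driven programmatically, which is how teams typically enforce performance budgets in a pipeline. A sketch using its Node API, assuming the `lighthouse` and `chrome-launcher` npm packages (the 0.9 budget is an arbitrary example threshold):

```typescript
// Run a headless Lighthouse audit and enforce a simple performance budget.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
const result = await lighthouse("https://example.com", {
  port: chrome.port,
  onlyCategories: ["performance", "accessibility"],
});
await chrome.kill();

for (const category of Object.values(result?.lhr.categories ?? {})) {
  console.log(`${category.title}: ${Math.round((category.score ?? 0) * 100)}`);
}

// Fail the CI step when the performance score drops below the budget.
const performance = result?.lhr.categories["performance"]?.score ?? 0;
if (performance < 0.9) process.exit(1);
```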
Comprehensive Quality Analysis
- CodeFrog: Combines multiple quality checks into a single analysis covering accessibility, security, performance, SEO, HTML validation, and more. Provides a unified quality score and detailed findings across all dimensions, making it easier to maintain a holistic view of quality.
Monitoring and Observability
- Sentry: Error tracking platform that captures and aggregates production errors with full stack traces and context. A minimal setup sketch follows this list.
- UptimeRobot / Pingdom: Uptime monitoring services that alert you when your site goes down.
- Real User Monitoring (RUM): Tools that collect performance data from actual user sessions, providing ground-truth performance metrics.
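To illustrate how little wiring the error-tracking side requires, here is a minimal Sentry setup for a Node service, assuming the `@sentry/node` SDK (the DSN is a placeholder; a real one comes from your Sentry project settings):

```typescript
// Minimal Sentry wiring for a Node service: initialize once, capture errors anywhere.
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  tracesSampleRate: 0.1, // sample 10% of transactions for performance data
});

function riskyOperation(): void {
  throw new Error("example failure");
}

try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err); // aggregated with stack trace and context
}
```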
QE Does Not Replace Manual Testing
A common misconception about quality engineering is that it eliminates the need for manual testing. This is not the case. Quality engineering augments manual testing by automating the repetitive, deterministic checks so that humans can focus on the testing that requires human judgment, creativity, and intuition.
Manual testing remains essential for:
- Exploratory testing: A skilled tester exploring the application without a script, using their experience and creativity to find issues that no one anticipated. Exploratory testing finds bugs that automated tests cannot because the tester is thinking about scenarios that were never scripted.
- Usability testing: Is the interface intuitive? Is the workflow logical? Does the error message help the user recover? These are fundamentally human questions that require human judgment.
- Accessibility testing with assistive technology: While automated tools catch many accessibility issues, testing with an actual screen reader reveals problems that automation misses: confusing reading order, missing context, poor announcements, and navigation difficulties.
- Edge case discovery: Human testers are excellent at thinking "what if?" What if the user enters emoji in the name field? What if they switch tabs mid-transaction? What if they use the back button?
The Testing Pyramid
The testing pyramid, originally described by Mike Cohn, provides a guide for how to allocate testing effort across different levels of the test stack. It is a foundational concept in quality engineering:
Base: Unit Tests (Many)
Unit tests form the wide base of the pyramid. They test individual functions, methods, or components in isolation. They are:
- Fast: A typical unit test suite with hundreds of tests runs in seconds
- Cheap: Easy to write, easy to maintain, and fast to execute
- Focused: When a unit test fails, you know exactly which component has the issue
- Abundant: You should have many unit tests — hundreds or thousands for a non-trivial application
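To make the base of the pyramid concrete, here is a minimal unit test sketch using Vitest; the `slugify` helper is a hypothetical function under test:

```typescript
// Unit test: one small pure function, tested in isolation, runs in milliseconds.
import { describe, it, expect } from "vitest";

// Hypothetical helper under test.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
}

describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Quality Engineering 101")).toBe("quality-engineering-101");
  });

  it("strips punctuation and surrounding whitespace", () => {
    expect(slugify("  Hello, World!  ")).toBe("hello-world");
  });
});
```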
Middle: Integration Tests (Some)
Integration tests verify that components work correctly together. They test the interactions between modules, services, databases, and APIs. They are:
- Slower than unit tests because they involve multiple components and often external services
- More complex to set up because they require realistic environments
- Valuable for catching interface issues that unit tests cannot find (e.g., a component sends data in a format the other component does not expect)
- Moderate in quantity: You should have enough to cover critical integration points, but not so many that they slow down your pipeline
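A sketch of an integration test, assuming an Express app exercised through its real HTTP stack with `supertest` and Vitest; the route and in-memory store are illustrative:

```typescript
// Integration test: the routing layer, JSON parsing, and handlers are
// exercised together through real HTTP requests.
import express from "express";
import request from "supertest";
import { describe, it, expect } from "vitest";

const app = express();
app.use(express.json());

const users: Array<{ id: number; name: string }> = [];

app.post("/users", (req, res) => {
  const user = { id: users.length + 1, name: req.body.name };
  users.push(user);
  res.status(201).json(user);
});

app.get("/users/:id", (req, res) => {
  const user = users.find((u) => u.id === Number(req.params.id));
  if (user) res.json(user);
  else res.status(404).end();
});

describe("users API", () => {
  it("creates a user and fetches it back", async () => {
    const created = await request(app)
      .post("/users")
      .send({ name: "Ada" })
      .expect(201);

    const fetched = await request(app)
      .get(`/users/${created.body.id}`)
      .expect(200);

    expect(fetched.body.name).toBe("Ada");
  });
});
```

Notice what this catches that a unit test cannot: if the POST handler started returning the id under a different key, the GET request built from `created.body.id` would fail.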
Top: End-to-End Tests (Few)
End-to-end (E2E) tests simulate complete user workflows from start to finish, usually by driving a real browser against a fully deployed application environment. They are:
- Slow: E2E tests can take minutes per test because they interact with real browsers and full application stacks
- Brittle: They break easily when UI changes, making them expensive to maintain
- Comprehensive: They verify that the entire system works together from the user's perspective
- Limited in quantity: Have just enough to cover critical user journeys (login, purchase, key workflows)
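A Playwright sketch of one such critical journey; the URL, labels, and credentials are hypothetical:

```typescript
// E2E test: drives a real browser through the complete login journey.
import { test, expect } from "@playwright/test";

test("user can log in and reach the dashboard", async ({ page }) => {
  await page.goto("https://staging.example.com/login"); // hypothetical URL

  await page.getByLabel("Email").fill("qa@example.com");
  await page.getByLabel("Password").fill("example-password");
  await page.getByRole("button", { name: "Sign in" }).click();

  // Assert the outcome the user cares about, not implementation details.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole("heading", { name: "Dashboard" })).toBeVisible();
});
```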
Above the Pyramid: Manual Exploratory Testing
At the very top, above the automated pyramid, sits manual exploratory testing. This is where human creativity and intuition find the unexpected issues that no automated test would catch. It is the most expensive form of testing per session, but it provides unique value that automation cannot replicate.
Making the Transition
If your team currently follows a traditional QA model, transitioning to quality engineering does not have to happen overnight. Here is a practical roadmap:
- Start with CI/CD. If you do not have automated tests running on every push, start there. Even running your existing tests automatically is a significant improvement over manual test execution.
- Add quality checks incrementally. Start with one automated quality check (accessibility, security, or performance) and add it to your pipeline. Expand to additional dimensions over time.
- Embed quality in code review. Add quality considerations to your code review checklist. Consider tools like CodeRabbit that automate portions of code review.
- Shift ownership gradually. Start having developers write tests for their own code. Involve designers in accessibility reviews. Add quality criteria to acceptance criteria in your project management tool.
- Repurpose QA expertise. If you have dedicated QA staff, reposition them as quality engineering specialists who build testing frameworks, define quality strategies, and enable the rest of the team to deliver quality — rather than being the sole owners of testing.
- Measure and iterate. Track quality metrics (defect escape rate, test coverage, pipeline pass rate, accessibility score, performance metrics) and use them to identify areas for improvement.
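To make the last step concrete, defect escape rate is simply the share of defects that reached production rather than being caught earlier. The sample numbers below are made up:

```typescript
// Defect escape rate: defects found in production divided by all defects found.
function defectEscapeRate(
  caughtBeforeRelease: number,
  escapedToProduction: number
): number {
  const total = caughtBeforeRelease + escapedToProduction;
  return total === 0 ? 0 : escapedToProduction / total;
}

// Example: 42 defects caught in the pipeline, 3 escaped to production.
console.log(defectEscapeRate(42, 3)); // 3 / 45 ≈ 0.067, about a 6.7% escape rate
```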
The transition from traditional QA to quality engineering is a journey, and every step along that journey improves your team's ability to deliver high-quality software at the speed the modern world demands.
Resources
- Google Testing Blog — Articles on testing strategy, test infrastructure, and quality engineering practices from Google's engineering teams
- Continuous Delivery by Jez Humble and David Farley — The foundational book on building reliable, automated pipelines for software delivery