Software QA Testing Flowchart: A Structured Testing Workflow

Build a complete QA testing flowchart covering test planning, execution, defect tracking, and release sign-off. Includes CI/CD integration and metrics.

7 min read

Software releases fail in predictable ways. Teams skip test planning and discover missing coverage after a critical defect reaches production. QA runs tests without clear pass/fail criteria and ends up in endless debates about whether something is a bug or expected behavior. Developers merge code without knowing which tests their changes require. A well-designed QA testing flowchart prevents all of this by making the testing process explicit, repeatable, and measurable.

This guide walks through the complete QA testing lifecycle—from test planning to release sign-off—with decision points, escalation paths, and integration patterns for modern engineering teams.

The QA testing lifecycle

Testing isn't a single activity; it's a series of overlapping phases, each with distinct inputs, outputs, and decision points:

  1. Test planning and strategy
  2. Test case design and review
  3. Environment setup and data preparation
  4. Test execution (unit, integration, system, UAT)
  5. Defect reporting and tracking
  6. Regression testing
  7. Release sign-off

Each phase feeds the next. Skipping or rushing any step creates unpredictable failures downstream.

Phase 1: Test planning

Test planning answers three questions: what gets tested, how it gets tested, and what "done" looks like.

Planning outputs:

  • Test strategy (manual vs automated, testing levels)
  • Test scope (in-scope and explicitly out-of-scope)
  • Entry and exit criteria
  • Resource assignments and schedule
  • Defect severity and priority definitions

Exit criteria must be specific. "All tests pass" is not an exit criterion. "Zero open P0/P1 defects, test coverage ≥85%, UAT sign-off from product owner" is.
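Specific exit criteria can even be encoded as an automated check. A minimal sketch, assuming hypothetical field names and the thresholds quoted above:

```python
# Sketch: exit criteria as an explicit, checkable predicate.
# Field names and thresholds are illustrative, not from a real tool.

def exit_criteria_met(open_p0_p1: int, coverage: float, uat_signed_off: bool) -> bool:
    """Return True only when every exit criterion from the test plan holds."""
    return open_p0_p1 == 0 and coverage >= 0.85 and uat_signed_off

# "All tests pass" alone is not enough: coverage and sign-off gate too.
print(exit_criteria_met(open_p0_p1=0, coverage=0.91, uat_signed_off=True))   # True
print(exit_criteria_met(open_p0_p1=2, coverage=0.91, uat_signed_off=True))   # False
```

If a criterion can't be expressed as a predicate like this, it's probably too vague to act as a gate.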

Requirements → Risk assessment → Define scope
                                      │
                              Set entry/exit criteria
                                      │
                              Assign resources + schedule
                                      │
                              Test plan approved? ──No──→ Revise
                                      │ Yes
                                 Proceed to design

Phase 2: Test case design

Test cases translate requirements into executable steps with expected outcomes.

Required fields for each test case:

  • Preconditions (environment state, test data)
  • Step-by-step actions
  • Expected results for each step
  • Pass/fail criteria

Test types to cover:

Test Type        What It Validates
---------        -----------------
Positive cases   Feature behaves correctly under normal input
Negative cases   Feature handles invalid input gracefully
Boundary cases   Edge values at limits (max, min, zero, empty)
Error cases      Appropriate errors when dependencies fail

Test cases should go through peer review before execution. Common issues: missing negative cases, untestable steps, ambiguous expected results.
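The four test types above can be sketched against a toy function. `parse_quantity` is hypothetical, invented here for illustration:

```python
# Sketch: positive, negative, boundary, and error cases for a toy
# `parse_quantity` function (hypothetical, for illustration only).

def parse_quantity(raw: str) -> int:
    """Parse a quantity string; must be an integer in [0, 100]."""
    value = int(raw)          # raises ValueError for non-numeric input
    if not 0 <= value <= 100:
        raise ValueError(f"quantity out of range: {value}")
    return value

def test_positive():          # normal input
    assert parse_quantity("5") == 5

def test_negative():          # invalid input handled gracefully
    try:
        parse_quantity("abc")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

def test_boundary():          # edge values at the limits
    assert parse_quantity("0") == 0
    assert parse_quantity("100") == 100

def test_error():             # out-of-range triggers the right error
    try:
        parse_quantity("101")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

A peer review that walks each requirement through these four columns catches most missing-negative-case gaps before execution.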

Phase 3: Environment setup

Environment failures are a leading cause of false test results. Before executing any tests, validate:

  • Services deployed and healthy
  • Database seeded with required test data
  • API integrations responding
  • Test accounts accessible
  • Rollback procedure documented

A failed environment check should block test execution. Running tests in a broken environment produces results that can't be trusted.
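The checklist above can run as a pre-flight script that blocks execution on any failure. A minimal sketch; the check names and stub lambdas are illustrative:

```python
# Sketch: an environment pre-flight that blocks test execution on any
# failure. Check names and the stubbed checks are illustrative.

def run_preflight(checks: dict) -> list:
    """Run each named check; return the names of the checks that failed."""
    return [name for name, check in checks.items() if not check()]

checks = {
    "services_healthy": lambda: True,   # e.g. probe /healthz endpoints
    "db_seeded":        lambda: True,   # e.g. count seed rows
    "test_account_ok":  lambda: False,  # e.g. attempt a test login
}

failed = run_preflight(checks)
if failed:
    # Any failed check blocks the run entirely.
    print(f"BLOCKED: environment checks failed: {failed}")
```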

Phase 4: Test execution

Unit testing

Developers run unit tests before code merges. These are fast, isolated, and automated.

┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│  Developer       │────→│  Unit tests run  │────→│  All pass?       │
│  submits PR      │     │  (CI trigger)    │     └────────┬─────────┘
└──────────────────┘     └──────────────────┘       Yes │  │ No
                                                        ▼  ▼
                                                   Next   Block merge,
                                                   stage  notify dev

Unit test failure blocks the pull request. No exceptions for "I'll fix it later."

Integration testing

Integration tests verify that components interact correctly. They run after unit tests pass.

Integration test scope:

  • API contracts between services
  • Database read/write operations
  • Third-party service integrations
  • Authentication and authorization flows
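An API contract check from the first bullet can be sketched as a schema comparison. The endpoint payload and expected field set are hypothetical:

```python
# Sketch: validating an API response against an expected contract.
# The field schema and stubbed response are illustrative.

EXPECTED_FIELDS = {"id": int, "email": str, "active": bool}

def check_contract(payload: dict) -> list:
    """Return a list of contract violations (empty list means pass)."""
    problems = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            problems.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

# A response from the downstream service (stubbed here):
response = {"id": 42, "email": "qa@example.com", "active": "yes"}
print(check_contract(response))  # ['wrong type for active: str']
```

Contract checks like this catch the silent breakages unit tests can't see: a service that still responds, but with a changed shape.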

System testing

System tests validate the complete application against requirements in an environment that mirrors production.

┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│  System tests    │────→│  Defects found?  │─Yes→│  Log + classify  │
│  execute         │     └──────────────────┘     │  defect          │
└──────────────────┘            │ No              └────────┬─────────┘
                                ▼                          │
                         Pass system gate         Severity P0/P1?
                                                    │          │
                                                   Yes         No
                                                    │          │
                                               Block       Continue
                                               release,    testing,
                                               escalate    queue fix

User Acceptance Testing (UAT)

UAT validates that the software meets business requirements as understood by actual users. Product owners and business stakeholders run UAT, not QA engineers. Defects may be functional (software doesn't match requirements) or requirement gaps (requirements didn't capture what was actually needed).

Phase 5: Defect reporting and tracking

A bug report that requires three rounds of clarification is three times slower than one that's complete on first submission.

Required defect fields:

  • Environment and version
  • Steps to reproduce (numbered, specific)
  • Expected vs actual behavior
  • Severity classification
  • Screenshots, logs, or video

Severity classification

Severity   Criteria
--------   --------
Critical   System crash, data loss, security breach. No workaround.
High       Core feature broken, major data error. Painful workaround.
Medium     Feature partially working. Easy workaround available.
Low        Cosmetic, edge case, no functional impact.
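Making the classification deterministic removes the "is this really a P1?" debate. A sketch mirroring the table; the impact categories are illustrative labels, not from a real tracker:

```python
# Sketch: severity classification as a deterministic rule following
# the table above. The impact/workaround labels are illustrative.

def classify(impact: str, workaround: str) -> str:
    """Map impact and workaround availability to a severity level."""
    if impact in {"crash", "data_loss", "security_breach"}:
        return "Critical" if workaround == "none" else "High"
    if impact == "core_feature_broken":
        return "High"
    if impact == "partial" and workaround == "easy":
        return "Medium"
    return "Low"

print(classify("data_loss", "none"))   # Critical
```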

Defect lifecycle:

New → Assigned → In Progress → In Review → Verified → Closed
                                    │
                              Failed review
                                    ▼
                              Back to In Progress

Verified means QA confirmed the fix works in the test environment, not just that the developer says it's fixed.
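The lifecycle above is a small state machine, and encoding it as one makes illegal moves (like closing a defect straight from New) impossible. A sketch with the state names from the diagram; the transition function is illustrative, not a real tracker API:

```python
# Sketch: the defect lifecycle as an explicit transition table.
# State names follow the diagram above; the API is hypothetical.

TRANSITIONS = {
    "New":         {"Assigned"},
    "Assigned":    {"In Progress"},
    "In Progress": {"In Review"},
    "In Review":   {"Verified", "In Progress"},  # failed review goes back
    "Verified":    {"Closed"},
    "Closed":      set(),
}

def move(state: str, target: str) -> str:
    """Advance a defect, rejecting any transition not in the table."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target

state = "In Review"
state = move(state, "In Progress")   # failed review: back to In Progress
print(state)                         # In Progress
```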

Phase 6: Regression testing

Every fix introduces regression risk. Regression testing catches cases where fixing one thing broke something else.

Regression scope selection:

Change scope assessment
        │
┌───────┴───────────────┐
│                       │
▼                       ▼
Core system change   Isolated change
(cross-cutting)      (UI only, single endpoint)
        │                       │
        ▼                       ▼
Full regression suite    Targeted regression
                         affected modules only

Regression test selection is a judgment call. The risk of under-testing is shipping a regression. When in doubt, test more.
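The scope-selection branch above can be approximated in code. The module sets and the size heuristic are illustrative; real selection would come from dependency analysis:

```python
# Sketch: regression scope selection per the branch above.
# The core-module set and >3 heuristic are illustrative.

def regression_scope(changed_modules: set, core_modules: set) -> str:
    """Cross-cutting changes trigger the full suite; isolated changes
    get targeted regression. When in doubt, test more."""
    if changed_modules & core_modules or len(changed_modules) > 3:
        return "full"
    return "targeted"

print(regression_scope({"checkout_ui"}, {"auth", "payments", "db"}))   # targeted
print(regression_scope({"auth"}, {"auth", "payments", "db"}))          # full
```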

Shift-left and manual vs automated testing

Shift-left moves testing earlier in the development lifecycle:

  • QA reviews requirements before development starts
  • Test cases designed during sprint planning, not after
  • Developers write unit tests as they code
  • Code review includes test coverage review

Cost of defect detection by phase:

Detection Phase   Relative Cost
---------------   -------------
Requirements      1x
Development       10x
Testing           20x
Production        100x

What to automate vs keep manual:

Automate                     Keep Manual
--------                     -----------
Regression and smoke tests   Exploratory testing
API contract validation      Usability and visual review
Performance benchmarks       Complex business logic
Data boundary cases          Accessibility evaluation

CI/CD integration

┌──────────┐   ┌──────────┐   ┌──────────┐   ┌───────────┐
│ Commit + │──→│  Build   │──→│  Unit    │──→│Integration│
│   lint   │   │ compile  │   │  tests   │   │  tests    │
└──────────┘   └──────────┘   └──────────┘   └─────┬─────┘
                                                     │
                                          ┌──────────▼────────────┐
                                          │   All pass?           │
                                          └──────────┬────────────┘
                                            Yes │    │ No
                                                ▼    ▼
                                           Deploy  Notify,
                                           to test block deploy
                                           env
                                                │
                                     ┌──────────▼────────────┐
                                     │  System + E2E tests   │
                                     └──────────┬────────────┘
                                                │
                                     ┌──────────▼────────────┐
                                     │  Gate: deploy to prod?│
                                     └───────────────────────┘

Failed CI gates block progression automatically. If gates can be bypassed without explicit approval, they provide no protection.
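The pipeline above reduces to a sequence of hard gates: the first failing stage stops progression. A sketch with stage names from the diagram; the pass/fail results are stubbed:

```python
# Sketch: the CI/CD pipeline as a sequence of hard gates — the first
# failing stage blocks everything after it. Runner results are stubs.

STAGES = ["lint", "build", "unit", "integration", "system_e2e"]

def run_pipeline(results: dict) -> str:
    """`results` maps stage name -> bool (stand-in for real runners)."""
    for stage in STAGES:
        if not results[stage]:
            return f"blocked at {stage}"   # no bypass without approval
    return "ready for prod gate"

print(run_pipeline({s: True for s in STAGES}))                     # ready for prod gate
print(run_pipeline({**{s: True for s in STAGES}, "unit": False}))  # blocked at unit
```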

Release sign-off: the go/no-go decision

Criterion       Gate
---------       ----
P0/P1 defects   Zero open
P2 defects      Documented and accepted by product owner
Test coverage   Meets minimum threshold (e.g., 85%)
UAT sign-off    Written approval from product owner
Performance     Within acceptable benchmarks
Security scan   No high/critical findings unresolved
Rollback plan   Documented and tested

Go/no-go decisions should be made by people with authority to accept risk—typically product, engineering lead, and QA lead together.

QA metrics and reporting

Key metrics to track:

  • Defect detection rate by phase (shift-left effectiveness)
  • Escaped defects (found in production after release)
  • Test execution rate (planned vs executed)
  • Regression failure rate per suite

Metric                       Target          Alarm
------                       ------          -----
Test execution by go/no-go   100%            <80%
P0/P1 open at release        0               Any
Escaped defects              <2%             >5%
Regression failures          <5% per suite   >10%
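The two rate metrics are simple ratios. A sketch with illustrative counts:

```python
# Sketch: computing the rate metrics from the table above.
# The counts are illustrative.

planned, executed = 420, 407
escaped, total_defects = 3, 181

execution_rate = executed / planned     # target: 100% by go/no-go
escaped_rate = escaped / total_defects  # target: < 2%

print(f"execution rate: {execution_rate:.1%}")   # 96.9%
print(f"escaped rate:   {escaped_rate:.1%}")     # 1.7%
```

Trending these per release, rather than reading them once, is what reveals whether shift-left practices are actually working.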

Common QA process failures

No written exit criteria. Testing continues indefinitely because nobody agreed on what "done" means. Fix: define exit criteria in test planning before execution begins.

QA gate only at end. Defects accumulate through development and hit QA as a flood. Fix: integrate testing throughout development via shift-left practices.

Test environments differ from production. Tests pass in staging, fail in production. Fix: infrastructure-as-code for reproducible environments plus an environment validation checklist.

Automated tests not maintained. Test suite grows but becomes unreliable as tests break without being fixed. Fix: treat test code with the same discipline as application code.

UAT skipped under time pressure. Product ships without stakeholder validation. Fix: UAT is a non-negotiable gate, not optional review.

Building your QA flowchart with Flowova

QA processes are often scattered across wikis, spreadsheets, and tribal knowledge. A visual flowchart makes the process legible to the whole team. Flowova lets you build and share QA process flowcharts quickly:

  1. Describe your current process: Outline your testing phases, decision points, and handoff criteria. Include your specific severity definitions and exit criteria.
  2. Generate and refine: Use AI generation to produce the initial structure, then replace generic labels with your specific thresholds—your pass rate targets, severity definitions, CI/CD stage names.
  3. Export and integrate: Share as PNG for onboarding documentation, Mermaid for engineering wikis, or embed in runbooks.

A QA flowchart that lives in your team's documentation gets used. One that exists only in someone's head creates inconsistency every time that person is unavailable.
