Without measurement, test progress is invisible — decisions become guesswork
Answering "How is testing going?" with "Fine, we're making progress" tells a project manager nothing useful. How many tests have been run? How many passed? How many defects are open? Are we on track to meet exit criteria?
Test monitoring collects objective data about testing progress. Test reporting communicates that data to stakeholders in a meaningful way. Together they enable evidence-based decisions about release readiness, resource allocation, and quality.
// example: spotify — dashboard-driven release decisions
Test Monitoring & Reporting — CTFL 4.0.1
Test monitoring
The ongoing collection of data about testing activities to evaluate progress against the test plan. Key metrics collected include:
- Test execution metrics — planned vs actual tests executed, pass/fail/blocked counts
- Defect metrics — defects found, fixed, and open; defect discovery rate; defect density
- Coverage metrics — requirements covered, code coverage percentage
- Effort metrics — actual vs estimated hours spent
Test control
Actions taken in response to monitoring data to bring testing back on track. Examples: re-prioritise tests, add resources, reduce scope, extend the test cycle, escalate critical defects.
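The loop from monitoring data to control action can be sketched in code. This is a minimal illustration, not part of the CTFL syllabus: the field names and threshold values (50% execution, 10% blocked) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class MonitoringSnapshot:
    """Point-in-time data from test monitoring (illustrative fields)."""
    planned: int        # tests planned for the cycle
    executed: int       # tests run so far
    blocked: int        # tests that could not be run
    open_critical: int  # open critical defects

def control_actions(snap: MonitoringSnapshot) -> list[str]:
    """Suggest test control actions when monitoring data drifts off plan.
    Thresholds are hypothetical, chosen only to make the example concrete."""
    actions = []
    if snap.planned and snap.executed / snap.planned < 0.5:
        actions.append("re-prioritise tests or extend the test cycle")
    if snap.planned and snap.blocked / snap.planned > 0.1:
        actions.append("escalate environment or dependency issues")
    if snap.open_critical > 0:
        actions.append("escalate critical defects")
    return actions

# A cycle that is behind schedule, heavily blocked, with critical defects open
snap = MonitoringSnapshot(planned=200, executed=80, blocked=25, open_critical=2)
for action in control_actions(snap):
    print(action)
```

The point of the sketch is the direction of flow: monitoring produces the numbers, and control is whatever corrective action those numbers trigger.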
Test reporting
Test progress report — produced during testing. Shows current status: tests executed, pass/fail/blocked, open defects, risks, schedule status. Frequency: daily or sprint-based.
Test summary report — produced at the end of a test level or project. Summarises what was tested, what was found, which exit criteria were met, residual risks, and a recommendation on release readiness.
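A progress report is often a one-line status digest built from the day's execution counts. A minimal sketch, with assumed field names and sample numbers:

```python
def progress_report(executed: int, planned: int, passed: int,
                    failed: int, blocked: int, open_defects: int) -> str:
    """Render a compact test progress status line (fields are illustrative)."""
    pct = executed / planned * 100 if planned else 0.0
    return (
        f"Execution: {executed}/{planned} ({pct:.0f}%) | "
        f"Pass/Fail/Blocked: {passed}/{failed}/{blocked} | "
        f"Open defects: {open_defects}"
    )

print(progress_report(156, 200, 132, 18, 6, 9))
```

A summary report would instead aggregate the whole level: exit criteria met or not, residual risks, and a release recommendation.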
// tip: Exam Tip: Know the difference between a test progress report and a test summary report. Progress reports are produced DURING testing — they show current status. Summary reports are produced AFTER testing — they summarise the entire testing phase, state whether exit criteria were met, and communicate residual risk to support the release decision.
Key Test Metrics and What They Tell You
| Metric | Formula / Definition | What It Indicates |
|---|---|---|
| Test execution progress | Tests executed ÷ Tests planned × 100% | How much of the planned test scope has been run |
| Pass rate | Tests passed ÷ Tests executed × 100% | Proportion of executed tests that passed |
| Defect detection rate | Number of new defects found per day or per test cycle | System stability — a declining rate suggests stabilisation |
| Defect fix rate | Defects fixed ÷ Defects reported × 100% | Development team's responsiveness to defect reports |
| Requirements coverage | Requirements with at least one test ÷ Total requirements × 100% | How much of the specified scope is being tested |
| Blocked test percentage | Blocked tests ÷ Total tests × 100% | Environment, dependency, or prerequisite issues preventing testing |
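The formulas in the table above are simple ratios. A small sketch applying them to sample counts (all numbers invented for illustration):

```python
def pct(part: int, whole: int) -> float:
    """Percentage helper; returns 0.0 rather than dividing by zero."""
    return round(part / whole * 100, 1) if whole else 0.0

# Sample cycle data (hypothetical)
planned, executed, passed, blocked = 200, 156, 132, 6
defects_reported, defects_fixed = 40, 28
reqs_total, reqs_covered = 50, 45

print("execution progress:", pct(executed, planned))             # 78.0
print("pass rate:", pct(passed, executed))                       # 84.6
print("defect fix rate:", pct(defects_fixed, defects_reported))  # 70.0
print("requirements coverage:", pct(reqs_covered, reqs_total))   # 90.0
print("blocked percentage:", pct(blocked, planned))              # 3.0
```

Note that pass rate divides by tests *executed*, while blocked percentage divides by *total* tests; mixing up the denominators is a common slip.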
// example: Progress dashboard snapshot: 132 passed, 18 failed, 6 blocked (execution progress = Executed ÷ Planned × 100%)
A test progress report typically lists: tests executed vs planned, pass/fail/blocked counts, open defects by severity, schedule status, and current risks. The key question it answers: is testing on track?
// Exam tip
Progress reports = DURING testing (status updates). Summary reports = AFTER testing (release decision). The exam will describe a scenario and ask which report type is appropriate — match the timing to the purpose.
Test Progress Report vs Test Summary Report
| Aspect | Test Progress Report | Test Summary Report |
|---|---|---|
| When produced | During testing (daily or per sprint) | At the end of a test level or project |
| Purpose | Show current status; support day-to-day decisions | Summarise all testing; support release decision |
| Audience | Test manager, project manager, development team | Stakeholders, release board, senior management |
| Contents | Tests executed, pass/fail, open defects, schedule status, risks | Test scope, exit criteria assessment, residual risks, lessons learned, release recommendation |
| Key output | Is testing on track? | Is the product ready to release? |
// warning: Exam Trap: Treating metrics as goals rather than measurements. A team that targets "95% pass rate" may start marking defects as "known issues" to hit the number. A team targeting "zero open critical defects" may downgrade defect severity. CTFL emphasises that metrics are tools for understanding reality — not targets to optimise. When metrics become targets, they stop accurately reflecting quality.
Exam Practice Questions