
ISTQB Foundation Level (CTFL 4.0.1)

Test Monitoring & Reporting

// track test progress, measure quality, and communicate results to stakeholders.


Without measurement, test progress is invisible — decisions become guesswork

"How is testing going?" answered with "Fine, we're making progress" tells a project manager nothing useful. How many tests have been run? How many passed? How many defects are open? Are we on track to meet exit criteria?

Test monitoring collects objective data about testing progress. Test reporting communicates that data to stakeholders in a meaningful way. Together they enable evidence-based decisions about release readiness, resource allocation, and quality.

// example: spotify — dashboard-driven release decisions

Scenario: Spotify's QA team uses a real-time test dashboard that tracks: percentage of test cases executed, pass/fail/blocked ratios, open defect count by severity, test coverage against planned scope, and trend lines over the sprint.

What happened: During a critical release, the dashboard showed that the defect discovery rate was still rising on day 8 of a planned 10-day test cycle. This indicated the system was not yet stable, so the team extended testing by 3 days. The defect discovery rate peaked on day 9 and declined on day 11, confirming the system was stabilising. The release shipped 3 days late, but with confidence.

Why it matters: Without the monitoring data, the team would either have released too early (while defects were still being found at a high rate) or continued testing indefinitely (with no objective signal that quality was improving).

Test Monitoring & Reporting — CTFL 4.0.1

Test monitoring

The ongoing collection of data about testing activities to evaluate progress against the test plan. Key metrics collected include:

  • Test execution metrics — planned vs actual tests executed, pass/fail/blocked counts
  • Defect metrics — defects found, fixed, and open; defect discovery rate; defect density
  • Coverage metrics — requirements covered, code coverage percentage
  • Effort metrics — actual vs estimated hours spent
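The execution and defect metrics above can be captured in a small snapshot structure. This is a hypothetical sketch; the class and field names are illustrative, not from any specific test-management tool:

```python
from dataclasses import dataclass

@dataclass
class TestMonitoringSnapshot:
    """One point-in-time snapshot of the monitoring metrics listed above."""
    planned: int
    executed: int
    passed: int
    failed: int
    blocked: int
    defects_found: int
    defects_fixed: int

    def execution_progress(self) -> float:
        """Planned vs actual tests executed, as a percentage."""
        return 100.0 * self.executed / self.planned

    def pass_rate(self) -> float:
        """Share of executed tests that passed."""
        return 100.0 * self.passed / self.executed

    def open_defects(self) -> int:
        """Defects found but not yet fixed."""
        return self.defects_found - self.defects_fixed

# Example numbers (illustrative):
snap = TestMonitoringSnapshot(planned=200, executed=156, passed=132,
                              failed=18, blocked=6,
                              defects_found=40, defects_fixed=31)
print(f"{snap.execution_progress():.0f}% executed, "
      f"{snap.pass_rate():.1f}% pass rate, "
      f"{snap.open_defects()} defects open")
```

Collecting these counts on a fixed cadence (daily, or per sprint) is what turns raw execution results into the trend lines a dashboard can display.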

Test control

Actions taken in response to monitoring data to bring testing back on track. Examples: re-prioritise tests, add resources, reduce scope, extend the test cycle, escalate critical defects.

Test reporting

Test progress report — produced during testing. Shows current status: tests executed, pass/fail/blocked, open defects, risks, schedule status. Frequency: daily or sprint-based.

Test summary report — produced at the end of a test level or project. Summarises what was tested, what was found, which exit criteria were met, residual risks, and a recommendation on release readiness.
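The summary report's release recommendation follows directly from the exit-criteria assessment. A minimal sketch, assuming illustrative criteria and field names (none of these are mandated by CTFL):

```python
# Hypothetical test summary report data; structure and values are illustrative.
summary_report = {
    "test_level": "System test",
    "scope_tested": ["Playback", "Search", "Offline mode"],
    "exit_criteria": {
        "all planned tests executed": True,
        "pass rate >= 95%": False,
        "no open critical defects": True,
    },
    "residual_risks": ["Intermittent sync failure on slow networks"],
}

# The recommendation is derived from whether every exit criterion was met.
criteria_met = all(summary_report["exit_criteria"].values())
summary_report["recommendation"] = (
    "Ready for release" if criteria_met else "Not ready: exit criteria unmet"
)
print(summary_report["recommendation"])
```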

// tip: Exam Tip: Know the difference between a test progress report and a test summary report. Progress reports are produced DURING testing — they show current status. Summary reports are produced AFTER testing — they summarise the entire testing phase, state whether exit criteria were met, and communicate residual risk to support the release decision.

Key Test Metrics and What They Tell You

Metric | Formula / Definition | What It Indicates
Test execution progress | Tests executed ÷ Tests planned × 100% | How much of the planned test scope has been run
Pass rate | Tests passed ÷ Tests executed × 100% | Proportion of executed tests finding the system correct
Defect detection rate | Number of new defects found per day or per test cycle | System stability — a declining rate suggests stabilisation
Defect fix rate | Defects fixed ÷ Defects reported × 100% | Development team's responsiveness to defect reports
Requirements coverage | Requirements with at least one test ÷ Total requirements × 100% | How much of the specified scope is being tested
Blocked test percentage | Blocked tests ÷ Total tests × 100% | Environment, dependency, or prerequisite issues preventing testing
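The defect detection rate row deserves a closer look, since it drives decisions like the one in the Spotify example: a declining rate suggests stabilisation. A sketch of that check; the three-day window is an illustrative threshold, not a CTFL rule:

```python
def is_stabilising(daily_new_defects: list[int], window: int = 3) -> bool:
    """True if the defect discovery rate has declined strictly for
    `window` consecutive days (an assumed heuristic, not a standard)."""
    if len(daily_new_defects) < window + 1:
        return False  # not enough history to judge a trend
    tail = daily_new_defects[-(window + 1):]
    return all(later < earlier for earlier, later in zip(tail, tail[1:]))

rising = [3, 5, 8, 9, 12]    # discovery rate still climbing: not stable
falling = [12, 9, 6, 4, 2]   # declining for 3+ days: stabilising
print(is_stabilising(rising), is_stabilising(falling))
```

In the Spotify scenario, a check like this would have stayed `False` through day 8, signalling that exit was premature.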

// Example dashboard

Test Execution status: At Risk
Planned: 200 | Executed: 156 (132 passed, 18 failed, 6 blocked)

// Formula

Executed ÷ Planned × 100% = 156 ÷ 200 × 100% = 78%
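Deriving the status badge from that formula is a one-liner. The 80% "At Risk" threshold here is an assumption for illustration, not part of the syllabus:

```python
planned, executed = 200, 156
progress = 100.0 * executed / planned          # Executed ÷ Planned × 100%
status = "On Track" if progress >= 80 else "At Risk"  # assumed threshold
print(f"{progress:.0f}% — {status}")
```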

// Report Types

A test progress report typically covers:

  • Tests executed vs planned
  • Pass/fail/blocked counts
  • Open defects by severity
  • Schedule status
  • Current risks

// Key Question

Is testing on track?

// Exam tip

Progress reports = DURING testing (status updates). Summary reports = AFTER testing (release decision). The exam will describe a scenario and ask which report type is appropriate — match the timing to the purpose.

Test Progress Report vs Test Summary Report

Aspect | Test Progress Report | Test Summary Report
When produced | During testing (daily or per sprint) | At the end of a test level or project
Purpose | Show current status; support day-to-day decisions | Summarise all testing; support release decision
Audience | Test manager, project manager, development team | Stakeholders, release board, senior management
Contents | Tests executed, pass/fail, open defects, schedule status, risks | Test scope, exit criteria assessment, residual risks, lessons learned, release recommendation
Key output | Is testing on track? | Is the product ready to release?

// warning: Exam Trap: Treating metrics as goals rather than measurements. A team that targets "95% pass rate" may start marking defects as "known issues" to hit the number. A team targeting "zero open critical defects" may downgrade defect severity. CTFL emphasises that metrics are tools for understanding reality — not targets to optimise. When metrics become targets, they stop accurately reflecting quality.

Exam Practice Questions

// ctfl 4.0.1 style — select an answer to reveal explanation

Q1. Which document is produced AT THE END of a testing phase to support the release decision?
Q2. A test manager notices that the defect discovery rate has been declining for three consecutive days. What does this most likely indicate?
Q3. Which metric measures the proportion of planned requirements that have at least one test case covering them?
// end