One wrong word costs you a passing score
The CTFL exam is built on precise terminology. "Defect", "failure", and "error" are three different things in CTFL — but in daily work most testers use them interchangeably. The same distinction applies to "verification vs validation", "static vs dynamic testing", "test case vs test condition vs test procedure".
Candidates who understand the concepts but use the wrong CTFL term lose points on questions they otherwise know the answer to. This glossary covers every term that appears on the exam, grouped by chapter, with the exact CTFL 4.0.1 definition and the most common misunderstanding for each.
Example: a CTFL candidate reads the question "A tester runs the login function and observes that the system accepts an empty password. What is the term for what the tester observed?" The candidate knows what happened (the software behaved incorrectly) but writes "defect". The correct answer is "failure". The defect is the incorrect code that caused it; the error is the programmer's mistake that introduced that code; the failure is the observable wrong behaviour the tester witnessed during execution. This three-way distinction, error → defect → failure, appears on nearly every CTFL exam in some form. Getting it wrong here is not a knowledge gap, it is a terminology gap.
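The chain is easy to see in code. A minimal sketch, in which the `login` function and its spec are invented for illustration:

```python
# Hypothetical sketch of the error -> defect -> failure chain.
# Assumed spec: an empty password must be rejected.

def login(username: str, password: str) -> bool:
    # ERROR: the programmer meant to require a non-empty password,
    # but mistakenly wrote `>=` instead of `>`.
    # DEFECT: this incorrect comparison sitting in the code.
    return len(password) >= 0  # always True, so empty passwords pass

# FAILURE: the observable wrong behaviour when the defect is executed.
assert login("alice", "") is True   # observed: empty password accepted
# Per the assumed spec, login("alice", "") should be False.
```

Note where testing stops: the tester observes the failure. Tracing it back through the defect to the original error, and fixing it, is debugging.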
Chapter 1 & 2 — Core testing and SDLC terms
| Term | CTFL 4.0.1 Definition | Common confusion |
|---|---|---|
| Error | A human action that produces an incorrect result (the mistake made by a person) | Confused with "defect" — the error causes the defect, it is not the defect itself |
| Defect (bug/fault) | An imperfection or deficiency in a work product — the incorrect code, document, or artefact | Confused with "failure" — the defect exists in the code; failure is what you observe at runtime |
| Failure | An event in which a component or system does not perform a required function within specified limits — the observable wrong behaviour | Confused with "defect" — a defect can exist without causing a failure (e.g., dead code) |
| Test object | The work product to be tested (e.g., a component, system, or document) | Confused with "test item" — these are synonyms in CTFL |
| Test basis | The body of knowledge used as a basis for test design (requirements, specs, code, regulations) | Confused with "test oracle" — the basis is the source for designing tests; the oracle determines pass/fail |
| Test oracle | A source to determine whether the test object passes or fails a test (expected result) | Thought to mean "the test itself" — it is specifically the mechanism for determining pass/fail |
| Test condition | An aspect of the test object that needs to be verified (e.g., "login with empty password") | Confused with "test case" — a condition is more abstract; a test case is a specific set of inputs and expected results |
| Test case | A set of preconditions, inputs, actions, expected results, and postconditions developed from a test condition | Confused with "test procedure" — a test case is what to test; a procedure is the sequence of steps to execute it |
| Test suite | A set of test scripts or test procedures to be executed in a specific test run | Confused with "test plan" — a suite is an executable set; a plan is a document describing approach and scope |
| Testware | Work products produced during the test process (test plans, test cases, scripts, test data, reports) | Thought to mean only test scripts — testware includes all testing artefacts |
| Verification | Confirmation that a work product meets its specification ("Are we building the product right?") | Confused with "validation" — verification checks conformance to spec; validation checks fitness for actual use |
| Validation | Confirmation that a work product meets stakeholder needs in its operational context ("Are we building the right product?") | Confused with "verification" — remember: Verification = spec, Validation = value to user |
| Debugging | The process of finding, analysing, and removing defects from software (done by developers) | Confused with "testing" — testing finds failures; debugging finds and fixes the defect that caused them |
| Test level | A specific group of test activities organised and managed together (component, integration, system, acceptance) | Confused with "test type" — levels describe WHEN/SCOPE; types describe WHAT quality characteristic is tested |
| Test type | A group of test activities based on specific quality characteristics (functional, performance, security, usability) | Confused with "test level" — functional testing can occur at any level; a level is not a type |
| Regression testing | Testing of a previously tested program after modification to detect whether defects have been introduced or uncovered | Confused with "confirmation testing" — confirmation retests a specific fixed defect; regression checks for side effects |
| Confirmation testing | Dynamic testing that checks whether a specific defect has been fixed | Confused with "regression testing" — confirmation is targeted at one fixed defect; regression is broader |
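The confirmation/regression split from the last two rows can be sketched in code. This is a hypothetical example: the `login` function, its spec, and the prior defect ("empty password accepted") are all invented for illustration.

```python
# Hypothetical sketch: confirmation vs regression testing after a fix.
# Assume a defect ("login accepts an empty password") was just fixed.

def login(username: str, password: str) -> bool:
    # Fixed code: empty passwords are now rejected (assumed spec).
    return len(password) > 0

def test_confirmation() -> None:
    # Confirmation testing: re-run the exact scenario that exposed the
    # original defect, to confirm the fix actually works.
    assert login("alice", "") is False

def test_regression() -> None:
    # Regression testing: broader checks that the change introduced no
    # side effects in previously working behaviour.
    assert login("alice", "secret") is True
    assert login("bob", "pw") is True

test_confirmation()
test_regression()
```

Confirmation targets one defect; regression casts a wider net around it. In practice both are run together after every fix.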
Chapter 3, 4, 5 & 6 — Static, techniques, management, and tools
| Term | CTFL 4.0.1 Definition | Common confusion |
|---|---|---|
| Static testing | Testing that does not involve executing the software — reviews, walkthroughs, static analysis | Confused with "dynamic testing" — static = no execution; dynamic = code runs |
| Dynamic testing | Testing that involves executing the software with test cases | Thought to mean "non-automated" — dynamic testing can be automated or manual |
| Review | A type of static testing in which a work product or process is evaluated by one or more persons (types: informal review, walkthrough, technical review, inspection) | Thought to be synonymous with "inspection" — inspection is one formal type of review |
| Inspection | The most formal review type — follows a defined process with trained moderator, entry/exit criteria, and formal defect logging | Confused with "technical review" — inspections are more formal and require a trained moderator |
| Walkthrough | A review led by the author to gather feedback and share information — informal, no formal preparation required | Confused with "inspection" — walkthroughs are author-led and informal; inspections have a formal moderator |
| Static analysis | Evaluation of source code or other artefacts without execution, using tools to detect defects, complexity, and standards violations | Confused with "static testing" — static analysis is a tool-based subset of static testing |
| Equivalence partition | A subset of the value domain of a variable for which all values are expected to be processed equivalently by the test object | Thought to mean "any group of inputs" — partitions must be based on processing equivalence, not arbitrary grouping |
| Boundary value | A value at the edge of an equivalence partition — minimum, maximum, just inside, and just outside | Thought to mean "just the minimum and maximum" — BVA also tests values just inside and just outside boundaries |
| Test coverage | The degree to which specified coverage items have been exercised by a test suite, expressed as a percentage | Confused with "requirements coverage" — coverage can be measured against many dimensions (branches, states, conditions) |
| Statement coverage | The percentage of executable statements exercised by a test suite | Confused with "branch coverage" — statement coverage misses false branches of if-statements; branch coverage does not |
| Branch coverage | The percentage of branches (true and false outcomes of each decision point) exercised by a test suite | Thought to be weaker than statement coverage — branch coverage SUBSUMES statement coverage (it is stronger) |
| Test plan | A document describing the scope, approach, resources, schedule, and objectives for test activities | Confused with "test strategy" — a strategy is high-level and may cover an organisation; a plan is project-specific |
| Test strategy | A description of the testing approach for a product or project, often at organisational level | Confused with "test approach" — approach is project-level within a plan; strategy is organisation-level |
| Risk | A factor that could result in a negative consequence in the future — characterised by likelihood and impact | Confused with "issue" — a risk is potential; an issue has already occurred |
| Product risk | Risk related to the quality of the test object (incorrect behaviour, missing feature, poor performance) | Confused with "project risk" — product risks affect users; project risks affect the team's ability to deliver |
| Project risk | Risk related to the management and control of the project (resource shortage, schedule delays, unclear requirements) | Confused with "product risk" — remember: project risk = delivery risk; product risk = quality risk |
| Configuration item (CI) | An item or set of items under configuration management control (source code, test cases, environment configs, defect records) | Thought to mean only source code — testware and environment configs are also CIs |
| Baseline | A formally approved version of a configuration item that serves as the basis for further development or delivery | Confused with "release" — a baseline is an internal reference point; a release is delivered to users |
| Test automation framework (TAF) | The tools, libraries, coding conventions, and patterns used as a shared platform for building and running automated tests | Confused with "test automation architecture" — the TAF is the implementation; the TAA is the design |
| Exploratory testing | An approach in which the tester simultaneously designs, executes, and learns — guided by charters and time-boxes | Thought to mean "random" testing — exploratory testing is structured through charters and session notes |
| Entry criteria | Conditions that must be met before a test activity can begin (e.g., test environment ready, test data prepared) | Confused with "exit criteria" — entry = before we start; exit = before we finish/release |
| Exit criteria | Conditions that must be met to declare a test activity complete (e.g., coverage target met, no critical open defects) | Confused with "entry criteria" — exit = done; entry = ready to start |
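The equivalence-partition and boundary-value rows above are easiest to grasp with a concrete input domain. A minimal sketch, assuming an invented spec (an age field that accepts 18..65 inclusive):

```python
# Hypothetical sketch: equivalence partitions and boundary values for a
# field that accepts ages 18..65 inclusive (this spec is invented).

def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# Three equivalence partitions, each represented by ONE value, because all
# values in a partition are expected to be processed the same way.
assert is_eligible(10) is False   # partition: below the valid range
assert is_eligible(40) is True    # partition: inside the valid range
assert is_eligible(80) is False   # partition: above the valid range

# Boundary value analysis: the edges of each partition plus the values
# just outside them, where off-by-one defects cluster.
assert [is_eligible(v) for v in (17, 18, 65, 66)] == [False, True, True, False]
```

The exam point: 10, 40, and 80 are not "random inputs" but representatives of partitions, and 17/18/65/66 are boundary values, not just the minimum and maximum.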
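The statement-vs-branch coverage rows reward a worked example. A minimal sketch, with an invented `apply_discount` function:

```python
# Hypothetical sketch: why 100% statement coverage can still miss a branch.

def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price - 10.0  # the only statement inside the decision
    return price

# One test with is_member=True executes every statement (100% statement
# coverage) but exercises only the True outcome: 50% branch coverage.
assert apply_discount(100.0, True) == 90.0

# A second test is needed for the False outcome; only then is branch
# coverage complete. Full branch coverage implies full statement coverage
# (it subsumes it), never the other way round.
assert apply_discount(100.0, False) == 100.0
```

This is the standard exam trap: a test suite can reach 100% statement coverage while leaving untested branches, so branch coverage is the stronger criterion.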
The most dangerous term pairs on the exam
These pairs are responsible for the majority of terminology-based mistakes. Memorise the distinguishing criterion for each.
| Pair | How to distinguish |
|---|---|
| Error vs Defect vs Failure | Error = person's mistake. Defect = the artefact flaw it created. Failure = the observable wrong behaviour when the defect is triggered. |
| Testing vs Debugging | Testing = finding failures (done by testers). Debugging = finding the defect and fixing it (done by developers). |
| Verification vs Validation | Verification = "built to spec" (internal review/document check). Validation = "right product for users" (real environment, real users). |
| Static vs Dynamic testing | Static = no code runs (reviews, analysis). Dynamic = code executes (manual test, automated test). |
| Test case vs Test condition vs Test procedure | Condition = what aspect to test (abstract). Case = specific inputs + expected result. Procedure = step-by-step execution sequence. |
| Regression vs Confirmation testing | Confirmation = retesting the specific fixed defect. Regression = broader check that the fix didn't break anything else. |
| Product risk vs Project risk | Product risk = wrong/missing software quality (affects users). Project risk = delivery problem (affects the team). |
| Test plan vs Test strategy | Strategy = organisation-level, high-level direction. Plan = project-specific, detailed scope and schedule. |
| Inspection vs Walkthrough | Inspection = formal, moderator-led, defined entry/exit criteria. Walkthrough = informal, author-led, no formal criteria. |
| Entry criteria vs Exit criteria | Entry = gates before testing starts. Exit = gates before testing is declared done. |
Exam Practice Questions