
ISTQB Foundation Level (CTFL 4.0.1)

Key Terms Glossary

All 50+ key CTFL terms defined clearly, with exam-ready explanations.

One wrong word costs you a passing score

The CTFL exam is built on precise terminology. "Error", "defect", and "failure" are three different things in CTFL, yet in daily work most testers use them interchangeably. The same kind of confusion surrounds "verification vs validation", "static vs dynamic testing", and "test case vs test condition vs test procedure".

Candidates who understand the concepts but use the wrong CTFL term lose points on questions they otherwise know the answer to. This glossary covers every term that appears on the exam, grouped by chapter, with the exact CTFL 4.0.1 definition and the most common misunderstanding for each.


A CTFL candidate reads: "A tester runs the login function and observes that the system accepts an empty password. What is the term for what the tester observed?" The candidate knows what happened — the software behaved incorrectly. But they write "defect". The correct answer is "failure". The defect is the incorrect code that caused it. The error is the programmer's mistake that introduced the code. The failure is the observable wrong behaviour the tester witnessed during execution. This three-way distinction — error → defect → failure — appears on nearly every CTFL exam in some form. Getting it wrong here is not a knowledge gap, it is a terminology gap.
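The error → defect → failure chain is easiest to see in a few lines of code. A minimal sketch, where the `login` function and its behaviour are invented for illustration and are not part of the syllabus:

```python
def login(username, password):
    """Hypothetical test object: it SHOULD reject empty passwords."""
    # DEFECT: the guard should be `if not password:`. The ERROR was the
    # programmer's mistaken assumption that only None means "no password".
    if password is None:
        return False
    return True  # "" slips through: the observable FAILURE at runtime

# The tester witnesses the FAILURE during dynamic testing:
print(login("alice", ""))  # True: the system accepts an empty password
```

The mistaken assumption is the error, the wrong guard condition is the defect, and the `True` result for an empty password is the failure the tester observes.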

Chapter 1 & 2 — Core testing and SDLC terms

| Term | CTFL 4.0.1 Definition | Common confusion |
| --- | --- | --- |
| Error | A human action that produces an incorrect result (the mistake made by a person) | Confused with "defect" — the error causes the defect, it is not the defect itself |
| Defect (bug/fault) | An imperfection or deficiency in a work product — the incorrect code, document, or artefact | Confused with "failure" — the defect exists in the code; failure is what you observe at runtime |
| Failure | An event in which a component or system does not perform a required function within specified limits — the observable wrong behaviour | Confused with "defect" — a defect can exist without causing a failure (e.g., dead code) |
| Test object | The work product to be tested (e.g., a component, system, or document) | Confused with "test item" — these are synonyms in CTFL |
| Test basis | The body of knowledge used as a basis for test design (requirements, specs, code, regulations) | Confused with "test oracle" — the basis is the source for designing tests; the oracle determines pass/fail |
| Test oracle | A source to determine whether the test object passes or fails a test (expected result) | Thought to mean "the test itself" — it is specifically the mechanism for determining pass/fail |
| Test condition | An aspect of the test object that needs to be verified (e.g., "login with empty password") | Confused with "test case" — a condition is more abstract; a test case is a specific set of inputs and expected results |
| Test case | A set of preconditions, inputs, actions, expected results, and postconditions developed from a test condition | Confused with "test procedure" — a test case is what to test; a procedure is the sequence of steps to execute it |
| Test suite | A set of test scripts or test procedures to be executed in a specific test run | Confused with "test plan" — a suite is an executable set; a plan is a document describing approach and scope |
| Testware | Work products produced during the test process (test plans, test cases, scripts, test data, reports) | Thought to mean only test scripts — testware includes all testing artefacts |
| Verification | Confirmation that a work product meets its specification ("Are we building the product right?") | Confused with "validation" — verification checks conformance to spec; validation checks fitness for actual use |
| Validation | Confirmation that a work product meets stakeholder needs in its operational context ("Are we building the right product?") | Confused with "verification" — remember: Verification = spec, Validation = value to user |
| Debugging | The process of finding, analysing, and removing defects from software (done by developers) | Confused with "testing" — testing finds failures; debugging finds and fixes the defect that caused them |
| Test level | A specific group of test activities organised and managed together (component, integration, system, acceptance) | Confused with "test type" — levels describe WHEN/SCOPE; types describe WHAT quality characteristic is tested |
| Test type | A group of test activities based on specific quality characteristics (functional, performance, security, usability) | Confused with "test level" — functional testing can occur at any level; a level is not a type |
| Regression testing | Testing of a previously tested program after modification to detect whether defects have been introduced or uncovered | Confused with "confirmation testing" — confirmation retests a specific fixed defect; regression checks for side effects |
| Confirmation testing | Dynamic testing that checks whether a specific defect has been fixed | Confused with "regression testing" — confirmation is targeted at one fixed defect; regression is broader |
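The test condition / test case / test procedure hierarchy from the table can be made concrete in code. A minimal Python sketch using the table's own example condition ("login with empty password"); the function, dictionary layout, and names are illustrative, not CTFL-prescribed:

```python
# Test condition (abstract): "login rejects an empty password".
# Test case (concrete): precondition + inputs + expected result.
# Test procedure: the ordered steps that execute the case.

def login(username, password):
    """Hypothetical test object: returns True only for non-empty credentials."""
    return bool(username) and bool(password)

# Test case derived from the condition above:
test_case = {
    "precondition": "user 'alice' exists",
    "inputs": {"username": "alice", "password": ""},
    "expected": False,
}

def run_procedure(case):
    """Test procedure: the executable step sequence for one test case."""
    actual = login(**case["inputs"])   # step 1: execute the test object
    return actual == case["expected"]  # step 2: compare against the oracle

print(run_procedure(test_case))  # True: actual matches expected, test passes
```

Note how the expected result in the test case plays the role of the test oracle: it is the mechanism that decides pass or fail, not the test itself.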


Chapter 3, 4, 5 & 6 — Static, techniques, management, and tools

| Term | CTFL 4.0.1 Definition | Common confusion |
| --- | --- | --- |
| Static testing | Testing that does not involve executing the software — reviews, walkthroughs, static analysis | Confused with "dynamic testing" — static = no execution; dynamic = code runs |
| Dynamic testing | Testing that involves executing the software with test cases | Thought to mean "non-automated" — dynamic testing can be automated or manual |
| Review | A type of static testing in which a work product or process is evaluated by one or more persons (informal, walkthrough, technical, inspection, checklist) | Thought to be synonymous with "inspection" — inspection is one formal type of review |
| Inspection | The most formal review type — follows a defined process with trained moderator, entry/exit criteria, and formal defect logging | Confused with "technical review" — inspections are more formal and require a trained moderator |
| Walkthrough | A review led by the author to gather feedback and share information — informal, no formal preparation required | Confused with "inspection" — walkthroughs are author-led and informal; inspections have a formal moderator |
| Static analysis | Evaluation of source code or other artefacts without execution, using tools to detect defects, complexity, and standards violations | Confused with "static testing" — static analysis is a tool-based subset of static testing |
| Equivalence partition | A subset of the value domain of a variable for which all values are expected to be processed equivalently by the test object | Thought to mean "any group of inputs" — partitions must be based on processing equivalence, not arbitrary grouping |
| Boundary value | A value at the edge of an equivalence partition — minimum, maximum, just inside, and just outside | Thought to mean "just the minimum and maximum" — BVA also tests values just inside and just outside boundaries |
| Test coverage | The degree to which specified coverage items have been exercised by a test suite, expressed as a percentage | Confused with "requirements coverage" — coverage can be measured against many dimensions (branches, states, conditions) |
| Statement coverage | The percentage of executable statements exercised by a test suite | Confused with "branch coverage" — statement coverage misses false branches of if-statements; branch coverage does not |
| Branch coverage | The percentage of branches (true and false outcomes of each decision point) exercised by a test suite | Thought to be weaker than statement coverage — branch coverage SUBSUMES statement coverage (it is stronger) |
| Test plan | A document describing the scope, approach, resources, schedule, and objectives for test activities | Confused with "test strategy" — a strategy is high-level and may cover an organisation; a plan is project-specific |
| Test strategy | A description of the testing approach for a product or project, often at organisational level | Confused with "test approach" — approach is project-level within a plan; strategy is organisation-level |
| Risk | A factor that could result in a negative consequence in the future — characterised by likelihood and impact | Confused with "issue" — a risk is potential; an issue has already occurred |
| Product risk | Risk related to the quality of the test object (incorrect behaviour, missing feature, poor performance) | Confused with "project risk" — product risks affect users; project risks affect the team's ability to deliver |
| Project risk | Risk related to the management and control of the project (resource shortage, schedule delays, unclear requirements) | Confused with "product risk" — remember: project risk = delivery risk; product risk = quality risk |
| Configuration item (CI) | An item or set of items under configuration management control (source code, test cases, environment configs, defect records) | Thought to mean only source code — testware and environment configs are also CIs |
| Baseline | A formally approved version of a configuration item that serves as the basis for further development or delivery | Confused with "release" — a baseline is an internal reference point; a release is delivered to users |
| Test automation framework (TAF) | The tools, libraries, coding conventions, and patterns used as a shared platform for building and running automated tests | Confused with "test automation architecture" — the TAF is the implementation; the TAA is the design |
| Exploratory testing | An approach in which the tester simultaneously designs, executes, and learns — guided by charters and time-boxes | Thought to mean "random" testing — exploratory testing is structured through charters and session notes |
| Entry criteria | Conditions that must be met before a test activity can begin (e.g., test environment ready, test data prepared) | Confused with "exit criteria" — entry = before we start; exit = before we finish/release |
| Exit criteria | Conditions that must be met to declare a test activity complete (e.g., coverage target met, no critical open defects) | Confused with "entry criteria" — exit = done; entry = ready to start |
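The statement-versus-branch-coverage distinction is easiest to see on a tiny function. A minimal sketch; the `classify` function is invented for illustration:

```python
def classify(x):
    result = "small"
    if x > 3:
        result = "large"
    return result

# One test input, x = 5, executes every statement (the assignment, the
# `if`, the reassignment, the `return`): 100% statement coverage.
# But the False outcome of `x > 3` is never taken, so branch coverage
# is only 50% -- the defect hiding on that path would go unnoticed.
print(classify(5))  # large

# A second input exercises the False branch; now both outcomes of the
# decision are covered, which is why branch coverage subsumes statement
# coverage: 100% branch coverage always implies 100% statement coverage.
print(classify(2))  # small
```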

[Interactive visualization: glossary explorer]

The most dangerous term pairs on the exam

These pairs are responsible for the majority of terminology-based mistakes. Memorise the distinguishing criterion for each.

| Pair | How to distinguish |
| --- | --- |
| Error vs Defect vs Failure | Error = person's mistake. Defect = the artefact flaw it created. Failure = the observable wrong behaviour when the defect is triggered. |
| Testing vs Debugging | Testing = finding failures (done by testers). Debugging = finding the defect and fixing it (done by developers). |
| Verification vs Validation | Verification = "built to spec" (internal review/document check). Validation = "right product for users" (real environment, real users). |
| Static vs Dynamic testing | Static = no code runs (reviews, analysis). Dynamic = code executes (manual test, automated test). |
| Test case vs Test condition vs Test procedure | Condition = what aspect to test (abstract). Case = specific inputs + expected result. Procedure = step-by-step execution sequence. |
| Regression vs Confirmation testing | Confirmation = retesting the specific fixed defect. Regression = broader check that the fix didn't break anything else. |
| Product risk vs Project risk | Product risk = wrong/missing software quality (affects users). Project risk = delivery problem (affects the team). |
| Test plan vs Test strategy | Strategy = organisation-level, high-level direction. Plan = project-specific, detailed scope and schedule. |
| Inspection vs Walkthrough | Inspection = formal, moderator-led, defined entry/exit criteria. Walkthrough = informal, author-led, no formal criteria. |
| Entry criteria vs Exit criteria | Entry = gates before testing starts. Exit = gates before testing is declared done. |
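The regression-versus-confirmation pair can likewise be sketched in code. A minimal sketch; the `apply_discount` function and its tests are hypothetical:

```python
def apply_discount(price, code):
    """Hypothetical fixed test object: 'SAVE10' now correctly takes 10% off."""
    if code == "SAVE10":
        return round(price * 0.9, 2)
    return price

# Confirmation testing: re-run the one test that originally exposed the
# defect, to confirm that this specific defect is now fixed.
assert apply_discount(100.0, "SAVE10") == 90.0

# Regression testing: re-run the broader suite to check that the fix did
# not break previously working behaviour elsewhere.
assert apply_discount(100.0, "") == 100.0   # undiscounted path still intact
assert apply_discount(0.0, "SAVE10") == 0.0  # edge case still intact
```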


Exam Practice Questions

Five practice questions in CTFL 4.0.1 style.
Q1. A developer writes code with an incorrect conditional statement. A tester runs the software and observes wrong output. What is the CORRECT mapping of CTFL terms to this scenario?
Q2. A team reviews a requirements specification document without running any code. Which type of testing is this?
Q3. What is the difference between verification and validation?
Q4. After a defect is fixed, a tester re-executes the original test case that found the defect. A second tester runs the full regression suite. Which terms apply respectively?
Q5. A tester notices that the test environment could become unavailable and delay the project schedule. What type of risk is this?