Testing Glossary - Testing Definitions A-C

ad hoc
For this specific purpose; for a special case only, without general application [an ad hoc committee].

adjunct processor
A secondary CPU that is in communication with a primary CPU. This secondary CPU or processor handles a specific task or function. Typically, the primary CPU sends traffic to the adjunct processor to be processed. Also called an attached processor.

Agile development methods
See Agile Alliance, "Agile Software Development Manifesto," February 13, 2001, www.agilemanifesto.org.

art
1. The human ability to make things; creativity of human beings as distinguished from the world of nature. 2. Skill; craftsmanship. 3. Any specific skill or its application (the art of making friends). 4. Any craft, trade, or profession or its principles. 5. Making or doing of things that display form, beauty, and unusual perception; art includes painting, sculpture, architecture, music, literature, drama, the dance, etc. 6. Artful or cunning. 7. Sly or cunning trick; wile. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

assumption
1. The act of assuming, a taking upon oneself, taking over, or taking for granted. 2. Anything taken for granted; supposition. 3. Presumption. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

basis suite
A highly optimized test suite used to establish the baseline behavior of a system.
See also diagnostic suite.

behavioral testing
Tests that verify the output is correct for a given input, without verifying the process that produced the output; data testing.

benchmark
1. A surveyor's mark made on a permanent landmark of known position and altitude; it is used as a reference point in determining other altitudes. 2. A standard or point of reference in measuring or judging quality, value, and so on. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

bias
Error we introduce by having knowledge and therefore expectations of a system.

black box testing
See behavioral testing.

bottom-up testing
Each module or component is first tested alone, and then the modules are combined a few at a time and tested with simulators used in place of components that are necessary but missing.
See also unit test.

brainstorming
Using group synergy to think up ideas.

branch
In program logic, a branch refers to a decision in the code, usually a conditional branch such as an if statement, but it could also be an unconditional branch like a goto statement.

branch coverage
The count of the minimum number of paths required to exercise both branches of each decision node in the system. Best known in unit testing as the number of logic branches in the source code (such as the number of if statements multiplied by 2).
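
As an illustrative sketch (the function and values are hypothetical, not from the text), a single if statement is one decision node with two branches, so exercising both requires at least two tests:

    # Hypothetical example: one decision node yields two branches.
    def classify(n):
        if n < 0:
            return "negative"       # true branch
        return "non-negative"       # false branch (implicit else)

    assert classify(-1) == "negative"      # exercises the true branch
    assert classify(0) == "non-negative"   # exercises the false branch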

branch test
A test that exercises a logic branch in a program. Traditionally part of unit testing.

calculate
To determine by using mathematics, to compute.

Capability Maturity Model (CMM)
Scheme for measuring the levels of process maturity in a company. Developed at Carnegie Mellon University.

client/server
A name given to the architecture that gives the user or client access to specific data through a server.

code generator
A software application that generates program source code.

code inspections
A formal process where the source code is inspected for defects.

coding
The act of writing a software program. Program language statements are called code. This is an old term from precompiler days when programmers translated programming instructions directly into machine language.

CPU
Central processing unit.

cyclomatic complexity
A term used interchangeably with the cyclomatic number.

cyclomatic number
The minimum number of linearly independent paths through a structured system.
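
For reference, this is McCabe's cyclomatic complexity; for a structured system (one entry, one exit) it can be computed from the logic flow map as V(G) = E - N + 2, where E is the number of edges and N is the number of nodes. For example, a single if/else construct has 4 edges and 4 nodes, giving V(G) = 4 - 4 + 2 = 2 linearly independent paths.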

Testing Glossary - Testing Definitions D-E

data
Things known or assumed; facts or figures from which conclusions can be inferred; information.

data analysis
The process of analyzing data.

data dependent
Something that is dependent on the value of a given piece of information. For example, which branch of an if statement will be selected is usually dependent on the information being processed at that specific time.

database
A large collection of data in a computer, organized so that it can be expanded, updated, and retrieved rapidly for various uses.

debug
Given a program that has a bug, to track the problem down in the source code.

decisions
A branching node with one edge entering and multiple edges leaving; decisions can contain processes. In this text, for the purposes of clarity, decisions will be modeled with only one edge entering.

deformation
The changing of form or shape induced by stress.

design
1. To make preliminary sketches of; sketch a pattern or outline for; plan. 2. To plan and carry out, especially by artistic arrangement or in a skillful way. 3. To form (plans, etc.) in the mind; contrive. 4. To plan to do; purpose; intend. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

diagnose
To ascertain why a system responds to a set of stimuli the way it does.

diagnostic suite
A highly optimized test suite used to establish the current behavior of a system, used to isolate the site (or source) of a failure.

document inspection
A formal process where the project documentation is inspected for defects.

edges
In logic flow diagrams, these are lines that connect nodes on the logic flow map.

effectiveness
1. Having an effect; producing a result. 2. Producing a definite or desired result; efficient. 3. In effect; operative; active. 4. Actual, not merely potential or theoretical. 5. Making a striking impression; impressive. 6. Equipped and ready for combat. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

efficiency
1.Ability to produce a desired effect, product, and so on with a minimum of effort, expense, or waste; quality or fact of being efficient. 2. The ratio of effective work to the energy expended in producing it, as of a machine; output divided by input. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

empirically
Determined by trial or experiment.

end-to-end testing
Type of testing where the entire system is tested; that is, from end to end.

engineering
1. (a) The science concerned with putting scientific knowledge to practical uses. (b) The planning, designing, construction, or management of machinery, roads, bridges, buildings, and so on. 2. The act of maneuvering or managing. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

environment catalog
A catalog or list of the elements of a given environment, usually includes description and specifications.

excellence
The fact or condition of excelling; of superiority; surpassing goodness or merit, and so on.

expected response
A standard against which a test is compared.

experimentation
The act of conducting experiments.

expert testers
Testers who are experts in their areas.

Testing Glossary - Testing Definitions F-I

feature richness
A measure of the abundance and quality of the features offered by a product.

formal
Following a set of prescribed or fixed procedures.

fourth-generation languages (4GL)
4GLs are characterized by natural language-like commands and/or application generators. 4GLs are typically easier to use than traditional procedural languages. They can be employed by end users to develop applications quickly.

function paths
The logic paths that are taken when a program function is executed.

function points
A synthetic software metric that is composed of the weighted totals of inputs, outputs, inquiries, logical files or user data groups, and interfaces belonging to an application.

function test
A test of program functions normally conducted from the user interface.

fundamental metric
A measurement of a physical quantity, where what is measured is the name of the metric, for example, errors per 100 lines of code.

graphical user interface (GUI)
Computer user interface where the user can manipulate objects to accomplish tasks.

IEEE
Institute of Electrical and Electronics Engineers.

incremental delivery
A strategy for delivering a system to the users in increments. Each increment delivered adds function to the previous product. Such systems are generally delivered using incremental development or modular development techniques.

incremental development
Modules that implement function to be delivered are developed and unit tested; then they are assembled, integrated into the existing system, and tested as they become available. The system is stabilized after each addition. Theoretically, this means that there is always a stable version ready to be shipped.

independent function paths
The discrete logical paths that can be executed through a function in an application or a system where each one is independent from the others.

innovate
Renew, alter, introduce new methods, devices, and so on; to bring in as an innovation.

inspection
The process of examining something carefully and in detail.

integration test
This is the process where systems are built. Units that make up a system are combined, and the interfaces and data flow within the system are tested. Units are usually added one at a time, and the system's stability is reestablished before the next unit is added.

integrator
One who integrates.

integrity
The quality or state of being complete; unbroken condition; wholeness.

invent
1. To come upon, meet, or discover. 2. To think up; devise or fabricate in the mind [to invent excuses]. 3. To think out or produce [a new device process, etc.]; originate, as by experiment; devise for the first time. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

inventory
A detailed list.

Testing Glossary - Testing Definitions K-P

keytrap tool
A software test tool that captures and saves the keystrokes typed by the user. Also called capture/replay and capture/playback.

linear independence
A line that is independent of other lines. For system traversals, this means that each linearly independent path through the system must traverse some unique path segment that is not traversed by any other traversal through the system.

lines of code
The count of the lines of program code in a software module or system.

load testing
Testing the load-bearing ability of a system. For example, verifying that the system can process the required number of transactions per time period.

logic flow map
Graphic depiction of the logic paths through a system, or some function that is modeled as a system.

logic schematics
A logic scheme, plan, or diagram.

magnitude of a physical quantity
Specified by a number and a unit, such as bugs per thousand lines of code or bugs per minute of test.

management
The act, art, or manner of managing, or handling, controlling, directing, and so on.

measure
"The act or process of determining extent, dimensions, and so on; especially as determined by a standard," (according to Websters New World Dictionary). The IEEE definition is "A quantitative assessment of the degree to which a software product or process possesses a given attribute." [IEEE043098]

menu
A program element that offers the user a number of choices; menus do not involve data entry.

metric
A measure.

metric system
A set or system of measures.

Most Important Tests (MITS)
The tests most likely to be of interest on the basis of probable importance and risk of failure.

node
From the Latin nodus, meaning knot. A dilemma or complication; a point of concentration. In logic flow mapping, both processes and decisions are nodes.

object-oriented languages
A programming system where program functions and utilities are precompiled into objects that have distinct properties and behaviors.

paper documentation
Documentation printed on paper.

path
A track or way worn by footsteps; also a line of movement or course taken; any traversal through a system.

path analysis
Examining and enumerating the paths through a program or system.

path-dependent function
A program traversal that follows a particular path regardless of the current data.

percent function coverage
The percent of all functions that are being tested.

performance
1. The act of performing; execution, accomplishment, fulfillment, and so on. 2. Operation or functioning, usually with regard to effectiveness, as of a machine. 3. Something done or performed; deed or feat. 4. (a) A formal exhibition or presentation before an audience, as a play, musical program, show, and so on. (b) One's part in this. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

performance testing
See load testing.

physical quantity
A quantity defined by the description of the operational procedure for measuring it.

plan-driven
Term coined by Barry Boehm in his article, "Get Ready for Agile Methods, with Care" to describe traditional waterfall-style development methods. See the "References."

process
A continuing development involving many changes.

processes
In logic flow mapping, a process is a collector node with multiple edges entering and one edge leaving; a process node can represent one program statement or an entire software system, as long as the contents are consistent throughout the logic flow diagram.

production system monitoring
The act of watching a production system; the object is to detect anomalies or failures as they occur.

programmatic paths
The logic flow through the code statements in a program.

proprietary
Privately owned and operated. Held under patent, trademark, or copyright by a private person or company.

Testing Glossary - Testing Definitions Q-S

quality
The degree of excellence that a thing possesses. The degree of conformance to a standard.

quality assurance
According to British Standard 4778, all those planned and systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality.

quality control
According to British Standard 4778, the operational techniques and activities that are used to fulfill requirements for quality.

random
Without specific order.

rank
An orderly arrangement; a relative position, usually in a scale classifying persons or things.

rapid application development (RAD)
A development process that evolves a product through multiple trial-and-error cycles.

regions
Any area that is completely surrounded by edges and processes.

regression test
Retesting something that has been tested previously. Usually conducted after some part of the system has been changed. Regressing; going back, returning.

review
To look at or go over again.

science
1. Systematized knowledge derived from observation, study, and experimentation carried on in order to determine the nature and principles of what is being studied. 2. A branch of knowledge or study, especially one concerned with establishing and systematizing facts, principles, and methods, as by experiments and hypotheses. 3. (a) The systematized knowledge of nature and the physical world. (b) Any branch of this. 4. Skill or technique based upon systematized training. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

scientific method
The systematic attempt to construct theories that correlate wide groups of observed facts and are capable of predicting the results of future observations. Such theories are tested by controlled experimentation and are accepted only so long as they are consistent with all observed facts.

severity
The quality or condition of being severe; strictness; harshness.

software application
A computer program that performs some set of functions.

Software Capability Maturity Model (SW-CMM)
A scheme for measuring the levels of process maturity in a company. Developed at Carnegie Mellon University, Software Engineering Institute. The Capability Maturity Model uses a conceptual framework based on industry best practices to assess the process maturity, capability, and performance of a software development organization.

source code
In programming, the actual statements of programming language in a program.

spaghetti code
Referring to poorly constructed, disorganized, and unstructured source code.

statement
The thing stated; account; declaration. In programming, a single line of program code, a single program action.

statement coverage
A method of path counting that counts the minimum number of paths required to walk through each statement in the source code.
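
To illustrate how statement coverage can be weaker than branch coverage, here is a minimal sketch (hypothetical function, not from the text):

    # One test reaches every statement, yet the implicit else branch
    # of the if is never exercised.
    def apply_discount(price, is_member):
        discount = 0
        if is_member:
            discount = 10
        return price - discount

    assert apply_discount(100, True) == 90   # 100% statement coverage
    # Branch coverage would also require a test with is_member == False.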

statement test
Testing statements in a software program at the source code level.

static code analyzer
A software tool that analyzes the program source code, in an uncompiled state. As opposed to dynamic code analyzers, which analyze the activity of code while it is being run.

structural test
A test that verifies the structural integrity of a set or system of program elements.

structured system
A system or subsystem that has only one entry point and one exit point.

system
A set or arrangement of things so related or connected as to form a unity or organic whole. A set of decisions and processes that as a group have one entry point and one exit point. A group of units that can interact, as well as act independently.

system test
This term is often used interchangeably with integration test, but it really refers to testing a system that has been built. The functions of the complete system are verified.

Testing Glossary - Testing Definitions T-U

technique
The method or procedure (with reference to practical or formal details), or way of using basic skills, in rendering an artistic work or carrying out a scientific or mechanical operation.

test
Ascertain the response of a system to stimuli and compare that response to a standard. Evaluate the quality of the response with respect to the standard. Given some software and a list of the functions it is supposed to perform, find out if it performs these functions as they are described. Additionally, find out if it does other things that are not described. (Validate and verify.)

test (IEEE)
A set of one or more test cases.

test case
A condition to be tested that includes its own identification and the expected response. Sometimes used interchangeably with test script.

test coverage
The percentage of everything that could be tested that was actually tested.

test effort
Process by which testers produce their product, involving developing and evaluating a software system by conducting tests and getting bugs removed.

test inspection
A formal process where the tests are inspected for defects.

test inventory
The complete enumeration of all known tests; path, data, module, design, system, and so on.

test script
A collection of tests or activities that are performed in sequence. Used interchangeably with test case.

test set
Term used to describe a group of tests.
See also test suite.
See also test inventory.

test suite
A group of tests run sequentially.

testing
(IEEE) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.

theoretical
Limited to or based on theory.

theory
1. A mental viewing; contemplation. 2. A speculative idea or plan as to how something might be done. 3. A systematic statement of principles involved. 4. A formulation of apparent relationships or underlying principles of certain observed phenomena that has been verified to some degree. 5. That branch of an art or science consisting in a knowledge of its principles and methods rather than in its practice; pure, as opposed to applied, science, and so on. 6. Popularly, a mere conjecture, or guess. (Webster's New World Dictionary of the American Language, Second College Edition, Prentice Hall, 1984)

top-down testing
A testing process that first assembles a system and then tests the entire system at once from the user's perspective.

total independent paths (TIP)
Total number of linearly independent paths being considered.

Underwriters Laboratory (UL)
An establishment in the United States licensed to certify that electronic products meet established safety standards.

uniformity
State, quality, or instance of being uniform.

unit
A discrete, logical set of function(s). This can be a single small program.

unit test
To test a program unit, a separately compiled module, an object, or a group of closely related modules.

universal description discovery and integration (UDDI)
A cross-industry effort driven by major platform and software providers, as well as marketplace operators and e-business leaders within the OASIS standards consortium. UDDI creates a standard interoperable platform that enables companies and applications to quickly, easily, and dynamically find and use Web services over the Internet. http://www.uddi.org/

unreproducible bug
A bug that cannot be reproduced by following the same steps that produced it originally.

user acceptance test (UAT)
Tests performed by the user to determine if the system is acceptable.

Testing Glossary - Testing Definitions V-W

validate
To confirm the validity of.

validation
The act of confirming; to declare valid.

validity
The state, quality, or fact of being valid (strong, powerful, properly executed) in law or in argument, proof, authority, and so on.

verification
Verifying or being verified; establishment or confirmation of the truth or accuracy of a fact, theory, and so on.

verify
1. To prove to be true by demonstration, evidence, or testimony; confirm or substantiate. 2. To test or check the accuracy or correctness of, as by investigation, comparison with a standard, or reference to the facts.

versioning
The process of identifying and managing successive versions of a system or its components.

white box testing
Testing that examines and verifies the process by which program functions are carried out; path testing.

working hypothesis
An unproved theory used to provide a basis for further investigation, argument, and so on.

WYSIWYG
What you see is what you get.

What is a Test Case?

IEEE Standard 610 (1990) defines test case as follows:
“(1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

“(2) (IEEE Std 829-1983) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.”

According to Ron Patton (2001, p. 65),
“Test cases are the specific inputs that you’ll try and the procedures that you’ll follow when you test the software.”

Boris Beizer (1995, p. 3) defines a test as
“A sequence of one or more subtests executed as a sequence because the outcome and/or final state of one subtest is the input and/or initial state of the next. The word ‘test’ is used to include subtests, tests proper, and test suites.”
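
Pulling these definitions together, a test case can be recorded as an identified set of inputs, execution conditions, and expected results. A minimal sketch of such a record (the field names and values are illustrative, not prescribed by any standard):

    # Illustrative only: one way to record a test case per the IEEE definition.
    test_case = {
        "id": "TC-001",
        "objective": "Verify login succeeds with valid credentials",
        "preconditions": "User account 'alice' exists and is active",
        "inputs": {"username": "alice", "password": "correct-horse"},
        "expected_result": "User is logged in and the home page is displayed",
    }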

Testing Techniques

Over time, the IT industry and the testing discipline have developed several techniques for analyzing and testing applications.

Black-box Tests
Black-box tests are derived from an understanding of the purpose of the code; knowledge of the actual internal program structure is not required when using this approach. The risk involved with this type of approach is that "hidden" functions (functions unknown to the tester) will not be tested and may not even be exercised.

White-box Tests or Glass-box tests
White-box tests are derived from an intimate understanding of the purpose of the code and the code itself; this allows the tester to test "hidden" (undocumented) functionality within the body of the code. The challenge with any white-box testing is to find testers who are comfortable with reading and understanding code.

Regression tests
Regression testing is not a testing technique or test phase; it is the reuse of existing tests to test previously implemented functionality. It is included here only for clarification.

Equivalence Partitioning
Equivalence testing leverages the concept of "classes" of input conditions. A "class" of input could be "City Name," where testing one or several city names could be deemed equivalent to testing all city names. In other words, each instance of a class in a test covers a large set of other possible tests.
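
A minimal sketch of the idea (the classes and representative values are hypothetical):

    # One representative value per equivalence class stands in for the class.
    equivalence_classes = {
        "valid short name":      "Rome",
        "valid multi-word name": "New York",
        "empty string":          "",
        "non-alphabetic":        "12345",
    }

    for label, value in equivalence_classes.items():
        print(f"test class '{label}' with representative input {value!r}")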

Boundary-value Analysis
Boundary-value analysis is really a variant of equivalence partitioning, but in this case the upper and lower ends of the class, and often values outside the valid range of the class, are used as input for the test cases. For example, if the class is "Numeric Month of the Year," then the boundary values could be 0, 1, 12, and 13.
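
Continuing the month example as a sketch (the validation function is hypothetical):

    # Hypothetical validator for the class "Numeric Month of the Year" (1-12).
    def is_valid_month(m):
        return 1 <= m <= 12

    # Boundary values sit at, just below, and just above each end of the range.
    assert not is_valid_month(0)    # just below the lower boundary
    assert is_valid_month(1)        # lower boundary
    assert is_valid_month(12)       # upper boundary
    assert not is_valid_month(13)   # just above the upper boundary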

Error Guessing
Error Guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors. Error Guessing is more testing art than testing science but can be very effective given a tester familiar with the history of the system.
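
As a sketch, an error-guessing list often reads as suspect inputs paired with the behavior expected of the system (all values hypothetical):

    # Hypothetical error-guessing checklist for a quantity entry field.
    suspect_inputs = [
        ("",           "reject empty input"),
        ("0",          "reject zero quantity"),
        ("-1",         "reject negative quantity"),
        ("1.5",        "reject non-integer quantity"),
        ("9999999999", "reject quantity exceeding stock limit"),
    ]

    for value, expectation in suspect_inputs:
        print(f"try input {value!r}; expected behavior: {expectation}")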

Output Forcing
Output Forcing involves making a set of test cases designed to produce a particular output from the system. The focus here is on creating the desired output, not on the input that initiated the system response.
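
A sketch of working backward from a desired output (the function and messages are hypothetical):

    # Pick inputs specifically to force each output of interest.
    def withdraw(balance, amount):
        if amount > balance:
            return "INSUFFICIENT FUNDS"   # the output we want to force
        return "DISPENSED"

    # Each test is designed around an output; any input that forces it will do.
    assert withdraw(50, 100) == "INSUFFICIENT FUNDS"
    assert withdraw(100, 50) == "DISPENSED"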

Testing Roles

As in any organization or organized endeavor, there are roles that must be fulfilled within any testing organization. The requirement for any given role depends on the size, complexity, goals, and maturity of the testing organization. These are roles, so it is quite possible that one person could fulfill many roles within the testing organization.

Test Lead or Test Manager
The Role of Test Lead / Manager is to effectively lead the testing team. To fulfill this role the Lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles of a manager. What does this mean? The manager must manage and implement or maintain an effective testing process.

Test Architect
The Role of the Test Architect is to formulate an integrated test architecture that supports the testing process and leverages the available testing infrastructure. To fulfill this role the Test Architect must have a clear understanding of the short-term and long-term goals of the organization, the resources (both hard and soft) available to the organization, and a clear vision on how to most effectively deploy these assets to form an integrated test architecture.

Test Designer or Tester
The Role of the Test Designer / Tester is to design and document test cases, execute tests, record test results, document defects, and perform test coverage analysis. To fulfill this role the designer must be able to apply the most appropriate testing techniques to test the application as efficiently as possible while meeting the test organization's testing mandate.

Test Automation Engineer
The Role of the Test Automation Engineer is to create automated test case scripts that perform the tests as designed by the Test Designer. To fulfill this role the Test Automation Engineer must develop and maintain an effective test automation infrastructure using the tools and techniques available to the testing organization. The Test Automation Engineer must work in concert with the Test Designer to ensure the appropriate automation solution is being deployed.

Test Methodologist or Methodology Specialist
The Role of the Test Methodologist is to provide the test organization with resources on testing methodologies. To fulfill this role the Methodologist works with Quality Assurance to facilitate continuous quality improvement within the testing methodology and the testing organization as a whole. To this end, the methodologist evaluates the test strategy, provides testing frameworks and templates, and ensures effective implementation of the appropriate testing techniques.

Testing Levels / Phases

Testing levels or phases should be applied against the application under test when the previous phase of testing is deemed to be complete, or "complete enough." Any defects detected during any level or phase of testing need to be recorded and acted on appropriately.

Design Review
"The objective of Design Reviews is to verify all documented design criteria before development begins." The design deliverable or deliverables to be reviewed should be complete within themselves. The environment of the review should be a professional examination of the deliverable with the focus being the deliverable not the author (or authors). The review must ensure each design deliverable for: completeness, correctness, and fit (both within the business model, and system architecture).
Design reviews should be conducted by subject matter experts, testers, developers, and system architects to ensure all aspects of the design are reviewed.

Unit Test
"The objective of unit test is to test every line of code in a component or module." The unit of code to be tested can be tested independent of all other units. The environment of the test should be isolated to the immediate development environment and have little, if any, impact on other units being developed at the same time. The test data can be fictitious and does not have to bear any relationship to .real world. business events. The test data need only consist of what is required to ensure that the component and component interfaces conform to the system architecture. The unit test must ensure each component: compiles, executes, interfaces, and passes control from the unit under test to the next component in the process according to the process model.
The developer, in conjunction with a peer, should conduct the unit test to ensure the component is stable enough to be released into the product stream.
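
A minimal unit-test sketch using Python's standard unittest module (the component under test is hypothetical):

    import unittest

    # Hypothetical unit under test.
    def add_line_item(total, price):
        if price < 0:
            raise ValueError("price cannot be negative")
        return total + price

    class AddLineItemTest(unittest.TestCase):
        def test_adds_price_to_total(self):
            self.assertEqual(add_line_item(10, 5), 15)

        def test_rejects_negative_price(self):
            with self.assertRaises(ValueError):
                add_line_item(10, -1)

    if __name__ == "__main__":
        unittest.main()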

Function Test
"The objective of function test is to measure the quality of the functional (business) components of the system." Tests verify that the system behaves correctly from the user / business perspective and functions according to the requirements, models, storyboards, or any other design paradigm used to specify the application. The function test must determine if each component or business event: performs in accordance to the specifications, responds correctly to all conditions that may be presented by incoming events / data, moves data correctly from one business event to the next (including data stores), and that business events are initiated in the order required to meet the business objectives of the system.
Function test should be conducted by an independent testing organization to ensure the various components are stable and meet minimum quality criteria before proceeding to System test.

System Test
"The objective of system test is to measure the effectiveness and efficiency of the system in the "real-world" environment." System tests are based on business processes (workflows) and performance criteria rather than processing conditions. The system test must determine if the deployed system: satisfies the operational and technical performance criteria, satisfies the business requirements of the System Owner / Users / Business Analyst, integrates properly with operations (business processes, work procedures, user guides), and that the business objectives for building the system were attained.

There are many aspects to system testing; the most common are:
Security Testing: The tester designs test case scenarios that attempt to subvert or bypass security.
Stress Testing: The tester attempts to stress or load an aspect of the system to the point of failure; the goal being to determine weak points in the system architecture.
Performance Testing: The tester designs test case scenarios to determine if the system meets the stated performance criteria (e.g., a login request shall be responded to in 1 second or less under a typical daily load of 1,000 requests per minute); a sketch of checking such a criterion appears at the end of this section.
Install (Roll-out) Testing: The tester designs test case scenarios to determine if the installation procedures lead to an invalid or incorrect installation.
Recovery Testing: The tester designs test case scenarios to determine if the system meets the stated fail-over and recovery requirements.
System test should be conducted by an independent testing organization to ensure the system is stable and meets minimum quality criteria before proceeding to User Acceptance test.
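
As a sketch of checking a stated performance criterion (the request function, sample size, and 1-second threshold below are illustrative, echoing the login example above):

    import time

    # Hypothetical stand-in for issuing a login request to the system under test.
    def send_login_request():
        time.sleep(0.01)  # placeholder for a real network call

    # Check the illustrative criterion: every login answered in 1 second or less.
    worst = 0.0
    for _ in range(100):  # a small sample, not the full stated load
        start = time.perf_counter()
        send_login_request()
        worst = max(worst, time.perf_counter() - start)

    assert worst <= 1.0, f"slowest login took {worst:.3f}s, exceeding 1 second"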

User Acceptance Test

"The objective of User Acceptance test is for the user community to measure the effectiveness and efficiency of the system in the "real-world" environment.". User Acceptance test is based on User Acceptance criteria, which can include aspects of Function and System test. The User Acceptance test must determine if the deployed system: meets the end Users expectations, supports all operational requirements (both recorded and non-recorded), and fulfills the business objectives (both recorded and non-recorded) for the system.
User Acceptance test should be conducted by the end users of the system and monitored by an independent testing organization. The Users must ensure the system is stable and meets the minimum quality criteria before proceeding to system deployment (roll-out).