Test design techniques
5.1 The test development process
The process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the organization, the maturity of testing and development processes, time constraints and the people involved.
During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e. to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g. a function, transaction, quality characteristic or structural element).
Establishing traceability from test conditions back to the specifications and requirements enables both impact analysis, when requirements change, and requirements coverage to be determined for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use, based on, among other considerations, the risks identified.
During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution post-conditions, developed to cover certain test condition(s). The ‘Standard for Software Test Documentation’ (IEEE 829) describes the content of test design specifications (containing test conditions) and test case specifications.
Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification. The test procedure (or manual test script) specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed, when they are to be carried out and by whom. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
5.2 Categories of test design techniques
The purpose of a test design technique is to identify test conditions and test cases.
It is a classic distinction to denote test techniques as black box or white box. Black-box techniques (which include specification-based and experience-based techniques) are a way to derive and select test conditions or test cases based on an analysis of the test basis documentation and the experience of developers, testers and users, whether functional or non-functional, for a component or system without reference to its internal structure. White-box techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system.
Some techniques fall clearly into a single category; others have elements of more than one category.
This syllabus refers to specification-based or experience-based approaches as black-box techniques and structure-based as white-box techniques.
Common features of specification-based techniques:
• Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components.
• From these models test cases can be derived systematically.
Common features of structure-based techniques:
• Information about how the software is constructed is used to derive the test cases, for example, code and design.
• The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.
Common features of experience-based techniques:
• The knowledge and experience of people are used to derive the test cases.
• Knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is used.
• Knowledge about likely defects and their distribution is used.
5.3 Specification-based or black-box techniques
5.3.1 Equivalence partitioning
Inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data and invalid data, i.e. values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g. before or after an event) and for interface parameters (e.g. during integration testing). Tests can be designed to cover partitions. Equivalence partitioning is applicable at all levels of testing.
Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
Example:
If you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers plus other test cases for invalid data.
Using the equivalence partitioning method, the test cases can be divided into three sets of input data called classes, with each test case acting as a representative of its class.
So, from the above example, we can divide our test cases into three equivalence classes covering valid and invalid inputs:
1. One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select any other value between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.
2. Another input data class with all values below the lower limit (for example, any value below 1) as an invalid input data test case.
3. A last input data class with any value greater than 1000 to represent the third, invalid input class.
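As a sketch, the three equivalence classes above can be exercised with one representative value each (the validator `accepts` is a hypothetical stand-in for the input box):

```python
def accepts(n):
    """Hypothetical validator for the example input box: accepts 1..1000."""
    return 1 <= n <= 1000

# One representative value per equivalence class is enough:
partitions = {
    "valid (1..1000)":  (500,  True),   # any value in range behaves the same
    "invalid (< 1)":    (-3,   False),  # below the lower limit
    "invalid (> 1000)": (1500, False),  # above the upper limit
}

for name, (value, expected) in partitions.items():
    assert accepts(value) == expected, name
print("all partition representatives pass")
```

Any other representative from the same class (say 7 instead of 500) would give the same verdict, which is the point of the technique.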
5.3.2 Boundary value analysis
Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect finding capability is high; detailed specifications are helpful.
This technique is often considered as an extension of equivalence partitioning. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time out, transactional speed requirements) or table ranges (e.g. table size is 256*256). Boundary values may also be used for test data selection.
Referring to our example mentioned above, we will have the following test cases:
1. Test cases with test data exactly on the boundaries of the input domain (values 1 and 1000 in our case)
2. Test data with values just below the extreme edges of the input domain (values 0 and 999 in our case)
3. Test data with values just above the extreme edges of the input domain (values 2 and 1001 in our case)
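The six boundary-value tests listed above can be sketched as follows (again assuming a hypothetical `accepts` validator for the 1..1000 input box):

```python
def accepts(n):
    """Hypothetical validator for the 1..1000 input box."""
    return 1 <= n <= 1000

# Boundary values: the exact edges plus the values just outside/inside them.
boundary_tests = [
    (1, True), (1000, True),    # exact boundaries of the input domain
    (0, False), (999, True),    # just below the extreme edges
    (2, True), (1001, False),   # just above the extreme edges
]
for value, expected in boundary_tests:
    assert accepts(value) == expected, value
```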
5.3.3 Cause Effect Analysis
The main drawback of the previous two techniques is that they do not explore the combination of input conditions.
Cause-effect analysis is an approach in which the specifications are studied carefully, the combinations of input conditions (causes) and their effects are identified and recorded in the form of a table, and test cases are designed from that table.
It is suitable for applications in which combinations of input conditions are few and readily visible.
5.3.4 Cause Effect Graphing
This is a rigorous approach, recommended for complex systems only. In such systems the number of inputs and the number of equivalence classes for each input can be large, and hence the number of input combinations is usually astronomical. Hence we need a systematic approach to select a subset of these input conditions.
Guidelines for graphing:
- Divide the specifications into workable pieces, as it may be practically difficult to work on large specifications.
- Identify the causes and their effects. A cause is an input condition or an equivalence class of input conditions. An effect is an output condition or a system transformation.
- Link causes and effects in a Boolean graph, which is the cause-effect graph.
- Make decision tables based on the graph. This is done by having one row for each node in the graph. The number of columns depends on the number of different combinations of input conditions that can be made.
- Convert the columns in the decision table into test cases.
Example:
A program accepts a Transaction Code of 3 characters as input. For a valid input the following must be true:
1st character (denoting issue or receipt): ‘+’ for issue, ‘-’ for receipt
2nd character: a digit
3rd character: a digit
To carry out cause-effect graphing, the cause-effect graph is constructed as below.
In the graph:
(1) or (2) must be true (V in the graph is to be interpreted as OR)
(3) and (4) must be true (Λ in the graph is to be interpreted as AND)
The Boolean graph has to be interpreted as follows:
- node (1) turns true if the 1st character is ‘+’
- node (2) turns true if the 1st character is ‘-’ (both node (1) and node (2) cannot be true simultaneously)
- node (3) becomes true if the 2nd character is a digit
- node (4) becomes true if the 3rd character is a digit
- the intermediate node (5) turns true if (1) or (2) is true (i.e., if the 1st character is ‘+’ or ‘-’)
- the intermediate node (6) turns true if (3) and (4) are true (i.e., if the 2nd and 3rd characters are digits)
- the final node (7) turns true if (5) and (6) are true (i.e., if the 1st character is ‘+’ or ‘-’ and the 2nd and 3rd characters are digits)
- the final node will be true for any valid input and false for any invalid input
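The node logic just described can be written out directly; this sketch evaluates the Boolean nodes for a given 3-character transaction code:

```python
def nodes(tc):
    """Evaluate the Boolean nodes of the cause-effect graph for a
    3-character transaction code; returns the final node (7)."""
    n1 = tc[0] == '+'       # node (1): issue
    n2 = tc[0] == '-'       # node (2): receipt
    n3 = tc[1].isdigit()    # node (3): 2nd character is a digit
    n4 = tc[2].isdigit()    # node (4): 3rd character is a digit
    n5 = n1 or n2           # intermediate node (5): (1) OR (2)
    n6 = n3 and n4          # intermediate node (6): (3) AND (4)
    n7 = n5 and n6          # final node (7): valid input
    return n7

assert nodes('+67') is True   # valid: issue, two digits
assert nodes('-05') is True   # valid: receipt, two digits
assert nodes('$xy') is False  # invalid 1st character
assert nodes('+2y') is False  # 3rd character is not a digit
```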
A partial decision table corresponding to the above graph (each column shows one possible combination of node states):

Node  |  Some possible combinations of node states
(1)   |   0    1    1    1    0    1
(2)   |   0    0    0    0    0    0
(3)   |   0    0    0    1    1    1
(4)   |   0    0    1    0    1    1
(5)   |   0    1    1    1    0    1
(6)   |   0    0    0    0    1    1
(7)   |   0    0    0    0    0    1
Sample test case for the column:  $xy   +ab   +a4   +2y   @45   +67

The sample test cases can be derived by giving values to the input characters such that the nodes turn true/false as given in the columns of the decision table.
5.3.5 Decision table testing
Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. The specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they can either be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column, which typically involves covering all combinations of triggering conditions.
The strength of decision table testing is that it creates combinations of conditions that might not otherwise have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
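A minimal sketch of decision table testing, using a hypothetical business rule (a discount applies only to members with an order over 100); each column of the table is one rule, and one test per column covers every combination of the two conditions:

```python
# Decision table, one column per rule:    R1     R2     R3     R4
decision_table = {
    "is_member":      (True,  True,  False, False),
    "order_over_100": (True,  False, True,  False),
    # Resulting action for each rule:
    "give_discount":  (True,  False, False, False),
}

def give_discount(is_member, order_over_100):
    """Hypothetical implementation of the business rule under test."""
    return is_member and order_over_100

# One test case per column of the decision table.
for col in range(4):
    m = decision_table["is_member"][col]
    o = decision_table["order_over_100"][col]
    assert give_discount(m, o) == decision_table["give_discount"][col]
```

Rules R2 and R3 are the combinations that ad-hoc testing most often misses, which is exactly what the technique is designed to force out.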
5.3.6 State transition testing
A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown as a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number. A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid. Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows.
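The state table idea can be sketched in code; the document workflow, its states and events below are hypothetical examples:

```python
# State table as a mapping (state, event) -> next state; any pair absent
# from the table is an invalid transition.
transitions = {
    ("draft",     "submit"):  "review",
    ("review",    "approve"): "published",
    ("review",    "reject"):  "draft",
    ("published", "archive"): "archived",
}

def step(state, event):
    try:
        return transitions[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Exercise every transition at least once...
state = "draft"
for event in ("submit", "reject", "submit", "approve", "archive"):
    state = step(state, event)
assert state == "archived"

# ...and test an invalid transition highlighted by the state table.
try:
    step("draft", "approve")
    assert False, "invalid transition was accepted"
except ValueError:
    pass
```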
5.3.7 Use case testing
Tests can be specified from use cases or business scenarios. A use case describes interactions between actors, including users and the system, which produce a result of value to a system user. Each use case has preconditions, which need to be met for a use case to work successfully. Each use case terminates with post-conditions, which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e. most likely) scenario, and sometimes alternative branches.
Use cases describe the “process flows” through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases, often referred to as scenarios, are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see.
5.4 Structure-based or white-box techniques
Structure-based testing/white-box testing is based on an identified structure of the software or system, as seen in the following examples:
o Component level: the structure is that of the code itself, i.e. statements, decisions or branches.
o Integration level: the structure may be a call tree (a diagram in which modules call other modules).
o System level: the structure may be a menu structure, business process or web page structure.
In this section, two code-related structural techniques for code coverage, based on statements and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.
5.4.1 Statement testing and coverage, or basis path testing
In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. Statement testing derives test cases to execute specific statements, normally to increase statement coverage.
Basis path testing is a white-box testing method in which we design test cases to cover every statement, every branch and every predicate (condition) in the code that has been written. Thus the method attempts statement coverage, decision coverage and condition coverage.
To perform basis path testing:
• Derive a logical complexity measure of the procedural design
o Break the module into blocks delimited by statements that affect the control flow (e.g. statements like return, exit, jump, etc., and conditions)
o Mark these out as nodes in a control flow graph
o Draw connectors (arcs) with arrowheads to mark the flow of logic
o Identify the number of regions (the cyclomatic number), which is equivalent to McCabe’s number
• Define a basis set of execution paths
o Determine the independent paths
• Derive test cases to exercise (cover) the basis set
McCabe’s number (cyclomatic complexity):
• Gives a quantitative measure of the logical complexity of the module
• Defines the number of independent paths
• Provides an upper bound on the number of tests that must be conducted to ensure that all the statements are executed at least once
• The complexity of a flow graph G, V(G), is computed in one of three ways:
o V(G) = number of regions of G
o V(G) = E − N + 2 (E: number of edges, N: number of nodes)
o V(G) = P + 1 (P: number of predicate nodes in G, i.e. the number of conditions in the code)
McCabe’s number = number of regions (count the mutually exclusive closed regions plus the whole outer space as one region) = 2 in the above graph.
The two other formulae give the same measure: McCabe’s number = E − N + 2 = 6 − 6 + 2 = 2, and McCabe’s number = P + 1 = 1 + 1 = 2 for the above graph.
Please note that if the number of conditions is more than one in a single control structure, each condition needs to be separately marked as a node.
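The E − N + 2 formula is mechanical enough to sketch as a small helper; the edge set below is an assumed encoding of the six-node, six-edge example graph with its single decision at node 2:

```python
def cyclomatic_complexity(edges):
    """Compute McCabe's number V(G) = E - N + 2 for a control flow
    graph given as a set of directed edges (node, node)."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# The example graph: 6 nodes, 6 edges, one decision at node 2.
edges = {(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6)}
assert cyclomatic_complexity(edges) == 2
```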
When the McCabe’s number is 2, it indicates that there are two linearly independent paths in the code, i.e., two different ways in which the graph can be traversed from the 1st node to the last node. The independent paths in the above graph are:
i) 1-2-3-5-6
ii) 1-2-4-5-6
The last step is to write test cases corresponding to the listed paths. This would mean giving the input conditions in such a way that the above paths are traced by the control of execution. The test cases for the paths listed here are shown in the following table.
Path  |  Input Condition               |  Expected Result     |  Actual Result  |  Remarks
i)    |  value of ‘a’ > value of ‘b’   |  Increment ‘a’ by 1  |                 |
ii)   |  value of ‘a’ <= value of ‘b’  |  Increment ‘b’ by 1  |                 |
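A sketch of the module implied by that table (the function name `adjust` is an assumption; only the a/b comparison and increments come from the table), with one test case per basis path:

```python
def adjust(a, b):
    """Hypothetical module matching the flow graph above: one predicate
    (a > b), hence V(G) = 2 and a basis set of two paths."""
    if a > b:       # predicate node
        a += 1      # path i):  1-2-3-5-6
    else:
        b += 1      # path ii): 1-2-4-5-6
    return a, b

assert adjust(5, 3) == (6, 3)   # path i):  a > b, increment a
assert adjust(2, 7) == (2, 8)   # path ii): a <= b, increment b
```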
5.4.2 Decision testing and coverage
Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g. the True and False options of an IF statement) that have been exercised by a test case suite. Decision testing derives test cases to execute specific decision outcomes, normally to increase decision coverage.
Decision testing is a form of control flow testing as it generates a specific flow of control through the decision points. Decision coverage is stronger than statement coverage: 100% decision coverage guarantees 100% statement coverage, but not vice versa.
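A minimal sketch of why the inclusion only runs one way, using a toy function:

```python
def f(x):
    y = 0
    if x > 0:
        y = 1   # the only statement guarded by the decision
    return y

# f(5) alone executes every statement (100% statement coverage) but only
# the True outcome of the decision (50% decision coverage).
assert f(5) == 1
# Adding f(-5) exercises the False outcome too: 100% decision coverage,
# which necessarily also gives 100% statement coverage.
assert f(-5) == 0
```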
5.4.3 Other structure-based techniques
There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.
The concept of coverage can also be applied at other test levels (e.g. at integration level) where the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.
Tool support is useful for the structural testing of code.
5.5 Experience-based techniques
Experience-based testing is where tests are derived from the tester’s skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers’ experience. A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible errors and to design tests that attack these errors. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails.
Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
Error Guessing
Error guessing is a supplementary technique where test case design is based on the tester's intuition and experience. There is no formal procedure. However, a checklist of common errors could be helpful here.
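One way to give such a checklist structure is the fault-attack style mentioned above: enumerate likely-defect inputs and run each against the function under test. The function `parse_quantity` and the checklist entries below are hypothetical illustrations:

```python
def parse_quantity(text):
    """Toy function under test: parse a positive integer quantity."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Checklist of error-prone inputs drawn from experience: empty input,
# whitespace, zero, negatives, non-integer notation, non-numeric text.
attack_list = ["", "   ", "0", "-1", "1e3", "abc"]
for item in attack_list:
    try:
        parse_quantity(item)
    except ValueError:
        pass  # clean rejection is acceptable; a crash elsewhere is a defect

assert parse_quantity("5") == 5  # the happy path still works
```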
5.6 Choosing test techniques
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience of types of defects found.
Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.
