Testing throughout the software life cycle
3.1 Software development models
Testing does not exist in isolation; test activities are related to software development activities.
Different development life cycle models need different approaches to testing.
3.2 V-model (sequential development model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.
The four levels used in this syllabus are:
Component (unit) testing;
Integration testing;
System testing;
Acceptance testing.
In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.
3.3 Iterative-incremental development models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system, done as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), Rational Unified Process (RUP) and agile development models. The resulting system produced by iteration may be tested at several levels as part of its development. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.
3.4 Testing within a life cycle model
In any life cycle model, there are several characteristics of good testing:
For every development activity there is a corresponding testing activity.
Each test level has test objectives specific to that level.
The analysis and design of tests for a given test level should begin during the corresponding development activity.
Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.
Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a commercial off-the-shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g. integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).
3.5 Test levels
For each of the test levels, the following can be identified: their generic objectives, the work product(s) being referenced for deriving test cases (i.e. the test basis), and the test object (i.e. what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.
3.5.1 Component testing
Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
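As a minimal sketch of testing a component in isolation (the `charge` function and its payment gateway are hypothetical, not from the syllabus), a stub built with Python's `unittest.mock` can stand in for an external dependency that is not yet available:

```python
from unittest.mock import Mock

def charge(gateway, amount):
    """Component under test: submits a charge via an external gateway."""
    if amount <= 0:
        return "rejected"
    response = gateway.submit(amount)   # external dependency
    return "ok" if response == 200 else "failed"

# Stub the gateway so the component can be tested in isolation.
gateway_stub = Mock()
gateway_stub.submit.return_value = 200

assert charge(gateway_stub, 50) == "ok"
assert charge(gateway_stub, -1) == "rejected"

# Reconfigure the stub to simulate a gateway failure.
gateway_stub.submit.return_value = 500
assert charge(gateway_stub, 50) == "failed"
```

The same idea extends to simulators that model timing or hardware behavior more faithfully than a canned return value.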
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and, in practice, usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally recording incidents.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
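A small illustration of the test-first idea (the leap-year rule here is an invented example, not part of the syllabus): the test cases are written with a unit test framework before the function exists, then just enough code is written to make them pass.

```python
import unittest

# Step 1: write the tests first — they fail until is_leap() is written.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_4(self):
        self.assertTrue(is_leap(2024))
    def test_century_not_leap(self):
        self.assertFalse(is_leap(1900))
    def test_400_year_leap(self):
        self.assertTrue(is_leap(2000))

# Step 2: write just enough code to make the tests pass.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the tests; the cycle repeats for the next small piece.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```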
3.5.2 Integration testing
Integration testing tests interfaces between components, interactions between different parts of a system (such as the operating system, file system and hardware), and interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size. For example:
1. Component integration testing tests the interactions between software components and is done after component testing;
2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.
The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.
Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than “big bang”.
Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of either module. Both functional and structural approaches may be used.
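To make the focus on communication concrete, here is a sketch (modules and field names are hypothetical) of a component integration test: it exercises only the interface between two already unit-tested modules, not their internals.

```python
# Hypothetical module A: builds an order record for module B.
def build_order(item, qty):
    return {"item": item, "qty": qty}

# Hypothetical module B: prices a record produced by A.
PRICES = {"widget": 3}

def price_order(order):
    return PRICES[order["item"]] * order["qty"]

# Integration test: exercise the interface between A and B.
# It would catch, e.g., A emitting "quantity" while B expects "qty" —
# a defect neither module's own unit tests could reveal.
order = build_order("widget", 4)
assert price_order(order) == 12
```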
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, they can be built in the order required for most efficient testing.
3.5.3 System testing
System testing is concerned with the behavior of a whole system/product as defined by the scope of a development project or programme.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level descriptions of system behavior, interactions with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the system. Requirements may exist as text and/or models. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.
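A sketch of the decision-table idea mentioned above (the shipping rule is an invented example): each combination of business-rule conditions becomes one row of the table, and each row becomes a test case.

```python
# Hypothetical business rule: members with orders of 50 or more get
# expedited shipping; members or large orders get free shipping.
def shipping(member, total):
    if member and total >= 50:
        return "expedited"
    if member or total >= 50:
        return "free"
    return "standard"

# Decision table: one test case per combination of conditions.
table = [
    # (member, total, expected outcome)
    (True,  60, "expedited"),
    (True,  20, "free"),
    (False, 60, "free"),
    (False, 20, "standard"),
]
for member, total, expected in table:
    assert shipping(member, total) == expected
```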
An independent test team often carries out system testing.
3.5.4 Acceptance testing
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.
Acceptance testing may occur as more than just a single test level, for example:
A COTS software product may be acceptance tested when it is installed or integrated.
Acceptance testing of the usability of a component may be done during component testing.
Acceptance testing of a new functional enhancement may come before system testing.
Typical forms of acceptance testing include the following:
->User acceptance testing
Typically verifies the fitness for use of the system by business users
->Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
testing of backup/restore;
disaster recovery;
user management;
maintenance tasks;
periodic checks of security vulnerabilities.
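A highly simplified sketch of the backup/restore check in the list above (paths and file contents are invented for illustration): back up a data directory, simulate data loss, restore, and verify the contents round-trip intact.

```python
import pathlib
import shutil
import tempfile

# Create a throwaway data directory with one record file.
data = pathlib.Path(tempfile.mkdtemp()) / "data"
data.mkdir()
(data / "records.txt").write_text("id=1\nid=2\n")

# Back up the directory to a zip archive.
backup = shutil.make_archive(str(data.parent / "backup"), "zip", data)

shutil.rmtree(data)                   # simulate data loss
data.mkdir()
shutil.unpack_archive(backup, data)   # restore from the backup

# Verify the restored data matches the original.
assert (data / "records.txt").read_text() == "id=1\nid=2\n"
```

A real operational acceptance test would of course target the production backup tooling, not `shutil`, but the shape of the check is the same.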
->Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.
->Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer’s site.
4. Levels of Testing in detail
Overview
In developing a large system, testing usually involves several stages (refer to the following figure):
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Initially, each program component (module) is tested independently, verifying that the component functions correctly with the types of input identified by studying the component’s design. Such testing is called Unit Testing (or component or module testing). Unit testing is done in a controlled environment with a predetermined set of data fed into the component to observe what output actions and data are produced.
When collections of components have been unit-tested, the next step is ensuring that the interfaces among the components are defined and handled properly. This process of verifying the synergy of system components against the program Design Specification is called Integration Testing.
Once the system is integrated, the overall functionality is tested against the Software Requirements Specification (SRS). Then the non-functional requirements, such as performance, are tested to ensure the system is ready to work successfully in a customer’s actual working environment. This step is called System Testing.
The next step is the customer’s validation of the system against the User Requirements Specification (URS). The customer performs this Acceptance Testing exercise in their own working environment, usually with assistance from the developers. Once the system is accepted, it will be installed and put to use.
Unit Testing
Unit Testing is done mostly by developers and addresses the goal of finding faults in modules (components):
Examining the code: typically, static testing methods such as reviews, walkthroughs and inspections are used.
Proving code correct: after coding and review, if we want to ascertain the correctness of the code we can use formal methods. A program is correct if it implements the functions and data properly as indicated in the design, and if it interfaces properly with all other components. One way to investigate program correctness is to view the code as a statement of logical flow. Using mathematical logic, if we can formulate the program as a set of assertions and theorems, we can show that the truth of the theorems implies the correctness of the code.
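A lightweight, run-time version of the assertion idea (a sketch, not a formal proof): state a function's pre- and post-conditions as executable assertions and check them on every run. Bubble sort is used here because the text below mentions it.

```python
def bubble_sort(xs):
    # Precondition: input is a list of mutually comparable items.
    assert isinstance(xs, list)
    xs = xs[:]  # work on a copy; leave the caller's list unchanged
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
        # Invariant: after pass i, the last i+1 elements are in their
        # final sorted positions.
    # Postcondition: the result is ordered.
    assert all(xs[k] <= xs[k + 1] for k in range(len(xs) - 1))
    return xs

assert bubble_sort([3, 1, 2]) == [1, 2, 3]
```

A full proof would also have to show the postcondition holds for every possible input, which is exactly the "much work" the next paragraph refers to.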
• Use of this approach forces us to be more rigorous and precise in specification. Much work is involved in setting up and carrying out the proof. For example, the code for performing bubble sort is much smaller than its logical description and proof.
Testing program components (modules)
In the absence of simpler methods and automated tools, “Proving code correctness” will be an elusive goal for software engineers. Proving views programs in terms of classes of data and conditions and the proof may not involve execution of the code. On the contrary, testing is a series of experiments to observe the behavior of the program for various input conditions. While proof tells us how a program will work in a hypothetical environment described by the design and requirements, testing gives us information about how a program works in its actual operating environment.
To test a component (module), input data and conditions are chosen to demonstrate an observable behavior of the code. A test case is a particular choice of input data to be used in testing a program. Test cases are generated using either black-box or white-box approaches.
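As a sketch of the two approaches on one (invented) component: black-box test cases are chosen from the specification, here at the boundaries of the valid range, while a white-box test case is chosen to execute a particular branch of the code.

```python
# Hypothetical component under test: classify an exam score (0..100).
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

# Black-box test cases: derived from the specification (boundaries).
assert grade(0) == "fail"
assert grade(39) == "fail"
assert grade(40) == "pass"
assert grade(100) == "pass"

# White-box test case: chosen to exercise the error-handling branch.
try:
    grade(101)
    assert False, "expected ValueError"
except ValueError:
    pass
```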
Integration Testing
Integration is the process of assembling unit-tested modules. We need to test the following aspects that are not previously addressed while independently testing the modules:
Interfaces: To ensure “interface integrity,” the transfer of data between modules is tested. When data is passed to another module, by way of a call, there should not be any loss or corruption of data. The loss or corruption of data can happen due to mismatch or differences in the number or order of calling and receiving parameters.
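A sketch of why the interface itself needs testing (module names and the interest rule are invented): a defect in the number or order of parameters passes each module's own unit tests but corrupts data at the interface.

```python
# Hypothetical module B: expects its parameters in the order (rate, amount).
def interest(rate, amount):
    return amount * rate

# Hypothetical module A: calls B with the arguments in B's expected order.
def monthly_charge(balance):
    return interest(0.02, balance)

# Interface test: catches argument-order mistakes that unit-testing B
# alone would miss — interest(balance, 0.02) also runs without error,
# but returns the wrong result when called from A.
assert monthly_charge(1000) == 20.0
```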
Module combinations may produce a different behavior due to combinations of data that are not exercised during unit testing.
Global data structures, if used, may reveal errors due to unintended usage in some module.
Integration Strategies
Depending on the design approach, one of the following integration strategies can be adopted:
· Big Bang approach
· Incremental approach
  - Top-down testing
  - Bottom-up testing
  - Sandwich testing
The Big Bang approach consists of testing each module individually and linking all these modules together only when every module in the system has been tested.
Though the Big Bang approach seems advantageous when we construct independent modules concurrently, it is quite challenging and risky, as we integrate all modules in a single step and test the resulting system. Locating interface errors, if any, becomes difficult here.
The alternative strategy is an incremental approach, wherein modules of a system are consolidated with already tested components of the system. In this way, the software is gradually built up, spreading the integration testing load more evenly through the construction phase. Incremental approach can be implemented in two distinct ways: Top-down and Bottom-up.
In Top-down testing, testing begins with the topmost module. A module is integrated into the system only when the module which calls it has already been integrated successfully. In the illustration used here, M1 calls M2, M3 and M4; M3 calls M5; and M4 calls M6 and M7. An example order of Top-down testing for this hierarchy is: M1; then M2, M3 and M4; then M5, M6 and M7.
The testing starts with M1. To test M1 in isolation, communication with modules M2, M3 and M4 has to be simulated by the tester, as these modules may not be ready yet. To simulate the responses of M2, M3 and M4 whenever they are invoked from M1, “stubs” are created. Simple applications may require stubs which simply return control to their superior modules. More complex situations demand stubs that simulate a full range of responses, including parameter passing. Stubs may be individually created by the tester (as programs in their own right) or they may be provided by a software testing harness, which is a piece of software specifically designed to provide a testing environment.
In the above illustration, M1 would require stubs to simulate the activities of M2, M3 and M4. The integration of M3 would require a stub for M5, and M4 would require stubs for M6 and M7. Elementary modules (those which call no subordinates) require no stubs.
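This top-down step can be sketched in Python (the signatures are hypothetical; only the module names M1..M4 come from the illustration): M1 is real, and simple stubs return canned responses in place of its three subordinates.

```python
# Top-down integration step: M1 is real, its subordinates are stubs.
def m1(request, m2, m3, m4):
    """Top module: delegates the request to its three subordinates."""
    return [m2(request), m3(request), m4(request)]

# Simple stubs: return a canned response to their superior module.
def m2_stub(request):
    return "m2-ok"

def m3_stub(request):
    return "m3-ok"

def m4_stub(request):
    return "m4-ok"

# M1's control flow can be tested before M2, M3 and M4 exist.
assert m1("req", m2_stub, m3_stub, m4_stub) == ["m2-ok", "m3-ok", "m4-ok"]
```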
Bottom-up testing begins with the elementary modules. If M5 is ready, we need to simulate the activities of its superior, M3. Such a “driver” for M5 would simulate the invocation activities of M3. As with the stub, the complexity of a driver depends upon the application under test. The driver is responsible for invoking the module under test; it could be responsible for passing test data (as parameters) and it might be responsible for receiving output data. Again, the driving function can be provided through a testing harness or may be created by the tester as a program. The following diagram shows the bottom-up testing approach for the above illustration.
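The complementary bottom-up step can be sketched the same way (M5's behavior is invented for illustration): M5 is real, and a driver plays the role of its superior M3, invoking it with test data and checking the output.

```python
# Bottom-up integration step: M5 is real, a driver stands in for M3.
def m5(values):
    """Elementary module under test: sums its input values."""
    return sum(values)

def m3_driver():
    """Driver simulating M3: invokes M5 with test data, checks output."""
    cases = [([1, 2, 3], 6), ([], 0), ([-1, 1], 0)]
    for data, expected in cases:
        assert m5(data) == expected
    return "all cases passed"

assert m3_driver() == "all cases passed"
```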
