Saturday, August 30, 2014

Real Time Software Testing - Chapter 3

Testing throughout the software life cycle
3.1 Software development models

Testing does not exist in isolation; test activities are related to software development activities.
Different development life cycle models need different approaches to testing.
3.2 V-model (sequential development model)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.
The four levels used in this syllabus are:
                Component (unit) testing;
                Integration testing;
                System testing;
                Acceptance testing.

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include the Capability Maturity Model Integration (CMMI) and ‘Software life cycle processes’ (ISO/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

3.3 Iterative-incremental development models

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system as a series of shorter development cycles. Examples are: prototyping, rapid application development (RAD), the Rational Unified Process (RUP) and agile development models. The system produced by an iteration may be tested at several levels as part of its development. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.
3.4 Testing within a life cycle model

In any life cycle model, there are several characteristics of good testing:
                For every development activity there is a corresponding testing activity.
                Each test level has test objectives specific to that level.
                The analysis and design of tests for a given test level should begin during the corresponding development activity.
                Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle.

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a commercial off-the-shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g. integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).
3.5 Test levels

For each of the test levels, the following can be identified: their generic objectives, the work product(s) being referenced for deriving test cases (i.e. the test basis), and the test object (i.e. what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.

3.6 Component testing

Component testing searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g. memory leaks) or robustness testing, as well as structural testing (e.g. branch coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool, and, in practice, usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally recording incidents.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
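A minimal sketch of this test-first cycle, using Python's built-in unittest framework (the `is_leap_year` function and its rules are hypothetical, invented for the example):

```python
import unittest

# Step 1 (test first): the expected behavior of a yet-to-be-written
# component is specified as executable test cases.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_year_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Step 2: write just enough code to make the tests pass, then re-run them.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main(argv=["tdd"], exit=False, verbosity=0)
</imports>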
3.7 Integration testing

Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), and interfaces between systems.
There may be more than one level of integration testing and it may be carried out on test objects of varying size. For example:
1. Component integration testing tests the interactions between software components and is done after component testing;
2. System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.


The greater the scope of integration, the more difficult it becomes to isolate failures to a specific component or system, which may lead to increased risk.
Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or component. In order to reduce the risk of late defect discovery, integration should normally be incremental rather than “big bang”.
Testing of specific non-functional characteristics (e.g. performance) may be included in integration testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of either module. Both functional and structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, the components can be built in the order required for the most efficient testing.

3.8 System testing

System testing is concerned with the behavior of a whole system/product as defined by the scope of a development project or programme.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level descriptions of system behavior, interactions with the operating system, and system resources.
System testing should investigate both functional and non-functional requirements of the system. Requirements may exist as text and/or models. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.
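As an illustrative sketch of a decision table for a hypothetical business rule (a discount applies only when the customer is a member AND the order total is at least 100), where each row, one combination of conditions plus the expected action, becomes a test case:

```python
# Decision table for a hypothetical discount rule; each row is one test case.
DECISION_TABLE = [
    # (is_member, large_order) -> discount expected?
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def discount_applies(is_member: bool, total: float) -> bool:
    return is_member and total >= 100

def run_decision_table():
    for (is_member, large_order), expected in DECISION_TABLE:
        total = 150 if large_order else 50   # representative value per condition
        assert discount_applies(is_member, total) == expected
    return len(DECISION_TABLE)               # number of test cases executed

assert run_decision_table() == 4
```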
An independent test team often carries out system testing.
3.9 Acceptance testing

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur as more than just a single test level, for example:
                A COTS software product may be acceptance tested when it is installed or integrated.
                Acceptance testing of the usability of a component may be done during component testing.
                Acceptance testing of a new functional enhancement may come before system testing.

Typical forms of acceptance testing include the following:

->User acceptance testing

Typically verifies the fitness for use of the system by business users.

->Operational (acceptance) testing

The acceptance of the system by the system administrators, including:
                testing of backup/restore;
                disaster recovery;
                user management;
                maintenance tasks;
                periodic checks of security vulnerabilities.

->Contract and regulation acceptance testing

Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the contract is agreed. Regulation acceptance testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.

->Alpha and beta (or field) testing

Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site. Beta testing, or field testing, is performed by people at their own locations. Both are performed by potential customers, not the developers of the product.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer’s site.


4. Levels of Testing in detail
Overview
In developing a large system, testing usually involves several stages (refer to the following figure):
                Unit Testing
                Integration Testing
                System Testing
               Acceptance Testing 

[Figure: the four levels of testing]

Initially, each program component (module) is tested independently, verifying that the component functions correctly with the types of input identified by studying the component’s design. Such testing is called Unit Testing (also component or module testing). Unit testing is done in a controlled environment with a predetermined set of data fed into the component to observe what output actions and data are produced.
When collections of components have been unit-tested, the next step is ensuring that the interfaces among the components are defined and handled properly. This process of verifying the synergy of system components against the program Design Specification is called Integration Testing.
Once the system is integrated, the overall functionality is tested against the Software Requirements Specification (SRS). Then, tests of other, non-functional requirements, such as performance, are done to ensure the system’s readiness to work successfully in a customer’s actual working environment. This step is called System Testing.
The next step is the customer’s validation of the system against the User Requirements Specification (URS). The customer performs this Acceptance Testing in their own working environment, usually with assistance from the developers. Once the system is accepted, it is installed and put to use.

Unit Testing
Unit Testing is done mostly by developers and addresses the goal of finding faults in modules (components):
Examining the code: typically, static testing methods like reviews, walkthroughs and inspections are used.

Proving code correct: after the coding and review exercise, if we want to ascertain the correctness of the code, we can use formal methods. A program is correct if it implements the functions and data properly as indicated in the design, and if it interfaces properly with all other components. One way to investigate program correctness is to view the code as a statement of logical flow. Using mathematical logic, if we can formulate the program as a set of assertions and theorems, we can show that the truth of the theorems implies the correctness of the code.
                 
• Use of this approach forces us to be more rigorous and precise in specification. Much work is involved in setting up and carrying out the proof. For example, the code for performing bubble sort is much smaller than its logical description and proof.
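A lightweight, hedged illustration of the idea, using the bubble sort example: the assertions below state, and check at run time, the loop invariant and postcondition that a formal proof would establish mathematically (this is a sketch of the assertions, not a proof):

```python
from collections import Counter

def bubble_sort(items):
    """Bubble sort with run-time assertions standing in for proof obligations."""
    a = list(items)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        # Loop invariant: after pass i, the tail a[n-1-i:] is sorted and
        # every earlier element is no larger than the start of that tail.
        assert all(a[k] <= a[k + 1] for k in range(n - 1 - i, n - 1))
        assert all(x <= a[n - 1 - i] for x in a[: n - 1 - i])
    # Postcondition: the output is ordered and is a permutation of the input.
    assert all(a[k] <= a[k + 1] for k in range(n - 1))
    assert Counter(a) == Counter(items)
    return a

assert bubble_sort([5, 1, 4, 2, 8]) == [1, 2, 4, 5, 8]
assert bubble_sort([]) == []
```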

Testing program components (modules)
In the absence of simpler methods and automated tools, proving code correct will remain an elusive goal for software engineers. Proving views programs in terms of classes of data and conditions, and the proof may not involve execution of the code. In contrast, testing is a series of experiments to observe the behavior of the program for various input conditions. While a proof tells us how a program will work in a hypothetical environment described by the design and requirements, testing gives us information about how a program works in its actual operating environment.
To test a component (module), input data and conditions are chosen to demonstrate an observable behavior of the code. A test case is a particular choice of input data to be used in testing a program. Test cases are generated by using either black-box or white-box approaches.

Integration Testing
Integration is the process of assembling unit-tested modules. We need to test the following aspects that were not previously addressed while independently testing the modules:
                Interfaces: To ensure “interface integrity,” the transfer of data between modules is tested. When data is passed to another module, by way of a call, there should not be any loss or corruption of data. The loss or corruption of data can happen due to mismatch or differences in the number or order of calling and receiving parameters.
                Module combinations may produce a different behavior due to combinations of data that are not exercised during unit testing.
                Global data structures, if used, may reveal errors due to unintended usage in some module.
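A minimal sketch of such an interface defect (the modules and values are hypothetical): the callee expects its parameters as (price, discount), but the integrating caller forwards them in the wrong order. Each function can pass its own unit tests, yet the pair fails when integrated.

```python
def apply_discount(price, discount):
    """Callee's interface: price first, then discount."""
    return price - discount

def checkout_buggy(price, discount):
    return apply_discount(discount, price)   # defect: arguments swapped

def checkout_fixed(price, discount):
    return apply_discount(price, discount)

# An integration test over the calling pair exposes the mismatch:
assert checkout_fixed(100, 10) == 90
assert checkout_buggy(100, 10) == -90        # wrong result, caught at integration
```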
Integration Strategies
Depending on design approach, one of the following integration strategies can be adopted:
· Big Bang approach
· Incremental approach
                Top-down testing
                Bottom-up testing
                Sandwich testing 

The Big Bang approach consists of testing each module individually and linking all these modules together only when every module in the system has been tested.

[Figure: example module hierarchy (M1 calls M2, M3 and M4; M3 calls M5; M4 calls M6 and M7)]

Though the Big Bang approach seems advantageous when we construct independent modules concurrently, it is quite challenging and risky, as we integrate all modules in a single step and test the resulting system. Locating interface errors, if any, becomes difficult here.
The alternative strategy is an incremental approach, wherein modules of a system are consolidated with already tested components of the system. In this way, the software is gradually built up, spreading the integration testing load more evenly through the construction phase. Incremental approach can be implemented in two distinct ways: Top-down and Bottom-up.
In Top-down testing, testing begins with the topmost module. A module will be integrated into the system only when the module which calls it has been already integrated successfully. An example order of Top-down testing for the above illustration will be:
[Figure: example top-down integration order]

The testing starts with M1. To test M1 in isolation, communications to modules M2, M3 and M4 have to be simulated by the tester, as these modules may not be ready yet. To simulate responses of M2, M3 and M4 whenever they are invoked from M1, “stubs” are created. Simple applications may require stubs which simply return control to their superior modules. More complex situations demand stubs that simulate a full range of responses, including parameter passing. Stubs may be individually created by the tester (as programs in their own right) or they may be provided by a software testing harness, which is a piece of software specifically designed to provide a testing environment.
In the above illustration, M1 would require stubs to simulate the activities of M2, M3 and M4. The integration of M3 would require a stub for M5, and M4 would require stubs for M6 and M7. Elementary modules (those which call no subordinates) require no stubs.
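A small sketch of the top-down idea, reusing the module names from the illustration (the behaviors are invented for the example): stubs return canned responses so M1 can be tested before its subordinates exist.

```python
def m2_stub(x):
    return x + 1      # canned response standing in for the real M2

def m3_stub(x):
    return x * 2      # canned response standing in for the real M3

def m4_stub(x):
    return 0          # trivial stub: returns a fixed value

def m1(x, m2=m2_stub, m3=m3_stub, m4=m4_stub):
    """Top-level module under test; its subordinates are injected so that
    stubs can stand in until the real modules are integrated."""
    return m2(x) + m3(x) + m4(x)

# Testing M1 in isolation exercises only M1's own logic:
assert m1(3) == 10    # (3 + 1) + (3 * 2) + 0
```

As the real M2, M3 and M4 become available, they replace the stubs one at a time, which is the incremental integration the text describes.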

Bottom-up testing begins with elementary modules. If M5 is ready, we need to simulate the activities of its superior, M3. Such a “driver” for M5 would simulate the invocation activities of M3. As with the stub, the complexity of a driver would depend upon the application under test. The driver would be responsible for invoking the module under test; it could be responsible for passing test data (as parameters) and it might be responsible for receiving output data. Again, the driving function can be provided through a testing harness or may be created by the tester as a program. The following diagram shows the bottom-up testing approach for the above illustration.

Bottom-up Approach

[Figure: bottom-up integration approach]

A driver must be provided for modules M2, M5, M6, M7, M3 and M4. There is no need for a driver for the topmost module, M1.
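A small bottom-up sketch to match: the elementary module M5 is real, and a driver plays the role of its superior M3, invoking M5 with chosen test data and checking the outputs (M5's behavior, an average, is invented for the example).

```python
def m5(values):
    """Elementary module under test (hypothetical behavior: an average)."""
    return sum(values) / len(values) if values else 0.0

def m3_driver():
    """Driver simulating M3's invocation of M5 with test data."""
    cases = [([2, 4, 6], 4.0), ([], 0.0), ([5], 5.0)]
    for data, expected in cases:
        assert m5(data) == expected
    return len(cases)      # number of driver cases executed

assert m3_driver() == 3
```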

System Testing
The objective of unit and integration testing was to ensure that the code implemented the design properly. In system testing, we need to ensure that the system does what the customer wants it to do. Initially the functions (functional requirements) performed by the system are tested. A function test checks whether the integrated system performs its functions as specified in the requirements.
After ensuring that the system performs the intended functions, performance testing is done. These non-functional requirements include security, accuracy, speed and reliability.
System testing begins with function testing. Since the focus here is on functionality, a black-box approach is taken (refer to Test Techniques). Function testing is performed in a controlled situation. Since function testing compares the system’s actual performance with its requirements, test cases are developed from the requirements document (SRS). For example, a word-processing system can be tested by examining the following functions: document creation, document modification and document deletion. To test document modification, adding a character, adding a word, adding a paragraph, deleting a character, deleting a word, deleting formatting, etc. are to be tested.
Performance testing addresses the non-functional requirements. System performance is measured against the performance objectives set by the customer. For example, function testing may have demonstrated how the system handles deposit or withdraw transactions in a bank account package. Performance testing evaluates the speed with which calculations are made, the precision of the computation, the security precautions required, and the response time to user inquiry.
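A crude sketch of a timing check in this spirit (the `deposit` operation, iteration count and time budget are all hypothetical; real performance testing would use a dedicated tool and a production-like environment):

```python
import time

def deposit(balance, amount):
    """Hypothetical transaction standing in for the bank-account example."""
    return balance + amount

def timing_test(iterations=10_000, budget_seconds=1.0):
    """The batch of transactions must complete within the time budget."""
    start = time.perf_counter()
    balance = 0
    for _ in range(iterations):
        balance = deposit(balance, 1)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"took {elapsed:.3f}s, budget {budget_seconds}s"
    return balance

assert timing_test() == 10_000
```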
Acceptance Testing

Acceptance testing is the customer (and user) evaluation of the system, primarily to determine whether the system meets their needs and expectations. Usually the acceptance test is done by the customer with assistance from the developers. Customers can evaluate the system either by conducting a benchmark test or by a pilot test. In a benchmark test, the system’s performance is evaluated against test cases that represent typical conditions under which the system will operate when actually installed. A pilot test installs the system on an experimental basis, and the system is evaluated against everyday working conditions.

Sometimes the system is piloted in-house before the customer runs the real pilot test. The in-house test, in such case, is called an alpha test, and the customer’s pilot is a beta test. This approach is common in the case of commercial software where the system has to be released to a wide variety of customers.
A third approach, parallel testing, is used when a new system is replacing an existing one or is part of a phased development. The new system is put to use in parallel with the previous version; this facilitates a gradual transition of users and allows the new system to be compared and contrasted with the old.

Test types
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be: the testing of a function to be performed by the software; a non-functional quality characteristic, such as reliability or usability; the structure or architecture of the software or system; or related to changes, i.e. confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing).
A model of the software may be developed and/or used in structural and functional testing, for example, in functional testing a process flow model, a state transition model or a plain language specification; and for structural testing a control flow model or menu structure model. 


4.1 Testing of function (functional testing)

The functions that a system, subsystem or component is to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does.
Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g. tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system. Functional testing considers the external behavior of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g. a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

4.2 Testing of non-functional software characteristics (non-functional testing)
Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing.

Types of Performance Tests (Non-Functional Testing)

1. Stress tests – evaluate the system when stressed to its limits. If the requirements state that a system is to handle up to a specified number of devices or users, a stress test evaluates system performance when all those devices or users are active simultaneously. This test brings out the performance during peak demand.
2. Volume tests – address the handling of large amounts of data in the system. This includes:
                checking whether data structures have been defined large enough to handle all possible situations,
                checking the size of fields, records and files to see whether they can accommodate all expected data, and
                checking the system’s reaction when data sets reach their maximum size.

3. Configuration tests – analyze the various software and hardware configurations specified in the requirements (e.g. a system serving a variety of audiences).
4. Compatibility tests – are needed when a system interfaces with other systems (e.g. a system that retrieves information from a large database system).

5. Regression tests – are required when the system being tested is replacing an existing system (always used during phased development, to ensure that the new system’s performance is at least as good as that of the old).
6. Security tests – ensure the security requirements are met (testing characteristics related to availability, integrity and confidentiality of data and services).
7. Timing tests – include response time, transaction time, etc. Usually done together with stress tests to see whether the timing requirements are met even when the system is extremely active.
8. Environmental tests – look at the system’s ability to perform at the installation site. If the requirements include tolerances for heat, humidity, motion, chemical presence, moisture, portability, electrical or magnetic fields, disruption of power, or any other environmental characteristics of the site, then our tests should ensure that the system performs under these conditions.
9. Quality tests – evaluate the system’s reliability, maintainability and availability. These tests include calculation of mean time to failure and mean time to repair, as well as average time to find and fix a fault.
10. Recovery tests – address the response to loss of data, power, devices or services. The system is subjected to the loss of system resources and tested to see whether it recovers properly.
11. Maintenance tests – address the need for diagnostic tools and procedures that help in finding the source of problems, and verify the existence and functioning of aids like diagnostic programs, memory maps, traces of transactions, etc.
12. Documentation tests – ensure that documents like user guides, maintenance guides and technical documentation exist, and verify the consistency of the information in them.
13. Human factors (or usability) tests – investigate user-interface-related requirements. Display screens, messages, report formats and other aspects are examined for ease of use.

4.3 Testing of software structure/architecture (structural testing)
Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed and, therefore, increase coverage. Coverage techniques are covered in Chapter 4.
At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
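A hand-rolled sketch of the decision-coverage idea (real projects would use a coverage tool rather than instrumenting by hand; the `classify` function is invented for the example): record which branch outcomes a test suite exercised, then report coverage as a percentage of all outcomes.

```python
executed = set()   # branch outcomes exercised so far

def classify(n):
    if n < 0:
        executed.add("n<0: true")
        return "negative"
    executed.add("n<0: false")
    if n == 0:
        executed.add("n==0: true")
        return "zero"
    executed.add("n==0: false")
    return "positive"

ALL_OUTCOMES = {"n<0: true", "n<0: false", "n==0: true", "n==0: false"}

# A two-case test suite:
classify(-1)
classify(7)
coverage = 100 * len(executed) / len(ALL_OUTCOMES)
assert coverage == 75.0   # the n == 0 branch was never taken; adding
                          # classify(0) as a third case would reach 100%
```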

4.4 Testing related to changes (confirmation testing (retesting) and regression testing)
After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called confirmation testing (or retesting). Debugging (defect fixing) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
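A minimal sketch of such a repeatable, automatable regression suite (the `word_count` function and its recorded cases are invented for the example): the same cases are replayed after every change, and any non-empty result signals a regression.

```python
def word_count(text):
    return len(text.split())

# Recorded cases replayed after every change to the software:
REGRESSION_CASES = [
    ("", 0),
    ("one", 1),
    ("two  words", 2),          # runs of whitespace count as one separator
    ("  leading space", 2),
]

def run_regression_suite():
    """Return the failing cases; an empty list means the suite passed."""
    return [(text, expected, word_count(text))
            for text, expected in REGRESSION_CASES
            if word_count(text) != expected]

assert run_regression_suite() == []   # all regression cases still pass
```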

4.5 Maintenance testing
Once deployed, a software system is often in service for years or decades. During this time the system and its environment are often corrected, changed or extended. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.
Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, or patches to newly exposed or discovered vulnerabilities of the operating system.
Maintenance testing for migration (e.g. from one platform to another) should include operational tests of the new environment, as well as of the changed software.
Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
