1. TESTING PRINCIPLES
Principle 1 – Testing shows the presence of defects
Running a test through a software system can only show that one or more defects exist; testing cannot show that the software is error-free. Consider whether the top 10 wanted criminals website was error-free. There were no functional defects, yet the website failed. In this case the problem was non-functional, and the absence of defects was not an adequate criterion for releasing the website into operation. In Chapter 2 we will discuss confirmation testing, in which a previously failed test is rerun to show that, under the same conditions, a previously reported problem no longer exists. In this type of situation, testing can show that one particular problem no longer exists.
Although there may be other objectives, usually the main purpose of testing is to find defects. Therefore tests should be designed to find as many defects as possible.
Principle 2 – Exhaustive testing is impossible
If testing finds problems, then surely you would expect more testing to find additional problems, until eventually we would have found them all. We discussed exhaustive testing earlier when looking at the Ariane 5 rocket launch, and concluded that for large complex systems, exhaustive testing is not possible. However, could it be possible to test small pieces of software exhaustively, and only incorporate exhaustively tested code into large systems?
Exhaustive testing: a test approach in which all possible data combinations are used. This includes implicit data combinations present in the state of the software/data at the start of testing.
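To see why exhaustive testing is impractical even for small pieces of software, consider a hypothetical function taking just three 32-bit integer inputs (the function, the test rate, and the figures below are illustrative, not taken from any real system):

```python
# Illustrative: count the input combinations that exhaustive testing of a
# function taking three 32-bit integers would require.
inputs_per_parameter = 2 ** 32          # distinct values of one 32-bit int
parameters = 3
combinations = inputs_per_parameter ** parameters

# At an (optimistic) one million tests per second, how long would it take?
tests_per_second = 1_000_000
seconds_per_year = 60 * 60 * 24 * 365
years = combinations / tests_per_second / seconds_per_year

print(f"{combinations:.3e} combinations")   # 7.923e+28
print(f"{years:.3e} years to run them all")
```

And this still ignores the implicit combinations mentioned in the definition above: the state of the software and its data when testing starts multiplies the total further.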
Principle 3 – Early testing
When discussing why software fails, we briefly mentioned the idea of early testing. This principle is important because, as a proposed deployment date approaches, time pressure can increase dramatically. There is a real danger that testing will be squeezed, and this is bad news if the only testing we are doing is after all the development has been completed. The earlier the testing activity is started, the longer the elapsed time available. Testers do not have to wait until software is available to test.
Work-products are created throughout the software development life cycle (SDLC), and we can test them as soon as they are ready. Later we will see that requirement documents are the basis for acceptance testing, so the creation of acceptance tests can begin as soon as requirement documents are available. Creating these tests highlights issues in the requirements themselves: are individual requirements testable? Can we find ambiguous or missing requirements?
Many problems in software systems can be traced back to missing or incorrect requirements.
In early testing we are trying to find errors and defects before they are passed to the next stage of the development process. Early testing techniques are attempting to show that what is produced as a system specification, for example, accurately reflects that which is in the requirement documents. Ed Kit (Kit, 1995) discusses identifying and eliminating errors at the part of the SDLC in which they are introduced. If an error/defect is introduced in the coding activity, it is preferable to detect and correct it at this stage. If a problem is not corrected at the stage in which it is introduced, this leads to what Kit calls ‘errors of migration’. The result is rework. We need to rework not just the part where the mistake was made, but each subsequent part where the error has been replicated. A defect found at acceptance testing where the original mistake was in the requirements will require several work-products to be reworked, and subsequently to be retested.
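Kit's 'errors of migration' can be sketched in a few lines. The linear sequence of stages and the rule that everything between introduction and detection must be reworked are simplifying assumptions for illustration:

```python
# A sketch of 'errors of migration': the further a defect travels from the
# stage where it was introduced, the more work-products need rework.
# The four-stage linear SDLC here is a simplifying assumption.
STAGES = ["requirements", "design", "coding", "acceptance testing"]

def work_products_to_rework(introduced_in: str, found_in: str) -> list[str]:
    """Every work-product from the stage where the mistake was made up to
    (and including) the stage where it was found must be reworked and
    subsequently retested."""
    start = STAGES.index(introduced_in)
    end = STAGES.index(found_in)
    return STAGES[start:end + 1]

# A coding defect caught during coding: one work-product to rework.
print(work_products_to_rework("coding", "coding"))
# A requirements defect caught at acceptance testing: four.
print(work_products_to_rework("requirements", "acceptance testing"))
```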
Studies have been done on the cost impact of errors at the different development stages. What is undoubtedly true is that the relative cost of identifying and correcting defects rises very steeply the later in the life cycle they are found: the earlier a problem (defect) is found, the less it costs to fix.
The objectives of various stages of testing can be different. For example, in the review processes, we may focus on whether the documents are consistent and no errors have been introduced when the documents were produced. Other stages of testing can have other objectives. The important point is that testing has defined objectives.
Principle 4 – Defect clustering
Problems do occur in software! It is a fact. Once testing has identified (most of) the defects in a particular application, it is at first surprising that the spread of defects is not uniform. In a large application, it is often a small number of modules that exhibit the majority of the problems. This can be for a variety of reasons, some of which are:
• System complexity.
• Volatile code.
• The effects of change upon change.
• Development staff experience.
• Development staff inexperience.
This is the application of the Pareto principle to software testing: approximately 80 per cent of the problems are found in about 20 per cent of the modules. It is useful if testing activity reflects this spread of defects, and targets areas of the application under test where a high proportion of defects can be found. However, it must be remembered that testing should not concentrate exclusively on these parts. There may be fewer defects in the remaining code, but testers still need to search diligently for them.
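The Pareto effect becomes visible as soon as a defect log is grouped by module. In this sketch the module names and defect counts are invented for illustration:

```python
from collections import Counter

# Hypothetical defect log: one entry per defect, naming the module in
# which it was found (counts invented for illustration).
defect_log = (
    ["billing"] * 40 + ["auth"] * 35 + ["reports"] * 10 +
    ["search"] * 6 + ["admin"] * 4 + ["help"] * 2 +
    ["profile"] * 1 + ["export"] * 1 + ["audit"] * 1
)
# Assume the application has 10 modules in total (one, 'archive',
# has no defects logged against it at all).

counts = Counter(defect_log)
total = len(defect_log)

# The top 20% of modules (2 of 10), ranked by defect count.
top_two = [module for module, _ in counts.most_common(2)]
share = sum(counts[m] for m in top_two) / total
print(f"Top 20% of modules account for {share:.0%} of all defects")
```

A report like this helps target further testing at the defect-dense modules, while reminding us that the other eight modules still received a quarter of the defects between them.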
Principle 5 – The pesticide paradox
Running the same set of tests continually will not continue to find new defects. Developers will soon know that the test team always tests the boundaries of conditions, for example, so they will test these conditions before the software is delivered. This does not make defects elsewhere in the code less likely, so continuing to use the same test set will result in decreasing effectiveness of the tests. Using other techniques will find different defects.
For example, a small change to software could be specifically tested and an additional set of tests performed, aimed at showing that no additional problems have been introduced (this is known as regression testing). However, the software may fail in production because the regression tests are no longer relevant to the requirements of the system or the test objectives. Any regression test set needs to change to reflect business needs, and what are now seen as the most important risks.
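A toy illustration of the pesticide paradox (the function and its defect are invented): a fixed test set that only ever probes the boundaries keeps passing after a defect is introduced in the middle of the range, while a test designed with a different technique catches it.

```python
def discount(order_value: int) -> int:
    """Percentage discount for an order. A defect has crept in:
    orders of exactly 500 get 99 per cent instead of 10."""
    if order_value <= 0:
        return 0
    if order_value == 500:      # the newly introduced defect
        return 99
    if order_value <= 1000:
        return 10
    return 20

# The 'pesticide' suite: the same boundary values, rerun every release.
boundary_suite = {0: 0, 1: 10, 1000: 10, 1001: 20}
assert all(discount(v) == expected for v, expected in boundary_suite.items())
print("Boundary suite still passes - the defect goes unnoticed")

# A different technique (here, sampling inside each equivalence partition)
# exercises fresh values and exposes the defect.
assert discount(500) == 99      # actual behaviour; the spec says 10
print("Mid-partition value 500 returns", discount(500), "instead of 10")
```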
Principle 6 – Testing is context dependent
Different testing is necessary in different circumstances. A website where information can merely be viewed will be tested in a different way to an e-commerce site, where goods can be bought using credit/debit cards. We need to test an air traffic control system with more rigour than an application for calculating the length of a mortgage.
Risk can be a large factor in determining the type of testing that is needed. The higher the possibility of losses, the more we need to invest in testing the software before it is implemented. For an e-commerce site, we should concentrate on security aspects. Is it possible to bypass the use of passwords? Can ‘payment’ be made with an invalid credit card, by entering excessive data into the card number? Security testing is an example of a specialist area, not appropriate for all applications. Such types of testing may require specialist staff and software tools.
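One of the security questions above ('can payment be made by entering excessive data into the card number?') translates directly into negative test cases. The validator below is a hypothetical sketch, not a real payment API:

```python
def is_valid_card_number(raw: str) -> bool:
    """Hypothetical input check: card numbers are 13-19 digits.
    Anything else, including oversized or non-numeric input, must be
    rejected before it reaches payment processing."""
    digits = raw.replace(" ", "")
    return digits.isdigit() and 13 <= len(digits) <= 19

# Negative tests: the tester deliberately supplies hostile input.
assert not is_valid_card_number("4" * 5000)          # excessive data
assert not is_valid_card_number("'; DROP TABLE--")   # non-numeric junk
assert not is_valid_card_number("")                  # empty input
assert is_valid_card_number("4111 1111 1111 1111")   # well-formed input passes
print("All negative card-number tests behaved as expected")
```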
Principle 7 – Absence of errors fallacy
Software with no known errors is not necessarily ready to be shipped. Does the application under test match up to the users’ expectations of it? The fact that no defects are outstanding is not a good reason to ship the software.
Before dynamic testing has begun, there are no defects reported against the code delivered. Does this mean that software that has not been tested (but has no outstanding defects against it) can be shipped? We think not!
2. FUNDAMENTAL TEST PROCESS
We previously established that testing is a process. That process is detailed in what has become known as the fundamental test process, a key element of what testers do, and it is applicable at all stages of testing.
The most visible part of testing is running one or more tests: test execution. We also have to prepare for running tests, analyze the tests that have been run, and see whether testing is complete. Both planning and analyzing are very necessary activities that enhance and amplify the benefits of the test execution itself. It is no good testing without deciding how, when and what to test.
The fundamental test process consists of five parts that encompass all aspects of testing:
(1) Planning and control.
(2) Analysis and design.
(3) Implementation and execution.
(4) Evaluating exit criteria and reporting.
(5) Test closure activities.
Although the main activities are in a broad sequence, they are not undertaken in a rigid way. An earlier activity may need to be revisited. A defect found in test execution can sometimes be resolved by adding functionality that was originally not present (either missing in error, or new facilities needed to make the other part correct). The new features themselves have to be tested, so even though implementation and execution are in progress, the ‘earlier’ activity of analysis and design has to be performed for the new features.
2.1 Test planning and control
Planning is determining what is going to be tested, and how this will be achieved. It is where we draw a map; how activities will be done; and who will do them. Test planning is also where we define the test completion criteria. Completion criteria are how we know when testing is finished. Control, on the other hand, is what we do when the activities do not match up with the plans. It is the on-going activity where we compare the progress against the plan. As progress takes place, we may need to adjust plans to meet the targets, if this is possible. Therefore we need to undertake both planning and control throughout the testing activities.
The main activities of test planning are given below:
• Defining the scope and objectives of testing and identifying risks.
• Determining the test approach (techniques, test items, coverage, identifying and interfacing the teams involved in testing, testware).
• Detailing what is required to do the testing (e.g. people, test environment, PCs).
• Implementing the test policy and/or the test strategy.
• Scheduling the test analysis and design tasks.
• Scheduling test implementation, execution and evaluation.
• Detailing when testing will stop: the exit criteria.
We would normally consider the following parts for test control:
• Measuring and analyzing results.
• Comparing expected and actual progress, test coverage and exit criteria.
• Making corrections if things go wrong, and deciding actions.
2.2 Test analysis and design
Analysis and design are concerned with the fine detail of what to test (test conditions), and how to combine test conditions into test cases, so that a small number of test cases can cover as many of the test conditions as possible. The analysis and design stage is the bridge between planning and test execution. It is looking backward to the planning (schedules, people, what is going to be tested) and forward to the execution activity (test expected results, what environment will be needed).
Test design involves predicting how the software under test should behave in a given set of circumstances. Sometimes the expected outcome of a test is trivial: when ordering books from an online book retailer, for instance, under no circumstances should money be refunded to the customer’s card without intervention from a supervisor. If we do not detail expected outcomes before starting test execution, there is a real danger that we will miss the one item of detail that is vital, but wrong.
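The point about detailing expected outcomes before execution can be sketched as a table of test cases written at design time. The refund rule and the `process_refund` function below are hypothetical:

```python
# Function under test (hypothetical). The specification says money is
# never refunded without supervisor intervention.
def process_refund(amount: float, supervisor_approved: bool) -> str:
    if not supervisor_approved:
        return "refused"
    return "refunded"

# Test cases written during design: each pairs inputs with an expected
# result decided from the specification *before* execution starts.
test_cases = [
    # (amount, supervisor_approved, expected_result)
    (25.00, True, "refunded"),
    (25.00, False, "refused"),
    (0.01, False, "refused"),
]

for amount, approved, expected in test_cases:
    actual = process_refund(amount, approved)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: refund({amount}, approved={approved}) -> {actual}")
```

Because the expected results were fixed in advance, a wrong outcome cannot slip past as 'looking plausible' during execution.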
These topics will be discussed in more detail in Chapter 4, when test case design techniques are presented. The main points of this activity are as follows:
• Reviewing requirements, architecture, design, interfaces and other parts, which collectively comprise the test basis.
• Analyzing test items, the specification, behavior and structure to identify test conditions and test data required.
• Designing the tests.
• Determining whether the requirements and the system are testable.
• Detailing what the test environment should look like, and whether there are any infrastructure and tools required.
2.3 Test implementation and execution
The test implementation and execution activity involves running tests, and this will include where necessary any set-up/tear-down activities for the testing. It will also involve checking the test environment before testing begins. Test execution is the most visible part of testing, but it is not possible without other parts of the fundamental test process. It is not just about running tests. As we have already mentioned, the most important tests need to be run first. How do we know what are the most important tests to run? This is determined during the planning stages, and refined as part of test design.
As tests are run, their outcome needs to be logged, and a comparison made between expected results and actual results. Whenever there is a discrepancy between the expected and actual results, this needs to be investigated. If necessary a test incident should be raised. Each incident requires investigation, although corrective action will not be necessary in every case. Test incidents will be discussed in Chapter 5.
When anything changes (software, data, installation procedures, user documentation, etc.), we need to do two kinds of testing on the software. First of all, tests should be run to make sure that the problem has been fixed. We also need to make sure that the changes have not broken the software elsewhere. These two types are usually called confirmation testing and regression testing, respectively. In confirmation testing we are looking in fine detail at the changed area of functionality, whereas regression testing should cover all the main functions to ensure that no unintended changes have occurred. On a financial system, we should include end of day/end of month/end of year processing, for example, in a regression test pack.
Test implementation and execution is where the most visible test activities are undertaken, and usually have the following parts:
• Developing and prioritizing test cases, creating test data, writing test procedures and, optionally, preparing test harnesses and writing automated test scripts.
• Collecting test cases into test suites, where tests can be run one after another for efficiency.
• Checking the test environment set-up is correct.
• Running test cases in the determined order. This can be manually or using test execution tools.
• Keeping a log of testing activities, including the outcome (pass/fail) and the versions of software, data, tools and testware (scripts, etc.).
• Comparing actual results with expected results.
• Reporting discrepancies as incidents with as much information as possible, including if possible causal analysis (code defect, incorrect test specification, test data error or test execution error).
• Where necessary, repeating test activities when changes have been made following incidents raised. This includes re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and execution of previously passed tests to check that defects have not been introduced (regression testing).
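The logging, comparison and incident-reporting bullets above can be combined into a minimal execution loop. The structure, field names and incident format here are assumptions for illustration, not a prescribed layout:

```python
import datetime

def run_suite(test_cases, software_version: str, testware_version: str):
    """Run each case, log its outcome together with version information,
    and raise an incident record for every discrepancy between expected
    and actual results."""
    log, incidents = [], []
    for name, test_fn, expected in test_cases:
        actual = test_fn()
        outcome = "pass" if actual == expected else "fail"
        log.append({
            "test": name,
            "outcome": outcome,
            "software": software_version,
            "testware": testware_version,
            "when": datetime.datetime.now().isoformat(timespec="seconds"),
        })
        if outcome == "fail":
            # Record as much information as possible for investigation.
            incidents.append({"test": name, "expected": expected,
                              "actual": actual})
    return log, incidents

# Two trivial cases: one passes, one exposes a discrepancy.
cases = [
    ("addition", lambda: 2 + 2, 4),
    ("rounding", lambda: round(2.675, 2), 2.68),  # fails: binary float gives 2.67
]
log, incidents = run_suite(cases, software_version="1.4.2",
                           testware_version="0.9")
print(f"{len(log)} tests run, {len(incidents)} incident(s) raised")
```

Each incident then needs investigation before any corrective action is decided, as discussed in Chapter 5.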
2.4 Evaluating exit criteria and reporting
Remember that exit criteria were defined during test planning, before test execution started. At the end of test execution, the test manager checks whether these have been met. If the criterion was 85 per cent statement coverage (i.e. 85 per cent of all executable statements have been executed; see Chapter 4 for more detail), and the figure achieved is 75 per cent, there are two possible actions: change the exit criteria, or run more tests. It is possible that even if the preset criteria were met, more tests would be required. A test summary should also be written for stakeholders, saying what was planned, what was achieved, and highlighting any differences and, in particular, anything that was not tested.
The fourth stage of the fundamental test process, evaluating exit criteria, comprises the following:
• Checking whether the previously determined exit criteria have been met.
• Determining if more tests are needed or if the specified exit criteria need amending.
• Writing up the result of the testing activities for the business sponsors and other stakeholders.
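The statement-coverage example in the text can be expressed as a simple exit-criteria check (the statement counts below are invented for illustration):

```python
def coverage_met(executed_statements: int, total_statements: int,
                 required_coverage: float) -> bool:
    """Exit-criteria check: has the required statement coverage been reached?"""
    return executed_statements / total_statements >= required_coverage

# From the example: 85% required, but execution achieved only 75%.
total_statements = 1000
executed_statements = 750

if not coverage_met(executed_statements, total_statements,
                    required_coverage=0.85):
    achieved = executed_statements / total_statements
    print(f"Coverage {achieved:.0%} is below the 85% target: "
          "either run more tests or formally amend the exit criteria")
```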
2.5 Test closure activities
Testing at this stage has finished. Test closure activities concentrate on making sure that everything is tidied away, reports written, defects closed, and those defects deferred for another phase clearly seen to be as such.
At the end of testing, the test closure stage is composed of the following:
• Ensuring that the documentation is in order; what has been delivered is defined (it may be more or less than originally planned), closing incidents and raising changes for future deliveries, documenting that the system has been accepted.
• Closing down and archiving the test environment, test infrastructure and testware used.
• Passing over testware to the maintenance team.
• Writing down the lessons learned from this testing project for the future, and incorporating those lessons to improve the testing process.
3. THE PSYCHOLOGY OF TESTING
One last topic that we need to address before we move onto the more detailed coverage of topics in the following chapters is the basic psychology behind testing.
A variety of different people may be involved in the total testing effort, and they may be drawn from a broad set of backgrounds. Some will be developers, some professional testers, and some will be specialists, such as those with performance testing skills, whilst others may be users drafted in to assist with acceptance testing. Whoever is involved in testing needs at least some understanding of the skills and techniques of testing to make an effective contribution to the overall testing effort.
Testing can be more effective if it is not undertaken by the individual(s) who wrote the code, for the simple reason that the creator of anything (whether it is software or a work of art) has a special relationship with the created object. The nature of that relationship is such that flaws in the created object are rendered invisible to the creator. For that reason it is important that someone other than the creator should test the object. Of course we do want the developer who builds a component or system to debug it, and even to attempt to test it, but we accept that testing done by that individual cannot be assumed to be complete. Developers can test their own code, but it requires a mindset change, from that of a developer (to prove it works) to that of a tester (trying to show that it does not work). If there are separate individuals involved, there are no potential conflicts of interest. We therefore aim to have the software tested by someone who was not involved in the creation of the software; this approach is called test independence. Below are people who could test software, listed in order of increasing independence:
• Those who wrote the code.
• Members of the same development team.
• Members of a different group (independent test team).
• Members of a different company (a testing consultancy/outsourced).
Of course independence comes at a price; it is much more expensive to use a testing consultancy than to test a program oneself.
Testers and developers think in different ways. However, although we know that testers should be involved from the beginning, it is not always good to get testers involved in code execution at an early stage; there are advantages and disadvantages. Getting developers to test their own code has advantages (as soon as problems are discovered, they can be fixed, without the need for extensive error logs), but also difficulties (it is hard to find your own mistakes). People and projects have objectives, and we all modify actions to blend in with the goals. If a developer has a goal of producing acceptable software by certain dates, then any testing is aimed towards that goal.
If a defect is found in software, the software author may see this as criticism. Testers need to use tact and diplomacy when raising defect reports. Defect reports need to be raised against the software, not against the individual who made the mistake. The mistake may be in the code written, or in one of the documents upon which the code is based (requirement documents or system specification). When we raise defects in a constructive way, bad feeling can be avoided.
We all need to focus on good communication, and work on team building. Testers and developers are not opposed, but working together, with the joint target of better quality systems. Communication needs to be objective, and expressed in impersonal ways:
• The aim is to work together rather than be confrontational. Keep the focus on delivering a quality product.
• Results should be presented in a non-personal way. The work-product may be wrong, so say this in a non-personal way.
• Attempt to understand how others feel; it is possible to discuss problems and still leave all parties feeling positive.

