Software Testing Life Cycle:
It is part of the SDLC (Software Development Life Cycle). It refers to a comprehensive set of testing-related activities, specifying the details of each activity along with the best time to perform it.
Quality and testing go hand in hand; each reinforces the other. Quality is notoriously hard to define, but if a system meets its users' requirements, that constitutes a good starting point. Testing is performed to ensure the quality of a product, and quality is only demonstrated once a product has been tested or verified. In the examples of software system failures quoted in the previous 'Introduction' post, we found that each system had an obvious functional weakness which caused it to fail and led to various disasters.
One role for testing is to ensure that key functional and non-functional requirements are examined before the system enters service and any defects are reported to the development team for rectification. Testing cannot directly remove defects, nor can it directly enhance quality. By reporting defects it makes their removal possible and so contributes to the enhanced quality of the system. In addition, the systematic coverage of a software product in testing allows at least some aspects of the quality of the software to be measured.
However, testing alone never guarantees that a product will meet its quality goals. Of course, testing is performed to make sure the product meets certain quality guidelines, but quality itself is a very broad term. For some end users, quality means how user-friendly the software is; for others, it may be how quickly it loads. There can be any number of such parameters, which means testing for quality is a never-ending activity. There is always scope for improvement in quality, and there are always undiscovered areas to test; as the basic testing principle says, exhaustive testing is not possible.
Hence, testing is one component in the overall quality assurance activity that seeks to ensure that systems enter service without defects that can lead to serious failures. There is no rule of thumb which says that a tested product is flawless or glitch-free.
How Much Testing is Enough?
This is the obvious question that comes to mind given such ambiguity in quality and testing: how much testing is enough, and how do we decide when to stop testing?
We have so far decided that we cannot test everything, even if we would wish to. We also know that every system is subject to risk of one kind or another and that there is a level of quality that is acceptable for a given system. These are the factors we will use to decide how much testing to do.

The most important aspect of achieving an acceptable result from a finite and limited amount of testing is prioritization. Do the most important tests first, so that at any time you can be certain that the tests that have been done are more important than the ones still to be done. Even if the testing activity is cut in half, it will still be true that the most important testing has been done. The most important tests will be those that test the most important aspects of the system: they will test the most important functions as defined by the users or sponsors of the system, and the most important non-functional behavior, and they will address the most significant risks.
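The prioritization idea can be sketched in a few lines of code. This is purely illustrative: the test names and the numeric importance scores below are invented for the example, and in practice the scores would come from risk analysis with the system's users or sponsors.

```python
# Illustrative only: tests ranked by an invented importance/risk score.
tests = [
    {"name": "login works",        "importance": 9},
    {"name": "report formatting",  "importance": 3},
    {"name": "payment processing", "importance": 10},
]

# Run the most important tests first, so that whenever testing stops,
# everything already executed outranks everything still pending.
run_order = sorted(tests, key=lambda t: t["importance"], reverse=True)
print([t["name"] for t in run_order])
# ['payment processing', 'login works', 'report formatting']
```

Even if this run is cut off after the first item, the single most important test has already been executed.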
The next most important aspect is setting criteria that will give you an objective test of whether it is safe to stop testing, so that time and all the other pressures do not confuse the outcome. These criteria, usually known as completion criteria, set the standards for the testing activity by defining areas such as how much of the software is to be tested and what levels of defects can be tolerated in a delivered product.
Priorities and completion criteria provide a basis for planning, but the triangle of resources in Figure 1.2 still applies. In the end, the desired level of quality and risk may have to be compromised, but our approach ensures that we can still determine how much testing is required to achieve the agreed levels and we can still be certain that any reduction in the time or effort available for testing will not affect the balance – the most important tests will still be those that have already been done whenever we stop.
What Exactly is Testing?
So far we have recognized that testing is an activity used to reduce risk and improve quality by finding defects, which is all true. However, we need to understand a little more about how software testing works in practice before we can think about how to implement effective testing.
Testing and Debugging:
Debugging is the process that developers go through to identify the cause of bugs or defects in code and undertake corrections. Ideally some check of the correction is made, but this may not extend to checking that other areas of the system have not been inadvertently affected by the correction.
Testing, on the other hand, is a systematic exploration of a component or system with the main aim of finding and reporting defects. Testing does not include correction of defects – these are passed onto the developer to correct. Testing does, however, ensure that changes and corrections are checked for their effect on other parts of the component or system.
Effective debugging is essential before testing begins to raise the level of quality of the component or system to a level that is worth testing, i.e. a level that is sufficiently robust to enable rigorous testing to be performed. Debugging does not give confidence that the component or system meets its requirements completely. Testing makes a rigorous examination of the behavior of a component or system and reports all defects found for the development team to correct. Testing then repeats enough tests to ensure that defect corrections have been effective. So both are needed to achieve a quality result.
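The division of labour can be seen in a minimal example. The function and its defect below are invented for illustration: the test's job is to exercise the code and report the failure; locating the cause and correcting it is the developer's debugging task; and the test is then re-run to confirm the correction.

```python
# A deliberately buggy function (hypothetical example).
def discount(price, percent):
    return price - price * percent  # defect: percent is 0-100, not 0-1

# Testing: systematically exercise the code and report defects found.
def test_discount():
    assert discount(200, 10) == 180, "10% off 200 should be 180"

try:
    test_discount()
except AssertionError as e:
    print("Defect reported to developers:", e)

# Debugging (the developer's job): identify the cause and correct it...
def discount(price, percent):
    return price - price * percent / 100

# ...then testing re-runs the same check to confirm the correction worked.
test_discount()
print("Correction verified")
```

Note that the test never changed the code itself; it only made the defect visible and then confirmed the fix.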
Static Testing and Dynamic Testing:
Static testing is the term used for testing where the code is not exercised. This may sound strange, but remember that failures often begin with a human error, namely a mistake in a document such as a specification. We need to test these because errors are much cheaper to fix than defects or failures (as you will see). That is why testing should start as early as possible, another basic principle explained in more detail later in this chapter. Static testing involves techniques such as
->Requirement Reviews
->Design Reviews
->Code Walkthroughs
->Code Inspections
which can be effective in preventing defects, e.g. by removing ambiguities and errors from specification documents; this is a topic in its own right.
Dynamic testing is the kind that exercises the program under test with some test data, so we speak of test execution in this context.
Dynamic testing involves levels of testing such as:
->Unit Testing
->Integration Testing
->System Testing
->Alpha Testing
->User Acceptance Testing (UAT)
->Installation Testing
->Beta Testing
The discipline of software testing encompasses both static and dynamic testing.
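At the lowest of those levels, dynamic testing simply means executing the code with chosen test data and checking the observed behavior. A minimal sketch using Python's built-in `unittest` module follows; the conversion function under test is invented for the example.

```python
import unittest

# Hypothetical unit under test.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

class TestConversion(unittest.TestCase):
    # Dynamic testing: the code is actually executed with test data,
    # unlike a review or inspection of the source text.
    def test_freezing_point(self):
        self.assertEqual(fahrenheit_to_celsius(32), 0)

    def test_boiling_point(self):
        self.assertEqual(fahrenheit_to_celsius(212), 100)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run.
    unittest.main(exit=False, verbosity=2)
```

A static review of the same function, by contrast, would examine the source (or its specification) without ever running it.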
Testing as a process:
We have already seen that there is much more to testing than test execution. Before test execution there is some preparatory work to do to design the tests and set them up; after test execution there is some work needed to record the results and check whether the tests are complete. Even more important than this is deciding what we are trying to achieve with the testing and setting clear objectives for each test. A test designed to give confidence that a program functions according to its specification, for example, will be quite different from one designed to find as many defects as possible. We define a test process to ensure that we do not miss critical steps and that we do things in the right order. We will return to this important topic later, where we explain the fundamental test process in detail.
Testing as a set of techniques:
The final challenge is to ensure that the testing we do is effective testing. It might seem paradoxical, but a good test is one that finds a defect if there is one present. A test that finds no defect has consumed resources but added no value; a test that finds a defect has created an opportunity to improve the quality of the product. How do we design tests that find defects? We actually do two things to maximize the effectiveness of the tests. First we use well-proven test design techniques, and a selection of the most important of these is explained in detail in Chapter 4. The techniques are all based on certain testing principles that have been discovered and documented over the years, and these principles are the second mechanism we use to ensure that tests are effective. Even when we cannot apply rigorous test design for some reason (such as time pressures) we can still apply the general principles to guide our testing.
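One such well-proven technique is boundary value analysis, which deliberately probes the edges of an input range, where off-by-one defects tend to cluster. A minimal sketch follows; the validation rule (marks from 0 to 100) and the function are invented for the example.

```python
# Hypothetical rule: valid exam marks are integers from 0 to 100 inclusive.
def is_valid_mark(mark):
    return 0 <= mark <= 100

# Boundary value analysis: test at and just beyond each boundary,
# because defects concentrate at the edges of valid ranges.
boundary_cases = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}

for mark, expected in boundary_cases.items():
    assert is_valid_mark(mark) == expected, f"failed at boundary {mark}"
print("All boundary checks passed")
```

Six targeted cases here give far more confidence than six marks picked at random from the middle of the range, which is exactly the kind of effectiveness the design techniques aim for.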
