## Test principles
There are seven principles of testing.
1. Testing shows the presence of defects:
Testing can show that defects are present, but cannot prove that there are no defects.
Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
2. Exhaustive testing is impossible:
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases.
Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
3. Early testing:
Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
4. Defect clustering:
One phenomenon that many testers have observed is that defects tend to cluster. This can happen because an area of the code is particularly complex and tricky, or because changing software and other products tends to cause knock-on defects.
Testers will often use this information when making their risk assessment for planning the tests.
5. Pesticide paradox:
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs.
To overcome this 'pesticide paradox', the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
6. Testing is context dependent:
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7. Absence-of-errors fallacy:
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
## Test levels
In the waterfall model, testing tends to happen towards the end of the project life cycle, so defects are detected close to the live implementation date, when it is difficult to go back and fix them. The V-model was developed to address this problem with the waterfall approach.
Within the V-model, testing activities take place both during the early stages, e.g. reviewing the user requirements, and late in the life cycle, e.g. during user acceptance testing.
A common type of V-model uses four test levels:
* Component testing (unit testing): Verify the function of software components (e.g. modules, programs, objects, classes etc.) that are separately testable
* Integration testing: Test interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware) and interfaces between systems
* System testing: Concerned with the behavior of the whole system/product. The main focus of system testing is verification against specified requirements
* Acceptance testing: Validation testing with respect to user needs, requirements, and business processes to determine whether or not to accept the system.

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product.
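To make the lowest level concrete, here is a minimal sketch of a component (unit) test using Python's `unittest` framework; the `add_interest` function is a hypothetical component invented for illustration:

```python
import unittest

def add_interest(balance, rate_percent):
    """Hypothetical component under test: apply an interest rate to a balance."""
    return round(balance * (1 + rate_percent / 100), 2)

class AddInterestTest(unittest.TestCase):
    # The component is verified on its own, separately from the rest of the system.
    def test_three_percent_rate(self):
        self.assertEqual(add_interest(100.0, 3), 103.0)

    def test_zero_balance(self):
        self.assertEqual(add_interest(0.0, 7), 0.0)

unittest.main(argv=["component-test"], exit=False)
```

At higher levels the same function would instead be exercised through its interfaces to other components (integration testing) or through the whole running system (system and acceptance testing).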
## Testing Technique
There are many different types of software testing techniques. Each individual technique is good at finding particular types of defect and relatively poor at finding other types.
Each testing technique falls into one of a number of different categories, but there are two main categories: static and dynamic testing.
### - Static testing
Static testing techniques do not execute the code; they can be applied to any form of document, including source code, design documents and models, functional specifications and requirement specifications.
Static testing starts early in the life cycle, generally before any tests are executed on the software, which is why it is also called a non-execution technique.
By detecting defects at an early stage, while they exist only in documents, it takes less effort to fix them and prevents failures at later stages (e.g. the acceptance testing stage).
### - Dynamic testing
With dynamic testing methods, software is executed using a set of input values and its output is then examined and compared to what is expected. Dynamic testing is applied as a technique to detect defects and to determine quality attributes of the code.
Dynamic techniques are divided into the following categories:
1. Specification-based testing (black-box): you have no knowledge of how the system or component works inside the box; you know only its inputs and outputs.
2. Structure-based testing (white-box): you use knowledge of the internal structure of the software to derive test cases.
3. Experience-based testing: it is based on the testers' experience of the technology, the business domain and similar systems.
There is also grey-box testing: having some knowledge of the internal structure (but not in detail) to design test cases and test the application from the outside.
#### _+ Specification-based testing technique:_
1. Equivalence partitioning: You divide a set of test conditions into groups or sets that can be considered the same (the system handles them equivalently). We need to test only one condition from each partition: if one value in a partition does not work, we assume none of the values in that partition works. You may still try more than one value from a partition.
Ex: Test software that calculates the interest due, identifying the ranges of balance values that earn different rates of interest: $0-$100: 3% interest rate, over $100 and below $1000: 5% interest rate, $1000 and over: 7% interest rate.
We will have valid partitions ($0.00-$100.00, $100.01-$999.99, $1000.00 and above) and an invalid partition (negative balances).
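A sketch of these partitions in Python, with one representative value per partition (the function name `interest_rate` and the exact handling of values between the stated boundaries are assumptions for illustration):

```python
def interest_rate(balance):
    """Return the interest rate (%) for a balance, per the partitions above.
    Boundary handling is assumed: up to $100.00 earns 3%, below $1000.00
    earns 5%, and $1000.00 or more earns 7%."""
    if balance < 0:
        raise ValueError("invalid balance")  # invalid partition
    if balance <= 100.00:
        return 3
    if balance < 1000.00:
        return 5
    return 7

# One representative value from each valid partition is enough:
print(interest_rate(50.00))    # 3
print(interest_rate(500.00))   # 5
print(interest_rate(2000.00))  # 7
```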
2. Boundary value analysis: it is based on testing at the boundaries between partitions.

Ex: The same as the previous example. The boundaries will be: -$0.01, $0.00, $100.00, $100.01, $999.99, $1000.00.
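These boundary values can be checked directly; a sketch, using an assumed rate function consistent with the partitions in the example (returning `None` for an invalid balance):

```python
def interest_rate(balance):
    # Assumed boundary handling, matching the partitions of the example
    if balance < 0:
        return None  # invalid balance
    if balance <= 100.00:
        return 3
    if balance < 1000.00:
        return 5
    return 7

# Test exactly at each boundary value listed above:
cases = [(-0.01, None), (0.00, 3), (100.00, 3), (100.01, 5), (999.99, 5), (1000.00, 7)]
for balance, expected in cases:
    assert interest_rate(balance) == expected, balance
print("all boundary values give the expected rate")
```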
3. Decision tables: Combine the input conditions and identify the outcomes.
Ex: A bank application with the requirement below: If you are a new customer opening a credit card account, you will get a 15% discount on all your purchases today. If you are an existing customer and you hold a loyalty card, you get a 10% discount.
We can define test cases with a table like this (X marks columns where no discount rule applies):

| Condition | Rule 1 | Rule 2 | Rule 3 | Rule 4 |
| --- | --- | --- | --- | --- |
| New customer | T | T | F | F |
| Loyalty card | T | F | T | F |
| Discount % | X | 15 | 10 | X |

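The decision table maps directly to code; a sketch in Python (treating the X columns, where no discount rule applies, as 0% - an assumption for illustration):

```python
def discount_percent(new_customer, loyalty_card):
    """Discount rules from the decision table above."""
    if new_customer and not loyalty_card:
        return 15   # Rule 2: new customer, no loyalty card
    if not new_customer and loyalty_card:
        return 10   # Rule 3: existing customer with a loyalty card
    return 0        # X columns (Rules 1 and 4): assumed no discount

# One test case per column of the table:
for new, loyal in [(True, True), (True, False), (False, True), (False, False)]:
    print(new, loyal, discount_percent(new, loyal))
```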
4. Use case testing: Use cases describe the process flows through a system based on its most likely use. This makes the test cases derived from use cases particularly good for finding defects in the real-world use of the system (i.e. the defects that the users are most likely to come across when first using the system).
#### _+ Structure-based testing technique:_
1. Statement coverage and statement testing:
__Statement Coverage = (Number of statements exercised / Total number of statements) X 100%__
Ex:

    1 READ A
    2 READ B
    3 C = A + 2*B
    4 IF C > 50 THEN
    5 PRINT "Large C"
    6 ENDIF

We have two READ statements, one assignment statement, and then one IF statement on three lines. Analyze the statement coverage of the following test cases:
* Test 1: A = 2, B = 3
* Test 2: A = 0, B = 25
* Test 3: A = 20, B = 25

In Test 1, the value of C will be 8, so we will cover the statements on lines 1 to 4 and line 6: five of the six statements, giving 83% statement coverage.
In Test 2, the value of C will be 50, so we will cover exactly the same statements as Test 1; even with both Test 1 and Test 2 we still have only 83% statement coverage.
In Test 3, the value of C will be 70, so we will print 'Large C' and we will have exercised all six of the statements: statement coverage is now 100%.
Test 3 is more effective than the first two tests together, because it reaches the goal of 100% statement coverage with only one test case.
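The statement-coverage arithmetic can be sketched in Python; this model records which of the six statement lines each test exercises (the hand-written instrumentation is purely for illustration; real tools such as coverage.py record executed lines automatically):

```python
def large_c(a, b):
    """Model of the example: C = A + 2*B, print 'Large C' when C > 50.
    Returns the set of 1-based statement lines exercised by this run."""
    lines = {1, 2, 3, 4}      # READ A, READ B, the assignment, the IF condition
    c = a + 2 * b
    if c > 50:
        lines.add(5)          # PRINT "Large C"
    lines.add(6)              # ENDIF always executes
    return lines

TOTAL_STATEMENTS = 6
covered = large_c(2, 3) | large_c(0, 25)             # Tests 1 and 2: C = 8 and C = 50
print(round(100 * len(covered) / TOTAL_STATEMENTS))  # 83 - line 5 is never reached
covered |= large_c(20, 25)                           # Test 3: C = 70 reaches line 5
print(round(100 * len(covered) / TOTAL_STATEMENTS))  # 100
```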
2. Decision coverage and decision testing:
__Decision Coverage = (Number of decisions exercised / Total number of decisions) X 100%__
A decision is an IF statement, a loop control statement (e.g. DO-WHILE or REPEAT-UNTIL), or a CASE statement, where there are two or more possible exits or outcomes from the statement. Decision coverage is stronger than statement coverage: 100% decision coverage always guarantees 100% statement coverage.
Ex:

    1 READ A
    2 READ B
    3 C = A - 2*B
    4 IF C < 0 THEN
    5 PRINT "C negative"
    6 ENDIF

With Test 1 (A = 20, B = 15), the value of C will be -10, so we will print "C negative" and we have 100% statement coverage. But we have only covered the True outcome of the IF statement; we have not checked the False outcome, so we have to add another test:
* Test 1: A = 20, B = 15
* Test 2: A = 10, B = 2
These two tests cover both of the decision outcomes, True and False, giving 100% decision coverage.
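The same style of sketch shows why the second test is needed for decision coverage: each run records which outcome of the IF it exercises (again an assumed, hand-instrumented model of the pseudocode):

```python
def c_negative(a, b):
    """Model of the example: C = A - 2*B, print 'C negative' when C < 0.
    Returns True or False, the outcome taken by the single IF decision."""
    c = a - 2 * b
    return c < 0

outcomes = {c_negative(20, 15)}        # Test 1: C = -10, True outcome only
print(round(100 * len(outcomes) / 2))  # 50 - one of the two outcomes exercised
outcomes.add(c_negative(10, 2))        # Test 2: C = 6, adds the False outcome
print(round(100 * len(outcomes) / 2))  # 100 - full decision coverage
```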
#### _+ Experience-based testing technique:_
1. Error guessing:
Error guessing is a technique that is used as a complement to other, more formal techniques. There are no rules for error guessing; the tester should think of situations in which the software may work incorrectly. Typical conditions include division by zero, blank input, empty files and the wrong kind of data (e.g. alphabetic characters where numeric characters are required).
2. Exploratory testing:
Exploratory testing is an approach in which the test design and test execution activities are performed in parallel without formally documenting the test conditions, test cases or test scripts. It is most useful when there are no or only poor specifications and when time is severely limited. It can also serve to complement other, more formal testing, helping to establish greater confidence in the software.
> _Source: Foundations of Software Testing - Dorothy Graham_