Technicians - Part II
In the previous post (here), I briefly discussed whether a Software Tester is a Technician and whether Software Engineering has any principles.
In this post, I'd like to ramble on and propose what I think can be a fundamental approach to Software Testing.
Before I give the details of the approach, let us look at how the scientific method works.
Any science/art or field of Engineering depends on Theoretical principles for its practice.
Any Theory is effective if practical evidence supports the outcomes or predictions the theory suggests.
If the predictions are incorrect, either the predictions are modified in line with the Theory and re-verified against practical outcomes, OR the Theory is rejected (or sometimes modified to accommodate a broader set of results). The image below illustrates the scientific method.
Now, apply this to Software Testing.
'Theory' would be analogous to the business context for which a Software Solution is being suggested.
Through background research, one can create a list of predictions about how the software should behave.
Predictions would be the expected outcomes or oracles, as some people call them.
The actual 'Testing' of the predictions (or oracles) involves evaluating the implementation of the software solution by putting it to use (analogous to conducting experiments and collecting practical data), which gives you the actual outcomes.
Once testing is done, the actual outcomes are analysed, and the results are communicated (Test Reporting).
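To make this mapping concrete, here is a minimal sketch using Python and pytest. The calculate_discount function and its "10% off orders over 100" rule are invented purely for illustration; they do not come from any real system.

```python
import pytest

# Hypothetical implementation under test. The (invented) business
# context says: "orders over 100 receive a 10% discount".
def calculate_discount(order_total: float) -> float:
    return order_total * 0.9 if order_total > 100 else order_total


def test_discount_applies_over_threshold():
    # Prediction (oracle), derived from the business context.
    expected = 180.0
    # The 'experiment': exercise the software and observe the actual outcome.
    actual = calculate_discount(200.0)
    # Analysis: compare the actual outcome to the prediction.
    assert actual == pytest.approx(expected)
```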
As the image below shows, the possibility of error is high at the later stages of an experiment, so one has to be especially careful during those stages.
The risk of making an error is high while conducting the experiment, analysing the results and communicating the results.
This can also be analogised to Software Testing. The reverse may be true for Developing a Software Solution, as errors are more likely to creep in during the initial stages of development (understanding the business context and implementing the design).
Evaluating the Software, analysing the results and communicating them must therefore be done very carefully to reduce the risk of errors.
Now, if a Test 'fails', there are three possibilities that I can think of off the top of my head (a small sketch after the list illustrates the first and third):
1. The expected outcomes are incorrect or have been misinterpreted, OR
2. The business context itself has been misunderstood, OR
3. The implementation was performed incorrectly, so the expected outcome, and thus the prediction, was not satisfied.
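The first and third possibilities can look identical at the moment of failure. As a hypothetical illustration (round_price below is invented for this post), Python's built-in round() performs banker's rounding, so a tester whose oracle assumes round-half-up sees a failure that could be blamed on either the oracle or the implementation:

```python
def round_price(value: float) -> int:
    # Implementation under test: Python's built-in round() uses
    # banker's rounding (round-half-to-even).
    return round(value)


def test_round_price():
    # Oracle written assuming round-half-up, as a casual reading of the
    # requirements might suggest. round(2.5) is actually 2 in Python,
    # so this test fails, and the investigation must decide whether the
    # oracle (possibility 1) or the implementation (possibility 3) is wrong.
    assert round_price(2.5) == 3
```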
If a Test 'passes', there can be the following possibilities (there can be more; these are the ones I could think of, and a sketch of the first case follows the list):
1. The outcome has been interpreted as positive when it was actually negative, OR
2. The outcome has been interpreted as positive and IS actually positive, OR
3. The outcome was negative, but the person reporting it is unwilling to report a negative result for fear of becoming the centre of criticism.
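The first possibility often stems from an oracle that is too weak to detect the failure. A minimal sketch, again with an invented apply_discount function:

```python
# Hypothetical buggy implementation: the 10% discount is never applied.
def apply_discount(order_total: float) -> float:
    return order_total


def test_discount_smoke():
    result = apply_discount(200.0)
    # Weak oracle: it only checks that *some* positive number came back,
    # so the missing discount goes undetected and the test 'passes'
    # even though the actual outcome is negative.
    assert result > 0
```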
Now, does the job of the Tester end after reporting results?
I think not, because Testing is an activity of learning about the product through Tests. After one cycle of tests, the Tester is more knowledgeable about the software solution than before; the Tester's perspective on the software has changed.
This perspective needs to be put to use in further Testing, but the Tester should not be biased by the outcomes of earlier tests. The Tester should learn the product so thoroughly that this knowledge helps the Developers create and modify it for efficient use by end users.
Now, I may not be fully lucid in explaining how Testing should be done, because Testing is an experiential field: one can only learn it through experience. For me, most of the Testing experience cannot be explained in words; I have put as many words to my thoughts as possible.
IMAGES COURTESY: INTERNET