Carrying out precise and correct software testing follows numerous principles. The International Software Testing Qualifications Board (ISTQB) distinguishes seven fundamental ones, which we are going to discuss today. Curious to find out? Read on for the key ISTQB testing principles!
ISTQB testing principles – table of contents:
- Testing reveals defects but cannot prove their absence
- Thorough testing is impossible
- Early testing saves time and money
- Malfunction snowball effect
- Pesticide paradox
- It depends on the context
- Advertising flawless software is a no-go
Testing reveals defects but cannot prove their absence
Testing increases the probability of finding defects, which in turn improves the chances of fixing them. However, it cannot guarantee that software is free of all defects, even if the vast majority get spotted and fixed. Because flawless software cannot be created, many consider the process negative by design: you will never get a fully positive result and will always find some “dirt” in the programs.
Thorough testing is impossible
This rule of thumb states that detecting every malfunction in a piece of software is infeasible. The exception is simple, short programs, where it is possible to exercise all combinations of inputs and preconditions and so test them completely. When evaluating sophisticated software, however, not even the best automation can execute all the necessary checks, let alone manual testers. Automated tests will run through apps more efficiently and accurately, but they still cannot guarantee flawless performance. Instead, you have to rely on additional activities such as prioritization, risk analysis, and selecting and running suitable testing techniques.
Early testing saves time and money
Many professionals also call this principle “shifting left.” The sooner you spot defects, the easier you can fix them, hence static and dynamic testing should begin as soon as possible. In a nutshell:
- Static testing – assessing the product without running the code.
- Dynamic testing – evaluating the code of a module or system while it is running.
Detecting defects in the first phases of implementation simplifies further diagnosis. Once two areas of software interact, fixing a defect becomes troublesome because it is hard to pinpoint which one actually contains the error. In such cases it takes extra time, effort and manpower to tackle. All in all, it is the rapid response to surfacing obstacles that prevents cracks from multiplying.
Malfunction snowball effect
Most glitches tend to cluster in a few critical modules, so examining those modules in depth reveals, and lets you eliminate, the majority of them. These clusters become the main focus of risk analysis, which maps out and prioritizes further actions. Many flaws surface only after following the paths that users actually take, but even then, knowing where defects cluster does not by itself make the remaining modules impeccable.
The Pareto principle says that 80% of results originate from only 20% of causes. In other words, 80% of bugs live in 20% of modules. If you encounter numerous malfunctions in a module, keep digging; more are likely to be there.
Pesticide paradox
Running the same tests over and over eventually stops revealing new defects, and if they were designed incorrectly in the first place, they will never prove effective at all. You have to amend and upgrade your tests to increase the chance of finding new faults in the software.
Simply writing a brand-new set of checks won't do the trick either: if it exercises the same combinations as before, the assessment stalls at the same level. The principle is called the "pesticide paradox" because pesticides used against pests likewise lose effectiveness after a given amount of use.
It depends on the context
How testing is executed depends on what is being examined; thus, testing an accounting program, a video game, and a social networking application varies substantially. It also depends on the goal: an analysis focusing on an app's usability (its attractiveness to users, ease of use, visual layer, etc.) differs from evaluations aimed at the functional attributes of the program, e.g. performing correct calculations.
Advertising flawless software is a no-go
Applying various types of diagnostic tools cannot guarantee a flawless app. Many who claim and advertise their apps as such are wrong; most likely the claim is made purely for marketing purposes. You can execute multiple manual and automated tests to increase the probability of uncovering and fixing as many errors as possible, but there is still no guarantee of perfect performance. In some cases the obstacles concern the use of the software itself, e.g. the program may not meet all user expectations.
ISTQB testing principles – summary
These are the seven testing principles that ISTQB, at the foundation level, expects a software tester to follow. Above all, they point out that a full software diagnosis is infeasible, which is why it is crucial, among other things, to keep modifying tests and to search the key modules thoroughly. These actions help find and clear the majority of defects, decreasing the likelihood of failures in the future.