We all know that testing is crucial to business success. Inefficient testing leads to late products; ineffective testing leads to products shipped with bugs. From a business perspective, it ultimately comes down to balancing speed against risk. It is useful to keep high-level testing improvements in mind, but concrete tactics are needed to implement those approaches and realize their full benefit. Before getting into tactics, a few practical considerations are worth reviewing.
- A concrete plan – Improvement is not as simple as installing a software product and expecting the benefits to appear on their own.
- Static analysis is one great way to start testing early. A software development life cycle has five main phases: requirements, design, implementation, testing, and production. Static analysis is generally performed during implementation. Developers gain insight into problems while the code is still fresh in their minds, so fixes take less time, effort, and expense. The result is a more robust delivery to the QA and security teams, which means fewer problems and shorter testing cycles. Static analysis can uncover a broad range of defects, including concurrency problems and security vulnerabilities such as buffer overruns, injections, and mishandling of sensitive data.
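To make the idea concrete, here is a minimal sketch of how a static analyzer inspects code without running it. It walks a Python syntax tree and flags calls to functions on a small, hypothetical "risky" list (`eval`, `exec`); real analyzers apply far richer rule sets, but the shape is the same.

```python
import ast

# Hypothetical rule set for this sketch; real tools ship hundreds of rules.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Parse the source and report (line, name) for calls to risky functions."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

code = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(code))  # [(1, 'eval')]
```

Because the code is only parsed, never executed, the check runs in the implementation phase, before any test environment exists.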
- Fuzzing is another great way to improve testing. It finds flaws in how the code handles network and file input in specific protocols and formats, and it generates test cases automatically rather than requiring hand-written ones. Once the code is running, the fuzzer automatically and intelligently feeds in malformed data to flush out critical vulnerabilities. Its results complement those of static analysis, further improving the robustness of the product with minimal overhead.
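The core fuzzing loop can be sketched in a few lines. The parser below handles a made-up length-prefixed format and contains a deliberate missing bounds check; the fuzzer mutates a valid seed input and counts any exception other than a graceful `ValueError` as a crash worth triaging. All names and the format itself are illustrative assumptions, not a real fuzzing framework.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: first byte is the payload length (hypothetical format)."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    # Deliberate bug: indexes the last payload byte without a bounds check,
    # so truncated input raises IndexError instead of a clean ValueError.
    _last = data[length]
    return data[1:1 + length]

def fuzz(parser, seed: bytes, rounds: int = 1000) -> int:
    """Mutate a valid seed input repeatedly; count unexpected crashes."""
    rng = random.Random(0)  # fixed seed so the run is reproducible
    crashes = 0
    for _ in range(rounds):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 3)):  # corrupt a few random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            parser(bytes(data))
        except ValueError:
            pass  # graceful rejection of malformed input: expected
        except Exception:
            crashes += 1  # anything else is a defect the fuzzer found
    return crashes

print(fuzz(parse_length_prefixed, b"\x03abc"))
```

No manual test cases were written, yet the loop reliably trips the missing bounds check whenever a mutation inflates the length byte.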
- Improvement in testing: There are plenty of opportunities to improve the test cases and test plans, starting with a clear understanding of what actually needs to be tested. Much of a project's code does not change between releases. Knowing which code has changed, which functionality those changes affect, and how the test cases exercise that functionality allows testing to be improved significantly: test plans can be prioritized, and testing gaps that need to be filled can be identified automatically.
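A minimal sketch of that change-driven selection, assuming a hypothetical mapping from each test to the source files it exercises (in practice this mapping comes from coverage data):

```python
# Map each test to the source files it exercises (hypothetical project layout).
TEST_COVERAGE = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
}

def select_tests(changed_files: set) -> list:
    """Pick only the tests whose covered files intersect the change set."""
    return sorted(
        test for test, files in TEST_COVERAGE.items()
        if files & changed_files
    )

def coverage_gaps(changed_files: set) -> set:
    """Changed files exercised by no test at all: candidate testing gaps."""
    covered = set().union(*TEST_COVERAGE.values())
    return changed_files - covered

print(select_tests({"auth.py"}))                   # ['test_login', 'test_profile']
print(coverage_gaps({"auth.py", "shipping.py"}))   # {'shipping.py'}
```

The same intersection that prioritizes tests also exposes gaps: any changed file outside every test's footprint is untested by construction.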
- A measurement strategy – This is required to know that the changes are effective, what still needs improvement, and when testing is good enough. Ideally this is done through objective measurement, which avoids blind interpretation and can guide automation. In many organizations testing is executed with different technologies, by separate teams with different objectives. The result is duplicated effort alongside testing gaps, and those gaps hide bugs until production. There are many long and short tests to be run, so scheduling them is extremely important. By identifying which tests have already been run, it is often possible to run a set of fast automated tests instead of a long, slow manual one. This yields better testing coverage with minimal wasted effort.
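The scheduling idea can be sketched as a small planner: skip tests that have already run, prefer automated tests over manual ones, and fit the shortest tests first within a time budget. The suite metadata and numbers below are invented for illustration.

```python
# Each test: (name, minutes, automated) — hypothetical suite metadata.
SUITE = [
    ("smoke_api", 2, True),
    ("full_regression", 240, False),
    ("ui_flow", 15, True),
    ("perf_soak", 480, False),
]

def schedule(already_run: set, budget_minutes: int) -> list:
    """Skip duplicates, prefer cheap automated tests, stay within the budget."""
    pending = [t for t in SUITE if t[0] not in already_run]
    # Automated first, then shortest first, to maximize coverage per minute.
    pending.sort(key=lambda t: (not t[2], t[1]))
    plan, used = [], 0
    for name, minutes, _automated in pending:
        if used + minutes <= budget_minutes:
            plan.append(name)
            used += minutes
    return plan

print(schedule({"smoke_api"}, budget_minutes=300))  # ['ui_flow', 'full_regression']
```

Even this naive greedy ordering shows the payoff of measurement: without the `already_run` record and the duration data, the duplicate smoke test would be rerun and the budget spent blindly.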
- Testing third-party components – Getting a handle on testing in-house code is extremely important, but products nowadays include a significant amount of open source and vendor code, and chances are that little thought has been given to how these third-party components affect the product from a testing perspective. Understanding that code can have a big impact on the robustness of the product. One common assumption is that a third-party component has already been tested; this path of least resistance rarely works out well. One option is to establish testing sign-off standards with vendors to ensure their code is well tested, but that does not work for open source components. Another is to accept only components whose source code is available and test them like the rest of the in-house code. This increases the testing burden significantly but brings those components to the same level of testing as the rest of the product; unfortunately it is not possible for every component, and it can also make the team liable for maintaining the code. Software composition analysis takes a different approach: rather than dynamically testing the product, it works from a database of components to identify what the product contains. Knowing what the software contains makes it possible to understand licensing and compliance obligations as well as known security vulnerabilities.
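At its core, software composition analysis is a lookup: match the product's component inventory against a database of known advisories. The sketch below uses a tiny hand-built inventory and advisory table (the CVE identifiers are real published advisories, included only as example data; a real tool queries a maintained vulnerability database).

```python
# Hypothetical component inventory and advisory database, for illustration.
MANIFEST = {"libjpeg": "9d", "openssl": "1.0.2", "zlib": "1.3.1"}
ADVISORIES = {
    ("openssl", "1.0.2"): ["CVE-2016-2107"],   # example advisory entries
    ("zlib", "1.2.11"): ["CVE-2018-25032"],
}

def scan(manifest: dict) -> dict:
    """Report known advisories for each (component, version) in the product."""
    return {
        name: ADVISORIES[(name, version)]
        for name, version in manifest.items()
        if (name, version) in ADVISORIES
    }

print(scan(MANIFEST))  # {'openssl': ['CVE-2016-2107']}
```

Note that nothing is executed or tested here: the value comes entirely from knowing what is inside the product, which is exactly why the inventory (the manifest) must be accurate.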
Conclusion

Most expert recommendations on testing focus on starting testing early, so that problems are found sooner and there is more time to fix them; on increasing automation, so that test failures are easier to associate with specific code changes; and on gaining better insight into what really needs to be tested, so that resources are not wasted on unnecessary and redundant testing. To achieve this, many teams have implemented unit testing and test automation, which often fall under the Agile or DevOps umbrella.