What is the software quality deficit? It is the time taken to evolve a product from initial release until it is perceived as being of good quality. Its size varies with the number of copies of the software in circulation, how thoroughly the software is used, and, more importantly, the number of iterations required to fix the bugs plus the time between those iterations. A deficit therefore exists until the desired quality is reached. This might happen within a few quick iterations, but a large deficit requires many iterations over a long period before good software quality is obtained.

Quality deficits – only a problem of time-to-market?

Thus, you might be faced with a commercial dilemma: do you test thoroughly and put the launch date of the product at risk, or do you do 'just enough' testing and hit the launch date, knowing you will have to fix problems down the road? In traditional software development methodologies, testing is conducted in the latter part of the project life cycle, usually during the QA process once all the finished components are assembled; in industry this is commonly referred to as 'integration testing'. With this practice, many development teams treat testing as an outsourced function, often moved offshore to reduce costs.

Expensive bug-fixing

However, when testing is done this late in the process, the time taken to fix any subsequent issues is usually quite lengthy, and the associated costs are extremely high. The cost ratio is generally considered to be around 5:1 for non-critical software, but as high as 100:1 for bugs in critical software systems. The situation is protracted because the original developer may have moved on to a different project; as a consequence, significant time is lost re-analyzing and understanding the original code base.
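To make those ratios concrete, here is a back-of-the-envelope sketch. Only the 5:1 and 100:1 ratios come from the text above; the per-bug cost and bug count are purely illustrative assumptions.

```python
# Illustrative calculation of late-fix costs. The dollar figure and bug
# count are assumed for the example; the 5:1 and 100:1 ratios are the
# commonly cited cost multipliers for fixing bugs after release.
COST_DURING_DEV = 500          # assumed cost to fix one bug before release
bugs_found_late = 20           # assumed number of bugs that escape to the field

cost_if_fixed_early = bugs_found_late * COST_DURING_DEV
cost_non_critical   = cost_if_fixed_early * 5    # 5:1, non-critical software
cost_critical       = cost_if_fixed_early * 100  # 100:1, critical systems

print(cost_if_fixed_early)   # 10000
print(cost_non_critical)     # 50000
print(cost_critical)         # 1000000
```

Even with modest assumed figures, the same twenty bugs cost five to one hundred times more to fix after shipping than before.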

To overcome this issue, modern incremental development processes such as Agile and Scrum, promoted by the likes of Google, have emerged as a solution to this dilemma. These methods focus on the rapid development of useful software. However, they can still miss the critical point: making sure the software application has been thoroughly tested.

Eliminate shortcomings that prevent software quality

No matter which methodology is used, time-to-market pressure may remain the central reason for shipping without thorough testing. But there is now a sea change: development teams are increasingly measured on customer satisfaction and quality metrics. The solution for cutting the quality deficit in software development is to eliminate these seven common shortcomings:

1. No clear set of requirements for the product.
2. Lack of a clearly defined API for each module with tests for all boundary conditions.
3. Not taking a common sense approach to testing at a logical functional level.
4. Missing the use of code coverage tools to ascertain testing completeness.
5. Not testing in a layered approach, drilling down until the faults are found, and using unit tests only where necessary.
6. Lack of clarity about what needs to be re-tested when a change is made.
7. Not having an environment where anyone can run any test anytime.
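Shortcoming 2 is the most directly actionable of these: every module API should ship with tests that exercise its boundary conditions, not just a typical mid-range input. As a minimal sketch, assuming a hypothetical `clamp` function as the module API under test:

```python
# Hypothetical module API: clamp(value, low, high) limits value
# to the closed interval [low, high].
def clamp(value, low, high):
    """Return value limited to the closed interval [low, high]."""
    return max(low, min(value, high))

def test_clamp_boundaries():
    # Boundary-condition tests: probe the exact edges of the API's
    # contract and one step beyond them on each side.
    assert clamp(5, 0, 10) == 5      # interior value passes through
    assert clamp(0, 0, 10) == 0      # exactly on the lower bound
    assert clamp(10, 0, 10) == 10    # exactly on the upper bound
    assert clamp(-1, 0, 10) == 0     # just below the lower bound
    assert clamp(11, 0, 10) == 10    # just above the upper bound

test_clamp_boundaries()
```

The pattern, not the function, is the point: for each parameter, test the values on, just inside, and just outside its documented limits. A test runner such as pytest will pick up any function named `test_*` automatically, which also helps with shortcoming 7 (anyone can run any test at any time).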

The solution – implement a robust software testing process

Implementing a software development process that imparts quality to every software application your organization ships will not happen overnight, nor will it be simple. But you cannot expect customer loyalty if you rely on field usage to highlight the majority of software issues. The current trend towards IoT, connected devices and ubiquitous computing, where software is present everywhere and at all times, makes this even more critical. Clearly, implementing a robust software testing process is the way to prevent software quality deficits from the very first version.

This post is the first in a series of posts addressing the above-mentioned shortcomings in detail, especially when using modern software development processes.
