Overall software quality: We’re gonna need bigger data

In our previous post on silos, we focused on code and tests, and hinted at development-centric quality indicators. True to our “big picture” approach, let’s take a step back and look at overall software quality. We’re gonna need bigger data!

What is overall software quality?

Whatever the methodology, projects grow and evolve by following a development cycle. Tasks such as specification, design, development, testing, validation, verification, and change management keep teams busy pursuing a common objective: producing a quality product.

And before we start breaking quality down into categories, let’s consider the topics that will feed our quality assessment:

  • Requirements, refined from customer to technical needs
  • Models, to design and simulate systems
  • Code, implementing specifications
  • Tests, from unit to integration, from verification to validation
  • Change requests, handling bugs and enhancement requests

We could add more topics, depending on the project or the industry, but it is already clear that evaluating software quality based on code and tests alone is not enough.
This is not a surprise (we have been hinting at it in a few posts already): overall software quality spans all of these topics, because they impact one another. An overall analysis can help track objectives and anticipate cascading issues.
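
To make this concrete, here is a minimal Python sketch of how artifacts from different silos can be linked so that a failing test cascades back to the requirement it verifies. All names and identifiers (Requirement, TestResult, REQ-42, IT-007) are hypothetical, not taken from any particular tool:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        req_id: str
        title: str
        test_ids: List[str] = field(default_factory=list)    # tests verifying this requirement
        change_ids: List[str] = field(default_factory=list)  # related change requests

    @dataclass
    class TestResult:
        test_id: str
        passed: bool

    def requirements_at_risk(requirements, results):
        """Flag requirements whose verifying tests are currently failing."""
        failing = {r.test_id for r in results if not r.passed}
        return [req for req in requirements if failing & set(req.test_ids)]

    # A failing integration test is traced back to the requirement it verifies.
    reqs = [Requirement("REQ-42", "Braking distance", test_ids=["IT-007"])]
    runs = [TestResult("IT-007", passed=False)]
    print([r.req_id for r in requirements_at_risk(reqs, runs)])  # ['REQ-42']

In a real project these links would come from requirement management, test, and change tracking tools rather than be built by hand; the point is simply that cross-silo relations are what make cascading issues visible.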

We’re gonna need bigger data

In terms of pure data volume, an overall software quality approach seems like a huge endeavor.
Are we supposed to store all data from all silos, all the time? And for all projects we want to monitor? That would be “big data” indeed.

Of course, a smart approach is needed. To save space and time, these data should be:

  • Relevant: only keep the data actually needed to compute quality indicators
  • Normalized: data should be origin-independent; storing data from tool A or tool B should make no difference if they address the same topic
  • Incremental: if a piece of data doesn’t change, there is no need to store it again
  • Safe: some data can be sensitive, or restricted to a given perimeter, and cannot be stored as is (or at all)
  • Respectful: GDPR is a reality (as discussed in the Software code of ethics posts), so stored data have to respect privacy

In the case of industrial projects, we are eventually going to store a lot of data, so we have to do it optimally and efficiently.
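
As an illustration of the properties listed above, here is a small Python sketch in which records are normalized into a source-independent shape, stripped of sensitive fields, and only stored when their content has actually changed. The topic names, the SENSITIVE_FIELDS set, and the in-memory storage are assumptions made for the example:

    import hashlib
    import json

    SENSITIVE_FIELDS = {"author_email"}  # illustrative: personal data is dropped before storage

    def normalize(topic, payload):
        """Map tool-specific output onto a source-independent record."""
        metrics = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        return {"topic": topic, "metrics": metrics}

    def fingerprint(record):
        """Stable hash used to detect whether a record changed since the last snapshot."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def store_if_changed(record, last_seen, storage):
        """Incremental storage: only persist records whose content actually changed."""
        fp = fingerprint(record)
        if last_seen.get(record["topic"]) != fp:
            storage.append(record)            # stand-in for a real database write
            last_seen[record["topic"]] = fp

    # Two identical snapshots of the "tests" topic result in a single stored record.
    last_seen, db = {}, []
    store_if_changed(normalize("tests", {"passed": 120, "failed": 3}), last_seen, db)
    store_if_changed(normalize("tests", {"passed": 120, "failed": 3}), last_seen, db)
    print(len(db))  # 1

An actual implementation would sit on top of a real database and the quality tools’ exporters; the sketch only shows the filtering, normalization, and change detection steps.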

Rating the overall rating

Overall software quality is a powerful concept, as it assists us in broad analysis and anticipation. But is it the perfect quality rating?
Well, no: it should not be the be-all and end-all of quality assessment. There are two aspects to consider:

  • It’s only one rating
    The overall rating is a global assessment of a whole software project based on data from all silos.
    It greatly depends on how you aggregate these data (see the sketch after this list).
    For example, passing a routine medical checkup means your various health levels are within acceptable ranges.
    Analyzing the same health levels looking for trends can produce a different assessment.
  • It’s not the only rating
    The overall quality rating is just the tip of the iceberg. All quality indicators (silo-specific or inter-silo) are available for a more focused analysis, and should be used for efficient monitoring.
    For example, the previous medical checkup can focus on diabetes and use blood sugar, weight or vision indicators.
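
To illustrate how much the aggregation choice matters, here is a small Python sketch (the ratings, weights, and history values are invented) that combines per-silo ratings into one overall rating, next to a trend check on a single silo that tells a different story:

    def overall_rating(silo_ratings, weights):
        """One possible aggregation: weighted average of per-silo ratings (0-100)."""
        total = sum(weights.values())
        return sum(silo_ratings[silo] * w for silo, w in weights.items()) / total

    def trend(history):
        """A simple trend indicator: difference between the last two snapshots."""
        return history[-1] - history[-2]

    ratings = {"requirements": 82, "code": 75, "tests": 68, "changes": 90}
    weights = {"requirements": 1, "code": 2, "tests": 2, "changes": 1}

    print(round(overall_rating(ratings, weights)))  # 76 -> within an acceptable range...
    print(trend([74, 71, 68]))                      # -3 -> ...yet the tests silo keeps degrading

A threshold on the overall rating would pass here, while the trend on the tests silo already calls for a closer look.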

In conclusion, overall software quality is good for eagle-eye monitoring and for handling project portfolios, but it must be coupled with finer analysis to make efficient decisions.

Software quality assessment is based on relevant indicators drawn from diverse data values, and on the relations between them. So we need “big data” in the sense of a wide scope of sources and tools, rather than sheer volume (although we will certainly have to handle thousands of requirements and tests, and millions of lines of code).

After all, that’s what we naturally do every day: making an overall assessment of a situation, informed by a finer analysis of its parts.

Share your thoughts

Do you have to monitor project quality by handling a lot of data coming from many sources? How big is this effort?

Further reading


  • How to maintain software quality? (Andreas Horn)
  • Software quality: From metrics to habits (Flavien Huynh)
  • Software quality monitoring: Real use case (Flavien Huynh)
