What is Quality?
On the surface, Quality can be an abstract concept. Like Love: everyone wants it, plans for it, goes to great lengths for it, and even once achieved, it is difficult to maintain. On one hand, folklore surrounds figures such as Steve Jobs in their legendary quest for industrial design and quality; on the other hand, everyone claims to deliver it. It can also be like Beauty, in that it is “in the eye of the beholder”. It is certainly perceptual and expectation-based.
How does one differentiate “good” quality, “industry standard” quality, “high” quality, “exceptional” quality, etc.? As software professionals, we are often asked to defend our quality; isn’t it time we had a clear answer? Quality is a journey: you have to know where you want to go, how to get there, and most importantly how much you are willing to give to get there. Taking these similes one step further: abstract concepts make for difficult comparisons. Can you say which of your children you love more? No, but you can certainly compare their non-abstract aspects. For example, you can say which one is the faster runner. If you can measure it, you can compare it. So let’s learn our measurements first, so we know where we want to be.
What is a high-quality software application?
Bringing this back to software applications, we can compare aspects of quality such as:
1. Defect density: the number of bugs per kLoC (1,000 lines of code) can be measured and compared; a sketch of this calculation appears after this list. E.g., Infoworld reports that industry-standard applications see 5-20 bugs/kLoC in production. It’s probably a multiple of that in the QA phase of the SDLC. Regardless, at least we have a test whereby we can call an application “industry standard”. You can break this down further to capture specific aspects of code quality more rigorously, e.g., security-specific bugs.
2. Performance: you can get industry-standard data, such as the Gomez lists per industry; e.g., the top mobile retail sites average between 4 and 8 seconds in response time. This gives us some absolute standards to begin the discussion.
3. Usability: surprisingly, measurements can even be made in this perception-centered area. The SUS (System Usability Scale) helps us put numbers around usability; the industry average score is 68.
4. Similarly, user experience, or look and feel, can also be measured in questionnaire form using the SUPR-Q instrument.
5. Uptime: if you are running a hosted application, uptime metrics should be monitored and compared to the industry. The published averages are just that: averages. Do you need to be average, or better, in your specific field? For example, if you are launching a new service in a competitive environment, you probably need better-than-average usability and performance to build that critical user base. By contrast, for an employee-focused app built for internal purposes, the client may not want to spend as much on something the users are required to use anyway; not that users should be forced to endure poor quality, but the standards may be less stringent.
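To make these comparisons concrete, here is a minimal Python sketch of how the defect-density, SUS, and uptime metrics above could be computed. The project figures, survey responses, and uptime target in it are hypothetical, chosen only to illustrate the arithmetic.

```python
# A minimal sketch (Python) of how the quality metrics above can be computed.
# The project figures, survey responses, and uptime target below are
# hypothetical and are used only for illustration.

def bugs_per_kloc(bug_count: int, lines_of_code: int) -> float:
    """Defect density: bugs per 1,000 lines of code."""
    return bug_count / (lines_of_code / 1000)

def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring for 10 items rated 1-5: odd-numbered items
    contribute (response - 1), even-numbered items contribute (5 - response),
    and the sum is scaled by 2.5 to give a 0-100 score."""
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

def monthly_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Convert an uptime percentage into allowed downtime per month."""
    return (1 - uptime_pct / 100) * days * 24 * 60

if __name__ == "__main__":
    # Hypothetical project: 350 bugs found across 500,000 lines of code.
    print(f"Defect density: {bugs_per_kloc(350, 500_000):.2f} bugs/kLoC")

    # Hypothetical SUS responses from one user (10 items, 1-5 scale).
    print(f"SUS score: {sus_score([4, 2, 4, 1, 5, 2, 4, 2, 5, 1]):.1f}")

    # A 99.9% uptime target expressed as a downtime budget per 30-day month.
    print(f"Downtime budget at 99.9%: {monthly_downtime_minutes(99.9):.1f} min/month")
```

Running this prints a defect density of 0.70 bugs/kLoC for the sample project, a SUS score for the sample responses (to be read against the ~68 industry average), and a downtime budget of about 43 minutes per month at 99.9% uptime: exactly the kind of hard numbers the list above asks for.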
So if you have to justify your work to a tough client or come in strong at a presentation, you can go in armed with statistics. Such metrics are essential for internal as well as external benchmarks. Internally, they give your team a target of what is expected, and once you have baselined your current quality you can incentivize continued improvement and maintenance of that quality. Externally, once you have arrived at a hard-fought level of quality, the numbers make a strong statement that your ascent was intentional, and they likely eliminate low-end competitors. At Informulate, for example, at client presentations we refer to one of our projects where we delivered an application of ~500 kLoC with fewer than 350 bugs (under 0.7 bugs/kLoC, roughly a tenth of the typical industry figure).
After Metrics, what?
Now that we know where we need to be vis-à-vis the industry, how do we get there, and what tradeoffs do we need to make in the process? Check back for the next installment, Managing For Quality.