Can you measure “quality”? (spoiler – yes you can!)
There are 3 main questions that I need to be able to answer in order to monitor a project.
- How many “thingies” can we get done each iteration? – a.k.a. Velocity
- yep, “thingies” is a technical term…the units of measure are neither man-days nor gummy-bears…they are just thingies
- How long will it take me to complete all of the “thingies”? – a.k.a. Burn-up
- Are we producing quality code and experiences? – excuse me, I once earned a trophy for “fewest bugs” on a project…how dare you ask this question!
Questions 1 and 2 have some well-known solutions, and are pretty easy to measure.
However, question number 3 is a little more difficult to measure. Can you really measure quality?
And if you aren’t able to measure quality, how can you learn from the changes you make? That is why measuring quality is one of my 3 main questions.
Let’s clarify something here first. By “measure”, I do not mean using an instrument to determine the size/length of an object (think: tape-measure)…instead, I am referring to the reduction of uncertainty based on one, or more, observations.
(If you’re like me…that definition might take a while to sink in…so I will wait here for you to read that definition a few times.)
Okay…back to the quality question from above.
While it is beneficial to know the number of bugs you create each iteration, that number alone doesn’t tell the whole story. Again, think tape-measure…all we know is that we injected 12 new bugs this iteration.
How has that reduced your uncertainty? Consider the plethora of variables:
- maybe 12 bugs is a lot for a 4-person team, but it isn’t that much for an 8-person team
- maybe 12 bugs is a lot for a 2-week iteration, but it is average for a 4-week iteration
- maybe 12 bugs is a lot for an average-skilled team, but our team of studs can crank out 12 bugs in 1 day (<– true story! ask me about it sometime)
My point is, there is still a lot of uncertainty with only capturing the number of bugs injected.
A clearer picture comes from measuring the number of bugs created versus the number of bugs resolved, over time. This begins to paint a picture of how quickly your team can respond to issues and, ultimately, answers question number 3. I have used this measurement of “bugs created” versus “bugs resolved” on multiple projects to answer the quality question.
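As a rough sketch of what this looks like in practice (the per-iteration numbers below are invented for illustration), tracking cumulative “created” versus “resolved” counts makes the trend, and the open-bug gap, easy to see:

```python
# Hypothetical per-iteration bug counts (illustrative data, not from a real project)
created_per_iteration = [12, 8, 10, 5]
resolved_per_iteration = [6, 10, 11, 7]

def cumulative(counts):
    """Running totals, so we can compare created vs. resolved trends over time."""
    total, out = 0, []
    for c in counts:
        total += c
        out.append(total)
    return out

created = cumulative(created_per_iteration)
resolved = cumulative(resolved_per_iteration)

for i, (c, r) in enumerate(zip(created, resolved), start=1):
    # The gap (c - r) is the open-bug backlog at the end of each iteration
    print(f"Iteration {i}: created={c}, resolved={r}, open={c - r}")
```

If the gap between the two running totals shrinks over time, the team is responding faster than it is injecting; if it grows, you have your answer to question number 3.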
Measuring something that matters:
On my current project, we didn’t have a responsiveness problem. In fact, the entire team values “taking the pain early” and getting issues addressed quickly.
Because of this, we came up with a different way to answer the question of, “are we producing quality code and experiences?”
We needed to measure something that mattered to our team and our project. Again, our goal of measuring something is to reduce our uncertainty based on observations. Borrowing Jeff Patton’s UX Layer Pyramid, we began classifying bugs as:
- Functional – does the bug reduce functionality of the app, or keep the user from completing an activity?
- Experience – does the bug make it more difficult for the user to get the information they want, or require extra work for the user to complete an activity?
- Visual – is the bug purely about the aesthetics, or does the bug trigger a negative emotional response?
Measuring on a regular cadence:
From these classifications, we can now begin to reduce our uncertainty about where our bugs are coming from. On our project, we review the bugs every 2 weeks, plus an ad hoc review near a release.
By measuring on a cadence like this, we can answer follow-on questions like:
- “Should our automated-testing approach have caught this before the development was finished?”
- “What ratio of bugs are Functional?” (Hopefully really low!)
- “How has the ratio of Experience bugs changed over time?”
- “If we make a change to our process, does the ratio of Visual bugs go up, down, or remain the same?”
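A minimal sketch of computing those ratios, assuming each bug is tagged with one of the three classifications (the sample data here is invented):

```python
from collections import Counter

# Hypothetical bug classifications logged during one 2-week cadence review
bugs = ["Visual", "Experience", "Visual", "Functional", "Visual", "Experience"]

def classification_ratios(bug_labels):
    """Return each classification's share of the total bug count."""
    counts = Counter(bug_labels)
    total = len(bug_labels)
    return {label: counts[label] / total
            for label in ("Functional", "Experience", "Visual")}

for label, ratio in classification_ratios(bugs).items():
    print(f"{label}: {ratio:.0%}")
```

Run this every cadence and compare the ratios across iterations; a change in the Visual ratio after a process change, for example, is exactly the kind of observation that reduces uncertainty.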
I am certain that there are other ways to measure the quality of what you are producing.
So I ask you…
Do you know where your bugs are coming from?
How are you able to measure the quality of the code and experiences your team produces?