
Half-Baked, Fully Shipped: Toilet Paper, Software Quality, and the Problem You Didn’t See Coming

No deep-dive posts this week (sometimes life just be lifin’, y’all). Instead, here’s another Half-Baked, Fully Shipped, courtesy of my toddler and his newfound passion for unrolling entire rolls of toilet paper.


Have you ever tried to roll up a toilet paper roll after your toddler has completely unrolled it? (Honestly, I had it coming. I know I shouldn’t have turned my back on him for those 4.5 seconds.)

If you have, then you know: it’s basically impossible to get it just right again. No matter how carefully you roll, something is always a little… off. And that’s because the misalignment started way back at the beginning, when the first few sheets went off track. By the time you notice, the whole roll is a misaligned mess.

Rolling up that toilet paper, I had an interesting thought: this is exactly what happens in software product development. Tiny misalignments often start early, long before anyone realizes there’s a problem.

Maybe a requirement was a little vague, or a test case didn’t match real-world usage. Maybe a seemingly small decision threw off the whole system. Or maybe your high-quality product worked so well you decided you didn’t need your team of testers anymore... while still shipping new features 🙃.

At first, it’s fine. But these misalignments stack up, and the longer they go unchecked, the harder it is to course-correct. It’s like performance issues that creep up slowly. Load times increase by a measly fraction of a second here, and another fraction of a second there. Nothing major, nothing that sets off any alarm bells. But six months later, support tickets are piling up because your app has gotten unbearably slow. No one meant for this to happen - but those small dips in performance added up.

Sometimes, the failures caused by quality misalignments are loud, like an outage, a Sev 1, or a fire-drill-worthy bug. Other times, failures are quieter, almost invisible, until it’s too late. Maybe it’s a drop in engagement that no one immediately links to usability issues. Or a seemingly small performance lag that doesn’t seem urgent until six months later, when churn rates spike.

So how do you catch small quality gaps before they become big issues? Some teams rely on metrics, looking at things like user drop-off rates, error logs, and regression trends to spot early warning signs. Others lean on intuition, or on testing to surface issues before they reach production. And there are plenty more approaches. (Watch this space for an upcoming post on this topic.)
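To make the “regression trends” idea concrete: here’s a minimal sketch of how a team might flag the slow-creep latency problem from earlier, where each individual bump is too small to trip an alert. The function name, the sample data, and the 10% threshold are all hypothetical, just for illustration - the point is that fitting a trend to the series catches cumulative drift that per-week comparisons miss.

```python
from statistics import mean

def detect_drift(samples, threshold_pct=10.0):
    """Flag a slow upward trend in a latency series.

    samples: chronological measurements (e.g. weekly p95 load times in ms).
    Returns True if the fitted linear trend implies the metric drifted
    more than threshold_pct over the whole window.
    """
    n = len(samples)
    if n < 2:
        return False
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    # Ordinary least-squares slope of latency vs. time
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) \
        / sum((x - x_bar) ** 2 for x in xs)
    drift = slope * (n - 1)  # total change implied by the trend
    return drift / samples[0] * 100 > threshold_pct

# Hypothetical data: each weekly bump is tiny, but the cumulative drift isn't.
weekly_p95 = [800, 815, 828, 846, 861, 880, 903, 925]
print(detect_drift(weekly_p95))  # the trend adds up to ~15% - flagged
```

No single week here jumps enough to set off a static threshold alert, which is exactly why trend-based checks are worth having alongside them.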

But really, it’s about having a well-tuned quality heuristic - a mix of data, experience, and awareness - that helps teams notice when something is off, even before it becomes a problem. That’s what I see as a quality team’s responsibility: fine-tuning our software quality heuristic and making sure we catch those small moments of misalignment before they snowball into real problems.