Testing Without Testers: Who Owns Quality When There’s No QA?

Lately, I’ve been spending some time looking into interesting companies. Part of it is curiosity, but mostly, I’ve been scoping out places I might want to work next 😅. One thing I’m noticing is that more and more companies don’t have a dedicated QA or testing function. Now, that’s not necessarily surprising - I know that plenty of teams distribute testing responsibilities across different roles. But it got me thinking: Do these teams really consider what happens to quality when no one owns it?


If your company doesn’t have a QA team, do you think that means y’all aren’t testing?

Of course not. Every team that builds software tests, whether they realize it or not. The real question is how well they do it.

Many think of testing as something that happens at the end of development - a final gate before release. And if they don’t have a dedicated QA team, they assume testing just... doesn’t happen. But in reality, testing is happening at every stage of the product development lifecycle (PDLC). It’s just not always intentional or particularly effective at preventing critical issues from slipping through the cracks.

So, Who’s Testing? *

*Even if they don’t call it that

When there’s no formal QA function, other roles naturally take on parts of the testing process - often in ways that aren’t structured or scalable:

  • Product Managers might try to spot major gaps by clicking through new features before launch, relying on gut instinct, customer feedback, and a few hurried test scenarios to make sure things seem right.
  • Designers run usability tests, but their focus is usually on the ideal user journey. Edge cases? Not always on their radar.
  • Engineers test for code correctness, but without someone dedicated to challenging assumptions, their testing tends to focus on what they expected to happen rather than what could go wrong.
  • Customer Support turns into an unofficial bug triage team - troubleshooting issues and reporting critical failures after they’ve reached users.

So, yeah, testing is happening - kind of. But when no one owns it, gaps start to show. Users become the real testers and, over time, quality declines, releases get riskier, and teams spend more time firefighting than innovating.

The Risks of Testing Without Testers

Every function involved in building software contributes to quality - but none of them specialize in it. And when no one’s job is to own quality, testing can become scattered, inconsistent, and reactive.

Limited risk assessment: Without a QA function, teams might be testing, but are they testing what really matters? Probably not. Instead, it’s a game of chance. Maybe they’ll catch the big issues, maybe they won’t. Good testers don’t just find problems - they help teams anticipate what could break and prevent the worst from happening.

Gaps in exploratory testing: Automated tests are great at catching regressions, but they don’t think like a user (at least not yet. Maybe some AI tool will prove me wrong someday soon 🙃). Good testers are critical thinkers - they challenge assumptions, ask “what if”, and catch subtle issues that automation can’t.

Inconsistent testing approaches: With each team and function focused on their own goals, testing can be uneven. And, when trade-offs need to be made, testing is usually the first to go. Good testers adapt their testing approach to fit business needs while making sure the quality bar stays where it should be.

Quality as a Team Effort

Now, the best teams - whether they have a dedicated tester or not - understand that quality isn’t just the job of QA. It’s a team-wide effort to integrate quality into every stage of product development. That means:

Product and design teams working with QA early: the best products are built with quality in mind from the very beginning, with PMs, designers, and testers working together to test assumptions and usability before any code is even written.

Developers treating testability as a first-class citizen: writing clean, testable code is considered a best practice for a reason. When code is designed to be tested - dependencies injectable, behavior observable - verifying quality becomes part of everyday development rather than an afterthought.
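To make that concrete, here’s a minimal sketch of what “testable” can look like in practice. It’s in Python, and every name in it (CheckoutService, PaymentGateway, and so on) is hypothetical - the point is just that injecting a small interface, instead of hard-coding a dependency, lets a test swap in a fake:

    from dataclasses import dataclass
    from typing import Protocol

    # Instead of hard-coding a concrete payment client (which would force
    # every test to hit the real service), the service depends on a small
    # interface that tests can replace with a fake.
    class PaymentGateway(Protocol):
        def charge(self, amount_cents: int) -> bool: ...

    @dataclass
    class CheckoutService:
        gateway: PaymentGateway  # injected dependency, not a hard-coded client

        def checkout(self, amount_cents: int) -> str:
            if amount_cents <= 0:
                return "invalid amount"  # the edge case a hurried happy-path check might skip
            return "paid" if self.gateway.charge(amount_cents) else "declined"

    # In a test, a trivial fake stands in for the real gateway:
    class AlwaysDeclines:
        def charge(self, amount_cents: int) -> bool:
            return False

    assert CheckoutService(AlwaysDeclines()).checkout(500) == "declined"
    assert CheckoutService(AlwaysDeclines()).checkout(0) == "invalid amount"

The specific pattern matters less than the effect: with the dependency injected, the declined-payment and zero-amount paths can be exercised in seconds, with no staging environment and no real charges.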

Leadership making quality a priority: not only is quality a competitive advantage, but it also benefits the team. Cutting corners might save teams time upfront, but dealing with the resulting bugs and rework slows teams down even more in the long run. Besides, shipping quickly doesn’t mean much if you’re shipping buggy products that get in the way of your users. When leadership prioritizes quality, teams can move fast without breaking things.

The strongest teams don’t just check for quality. They all build it into everything they do - together.

So as I keep exploring companies and finding more without dedicated testers, I’ll continue to wonder: do they even know what they’re missing?

What’s Next?

In my next post, I’ll dive into how much testing is enough and how to figure out the right level of investment based on things like risk, product maturity, and business priorities.

And since not every company has dedicated testers, I’ll also be covering practical ways teams can structure their approach to quality - even without a QA team.

But for now, I’d love to hear from you: How does testing happen at your company? Is there a structured approach? Is it spread across different roles? And be honest: how do you think it’s actually working out? Leave me a comment below!