The QA Team I Wish Every Company Had

So, I know I said last time that my next post would be about how much testing is enough. But… I changed my mind. And that’s fine, right? This blog is new, and honestly, I might be the only one reading it 😅.
Instead, today I want to talk about something I’ve been thinking about for a while: the QA roles that should exist and how I wish companies would evolve their teams to better support product quality.
(By the way, this is a bonus post! I’m too excited to wait until next week. From here on out, expect weekly posts. Unless, of course, life happens.)

QA Roles Are a Mess (But They Don’t Have to Be)
Ask five different companies what their QA team looks like, and you’ll probably get five different answers. Some companies have full-fledged test engineering teams. Others have a handful of “manual” testers (I don’t love that term, but I’m rolling with what’s most commonly used here). Some companies have no testing function but still expect developers to take care of testing (as if that’s all a QA team does). Others haven’t thought about testing enough to have expectations in the first place.
What I haven’t really seen, though, is a QA team structured in a way that truly fits into the product development lifecycle (PDLC) as a strategic function. But what if we rethought QA roles entirely? If I were building a team from scratch, here’s how I’d do it.
The Ideal QA Team: Three Roles That Matter
One of the problems in QA is that the same titles don’t mean the same thing across companies (or even within teams at the same company). A “QA Engineer” might be an exploratory tester at one company and build automation frameworks at another. That lack of clarity makes it hard to build a QA team that actually works.
Instead, what if we structured QA around three distinct roles, each with a clear focus and place in the product development process?
Product Quality Architects (PQAs) → The Product-Focused Testers
If you think of a typical ‘manual QA’ or tester, PQAs are probably the closest match. But many companies aren’t thinking about the role the way they should, and that leads to two major issues:
1. QA teams that aren’t always working on the right things (or in the right ways), sometimes because their roles are inadvertently defined in a way that limits their impact.
2. Companies that don’t value the work their QA team is doing. Because expectations are misaligned, QA’s contributions often go unrecognized.
What Should PQAs Actually Do?
Think of PQAs as the ultimate product quality experts within QA. They know the ins and outs of the features they own and make sure quality is built in before the first line of code is written. I would expect a PQA on my team to:
- Act as subject matter experts (SMEs) for specific products or features. They deeply understand how things are supposed to work and help define what “working as expected” truly means.
- Shape testing strategies by identifying edge cases, gaps, and risks early. They work alongside PMs, designers, engineers, CX teams, and others to ensure that real-world user behaviors and historical data inform test priorities.
- Clarify requirements during ideation. PQAs help spot assumptions and fill in missing details so teams don’t discover gaps too late in the process.
There are some quality “smells” that tell me when a team needs the expertise of a PQA. One red flag is when teams frequently run into surprises late in the PDLC due to unclear or misinterpreted requirements. This usually means “testing” isn’t happening until after development has already started. PQAs help shift testing left, spotting gaps before they become more expensive problems. Another sign is when engineering teams spend more time fixing production bugs than shipping new features. That’s a clear indicator that testing isn’t aligned with business risks or real-world usage. PQAs help by defining a risk-based test strategy that focuses coverage on what matters most.
Software Development Engineers in Test (SDETs) → The Technical Testers
If PQAs are the product experts, SDETs are the technical backbone of the QA function. They don’t just write automation. They define how testing is implemented and ensure the entire automated test strategy is scalable, maintainable, and effective. But just like with PQAs, companies don’t always set up SDETs for success.
This can often result in “good” automation coverage that doesn’t actually provide value. If automated tests are only written to check a box instead of being part of a larger quality strategy, teams often end up with lots of flaky, redundant, or low-impact tests. Test coverage might look great on paper, but if it’s not catching real issues, does it matter?
What Should SDETs Actually Do?
Ideally, SDETs should be deeply embedded in engineering teams, ensuring that automation isn’t just present but is actually solving the right problems. I would expect an SDET on my team to:
- Work with PQAs and engineers to drive risk-based testing. SDETs ensure that automation prioritizes high-impact areas instead of just aiming for some coverage percentage.
- Own the test automation strategy. SDETs don’t just write tests. They define what should be automated, what shouldn’t, and why. By partnering with PQAs, they ensure automation and exploratory testing complement each other instead of competing for attention.
- Build and maintain scalable test frameworks. Their focus isn’t solely on automating individual test cases. It’s also ensuring that the test infrastructure supports a rapid and reliable feedback cycle.
- Improve developer-led testing. They advocate for better unit and integration test practices, ensuring that bugs get caught as early as possible.
So, what are some quality smells that indicate you might need the help of an SDET? One clear one is having a high automated test coverage percentage but still seeing major issues slip into production. When automation isn’t covering real business risks, SDETs can help realign automated testing efforts. Another red flag is flaky or unreliable test suites that slow down development. If tests are constantly breaking and can’t be trusted, an SDET can help troubleshoot test stability, improve CI/CD pipelines, and eliminate test debt.
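To make that flakiness smell concrete, here’s a minimal sketch of one way an SDET might hunt for flaky tests using CI history. The idea: a test that both passes and fails on the *same commit* changed its result without the code changing, which makes it a flakiness suspect. (All names and the data shape here are hypothetical; a real version would pull run results from your CI system’s API.)

```python
from collections import defaultdict

def find_flaky_tests(runs):
    """Flag tests that both passed and failed on the same commit.

    `runs` is an iterable of (commit_sha, test_name, passed) tuples,
    e.g. exported from a CI system's run history.
    """
    outcomes = defaultdict(set)  # (commit, test) -> set of observed results
    for commit, test, passed in runs:
        outcomes[(commit, test)].add(passed)
    # Both True and False seen on one commit => result flipped with no
    # code change, so the test is a flakiness suspect.
    return sorted({test for (_, test), seen in outcomes.items()
                   if len(seen) == 2})

runs = [
    ("abc123", "test_checkout", True),
    ("abc123", "test_checkout", False),  # same commit, different result
    ("abc123", "test_login", True),
    ("def456", "test_login", True),
]
print(find_flaky_tests(runs))  # -> ['test_checkout']
```

Even a rough report like this gives the team a prioritized list of tests to stabilize or quarantine, instead of arguing about flakiness anecdotally.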
Quality Advocates → The Cross-Team Quality Leaders
PQAs focus on what should be tested. SDETs focus on how to test it at scale. But who makes sure that quality is actually prioritized across the org? That’s where Quality Advocates come in.
Too often, companies treat quality as an individual team problem rather than a responsibility shared across the organization. When no one is accountable for quality at the org level, teams lack a shared vision of what “good” looks like and where they stand relative to it.
What Do Quality Advocates Actually Do?
Quality Advocates don’t own execution - they own enablement. Their job is to ensure that every function understands what good quality looks like and how to achieve it.
Quality Advocates are most valuable in large organizations, where teams operate in silos and quality alignment gets tricky. In smaller companies, an SDET or PQA might take on parts of this role for their team. I would expect a Quality Advocate on my team to:
- Define and evangelize quality standards across teams.
- Lead training, onboarding, and education around testing best practices. They help developers write better tests, improve QA processes, and advocate for sustainable test strategies.
- Use quality metrics to identify problem areas. They track how often bugs escape to production, where test coverage is weak, and where teams struggle with quality debt.
- Influence product and engineering leaders to make quality a first-class priority. Advocates don’t just work within QA - they work with leadership to ensure quality is factored into roadmaps and company goals.
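As one concrete example of the metrics bullet above, a defect escape rate (the share of reported bugs that were found in production rather than before release) is a common starting point for spotting problem areas. This is a minimal sketch, assuming you can export per-team bug counts from your issue tracker; the team names and numbers are made up for illustration.

```python
def escape_rate(found_in_prod, found_before_release):
    """Share of all reported bugs that escaped to production."""
    total = found_in_prod + found_before_release
    return found_in_prod / total if total else 0.0

# Hypothetical per-team counts: (bugs found in prod, bugs caught pre-release).
teams = {
    "payments": (4, 36),
    "search": (12, 18),  # high escape rate -> likely problem area
}
for team, (prod, pre) in sorted(teams.items()):
    print(f"{team}: {escape_rate(prod, pre):.0%} escape rate")
# payments: 10% escape rate
# search: 40% escape rate
```

The number itself matters less than the trend and the comparison across teams: a Quality Advocate can use it to start a conversation about *why* one team’s bugs escape more often, rather than to rank teams.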
The quality smells that indicate you need Quality Advocates tend to be more systemic (i.e., problems that span multiple teams). For example, you might find that testing is inconsistent across teams or functions: one team might have great automated tests, while another barely tests at all. In cases like these, a Quality Advocate can ensure best practices are applied consistently. Another example is developer resistance to testing. If developers are pushing back on testing, a Quality Advocate can pinpoint why and find ways to shift the culture so quality becomes a shared responsibility.
What’s Next?
If I were building a QA team from scratch, these are the roles I’d be considering. Each of these roles - PQAs, SDETs, and Quality Advocates - plays an important and complementary part in ensuring quality is considered throughout the product development process.
But, of course, not every company needs all three of these roles right away (or even at all). Some teams already have strong automation but need better product-level testing. Others are shipping fast but struggling with cross-team alignment on quality expectations.
So, in my next post, I’ll cover how to decide what kind of QA team you actually need and walk through what I’d consider when structuring a new QA function. Until then, I’d (really) love to hear from you: if you could rebuild your QA team from scratch, what would it look like?