The Dying Business? Really?

August 27, 11:04 PM

Panelists

Matthew Heusser
Michael Larsen
Senthil Ayyappan

How many times has the death of testing been touted? How many talks have been given declaring the death of testing? Granted, those talks have a specific context that is often overlooked, but the statement gets repeated a great deal. In this episode, Senthil Ayyappan, Head of Market Strategy for Qualitest, joins Matthew Heusser and Michael Larsen to talk about how Quality Engineering is much more than basic testing and how it is very much a growing industry. In short, reports of testing’s death, and especially Quality Engineering’s death, have been greatly exaggerated.

Transcript

Michael Larsen (INTRO):

Hello and welcome to The Testing Show.
Episode 104.
The Dying Business? Really?
This episode was recorded July 16, 2021.

If you are a software tester, you’ve no doubt heard the phrase “testing is dead” a time or three, but the testing of yesteryear is not what’s needed in today’s world. To that end, Senthil Ayyappan, Head of Market Strategy for Qualitest, joins us to talk about how Quality Engineering and Digital Engineering are very much a growing industry.
And with that, on with the show!

Matthew Heusser (00:00):
Thank you, Michael, for the wonderful introduction. Welcome to The Testing Show, everybody. We’ve done about a hundred episodes now, and people need to understand the value of what we bring, which implies the business of testing. It’s time we talked about that, and Qualitest is a wonderful company to talk about it with. In the years that it’s been around, it’s grown from eight employees to… 3,500 last time I checked, and Senthil Ayyappan, the Head of Market Strategy for the company, came up through the testing ranks, understands the value, and understands how to position it and explain it. Thanks for coming on the show today, Sen.

Senthil Ayyappan (00:37):
Thanks for having me here, Matt and Mike. I’ve always been intrigued by your show. It’s such a pleasure to be here, talking to you and the entire Quality Engineering community. I started as an automation engineer, and I’ve been extremely passionate about the industry. Qualitest is an organization that I joined about two months back. I’m super excited about the way Qualitest has been expanding. We made about four acquisitions this year. We’re no longer about 3,000-3,500 employees; we have close to 5,000 employees today, we have over 70 nationalities, and we are rapidly expanding.

Matthew Heusser (01:16):
70 nationalities. It’s impressive when you think about how the company had to grow to accomplish that, and the kind of inclusion policies you have to have to make that possible, especially with things like hiring rules in different countries and employee validation and verification. But now let’s talk about a different aspect. As Qualitest insiders, Michael and I have heard these metrics about the size of the quality sourcing industry and the historical and projected growth rate. We’re not really in a position to share them, at least not on the podcast, not in public yet. Are you? I know that Qualitest has been funding some research in this area. Can we talk about it on the show? Or is it a little bit too early?

Senthil Ayyappan (02:05):
I can definitely give a gist. We’ve been working with leading analysts. They’re estimating the global QM market spend to be around $72 billion, and this is expected to grow at about 4-6% CAGR [Compound Annual Growth Rate] over the next three years. The pandemic slowed it down and it grew by only 2-3%, but now it’s bouncing back quite well. I like to split the QA market into two segments. One is next-generation QA. By next-generation QA, I’m referring to big data, cloud, immersive technologies, connected ecosystems, and the likes of packaged testing. That part of the business constitutes about 35% of the overall QA spend, and it’s growing quite aggressively, at an 8-10% CAGR. Traditional QA, on the other hand, is growing, but at a much slower rate, about 2-3%. We are seeing that customers are focusing more on cloud-based custom apps; that’s the fastest growing segment, followed by COTS applications. We are seeing that the BFSI segment (that’s Banking, Financial Services, and Insurance) accounts for the lion’s share of the market, about 40% of the global QM market, with telecom and retail and consumer goods closely following. Unsurprisingly, North America seems to dominate the market when it comes to QA services spend.
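
For listeners who want to sanity-check what those growth figures imply, here is a minimal sketch of the compound-growth arithmetic, using the $72 billion base and the 4-6% CAGR range quoted above. The projections are simple illustrative math, not analyst numbers or anything from Qualitest’s research.

```python
# Minimal sketch of what a 4-6% CAGR implies for a $72B market over three years.
# The $72B base and 4-6% range are the figures quoted above; the projections here
# are illustrative arithmetic, not analyst estimates.

def project_market(present_billion: float, cagr: float, years: int) -> float:
    """Compound a market size forward at a constant annual growth rate."""
    return present_billion * (1 + cagr) ** years

base = 72.0  # global QM/QA spend, in billions of dollars
for rate in (0.04, 0.06):
    print(f"At {rate:.0%} CAGR: ~${project_market(base, rate, 3):.1f}B after 3 years")

# Prints roughly $81.0B at 4% and $85.8B at 6%.
```

In other words, even at the low end of the quoted range, the market Senthil describes would add roughly $9 billion over three years.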

Matthew Heusser (03:38):
So you could extrapolate from that data that the opportunities are going to be in the areas of either new technologies or banking, financial services, and retail, so that if someone wanted a career in quality that wasn’t a generalist full-stack developer role, those would be the places to look at for growth. Did I hear that right?

Senthil Ayyappan (04:08):
Your observation is quite right. Those are the leading industries and that’s the leading market. What’s happening here is quite revolutionary. Gone are the days when developers used to dismiss this as “you’re doing something that an eighth-grader can do”. Just imagine: it’s a $72 billion industry and it’s rapidly growing. The industry is not questioning the role of Quality Engineering itself, but it is challenging the way Quality Engineering has been done. That’s a challenge we’ve got to embrace really well.

Michael Larsen (04:42):
Sen, I’m appreciating a lot of what you’re sharing here. I know that a lot of people who are listening to the show are what I like to refer to as “out in the fields”. We’re the ones who are actually doing the work, pounding away and trying to make testing as effective as possible. Now, one of the things that I like to say I bring to being a tester, or that I try to bring, is that I’m here to help make it possible for things to change, above and beyond just saying, “Hey, I found a bug. Let’s fix it!” It’s “how can we do things better?” If we’re talking about quality in general, we should be talking about making a better experience for everybody. You’ve got a lot of background. You’ve been a tester, you’ve worked in automation, and you’ve worked in senior management around software testing and quality. So what I’m curious about is, how would you recommend that some of us be more effective at influencing change in our organizations?

Senthil Ayyappan (05:43):
I really love that question, Mike. Thanks for bringing that up. The way testing was done a few years back and the way it’s being done now are quite different. I started my career about 21 years back. In my initial period in the role of test engineer, the KPIs, or Key Performance Indicators, were more silo-based. It was about how I would stop defects from getting into UAT and production. It was nothing about collaboration. I would go bombard the application with a number of defects, be proud of it, and the developers would start a triage meeting with a Sam Rayburn quote: “any jackass can kick down a barn, but it takes a good carpenter to build one”. We would kind of laugh it off. But in hindsight, there’s a bit of an angle to it, in the sense that if the measurements are only about the testing team being successful or the development team being successful, that does not foster good collaboration across the organization. That was one of the challenges laid out by the development community to the testing community, and it has led to a beautiful evolution. Testing is shifting both left and right, or I would say it has shifted left and shifted right. The boundaries between roles are blurring. There was a metamorphosis of Quality Assurance into Quality Engineering, and there’s slowly a convergence happening from Quality Engineering to Digital Engineering. So now what does that mean to the development world, and what does it mean to the business users? We are sitting right in the middle, influencing both sides of the world. To the development world, if we are just finding defects and treating that as a performance measure for an engineer, it really does not help. So what do we do as a testing community to help the development community, to improve developer velocity? That drives a lot of collaboration: doing independent testing in an integrated way, being the true quality orchestrator in the engineering process. Similarly, shifting right means wearing the end user hat, treating our user as our product owner, translating those requirements and needs earlier in the cycle, and operating as a cohesive force. I think that is what’s going to lead to a total experience, a total release velocity improvement. And that’s the way testing communities are helping more and more today.

Matthew Heusser (08:27):
So along those same lines, if I heard you correctly, there’s a possibility for a Quality Engineering discipline to be the difference between “eh, it works just as good as anything else” and “brand loyalty; this is my company, this is my product”. I really liked that. And it doesn’t matter whether you’re working in software or you’re developing software for a tractor. Those questions or comments or feature requests… it’s a fine line between a defect and a feature request. So you wrote an article a couple of years ago, before shift-left and shift-right were really popularized, called “Who Moved my QA Cheese?” In it, you suggested that for quality to continue to thrive and succeed as a discipline, we would have to go chase value in a different place. Tell us more about the article and how you think that has turned out. How do you see things today?

Senthil Ayyappan (09:31):
It’s funny you bring that up. I don’t know if you recall the book by Spencer Johnson. It was written in 1998, around the same time I started my career; it’s a 94-page book focusing on change management. It’s about two pairs of characters (if I remember them right, it’s Hem & Haw and Sniff & Scurry) who find a paradise of cheese. When it’s about to run out, one pair ventures out and tries to find another paradise of cheese, while the other pair resists change and just sticks around. I was very inspired by the book earlier in my career. Then, about four years back, I wrote this blog and asked, “Who Moved my QA Cheese?” It was a reflection of how we were seeing QA evolve into QE and the metamorphosis of roles, and I made some predictions on what role changes were happening. One of the predictions we made in terms of role changes has happened much faster than we thought it would. Quality Assurance has morphed into Quality Engineering quite seamlessly. There is a blurring line between Quality Engineering and Digital Engineering, and the pandemic really took it in a different direction. We see newer roles that have arrived. Test Managers and Test Leads have started playing the role of pseudo product owners. From data testing, we have evolved to Data Scientists in Test. The adoption of AI has been very fast, and somebody needs to test these AI models, so those are new roles that are coming up. IoT [Internet of Things] specialists are picking up really well from a Quality Engineering standpoint. It’s more about the last mile testing right now, so Site Reliability Engineers and Seasonal Readiness Test Engineers are roles that are evolving. If you go back and look at the testing community, about 70% of them come from a science and technology background, so technically there’s no difference in skills between a test engineer and a developer when they leave college. The test engineers pick up the technical skills quite well, and they’ve taken to it like fish to water. The folks from other streams, who come in and don’t find technology so appealing, have started mimicking what users would do. By picking up industry knowledge, application functional knowledge, and ecosystem knowledge, they are really reducing the dependency on business users and SMEs during the quality life cycle so that those people can focus on things that are more critical.

Michael Larsen (12:13):
I kind of want to follow up on what you just said here, if that’s okay. What I’d love to know is, how were you measuring testing before, and how relevant are those methods today? More to the point… and I think I’ve heard you mention this, and I’ve heard others around Qualitest mention this… there’s the idea of testers who are adapted to working in difficult conditions, where testing measurement isn’t all that relevant, to be honest, because it matters much more that we get the work done. In some cases we’re doing work in environments that are not really easy to measure. You are known for using the metaphor of “Testers as Sherpas”, the people in Nepal and Tibet, in the very highest parts of the world. When people decide that they want to go explore and climb, they’re the ones seeking the glory and carrying all the quote-unquote technical gear, but the ones who do the hard work are the Sherpas themselves, because they are acclimated to that environment as a literal lifestyle; this is their day-to-day life. So I’m kind of curious as to how those two things blend together. I know that’s a lot to consider, but hopefully that makes sense.

Senthil Ayyappan (13:28):
That’s a very interesting way to put it, Mike. If you look at the Sherpas, they went through a DNA adaptation just to survive and thrive better in those conditions, right? They have fewer red blood cells than the rest of us, and they have more nitric oxide, which helps them perform better at high altitude. Living on the borders of Nepal and Tibet, they have to adapt really well to survive in those conditions. Now, Quality Engineers are very similar to Sherpas. They sit on the border between development and end users, and they have to shift left and shift right. If you look at it, the traditional metrics are very quickly becoming antiquated. The metrics were things like: how many defects are you leaking into production and user acceptance testing? What is your automation percentage for regression and functional testing? How fast are you executing your test cases? But that doesn’t really help; it doesn’t promote collaboration. That got us thinking. We went back and looked at how many users would get impacted by an incident, and by how many minutes they would go down. We benchmarked user impact minutes as a KPI and started improving against that goal so that there’s alignment between business needs and how the system performs. We were looking at automation and how we would do regression and functional testing, but we never thought about doing everything in the same sprint. If I improve my in-sprint automation percentage, how can I really help the developers? How can I influence developer velocity? If I get a build and I’m able to identify defects within 13 minutes to two hours, versus the developers waiting a couple of days to get that feedback, they would really appreciate it. As an ecosystem, it’s for the better, right? More than siloed KPIs, the industry is heading towards collaborative KPIs. When I say collaborative KPIs, release velocity is a KPI that both parties have to work on together; that’s very, very important. And Net Promoter Score for the business becomes a giant KPI for the development teams and the Quality Engineering team. That’s becoming very important. So there’s a very big myth that’s getting shattered, and we have seen a lot of this in the last few months. Customers were thinking they needed one large SI doing both development and Quality Engineering, so they had one throat to choke or one back to pat. That’s getting shattered, because what they were getting was SIs with strong development capabilities but diluted Quality Engineering capabilities. Now they’re adopting a best-in-breed approach where they’re getting the best development teams in the industry and modern Quality Engineering companies to do Quality Engineering for them, and they’re using collaborative KPIs, which allow both parties to work together towards a common business outcome and reduce friction. So that’s the way it’s evolving, Mike. I hope that helps.
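
Senthil describes “user impact minutes” only at a high level, so here is a hypothetical sketch of one way such a KPI could be computed, assuming it is the sum, over incidents, of users affected multiplied by minutes of downtime. The formula, the field names, and the sample figures are illustrative assumptions, not Qualitest’s definition.

```python
# Hypothetical sketch of a "user impact minutes" KPI, assuming it is the sum over
# incidents of (users affected x minutes of downtime). The formula and field names
# are illustrative assumptions, not a definition given in the episode.
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    users_affected: int
    downtime_minutes: float

def user_impact_minutes(incidents: list[Incident]) -> float:
    """Total user impact minutes across all incidents in a reporting period."""
    return sum(i.users_affected * i.downtime_minutes for i in incidents)

incidents = [
    Incident("checkout outage", users_affected=1200, downtime_minutes=18.0),
    Incident("login slowdown", users_affected=300, downtime_minutes=45.0),
]
print(user_impact_minutes(incidents))  # 35100.0 user-impact minutes
```

Tracked release over release, a number like this gives the development and Quality Engineering teams a shared target, which is the spirit of the collaborative KPIs Senthil describes.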

Matthew Heusser (16:53):
Well, we’ve covered a lot today and I want to respect your time, but before we get going, I did have one follow-up I wanted to ask, and this is something Michael and I have been kicking around for a long time; we’ve picked up on it in your language as well. We think that testing, quality particularly, quality done well, overlaps with and shares a lot with the product owner or product manager role. It might be more pointed in that the product manager kind of has the vision for the product, and the quality engineer is the person in the weeds who says, “I know this error message is technically correct, but it doesn’t tell the customer what they need to do to solve their problem”. Do you agree with that line of thinking, or do you have anything to help us figure out how to navigate as we reinvent the role going forward?

Senthil Ayyappan (17:47):
You’re absolutely right there, Matt. If you look at the last 18 months, it’s been truly testing times. The pandemic has cost a lot, but it has also driven a true transformation in its own way. The distributed Agile model has been widely adopted, and it has its own pros and cons. Now, with a lot of teams sitting remote across shores and the product owner expected to work with them 24/7, it’s taking a big toll, and that’s causing a lot of communication issues. That’s causing a lot of friction between the way developers work and the way the product owners want things to happen. It requires somebody who can really broker it and play that role, like a pseudo product owner. One of our customers had a very similar issue; they couldn’t do releases for about six months. The way we looked at it, we onboarded a test lead who came with a lot of experience working in Agile models and had done a lot of business stakeholder management. He took on the role of a pseudo product owner, with the help of some workforce transformation training initiatives from our side as well, and he was able to bridge the gap between how the product owners were thinking and prioritizing features and how the development team wanted to go about it. He was able to speak the technical lingo and understand the business needs, and that really helped improve release velocity. We were able to release every two weeks. The quality improved. The mobile app rating improved drastically. The business users who were playing the roles of product owners didn’t have to stay up late working with all the teams to get projects executed on time. The pseudo product owner was a hybrid role which was so successful that now the customer is looking at replicating this model across other Scrum teams at scale.

Michael Larsen (20:01):
Awesome. Well, thank you so much for joining us today, and to everybody else, thank you for listening. We look forward to having you with us for another edition of The Testing Show. Thanks again.

Matthew Heusser (20:16):
Thank you, Michael. It’s been a pleasure.

Senthil Ayyappan (20:17):
Thank you, Mike and Matt.

Matthew Heusser (20:20):
Hey everybody. I know you’re used to hearing Michael at the end of the show with the final announcements, but this is kind of like the credits in a superhero movie: when you sit around for it, you get the special bonus. After we recorded this, we talked about that whole “Who Moved my QA Cheese?” idea, and we’d like to create sort of a series where we talk to practitioners, people who are in the middle of doing it, about some of these concepts, see how they resonate, and connect them with their own experience. That should be coming up as special bonus episodes of The Testing Show. So watch out for it, and thanks for listening.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show.

We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts, and we are also available on Spotify.

Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show.

Talk to us about what you like and what you’d like to hear, and also to help us shape future shows.

Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.
