“How to Reduce the Cost of Software Testing”: A Decade Later

September 16, 07:41 AM

Panelists

Matthew Heusser
Michael Larsen
Petteri Lyytinen

Back in 2012, Matthew Heusser, Michael Larsen, and a number of other testers and test professionals came together to write a book devoted to the topic of “How to Reduce the Cost of Software Testing”. To celebrate the ten-year anniversary of that experience, Matt and Michael welcome fellow author Petteri Lyytinen to share their experiences writing the book and to see whether, ten years later, we have made progress on our goal, or whether there is more we can do in regard to the goals we set out a decade ago.

Transcript:

Michael Larsen (INTRO):

Hello and welcome to The Testing Show.

Episode 123.

How to Reduce the Cost of Software Testing: A Decade Later

This show was recorded September 1st, 2022.

Back in 2012, your show hosts, Matthew Heusser and Michael Larsen, along with several others, wrote a book called “How to Reduce the Cost of Software Testing”. To mark the ten-year anniversary of that publication, we welcome fellow author Petteri Lyytinen to share experiences writing the book, and to see if, ten years later, we have made progress, or if there is more we can do regarding what we set out to accomplish a decade ago.

And with that, on with the show.

Matthew Heusser (00:01):
So, yeah, we’re back together again. Eleven years ago, probably starting 13 years ago, 26 of us got together. I’m holding it in my hand: “How to Reduce the Cost of Software Testing”. I was an editor and wrote a couple of chapters; Michael and Petteri, who we have on today, each wrote chapters. And what have we learned? Has the industry moved forward? Could we, as some have said, republish these, take each one of these chapters to a conference as a testing talk, and no one would know the difference? Or have we actually learned something? I think we’ve learned some things, but today we want to talk about our personal perspectives on the book, the impact on the market, and maybe a little bit about the context and what we learned. So Petteri, you wrote the chapter, “Can’t waste money on a defect that isn’t there”. At the time you were living in Finland; we did get to meet up once in the US at a conference. Tell us about what you’ve been doing since.

Petteri Lyytinen (01:08):
Well, first of all, thanks for having me here. This is really a great opportunity. I was living with my then-girlfriend, now wife; we moved to Estonia about 10 years ago. And actually, after the book came out, I even started my own company. We just bought a house, and by the way, I don’t have a proper Internet connection yet. So if I’m breaking up a little, unfortunately there’s nothing I can do about it right now.

Michael Larsen (01:38):
One of the things we were talking about just before we got on here, and this is a little tangent that I wanted to put out here (just for grins), is what this book gave a springboard to. In the process of writing this book, it also started the idea of expanding on some of these topics with a podcast. Now, this was before The Testing Show podcast; this was a decade ago, back with Software Test Professionals’ “This Week in Software Testing”. Because of that, I was contacted: “Hey, I know that you have some audio editing capabilities. Would you be interested in helping us produce the show?” I said, “Yes”. A few episodes in: “Hey, would you be interested in being on the show, since you’re one of the authors working on this book?” “Yes.”

(02:29):
“Hey, would you be interested in being a regular contributor, because you’re already actively involved in the show?” “Yes”. And the neat thing, too, about this: when you’ve got a book that’s published by Auerbach, Taylor and Francis, CRC, whichever nameplate you want to give it, it’s really cool to be able to walk into job interviews, and if somebody says, “So how are you actively involved? What are some of the things that you have contributed?”, you’re able to pull out the book you’ve actually written (if you have it in hand, it’s really impressive) and say, “Well, I contributed to this book. Would you be interested in checking out my chapter? Or would you be interested in checking out the podcast I helped produce that was started from this book?” So that’s a long way of saying this book has had some legs to it, as far as I’m concerned. It set the stage for my career for the past 10 years, and I don’t consider that a small benefit at all.

Matthew Heusser (03:27):
<laugh> Oh yeah. I mean, I would agree. When we started, I was at Socialtext, and Michael, through the Association for Software Testing and their conferences, met my boss, Ken Pier. Eventually, after I went the consultant route and started working with Qualitest, Michael basically took my old job at Socialtext. And we’ve been working together on the podcast for years and years. I don’t know if you know this, Michael, but Petteri Lyytinen and I met at an STP conference in Vegas. Gosh, when was that? It was 2014, 2015?

Petteri Lyytinen (04:03):
It was 2010.

Matthew Heusser (04:05):
Oh, you’re right. No, it was before the book came out. One thing I think we did get wrong was pricing. The book is 80 bucks or something. If you have a Safari membership, you can get it for five bucks a month or whatever, and you can probably get a free trial for Safari if you wanna check it out.

Michael Larsen (04:22):
To be fair, I think it was marketed a bit as an academic book, and it is from a specialty press. While it’s unfortunate that it was that expensive (I would much have preferred it had been in line with, say, the cost of an O’Reilly book or a Packt book), you can’t always get what you want. It also made sense considering that it was a niche book for a niche audience, but I do agree: if there had been a way to make it 50% less expensive, I think it would’ve sold a lot more copies.

Matthew Heusser (04:52):
If we could’ve… I was teaching at Calvin College at the time, and if I had gone on to teach a software engineering course, I could have required it every semester; it’s almost a monopoly, the way textbook sales work. But we really wanted it to be for practitioners. So Petteri, your chapter was on continuous integration.

Petteri Lyytinen (05:09):
Yeah. I had three main points and one of them was continuous integration and tightening the feedback loop time.

Matthew Heusser (05:17):
And since then… at the time, CI was kind of this new, fuzzy thing: why would we want to do that? Who’s gonna maintain the build? And now, for most of my customers, that’s the heartbeat of the project. Sometimes testing is poo-pooed, I think is the right term. But the reality is, the only thing a feedback loop is, is an assessment of whether the item you’re looking at is what you need it to be, providing information about how it isn’t what it needs to be. That’s testing. So it should be done well, I would think. I dare say the industry has some more room to grow in that area. Now, Michael does a lot of things. In your day job, your role has expanded. You’re like an automation specialist or something, last time I checked. Is that right? Is that still right?

Michael Larsen (06:09):
In part, yeah. it varies. In a lot of cases, we’re still exploring. We’re still tracking things down. We’re still trying things out. We’re making reasoned calculations and observations and sharing that information and giving that feedback. And if we happen to get to a point to where we recognize patterns, then yes, automation comes into play. It’s this push and pull. It always is. Everything’s gonna be automated, but everything can’t be automated. And so you still have to test and you still have to know what you’re doing when you’re testing.

Matthew Heusser (06:42):
I think there are two pieces to it. What’s the most powerful thing I can do right now to figure out if the software is fit for use, and then, how do I check it automatically on every build? The second piece you can write tools to go do. I’m just not as interested in the second piece. If I wanted to be a programmer, I’d go get a programming job; I’ve been a programmer, I’ve written hundreds of thousands of source lines of code. I’m really, really interested in those questions about whether the software is good enough. Now, when I look at the book, and in particular at my chapter at the end about things you could do to reduce test costs right now: cut documentation, test things that actually yield bugs, develop a regression test cadence, elaborate a test strategy, decrease the time you spend reproducing problems.

(07:30):
Stop fighting to manage the project, walk around and listen, write test automation that attacks the business logic level. When I look at these, for the most part, I think this whole story is what became LEAN testing around 2015 to 2020. What I was trying to do was come up with the things you could do to go faster, but that wasn’t integrated yet with constraint theory. So we didn’t, say, have a way to measure your work in progress and figure out what is slowing you down (that’s the constraint), and then find a way to make the constraint faster and more effective. We had the idea of eliminating waste, but it wasn’t presented that way; we hadn’t tied eliminating waste to the idea of improving flow. So let me ask. I think the reducing-the-cost-of-software-testing stuff led into LEAN testing, and now we may be at a “what’s next” point, now that we’re coming out of coronavirus. I know that Michael has had one job, but he interacts with a large number of people through his volunteer work. Petteri has bounced around a little bit, I think. Do we still have work to do in the LEAN software area, especially for testing, or is that work done and we’re gonna move on to something else?

Michael Larsen (08:45):
I think it’s safe to say we still have some room to grow there and some room for improvements. A lot is happening right now, depending on who you’re listening to or what you consider to be current. There’s a lot of talk about, “Is testing important?” The answer is, “Yes”. Are testers important? That’s where you get a little bit of a differing opinion. Yes, you have to have testers, but the question comes down to, who’s doing the testing? Does it make sense to have a dedicated software tester for a project, or can any number of people on the team participate in that role as a tester? Could the developer be the primary tester? The answer is, “Yes, to a point”. I think that’s very reasonable and should be encouraged. The testing skills and resources that we have, especially if we’ve done this for a long time, we can teach to a lot of people, and teach them in a way that helps them be effective, and as quick, or as slow and methodical, as they, or we, would want them to be.

(10:02):
The challenge, again, is that I think we’ve been around enough times, and there have been enough black eyes and bruised egos, that I don’t think anybody says testing is irrelevant, or that we don’t have to do testing, or that we’ll catch it in our unit tests. That’s been pretty clearly proven not to be the case. I know of almost no organization that has decided, “Well, testing is irrelevant”. They’ve definitely decided that testing can be streamlined and that we might be able to get more bang for our buck for how we do the testing. There’s definitely room for improvement there in the actual process and the details.

Matthew Heusser (10:43):
It’s hard to generalize to the whole industry; we just don’t have enough data, and everybody’s different. But I almost want to say that I’m seeing pushback the other way. I’m seeing organizations that have eliminated tester as a role coming back and saying, “Hey, could you build a team to do testing for us?” or “We need a project rescue. What could you do with a million dollars over the next six months to shore up our processes?” Well, lots of things; let’s talk. I’m seeing organizations that abandon the tester role, then have a quality crisis within 12 to 24 months. So it’s actually like a Renaissance; I think the pendulum is swinging the other way. But like I said about a few of the early adopters of things like Extreme Programming: if you’re gonna get rid of testers, the testing skill and knowledge of the entire organization actually needs to go up. And until the industry is willing to recognize that reality… and I don’t mean writing unit tests in Python, I mean test design. For this function, what are the 10 most powerful scenarios we can create that will tell us the most about whether it works or not? That’s something we study, and it’s knowledge that I don’t think is in danger of being lost, but it’s not doing great. So Petteri, is this resonating with you, or has your experience been different?

Petteri Lyytinen (12:12):
Well, there’s definitely some truth to it. At least in Finland, test automation has been a rapidly growing field. I don’t see so many companies saying they want to get rid of testers, but maybe they’ve been asking for a couple fewer testers who are actually doing more automation. So I think Finland is in kind of a bad spot, in the sense that there’s a heavy trend, and heavy marketing, towards adding automation and maybe belittling the skills of the actual tester. I don’t know. We’ll see. I think Finland will follow the pendulum swing you described. How soon that happens, I have no idea.

Matthew Heusser (12:59):
Oh, so you’re not saying that they’ve eliminated the tester role, but you’ve got SDETs, who are coding testers? Or are you having the programmers write the test automation?

Petteri Lyytinen (13:09):
A bit of both. I think the unit testing and Test Driven Design/Test Driven Development have been trendy. I don’t know if anyone is using the TDD term so much, but basically they’re doing the same thing. So developers are writing unit tests and lots of them. And then you have the test automation specialists who are doing some kind of scripted automation and then maybe less actual testing.

Matthew Heusser (13:37):
It’s interesting how those trends happen. For a while there, I could do a tutorial on exploratory testing at a conference, and people were interested in the concept and would show up to learn. Then suddenly a switch flipped and they were already experts. “Of course we do exploratory testing. Schmoratory testing. Of course we do it.” And it’s fair to say that most organizations I worked with started to have a small exploratory test practice. I mean, small; some small percentage of the time, they would consciously move off the script, off the automation, and do testing by hand. And once it actually entered into the practical domain of people using it, it was embarrassing to say you didn’t know what it was, and nobody wanted training on it, nobody wanted consulting on it; they all knew it. I think the same thing has happened with unit tests. Instead of fighting it, in most organizations,

(14:29):
now you say, “Show me your unit test file” and they can show it to you. The thing is, there are like four tests. They’re like 75 lines each, of which 73 lines are setup; then there’s a function call, and there’s an assertion. Most organizations have unit tests now. They’re not very good <laugh>, and there aren’t very many of them, and they don’t wanna do TDD because it takes too much discipline and work. Do you agree, Michael? And what’s the tester or test manager to do in that situation? We think we don’t need testing, but we’ve moved the needle half an inch. Of course we still need it.
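
To make that shape concrete, here is a minimal sketch in Python with pytest. The Order class and its values are hypothetical, invented only to illustrate the contrast between the setup-heavy test described above and a leaner, parametrized one.

    import pytest

    class Order:
        """Hypothetical domain object, standing in for whatever the team ships."""
        def __init__(self, items):
            self.items = items  # list of (unit_price, quantity) pairs

        def total(self):
            return sum(price * qty for price, qty in self.items)

    # The setup-heavy shape: almost every line is arrangement,
    # followed by one function call and one assertion.
    def test_total_setup_heavy():
        catalog = {"widget": 3.50, "gadget": 7.25}
        cart = []
        cart.append((catalog["widget"], 2))
        cart.append((catalog["gadget"], 1))
        order = Order(cart)
        assert order.total() == pytest.approx(14.25)

    # A leaner alternative: let test design drive the test count by
    # parametrizing the scenarios that tell us the most.
    @pytest.mark.parametrize("items,expected", [
        ([], 0),                          # empty order
        ([(3.50, 2)], 7.00),              # a single line item
        ([(3.50, 2), (7.25, 1)], 14.25),  # mixed items
    ])
    def test_total(items, expected):
        assert Order(items).total() == pytest.approx(expected)

The second shape is where the “ten most powerful scenarios” question from earlier gets asked explicitly.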

Michael Larsen (15:05):
So in my personal experience, it’s always gonna be one of those… I hate to say these words, but, “it depends”. It really does. To put it bluntly: are you doing something that is gonna be used on the regular? The reason I mention this, and I don’t wanna belittle it: yes, if you are working with a platform that has recurring elements that most people are going to use on a regular basis, then there is a benefit to having a battery of tests that goes through, looks at those elements, and makes sure they’re doing what they’re supposed to do. For the past couple of years, though, I’ve been working on custom projects, projects that have been done for integrations and changes so that other companies can integrate what they do. I’m sorry, I’m being vague here; I just don’t want to get in trouble by mentioning anybody.

(15:58):
But in general, there is a basic series of steps that is always going to be repeated, or that I’m gonna do in most cases. Those are the simple, low-hanging fruit. Automating those is a no-brainer for me because it saves me time; I don’t have to run them by hand all the time. But every once in a while, I get thrown a curveball: “Hey, we need to do this in a different way, and you’re not gonna be able to test this that way, because they’re set up so that you have to give them a humongous drop of all their data.” And then we have all of the stuff that splits it out into different files that then need to be examined. That is not an example that is going to work with my core tests. Maybe a couple of them will, but I can’t just say, “Push a button and let it go do its thing.”

(16:52):
This actually requires me gathering up data. It requires me having a baseline to look at. It requires me sending the data over and making sure the files are processed and passed through, or split up. And then I go into the server to make sure we get what we expect, that our outputs are done the right way. And then I have a certain number of steps I can do to say, “All right, do we have a match?” And in this case, understand, a match is not necessarily “do I get identical output?”, because that’s never gonna happen; they’re always updating and adding people and information. Instead: what patterns do I see that give me a realistic feeling that this has been done correctly, so I can say, “Yes, this is within the threshold”? That’s something that is gonna take your average computer programmer forever to try to turn into an algorithm and asserts that can say, “Yes. Yes. Yes. Yes. Yes. Well, yes here… but maybe not here… but yes here… but we should probably have a conversation here.” That’s where you’re still gonna need real sapience, if you will <laugh>, into what the product does, and the ability to have those conversations with stakeholders and say, “Look, are we behaving the way that you expect us to, based on this output?” That’s gonna need people. That’s not something we’re gonna automatically fix with the latest Python script.
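
As a rough illustration of the “within the threshold” idea, here is a minimal sketch in Python; the field names, the baseline values, and the 2% tolerance are all hypothetical, and the judgment about what to do with anything flagged still belongs to a person.

    # Rather than asserting identical output, compare a current run against a
    # baseline and flag only the fields whose relative drift exceeds a tolerance.
    def fields_outside_threshold(baseline, current, tolerance=0.02):
        """Return (field, note) pairs that deserve a human look."""
        suspicious = []
        for field, old_value in baseline.items():
            new_value = current.get(field)
            if new_value is None:
                suspicious.append((field, "missing from current output"))
                continue
            # Counts are expected to drift as people and records are added,
            # so only a change beyond the allowed relative drift is flagged.
            drift = abs(new_value - old_value) / max(abs(old_value), 1)
            if drift > tolerance:
                suspicious.append((field, f"drifted {drift:.1%}"))
        return suspicious

    # Anything flagged here starts a conversation with a stakeholder;
    # it is not an automatic pass/fail verdict.
    print(fields_outside_threshold({"records": 10_000, "split_files": 42},
                                   {"records": 10_150, "split_files": 57}))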

Matthew Heusser (18:20):
So wait a minute. Let me see if I heard you right. You got an assignment from on high to do some kind of analysis with production data, to make sure things look right, for lack of a better term. You could provide more detail, but I think that’s a common assignment in our role.

Michael Larsen (18:36):
Yeah, absolutely.

Matthew Heusser (18:36):
And that would be like, “We have this accounting software we wrote that tells who’s past due, and it’s software as a service.” And we can run it without actually knowing the client IDs or whatever; it’s not proprietary information, they waive permissions when you look at their production data and monitor it to make sure it’s right. You know, weird things, like: typically, in any given day, only 1% of accounts are past due for more than 180 days. You could write guidelines. You could look at it and figure it out; you might have to dive into it. But to write an AI or ML script, you’d either have to hard-code assumptions, which might not be valid and which you couldn’t figure out, or, if you made it true machine learning, you would need a lot more data. It’s just not possible to do accurate ML without huge amounts of data.
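
A minimal sketch of what such a written-down guideline might look like in Python; the 1% expectation comes from the example above, while the tolerance band, the data shape, and the sample values are hypothetical.

    from datetime import date

    EXPECTED_RATE = 0.01   # ~1% of accounts past due over 180 days on a typical day
    ALLOWED_BAND = 0.005   # how far from that before a person should dive in (assumed)

    def needs_human_review(accounts, today):
        """True when the long-past-due rate drifts outside the expected band."""
        overdue = sum(1 for a in accounts if (today - a["due_date"]).days > 180)
        rate = overdue / len(accounts)
        return abs(rate - EXPECTED_RATE) > ALLOWED_BAND

    # The script never decides the data is wrong; it decides that someone
    # should look at it, dive in, and figure it out.
    sample = [{"due_date": date(2022, 1, 1)}, {"due_date": date(2022, 8, 30)}]
    print(needs_human_review(sample, date(2022, 9, 1)))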

Michael Larsen (19:26):
Correct.

Matthew Heusser (19:28):
This is the kind of thing where the most efficient, best use of the risk reduction you’ve asked me to do is for a human being to look at it. I had a colleague at one of the FAANG companies a few years ago confess her frustration to me, because they were analyzing production data as a real-time, or at least daily, performance report of a web-scale application. Management wanted her to automate it. And really, it’s a bunch of graphs; a person needs to look at it, see if it makes sense, dive into it, and figure it out. It didn’t take very long every day, but that was the best use of the time. I think there are still a lot of problems like that in testing. Advocating for that role, educating about that role: I think there’s still work to be done.

Michael Larsen (20:24):
I think that’s a fair statement. Much of this comes down to, and I’ll borrow from my own chapter here (yes, I’m plugging my own chapter), “Trading Money for Time”, which I think is still relevant, especially if you think about inflationary pressures; you’re spending even more money today than you were 10 years ago. What we have to realize is that we can throw money at a problem, but it’s still gonna take time to scale up on it, and still gonna take time to understand how it works. And most automation projects don’t stick around for the long term. Socialtext was a great example of a long-lived automation project that was solid for an extended period of time. But even it ran into its own problems with mortality, because over time the product needed to change.

(21:22):
The product needed to adapt. Core libraries needed to be modified in a way that meant the original scripts, the way they were written, had to be completely reworked. So even that long-lived, well-invested project, one that covered our butts a lot… these things are not eternal. They have to be continuously maintained, and you have to have the expertise and the knowledge on both sides to do that. And at a certain point, it may make sense to say, “We’re gonna completely transition to something else, because we’re gonna be transitioning the entire underlying format of the application. We’ve decided that instead of the core libraries we had written, which made sense 10 years ago, we’re gonna update to a totally different and more modern framework, because of responsiveness, because of accessibility, because of any number of modernized changes.” If you do something that drastic, you’re literally gonna have to reinvent the wheel on everything you automated. It’s not an investment that’s gonna last forever.

Petteri Lyytinen (22:30):
Yeah, if I may chime in: you know how in classic werewolf movies, the hero always saves the day by shooting a silver bullet into the werewolf’s heart? And it works every single time. What a beautiful one-size-fits-all solution. However, software testing is not a fairy tale; it’s not a werewolf movie. Every application is unique, so you can’t just apply a static set of statements and approaches and expect to get a high-quality product every time. The same goes for the changes that happen over time: you could have a perfect solution at the beginning, but six months down the line, what you have is completely worthless because of the changes that have happened in between.

Matthew Heusser (23:17):
Yes, I was gonna say something along similar lines, and that is: automation is an investment. So whatever you’re paying now, when you say, “Oh, I wanna do some tooling”, you’re gonna pay more for the tooling than you were for the human testing, because your software development engineer in test, your SDET, is a programmer. They’re gonna be more expensive than your human testing, 99 times out of a hundred. And then what you want, in theory, is that either the demand on that goes down eventually and it’s just maintenance, or it tightens your feedback loop and you get more information earlier, which was Petteri’s point in his chapter, or the bugs you find are more important, more valuable, and enable you to release earlier. You need a model for how those automations, how the tooling running, is going to provide you value and save you money.

(24:11):
And most of those models have this implicit assumption: “We’re going to be building and developing the software forever.” And the reality is, probably not. Your software’s probably gonna get shut off eventually. You might go into a maintenance mode, where it’s not really getting major updates or fixes. You might move to a craftsmanship mode or an API delivery mode, where you can test one little piece by hand in isolation and deploy it without damaging the entire system. It might become obsolete. Plenty of Fish was a web app, a dating app, and then they released a mobile app; I don’t know if the web app even runs anymore. There are plenty of apps that started out as web apps, moved to mobile, and basically shut the web app off: “We’re gonna be mobile first.” So for any automation that you wrote against the old system, or the system before the rewrite, maybe, if you’re lucky, you can reuse some business logic stuff, and you can say we’re gonna have the same fundamental business flows.

(25:04):
You still have to create an account. You still have to log in. You still have to search. But for the most part, you’re probably only gonna be able to reuse 25% of your work if you’re lucky. So, one company I consulted with, and you may have heard this story before: they brought me back four years later to help with their test automation journey. And I said, “What test automation? What are you talking about?” They were like, “We’re just getting started.” I said, “What are you talking about? You had two people full-time, billing at a large hourly rate, for two years when you built the thing. What happened to that code?” They lost it! They had an entire changeover in management, and the new management didn’t even know it existed. So assume each of those people was billing at a hundred bucks an hour.

(25:52):
At a hundred bucks an hour, that’s roughly $200,000 a year. Times two people, that’s $400,000 a year; times two years, an $800,000 capital investment. Was the two years of testing they got out of it worth the $800,000? I don’t know. So my presupposition is that if you can’t do that kind of math, you should either bring in someone who can, or be a lot more skeptical about claims of improvement from software test tooling. And I think that if you read our book and squinted and turned your head, you could have gotten that out of it, but it wasn’t as clear as it could have been.
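
The back-of-the-envelope arithmetic, spelled out as a sketch; the hourly rate is from the story, and the 2,000 billable hours a year is an assumed round number that makes $100 an hour come out to about $200,000 a year.

    # Back-of-the-envelope cost of the lost automation effort.
    hourly_rate = 100        # dollars per hour, from the story
    hours_per_year = 2_000   # assumed full-time billable hours
    people = 2
    years = 2

    investment = hourly_rate * hours_per_year * people * years
    print(f"${investment:,}")  # -> $800,000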

Michael Larsen (26:30):
<laugh> Yeah, I think in some way… and I know we’ve joked about this a couple of times, where we’ve said: when we titled the book “How to Reduce the Cost of Software Testing”, we, somewhat tongue in cheek, asked, “Are we literally writing ourselves out of jobs?” It proved that wasn’t the case. But what was interesting was that that was the question we felt, or that people in management were telling us, was the important one: software testing costs too much; how can we reduce the cost of it? And then we published the book answering their question, and what happened? It turned out they said, “Well, I think you might have taken us just a little bit too literally there. What we’re really asking is, how do we get the most out of our software testing?” Absolutely fair point. And I think we did a good job with the things we covered in the book.

(27:27):
And yes, had we instead pivoted and said, “We are going to talk to you about LEAN software testing”, I honestly don’t know if it would’ve gotten a lot of attention at that point: oh, it’s just selling another version of testing, yada, yada, yada, ho hum. But the title was kind of provocative. It was a way of saying, “Hey, we’re willing to tell you some things you can do as organizations to get the best bang for your buck for the testing that you are doing.” Is it gonna be the be-all and end-all in all cases? Probably not, but we’re gonna give you some places to have some conversations and start talking about this, and over time that should bring you to, “Okay, cool. Now how can we really maximize this and get the best benefit possible?” I hope that made sense.

Matthew Heusser (28:20):
Yeah. What’s the best bang for your buck in software testing? I think that’s good framing. It’s the 80/20 rule: how do we get 80% of the value for 20% of the effort? That is how to reduce the cost, and I think we took that on in the book. So Petteri, what have you been up to lately?

Petteri Lyytinen (28:39):
I’ve been working for a single client for about five years now. One thing that came out of the book on a personal level: it was published 10, 11 years ago, and after that I started getting calls from headhunters. The demand grew so much that I quit my job and, seven years ago, started my own business. So I’ve been running that ever since. Right now, I’m working on video streaming services, a couple of them actually, testing a Netflix competitor and writing occasional blog posts through this company that I work with.

Matthew Heusser (29:18):
I’m glad to hear it worked out well for you. I’m super excited. And I’m not saying everyone should go out and write a book; I’m not saying that to anybody. What I am saying is that developers can write open source code and put it up on GitHub, so how are we demonstrating our learned expertise? All the authors got a small check; we split up the advance. So I didn’t really get any money off of it. The first royalty money that came in went against the advance, and since then I think I’ve literally received three checks, maybe four, from the book, which might total 50 bucks. But it was a labor of love. I’m really happy that we did it, and happy it’s gone well for you. And I think that, now that… I don’t wanna say we’re at normal, but now that the coronavirus measures have been reduced to the point that we can pop our heads back up and look around, I think it’s time for us to restart this conversation and figure out what’s next.

(30:17):
And one place to do that is on The Testing Show Slack, which we’re restarting, and we’d love to hear from you. It’s thetestingshow (at) qualitestgroup (dot) com. If you want to be a guest, if you want to get involved, if you want to learn more, and if you want to talk about what’s next… you know, when I was going to 12 testing conferences a year, I really felt like I had a finger on the pulse of where testing was at, and it’s just harder today. I’m hoping we can do it online. Hope you’ll join us. And with that, I’m gonna say thanks for being on the show. Let’s talk again soon, and let’s bring some more of those chapter authors on.

Michael Larsen (30:59):
Thanks for having me as always. <laugh>

Petteri Lyytinen (31:02):
Thanks for having me. It’s been a pleasure.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts; we are also available on Spotify. Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and help us shape future shows. Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.
