Testing in the News: CrowdStrike and Lessons for Testers

August 14

Panelists

Matthew Heusser
Michael Larsen
Rachel Kibler
Vikul Gupta
Transcript

Michael Larsen (INTRO):

Hello, and welcome to The Testing Show.

Episode 148.

Testing In The News: CrowdStrike and Lessons For Testers.

This show was recorded on Wednesday, July 24, 2024.

In this episode, Matthew Heusser and Michael Larsen welcome 1-800-CONTACTS Software Test Engineer Rachel Kibler and Qualitest CTO Vikul Gupta to consider the findings and speculations around the recent CrowdStrike update that caused global outages and brought travel to a standstill. We also look at lessons testers can learn to be better prepared for situations like this in the future.

And with that, on with the show.

Matthew Heusser (00:00):
Welcome back to The Testing Show. By the time you hear this and the show's been edited, the CrowdStrike news is probably going to be slowly filtering into the background as some new amazing thing has happened, but it's an opportunity for us to talk about what we can do better, and those don't come along that often. Today we want to dive in with our guests. We have Rachel Kibler, a friend of the show who has been on before and is a former board member of the Association for Software Testing.

Rachel Kibler (00:30):
Hi, it’s good to be here. Thanks for having me again.

Matthew Heusser (00:34):
And Vikul Gupta, EVP and CTO for Qualitest North America. Welcome back, Vikul.

Vikul Gupta (00:40):
Thank you, Matt.

Matthew Heusser (00:41):
And as always we have Michael Larsen. Of course.

Michael Larsen (00:43):
Hey, everybody. Glad to be here, as usual. Once upon a time when we did the podcast many years ago, one of the key things that we did was we focused on what was happening in the news for the first couple of minutes, and then we went on to our main topic. We're revisiting that this time, and of course, CrowdStrike is what a lot of us have been talking about. I think there were a fair number of snap judgments: what happened, who did it, who was to blame? All of that went with this, and we're going to discuss a bit of it today.

Matthew Heusser (01:13):
It's interesting because a lot of people are saying, "Oh no! We can't. Let's not jump to conclusions. Let's wait for the after-action report." We've seen this before. I think we did a podcast on Knight Capital, which was a high-frequency trading firm on Wall Street that lost $400 million in an hour and doesn't exist anymore. It took a while, I mean, a year, a year and a half, before the Securities and Exchange Commission released its report. The failure seemed very similar, and we actually know a lot about that one, so I thought I'd jump into it: talk about Knight Capital, talk about CrowdStrike for a minute, talk about similarities, and see where we go from there. So it was a complex system failure, and there were different components that each failed. If any one of them had gone right, it wouldn't have happened. Each of those pieces arguably was correct according to some requirement that should have been pushed back on and wasn't.

(02:05):
For example, it was trading software, and in the production code there was also test code. They were intermingled. The test code only ran when a certain feature flag, called Power Peg, was turned on, so they just turned the feature flag off in production and everything was good. Well, they wanted to write a new feature, and for the new feature they reused the flag. So: we're going to rip out the old code, we're going to put new code in this production code, good, we're going to get rid of that test code running in production, then we turn the flag on in production, and everything's going to be okay. They should have used a different feature flag; that would've fixed it. They reused the feature flag, but that still would've been okay, because they ripped out all the old code. Then they deployed it.

(02:49):
It had eight deploy points; it was deployed to eight different production nodes. They only deployed it to seven of them, so they deployed the seven, turned the flag on, and then the eighth one ran the test code. What did the test code do? At the highest possible frequency, it bought high and sold low, and it used the exact same protocols. It sent the same little TCP/IP packets that you needed to do the work all the way through, because they wanted to test the system end to end, so it had full validation. It took 'em an hour to figure out what was wrong and turn the system off, and you can kind of understand that: in high-volume trading, every second we turn the system off, we're down, and it costs us so much money. So an hour seems like a reasonable response time, but the systems were so integrated that that hour cost 'em that much money.
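To make the hazard concrete, here is a minimal sketch of the kind of flag reuse being described, with entirely hypothetical names; it is an illustration of the failure pattern, not Knight Capital's actual code. The repurposed flag means two different builds interpret the same switch in two different ways.

```python
# A minimal sketch of the feature-flag hazard described above, with hypothetical
# names. The point: a flag that once guarded test-only code must never be
# repurposed while any node still runs the old binary.

FLAGS = {"power_peg": False}  # legacy flag, off in production for years


def route_order(order: str, build_has_new_code: bool):
    """Old builds interpret the flag as 'run test harness'; new builds as 'new feature'."""
    if FLAGS["power_peg"]:
        if build_has_new_code:
            return handle_with_new_feature(order)   # what the team intended
        return run_power_peg_test_harness(order)    # what the un-upgraded eighth node did
    return f"routed {order} normally"


def handle_with_new_feature(order: str) -> str:
    return f"routed {order} via new feature"


def run_power_peg_test_harness(order: str) -> str:
    # Simulates the disaster: the old test harness trades at full speed in production.
    return f"TEST HARNESS trading {order} live!"


if __name__ == "__main__":
    FLAGS["power_peg"] = True  # flag flipped on for the new feature
    # Seven upgraded nodes behave; the one node still on the old build does not.
    print(route_order("ORD-1", build_has_new_code=True))
    print(route_order("ORD-1", build_has_new_code=False))
```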

(03:34):
Now come to CrowdStrike, and you have a system that is only on 1% of the world's computers, but there's a much higher likelihood it's going to be on mission-critical systems. CrowdStrike is on hospital systems, ER systems, 911 systems, and aviation; all four of the major US air carriers were out of commission. Every second they were down, they lost however much money. For the air carriers, if they go more than an hour or so, then your pilots fall off crew rotation. They've had more than eight hours at work, they can't fly, so you have to cancel for the night and start the next morning. When you start the next morning, you've got all the people who didn't fly and all the people who were supposed to fly. It can take you 3, 4, or 5 days to clean out the system. It's a mess, and people didn't get where they were supposed to go.

(04:22):
There's some reporting that because of the hospital failures, people could have died through this. I don't have names, but it seems to make logical sense. Again, it's a cascading system failure, and we don't know as much about it. That's why I talked about Knight Capital first. We do know that this was code that, because it's antivirus software, hooks into the main kernel of the operating system. It kind of has to, arguably, because otherwise any virus that sneaks in couldn't be removed; the software has to have the highest possible permission. It can get rid of any virus, and if the virus comes back, it gets rid of it again; the virus can't override it and get rid of the antivirus. The way those usually work is there's code that's tested, and it runs every time. If the virus reinfects on boot-up, it's going to get it.

(05:05):
Well, of course, we got a crash on boot-up. There's the code itself, and there are the data updates. So the arguable theory is: we're going to test the code with this sample data, and the data's going to be fine. It looks like nobody tested, "What if a data update is a zero-length file that causes a crash, and the crash is in the kernel?" It would happen again on boot-up, and again, and again. That's what we know so far. It seems like we are having more of these interconnected systems that affect the entire global economy, and it's happening more often. So the question is, what can we learn from this? Was that a good summary? Did I miss anything, Vikul?
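As an illustration of the kind of check being discussed, here is a minimal, hypothetical sketch of a loader that refuses an empty or malformed content file before it ever reaches kernel-mode code. The file format, magic header, and function names are invented for the example; they are not CrowdStrike's.

```python
# A toy, hypothetical content-file check: refuse to load a data update that is
# empty or structurally invalid, rather than letting kernel-mode code discover
# that the hard way.

from pathlib import Path


class InvalidContentFile(Exception):
    pass


def validate_content_file(path: Path, magic: bytes = b"CSDAT") -> bytes:
    data = path.read_bytes()
    if len(data) == 0:
        raise InvalidContentFile(f"{path} is zero bytes; refusing to load")
    if not data.startswith(magic):
        raise InvalidContentFile(f"{path} has an unexpected header; refusing to load")
    return data
```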

Vikul Gupta (05:41):
No, I think you covered it. Just mention the content validator part. Reuters has come out with an analysis that says there were issues in their quality process itself. Maybe tie to that, and then get to talking about mistakes. This is a mistake that happened in the quality process, but then, the people who are doing quality, what mistakes do they make? I was reading an article in Reuters where they were saying that this outage happened because of the CrowdStrike file content sensor. Initial analysis says that this was a bug in the content validator system. They had two templates that passed validation in spite of carrying problematic data. This is a classic example where the organization had a quality process, but there were issues with it.

Matthew Heusser (06:30):
They had a content validator, which means that for all the data files that come in, they're going to check 'em, but nobody thought to ask, "What happens if a data file has zero length?" Just nobody thought of it. They had automation, they had tooling, they had code, they had SDETs, they had all the smart things, but nobody thought of this test scenario. Some large percentage of the time, when you do your retrospective, we all go, "Oh, gee, I didn't think of it. Why didn't anyone think of this?" Well, nobody did. I know that you're doing a talk on new tester training, things to make sure you cover, and mistakes. It's the same root cause, right? You're in a different space, but it's the same root cause.

Vikul Gupta (07:08):
Well, you are absolutely right, Matt. Even in this case, they had the content validation system and the processes, and in a typical testing world, when we create test cases as testers, we create these positive, happy-path scenarios and then negative scenarios. I think, Matt, that's what you're referring to. If their data file was corrupted, or was wrong, or was empty, or whatever, there should have been a test case to catch that. That's called a negative test case. So Rachel, your view?
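A sketch of what such negative test cases might look like in pytest, exercising a toy validator like the one above (inlined here so the file runs on its own). The inputs mirror the scenarios mentioned: an empty file, a file of null bytes, a corrupted header. Everything is illustrative; none of the names come from CrowdStrike's actual pipeline.

```python
# Hypothetical negative tests: the happy path plus the inputs nobody thought
# to try (empty file, all null bytes, wrong header).

from pathlib import Path

import pytest


class InvalidContentFile(Exception):
    pass


def validate_content_file(path: Path, magic: bytes = b"CSDAT") -> bytes:
    # Same toy validator as the earlier sketch, inlined so this file runs alone.
    data = path.read_bytes()
    if len(data) == 0 or not data.startswith(magic):
        raise InvalidContentFile(f"refusing to load {path}")
    return data


@pytest.mark.parametrize(
    "content, should_pass",
    [
        (b"CSDAT\x01\x02rules...", True),   # well-formed update
        (b"", False),                        # zero-length file, the case discussed here
        (b"\x00" * 1024, False),             # all null bytes
        (b"GARBAGEheader", False),           # wrong magic header
    ],
)
def test_content_file_validation(tmp_path, content, should_pass):
    f = tmp_path / "channel.dat"
    f.write_bytes(content)
    if should_pass:
        assert validate_content_file(f) == content
    else:
        with pytest.raises(InvalidContentFile):
            validate_content_file(f)
```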

Rachel Kibler (07:36):
Yeah, there are some classic failures here, but it was also a team failure. I think one of the things that can mitigate this kind of risk is having a pre-mortem instead of a retrospective, where we talk about what can go wrong before we start. We use that nightmare headline practice that Elisabeth Hendrickson talks about: what can actually go wrong? What's the worst thing that could happen? Oh, well, we could take down four major airlines and all of the emergency services and hospitals and things like that. That could have been caught in a nightmare headline scenario. Well, then, how do we get there? And then some tester might think, "Oh, well, we haven't tried this." Other ways a tester can go wrong are getting too focused on the positive or too focused on the negative, and having too much of a plan before going in and not thinking outside the box once they get in and get their hands on the code.

Michael Larsen (08:35):
Something I read yesterday struck me as interesting. Now, with a lot of this CrowdStrike stuff, there's been a lot of pontification, especially from us on the tester side of things. I understand that we often find ourselves in this situation where we are frustrated. We feel that a lot of times the tester's value is either downplayed or, "Oh, automated tests will catch this," or, "We'll have AI come in and take care of this." I just want to share this, and I want to give proper credit, because it's something that a friend of the show who has been here before, Curtis Sternberg, wrote. He was pointing out that while everybody's rushing out going, "Oh, their QA process may have been faulty, or something may have been wrong here, or maybe they didn't have the best people," that does not seem to be the case.

(09:21):
It seems that they did have very skilled people on the team working on a lot of these things, but this was what I found interesting. He said that CrowdStrike is a DevOps shop; it prioritized rapid product delivery. The point was: how quickly can we push out a fix? How quickly can we make it so that we can be up and running? I realize that this is speculation, but I thought it was interesting to say that part of the challenge here is that it's not just a matter of making sure we have good quality practices; speed also plays into this. If your goal is to make sure that you can rapidly fix something, that you can very quickly patch a system, or that if a threat has been discovered, you can immediately get that information out there...

(10:12):
If that is your priority and you are saying, "We want to immediately solve this problem or we want to immediately fix it," are you then able to step back and ask, "Okay, what happens if a zero-length file is put in here and it crashes the kernel?" Was that anticipated in this model of "let's quickly put this out there"? And again, when I say quickly put this out there, I don't mean let's slapdash throw this out. I mean: can we very rapidly solve this problem and fix it so that we can get people up and running as fast as humanly possible? That is a good goal, do not get me wrong, but what I'm saying here is that by over-prioritizing the quickness, they may have left themselves open to a lack of imagination, or a thought of, "Well, nobody's going to do that in a real environment," until somebody does. And let's face it, just as Rachel was saying with "What is the most terrible news that could happen to us?", I think honestly every organization goes through that, possibly monthly, in that some terrible thing was caught at a reasonable point in time because somebody did step back and say, "Hey, what if we do this? What if we do that?"

Matthew Heusser (11:33):
It’s usually caught. You might have these terrible things happen. They just don’t get public.

Rachel Kibler (11:37):
Yeah, exactly.

Vikul Gupta (11:38):
One of the things, Michael: I do agree with you. Many companies try to prioritize speed over quality. I think it's a balance: you want to release quickly without compromising quality. I've spoken in previous podcasts with you guys about how digital quality is all the more important because you're directly interacting with the end users, and at this stage, users have a lot of options. If not this, then something else. I look at my 21-year-old kid or an 18-year-old kid; they're installing and deleting apps like this. Think of ourselves: we stick to an app, stick to a bank, stick to the same barber for ages. We don't like change, but this generation has too many options. We often recommend to key organizations that they balance speed without compromising quality. Test prioritization is a strategy. Now, one of the common strategies, Michael, which relates to what you were saying, is change-impact-based testing.

(12:36):
You prioritize those tests that are impacted because of the code change. That is a great strategy if the developer or the ecosystem has a clear idea of what the change is. I remember my time as a developer; if someone asked me, I'd say, "I have only changed these files. I'm only focusing on the code." But this is not about code. As we all know now, it's about a data configuration file. It's a typical case of "It works in my environment; I don't know why it doesn't work in yours." Now there is technology, but that technology has to have mature processes around it. I don't know enough about CrowdStrike to say what caused it, but it seems like it was a data configuration file issue that bypassed a lot of those validations. One of the causes could be prioritization: they had 8,000 test cases and said, "Oh, we'll run only these 80 test cases, and this is good enough." In a normal scenario, it wouldn't have mattered, but in this scenario it did, so how you select the test set from a prioritization standpoint is very critical.
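Here is a toy sketch of change-impact-based test selection as Vikul describes it, with a made-up coverage map. The pitfall he points out shows up at the end: a change to a data or configuration file that is not in the map selects no tests at all unless the tooling falls back to a broader run.

```python
# A minimal sketch of change-impact-based test selection with a hypothetical
# mapping from source files to the tests that cover them.

COVERAGE_MAP = {
    "src/orders.py": {"tests/test_orders.py", "tests/test_checkout.py"},
    "src/pricing.py": {"tests/test_pricing.py"},
}


def select_tests(changed_files):
    selected, unmapped = set(), []
    for path in changed_files:
        if path in COVERAGE_MAP:
            selected |= COVERAGE_MAP[path]
        else:
            unmapped.append(path)
    # Safer default: anything we can't map (e.g. a data or config file)
    # should trigger a broader run, not a silent skip.
    if unmapped:
        selected.add("tests/full_regression_suite")
    return selected, unmapped


if __name__ == "__main__":
    print(select_tests(["src/pricing.py"]))            # maps cleanly to one test file
    print(select_tests(["config/channel_291.dat"]))    # unmapped data file -> broad run
```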

Matthew Heusser (13:38):
To Michael's point: if only it didn't crash on boot-up... You're ready to go, you're going to put the fix in there, but you can't just install the update, because it crashes on boot-up. Let's boot it again, but it crashes on boot-up. You never get to the point in the loading sequence where it could have updated itself. If it hadn't had that, Delta might've been down for 12 minutes and American Airlines for 15, and it's not the end of the world. I totally agree: the problem is a lack of creativity. We didn't think, and sometimes it's systemic, sometimes it's just "do your job, test these requirements." As a system, there's a systemic risk here. We need to be more creative. We need to think outside the box. How do we encourage that? What does that look like?
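A toy simulation of the boot loop Matt describes, with hypothetical names: the crashing content file is loaded earlier in startup than the point where a corrected update could be fetched, so every reboot fails in the same place.

```python
# A toy simulation of why a crash-on-boot defect blocks its own fix: the bad
# content file is loaded before the self-update step ever runs.

class KernelPanic(Exception):
    pass


def load_content_file(data: bytes) -> None:
    if len(data) == 0:
        raise KernelPanic("driver dereferenced data from a zero-length file")


def boot(installed_content: bytes, fixed_content_on_server: bytes) -> bytes:
    # Step 1: the early-boot driver loads whatever content is already on disk.
    load_content_file(installed_content)
    # Step 2: only a machine that survives step 1 ever reaches the updater.
    return fixed_content_on_server


if __name__ == "__main__":
    bad, good = b"", b"CSDAT fixed rules"
    for attempt in range(1, 4):
        try:
            bad = boot(bad, good)  # would replace the bad file, if we got that far
            print(f"boot {attempt}: ok")
        except KernelPanic as e:
            print(f"boot {attempt}: crashed before self-update could run ({e})")
```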

Rachel Kibler (14:17):
A lot of it is experience. It's bouncing ideas off of someone else. We talk about developers having rubber ducks; testers can do this too. Have a rubber duck that you explain your test ideas to, or better, a live person that you can bounce ideas off of. Having an idea of what you want to test before you go into testing will help, but taking notes during your testing can also help, because it can generate more test ideas. When we get stuck doing the same thing over and over again, then we may as well automate it and move on. If we're going to be the robot, we should let a computer do it. If we rely on our automated tests, then do our own exploratory testing and take different paths through the software, we build our chops at actually thinking more creatively.

Michael Larsen (15:09):
I love that phrase of having a rubber duck, the idea of talking things out with a person, if you have the benefit of that. If you're working by yourself and you don't really have that opportunity, then yes, in this current day and age with large language models, AI can be your rubber duck, in a manner of speaking, provided that it understands or is up to date with the information you want to work with. I've developed course material around being able to say, "Hey, let's see what can be generated. Let's see if we can make an additive structure." That would've taken me forever, to be frank, and I think that's what a lot of people are looking at using AI tools for. It's not so much to say, "Here, do this for me," because if I ask AI to do something for me and I don't verify it or even test it, I could find myself in a world of hurt. Very often that has happened to people. Matt and I joke about this, and when I say joke, I mean Matt does some work right now with college students in regard to their writing about technology and programming, and teaching...

Matthew Heusser (16:21):
College course at night.

Michael Larsen (16:21):
Yeah, okay, there you go. Want to make sure I got the context right there. Matt and I have actually had conversations where he's come back to me and said, "What do you think? Is this person completely leaning on AI to say this? Because I'm reading this..." and I want to be fair. For some of them, English is not their first language, but something is telling me that this reeks of AI, that they're using AI to write the text itself. We'll go through it, so Matt's rubber-ducking with me in this regard, and as I'm reading it sometimes, I've said, "Do you ever notice, if you've ever eaten a burrito from a great place and you accidentally bite into tinfoil, that reaction you get?" You have the taste and then that shock of biting into tinfoil. That's my visceral reaction when I'm reading text that comes across to me as "this is AI," and I say, "Well, I can't a hundred percent guarantee that this person is using AI to generate their answer, but it sure tastes like it."

(17:18):
If Matt's okay with me sharing this, this was my comment. I understand if you can't necessarily hit somebody and say, "You're using AI and I'm going to mark you down for that," but what you can say is, "I don't know if you used AI for this or not, but if you did, I'm sorry, because the writing is terrible, and if you didn't, well, I'm doubly sorry, because the writing is terrible." When you hit somebody in their vanity, "I'm sorry, I'm not giving you full points, not because I think you cheated or because I think you went out to use AI, but because it's giving me an answer that is horribly unsatisfactory," that can have an interesting effect on what people do.

Matthew Heusser (18:02):
Part of that is because a big part of who I am is a writer, and the writing is weak on various levels; it lacks human experience. It uses a lot of third-person passive voice. It writes in a sing-song voice. It uses adverbs and adjectives in a particular way. There are things you can use to identify whether AI wrote it or a human. It doesn't really answer the question; it kind of dances around the question in a way that a person wouldn't, which brings us back to using AI for your test ideas. The way the large language models work is that they are sort of the geometric average of what someone is most likely to say. What we call hallucinations typically look like this: you ask it about someone it doesn't know about, you say, "Give me a bio for this person" it has never heard of...

(18:46):
They'll make something up, things like that. So if we're using it as a rubber duck for test ideas, I think it can come up with the things we should have known, the things everybody knows, we just didn't think of. That's sort of sweeping the floor of basic test ideas, and we're talking about ChatGPT, not any advanced testing tool. There are some that have been trained a little bit on common failure modes that do a better job, but if we're just using the basic tools, I think we can get those "Oh, of course, I should have thought of it, I just didn't" ideas. We could go through the huge output that it produces, because it's not going to have context, so it's going to have ideas that don't make any sense. You could pick off 3, 4, or 5 things you didn't think of, add them to your best ideas to run, and do better testing.

Rachel Kibler (19:27):
It's the same as having a test plan and then sticking to it solely. We get stuck in anchoring bias, so we need to go into ChatGPT with our own ideas first and then ask it our questions: "What are some test ideas here?" We know this. If we just rely solely on AI, we're going to do a bad job of testing. We need to have our creativity and our own thinking before we anchor ourselves to what ChatGPT says.
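As a small sketch of that discipline, here is a hypothetical prompt builder that refuses to run until you have written down your own test ideas, and then asks the model only for risks beyond them. The output can be pasted into whatever LLM you use, so no particular API is assumed.

```python
# A sketch of "your ideas first": build the rubber-duck prompt only after you
# have recorded your own test ideas, and ask only for ideas you missed.

def build_rubber_duck_prompt(feature_description: str, my_ideas: list[str]) -> str:
    if len(my_ideas) < 3:
        raise ValueError("Write down at least three of your own test ideas first.")
    mine = "\n".join(f"- {idea}" for idea in my_ideas)
    return (
        f"I am testing: {feature_description}\n\n"
        f"I have already planned these tests:\n{mine}\n\n"
        "List additional risks or test ideas I have NOT covered, especially "
        "negative cases, malformed inputs, and failure modes on startup or update."
    )


if __name__ == "__main__":
    print(build_rubber_duck_prompt(
        "an agent that downloads and applies content/config updates at boot",
        ["happy-path update applies", "update rolls back on bad signature",
         "update server unreachable"],
    ))
```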

Vikul Gupta (19:57):
My view on AI, and gen AI in general, is about how they intersect with testing. Matt and Michael, we talked about this with Alicia in our last podcast. I don't see AI as a replacement for experience as a tester, the tribal knowledge we have from testing similar applications. I like how Matt said it, right? It's an average. Now, if we are clear on that, there are various ways we can use the technology to our advantage. I love, Michael, when you picked up on the rubber duck. Yeah, it is a rubber duck. We use it to help validate ideas and bring in a lot more thought. Within Qualitest, one of the things we are doing is providing this as a tool to increase productivity, not to replace testers. We've created a platform called QualiGen, generative AI for Qualitest customers, where we use AI to validate the requirements: is this requirement ambiguous?

(20:58):
That is something technology can do a lot better using NLP and other things. Look at 50,000 requirements and see if there's a conflict: in one, you're saying JDK version 4.2, and in another, you're saying 6.3. Conflicting. So if you use it just for that, it works. Then feed those requirements into gen AI to come up with scenarios, but then take a pause; don't let it go further. Have an SME review those scenarios. What have we done? Rather than starting from scratch, we've given him a 70% head start. Now, instead of spending five days, he's spending five hours looking at these scenarios, building on top, refining them, improving them, and then he goes back to the gen AI engine and says, "Auto-create the test cases, the manual test cases." Again: stop. Again: review. Then you can create feature files. From feature files, you can create Selenium scripts, Playwright scripts, and so on.
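This is not QualiGen; just a toy illustration of the kind of cross-requirement consistency check Vikul describes, using a simple regular expression in place of NLP, over a couple of invented requirements.

```python
# A toy cross-requirement consistency check: flag components mentioned with
# conflicting versions across a set of requirements.

import re
from collections import defaultdict

VERSION_PATTERN = re.compile(r"\b(JDK)\s*(?:version\s*)?(\d+(?:\.\d+)*)", re.IGNORECASE)


def find_version_conflicts(requirements: dict[str, str]) -> dict[str, set[str]]:
    """Return components mentioned with more than one version across requirements."""
    seen = defaultdict(set)
    for req_id, text in requirements.items():
        for component, version in VERSION_PATTERN.findall(text):
            seen[component.upper()].add(version)
    return {comp: versions for comp, versions in seen.items() if len(versions) > 1}


if __name__ == "__main__":
    reqs = {
        "REQ-0012": "The batch service shall run on JDK version 4.2.",
        "REQ-4711": "All services are built and deployed against JDK 6.3.",
    }
    print(find_version_conflicts(reqs))  # {'JDK': {'4.2', '6.3'}}
```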

(21:55):
You have to use AI responsibly, for the right area. If you're thinking it's mature enough to do autonomous automation: absolutely not, and I think that's the point, Michael, that you were making, or Rachel, that you were talking about. I do see it as a rubber duck. I do see it as a help. It also lets us take what we did well in a project two years back and learn from it, in the same engine. Not only is it leveraging gen AI, but we are also supplementing it with our tribal knowledge. It's an average, like Matt said, but we can make it better than average by sharing our tribal knowledge with that engine and getting it to do better.

Michael Larsen (22:37):
That's an interesting comment, and I'd like to circle back to Rachel if I can, because this is part of why we asked Rachel to be part of the show. There is this tribal knowledge that we have developed. People come into software testing from many different angles. The most famous phrase, one we've quoted more times than not, is, "I just fell into software testing." It was the case with me. Over the years, because of challenges in the work environment, I've gone off to do different jobs that inevitably pulled me back into software testing, so I was able to carry over a lot of that tribal knowledge and also bring over knowledge from other industries, if you will. I did a stint as an application engineer for a bit. I did a stint in technical support for a little while and did a lot of customer-service-type things.

(23:25):
Those can all inform that branch of tribal knowledge that you need to work with and oftentimes you will find that your testing ideas don’t just come from within the testing community itself, but a lot of them do. In this case, to circle back, what are some of the things that you think that either newer testers or even veteran testers can benefit from? Stepping back and saying, “Maybe there are some mistakes that we make because of biases, because of lack of time, because of prioritizing speed, because of not really thinking about risk. What can we do? How can we better prepare ourselves or situate ourselves to take advantage of that?”

Rachel Kibler (24:09):
We are better as a community. We're better as a team that cares about quality. We're better as a community of testers. Using that is really vital. Get your delivery team together before the code is written to talk about risk, to analyze risk. I like doing risk-storming every once in a while with my teams to talk about the biggest risks and how we can mitigate them, but also, go to conferences. It engages you with an entire community, and the hallway track at a conference, where you just exchange ideas, is invaluable. Find those spaces online where you can engage with people; there are many great communities that share ideas and share experiences. Having your own lived experience is absolutely valuable. We all have varied backgrounds; I come from music and math and law, because why not? But also being able to experience other people's stories and learn from them. We're better together.

Michael Larsen (25:13):
I like that.

Vikul Gupta (25:15):
I'll come at it from a technology angle. Some of the mistakes I see new testers and sometimes even veteran testers make are... I'll give an example: automation. They're very focused on how much automation coverage they have, but in that process, they lose sight of whether they've selected the right framework: will maintenance be easy, are self-healing capabilities there or not? Many times I'll go to a customer and ask, "What is your automation coverage?" They'll very proudly say, "80%, 90%, sometimes close to 100%," and the moment I ask, "How much of this automation was actually leveraged in your last cycle?" the number is like 27%, 37%, 48%. Why this disconnect? "Oh, it's very costly and time-consuming to update the automation." "So what do you do with that?" "Oh, we go back to manual." I think we should keep in mind selecting the right tool and the right framework, so that if things change, which they will because you're releasing every two weeks, every four weeks, it is easy for you to update your automation and use it. From a concept perspective,

(26:20):
I think most of the companies I've worked with do "N-1" automation, which is automating the previous release. A lot of mature companies are moving towards a concept called in-sprint automation, and it's not just a technology thing, it's a process thing. You have to estimate it right. You have to keep your requirement payload a little bit smaller so that you give time for the automation to be developed and executed. The other thing I'm seeing is that at a lot of companies, irrespective of which vertical you're talking about, whether it's banking, retail, insurance, etc., a lot of applications are not just traditional standalone digital or cloud applications. They are AI-infused applications; there's AI in that application. Now, if there is AI in that application, from a testing standpoint we have to evolve. How do you test a normal application versus an AI-infused application?

(27:14):
I normally give an example, Michael. If you go to Amazon and search for something, the moment you open up the page, it'll give a recommendation. If you go at six at night, it'll have a different recommendation, so an AI-infused application has non-deterministic output. What does that mean? In a traditional application, you put in 70 and you're supposed to get 70; no matter when you put it in, you'll get 70. But in an AI-infused application, the output could be different. So how do you plan for that? With AI comes this whole concept of bias. I heard one of you use the word bias, so how do you test for bias? The classic example is when the Apple Card came out: one of the founders of Apple applied along with his wife. He got approved, but his wife got rejected. They share the same finances! That approval process was biased against women. How do you test for that? Those are concepts that today's testers have to understand, learn, and start utilizing as part of their testing strategy. Technology is evolving. We need to keep pace with it.
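To make both ideas concrete, here is a toy sketch with made-up data: for a non-deterministic recommender, assert properties of the output rather than an exact value; for bias, compare approval rates across groups and flag a gap beyond a tolerance (a very rough demographic-parity style check). None of this reflects any real system's data.

```python
# (1) Property checks for a non-deterministic recommender; (2) a rough
# approval-rate comparison across groups as a starting point for bias testing.

def check_recommendations(recs: list[str], catalog: set[str], k: int = 5) -> None:
    assert len(recs) == k, "must always return k items"
    assert len(set(recs)) == k, "no duplicates"
    assert all(item in catalog for item in recs), "only items that exist"


def approval_rate_gap(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group, approved). Returns the max difference in approval rate."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


if __name__ == "__main__":
    check_recommendations(["a", "b", "c", "d", "e"],
                          catalog={"a", "b", "c", "d", "e", "f"})
    gap = approval_rate_gap([("men", True), ("men", True),
                             ("women", False), ("women", True)])
    print(f"approval-rate gap across groups: {gap:.0%}")
    # In a real test you would assert gap <= some agreed tolerance.
```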

Michael Larsen (28:24):
I fully appreciate that. Now, I realize, of course, we are running up on our time limit here. First off, thanks to everybody for coming out and participating with us today. I should also mention that three of us on this call are going to be at the same event in October, so we're going to spend just a little bit of time mentioning that Matt, myself, and Rachel are speakers and workshop presenters at the Pacific Northwest Software Quality Conference, PNSQC, this October. So, Rachel, this is our chance to let you plug the two things that you are presenting there. Go for it.

Rachel Kibler (29:05):
I'm really excited about being at PNSQC. I'm doing a workshop for beginners on using AI in software testing. I'm really excited about that workshop. I get to do it with one of my favorite people, Carl Kibler. We share the same last name, and we're going to talk about how to use AI effectively in your testing and go into some of the stuff that we've talked about today. And then I'm also giving a talk on mistakes that I've made in training new software testers. They have been... many, so it'll be a vulnerable talk, but hopefully you can come away with some ideas on how to train new testers and not make the same mistakes. I want people to make their own mistakes, unique mistakes, not the ones that I make.

Matthew Heusser (29:54):
Get the ground-floor mistakes out of the way so we can get to the interesting ones.

Rachel Kibler (29:57):
Exactly (laughter)-

Matthew Heusser (29:59):
Michael and I are doing a half-day Lean Software Testing course. It really should be hitting the highlights, so it should be, I think, pretty powerful, and we're doing a talk on "Hip or Hype in AI Testing," which, at this point, I think we can all agree is at least a little bit of both, right?

Rachel Kibler (30:14):
Absolutely (laughter).

Michael Larsen:
So we are excited about that happening. That's probably the big thing that all of us are focused on going forward, at least the three of us who are on this call today. Vikul, of course, is focused on the goings-on at Qualitest and the offerings they have, and of course, you can always check out their site, which graciously hosts this podcast.

Matthew Heusser (30:36):
Vikul, is there anything new and exciting in your corner of the world you want to make sure we get on the air?

Vikul Gupta (30:41):
I think we're seeing a lot of traction on... I do agree with Michael about releasing fast. Most of the companies we are working with want to release faster. We have to find ways to release faster without compromising quality, and that's what we are focusing on. We are looking at adopting gen AI, not to replace testers, but to help us test faster.

Michael Larsen (31:07):
Alright, well hey, thank you so much everybody for participating with us today. Rachel, Matt, Vikul, thank you so much for being part of the show as always, and to our audience, thank you very much for joining us and we will see you next time. Take care everybody.

Vikul Gupta (31:22):
Absolutely.

Matthew Heusser (31:23):
Bye

Vikul Gupta (31:23):
Bye-Bye.

Michael Larsen (OUTRO):
That concludes this episode of the Testing Show.

We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts, YouTube Music, and Spotify.

Those ratings and reviews help raise the visibility of the show and let more people find us.

Matt and Michael have written a book that distills many years of this podcast along with our decades of experience, and you can get a copy for yourself. "Software Testing Strategies: A Testing Guide for the 2020s" is published by Packt and is available from Packt directly, from Amazon, Barnes & Noble, and many other online booksellers.

Also, we want to invite you to come join us on The Testing Show Slack channel as a way to communicate about the show.

Talk to us about what you like and what you'd like to hear. Also, to help us shape future shows, please email us at [email protected] and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Thanks for listening.
