The Testing Show: Record and Playback


Record and playback tools have been available for the past few decades, with varying results and reputations; they often overpromise and underdeliver. Today, Matthew Heusser and Michael Larsen are joined by Rex Feizi of Subject7 and Leandro Melendez (Señor Performo) to talk about the latest generation of record and playback tools, how to leverage them effectively in the spaces where they are intended (as well as explore other areas where they can be beneficial), and to get some good advice about how to make the best use of these tools both now and into the future.

 


Transcript:

Michael Larsen:

Hello, everybody, and welcome to The Testing Show. If all goes well, this show is going to air in September. I say "if all goes well" because, right now, California is on fire! And I am hoping that that doesn’t mean that I end up getting evacuated or something, uh, fingers crossed. But if all goes well, this show should be coming out in the middle of September. We are glad to be speaking with you. I’m Michael Larsen. I am your show producer. Today, we would like to welcome our guests. First off, we’re going to welcome first-time guest Mr. Rex Feizi… Did I pronounce that right?

 

Rex Feizi:

Hi. Yes, that’s correct.

 

Michael Larsen:

Right on. We also have a recurring guest who’s coming back to us and that is Leandro Melendez, Leandro Melendez.

 

Leandro Melendez:

Hola, amigos. Yeah. Yeah. Very well said. Thank you very much. I'm very happy to be back; every time, I’m delighted to be able to contribute.

 

Michael Larsen:

Thank you, Leandro. And of course we have our MC, Matthew Heusser. Matt, shall we get the show on the road?

 

Matthew Heusser:

Thanks, Michael. Yeah, sounds great. So just to frame it up: we talk about a lot of things on this show, but we focus on the doing of testing. We’ve been talking about record/playback tools a lot recently. I think I mentioned Subject7 before. Rex is the CTO of Subject7, which is a small tool vendor, and we're talking about the place for record/playback. I want to start with an assertion and see if you agree or disagree. There’s a fundamental disconnect, generally, between test and development. Generally, test can only see the user interface and is focused on unexpected combinations in that user interface, where many of the interesting bugs are found, including security bugs.

 

Rex Feizi:

Correct.

 

Matthew Heusser:

And on the other hand, you have programmers who write code at the unit level, and the focus of the conversations around testing for programmers is unit testing. What we typically see is this tension between testing the front end like a user and testing something more programmatically, probably at a lower, more component level. That tension creates conflict, where people have to figure out whether they’re more of a programmer or tester, or more of a user-interface tester. And when you throw in test automation, the conflict becomes a lot of different ideas being thrown out there. What’s right? I don’t know. As a context-driven person, it probably depends on the environment I’m working in: what’s best for us, right now, in this environment? That’s the premise I want to put on this conversation… Before we go there, is that premise correct?

 

Rex Feizi:

Absolutely. Absolutely. I’ve seen it in many projects and instances over the years; all of what you summarized is very accurate.

 

Leandro Melendez:

I would add to that as well. It’s a pattern that happens… still happens in the industry: these lines between who creates the automation, whether it’s at the unit level, at the back-end service level, at the front end through the UI, or even manual. Who works on each one of those? It’s definitely still happening, and most of the industry has been trying to push against it. As you were mentioning, a developer might say, "I won’t go as far as doing something on the UI," but this is shared ownership that should start to change, where many people on our teams need to be able to create automation, code, and tests, and put everything together at every level. Still, as you mentioned, the front end is a little bit far from some of the folks creating all the gears and bolts and nuts. It’s something that is still a bit divided, but it starts to merge a little with the latest technologies, where even the front end has a lot of code, and a lot of automation is needed just to test and verify at the front-end level. So it’s changing, but it's very much like you described it.

 

Matthew Heusser:

So you see a merger happening, where the difference between a back-end person and a front-end person, the difference between working the user interface and writing code to manipulate the subsystems… you see those coming together?

 

Leandro Melendez:

Yeah. Especially since many of the applications and the front ends… let’s call them SDKs, or environments and frameworks where you put everything together from a prebuilt set of commands. All of these have lots of code, lots of JavaScript, lots of CSS, and things that should be…

 

Rex Feizi:

You mean like Angular?

 

Leandro Melendez:

Angular! Thank you very much.

 

Michael Larsen:

You've got things like Angular and React.

 

Leandro Melendez:

I was going to say… sure, but yeah, all these front-end frameworks that are coming up, as you mentioned, Angular, React, and some others, push so much to the front end lately that some front-end applications have been getting thicker and heavier; they require more resources and especially more code. And when you need to start testing them for behaviors, for the user flows, for anything you need, the work starts to get to be a little bit at the code level. So I see some sort of integration there. Many teams that I have worked with start to feel the pain of still having that line, that silo: "I’m only a front-end developer, I’m a designer, and all I’m going to do is make it look pretty." Instead, you almost certainly need someone with good knowledge of code, loops, and (a pain point for me) performance. It is better to talk to each other and remove that line, that division, that silo, so a developer can go in and check the JavaScript, the front-end code, whether there's a communication, a call to a service, that is probably causing the problem that a functional automation at the front-end level is detecting, or a manual tester, or a feature toggle, whichever technique you use. So it's starting to merge, or at least my recommendation is to merge them a little bit.

 

Michael Larsen:

Yeah. I could also mention that you are getting into situations, especially with these framework designs like React, Angular, and Node (I'm sure there's a whole bunch of other ones out there that I'm not thinking about; those are the ones that I just have some experience with), where there's a lot of cross-pollination. You know, when you have the front end versus the back end: what's being used where, and how are you interacting with them? The benefit of a framework is that it makes a lot of work easy for development purposes, for cobbling together something that's usable quickly. WordPress even fits into that. They're great for speed, and they're great for being able to get things put together. But if you have to do any modifications that require you to monkey with the theme or the underlying code, it can get hairy, especially if you're dealing with a code environment that dynamically names things. That's a royal pain. It's actually been one of the biggest bugaboos that I've dealt with. Fortunately, most of the projects I've worked on in the past eight years have gone away from that dynamic naming, but dynamic stuff is still in there, and testing for that dynamic stuff is a real pain. Those are the examples where I've basically just said, "Nope, I'm just not going to worry too much about the automation of this."

 

Matthew Heusser:

That points to a different conversation I want to have, which is coverage, where we say, "Oh, it's too hard." Rex and I were talking about this on the phone… what, yesterday? "It's too hard to automate this, so I just won't." Where do we write that down? Where do we say, "This is a known risk that we should be checking every now and again, especially if the code changed"? The answer is, we probably don't. Then it breaks, and we're like, "That's weird. We don't have a test for that. Oh, no!" I'm not criticizing you personally, Michael, but it's really common. I think that testers are really good at complaining about how we're put in impossible situations; we do a pretty good job of ignoring the things that we do, as a community, to help put ourselves there.

 

Michael Larsen:

Absolutely no argument there. Yeah.

 

Matthew Heusser:

Let's get real. That's great. So I think what I'm hearing is that as the front end becomes more complex with things like React, there needs to be some code at some level that can do some thinking. There need to be some algorithms, or else, whatever we do, if it's just clicking buttons, it's going to break real soon. It's going to be brittle.

 

Leandro Melendez:

Yeah. And in some circumstances, when your front end, as Michael mentioned… and I have experienced that pain several times… when you have dynamic naming on the items in the page or the front end you are working with, sometimes you say, "Okay, I'm done trying to always update and catch up," and you might go on and say, "Hey, is there an interface that detects elements more by type of check?" What do they have inside, or even visually: figure out whether this is a button, or just click on this area of my page or of the monitor on my computer. Those are so complicated to automate at a code level that you just don't have that option, or it becomes too time-consuming just to maintain them. If it's just a single execution, if you're only going to mess with it one time: okay, let's do it once, make it optimal, execute, do all our tests for this cycle, and if that's the only cycle, we're good. But most of the time, that's not the case.
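
As a concrete illustration of that "detect by type" idea, here is a minimal Selenium sketch; the URL, element labels, and ID prefix are hypothetical, and a real page would need its own stable attributes:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class DynamicIdLocators {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/login"); // hypothetical page

        // An auto-generated ID like "submit-btn-8f3a1c" changes every build,
        // so match the stable prefix instead of the whole ID...
        WebElement submit = driver.findElement(
                By.cssSelector("button[id^='submit-btn']"));

        // ...or fall back to what the user actually sees: the visible label.
        WebElement sameButton = driver.findElement(
                By.xpath("//button[normalize-space()='Sign in']"));

        submit.click();
        driver.quit();
    }
}
```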

 

Rex Feizi:

In my experience, it's definitely more difficult to automate applications with UIs that are a little more sophisticated than the old JSPs and ASPs of the world. But that surely doesn't mean it's not doable, or that it cannot be done in a way that is robust and beneficial to the entire project, saving a lot of time. And just going back quickly to the previous conversation: even before Angular and all these new frameworks out there like React and others, a lot of teams, because of the complications of UI automation, tried to attack the problem by just putting it in different buckets: service-level tests, unit tests, and all of that. But the problem, at the end of the day, is that every time you release the software, you need a way to make sure the end-to-end functionality is intact. In some software projects it's very, very difficult to get the end to end automated, but once teams do it, after such a long time, and they make sure it's robust and repeatable, then they really see the benefits coming back. It's definitely more difficult with the new frameworks, and because a lot of logic has shifted from the back end to the front end, UI-type tests are now sort of a necessity for automation. But Selenium has also come a long way from the RC days, which were really brittle. It's been maturing, thinking about the challenge of automating the UI and providing the library that people can use to build UI automation.
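
To make that "end-to-end functionality is intact" check concrete, here is a bare-bones WebDriver smoke test; it's a sketch under assumptions (hypothetical URL, IDs, and credentials), not any particular product's suite:

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginSmokeTest {
    private WebDriver driver;

    @BeforeEach
    void setUp() { driver = new ChromeDriver(); }

    @AfterEach
    void tearDown() { driver.quit(); }

    // The release gate: one happy path, end to end, proving the flow is intact.
    @Test
    void userCanLogInAndReachTheDashboard() {
        driver.get("https://shop.example.com/login"); // hypothetical app
        driver.findElement(By.id("username")).sendKeys("demo-user");
        driver.findElement(By.id("password")).sendKeys("demo-pass");
        driver.findElement(By.id("login")).click();
        assertTrue(driver.getCurrentUrl().contains("/dashboard"),
                "login should land on the dashboard");
    }
}
```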

 

Matthew Heusser:

That's great. Thanks. And I think what we're getting at here is… there's a paper we'll link to on that… I'll throw the next thing out. We've talked about two different things, I think. There's a stable UI or a changing UI. If the UI is changing, then usually the stuff we write is going to be pretty brittle. There are techniques around that: you can write little tiny snippets that only do one thing at a time, and you can use code reuse. But I find that if the UI is really frequently, radically changing, a lot of tests are just going to break and we're going to have to rewrite a bunch of stuff. Maybe, if we're really lucky and we've expressed the tests in terms of the business domain, we can just rewrite the inside of the test and the use case still works, but it's still a lot of work, versus a stable UI. And then the app can be more simple or more complex. If the app is more complex and you have more adaptive behavior, where IDs are not predictable and that sort of thing, then the simplistic approaches don't work. So let me stop here and ask: is that a good way? There are so many ways to do this, but in our experience, if we had to come up with a couple of variables to look at to categorize ways to think about test tooling: stability of the UI, complexity of the app. Are those the right ways to do it, or would you do it a different way?
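
As a sketch of what "expressed in terms of the business domain" can look like, here the test reads as a use case and all UI knowledge lives in a helper layer; the class and method names are hypothetical, and the helper bodies are stubs:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class TransferFundsTest {
    private final BankActions bank = new BankActions(); // hypothetical action layer

    @Test
    void customerCanTransferBetweenOwnAccounts() {
        // The test states the use case; it never mentions locators or pages.
        bank.logInAs("alice");
        bank.transfer("checking", "savings", 250.00);
        assertEquals(250.00, bank.balanceOf("savings"), 0.01);
    }
}

// All the Selenium locators would live here; a redesigned transfer screen
// means rewriting these methods, not the test above.
class BankActions {
    void logInAs(String user) { /* drive the login UI */ }
    void transfer(String from, String to, double amount) { /* drive the form */ }
    double balanceOf(String account) { return 250.00; /* stub: read from the UI */ }
}
```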

 

 

Leandro Melendez:

Well, if I may add… a big item, as you mentioned, is a very mutable UI that is constantly changing: a project that releases constantly through an Agile methodology, where the UI and all these elements keep changing as well. I'm going to borrow a phrase that I heard about software development project management, and I think it applies very well to this sort of automation: development, and automating software tests, is way easier when everything is frozen. A stable environment, plus some sort of tooling that can detect and update some of the items in your UI and easily trigger them without being impacted too much by mild changes, makes it very easy. If you have a project where the UI is more or less static and doesn't change too much, you can create some automation quickly and easily with tools that are more or less out of the box, and you can, as the article mentions, record and replay. It most probably will work and stay working for a while, or will need very little tuning. As you mentioned, all of this depends a lot on the project: how many releases they have, whether it's Waterfall or a flavor of Agile, whether the front-end interface is one of those complicated ones where you cannot change the naming of the items and how they will be recognized by the automation. There are techniques to make this manageable, but as I was saying earlier, it depends a lot. If it's a single release, more or less Waterfall-ish, and that's it, you won't be changing much of the environment. If it's an Agile flavor, you need to put more on the team. If the developer or the UI engineer is making a change to the code, the back end, or the UI elements, why not ask them, "Hey, before you check in this code, why don't you update my record-and-replay automation and just change the name for the new element that you updated, so everything keeps working?" But as you mentioned, it always depends. People don't like my answer to many questions like that, but usually the answer is, "It depends."

 

Matthew Heusser:

Well, as a consultant, I say "it depends" all the time, but then I say, "On what?" and I give them the variables, and I give them suggestions: if it were like this, I'd do this. So you bring up a really interesting point, which is frequency of release. The thing about a stable UI is that the UI isn't changing. You could use record and playback there, but if the UI isn't changing much, there's just not going to be a lot of value in testing anyway; not much is changing. You could manually test it, unless you're frequently releasing. If you're frequently releasing and the UI is relatively stable, you might get a pretty good payback out of record and playback, I would think, because now you can run this thing every two days, get a quick smoke test, and see that you didn't break anything big. Maybe you watch it run. And it's pretty easy to record, and it's simple, so you can re-record it if you need to. You have a comment, Michael?

 

Michael Larsen:

Yeah, actually, I do have one I want to pop in here. I don't know if this is something that other people do; I'm guessing they probably do. I actually like to use record and playback when I'm exploring an app. If I move to a new team, that's when I pull out my record and playback. I'll show my bias here: there is a tool I like to use. Some are probably familiar with it, and I'm not trying to advertise or plug anybody; it just happens to be what I've used the past few years, and that's the Katalon Recorder browser plugin. For anybody who's ever used Selenium IDE, it's pretty similar. "Oh hey, we've got this new thing that's been plugged in." Neat. I'm going to throw on Katalon Recorder, and I'm just going to walk through and do a few interactions, run through a full workflow, for example. After that, I shut it off and I go back and look. If I happen to see something really clean, "Hey, here's our ID. It's very defined. It doesn't change," that's great information. That means, "Okay, now I can go in and use C#, .NET Core, Java, or fill in the blank." As long as you know that those IDs are stable, you can create whatever functions or methods to call them, and they're going to respond pretty reliably. It also helps because, oftentimes, maybe they just haven't put in an ID, and so I get this obnoxious XPath statement. Those are good, because whenever I get those obnoxious XPath statements, all I have to do is print out a table and say, "Hey, can I ask you a question? What's going on here?" Oftentimes it's because it's a dynamic element, and they'll go, "Oh yeah, that's a dynamic element. Uggghhh! Yeah, sorry. That's going to be a little hard." Okay, at least I know what I'm getting into with that, and I can work with it. That's cool. Other times it's, "Oh, there's no reason why you should have to deal with that. Let me give you an ID for it." And that's great, because it gives us the chance to have those conversations. Also, if I get on a brand-new product, I can just go around and say, "Man, this is a really big field with a lot of editing that has to be done. Is there an API where I can punch in these values and make sure that they're recorded?" "Oh yeah, absolutely. Here, this is how you do it." Cool! By doing at least that, for me personally, record and playback has been a blessing. I'm happy to have that tool.
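
To picture the contrast Michael is drawing, here is a hypothetical before-and-after; the XPath and the ID are invented for illustration:

```java
import org.openqa.selenium.By;

class LocatorContrast {
    // What a recorder typically emits when an element has no stable ID:
    // an absolute XPath that breaks the moment the layout shifts.
    static final By RECORDED =
            By.xpath("/html/body/div[2]/div/div[3]/form/div[1]/div/input");

    // After asking the team for an ID, the same element is one line
    // and survives restructuring of everything around it.
    static final By STABLE = By.id("order-quantity");
}
```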

 

Matthew Heusser:

I think one thing you can do, if you're early in the process and things are changing in rapid succession, is do a recording, run through it, and watch it run for just a single feature. And then maybe you throw that away, or… can you export that into some other format?

 

Michael Larsen:

Oh yeah, it's totally exportable. Katalon is exportable; you can export to a number of different languages and a number of different unit test frameworks. What it exports is very literal.
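
Export formats vary by tool and version, but the output tends to have roughly this shape: one literal statement per recorded action, with no reuse, no waits, and no extra assertions. This is an invented example, not actual Katalon output:

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Roughly the shape of a recorder export: a literal step list, one line
// per click or keystroke, which is why it usually needs massaging afterwards.
class RecordedExportTest {
    @Test
    void recordedWorkflow() {
        WebDriver driver = new ChromeDriver();
        driver.get("https://app.example.com/"); // hypothetical app
        driver.findElement(By.id("menu-toggle")).click();
        driver.findElement(By.linkText("New Order")).click();
        driver.findElement(By.name("qty")).sendKeys("3");
        driver.findElement(By.xpath("//button[@type='submit']")).click();
        driver.quit();
    }
}
```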

Matthew Heusser:

So, doing the record and playback for a feature, exporting it into some other format that is more like real software engineering, and then massaging that other format into something you can use, I think, is a reasonable path we've seen work before for development. It usually doesn't stay that quick, dirty record/playback; it usually becomes something more.

 

Rex Feizi:

Basically, the magic word that Michael used was exploring with the recorder. That's a very, very good way of putting it. Basically, it's for when you just want to see how something works, because remember, record and playback just records the actions. If you're lucky, it'll play them back correctly. If you're not so fortunate, then you have to get your hands dirty at some point and figure out how to change the elements you're trying to deal with, if they are dynamic, or you have to find something on the UI side that is more reliable and doesn't change that much; maybe point to a label on the page, and stuff like that. You have to learn that at some point if you want reliable playback. But then the problem is that you still need to do verifications. Just playing back the test doesn't actually validate what you want to validate; you need to verify some things along the path to make sure you're actually validating a bunch of things. That validation cannot be done with record and playback alone. But like Michael was saying, it really gives you insight. Sometimes, if the recorder is good, it gives you an idea of what's going on and what options you have, and it's just a very good introduction to automation, from my perspective. Now, the other thing: the UI elements changing is, in my view, a very small percentage of the bigger problem. The bigger problem is when the flow changes in the UI. The simplest example is whether the username and password are both on the first page of the login, versus it first asking you for the username, then you click Next, and then the password. If that's the flow that is changing, no matter what you do, you need to change the underlying automated test. But if the elements on the screen change positions, and there are a bunch of divs or layers introduced in the middle, with a little practice you can get around that. With a good CSS selector or a good XPath, you can always get around that problem.
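
Here is a sketch of what "verifications along the path" can look like once a recorded script is promoted to code; the URL, locators, and expected values are hypothetical:

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class CartVerificationTest {
    @Test
    void addingAnItemUpdatesTheCart() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com/item/42"); // hypothetical page
            driver.findElement(By.id("add-to-cart")).click();

            // Playback alone proves the click still works; this check proves
            // the application did the right thing at an intermediate step.
            String badge = driver.findElement(By.cssSelector(".cart-count")).getText();
            assertEquals("1", badge, "cart badge should update after adding an item");

            // Later steps anchor on stable, user-visible text (the label Rex
            // mentions) rather than on generated IDs.
            driver.findElement(By.linkText("View cart")).click();
            WebElement heading = driver.findElement(
                    By.xpath("//h1[normalize-space()='Your Cart']"));
            assertTrue(heading.isDisplayed());
        } finally {
            driver.quit();
        }
    }
}
```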

 

Matthew Heusser:

And there are a couple of other tools that I think are coming out around that. Some of them are AI-based, where they actually try to figure out, "Oh, the button just moved. I'm going to click it. It's in a different XPath. It's fine." Machine-learning based. And other tools, like Applitools, probably the best-known one, let you do screen captures of various parts of the screen, by div, at various times, and then compare each to the picture it took yesterday, or in the last run. If the difference is significant, it'll throw it in a folder, and you can review it after the test run and say, "This is fine. This is fine. This is fine. Oh, that's a bug. Oh, that's a bug. This is fine." I haven't worked with the technology much myself; I was on, I think, two projects that used it, and I'd be a little concerned with getting the verification points right. And that doesn't fix everything. The Applitools stuff doesn't fix it if the XPath changes; it's still going to click on the wrong thing, and you can get an error message and have to go debug it. So now you're talking about using them both combined. There are a lot of tools in there, and it's very code-intensive. So is there a place for something that's still not code? The metaphor I would use for this is Scratch, the programming language for children, which is very visual. You can see the loops, and you can step through it and debug it, but it's really visual; it looks more like assembling puzzle pieces, or a spreadsheet, or a grid, than it does writing code. That's an advanced form of testing to deal with complexity. Does that also fit into this mix?
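
For context, a visual checkpoint with the classic Applitools Eyes Selenium SDK looks roughly like this; the app name, URL, and tag are hypothetical, and the exact API surface may differ by SDK version:

```java
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualCheckpointSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            // open() wraps the driver so Eyes can capture what the test sees.
            driver = eyes.open(driver, "Demo App", "Home page visual check");
            driver.get("https://app.example.com/"); // hypothetical app
            // Captures the window and diffs it against the stored baseline;
            // significant differences are queued for human review.
            eyes.checkWindow("Home");
            eyes.close();
        } finally {
            eyes.abortIfNotClosed();
            driver.quit();
        }
    }
}
```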

 

Rex Feizi:

In my view, it definitely does. I mean, you abstract away the complexities of coding, you abstract away the complexity of the UI, and you manage all of that in the framework. You can definitely have successful regression automation. Most of these visual tools right now are mostly for quick tests. For example, imagine you want to test the Amazon shopping cart: you have to take into account the discount and the different taxes in different states. It's a much more sophisticated type of testing than just visually comparing things, for now. Again, the part where you cannot identify the elements on the screen might be a challenge, but the bigger challenge is: how do you set up the test data? How do you set up the tests to run in parallel, so that one of them checking out an item will not cause another one to miss an item in its test? That sort of thing becomes much more complicated and takes up a bigger chunk of the complexity of the automation than the elements on the screen moving around and the like. People have mostly solved those problems, whether with the self-healing approach that is getting popular these days, which finds out if something has changed on the screen, or otherwise. Again, if an automated test fails and you go to figure it out: say you ran a hundred tests and have 20 failures; out of those 20, 19 of them are not really related to UI things changing, because in the first situation, once the elements change, you make the test better. You make the XPath better until you get something that is not easily broken. But then the data, running things in parallel, performance, and stuff like that become much more of a problem than UI changes. And for those, you have to get your hands dirty, is what I'm trying to say, at some point.
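
One common answer to the parallel-data problem Rex raises is to give every run its own disposable data; a minimal sketch, with the setup hooks left hypothetical:

```java
import java.util.UUID;

// Give every parallel run its own disposable data, so one test checking out
// an item cannot consume inventory another test is counting on.
public class IsolatedTestData {
    static String uniqueUser() {
        // e.g. "tester-5f1c9b2e": unique per run, deleted afterwards
        return "tester-" + UUID.randomUUID().toString().substring(0, 8);
    }

    public static void main(String[] args) {
        String user = uniqueUser();
        // createAccount(user); seedCart(user, "sku-123"); // hypothetical setup hooks
        System.out.println("running checkout test as " + user);
    }
}
```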

 

Leandro Melendez:

Yeah. And on those specialties, since you touched the point of performance, which is a little bit my area, if I may add to what you were saying: another technique that could ease off this process of end-to-end tests… Some organizations that I have been at, I notice, try to do not only end-to-end tests with these record-and-replay tools; what they end up trying to do is cover the whole functional testing universe with these types of tools, getting into each text box, whether it handles negative testing, weird characters. These record-and-replay tests are recommended only for a more or less happy-path end to end, to make sure that you can get from A to D, or to Z, as you were mentioning, but not to check every possible permutation of every step; they become unmanageable. What is recommended is to follow the automation pyramid: automate the tiny bits at the unit level with full coverage; then, at the service level, do a little less coverage for just the important points; and divide the manual and front-end work a little. Feature toggles can enable your application to get into some sort of debug mode, where some of your automations can test those text boxes without having to go through all the previous steps, easing things off and making them more manageable. In the end, you have only a small universe of record-and-replay automations, and at times you are okay if they break because of changes, because you have probably five or ten, and that's all you have to maintain, covering, specifically, just the end to end and the front end. When you have a UI change, you can use some of the automation tools you mentioned that can self-heal, or you just update a few parameters and you're done with them, but only at the end-to-end level. That's another big recommendation: everything that you can test elsewhere, at a more efficient, cheaper, faster, different level, please do. And for anything that you cannot, especially end to end, where you are just trying to get from A to Z but not trying all the permutations in between, make sure that you are testing the right things at each level with different tools. That way, record and replay becomes, in my opinion, a little bit more manageable.
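
One lightweight way to keep that pyramid shape visible in a codebase is to tag tests by level and keep the end-to-end bucket deliberately tiny; a sketch using JUnit 5 tags, with invented class and test names:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class OrderTotalsTests {
    @Tag("unit")
    @Test
    void discountMathHandlesZeroQuantity() { /* hundreds of these */ }

    @Tag("service")
    @Test
    void taxServiceReturnsAStateRate() { /* dozens of these */ }

    @Tag("e2e")
    @Test
    void customerCanPlaceOneOrderEndToEnd() { /* five or ten, total */ }
}
```

Most runners can then filter by tag, for example running only the e2e group as the release smoke suite, which keeps the expensive layer honest about its size.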

 

Michael Larsen:

I have to say, I have been immensely enjoying this conversation, and I think there's so much more we could cover here. Seriously, we could go on about this for hours, but as always, we do have a time limit for this show, so I'm going to have to put my hat on and say it's time for us to make that little shift into "what are you up to?" time. So, Leandro, Rex, this is where we tell everybody what we're up to, where you can find us, and where you can learn more about us and what we're planning on doing in the coming months. Have at it.

 

Leandro Melendez:

Well, on the Qualitest side, I'm hosting the "Performance and Beyond" show. Starting the first Monday of this coming month, September, we'll have the next episode, where I'm going to be talking a little bit about Agile projects: why some companies are chasing Agile, what the reasons are that a company should be chasing it, and the benefits they should be getting from adopting any flavor of an Agile methodology. There is some confusion; some are FrAgile, WaterFile, or whatever combination that might not be giving them the best results. So I invite everybody to watch that episode, and if you want any other topic tackled on the show, suggestions are very welcome as well. I will also be presenting at STPCon, the Software Test Professionals conference, which is going to be virtual as well, at the end of September; I'm creating a cool video for it. We'll be talking about performance testing principles, history, how to integrate, and some tips. Checking my notes: it's the 29th and 30th of September. Don't miss it. It's going to be virtual, so everyone everywhere in the world will be able to join. Coming up soon as well, on the YouTube channel for Señor Performo, "Nosotros los Performers," your performance novela, will be coming out with more episodes. We just talked about what types of performance testing you can do without generating load, and there are more topics to come.

 

Rex Feizi:

Cool.

 

Matthew Heusser:

Wow. I did not know that there are going to be two Qualitest podcasts now, this one on performance with Leandro as your host. That's super exciting, man. What's the title for people to search for in iTunes?

 

Leandro Melendez:

It’s a YouTube channel.

 

Matthew Heusser:

Oh, cool.

 

Leandro Melendez:

It's a video series. One is Qualitest's "Performance and Beyond," because I'll be covering not only performance testing but other topics beyond it. Look for it on YouTube, on the Qualitest channel, or on Qualitest's page in the blog section, I believe. Look for "Performance and Beyond," or Google it, plain and simple, and you'll be able to access the episodes. Right now, we are on episode three; the first Monday of September, episode four, talking about Agile. And the same on YouTube: look for Señor Performo ENG (English) for the video series, like a Mexican telenovela, "Nosotros Los Performers," where we hope to be sharing lots of performance knowledge.

 

Matthew Heusser:

That’s really neat, man. That’s great. Let’s try to keep in touch more. I know you’re doing a lot of interesting stuff and we’ll try to get the links and stuff like that to Michael, so we can put them in the show notes so we can make sure we’re good.

 

Michael Larsen:

Yeah, I'll be happy to spread the word on that, Leandro. This is awesome. And hey, since I'm talking, I guess I might as well jump in. I have confirmation, I know for a fact that it is happening now, so I can crow about it: I will be conducting a workshop at the virtual Pacific Northwest Software Quality Conference that will be happening in October. I am going to be doing a workshop based around getting your hands dirty in accessibility. We'll be using an example site and doing some before-and-after comparisons, as well as showing some tools you can use, some of them programmatic, some of them manual and interactive, so that you can actually see and get your head around the idea of accessibility testing. I will make sure to put the specific link to that in the show notes.

 

Rex Feizi:

Michael, thank you for having me on the show. I will be focusing on the next generation of record and playback for the next few months. We are hoping we can provide much more reliable playback using a few algorithms and techniques we have been working on. We'll send you guys a link to give it a shot once it's ready, sometime in Q4. I also wish California the best during these difficult times and hope everyone will be safe and sound. Thank you.

 

Matthew Heusser:

And I'll add, it looks like I might be at the Brighton Test Conference, which is going to be available over video. I have no idea what it costs; we're just very early. But it's going to be in early October; we'll let you know more about that. And it looks like I'm going to be presenting at KWSQA, just like a monthly presenter. I think that's at the end of October, and it's going to be on Teaching Testing with Simulations and Games. Should be fun, a nice use of your lunch hour. I know that one is complimentary… and I think it's time for us to say goodbye. Thanks for helping us put this together.

 

Michael Larsen:

All right, thanks, everybody.

 

Leandro Melendez:

Thanks so much to everybody as well.

Rex Feizi:

Good meeting you virtually, guys.

 

Matthew Heusser:

Yeah. Thank you.

 

Michael Larsen:

That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show, talk to us about what you like and what you'd like to hear, and help us shape future shows. Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you'd like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.
