
The Testing Show: Continuous Testing with Gerie Owen

April 11, 2018


Continuous Testing. Sounds cool, doesn’t it? The bigger question, though, is “What does it really mean?”

In the DevOps world, it’s something that more and more organizations are hoping they can get a handle on, and Gerie Owen joins Matthew Heusser and Michael Larsen to talk about her experiences with Continuous Testing, DevOps, and the sea change that taking on Continuous Testing actually is.

MICHAEL LARSEN: Hello and welcome to The Testing Show, Episode 55.

[Begin Intro Music]

This show is sponsored by QualiTest. QualiTest Software Testing and Business Assurance solutions offer an alternative by leveraging deep technology, business, and industry-specific understanding to deliver solutions that align with clients’ business context. Clients also comment that QualiTest’s solutions have increased their trust in the software they release. Please visit them to test beyond the obvious!

[End Intro]

MICHAEL LARSEN: Hello everybody and welcome to the show. I’m Michael Larsen, your show producer and back channel dude. We’d like to welcome our moderator, Matthew Heusser.

MATTHEW HEUSSER: Hello.  Good time zone.

MICHAEL LARSEN: And, our special guest. We’d like to welcome Gerie Owen. Hi, Gerie!


MICHAEL LARSEN: With that, you are our show runner and moderator, so I’m going to say, “Do your thing.”

MATTHEW HEUSSER: Okay. Well, let’s introduce Gerie. Gerie is a test architect at QualiTest.


MATTHEW HEUSSER: Now tell us a little bit about your background and what you’ve been doing for QualiTest lately. I have to add I’m very pleased, through our partnership, Excelon helped place Gerie with QualiTest, and it sounds like it’s been working out?

GERIE OWEN: Yes. Well, currently, as Matt said, I’m a test manager. I work with the offshore teams. I’ve also been doing a lot of thought leadership for the company. I speak at conferences, do blog posts and Twitter posts, and whatnot. I’ve been in testing now for quite a few years. I was previously with Eversource as a test architect, and I was also with MetLife. I really love testing. One of the more recent areas that I really enjoy getting into is what testing can do in DevOps. I spoke at DevOps NYC last week and happened to actually be the only tester speaker, and I thought that was interesting. So, what I’m trying to do is promote more of what testing can do with DevOps, and a lot of that is Continuous Testing.

MATTHEW HEUSSER: That’s our subject today. We’ve got the new segment that you picked out. It’s on “Continuous Testing Challenges for Dev Teams in 2018.” We’ll put a link in the Show Notes. I’m most interested in sort of four bullet points. The one that really aligned with me (and I don’t know about your experience) is, “The lack of clarity makes Open Source a time suck,” and that really resonated with me in that I’ve had these conversations where people say, “Oh, you should just use Name of Tool,” and it turns out that there’s a much deeper conversation; just installing Name of Tool and getting it to work with your version of Ruby and your version of WebDriver and your version of the Operating System and your version of the Package Installer can be very, very challenging. I don’t think we talk about that much. We assume “Open Source” means “free.” Did that resonate with you? Do you have any comments?

GERIE OWEN: Yeah. That absolutely does. Well, any DevOps organization is going to have a big technology stack, and if the tools don’t all integrate well, it can be a real challenge, especially when you get into Continuous Testing.

MICHAEL LARSEN: I can totally appreciate that. We’re going through something similar right now. At the company I currently work for, because we’re pulling our products closer to the parent company, we’re looking at taking our various pieces (which is the other part that you have, the Dev Teams not necessarily being on the same page), and trying to get everybody on the same page can be difficult, especially if each organization is doing things just a little bit differently, which we are. When our continuous delivery, continuous integration, and (I guess) Continuous Testing solution was developed a few years ago, Docker was brand new, and there was a lot of stuff in Docker that was not standardized yet. It was a little bit loose in that regard. So, because of that, we ended up having to implement some very interesting solutions. The problem is, when you implement interesting solutions, they’re great for that point in time, and then when things just work, you go about your day a lot of the time until it turns out, “Well, we need to integrate with this other group now, so we’re going to go ahead and make this little change,” and that little change causes a chain reaction of events that can be very difficult to overcome. I’ve been living this very actively for the past couple of weeks, and so this is near and dear [LAUGHTER] to my reality.

GERIE OWEN: Yeah.  I mean, that certainly is a big challenge.  I don’t think you’re the only company that’s an issue for.  I think that tends to happen frequently in organizations that are using DevOps Tools.

MATTHEW HEUSSER: So, when you say, Michael, that you’ve “been living it the past couple of weeks,” does that mean you’ve run into obstacles, that you’ve been Yak Shaving or you have to do a thing to do a thing to get the thing to do a thing to do a thing?

MICHAEL LARSEN: That’s exactly what it means. So, to put it very simply, I’m more than just the release manager. Basically now, I’m the product owner for our Continuous Delivery Platform. It’s now my baby. That doesn’t necessarily mean I’m the one that does everything for it, but I’m getting more and more of it. I had to unpack a whole bunch of stuff that was written. Once I did that, I did a whole lot of documentation, saying, “Hey, we need this to talk to this to talk to this to talk to this,” and a lot of that didn’t exist. You know, not because people were lazy, but just because, well, everybody knew what was happening. Well, when you lose people in your organization (over time) and somebody else has to come up, that knowledge disappears. If knowledge disappears, you have to reinvent the wheel or figure out how to interpret something that somebody else wrote a couple of years ago and make it relevant to today. That’s what I mean by being knee deep in this for the past couple of weeks. I’m going through and mapping out everything to make sure that we really do understand where everything is, so that, if there’s something that depends on something else, we know, “Hey, if we upgrade Docker, we’re also going to have to upgrade these three components to make sure that everything works smoothly.”
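Michael’s “if we upgrade Docker, we also have to upgrade these three components” point is essentially a dependency-graph question. A minimal sketch of how such a map can answer “what else is affected?”; all component names here are invented for illustration, not taken from any real pipeline:

```python
from collections import deque

# Hypothetical dependency map: component -> components that depend on it.
DEPENDENTS = {
    "docker": ["registry-mirror", "build-agent", "compose-wrapper"],
    "build-agent": ["nightly-job"],
    "registry-mirror": [],
    "compose-wrapper": [],
    "nightly-job": [],
}

def upgrade_impact(component):
    """Return every component that must be re-verified (or upgraded)
    when `component` is upgraded, following dependents transitively."""
    seen, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for dep in DEPENDENTS.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(upgrade_impact("docker"))
```

Even a table this small, kept in version control, preserves the tribal knowledge Michael describes losing when people leave.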

GERIE OWEN: That’s really interesting, because one of the presentations at DevOps NYC was on that very topic about, “Documentation” and how “in all the need for speed, we tend to not do enough documentation,” and the lady that gave the presentation suggested just a quick little Excel sheet that they were using to document each piece.

MICHAEL LARSEN: As an example (if you use Jenkins), Jenkins has a configuration page for every job. Inside of the UI, there’s a description section that tells you exactly what it is. It doesn’t take that long for you to say, “Hey, this job does X. Oh, by the way, X requires that you have access to this, this, this, and this. Go over to this page if you want to see some more details.” It’s little things like that that make it possible for other people to share that information and know what’s happening, so it doesn’t have to be a black box. Matt and I discussed this one time. We were just talking about, “Hey, could you tell me a little bit about how you launch Docker in your testing?” It took me 20 minutes to look through the various jobs to finally find “the one” that did exactly what we were looking for, so I could tell them how we were doing it. It was that opaque, and I think a lot of organizations get into that just because they can make something work and then move on to the next thing; it’s just, “It’s working, so we leave it alone.”
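The Jenkins detail here is real: a freestyle job’s `config.xml` carries a `<description>` element, which is what the UI description field populates. As a hedged sketch of auditing for the missing documentation Michael describes, using invented job names and inline XML snippets (a real audit would fetch each job’s `config.xml` from the Jenkins server):

```python
import xml.etree.ElementTree as ET

# Hypothetical config.xml fragments; in practice these would come from
# JENKINS_URL/job/<name>/config.xml.
JOB_CONFIGS = {
    "deploy-docker": "<project><description>Builds and pushes the Docker image. "
                     "Requires registry credentials; see the team wiki.</description></project>",
    "nightly-smoke": "<project><description></description></project>",
}

def jobs_missing_descriptions(configs):
    """Return job names whose config has an empty or absent description."""
    missing = []
    for name, xml in configs.items():
        desc = ET.fromstring(xml).findtext("description") or ""
        if not desc.strip():
            missing.append(name)
    return sorted(missing)

print(jobs_missing_descriptions(JOB_CONFIGS))
```

A check like this, run periodically, turns “please document your jobs” from a plea into a visible list.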

GERIE OWEN: All of that goes to quality and the tester’s role. Really, the whole tester’s role in DevOps is to instill a responsibility for quality among the entire team. Part of that is making sure they take that extra couple of minutes to document.

MATTHEW HEUSSER: Back to the article, “People are not on the same team when it comes to test automation.” I think that fits in. Another one that surprised me (for bullet point three), “Organizations are struggling to control test automation data. Most organizations are struggling to determine the best way to implement Machine Learning and Artificial Intelligence.” We’ve already done a whole show on this. We had Peter Varhol on, but I think the way we talk about Machine Learning and AI is just so naïve. “We struggle to even do X manually.” “Well, you should just Machine Learning that.” Like, “No, because we don’t even know how it works.”

GERIE OWEN: Yeah.  We’re looking at Machine Learning as a solution before we look at solutions that could be a lot less complicated.  I think if we look at more, “What does the customer want?  What do they really need?”  Maybe we wouldn’t have to implement Machine Learning just because that happens to be one of the biggest things in our industry.  Of course, in testing, for Machine Learning, it’s very complex.

MICHAEL LARSEN: So, we’re talking Continuous Testing, continuous integration, delivery, deployment. Other than just the fact that it’s got the “continuous” tag on it, what makes Continuous Testing special or unique, or is it even special or unique? Is it even really that far removed from stuff we’re already doing? I guess the remedial question is, “Why is it a term we should care about?”

GERIE OWEN: I think a lot of people believe that Continuous Testing is all about automation, and a good part of it is automation. You have to have automation, but it’s much more than that. It really involves continuous risk analysis and process improvement and test optimization. Analyzing, “Do you really need this test case?” So, your automation is creating a suite of the most important test cases that you need to run. It also involves developing that culture where the whole team is responsible for quality. It’s a much broader concept than just increasing test automation throughout the whole delivery pipeline. Continuous Testing has to start with a review of our processes: “Where are the bottlenecks?” Testing is always, especially in DevOps, considered to be the bottleneck. Part of what we really need to do is look at, “What constrains our processes? Do we have too many test cases? Are the defects not being turned around fast enough?” It could be anything, and we need to develop a strategy, a multi-layered test strategy, that includes all the types and levels of testing we need, but also a strategy for incorporating that testing into the continuous delivery pipeline. That’s the real challenge, I think.
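Gerie’s test optimization question, “Do you really need this test case?”, is often operationalized as risk-based scoring. A minimal sketch, with made-up test names and ratings, assuming a simple likelihood-times-impact score with a cutoff:

```python
# Invented test cases, each rated 1-5 for likelihood of failure and
# business impact if it fails. Real ratings would come from the team's
# risk analysis, defect history, and analytics.
TEST_CASES = [
    {"name": "checkout_flow",  "likelihood": 4, "impact": 5},
    {"name": "profile_avatar", "likelihood": 1, "impact": 1},
    {"name": "login",          "likelihood": 3, "impact": 5},
    {"name": "legacy_export",  "likelihood": 2, "impact": 2},
]

def prioritized_suite(cases, min_score=6):
    """Keep cases whose risk score (likelihood * impact) meets the bar,
    highest risk first; the rest become candidates for pruning."""
    scored = [(c["likelihood"] * c["impact"], c["name"]) for c in cases]
    return [name for score, name in sorted(scored, reverse=True) if score >= min_score]

print(prioritized_suite(TEST_CASES))
```

The interesting part is less the arithmetic than the habit: re-scoring continuously, so the automated suite stays “the most important test cases” rather than everything ever written.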

MATTHEW HEUSSER: I significantly agree that getting the tooling into the pipeline is a huge part of the work. What you said earlier, that “It’s more than just test automation, and it’s continuous risk assessment,” I also agree with. I think there’s a risk with that. It’s a transference risk. So, we saw this thing happen with every term I’ve seen applied, where someone will say, “We test all the time. We test from the beginning, like as soon as the first build is up, we’re testing. We’re just testing, so we’re doing Continuous Testing.” I want to say, “I think there’s more to it than that.” It’s the same thing with Agile. Like, “Yeah. We were Agile before it was cool. We were doing handoffs and lots of builds and phased development, so we’re Agile.” [LAUGHTER]. I’m like, “What?” I think when it’s reduced to a buzzword and simplified into a sentence (and this is not a criticism of you), it just can’t be described in a couple of sentences. The shorter that we get it, the tighter that we get it, the more risk people have of misunderstanding it, and specifically of reinterpreting it in light of what they’re doing already and saying, “Oh, of course, we’re already doing test-driven development.” This has happened to me. Someone was basically saying, “Well, we write the test cases before we ever see any code. We’re doing test-driven development.” What they meant was, as the programmers are writing code in their little corner, we (the testers) are writing documents in our little corner, so that when we get the code complete build, we can begin testing. I don’t think that’s what it means.


MATTHEW HEUSSER: Shame on me for not being specific enough when I was speaking about it and said, “Who’s doing this?” And they raised their hands. Going back to Continuous Testing, maybe we could put it a different way: “Are there things that you need to be doing such that, if you’re not doing them, you’re not really doing Continuous Testing?”

GERIE OWEN: Continuous Testing involves assessing the business risk and coverage. That’s really the primary goal. It’s also establishing a safety net that helps the teams and protects the users’ experiences. The other really important part of it, and of course it’s really important in DevOps, is the actionable feedback for each stage of the pipeline. DevOps is about amplifying feedback loops, and Continuous Testing plays a real part in that. Those are just some of the things Continuous Testing involves that are more than just testing from beginning to end. It’s more about managing risk for the customer.

MICHAEL LARSEN: So, in many cases, I can understand why some might take the narrower view that Continuous Testing just means that every time something gets modded, you are always working that test process in. But if we’re looking at continuous risk assessment as part of Continuous Testing, how do you do a continuous risk assessment? Can you? It seems to me that that’s something that is—

MATTHEW HEUSSER: No, a person has to do that.

MICHAEL LARSEN: Right.  Thank you.


MICHAEL LARSEN: Thank you.  So, I guess that’s—


MICHAEL LARSEN: —what I’m trying to reach here.  The base question is, “How do we differentiate the human factor and the machine factor?”

MATTHEW HEUSSER: I’m with you. I think one piece you need to have—and we’re doing some work with Sauce Labs right now, they’ve asked us to help with some of the research on the Litmus Test for Continuous Testing, and QualiTest has a stake in the ground too—is automated checking of the stuff for every X, where every X could be a day’s work. It could be a single commit. It could be a batch of commits tied to a story, which becomes (in Git terms) a “push.” I think that the generally-accepted definition is “every push.” I would add, “How fast do those run?” as part of Continuous Testing. If they take 24 hours to run—and I’ve got customers where it easily takes 8 hours to run all the tooling—that’s basically an overnight run. That’s not really fast enough feedback for me to call it Continuous Testing. So, we have to keep those tests tighter, and that means we can’t try to have mass inspection. There are other ways to cover the risk than just the automation run.
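One hedged way to picture “keep those tests tighter” for a per-push run: pick tests by risk-per-minute until a feedback budget (the “under two hours” discussed next) is spent. All runtimes and risk scores below are invented for illustration:

```python
# (name, runtime in minutes, risk score) -- all numbers are made up.
TESTS = [
    ("end_to_end_checkout", 30, 20),
    ("api_contract",         5, 15),
    ("full_regression",    480, 25),
    ("smoke_login",          2, 12),
]

def suite_for_budget(tests, budget_minutes):
    """Greedily pick tests by best risk-per-minute until the budget is spent.
    Whatever doesn't fit runs in a slower, less frequent pipeline stage."""
    chosen, remaining = [], budget_minutes
    for name, minutes, score in sorted(tests, key=lambda t: t[2] / t[1], reverse=True):
        if minutes <= remaining:
            chosen.append(name)
            remaining -= minutes
    return chosen

print(suite_for_budget(TESTS, budget_minutes=120))
```

Note how the eight-hour `full_regression` suite Matt describes naturally falls out of the per-push run and into an overnight stage; the greedy heuristic is a sketch, not the only reasonable selection policy.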

GERIE OWEN: Right. I think if the automation runs can point us to areas in which there are defects, or there may be defects, that’s when your real targeted exploratory testing comes in. Yeah, that’s got to be done by humans: the analysis of the risk, to look at the results of the automation and say, “How serious is it?” To be able to say with some degree of clarity and reliability, “This code is or is not ready to be released.” That’s not something that can be automated. That analysis.

MATTHEW HEUSSER: How fast should the CI run?  Give me a time.

GERIE OWEN: I think it depends on the number of test cases you’re running.  You know, you can’t run more than 2 hours, I wouldn’t think.

MATTHEW HEUSSER: Okay. Two hours. I would say one hour, but you’re a little more generous than me. Yeah, that makes sense to me. Right. So, you make a commit at 11:00, and then when you come back from lunch, you should have feedback that you can use. Then, you’ve got to have someone doing the analysis on failures. I’ve worked with teams where, as long as the build was red but it wasn’t too red, that was okay. Like, “We can have 35 failing tests, and that’s when we actually do something.” That seems kind of wonky. You’re assessing the business risk. Let’s say we’re doing all that, and we’ve got the whole team, or someone who is responsible for having a list of emergent risks. These are things like, Microsoft released a new browser; “we should probably do something about that.” That, we are feeding into the pipeline of work for the whole team. Okay, we’re doing all that. It’s running in an hour. Are we doing Continuous Testing now?

GERIE OWEN: Well we’re getting closer, but I still don’t see in that the cultural aspects making sure that everybody is responsible for quality.  It seems like the tester is doing everything.  Whereas, “How about a little bit more on the development side before they commit the code?  How about a little bit more attention to quality there?”  The whole aspect of “quality is everybody’s responsibility,” is a major part of Continuous Testing as well.

MATTHEW HEUSSER: Right.  So, that would be the engineering practices.  I like the term “recidivism.”  Right.  Regressions.  Where something—


MATTHEW HEUSSER: —worked or was broken once and then now it’s broken again and you want to reduce that.  You want to improve first-time quality so you can do less testing at the end.  Totally agree.  Let’s say we’ve done that, and we get to the cultural piece.  So, it’s not just testing at the end.  We have a variety of techniques to reduce the risk.  We are creating examples before we write the code and shared understanding and people are actually not saying, “It’s not my role,” but instead looking for how to contribute to quality across the whole team.  So, you’ve got a team of 10-12 people who are doing all that.  Are we doing Continuous Testing now?

GERIE OWEN: Well, we still have to look at the other end: production. We have to look at the monitoring in production. I think that’s critical when you’re doing continuous releases. You have to look at what is actually happening in production, and that too has to be part of Continuous Testing, because production is really the last stage in the pipeline. I think, as much as it’s foreign to the tester, testing in production is important. I think it has to happen.

MATTHEW HEUSSER: I wouldn’t say that continuous monitoring is required for Continuous Testing, but it just so happens that everybody I know who has actually been successful does it.

GERIE OWEN: I think the importance of continuous monitoring is that then, hopefully, if a defect does happen, or if there’s a performance issue or whatever, you’ll find it before the customer does. Hopefully. If you’re not continuously monitoring, you have no hope of finding it before the customer.
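A toy sketch of Gerie’s “find it before the customer does” idea: watch a rolling window of error counts and alert when the window exceeds a threshold. The window size, threshold, and sample numbers are all arbitrary assumptions:

```python
def should_alert(error_counts, window=3, threshold=10):
    """Alert when the total errors in the most recent window exceed the threshold."""
    recent = error_counts[-window:]
    return sum(recent) > threshold

# Errors per monitoring interval; the spike at the end should trip the alert.
samples = [0, 1, 0, 2, 1, 4, 9]
print(should_alert(samples))
```

Real monitoring stacks do far more (rates, percentiles, anomaly detection), but even this shape captures the core loop: measure continuously, compare to a baseline, page a human before the complaint arrives.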

MATTHEW HEUSSER: That’s great, Gerie. Okay. So, we’ve got a continuous integration system that runs tests end-to-end from checkout to green bar in 2 hours. We’ve got a culture of risk management. We’ve got a culture of everybody contributes to quality. We do a variety of things to reduce risk across the cycle. We have monitoring in place. Are we missing anything else?

GERIE OWEN: I think that’s pretty close.  I would say that’s probably a good definition of what Continuous Testing is.

MICHAEL LARSEN: One of the things that I think comes up a lot, I mean, let’s just take the whole Agile thing and talking about, you know, “We’re improving quality by speeding things up.” So, Velocity, of course. In some ways, I think that Velocity can certainly be helpful in that it gets you focused on small portions of the work, but Velocity can also be a double-edged sword in the sense that we’re always looking for ways of doing something faster. Continuous Testing, it sounds to me like, means we have to do continuous monitoring. We have to do, you know, everything from the beginning and everything else like that. From what I’m hearing, it doesn’t sound like these are necessarily things that are going to make things go faster, which is perhaps the misconception: “Is Velocity really about doing things faster?” Because we can do a lot of stuff fast, terribly; but, if we want to do good quality work, I guess, what constraints do you see? What are some of the challenges you see for anybody who would want to implement a Continuous Testing philosophy? We’ve already gone through kind of breaking it down and what it all means, and it’s more than just the buzzword and “automate everything.” What constraints do you see that we still have to overcome, or that organizations will have to work through, for this to be successful?

GERIE OWEN: Well, I mean, you have some of the typical testing challenges you always do: environment, data. Luckily, with some of the technology now, you can implement service virtualization so that you can spin up environments and put together what you need to test components earlier, even though the integrated pieces aren’t ready. Data is always a major issue. You need to plan ahead to know what data you’re going to need. You may need to use synthetic data generation so that the test doesn’t have to wait until you have everything in place. Yeah, you just need to work around a lot of the typical challenges using more tools.
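Gerie’s synthetic data generation suggestion can be sketched minimally. The record shape (name, email, balance) is an assumption for illustration, and seeding the generator keeps runs reproducible, which matters for repeatable tests:

```python
import random
import string

def synthetic_customers(n, seed=42):
    """Generate n fake customer records so tests don't wait on real data."""
    rng = random.Random(seed)  # seeded, so every run produces the same data
    records = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        records.append({
            "id": i,
            "name": name,
            "email": f"{name}@example.test",  # .test TLD never resolves
            "balance": round(rng.uniform(0, 1000), 2),
        })
    return records

rows = synthetic_customers(3)
print(rows[0]["email"])
```

Production-grade generators (Faker, database masking tools) add realism and referential integrity, but the principle is the same: tests own their data instead of borrowing it.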


MATTHEW HEUSSER: I would add to that, and that gets to, “So, where do I start? What do I do?” I’ve got a company that I work with that has a Windows-based application that’s client/server. Tests take too long to run. We could put that into the cloud and make fifteen of these things, or something, and run them at the same time, but then you’ve got to virtualize the back-end database and have…


MATTHEW HEUSSER: Fifteen of those, and then there are some third-party things that we connect to, so do we service-virtualize those? Well, there’s state; we’re going to actually put data in and take it out of these third parties. Service virtualization is… not easy, and it’s not particularly good at remembering those states, and some of the APIs and the GUI are constantly changing because it’s under development. So… where do we start? And that’s just the tooling piece. That doesn’t even include—and driving Windows is always hard—that doesn’t even include the other pieces: the risk management pieces. How are you going to do continuous monitoring for that? A lot of Continuous Testing is designed to support continuous delivery. We’re not going to do that for a Windows app that has a significant amount of the front end. And what about the legacy stuff? Everything is all interconnected, so we can’t just do the quickest, lightest, fastest, bestest end-to-end testing; we’re going to need significantly more testing to release this thing. So, do we spend a year building up coverage, or do we spend a year cleaning the code so that it’s got a component architecture (usually something that successful companies with this strategy have)? And then how do we build a component architecture if I don’t have tests around it, so I don’t have confidence that I didn’t break something? There’s a hole in my bucket, dear Liza! Right?

GERIE OWEN:  [Laughter] Yeah, I mean that’s definitely a challenge. I think maybe releasing things in small chunks, then if you can build up your test suite from that, I think, too, that you need to be constantly optimizing your test suite because when you build up tons and tons and tons of test cases, some of them at least are going to become obsolete as changes are made. I think you have to be continuously monitoring your test harness and knowing what it is that you are really trying to test.

MATTHEW HEUSSER: Yeah, there’s a lot of work to do there. So I’m an incremental guy. I’m very much, “What is the smallest experiment we can run that could add value, so we could just stop right there and have value?”




MATTHEW HEUSSER: Instead of the “grand composer” who writes a symphony and says, “At the end of two years, at the end of our DevOps journey, we’re really going to be in a good place.” You probably want some of both. You want the visionary and you want the tactical person.




MICHAEL LARSEN: How about analytics? How do analytics fit into this equation? There have been a number of features that we have worked on, that we were aggressively targeting because somebody said they were really, really critical, and then we’ve come back a year later and realized that this “absolutely essential feature” is being used by five percent of users… maybe for that five percent it’s essential, but for 95% of the users, it’s overhead. That seems like something we’d also want to put into this Continuous Testing. Not just being able to say, “Hey, are we testing things right?” but, “Are we sure we’re testing the right stuff?”


GERIE OWEN: Exactly!


MICHAEL LARSEN: How would we encourage testers to get into looking at the analytics and being able to advocate?


GERIE OWEN: Well, there again, that’s the whole aspect of “being the champion of the customer and understanding what the customer really wants.” Yeah, that’s probably a culture change for testers. They need to look at the data, analyze it, and determine, “Are these the features we should be testing, or are there other features that we are not paying enough attention to that we should be?” Definitely! That’s all part of the culture change, and the whole DevOps team needs to look at that. That’s not just a testing thing. It is part of testing, but it’s something that the whole DevOps team needs to look at.


MATTHEW HEUSSER: I was working with a team recently and mentioned the “R.C.R.C.R.C.” heuristic for regression testing by Karen Johnson. They didn’t really have strong tooling in place yet. It was a new product, and they wanted a strategy just to get the thing out the door. It didn’t have GUI tooling. So, R.C.R.C.R.C. is Recent, Core, Risk, Configuration-Sensitive, Repaired, and Chronic. Core is essential functions which must continue to work. So how do you find those? Well, you have analytics, and you see that the 20% of the features customers use 80% of the time are your core features. For E-Commerce, maybe it’s a little easier, because you have Log-In through Check-Out. Search, Add to Cart, Checkout… those are kind of important. You can use your analytics to figure it out, but because this product had never been released—it’s Release 1.0, going out shortly—they didn’t have any data. It amazes me how many customers I work with that have a product that’s years old and either don’t have any data, or the testers don’t know where to get the data, or, if they know where to get it, they are not using it. It’s right there! I wanted to mention two pieces that I think are often left out of Continuous Testing. Gerie definitely implied them; I want to make them explicit. If you are doing something like Agile and you have stories, the story probably needs to be explored in significant detail by a human, so that’s actually like “Excellent Story Testing,” and then some of those explorations, some of those acceptance criteria, we’re going to institutionalize as end-to-end tests. Some small subset. Everybody that I have worked with that has had a lot of success has really done that feature testing.
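Matt’s “20% of the features customers use 80% of the time” can be computed directly from usage analytics. A sketch with invented usage counts, taking features in descending usage until roughly 80% of activity is covered:

```python
# Hypothetical per-feature usage counts from analytics (e.g. sessions or
# clicks); the names and numbers are illustrative.
USAGE = {
    "search": 4000, "add_to_cart": 2500, "checkout": 1800,
    "wishlists": 400, "gift_wrap": 200, "print_receipt": 100,
}

def core_features(usage, coverage=0.8):
    """Smallest set of top features whose combined usage reaches `coverage`."""
    total = sum(usage.values())
    core, covered = [], 0
    for feature, count in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
        if covered / total >= coverage:
            break
        core.append(feature)
        covered += count
    return core

print(core_features(USAGE))
```

The output is exactly the “Core” list for an R.C.R.C.R.C.-style regression pass; low-usage features like printing then get covered by risk-driven human exploration rather than the per-push suite.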


GERIE OWEN:  Right, and I think that we could go even further and implement Acceptance Test Driven Development, so that the testers are actually in on the development of the requirements to ensure the testability from their inception.


MATTHEW HEUSSER: Yep, and I would say that’s one of those things that can improve quality before you get to the end of the line, and then you’re going to have your acceptance tests. This is where I think a lot of companies fall down. “We have our ten things and we check them all off,” and say, “Yep, we checked our Acceptance Tests; put it in prod.” Or we just create ten automated tests, and one of them we can’t make because it’s broken, and we get that fixed and then run all our tests and put it in prod… or at least move it to staging, and then put it in the main line of the branch and then run all of the tests. That idea of “there are a lot more things I could test than these ten acceptance tests, and I’m going to run through them once,” and then a few of those are going to become the acceptance tests that come online… we lose that, and I think you see that because of things we can’t automate, like printing. I see companies just not testing at all because they forget it exists. That’s the risk. The other one is Continuous Human Exploration, which is when we’ve got that list of risks. There might be some things that you just want to… “Can you just spend an hour and check out how scrolling works in Google Chrome on the iPhone?” Because I’ve actually worked with large organizations that were doing just this, and they found the dollars flowing in were primarily from Google Chrome on the iPhone, so the smallest complaint got a lot of attention. But am I going to automate the speed at which scrolling occurs on that device in the cloud? If we fix it, maybe not. So I see all of those pieces fitting together.


GERIE OWEN: Yeah, exactly, and that’s all part of the analysis. Is this something that’s very important? Is this something that we can automate? Is it something that we need to test manually every time? That’s all part of eliminating bottlenecks. It’s all part of the analysis that needs to go into eliminating bottlenecks.


MATTHEW HEUSSER: Yeah! Yeah! Totally! Just the right tool for the job. If it’s a one-time risk that we want to check out, then we should have a way to capture that and decide if that’s the best use of our time, and if it is, we should make the investment, time-box it, and move on. So, we’ve talked about a lot; I don’t really have any closing thoughts. Usually this is the part of the show where we talk about what we’re working on now, and our audience hears from me a lot, so I’ll let Michael and Gerie do a parting thought: talk about anything that’s coming up, anything that we’re missing in this conversation about DevOps.


MICHAEL LARSEN:  And also, of course, ways people can get a hold of you, if you happen to be on Twitter… how people can find you, of course.


GERIE OWEN: Well, some of the things I’m working on are focused on teaching the whole culture of DevOps to testers. I just finished a DevOps 101 for our QualiTesters. I’m going to be doing more on the whole Continuous Testing topic, trying to help testers understand it. As we saw today, just even understanding the term, there’s a lot to it, so I’ll be doing more on that, as well as speaking at more DevOps Days, encouraging the whole DevOps process.

MATTHEW HEUSSER: OK! Is that everything?



MATTHEW HEUSSER: All right! I think we should call it a day. Thanks, Gerie.


GERIE OWEN: Right! We’ll be chatting soon.


MICHAEL LARSEN: All right. Thanks.


MATTHEW HEUSSER: Thanks, Michael.

MICHAEL LARSEN: Thanks everybody. Bye.


GERIE OWEN: Thank You. Take care. Bye Bye.


MICHAEL LARSEN: That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack Channel as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and also to help us shape future shows. Please email us at TheTestingShow(at)QualitestGroup(dot)com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, Moderated by Matt Heusser, with frequent contributions from Perze Ababa, Jessica Ingrassellino and Justin Rohrman as well as our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to BE a guest on the podcast, please email us at TheTestingShow(at)qualitestgroup(dot)com.

Thanks for listening and we will see you again in May 2018.

[End Outro]