The Testing Show: Claims and Practices, Part 2

February 25

We conclude our two-part series by talking about Code Review as a Service: who might need such a thing, what it promises, and how it corresponds with what organizations are actually doing.

As was the case last week, we share opinions and talk about the fact that marketing often drives perception, but the devil is always in the details. The details may not be as compelling or flashy, but they are relevant, and more often than not they tell a fuller story of what is going on.

Panelists:

Michael Larsen

Matthew Heusser

Jessica Ingrassellino

Justin Rohrman

Perze Ababa

Transcript:

MICHAEL LARSEN: What you are about to hear is part two of a two-part series based around claims and practices in the software testing world. If you have not heard the first part of this series, we strongly encourage you to go back and listen to it. However, we’re not the boss of you, so if you want to just plow right ahead and listen to this week’s episode without listening to the first part, who are we to stop you? Regardless, thank you so much for listening to The Testing Show, and… on with the show!
[INTRO]

MATTHEW HEUSSER: So, let’s move on, before I get myself in trouble. This was, I thought, interesting. There’s a new company called “PullRequest.” I want to talk to them; they do code review as a service. My guess is they’re going to have to move to a crowdsource model, if they’re successful. That’s the only way I see it working. They are talking about trying to have the same code reviewer with the same company for a long time. So, it’s a long-term contract. The code reviewers get to know the code base and have subject-matter expertise. It’s just a really cool idea. I want to talk about it. The reason I bring it up during this show is that they claim between 20 percent and 50 percent of a programmer’s time is spent in code review. I don’t know where that number came from, but it’s not 50 percent. It’s probably not 20 percent at most companies. I’m very skeptical. If I were them, the claim that I would be making is not that, “It takes time (actual hours).” It’s that, “It causes delays,” because you have to wait for the code review. Now your work-in-progress is growing, and now you’re multitasking. You get results back on the previous thing while you’re working on the next one, and that actually costs you time. But I don’t know where this 20 percent to 50 percent number came from. I have concerns about companies that throw numbers like that around.
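
To make the hours-versus-delay distinction concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (coding hours, review effort, turnaround time) is a hypothetical assumption chosen for illustration, not a figure from PullRequest or from the show; it only shows why reviewer effort and time spent waiting for review can differ by an order of magnitude.

```python
# A toy model of the point above: the cost of code review is less about
# reviewer hours and more about the delay while finished work waits.
# All numbers are made-up assumptions for illustration only.

CODING_HOURS_PER_STORY = 8      # hypothetical touch time per story
REVIEW_EFFORT_HOURS = 0.5       # actual reviewer effort per story
REVIEW_TURNAROUND_HOURS = 24    # hypothetical wait before feedback arrives
REWORK_HOURS = 1                # fixing whatever the review finds
STORIES = 10

# Effort actually spent reviewing, as a share of programmer time:
effort_share = REVIEW_EFFORT_HOURS / CODING_HOURS_PER_STORY
print(f"Review effort: {effort_share:.0%} of coding time")  # ~6%, nowhere near 20-50%

# Elapsed (calendar) time per story with and without the review queue:
cycle_no_wait = CODING_HOURS_PER_STORY + REVIEW_EFFORT_HOURS + REWORK_HOURS
cycle_with_wait = cycle_no_wait + REVIEW_TURNAROUND_HOURS
print(f"Cycle time without waiting: {cycle_no_wait:.1f} h")
print(f"Cycle time with a 24 h review queue: {cycle_with_wait:.1f} h")

# Over a batch of stories, the waiting dominates the actual review effort:
total_wait = STORIES * REVIEW_TURNAROUND_HOURS
total_effort = STORIES * REVIEW_EFFORT_HOURS
print(f"Across {STORIES} stories: {total_effort:.0f} h of review effort "
      f"vs. {total_wait:.0f} h spent waiting")
```

Under these made-up numbers, review consumes roughly 6 percent of programmer effort, while the review queue adds far more elapsed time than the review itself, which is the delay-and-multitasking cost described above.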

MICHAEL LARSEN: Well, that certainly doesn’t jibe with what I’ve seen. Again, I’ve worked with companies who are pretty meticulous and pretty sincere about their code reviews, but 20 percent to 50 percent, nowhere near. I would be surprised if it’s 10 percent, to be honest.

PERZE ABABA: I think it depends on where you’re looking at this and what the scope is. We have this one guy at work; I’m just going to call him “Mr. M.” “Mr. M.” is known to be one of the strictest code reviewers in the company. If he’s the one assigned to perform code reviews, you had better write your code well, because he will go through every single nook and cranny of your code, figure out what’s actually affected by what you did, and tell you how you broke it in a very elegant, very straightforward, dispassionate manner. It pretty much boils down to, “This guy is sort of the Game of Thrones of code reviewers.” His approach is something that we’ve actually tried to model across the company. Not everybody has the same line of thinking as he does, but we came up with some guidelines based on the comments that he leaves on our JIRA tickets. So I guess it’s a certain expertise, but he does understand the context. He does understand the technology. He does understand the industry. He does understand the customers as well, and how they get affected. So, it might not be the way your typical PullRequest reviewer will do it, but I guess it depends on how much time you have to go through what you have. If you’re not under the gun to finish something, and you’re not just someone on the team who has to look at somebody else’s work while your own work is now at risk of missing a deadline, then there is value to that: you have a very dispassionate way of looking at someone else’s code and making sure it fits specific guidelines. But what I’m skeptical about is the context some of these code reviewers are coming from. It can be as scary as a person trying to enforce a very specific coding best practice against a certain context, so to speak.

JUSTIN ROHRMAN: Yeah. I worry about this in the same way I worry about all crowd services: they all lack important context, and you need that to do good testing or good code reviews. Unless their only job is to review a bit of code against coding standards, they would need a lot of understanding of the code base. How effective would the service really be at the things that code reviewers generally do?

MATTHEW HEUSSER: Yeah. Well, the only thing that’s been mentioned online—I’m reading the comments—someone pointed out, “Code review is useful for knowledge sharing, so you learn about other aspects of the code base that you might touch someday, especially on a small team. So, outsourcing that means you’re keeping your programmers, probably, in their silos.” That’s the only real criticism I’ve seen so far that seems legit. That’s an interesting challenge there.

JESSICA INGRASSELLINO: Yeah. Before you mentioned the comments, that’s one of the things that I was thinking about. Even for testers, that can be a really good way to understand—even if you’re not 100 percent sure what’s “in the code”—the trends of what’s going out of the code, what’s going into the code, whether it’s a bug fix, and you can see the comments on the code. So, you start to learn about how your development team works, what kinds of things they value, what kinds of bugs are getting what kinds of attention from different members of your team. The larger the team grows, of course, the trickier that gets. But, when you’re working on a small or mid-sized team, that information can be really valuable for a tester as well, and I would worry about losing that kind of insight if, say, somebody else were to be doing that. Although, there’s nothing stopping or preventing somebody from going in and looking at a pull request, theoretically. Right? So, that’s something that could happen. But, would you do it if you weren’t pulled in on it? That’s kind of the question, because we’re all operating with all cylinders firing at all times (it seems). So, I don’t know.

MATTHEW HEUSSER: Okay. So, again, we have, “It seems interesting. I want to learn more, but where did your claim come from? I don’t know. I don’t know how to find out.” It’s just too common. Let’s see: “industry estimates peg code review taking between 20 and 50 percent.” There’s no link. There’s no reference. There’s nothing to go read. I’ve done presentations. You guys have seen me do presentations where I say, “Touch time on a typical story on an Agile team in North America is between 10 percent and 30 percent.” It depends on how you measure it. It depends on whether you measure from when the story is first defined or from when it enters the sprint, and that changes the numbers. But I think that when I say those things, no one raises their hand and calls, “BS.” Secondarily, I’m more likely to actually poll the audience to get a feel for what those numbers are, rather than just throwing them out. So, it’s hard. I think Bach calls testing “applied epistemology,” and epistemology is the study of what we know and how we know it. So, welcome to it. Closing thoughts?

MICHAEL LARSEN: As is always the case, “the devil is in the details,” and I think there are a lot of interesting opportunities here. I’m not going to sit here and say that there isn’t a space for this, or that it’s all BS. There probably are legitimate uses for these items; but, as is often the case, we’re looking at a lot of hype and a lot of marketing, and I’m much more curious to see who’s actually adopting these things. With so much churn right now, with so many differing levels of activity and smaller organizations getting some traction here, it’s not just, “Well, let’s use whatever works and everybody else adopts it.” Now there is a space for just about everything out there. It’s kind of a Wild West approach right now. It’s going to take us some time to see which of these things are actually going to come into play. “What will Machine Learning really tell us? Will it really do our testing and make our coffee for us?” I don’t think so. “Is it going to help us pinpoint some data elements that we can categorize and utilize as part of our testing?” Yeah, but that’s nowhere near as cool to talk about.

MATTHEW HEUSSER: Right. Yeah. The small plus-one advantage, which is worth talking about, is drowned out by the potential, “Not real, probably not going to happen, revolutionize the industry plus one million,” because marketing.

JESSICA INGRASSELLINO: “Because marketing.”

MATTHEW HEUSSER: Other thoughts?

JESSICA INGRASSELLINO: [LAUGHTER]. I love the way you phrase things. That’s great, “Because marketing.” I mean, I guess, overall, in this conversation, I’m thinking about broader possibilities, and “because marketing,” you know, it does give us these kinds of claims that we should examine if we are going to consider doing anything. And also, “Okay, well, what is definitely way in the future? What are some things we can do now?” It also makes me think, back to earlier in the conversation, of continuing to draw from all of our knowledge and all of our resources that extend well beyond testing or technology and into other fields—fields that seem disparate at the moment but in reality have a lot to offer us in terms of pushing the envelope in what we do, what we accomplish, and how we accomplish it, whether it’s inside of our organizations or with a combination of our organizations’ tooling plus custom tooling. I think it behooves us as testers and as people who study “the craft” to keep our minds open to all of these things and make sure that we don’t get locked into any kind of boxes, or lulled into the security of claims, or too skeptical of claims. Because, you know, a radical claim might spark a really good idea.

MATTHEW HEUSSER: Yeah. That’s a good point. Sure. It’s too easy for us to shout down ideas. Thank you, Jess. As always, the voice of reason on the show. Justin? Perze?

JUSTIN ROHRMAN: As always, my thoughts on the matter are fairly distilled, and they’re pretty much exactly what you and Michael said. I think if you take these claims and get really specific about what the companies mean by the words, then it’s a lot less contentious. There is probably some use for it in there somewhere, but it makes the marketing impact less valuable for the companies using it. So, here we are.

PERZE ABABA: Yeah. There’s definitely a lot of potential with all of these things that are coming up. I mean, I’m trying to get out of the habit of shooting down things that don’t necessarily give me the data that I need to understand what they’re talking about, or, “Give me more information to show me, you know, the value of what you’re saying.” But just because I don’t understand it doesn’t mean it doesn’t work. So, that’s the challenge on my side. With all of these things that are being given to us, there’s got to be something within our context somehow that can use them. I do think that Machine Learning, you know, has a future in helping us test better. I’m trying my hardest to still understand it and how to apply it, you know, within my context. But, you know, I am looking forward to what it can do, given how other areas—movie recommendations, for example—have gotten a lot of gains, primarily because of very specific Machine Learning algorithms. If it helps my team to test more effectively, or if it helps my team to be better at calculating risks, for example, on “what we should test now,” because there’s only so many of us to go around, then I am definitely all for it. I think it’s a very exciting future, but, you know, we’ll see when we get there.

MATTHEW HEUSSER: Yeah. Things are definitely getting more complex. We’re talking about creating multiple instances of virtual machines in the Cloud and spinning them on and off in a blue/green split framework tool, in real time, with push-button deploys, and all of that. It’s an entire level of abstraction that just didn’t use to exist. So, “Could we use Machine Learning at some point to improve that?” Maybe. Probably worth watching. Any announcements? Anybody doing anything? Anything exciting coming up?

MICHAEL LARSEN: Hi everybody, this is Michael, and I’m flying this in in post, which means this is after we recorded everything, and for my shameless self-promotion I am going to promote our show. Literally. So, you all know that we very much appreciate your reviews and your sharing the show with others. It really helps us. The more visibility the show gets and the more people get to listen to it, the more it allows us to be able to bring the show to you. Frankly, I want to say “thank you” to those who are reviewing the show, and I would like to say thanks to our most recent reviewer. So this one goes out to “Veretax,” who gave a review titled “Relevant and Relatable Content”: “I’ve been following Matt Heusser and so many of the other guests on The Testing Show for something like thirteen years. That includes Matt’s original podcast, one hundred episodes of testing content and discussion, on which I guested on more than one occasion. [He’s talking about “This Week in Software Testing”, which is a show that Matt and I did with Software Test Professionals]. With Michael Larsen as producer, the production values and intellectual value of each show is adequately sized and quite filling. Every week, they cover another topic that aligns with issues I can relate to, and if they haven’t yet hit a topic that you are interested in, send them a line and they may cover it in a future episode.”

Well, Veretax, thank you so much for the great words, and hey, you are definitely welcome to be on the show; if there’s a topic you’d like to hear us talk about and you feel like you can address it, we’d love to have you! And that is an open invitation to everybody who has something they’d like to share or talk about. If you would like to be part of this show, plain and simple, send us an email at “thetestingshow(at)qualitestgroup(dot)com”, tell us what you’d like to hear, tell us what you would like to talk about, tell us why you would be a great guest for the show, and, chances are, if you are talking about something that you have expertise in, you probably are a great guest for the show and we’d love to talk with you. Not to beat the drum too hard, but your reviews really do help us. If you write a review for us and post it on Apple Podcasts or in other locations and send us a link to the review, we will gladly read it on air. And with that, back to the show…

MICHAEL LARSEN: Well, let’s see. So, this show is scheduled to go live in the middle of October at this point. So, by the time this show goes live, I shall either be presenting or will have just finished my talk about Accessibility and Inclusive Design at the Pacific Northwest Software Quality Conference. I’m also planning on doing a dry run of that talk for the Bay Area Software Testing Meetup, which is going to happen before this podcast goes live. But I do want to give a shout-out. Seeing as Perze always gets to talk about the New York City Testers Meetup, I think it’s only fair that the Bay Area Software Testers get a little bit of plug time. The fact of the matter is that Bay Area Software Testers is a Meetup that’s hosted in and around the San Francisco Bay Area, primarily in San Francisco. We have a fairly small, but dedicated, group that looks to put these events on. We are looking for speakers. We are looking to encourage speakers from any discipline in and around testing, specifically those that are not around automation tools. We have so many Meetups that cover those. We’re looking for those other topics, those other things about testing, much like what we do here on The Testing Show. If this show and the way that we do it interests you and you’re like, “Yeah, I wish people would talk about more of those topics at our Meetups,” well, BAST is a place where you can do it. We want to encourage you. If you happen to be in the Bay Area, or heck, if you happen to be visiting the Bay Area during the time we’re doing—

MATTHEW HEUSSER: Or, get a grant. Get a grant for your travels.

MICHAEL LARSEN: We already did.

MATTHEW HEUSSER: From the Association for Software Testing.

MICHAEL LARSEN: We got a grant for this year.

MATTHEW HEUSSER: Did you spend it yet?

MICHAEL LARSEN: We have not spent it yet, as a matter of fact.

MATTHEW HEUSSER: So, there you go.

MICHAEL LARSEN: So, there you go. So, if somebody wants to come in and talk about something that is burning up in their reality and they would like to present it to some Bay Area Software Testers, BAST would love to have you. I will link in the Show Notes how to get more information about BAST; and, if you’re in the Bay Area and you’re curious as to what BAST is all about, come on out and talk with us.

MATTHEW HEUSSER: Great. Anybody else?

JESSICA INGRASSELLINO: Yes. At the end of November, I will be doing a couple of things at Anna Royzman’s conference, ConTEST NYC, and I will be delivering the Closing Keynote. I will also be doing a Full-Day Master Class on Developing Test Strategy, including, but not limited to, automation. It’s for people of every range to work collaboratively to develop skills, from expert coders all the way to, “I have never written a line of code in my life.” I’m using some differentiated instruction techniques that I’ve used in my coding classes and my music classes to actually bring together groups of people in hopes that we can all learn from each other’s experiences. It will be multilanguage, so people don’t need to be too concerned about whether they’re in Python or Java or Ruby, and I’m really looking forward to teaching that. Then, I’ll be presenting a talk with Anders Dinsen about play and the improvisational process and how that can be helpful to testing, and we’ll also be using protreptic dialogue to help testers really dig into that aspect of their own thinking. So, overall, it should be a really exciting three days.

PERZE ABABA: All right. I guess that would be my cue to talk about, you know, NYC Testers. We’ve had a hiatus this past summer because of how busy the organizers are, and we’re trying to evaluate what our approaches and strategies are. But when we do decide to have, you know, the next NYC Testers Meetup, we will make sure to have an announcement. So look out for it this coming fall, and we do look forward to seeing everyone again in the Greater Manhattan Area.

MATTHEW HEUSSER: Yes.

JUSTIN ROHRMAN: I just got done running CAST, maybe three weeks ago. So, I’m taking a much-needed break.

MICHAEL LARSEN: [LAUGHTER].

JUSTIN ROHRMAN: Nothing big going on for me at the moment.

MICHAEL LARSEN: And, well deserved. For those who did not get a chance to go to CAST, it was amazing. It was really cool, and I think the venue is going to be so hard to top.

[LAUGHTER].

JUSTIN ROHRMAN: Awesome. Thanks. I’m really glad.

MATTHEW HEUSSER: I thought CAST was great. How many attendees did we have?

JUSTIN ROHRMAN: We had about 160 people.

MATTHEW HEUSSER: That’s what I thought. It felt like a smaller Conference this year, but it was just amazing. So, I’d like to do a Testing Show on Marketing Testing, because I think we’ve got this beautiful jewel. CAST is nonprofit. Nobody made any money. They pay the keynoters a little bit, but Justin’s a volunteer. It was in his hometown, so I think (like) he didn’t even get a hotel. Right? He probably went home.

JUSTIN ROHRMAN: Yeah. That’s right.

MATTHEW HEUSSER: So, because it’s nonprofit, mostly not run by what I’d think of as “business people” (they’re testers), it’s got a really sweet vibe to it. There’s a lot more. It’s been contentious in the past; this year, it felt pretty collaborative. I was probably the most contentious person there, which tells you something. But it really felt collaborative. It feels like we’ve got this beautiful jewel that people don’t know about. If I can help get the word out, I’m really proud of that. I’d like to do a show on marketing testing. Not selling it, but marketing it as a service that is valuable. Part of that is finding our customers—

MICHAEL LARSEN: Maybe using some Machine Learning?

MATTHEW HEUSSER: Oh, “Finding our customers’ unmet need.”

JESSICA INGRASSELLINO: [LAUGHTER].

MICHAEL LARSEN: “Because marketing.”

MATTHEW HEUSSER: Right. Yeah. Right.

JESSICA INGRASSELLINO: [LAUGHTER].

MATTHEW HEUSSER: Yeah. Yeah. I do not mean making up buzzwords to scare people or intimidate them, or otherwise make them think they’ve got to jump on the bandwagon or they’re going to die, because if they don’t do this, they’re a dinosaur. Not that anyone in our industry has ever done that. By that, I mean, part of it is discovering the customers’ unmet needs and then figuring out how to explain them to the customer, so the customer comes chasing after you, asking for help. As opposed to—and we do this, like, just as testers or as test managers, just on the job, with our business customers, developer customers—selling testing, which would be chasing after them saying, “You really want this. You really need this. That can’t go to production. It hasn’t been tested yet. You’ve got to test it. You’ve got to test it. You’ve got to follow the process.” Maybe that’s bad selling. I guess selling can be done well. But selling is where you’re working to convince the other person, and marketing is where they come to you, and the activities that make them come to you. I think it’d be fun to talk about. Plus, we’re going to have an interview with the CEO of QualiTest coming up. But, for some reason, he seems busy and hard to schedule. It’s crazy. I don’t know. He seems to be running a business or something. State of Testing Survey: QualiTest partnered with PractiTest on another State of Testing Survey, which we’ll link to in the Show Notes, and the company has its own analysis; I’ve been following them for a couple of years now. Surveys like this are very hard to do well with any degree of accuracy or meaning. We don’t even know what the words mean on the multiple-choice questions that we get, and we don’t know what people mean by their answers. We don’t know which percentages are valid, but you can do a better or worse job on the analysis, and I think that PractiTest does a better job. So, that’s all I’ve got for now. Thanks everybody for listening. Thanks everybody for being on the show.

MICHAEL LARSEN: Thanks for having us.

JESSICA INGRASSELLINO: Thank you.

JUSTIN ROHRMAN: See ya.

PERZE ABABA: Thank you.

[END OF TRANSCRIPT]
