In this episode, The Testing Show crew is joined by Yaron Kottler, QualiTest’s US CEO. We discuss the changes at Yahoo and the elimination of most dedicated software testers, how this change is happening at a variety of companies, what these changes actually mean for the dedicated testing role, and what may happen in the future.
We discuss the idea of software testing as a trusted advisor to an organization, the benefits when that role works, and the frustrations and costs when it does not.
Panelists:
- Matthew Heusser
- Michael Larsen
- Justin Rohrman
- Yaron Kottler (guest)
References:
- Yahoo’s Engineers Move to Coding Without a Net
- Testing vs. Checking
- Collins, Harry & Evans, Robert, Rethinking Expertise
- Hendrickson, Elisabeth, Better Testing, Worse Quality?
- Grove, Andy, Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company
Transcript:
This episode of The Testing Show is sponsored by QualiTest. Deploy Software that You and Your Customers Trust with QualiTest Testing and Business Assurance Services. Learn more about QualiTest at www.qualitestgroup.com.
MATTHEW HEUSSER: Welcome to The Testing Show. I’m your host, Matt Heusser. Today, we have Michael Larsen from Socialtext.
MICHAEL LARSEN: Hello from San Francisco.
MATTHEW HEUSSER: Justin Rohrman.
JUSTIN ROHRMAN: Hey there. Good morning.
MATTHEW HEUSSER: And Yaron Kottler, the CEO of QualiTest USA. Welcome to the show, Yaron.
YARON KOTTLER: Thanks. It’s great to be here.
MATTHEW HEUSSER: I know you’re a busy guy. We really appreciate it. So, let’s get started. The first issue I wanted to discuss—I think it’s the big news for the tester world, and it’s not really a surprise—Yahoo recently eliminated their traditional tester role. They announced it publicly in a big article by IEEE Spectrum. So, what do you think of that?
JUSTIN ROHRMAN: I was reading through this article. They didn’t actually get rid of “testing.” They just got rid of the “tester role” and the “bad testing” that was happening in the organization. Is that right?
MICHAEL LARSEN: That looks to be my read on it. It’s not so much that testing has gone away. They’ve decided instead to have the development team or the programmers be responsible for quality up front. Whether or not they do it with automation or they’re doing the testing themselves or they trade off a role, testing is still happening. They just don’t have a separate and distinct test team doing the job any longer.
MATTHEW HEUSSER: It’s really not clear to me, because they have some testers who have been allowed to keep their roles but now they are doing automation. That’s what the article says.
JUSTIN ROHRMAN: Are these like Developers in Test? They’re basically writing test infrastructure, tooling, frameworks, and stuff like that?
MATTHEW HEUSSER: I think that is the case. Maybe we need to get someone from Yahoo on the show. We were talking before we hit record, and Yaron pointed out that the context-driven testing school likes to separate testing and checking. So, risk management is still happening, but I’m not quite sure how that risk management has shifted. Clearly, there are some people writing checks in tools like Selenium that will run on every build. They’re doing continuous deployment. The customers can tell them if there are problems. And, I’m sure there are people saying, “I wonder what happens if I click this?” But, it’s really not clear to me how much testing is happening and where.
JUSTIN ROHRMAN: If you have these automation frameworks, I think they just can’t exist without people testing before and after. I mean, to even have an automated test run, you have to have run through the product yourself. Well, ideally, you’ve, [LAUGHTER], run through the product yourself and seen for yourself how it works or does not work. And then, on the back end, if something fails, you’re going to be stepping through that same product area to investigate. So, there’s always testing on the front end and back end of these automated test runs, or whatever you want to call them.
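To make that concrete, here is a minimal sketch of the kind of per-build check the panel is describing. It is not Yahoo’s actual suite; the URL, route, element names, and title text are hypothetical, and it assumes Python with Selenium 4 and Chrome available in the build environment.

```python
# A hypothetical per-build UI check, the kind of "automated checking" discussed above.
# A person still has to explore the page first to know what to assert, and a person
# still has to investigate whenever this check fails.
from selenium import webdriver
from selenium.webdriver.common.by import By


def check_login_page_renders(base_url: str) -> None:
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")  # no visible browser on a CI machine
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(f"{base_url}/login")  # hypothetical route
        # These locators encode what a tester learned by exploring the page by hand.
        assert driver.find_element(By.NAME, "username").is_displayed()
        assert driver.find_element(By.NAME, "password").is_displayed()
        assert "Sign in" in driver.title  # hypothetical page title
    finally:
        driver.quit()


if __name__ == "__main__":
    check_login_page_renders("https://staging.example.com")
```

A check like this only earns its keep alongside the human testing Justin describes: someone has to know the page well enough to write it, and someone has to step back through the product when it goes red.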
YARON KOTTLER: I don’t think anybody is talking about “no testing,” as Matt pointed out. They’re talking about eliminating the dedicated, independent testing role and forcing everybody to take much more responsibility for what they do, at least at a team level. Having worked with a lot of different organizations that are going through the same sort of migration into the so-called “DevOps continuous build, continuous release, continuous test” model, it is indeed much more automated checking. It is much more end-to-end user acceptance. It is much more risk management, as well as testers becoming much more involved in surrounding activities like release management, change management, data management, and so on.
MICHAEL LARSEN: That’s been my experience. Socialtext has been moving to more frequent releases, and it has been shifting the way it tests. We still do a lot of exploratory testing. We still do a lot of what would be classic quality assurance software testing role work; but, in addition to that, as more of this stuff has become automated and part of the general flow of getting your release out, we have migrated into doing different things. In addition to being a software tester, I’m the release manager, and a fair chunk of my time is spent making sure that the deployments of our software are clean and make sense. Actually, these days I test that a fair amount more than it seems I test the software.
MATTHEW HEUSSER: You make an interesting point there at the end. Socialtext has a test team with more than one person on it, but I think the value of testing is often unclear. Testing is that thing we have to do so we can get to production. And, as soon as it becomes “just another step in the process that we don’t understand,” then you get the “How can we do it as cheaply as possible? How can we do it as quickly as possible? Let’s outsource it. Let’s offshore it. There’s no value in it. Ideally, we shouldn’t have to do it” kind of thinking, and eventually your team goes away. I have mixed feelings about that. I can totally see how testers, technical staff who aren’t primarily production coders, need to stay relevant, and that job is going to change over time.
But, part of me says, “Traditional activities are less relevant, less valuable,” and part of me thinks, “We haven’t done a good job of explaining the value we bring, and in some cases the organizations aren’t doing a good job.” Reading the Yahoo piece, the work those testers were doing seemed like something out of the mid-1980s. They were creating manual test scripts, maintaining them, and running them as part of a large batch regression testing phase that happened at the end, which meant it took more than a month to deploy any change. Is it just me, or does that seem silly?
YARON KOTTLER: I would say that Yahoo is not the first organization to make that transition, definitely not in Silicon Valley. This started 5-to-10 years ago. Facebook, Google—they’ve been doing this sort of thing for years. And, I can tell you for a fact, working with both organizations, there’s still a lot of testing going on there, and there are still a lot of dedicated testers. It doesn’t mean that all phases in the product or all phases in the development lifecycle have them, but a lot of testing is still going on and a lot of independent testers still exist. It is, in many cases, a question of elements like product maturity, how critical the product is, and, probably most importantly, how good, mature, and communicative the testing organization is at demonstrating the value that you’re talking about.
MATTHEW HEUSSER: It’s more shocking to me that an organization like Microsoft recently eliminated the tester role, the Software Development Engineer in Test, and said, “We’re going to do concurrent engineering,” because a lot of the SDETs were doing much more valuable things. I guess Microsoft is so big it doesn’t matter. But your greater point is that testing as a separate and distinct role has moved to a place where we bring testers onto certain projects when we need them, when they can add value. I think that’s more of a shift in some of these cutting-edge companies. There certainly are internal projects, where you’re creating a report, where you can do just fine on your own. So then, the testers are going to have to be a lot better, and there are going to be fewer of them. Like, right now, I’m on a project where I got a one-hour briefing of, “Here’s how to use the tool, now go,” and I don’t know that all testers can do that.
YARON KOTTLER: I don’t think that all domains enable you to do that. There is a sense of understanding the business context, and I would say that any sort of product a company like Yahoo, Facebook, or Google develops is probably easier to understand than, say, a company that generates complicated statistical models that drive early-stage identification of cancer in certain types of images. That’s not exactly something you can get a one-hour overview of and jump into, so to speak.
MATTHEW HEUSSER: Yeah. I think that’s a great example. The project I was talking about was e-commerce retail, and you couldn’t just hire someone off the street, even if they had testing skills, and have them jump into that. The pixel placement of a bullet point was something I spent half an hour on yesterday, because conversions matter. It was actually working in IE10, and it was off by about 7 pixels in IE11. There are plenty of contexts where it’s, “I just don’t care. Whatever. It’s not a bug.” Figuring out whether or not it’s a bug and who to talk to, yeah, I think you make a strong point there. Domain expertise is something I think we tend to ignore. “Any good tester can jump from any project to any project,” and that’s not really true.
MICHAEL LARSEN: Yes. There are certain testing skills, and knowing how to utilize them and being able to pivot from one opportunity to another is important. I used to believe that a tester could test anything, and then I went to work for a company that did capacitance technology using human body model measurements for movement. The electricity inside of your body actually drives that. Now, how are you going to test that in a realistic way, especially, say, 10-to-15 years ago when this was still being pioneered? I had that experience of having to discover how much domain knowledge I had to learn to be effective in that role; and, in some ways, I ultimately struggled with it. I moved on to do other things, such as software testing for immigration law.
JUSTIN ROHRMAN: How long did it actually take you to learn these domains and start being effective? There’s this guy named Harry Collins. He wrote several books, and one of them, Rethinking Expertise, models how you come to have some amount of expertise in a particular field. One of the premises in the book is that, through total immersion in a field, you can get good enough pretty fast, on the order of a couple of months in most domains. That might not be true if you’re looking at NASA or trying to figure out the physics of lasers; but, for most of us, that’s just not what we’re doing.
MICHAEL LARSEN: Absolutely. And, Rethinking Expertise, fantastic book. There is a certain amount of knowledge that anybody can pick up. They refer to it as “beer-mat knowledge”: trivia, stuff you can use so that you have a general understanding of how something works and can talk about it with half a brain. Many of us can get to that level fairly quickly. “Can you be really effective in areas that are deeper or that require a little bit more analysis?” I’m going to say, “That’s probably going to take more time.” It may not take a tremendous amount of time, but it’s definitely going to be more involved than just quickly looking at something and saying, “Uh, yeah, I think that looks good.”
MATTHEW HEUSSER: Yeah. And, to be clear, when I said earlier that we tend to discount expertise and think that a tester can jump in, I don’t like that rhetoric. I think that’s not right. Testers tend to use that language: “Oh, it’s pretty easy. You can pick it up pretty quick.” I have mixed feelings about that. As you mentioned, there are some contexts where that matters more than in others. Certainly, learning to learn is sort of a meta-skill that can be valuable.
YARON KOTTLER: Perhaps it would be worth also thinking about what sort of culture gets to a point where testers are considered blockers, bottlenecks, stoppers to a certain extent, and are therefore eliminated. Not to say that there aren’t benefits to it. Reading the Yahoo piece, it’s pretty clear the sense of responsibility for one’s work has grown significantly across that entire team. They don’t necessarily talk about all of the problems that they’ve dealt with, and they, themselves, talk about how long it took to get to that point, I think a year and a half or two years. But, what does it mean that, in a certain organization, those sorts of discussions are happening?
MATTHEW HEUSSER: That’s a great point, Yaron. One thing we should link to is the old Elisabeth Hendrickson piece, Better Testing, Worse Quality? I think she did a presentation on it 10 years ago, and her point was that developers are under pressure to deliver new features.
Maybe they come out with Version 2.0 and it’s got a ton of regression bugs because they don’t have a test practice. So, they create a test practice. They hire a bunch of testers, and it’s the testers’ job to do the testing. The developers are under intense pressure to meet deadlines. So, what do they do? They do bad work, and they throw it over the wall to the testers. It’s one of the, sort of, classic examples I have seen where testers are a safety net. I think that’s good. But, in that case, it’s more like, “Great! Now that we have a safety net, we can take ridiculous risks.” And, that sets up this separation between “your work” and “my work.” That sets us up for, “Why didn’t QA find that bug?” “Well, wait. Why was the bug created?”
JUSTIN ROHRMAN: There are slogans out there that encourage this mindset. [LAUGHTER]. The whole, “Go fast, break things” mantra.
MATTHEW HEUSSER: Now, I don’t have a problem with, “Go fast and break things,” if you’ve got all the continuous delivery pieces in place and your customers are getting your software for free.
YARON KOTTLER: And, I’m not sure they “break things” in production… too often.
JUSTIN ROHRMAN: [LAUGHTER]. Valid point.
MATTHEW HEUSSER: Yeah. I read that Sauce Labs was down yesterday, for something like 24 hours. There are companies that rely on that to run all their Selenium tests, and, again, that’s something software companies are paying for.
MICHAEL LARSEN: I think a lot of this also comes down to the idea of, “What is being provided, and what is the level of expectation?” There is a continuum. I think sometimes we’re guilty of looking at software as though it’s a “zero-or-one” type thing, to borrow the metaphor. Not all software is developed the same, not all software serves the same purpose, and not all software has the same level of expectation. Everything we build needs to meet some need; otherwise, we’d be doing something else. Because of that, we often fall into the false dichotomy of, “Are you playing a video game, or are we developing a pacemaker that has a huge impact on my or somebody else’s life?” You’re not going to test those things the same, and I think it’s important that we understand the rhetoric behind that, too. And sometimes, I think we get a little bit overwrought with it: “What is our purpose? Is our purpose to be the last person on the field to make sure that bug does not go out into production because—what?”
MATTHEW HEUSSER: Well, speaking of those questions, I think those are great questions. One of the things I suspected when I read this piece was that they didn’t have an answer to those questions, and they didn’t have anyone—this is my suspicion, and I could be wrong—there wasn’t anyone speaking to the executive level about the value that they were providing. Test was not a trusted advisor. Test was not providing information to help people make decisions. Because, if they were, I suspect the conversation around, “How do we change to move faster?” would’ve been very different. Yaron will point out that there’s lots more going on here than just that. They were using testing practices that were 20 years out of date, and they were developing in big batches and had to regression test everything.
And, maybe, part of that was just that the programmers were writing too many bugs and not taking personal responsibility for their work. In any event, I think one thing we could talk about as a main topic is: Is that trusted advisor role something that testers should have; and, if it is, how do we pursue it? Because testing, in my experience in large organizations, is continually being reframed as low value: “How do we do it cheaper? How do we do it faster?” The common complaint is, “We’re not invited into the room for the decisions.” Should testers be doing something about that? And, if so, what?
YARON KOTTLER: As someone who sells testing for a living, I can tell you that becoming that trusted advisor is one of our main goals. The QualiTest mission is to become our customers’ trusted advisor in that space, and trust is earned. Trust is not something that is demanded or regulated; it’s something that is earned. If I were working at a place where that was neither the case nor had the potential to ever become a reality, I would frankly ask myself if I wanted to stay there. Because, I can tell you, having a very broad view of the industry, that while you’re right that there is a drive to commoditize and cut costs and so on, there is a parallel drive to do more and more testing, and to have testing be the part of the SDLC that enables time to market rather than standing in the way of it, that delights our users, makes sure they get exactly what they want, and manages risk. So, there are trends and countertrends, I would say.
JUSTIN ROHRMAN: Yeah. I think, actually getting to the role of being a trusted advisor, even if you, like, seek out testing knowledge, it’s hard to find information about how to do that. If you’re from the traditional school, you talk in terms of, “Test cases—what’s done and what’s not done.” That doesn’t really give people information, and intuitively I think they figure that out eventually. Or, if you’re a software-testing geek, you start talking in terms of, “Oracles and heuristics and social science and measurements” and things like that. And then, the people just don’t want to hear that.
MATTHEW HEUSSER: Yawn. Yawn. I just fell asleep listening to Justin. Just kidding.
JUSTIN ROHRMAN: [LAUGHTER]. Buzzkill.
MATTHEW HEUSSER: But, you know, social science, measurement, oracles, and heuristics, let’s talk about things that people understand even less.
JUSTIN ROHRMAN: That’s my point. The executives aren’t looking for that stuff. They want information about how we are impacting the business right now.
MATTHEW HEUSSER: I totally hear you, and I agree with you. I was just trying to amplify that and make sure the audience got it.
JUSTIN ROHRMAN: [LAUGHTER].
MATTHEW HEUSSER: My experience has been that executives want to talk about money, “How does this impact money?” Midlevel managers tend to talk about process, “How many test cases did we run yesterday, and how many test cases are left?”
And, “How can we predict when we’re going to be done?” And technical people tend to talk about things. Being able to actually speak at the level of the person you’re talking to, to adjust your message, is a really powerful way to do it. In a meeting with an executive, we might say, “Look, I’m not going to hit my date,” or, “We lose $1 million a week every week that this thing doesn’t go out. Okay, well, we can get it to here, which should earn you this much money, by skipping these seven steps, which have these risks,” and that gets it down to a one-paragraph sound bite. I’ve had a lot of success with that. The next challenge is that you have to be in the room to be able to say it. So, you have to kind of work your way through middle management. When your value starts to be questioned and people don’t understand it, and your response is something like, “Well, of course, we need to manage the test data with heuristics and oracles,” that’s not good.
MICHAEL LARSEN: I think that pretty much sums it up. For us to be trusted, it’s the Covey line: Seek First to Understand, Then to Be Understood. If we’re so focused on trying to just get our point across, we’re not going to reach that point of being trusted, because really we’re just there to hear ourselves talk. And so, it’s really critical that we take the time to say, “What is it that our customer needs, and are we actually meeting that?” And, be brave enough to say, “If we’re not meeting it, what do we need to do so that we can?” And sometimes, that conversation is going to be difficult, and it might actually mean I have to completely relearn how to communicate with my teams or with my upper management. Or, I may need to take on a totally different level of responsibility than what I have, and it may not be classic testing as I’m used to doing it.
MATTHEW HEUSSER: You know, we’re talking pretty abstractly. It’d probably be good if I gave a couple of real examples. It’s common for me to come into an organization that is convinced that testing is late or behind. One way to get people’s attention is to help solve problems and actually make a commitment. If you say, “This will take me 2 weeks,” and you get it done in over a month, and then you say, “It’ll take me 3 weeks,” and you get it done in a month and a half, that’s a great way to lose any influence you had. Instead, we can do an analysis and say, “These are the reasons the testing is taking so long.” It’s actually setup. At one company I worked with recently, 90 percent of the tester’s time was setup. So, if you did the math, if he had a tool so that the setup could be scripted, testing would immediately stop being the bottleneck. We can also measure the process and say, “Yes, testing is the bottleneck by this much.” Or, “It’s not that testing is the bottleneck; it’s just that testing is at the end,” so we could find a way to dribble work out. Or, regression testing is taking us 2 weeks. That’s pretty common. We want to deploy once a month, so that’s half of our time spent in regression testing, and the developers are sitting around waiting for bug-fix notes. We could get that down to 1 week. And then we could do this, and then we could do that.

So, once we’ve tightened up testing so that it’s not perceived as a problem, it has the same throughput as everything else, and it happens concurrently with everything else, then we can use those same tools to analyze the rest of the process. Now we’re talking about, “Gee, we have this many bugs popping out of development; we think we can cut that number in half” with whatever prevention technique we want to put in place. If we prevent those, we’re going to save this much time on find, fix, and retest. Now we’re talking in terms of time and money. We’re going to improve velocity. We’re going to get to market faster.
Now we can get someone’s attention, because we’re speaking their language. Making and keeping commitments, getting testing out of being the bottleneck, and being able to quantify the potential of the improvement are, I think, three ways to walk in the right direction.
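As a rough illustration of the “speak in time and money” point, the sketch below does the back-of-the-envelope math. The numbers are hypothetical, chosen only to mirror the figures mentioned in the conversation (a monthly release, two weeks of regression testing, a one-week target, and a $1 million-per-week cost of delay).

```python
# Back-of-the-envelope math for the "quantify the improvement" conversation.
# All figures are hypothetical and exist only to mirror the example above.

release_cycle_weeks = 4              # "we want to deploy once a month"
regression_weeks_now = 2             # regression currently takes two weeks
regression_weeks_target = 1          # what scripted setup / tooling might achieve
cost_of_delay_per_week = 1_000_000   # "$1 million a week every week this doesn't go out"

share_now = regression_weeks_now / release_cycle_weeks
share_target = regression_weeks_target / release_cycle_weeks
weeks_saved = regression_weeks_now - regression_weeks_target

print(f"Regression is {share_now:.0%} of the cycle today, {share_target:.0%} after the change.")
print(f"Each release ships {weeks_saved} week(s) sooner, worth roughly "
      f"${weeks_saved * cost_of_delay_per_week:,} in cost of delay per release.")
```

Framing the same setup problem in weeks and dollars, rather than in test cases, is what gets the executive conversation Matt is describing.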
YARON KOTTLER: Everything that you guys are saying can be summarized as, “Align with the business.” Understand what the business wants. The business is sometimes represented by the end user, sometimes by your manager or your manager’s manager or the CIO or a business analyst or different people. Whether it’s the business process or what the company actually does and how it makes its money, understand what keeps your boss and your boss’s boss up at night, and make sure that whatever you’re proposing aligns with that. It doesn’t mean throwing away all the important, good stuff you’re doing. Learn their language, talk to them in their language, and be aligned with what the organization is ultimately trying to do. That’s my first point.

My second point: some people may read this and, you know, every few years I hear, [LAUGHTER], every few years I hear Matt asking, “Is this the end of testing? Are we all out of business? Are we all going to be outsourced into oblivion?” And, I think the reality and the trends are that that’s not the case. The opposite is, in reality, the case. Not only is the entire industry growing at about 7 percent or 8 percent—the “industry” being the testing industry—which is faster than the tech industry; independent testing is growing at about 15 percent to 16 percent. So, expertise in testing is being appreciated more and more these days. I think that is great news. You know, a few years ago, there was Agile. Now, there is DevOps. There are changes, and we always need to adjust. But ultimately, I see DevOps, continuous integration, continuous release type trends as a net positive.
MATTHEW HEUSSER: They’re certainly a technology enabler. I agree. There’s a book by Andy Grove called Only the Paranoid Survive. I think these kinds of is-testing-going-away questions encourage us to keep reinventing ourselves to stay relevant. Thank you for the gentle reminder, Yaron; I really appreciate it. And, there are more testing conferences than ever before, with higher attendance than ever before. I think you’re dead-on right that the industry is growing. It’s just changing.
Thanks, everybody. It’s been a great show, and we’ll see you again in two weeks.
Thank you for listening to The Testing Show, sponsored by QualiTest. For more information, please see https://www.qualitestgroup.com. Do you have questions, or is there a topic or an idea you’d like to have covered? Please e-mail us at [email protected], and we will answer questions that we receive on air.