Today, we are joined by Alan Page of Microsoft to discuss the Unified Engineering model, what that means, how it is working at Microsoft, and how that might affect software testing and software testers going forward.
Also, what happens when the Air Force has a database crash and can’t recover their data, and is the testing community really anti-automation?
Panelists:
- Matthew Heusser
- Michael Larsen
- Justin Rohrman
- Perze Ababa
- Special guest: Alan Page
References:
- Database corruption erases 100,000 Air Force investigation records
- How the Pentagon punished NSA whistleblowers
- “The majority of voices in the #testing community are trashing automation. Fine but then stop complaining when no one wants to hire you.” –Noah Sussman
- Alan Page: The “A” Word: Under the Covers of Test Automation
- AB Testing: Alan and Brent talk testing
- How Microsoft dragged its development practices into the 21st century
- Arlo Belshee: Is Pair Programming for me?
- TestBash Philadelphia 2016
- Targeting Quality 2016: KWSQA
- Test Masters Academy Fall 2016
Transcript:
MICHAEL LARSEN: Welcome to The Testing Show. I’m Michael Larsen, the show producer, and this morning, we are joined by Justin Rohrman.
JUSTIN ROHRMAN: Good morning.
MICHAEL LARSEN: Perze Ababa.
PERZE ABABA: Good morning, everyone.
MICHAEL LARSEN: Our special guest, Alan Page.
ALAN PAGE: Good morning.
MICHAEL LARSEN: And, of course, our MC, Mr. Matt Heusser. Take it away, Matt.
MATTHEW HEUSSER: Thank you, Michael, and welcome to the show, everyone. I hope you’re having a good week so far, and we’ve got a ton to cover today. We want to talk about some of the things happening in “tester land” and some of the things happening in general software delivery, then talk about Microsoft and what Alan has been doing with concurrent engineering there. So, let’s start with the news segment. Perze, you had a story out of Lockheed Martin?
PERZE ABABA: Yes. I was browsing randomly on Twitter yesterday and found this rather interesting news item, a report out of the Air Force saying that a bunch of Lockheed Martin employees were informed of a database crash that happened, I believe, a week ago. The cause really was just, “Hey. The database crashed, and there’s no more data.” What was affected was the system that tracks ongoing investigations and inquiries, so there are apparently over 100,000 records that were affected because of this. As of today, I believe they’re still looking for ways to see if they can recover some of that data, or even asking people around if they have backups in other locations. That’s really very surprising to me, that a group supposedly as important as the Air Force has really dropped the ball, [LAUGHTER], on this one. I mean, what do you guys think?
MICHAEL LARSEN: Oh, if I were a conspiracy theorist… I’m not a conspiracy theorist, but it’s posts like this that just make you step back and go, “Umm, seriously?”
JUSTIN ROHRMAN: Are the backups on a conspicuous politician’s e-mail service?
PARTICIPANTS: [LAUGHTER]
JUSTIN ROHRMAN: [WOMP, WOMP]. That was a bad joke. So, did they try to use the back-up files and they just didn’t work? What happened there?
PERZE ABABA: I don’t think there’s even any reference to an active recovery from a given backup. It seems to me that there’s no backup at all. They were trying to recover it from the database that crashed, but they are having problems with it.
MICHAEL LARSEN: Which sounds to me like there either was no backup, or the backup was so old as to be unusable. Or, to be fair and to give a little bit of benefit of the doubt, it is entirely possible that a change to a system happened that somebody didn’t test, where, when you go back to actually restore a backup, something in the database no longer maps the same way it did when the original backup was made, and you find yourself in a situation where the backup doesn’t work anymore. I have actually seen that happen. I have done a test where we were testing backup and restore; a change had been made to backup, and we were verifying the different system configurations. We were able to do a backup. “Yeah. That all worked. Okay. Oh, by the way, let’s make sure that we can restore this.” [CRUNCH]. “Ooh, that’s not good.” But call me “kooky”: if you’re going to be testing a change to backup, do a restore and verify you can get up and running again. Or am I being silly?
MATTHEW HEUSSER: There are a couple of really bizarre things about this. One being, as Perze pointed out, the article indicates that backups were taken and they can’t do anything with them. With the amount of money that is being spent every day on these kinds of projects, you could hire a programmer, and they could take the CSV, write SQL statements, and reimport it. You’re going to have some data. If it’s a relational database management system, and it probably is, because it’s handling FOIA requests and those systems didn’t exist prior to around 1999, then the system crashes, you turn it back on, and you lose one day’s data. That’s what happens.
JUSTIN ROHRMAN: That’s a good point. All those backups usually go back in time, like 12 or 24 hours. So, they should have stacks and stacks of tapes or whatever they use now.
MATTHEW HEUSSER: Well, my concern with the backup is, it’s not in the right format to actually restore, in which case you’d have to hire a programmer to go write a for-loop to do an import. Even the database itself, if it crashed because the hard drive is bad, then you’d have to hire a hard drive recovery specialist to go pull the files.
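To make Matt’s point concrete, the kind of one-off recovery import he’s describing is not exotic. A minimal sketch of pulling rows out of a CSV export and re-inserting them into a fresh database might look like this (the file name, table name, and columns here are hypothetical, not from the story):

```python
import csv
import sqlite3

# Create a fresh database and a simple target table.
conn = sqlite3.connect("recovered_investigations.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS records (case_id TEXT, opened TEXT, summary TEXT)"
)

# Read whatever rows survived in the exported CSV.
with open("backup_export.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    rows = [(r["case_id"], r["opened"], r["summary"]) for r in reader]

# Re-insert the recovered rows in one pass.
conn.executemany(
    "INSERT INTO records (case_id, opened, summary) VALUES (?, ?, ?)", rows
)
conn.commit()
conn.close()
```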
ALAN PAGE: I have a hard time believing that there were no recoverable backups and the database is on one hard drive. Today, I just don’t believe that non-redundancy is an issue, and I’m not much of a conspiracy theorist, but this is a press release statement over some larger story, which may not be exciting X-Files stuff. Still, knowing software, this is a nice story for our parents to read and go, “Oh. The database crashed. They lost stuff.” But because we know about software, there’s a lot more to this story, and it may be totally uninteresting. It may be a conspiracy theory, or it may be somewhere in between. But, the story is some nice fluff that’s just not giving us all the details.
MATTHEW HEUSSER: So, there’s an interesting corollary to this. This is about whistleblowers of waste, fraud, and abuse, and Freedom of Information Act requests. There was an article in the Washington Post, I want to say about a month ago, about an executive in the IG’s office for the FBI whose job it was to protect whistleblowers. We will be able to find this, because his grandfather had actually stopped Hitler in what was the Beer Hall Putsch. So, the first time Hitler tried to rise to power, he tried to organize a revolution in a beer hall, and a couple of loyal German Army officers stood up, raised their pistols, said, “No, you’re not doing that,” and put him in jail. That guy’s grandson was working for the Inspector General’s Office of the FBI, it was his job to protect whistleblowers, and he was eventually fired and investigated himself for doing his job, because they were trying to go through backdoors and just arrest the whistleblowers for all kinds of ridiculous stuff and drive them out of society. Well, Snowden knew about this. So, when the claims were made that “Edward Snowden should’ve followed channels,” he actually knew the channels didn’t work. Given that this is a database that contains all of that data, it just raises more questions, and I have to agree with Alan. It just doesn’t make any sense. One positive thing I can say, if this is positive: I did a consulting assignment with Lockheed Martin in 2012. The nondisclosure agreement has ended, and I think I’m comfortable saying in public, they have a wide variety of skill levels on their teams. So, incompetence is a possible answer. We should probably move on from “general software world” to “tester world” with this Twitter thread with Noah Sussman that has happened over the past couple of days. Michael, do you want to talk about that?
MICHAEL LARSEN: Oh, sure. This seems to be a common occurrence. First off, I communicate with Noah on various things, and I like a lot of what Noah does and what he talks about. What we’re ultimately coming down to is a statement that ends up getting turned into hyperbole. “Gee. On Twitter, that never happens.” The biggest factor here is the idea that, if you are CDT, you are somehow attesting that you are anti-automation, and I think that we’ve proven, time and time again, that, generally speaking, we’re not anti-automation. In fact, I couldn’t do my job without automation. What tends to come into play, though, is that because we say automation is a tool, that automation helps you do other things and opens up the possibility that you can work on other stuff, somehow that gets interpreted as “automation is bad,” and that’s the way the spin has been rolling with this. And again, I’m not trying to impugn Noah. I think Noah makes some great points. The fact is that automation does a lot of good stuff for us. Automation has saved me from a ton of drudgery, especially in my peripheral role as a release manager, doing machine setup so that I can actually do the releases. If I had to do all of that on my own, or actually test that out on my own, I’d never get a release out. So, I am not, by any stretch of the imagination, anti-automation. I do think automation has a definite place in checking and verification and validation and such; it’s just that I believe automation is a tool that is useful for certain things. It’s useful for a lot of things, but there are also areas where automation has to go through some manual, exploratory work and learning before it can even be implemented and then made helpful.
JUSTIN ROHRMAN: I have a slightly different take on that. My feeling is that Noah and other people in programmer-heavy testing roles like that see a massive body of work from the CDT community where we have this special way of taking an extremely critical eye to everything, and it’s always front-loaded with, “Here are the parts that are crap. This is what you don’t want to do,” instead of, “Here are my success stories.”
ALAN PAGE: If I can jump in here with a shameless plug, I have an e-book out called The “A” Word, stories about automation, which is a bunch of essays on ways that automation generally doesn’t work or can work. You could say I live on the edge of the CDT community. I don’t consider myself a member, but I’m obviously context-driven. When I hear people speak out against automation, they’re speaking out against the stupid ways you can do automation: thinking that everything can be automated, or betting the company on some brand-new GUI automation tool. I think we often throw the baby out with the bathwater, “we” being the general test community, saying, “Automation may be bad in this situation,” when it’s not bad as a whole. I agree with everything that Michael said. I think automation used correctly is very helpful. It saves you from boredom. It saves a lot of time. But, I think when we fall into the extremes of, “Let’s use automation for everything, and let’s use this GUI automation tool to solve all of our problems and solve world peace,” that’s where we fall into traps.
JUSTIN ROHRMAN: The other part of the equation is that they speak a slightly different language, “they” meaning Noah and the other people who feel that way. They speak a slightly different language than we do. The CDT people, the people who self-identify that way, have mostly accepted the words that come from Bach and Bolton, like “check” and “test” and things of this nature. He refuses to acknowledge those for whatever reason. He may have perfectly valid reasons. When people approach the problems using those words, he’s like, “Whatever. Don’t care. I’m already doing something that works.”
ALAN PAGE: Honestly, if I can jump in, those words are a turnoff for me too.
MATTHEW HEUSSER: I tend to agree, and I think Justin really hit on something strong. If you go to someone who knows what they’re doing isn’t working, and they feel kind of awkward and embarrassed about it, and you come in as the expert with this special language that actually helps them differentiate what’s going wrong, you can be very helpful, and it works. But, when you talk to someone who was succeeding anyway, before they ever met you, and they’re doing just fine, you’re going to have a lot of conflict when you call up and say, “No. You’re using the wrong word.” When we put ourselves “out there” in public, that kind of criticism, especially when you’re succeeding, doesn’t seem particularly helpful. It took me about two years to say, “Yeah. I think the testing and checking distinction is helpful. It’s worth making,” but I try to do it with a soft touch. I found the actual tweet that started it all off. What Noah said was, “The majority of voices in the testing community are trashing automation. Fine, then. Stop complaining when no one wants to hire you,” which I thought was kind of a fire‑brand comment. He kind of has a point, but in the general software community, I do see a lot of people saying, “I just want to push a button, get a cup of coffee, and come back to a green bar,” and completely eliminate any human investigation into risk. I think that’s naïve and crazy for lots of reasons. But, maybe eliminating the regression test phase as a phase is a reasonable thing, and there are lots of little things you can do to help that happen.
PERZE ABABA: It’s a question of value, really, right? I mean, when you enter a given context, for example, you don’t go in guns blazing and say, “We can’t use automation here,” as your primary salvo; rather, based on our understanding of how we deliver software or how we build software, we can definitely identify the areas where automation can help. In my world, continuous delivery with static analysis and all that good stuff is definitely part of that, because there’s a question of skill from the developers, from the code reviewer perspective, where so many things have been missed even though they declared that “this particular feature is done,” and I still have a lot of questions. I will get to talk to Noah face-to-face next week at the NYC Testers Meetup. I do question his tweet. I still don’t understand who he refers to as “the majority,” or what he used for counting. I do have a lot of questions, but if you’re one of those Luddites who would just say, “No automation, because humans are so much better,” I don’t know if there’s a place for that in the way we deliver software.
MATTHEW HEUSSER: The big thing that I have is, “Where do we start?” If you Googled, “Software test automation. How do I do it?” you’re likely to get either test-driven development, which I think is a good start (let’s get the code in better quality before it gets to test), or “record, playback, GUI” stuff. Usually, when I come in on a consulting assignment, I say things like, “Let’s start with continuous integration. Let’s start with virtual servers that you can spin up on demand. Let’s start with radically decreasing the commit-to-testable-build time. How do we do that?” Those are the things that I am much more interested in automating at the typical consulting assignment, and the other things come later. So, maybe I come off that way. There’s also an article by Kaner that we could link to. It’s a blog post. I’m sure some of you have read it. He did a workshop a while back, and there were some attendees at the last Workshop on Teaching Software Testing who said they were not interested in automation and didn’t want to talk about it. He took some issue with that.
MICHAEL LARSEN: I also think we’re dealing with a false equivalency here. I’m intimately familiar with the huge benefits that automation provides. If you think that I’m doing all of those steps manually, every single time, the same way, you’re crazy. Of course I’m going to write a script to help me do that. As I’ve mentioned multiple times on this show, if you take a bunch of commands, put them in a script, and run them together, guess what? You’re doing automation. So, come on.
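The bar for “doing automation,” in the sense Michael means, really is that low. A minimal sketch of the “commands in a script” idea, with hypothetical commands and script names standing in for real release steps, could be:

```python
import subprocess

# Each entry is one step that used to be typed by hand (steps are hypothetical).
SETUP_STEPS = [
    ["git", "pull", "--ff-only"],
    ["python", "-m", "pip", "install", "-r", "requirements.txt"],
    ["python", "run_smoke_checks.py"],
]

for step in SETUP_STEPS:
    print("running:", " ".join(step))
    # check=True stops at the first failing step instead of silently continuing.
    subprocess.run(step, check=True)
```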
ALAN PAGE: I think that we, in this room, have a view of automation. I think “cracked” is the wrong word, but it’s perhaps a minority view, and there’s a lot of generalization in this tweet stream, and even in this conversation we’re having, around “good automation versus bad automation.” Matt was talking about how a lot of people just want to see their tests turn green and call that good enough, and I think, “Well, that’s idiotic.” If that were automation, I would come out and say, “Automation is bad.” Those of us who have done it sort of get it, and Noah definitely gets it, and Noah definitely sees people speaking out against automation, but I think the disconnect is (again, I’ll re-plug The “A” Word) that I speak out against “bad automation.” There’s a lot of “bad automation,” meaning the approach is bad, the goals are bad, there’s tolerance for flaky tests, etcetera. There are all kinds of ways to screw up automation, but there are also all kinds of ways to use it to make your job so much easier and so much more effective. So, when you’re speaking in generalizations, you can look at the bad automation and say, “Yeah. Automation is bad. We shouldn’t do it, because this is stupid. It’s wasting time.” Or you can look at it and go, “I couldn’t do my job without automation.” But, I’m not Noah. I actually talked to Noah last night, and I feel like there is some background where his statement came from. I can’t agree with where “the majority” came from, but there are a lot of voices speaking out against automation, and I feel like they’re speaking out against the stupid automation and throwing the baby out with the bathwater.
MATTHEW HEUSSER: It’s probably fair to say that that sort of nuanced, advanced view is pretty darn hard to communicate on Twitter and may often come off as simple opposition. I think we’ve probably taken this conversation as far as we can without getting Noah on the phone, so we’ve got to get him on the show. Speaking of having people on the show, we’ve got Alan Page from Microsoft, formerly the director of Microsoft Engineering Excellence, and still a happy, proud technical person, right? You’re a contributor on a project right now?
ALAN PAGE: Still employed. Last Monday was my 21-year anniversary at Microsoft.
MICHAEL LARSEN: Wow.
ALAN PAGE: Yeah. Which is crazy—right?—because it means I’m old.
PARTICIPANTS: [LAUGHTER].
PERZE ABABA: Congratulations, Alan.
MICHAEL LARSEN: We should also mention that Alan is one of the voices of the AB Testing Podcast and two of the three who happen to listen to the podcast happen to be here on the show today. So, I’m just going to make that point.
PARTICIPANTS: [LAUGHTER].
ALAN PAGE: Exciting to be here. I’m the “A” half of AB Testing, and I will tell Brent you said, “Hello.”
MICHAEL LARSEN: Awesome.
MATTHEW HEUSSER: That’s cute. Alan and Brent, AB Testing.
MICHAEL LARSEN: You just figured that out?
MATTHEW HEUSSER: No. I’m telling my audience.
MICHAEL LARSEN: Oh, okay.
MATTHEW HEUSSER: I met Alan at STAREAST, probably 2010. He’s been around. I think he was giving a keynote then. I think he might’ve been on the old TWiST show, actually. So, we’ve known each other a long time. It’s good to see someone who’s, sort of, been around, and specifically, Alan was involved in this transition Microsoft has gone through, where they used to have a separate, distinct testing group. It was kind of a peer to development. Now, they’re doing something they call “concurrent engineering.” So, tell us a little bit about concurrent engineering, Alan.
ALAN PAGE: Internally, they call it “combined engineering,” which I hate. I don’t like the term. I prefer “unified engineering,” or “concurrent engineering” is fine. The transition sort of came out of necessity, in some cases. In other cases, the change came about sort of like, “Oh, I think this would be cool.” One thing I can say—it’s hard to generalize about all of Microsoft, when we have, you know, 45,000 engineers or so—is that I don’t think we have anyone in a “test” title anymore as a full-time employee. We had, at our peak—I don’t think we ever went over, but we were really close to—10,000 testers across the company, which was a lot. I think, overall, it was like 1.2 devs to 1 tester across the company. So, a pretty high ratio. Then, one by one, especially as more and more of the company began making cloud-based products—services, web apps, etcetera—those teams began to merge, meaning the developers and testers became one team. In some teams, this was done very effectively. I like to think that the team I’m on is a very effective engineering team. We can talk about some of the reasons that works, and I can tell you about some of the places where teams have made this change and it has been very ineffective. In fact, I heard of one team at Microsoft—and I’m not withholding the name for anonymity, I’m not naming them because I can’t remember which team—who are considering moving back and adding a separate test team to their product. Going backward is, at least, an admission of defeat, not necessarily failure, but it’s worth talking about some of the reasons why teams end up being successful or unsuccessful in making a move like this.
MATTHEW HEUSSER: Yeah. I think that you hit on something big there, in that, if I understand it correctly, concurrent engineering isn’t that different than embedding a tester in a multidisciplinary Scrum team. There are lots of skills. There is some overlap. The team has enough to deliver the product, but not everybody knows how to do everything.
ALAN PAGE: Yeah. And, that’s absolutely critical. I’ll jump right in to where I’ve seen this fail, not just once but repeatedly, dozens of times: when they merge the former testers with the former developers and say, “Okay. Now, everybody owns quality,” and furthermore assume everyone does the same thing from beginning to end. So, “Everybody owns design, implementation, unit test, integration test, exploratory testing,” etcetera, and those teams end up failing, because only some people can actually do it all. Those are generally people who are going to grow into your architects and very broad-minded, systems‑thinking technical people, but good software teams aren’t made up of a bunch of people who can all do the exact same thing. You need specialization—generalizing specialists and specializing generalists are the core of making teams like this work. You need to recognize the skills of the team and use those, orchestrate those people to work together and make good software. So, when you have your typical Agile team with some embedded testers, that’s bringing test specialization. I don’t expect those testers to implement the features as well as do all the testing. Another mistake I’ve seen is embedding testers, or former testers, into an integrated team and having everybody shovel their stuff over to them to test. We can go into the specifics of my team in a bit, but where that tends to work out better is where those test specialists do some exploratory testing and make sure the holes in testing around the –ilities, etcetera, are all covered, but, more importantly, they do a lot of coaching and reviewing and pairing with the programmers on the team to make sure that they’re writing quality code that is well tested. A lot of developers at Microsoft rarely, if ever, wrote unit tests, let alone functional tests, but that transition wasn’t that tough. To think that programmers aren’t good at writing unit tests or functional tests, to think they can’t do it? Wrong. I’ve heard the argument that, “We don’t want them to write those tests, because they’re too highly paid as programmers to spend their time on that.” You can laugh here if you want, but it’s actually critical that developers write their own unit and functional tests, because what I’ve found in every case is that developers who do that tend to write simpler, more maintainable code in the first place, because they have to think about how to test it from the very, very beginning. That’s true whether you’re doing TDD or ATDD or whatever. Just testing your own code makes you write better code in the first place.
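As a small illustration of the “developers test their own code” point (the function here is hypothetical, not from Alan’s team): writing the test alongside the code forces you to be the first consumer of your own API, which is what tends to keep the code simple.

```python
import unittest


def normalize_case_id(raw: str) -> str:
    """Trim whitespace and upper-case an id; kept small so it is easy to test."""
    return raw.strip().upper()


class TestNormalizeCaseId(unittest.TestCase):
    def test_strips_whitespace_and_uppercases(self):
        self.assertEqual(normalize_case_id("  af-1234 "), "AF-1234")

    def test_clean_id_is_unchanged(self):
        self.assertEqual(normalize_case_id("AF-1234"), "AF-1234")


if __name__ == "__main__":
    unittest.main()
```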
MATTHEW HEUSSER: Yeah. I totally agree, as far as the unit level, and that’s where, if I have no context at all and someone says, “Where should we start test automation?” I usually say, “Unit tests, TDD,” because you have to be a consumer of your own API. You’re like, “That’s really awkward and weird, and it’s hard to instantiate an object. Maybe I should change that,” before you actually write the code. I’ve seen it be very powerful. So, it seems like, in terms of “the button you can push that comes back with a green bar,” or, as we were talking before we started, “the PowerPoint that solves all your problems and it’s just going to be like this,” people kind of want easy answers. I imagine, when Microsoft converted from a large department-based approach to integrated teams, there was a real desire for a sort of cookie-cutter, “Every team should do it like this. Here are the roles. Here are the responsibilities. Go,” and I’m hearing you say that it was a lot more complex than that and teams did it differently?
ALAN PAGE: It happened at different times. It wasn’t like a corporate decree. Teams slowly made this transition, and anytime you’re doing an organizational change—and I’ll generalize first and then I’ll give some specifics—explaining the “why” about it is really important. On some teams that failed, it’s like, “Testers, you are now developers. Developers, you are now testers,” or whatever. “Everybody is an engineer” was the mantra, and then, “Go.” All the activities still need to happen. What you get from combined engineering is not double throughput. This reminds me of the article from about a year ago—a tangent here—where the guy wrote about pair programming and said, “It won’t work, because you’ll have to hire twice as many developers.” So, back on topic: the throughput isn’t going to double. All the activities still need to happen. When you don’t realize that, when you don’t tell people “why,” you’re going to be in trouble. The teams that have been successful, it’s very clear why they’re successful. They recognize that the back-and-forth Ping-Pong game that we played for many years between developers and testers is very inefficient: the developer says, “Here’s my code. Find the bugs,” and the tester says, “Here, I found some bugs.” The developer says, “Thank you for finding my bugs. Here are some fixes for those.” The tester says, “Thank you very much for your fixes; half of them are fixed, half of them aren’t. Please take it back.” This goes back‑and‑forth and back-and-forth. By reducing that loop to a developer’s brain for the bulk of those things, you get a lot more efficiency out of development, and the reason you do that—going back to the why—is that we’re not shipping every three years anymore. Most products at Microsoft don’t even ship on a yearly cycle, excluding, like, Windows, and even that, on the Insider Program, ships monthly. Programs are shipping monthly, weekly, daily, and I don’t know how you would do that on a phased or staged or predictive development model. So, the teams that understand why they’re doing this, and where management supports the idea that all the activities still need to happen and testing is still really important, whether there’s someone dedicated specifically to it or not, are the teams that have been very successful with this.
MATTHEW HEUSSER: Are you familiar with the work that Arlo Belshee was doing at Microsoft for a while?
ALAN PAGE: Yeah. I know Arlo.
MATTHEW HEUSSER: A lot of this talk about doubling productivity doesn’t really examine the starting place. If the starting place is bad, sure, you could double productivity. If Arlo was coaching a team that started off bad, I think I could see him doubling the productivity over time.
ALAN PAGE: Oh, yeah. I’m not questioning productivity going up. What I’m questioning is, “I’ve made this organizational change, now productivity goes up.” The magic in the middle is missing.
MATTHEW HEUSSER: Exactly. It’s a question mark. You look at it like, “This is going to be coaching and skill development and training and work involved,” and initially that’s going to be an investment, which means you might actually slow down a little when you start, right? I mean, that’s more realistic?
ALAN PAGE: Absolutely. Absolutely. Teams need to figure out how to make this transition.
MATTHEW HEUSSER: So, tell me about a couple of teams—some did it better and some didn’t.
ALAN PAGE: It’d be easier to talk about my team and how we work. I was hired on to this team almost exactly a year ago to be the “quality guy” on an engineering team. There are very few, if any, former testers on this team, but what my boss determined was, “I have a bunch of developers who are cranking out code, but I need someone to sort of think about quality and testing end-to-end to make sure we actually build a quality product despite ourselves.” So, that’s been my role. I’ve done a ton of coaching on writing unit tests. Our developers write unit tests, and integration tests in Selenium for a web-based app. They have learned how to be pretty good testers, up to a point. I make sure that things don’t fall through the cracks and that we’re building a quality product. That can be process things or engineering things. Rather than the embedded tester in a Scrum team, I’m the test specialist on a large team, the quality guy on a team of about 100 engineers. What I’ve found is that not all developers are these “great testers,” but a fair number of developers actually are really good, sort of systems-thinking, end-to-end testers. We did a bug bash, maybe, six months ago; I don’t do them often. I think the VP requested that I do it, and I was kind of hemming and hawing. But, we had a few people who found some just amazing bugs that I think any tester would be proud to have found themselves. So, the fact that they could find those was, “Oh wait. These people who are programmers by day can be very good testers.” The other thing I did, which has worked out really well, is I took all the –ilities—the things like globalization, security, performance, privacy, etcetera. Those things typically fall to a test team or a test specialist. I don’t scale that well, so I put my peer engineering managers in charge of tracking and driving those things across the team, the same way, in the old world, we’d have a test lead do it. They’ve done, mostly, very, very well in driving those areas across the product and getting improvements there. So, my job is to make sure those things aren’t forgotten and then to figure out how to get the engineering team—where there are no dedicated testers—to absorb those things and make them part of their job, and so far, so good. I like to think that the team I’m on is doing this pretty well. There are a lot of teams, and it will be, perhaps, a detriment to my 22nd year at Microsoft to mention them by name, where those things have fallen off the radar. They have an engineering team, but they, maybe, don’t give the –ilities the attention they used to get, or their developers don’t really write a lot of tests beyond the very rudimentary unit tests, and then quality overall suffers.
MATTHEW HEUSSER: Wow. So, you’re talking 100:1? Is that the future? An envisioned future of testing easily could be, “We’re more like architects and coaches who supervise the overall process, step in when we can, and delegate, rather than testers, because the ratio is going to flip.” Is that it?
ALAN PAGE: I don’t think 100:1 is the norm. I think 100:1 is something you get away with when you’ve done this for 21 years, or 25 years if I go back pre-Microsoft, and even there, I’ll admit, behind the curtain there is some scale with some vendors who help fill in the holes in the functional testing.
MATTHEW HEUSSER: Really?
ALAN PAGE: That the automation doesn’t cover.
MATTHEW HEUSSER: Hmm.
ALAN PAGE: So, there are a few testers hidden behind the curtain, but it’s temp work.
MATTHEW HEUSSER: Do they have to be onsite?
ALAN PAGE: No. [LAUGHTER]. They don’t actually.
MATTHEW HEUSSER: Maybe we can talk later.
ALAN PAGE: [LAUGHTER]. Matt’s always looking for his angle.
PERZE ABABA: Alan, I have a quick question for you: In a well-executed concurrent engineering context, how do you guys deal with bugs? What’s your bug strategy, if you have one?
ALAN PAGE: Find ’em and fix ’em. Find them, fix them, or punt them is the strategy.
PERZE ABABA: So, pretty much, there are no bugs just sitting in the backlog for more than 30 days. Is that the goal, or is it less than 30 days?
ALAN PAGE: Yeah. Absolutely. In fact, I can air dirty laundry, because I don’t mind doing it about my own team; I don’t like talking about other teams in that amount of detail. The goal, in fact the rule now, is, “There are no bugs older than 30 days.” I actually prefer to have no bugs at all: “Find them, fix them, or punt them. If they’re bad, they’ll come back.” But, we lost track of that for a while. We had the backlog drift up a little bit. We have a handful of bugs older than 30 days, and those are not being taken care of promptly and closed. So, until we catch up a little bit on the backlog, our interim goal is, “No bugs older than 30 days.” Our product is moving fast enough that, so far, what I find is any bug older than 30 days is obsolete anyway. So, with those old bugs, it’s a combination of fixing them, keeping on top of them, and closing the ones that are obsolete. We’ll reel that in.
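A rule like “no bugs older than 30 days” is also easy to check mechanically. A minimal sketch against a hypothetical exported bug list (the data shape and dates are made up, not from Alan’s team) might be:

```python
from datetime import date, timedelta

# Hypothetical export from a bug tracker: id, date opened, title.
bugs = [
    {"id": 101, "opened": date(2016, 5, 1), "title": "Login page layout broken"},
    {"id": 102, "opened": date(2016, 7, 20), "title": "Typo in settings dialog"},
]

cutoff = date.today() - timedelta(days=30)
stale = [b for b in bugs if b["opened"] < cutoff]

for bug in stale:
    # "Find them, fix them, or punt them" applies to anything on this list.
    print("older than 30 days:", bug["id"], bug["title"])
```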
MATTHEW HEUSSER: That’s a great question, Perze. Anybody else have questions for Alan? I’ve talked a lot. Don’t all go at once.
PARTICIPANTS: [LAUGHTER].
PERZE ABABA: It sounds like we’re all intimidated, right? [LAUGHTER]. Maybe that’s a thing.
MICHAEL LARSEN: I think we’re good.
ALAN PAGE: Got enough nuggets in there?
MICHAEL LARSEN: Oh, yeah. No. I think we’re solid.
MATTHEW HEUSSER: Yeah. Once again, we let the news take up the time, but it seems like that was good for our energy. All right. Well, thanks for coming, Alan. Tell us about your podcast. Tell us about where people can read your stuff, see your stuff, and learn more about what you’re doing.
ALAN PAGE: All right. The podcast is AB Testing, for Alan and Brent, and you can find us on iTunes and also at https://www.angryweasel.com/ABTesting, which is also where you’ll find me, occasionally writing on my blog. I’m @alanpage on Twitter. If you want to see me live, I’ll be somewhere near New York in November; I’m going to be speaking at TestBash in Philadelphia, talking at much greater length, and with many more details, about a little bit of what we talked about today. The title of my talk is Testing Without Testers and Other Stupid Ideas That Sometimes Work.
MATTHEW HEUSSER: That’s a great title, and I have to say, having seen Alan a couple of times, you’re a great mixture of a strong content speaker that also is entertaining and fun and it’s rare that you really get both, honestly, at a conference. One of my issues lately is that we haven’t had enough strong content speakers at some of these conferences.
ALAN PAGE: I’m excited to be there. It should be a fun time.
MATTHEW HEUSSER: Yeah. I won’t be able to make TestBash Philly, but we’re going to be all over. So, September is the month that’s firming up for me right now. There’s the Atlanta Quality Conference, OpenStack. There’s KWSQA, and there’s Anna Royzman’s Quality Leadership Testing Summit.
JUSTIN ROHRMAN: Test Masters Academy.
MATTHEW HEUSSER: Test Masters Academy. That’s it. Test Masters Academy. It’s all the same days. So, right now, we’re looking at plane tickets. “Can I get to one, one day and fly to another one on another day, and how are we going to make that work?” I don’t think I’m going to make OpenStack, but we’ll get a list of other events in the show notes. You’ve probably heard a lot about CAST, if you listen to the show a lot, but give us one more teaser about CAST, Justin.
JUSTIN ROHRMAN: CAST 2016, in August, in Vancouver, British Columbia. This year, the theme is, Software Testing as a Development Catalyst. How we help propel development forward and deliver better software faster.
MICHAEL LARSEN: All right. To give one more conference plug, and I’m part of the group that’s helping shape this, so I have a little bit of pride in it: the 34th Annual Pacific Northwest Software Quality Conference is happening October 17th through 19th, 2016, in Portland, Oregon. The theme this year is Cultivating Quality Software. So, if you are looking for a conference to attend and you are looking for something in the middle of October, PNSQC is a pretty darn good conference, if I do say so myself. Alan and I have both presented at PNSQC. In fact, I think all of us, with the exception of Justin, have at some point presented there.
JUSTIN ROHRMAN: Yeah. I have not been to that conference yet. I should make it there one day.
MICHAEL LARSEN: I encourage it.
MATTHEW HEUSSER: So, Perze has presented at PNSQC?
PERZE ABABA: No. I have not.
MICHAEL LARSEN: Oh, okay. We’ve got to get you out there.
MATTHEW HEUSSER: Not yet.
MICHAEL LARSEN: Not yet.
PERZE ABABA: Well, you know, the cool thing about what you mentioned is that I just got realigned to a new boss, and apparently he was invited to speak. So, I am actually helping him craft his slides and being his primary evaluator for his talks. So, maybe I’ll be there.
MICHAEL LARSEN: All right.
MATTHEW HEUSSER: At the one in Portland, PNSQC?
PERZE ABABA: Yep. That is correct.
MATTHEW HEUSSER: That’s awesome. That’s a great coincidence. So, one thing: I will try to recommend resources if you want to go deeper. There are some things in software where it is becoming harder to determine if they are correct; our oracles are failing us. For instance, recommendations on something like Amazon or an e-commerce engine or a tag cloud, and another would be search engine results or Big Data systems. So, the folks at QualiTest do have a blog post on How to Test Big Data Systems. We’re going to link to it in the show notes. Tell us if you’re interested in it. E-mail us at: [email protected], and we can do more of that kind of thing, which I think is a great way for testers to add value. Because, when it is coded, how you figure out if it is fit for use is unclear, and that’s the point where people realize they need help, and if we have answers, then maybe we have an opportunity to help them. That’s really all I have. Thanks, everybody, for coming today. I really appreciated it. I’ll give Alan the last word.
ALAN PAGE: “The last word,” well thank you. Hey, thanks for having me on the show. Great talking with you guys. Great talking about what’s going on in my team. Have a great rest of your day.
MICHAEL LARSEN: Awesome. Thanks very much.
PERZE ABABA: Thanks, Alan.
JUSTIN ROHRMAN: See ya.
ALAN PAGE: All right. See you guys.
[END OF TRANSCRIPT]