Avoiding the Commodity Trap

February 25, 03:43 AM

What does it take to differentiate yourself as a tester? How can you demonstrate the unique value and attributes you can bring to the role of tester? How can you push back against the race to the bottom, where "everyone can do the job"? Is that really true? These questions and more we posed to Andy Tinkham, and we shared ideas as to how we can bring much more to the table than we often think we can.

Also, can software be specified in such a way that it can actually be made error-free? Justin had a chance to look at that very idea at the DeepSpec Workshop at Princeton University, and he shared his findings with us.


Panelists:

Matthew Heusser
Michael Larsen
Justin Rohrman

Special Guest:

Andy Tinkham (C2 Technology Solutions)

Transcript:

MICHAEL LARSEN: Welcome to The Testing Show.  I’m Michael Larsen, the show producer, and today, we have on our panel Justin Rohrman.

JUSTIN ROHRMAN: Good morning.

MICHAEL LARSEN: Our special guest, Andy Tinkham.

ANDY TINKHAM: Hello.

MICHAEL LARSEN: And, of course, our moderator, Mr. Matt Heusser.  Take it away, Matt.

MATTHEW HEUSSER: Thanks Michael, and welcome everyone.  Today, we want to talk about differentiating yourself as a tester, and there’s a couple of interesting things that have been happening in the world that are sort of focused around that.  Justin, as the president of the Association for Software Testing, just got back from DeepSec Workshop.

JUSTIN ROHRMAN: DeepSpec.

MATTHEW HEUSSER: That’s right.  It’s not security.  It’s specifications.  So, Justin, tell us a little bit about that.

JUSTIN ROHRMAN: This workshop was last week in Princeton, New Jersey, on the campus of Princeton University. So, if I travel back in time a little bit, about six months, to January, there was a press release from Princeton University saying that they were holding this workshop to make bug-free software, and about half of the testing world had this huge emotional reaction. I was certainly part of that, but I figured I should put a little skin in the game and figure out what they're really doing.

MATTHEW HEUSSER: Wait.  Wait.  “Huge emotional reaction,” like elation and positivity and love?

JUSTIN ROHRMAN: [LAUGHTER].  Exactly.

MATTHEW HEUSSER: Happiness and joy?

JUSTIN ROHRMAN: More like, "It's not possible to make bug-free software. What the hell are these guys doing?" I went there, and the project is really interesting. They have this library built off of a programming language called Gallina, and what they are doing is making math-like proofs for low-level components of products, like an operating system or a kernel or an SSL library or some kind of embedded software on a medical device, things that are extremely formal, and what they're doing is verifying that the product matches up with that specification to some degree.
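[For readers curious what this looks like in practice, here is a minimal illustrative sketch of a machine-checked proof written in Gallina, the specification language of the Coq proof assistant that the DeepSpec work builds on. The function and lemma below are hypothetical examples, not taken from the DeepSpec project itself.]

(* A simple function: double a natural number. *)
Fixpoint double (n : nat) : nat :=
  match n with
  | O => O
  | S n' => S (S (double n'))
  end.

(* The "specification": double n must equal n + n for every n.
   Once Qed is accepted, the property is proven for all inputs,
   not just the ones a test suite happened to exercise. *)
Lemma double_spec : forall n : nat, double n = n + n.
Proof.
  induction n as [| n' IH].
  - reflexivity.                 (* base case: double 0 = 0 + 0 *)
  - simpl. rewrite IH.           (* inductive step: apply the hypothesis *)
    rewrite <- plus_n_Sm. reflexivity.
Qed.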

MATTHEW HEUSSER: So, what kind of products could you use this for?  Could you use this for a website?

JUSTIN ROHRMAN: Not exactly. I think that you could, but it would be much more difficult, because websites are not formal. It works for formal systems because the rules are established. We know math works a certain way because there is a formal set of rules and you work within those rules, and that's kind of how compilers work. There are very specific things they do, and we've been developing and using them long enough to know most of those things. You could still make the argument that specifications are necessarily incomplete. We don't know everything the product should do. They can be wrong. People mess up when they make them. They can be out of date. But for things like medical devices, getting as close to the specification as you can matters more than it does for a web app to book an airplane flight. The stakes are higher.

MATTHEW HEUSSER: So, you want to have a precise input, a precise transformation, a precise output, and precise stakes. A physical medical device would work, or a router.

JUSTIN ROHRMAN: So, there were people there from Google talking about how they are doing formal verification (that's the phrase they used, "formal verification") on their SSL libraries. There were people from Intel and Cisco doing it on hardware design. There was a guy from a company that makes voting machine software, which is sort of strangely regulated; they were using it for that project. And they don't call it "testing." They call it "verification" or "reasoning," like it's a philosophical tool to reason about how something works. It's interesting. They use a whole different language set. Their goals are not testing. Their goals are very clearly verifying to a specification, and it's through math proofs with this Gallina language. They are looking for industrial partners, companies that have products in this niche space, and I was actually talking to our cohost, Perze, about this. He works at a company that makes medical devices, so they might be well-suited to becoming a research partner with DeepSpec.

ANDY TINKHAM: Did they give any examples of bugs that they’ve found doing this sort of thing?

JUSTIN ROHRMAN: Not exactly. There were only three days of us running through a series of talks. The first day, it was stuff I understood. The second day, it was a lot of talks around industrial partners. Day three, I was well out of my depth. They were showing math proofs on a PowerPoint slide, and I was saying, "Where am I right now?"

MATTHEW HEUSSER: You might well have been the one honest man there. When I was in grad school, everybody took a chapter and then reported on it. The metrics chapter had a formula with the sigma symbol, which looks like a really funky, line-y E, going from 1 to N of f(n) over N, and it was in the background of the slide that these guys were presenting. They did their whole presentation, and I raised my hand and said, "What does that formula mean?" And the people giving the talk didn't know.

PARTICIPANTS: [LAUGHTER].

MATTHEW HEUSSER: And I said, "It's the sum of the function from 1 to n, divided by n. It's the average. Couldn't you just say 'the average'?" And they were like, "It looked impressive." I don't know. Is it possible that they're all just a bunch of advanced-math-degree people? It's possible that they just couldn't admit they didn't get it, either.
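[In standard notation, the formula being described is \(\frac{1}{N}\sum_{n=1}^{N} f(n)\): the sum of f(n) from n = 1 to N, divided by N, which is indeed just the average of f over that range.]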

JUSTIN ROHRMAN: I think some of them get it.  Walking around during the break sessions, chatting with people, a lot of the nametags said, “MIT” or “Yale” or “Princeton.”  [LAUGHTER].  There was a lot of brain power in the room for sure.

ANDY TINKHAM: What school did you write on your nametag, Justin?

JUSTIN ROHRMAN: I wrote, “Association for Software Testing,” and it stood out.  I got a lot of questions about what we did, which was fun.

ANDY TINKHAM: Well, it sounds like it worked out pretty good.

JUSTIN ROHRMAN: I also had lots of fun conversations about coverage and the difficulties with specifications, which they are completely aware of, by the way, and testing and all of this fun stuff.  I had a good time.  I’ll write more about it.

MATTHEW HEUSSER: Thanks, Justin. The other big event that we are trying to cover, a little bit here and a little bit there, is the Reinventing Testers Workshop that James and Jon Bach ran on Orcas Island. We've got a podcast with James. It's kind of a free-flow, but it didn't really capture the content from that week. So, I thought maybe Andy could give us a quick rundown of the content and talk about what that means for us a little bit this week, and then we'll cover the content over time, over the next few weeks.

MICHAEL LARSEN: And, to add a fact: out of everyone on this call, I'm the only one that wasn't there.

MATTHEW HEUSSER: I’m telling you man, you’re missing it.

MICHAEL LARSEN: Hey, you know, it’s the challenge of having a day job.

MATTHEW HEUSSER: Well, yeah.

ANDY TINKHAM: I think it's also interesting to point out, Michael, that not only were you the only one not at the workshop, you are also the one that lives closest to where it was held.

MICHAEL LARSEN: That is a good point, yes, given that I'm on the West Coast, right.

MATTHEW HEUSSER: So, Andy, tell us about Orcas.  Give us the rundown of the schedule at least.

ANDY TINKHAM: So, it was a three-day workshop. We started off on the first day talking about "role." Actually, we started off by talking about, "Why is there even a need to reinvent testers?" We did a group table activity where each group pulled together a list. Then, we presented them to the broader workshop audience and moved from there into Jon and James's vision of why there's a need. We moved on to the concept of a role-gram, which is a diagramming technique used to explain roles, and talked about that through much of the rest of the day, including an exercise using them, really fitting them into a discussion.

Day two, we started off talking about what testers do, looking at, "What is analysis? How do you invent an analysis?", looking into heuristics, heuristic safety, responsibility, and wisdom. We got into the RATs and the Dolphins, added some explicit knowledge and coaching, did an exercise testing a physical object, and discussed that. We talked about testing versus checking and ended the day with a testopsy, where we observed Jon doing a testing activity himself.

Day three was focused on the mechanics. We talked about testability. We talked about risk. We talked about things like critical and social distance, and then we moved into the Agile testing quadrants as James and Michael Bolton have defined them. We started with the original ones that Brian Marick created, worked through Lisa Crispin and Janet Gregory's version, and then on to James and Michael's. We talked about trading zones, and then after that, we had a bunch of experience reports. Then, we wrapped up the workshop.

MATTHEW HEUSSER: Okay.  So, we just gave a brief overview of everything in the universe that we covered in three days.  We don’t really have time to dive into it.  So, if you were to sort of zoom out and say, “What was the general theme of what we were talking about,” a lot of it was “the value we bring and the way we position ourselves as testers,” agree or disagree?

JUSTIN ROHRMAN: I agree, mostly. I think it was about being able to slice up the work you do, the people you interact with, and how it's done, so you can actually explain the role. If your role is disappearing within your organization, and for many people it is, because of Agile or Continuous Delivery or DevOps or whatever, you want to know what's going to be missed when it's gone. This was a set of tools that helps you explain that, I think.

ANDY TINKHAM: I think it's not only what's going to be missed but also, if your role needs to change, "What's your baseline? How do you even understand what makes up that role?" Whether it's a testing role or any role, really, a lot of it was tools for, "How do you figure that out?"

MATTHEW HEUSSER: You might do that analysis and say, "In the whole pie that is software development, my slice is really thin, and these other things that are coming could make it even less important. So, now I have a choice to make. Do I continue to say, 'No. No. This word means this very specific thing and I don't do anything else,' and make the slice thinner, or do I find ways to expand, change, and be more valuable?" How do you break out of the commodity trap? And by "commodity trap" I mean: when you go buy gas at the gas station, it's all the same; all you care about is price. It's very easy for us as testers to say, "We want to make our work stable, predictable, and repeatable. We want to define it so well that any idiot can do it." Then, when we succeed at that and any idiot can do it, you might as well just outsource it or hire some middle school student to do it for $7.00 an hour or something like that. It's a trap. We end up competing on price. For those of us who do contracting or consulting, it's a very dangerous trap. And Andy is a practice lead for C2 Consulting?

ANDY TINKHAM: C2 Technology Solutions.

MATTHEW HEUSSER: This is pretty recently, right?

ANDY TINKHAM: Well, I’ve been a practice lead for a while.  I was a practice lead at a company called Magenic for about 3-1/2 years.

MATTHEW HEUSSER: Right.

ANDY TINKHAM: Then, I left last August.  Magenic has a nationwide presence, and I was getting on a plane a lot.  C2 is focused on the Twin Cities area, Minneapolis, and St. Paul.  So, I can stay home, which is good.

MATTHEW HEUSSER: Well, I’m happy for you.  That travel thing is a killer.

ANDY TINKHAM: It was time. So, as a practice lead, my role really falls into four main buckets. One is a billing and delivery support role, where I'm out working with clients myself and supporting our consultants in the field. I have a sales and recruiting role, where I'm on the sales calls and screening all of the candidates that we bring in. I have a community building role, so that we're not just a loose collection of consultants who are at different client sites and never see each other and never interact. Then, the one that's most important today is, I have an external reputation building role, where my goal really is to make C2 stand out from all of our other competitors in town. The Twin Cities is a market that has several consultancies focused on testing. There are a lot of big companies here in town with open recs just dangling out there, and everybody descends on them, whether they're drawing on consultants as independents or hourly, so there's a lot of competition for the consultants themselves. It sometimes becomes a, "Who can get there first?", and we're trying to really rise above that and differentiate ourselves in a way that makes our clients say, "Hey, we want to work with you," and then our consultants say, "Hey, we want to stay here and work for you," rather than having everybody bounce around between different consultancies.

MATTHEW HEUSSER: Yeah. Whenever I hear someone on the client side say, "Oh, the sharks are in the water. Oh, my gosh, I'm overwhelmed with headhunters and recruiters," Excelon isn't even going to be in the bid. We're just not even going to bother, because it's going to become piles of resumes, competing on hourly rate, who can take who to lunch, and who has the nicest pens. My goal is typically to help explain the value proposition: "We're different. We're special. We can do things." It sounds like C2 is trying to do the same thing?

ANDY TINKHAM: Yes.

MATTHEW HEUSSER: Which I think applies to… just testers on a Scrum Team?  Testers, just at a company?  Anybody in our audience?  Right?

ANDY TINKHAM: I'd say, "Anybody in our audience," yeah. We have a wide range of clients, ranging from, "We're waterfall and we admit it," up to having a decent handle on Agile; most fall somewhere in the middle, where they have adopted some Agile practices. We have a range of industries. There's a lot of medical device stuff here, so I was actually listening to Justin's stuff on DeepSpec with interest and may follow up with him offline more on that, too. We have retail companies as clients. We cover a wide range, and I think this applies to all of them.

MATTHEW HEUSSER: So, how do you do it?  Imagine that you are meeting a customer for the first time who thinks they’ve got testing all figured out.  There’s a checkbox that you check after development so you can go to production, and they just want to get it done as cheaply as possible, and you want to say, “Well, actually…”  What do you say?

ANDY TINKHAM: We're framing things up as this idea of modern testing. For those on the cutting edge, modern testing is going to have ideas some of which have honestly been around for a while, but they're more modern than a lot of the traditional approaches that we see testing groups recommend. There's a set of guiding principles for them, similar to the principles of the Context-Driven Testing School. At its core, modern testing has four major areas that the principles break down into.

The first one is that context is the heart of everything. Just like the Context-Driven School, it's, "Start with understanding your context and then draw on your toolbox of skills to determine the right action." It's not, "We're going to come in with a set methodology, and we're going to make it fit into your project and just tell you that you have to change things, because this is the way we have to do testing."

The second big component is that testers are not automatons. We, as an industry, have spent a long time harping on, "Well, anybody should be able to run this," and what we end up with is a script that we give to a person that we just expect them to run as told. Some places are more liberal in how far they let testers go. Others are very much, "Well, I'm going to assume that you're just going to run the script," and we're dehumanizing testers when we do that.

The third big pillar is using multiple lenses as we design tests. Testers traditionally have used the single lens of requirements for designing their tests, and requirements are a very important lens. We have to use that lens, but it's not the only one. Just like when we look at things through a telescope or those old red-blue 3D glasses, we're going to see some things and we're going to miss other things, and what lens we use is going to change what we see and what we miss. We need to use that same approach when we're testing: look at things like risk and failure modes or security and performance, taking a set of test ideas that come from these different lenses and determining which ones are the important ones in our context.

Then, the last pillar is providing information in a timely and usable fashion. Testing is all about information providing. We can do amazing testing, and if we don't share the information with anybody, it's not worth very much at all. It's entirely based on our reputation, and how much you trust me as the tester when I say, "Yeah, it works. I did all the right tests." And even that is providing a little bit of information. So, those are, kind of, the four big pillars.

Breaking it down into the principles, some of these echo principles of the Context-Driven School, but then there are other ones as well.  I actually even lifted a few from the Context‑Driven School Principles.  Context is the most important focus, and people are the biggest piece of that context.  No best practices.  Only good practices in context.  No treating people like automatons.  Always provide value.  Always portray testing as a skilled and knowledgeable profession.  Provide needed and timely information.  Automation supports testing, not the other way around.  Collaborative relationships, not antagonism.  Drive to create new knowledge on a project, not just confirm existing knowledge.  Use the tools that make sense and add value.  For me, that’s the core of what modern testing is.

MATTHEW HEUSSER: So, I really like that, Andy. I think I'd use a little different rhetoric, but I think I agree with what you're trying to accomplish there. Specifically, you're much more explicit than the Context-Driven Manifesto, which I think is great. If you think about what professionals do, your doctor or your lawyer: they have expertise in the subject. You don't. So, you say, "Engineer, accountant, here is a problem. Tell me how to solve it," and then they solve it for you. They say, "Here you go." If you're not happy with the results long-term, you find another doctor. But you don't tell your doctor what blood tests to run. In testing, I think the paradigm that you're pushing back against is the, "We're going to plan everything up front, and then we're going to work the plan. You're just going to do what you're told." That's a low-value, non-professional activity. Where we get in trouble is a large organization where we're only staffing a couple of people. Unless we're coming in as coaches assigned to a transformation effort, a large organization will say, "The wheels are already turning. We don't see how to incorporate your stuff while keeping these other wheels turning on other projects that you don't have authority over, and that's just something we have to work through." I don't know if you've run into that.

ANDY TINKHAM: I haven't yet, but I will, absolutely. The way that I'm approaching it right now is getting all of my practice members aware of the principles first, really getting them to internalize and understand them, and then equipping them with three main toolboxes.

The first toolbox is a set of tools to understand, to elicit, and to analyze the context. That'll have things like interviewing techniques; how to prioritize the different lenses and the different non-functional requirements to figure out what's most important on a given project; and review techniques to understand artifacts that are already in place.

Then, the second toolbox is a lot of the testing techniques. Much of the testing knowledge that we've built up over the years can still apply in some context. There are times when we might need to write a test plan document or run a regression test. None of that knowledge goes away, but it's setting it up so that people know the pros and cons of each technique. They know when to apply it. They know how it could be harmful, and they can be equipped with a set of arguments for when they go into a project and something is hurting the team's productivity, so they can effectively work with the team to ultimately change that practice.

The third toolbox is reporting. It's things like, "How do I accurately convey the information that I'm gathering through this testing?" That could include things like different formats for graphs, different ways to summarize and present numbers, different metrics and how they can be used and misused, communication to different levels of management, and a wide array of communication techniques.

So, I really see that first toolbox being the key to your scenario, Matt, where they're going in, things are already established, and we need to get a handle on it. We need to understand what's working, and we need to work in that context while we work with the team to make things even better.

MICHAEL LARSEN: So, if I could step in here real quick, Andy, and I'm borrowing a little bit from Seth Godin here: if we're just focusing on the bottom line, if we're just thinking about getting things done at the cheapest level, we're always going to be dealing with a race to the bottom. The fact of the matter is that somebody is always going to be faster or more cutthroat and aggressive than you might ever be, but the one thing that we can all bring to the table is a unique, creative spirit and energy. How can we utilize scientific experimentation, which, when you get right down to it, is really what testing is, as the cornerstone of this? I think in a lot of what we see today, there's a requirements doc and we must make sure that we're doing what the requirements say, and it seems to me a lot of the "testing," in other words, experimentation to confirm whether a hypothesis actually holds water or not, is filtered out to the point where all that we're doing is rubber-stamping to say that a requirement is met. To me, it seems one of the ways that we as testers can add value is to step away from that and say, "It's not enough for us just to rubber-stamp the requirements. What are we missing? What haven't we considered? What hypotheses are we not even looking at here, and how can we get some answers there?" What do you think about that?

ANDY TINKHAM: I completely agree, and I think that's where the lenses metaphor comes in. If you look at how a team is often focused, you've got an analyst largely thinking about how the application should work. You've got developers largely thinking about how the application should work, and yeah, they're going to pull in some things about the edge cases and how it shouldn't work, but for a lot of people the problem is they see that as "getting off in the weeds." So, you've got this whole team mentality of, "What should this look like when it works?" By testers bringing in these different lenses and thinking about things from different directions, running those experiments, that can bring a lot of useful information to the team that no one else on the team has really thought about. But I think I do need to put more experimentation stuff in the toolbox.

JUSTIN ROHRMAN: It kind of sounds to me like we're framing modern testing, or reinvented testing, as just "good" testing. The opposite of that is taking the specification and turning each line into a test case, and it's very clear to us, at least, that that's shallow, that there's so much more to do. Is the reinvented role just looking at what we do, digging into that and going deeper, realizing that the role of software testing isn't about a specification so much as it is about digging into really old books and discovering where problem-solving comes from or where usability comes from?

MATTHEW HEUSSER: This is a problem that I think we run into on the show quite a bit, and I think we would agree on this, Justin. The Context-Driven Manifesto, written in 2001, says that the solution is only a solution if the customers find value; if the customers don't find value, you've wasted your time. What that means is, you could completely confirm a badly-written spec and the customers still don't find value. So, the tester's role is more than inspector and box-checker, more than comparing that this document matches that document matches the software; it's a search for customer value. As soon as you start thinking about it that way, then you start doing things like, "Hey, I need to be in the driver's seat. Hey, this follow-the-steps thing is silly. Let's not do that. Hey, every build is different, and the risks change on every build. Therefore, our test strategy should change on every build. Therefore, we shouldn't be doing the same things every time." You realize that pretty quick. There are sub-segments of the test community that knew this 20 years ago. In that sense, this stuff is not new, and yet we still have enterprise customers that we come into that say, "Over the next three days, what we want you to do is create your test cases that track back to the specifications in JIRA, then get them reviewed, then execute the test cases, and then initial on every step that you did the thing." We still deal with that.

ANDY TINKHAM: I was talking to a project manager yesterday who had done some Googling or something and found a model where all of the unit tests, even for the developers, were written in a separate phase from creating the code.

MATTHEW HEUSSER: I can't stand that. When people talk about unit testing like that: go Google test-driven development. Read a book. That drives me nuts. It's this sort of reasoning by analogy by people who haven't done it. I'm sorry. I guess I have an allergy to that. In 2011, James Whittaker stood up at STARWEST, one of the world's largest test conferences, and he said, "Your customers don't want your test cases. They want working software. Testing is dead," and a lot of us were kind of like, "Duh."

JUSTIN ROHRMAN: You said, “It’s sad that you don’t know what testing is,” right?

MATTHEW HEUSSER: Right.

MICHAEL LARSEN: When you get right down to it, when you develop a component, you want to make sure you can get from point A to point B. It's like a function. You put something in, something comes out, something happens in the middle, and you want to make sure that you can do that. The tester's role is to figure out everything in the known physical world that can attack that black box, and figure out ways that we can somehow turn it upside down or, barring that, figure out if there is some aspect where the functionality, as defined, is irrelevant. Sometimes you hear, "That goes into the weeds," or, "Oh, you're getting off track here." And yet, there are times where I've said, "Yeah, I may be getting off track here," but we've come back later and they've said, "Oh, yeah. That getting off track was important, and we now have to address it."

MATTHEW HEUSSER: If we think we understand what modern testing is, there are maybe two problems we could talk about within that. One is how we explain it; it's as simple as, "Oh, let me tell you about my role. You don't understand what I do." And the other one is, "How do you transform or change a large organization that is already in flight? How do you reassemble the plane while it's in flight?" We've got a couple of techniques that we use for the second one, but I wanted to throw it out there to you guys before I jump in. How do you brand it, and then how do you help an organization change?

ANDY TINKHAM: I think we can briefly address the branding. So far, incorporating the pillars into sales talks has been instrumental in having organizations say, "Absolutely, we want to have you come tell us more about this," and we're also doing a lot of talks, with clients coming to the office to hear more about modern testing. I did a local user group last week. So, that's how I've approached branding so far. And, to just quickly go back: yeah, I don't claim that this is new. It is very much a case of, "I've been influenced throughout the course of my career by many, many people, including all three of you." There was something at the Reinventing Testers Workshop, I think on the morning of the first day: "Are we reinventing the state of the art, or are we reinventing the state of the practice in a lot of places?"

I think that that's an important distinction. We're not necessarily throwing out all the stuff that we deem state of the art; we're recognizing that a lot of it hasn't trickled down into broader usage, and driving that broader usage is what we're really trying to do here.

MATTHEW HEUSSER: What can the individual contributor do on a test team to explain testing to his development and delivery team and to his management?

ANDY TINKHAM: Well, I think that a key first step is sitting down and actually thinking about what my personal testing philosophy is. The activity of going through and coming up with those statements, which were things similar to the principles of modern testing, was a huge clarifier for me personally as a tester, and it's an activity that I've recommended to several people. Chris Kenst has taken me up on it and put a post up on his blog with his philosophy. Several of my practice members at Magenic did it, and everybody has come back and said, "That was really clarifying for me. That was a really good exercise." So, for a tester looking to brand themselves, that's where I would start: "What do you believe about testing?" You can't communicate that until you know it yourself.

MATTHEW HEUSSER: That’s great.  So, this is an exercise for yourself about, “What do I do?”

ANDY TINKHAM: Right.  And, “What do I think about what I do?”

MATTHEW HEUSSER: When you hear someone say something that is contrary to that, you can say, "That's not really how I think about the work. We should talk about that sometime," and then, if they take you up on it later, they are officially giving you permission; they are asking you. So, they are putting you in a position as an authority to talk about your role.

ANDY TINKHAM: There may also be negotiations. When you're being asked to do things that don't fit with that philosophy, sometimes it makes sense to say, "Okay, let's talk about the goal we're trying to accomplish here. Are there other ways that we could accomplish that goal as well?" So, that becomes a higher degree of conflict, perhaps, than, "Hey, we disagree, and I'm happy to talk about it if you want."

MATTHEW HEUSSER: Thank you, Andy.  That makes a lot of sense.  We’re getting low on time.  Justin, Michael, do you have any other questions or comments?

JUSTIN ROHRMAN: One thing occurred to me: modern testing, or good testing, is to some degree a little rebellion against the idea that people who don't understand testing should be directing how it's done, saying, "Everything needs to be a detailed test case," or, "Everything needs to come from the specification." It's just clear to me that that is not testing, and I think moving beyond that is understanding why and being able to explain that and demonstrate it.

ANDY TINKHAM: Yeah.  I think that’s a piece of it.  Yeah.

MATTHEW HEUSSER: So, we should talk about where we're going to be, then maybe a tool or two, and let Michael close out. I just signed up for Agile Des Moines, in Iowa, in September. I'm going to be at KWSQA. Anna Royzman has a conference in New York on the 26th and 27th of September. The Software Testing Atlanta Conference is the 26th and 27th of September.

MICHAEL LARSEN: One other thing I'd like to throw out here, since we're talking about conferences: even though I'm going to be very lightly attending conferences this year, because of my Philmont trek that is taking up the majority of my time, I am actively working with, and am one of the reviewers for, the 34th annual Pacific Northwest Software Quality Conference, which takes place in Portland, Oregon every year; this year it's happening October 17th through 19th. The theme this year is "Cultivating Quality Software," and I can tell you we've already had over 100 submissions. We've committed to those who have been accepted, and we're doing reviews on papers right now. At least for the ones that I'm reading, it looks like there's going to be some cool content at this particular conference. So, if you'd like to go and participate, I'd give PNSQC a good solid thumbs-up. People come from all over the world, and there are some really good, solid talks that happen there. So, I'd like to encourage that one as well.

Also, I want to make a very straight-up request regarding the show. We've been here now for a number of episodes. It's taken some time for people to know that we're here and to get used to it, and we publish the show twice a month. We're at the point now where people are starting to ask questions or comment on it or share it around, and we're grateful for that. The Testing Show is your show. It's not ours. It's your show. We make this show because we want to bring you the best information that we have about software testing, software delivery, and topics that are not necessarily only related to testing, and that's where you come in. We would love it if you would write to us. Send us your questions at thetestingshow@qualitestgroup.com. If you like the podcast, leave us reviews on iTunes, because the more our show gets reviewed, the more likely we are to bubble up when people are looking for software testing podcasts. "Hey, what's a good software testing podcast?" We'll show up, if there are more reviews. So yes, I am being shameless. I am asking you: review the show, send us questions, tell us the things you want to hear us talk about, and we will more than happily dive into them.

MATTHEW HEUSSER: All right.  Thanks for coming.  We’re back again in two weeks.

MICHAEL LARSEN: All right.  Thanks for having us.

ANDY TINKHAM: Thanks.
