Testopsies


September 28, 2016

This time around, The Testing Show is coming to you “Live” from the Conference for the Association for Software Testing, held in Vancouver, B.C., Canada.

Matthew Heusser, Justin Rohrman, and Perze Ababa met up with Michael Bolton to discuss “Testopsies”, a focused examination and task analysis, and applying it to the opportunities for learning and refocusing of efforts often bundled under the label of “testing”.

Michael Bolton
MATTHEW HEUSSER: Hello and welcome to The Testing Show. Today, we are “live”… well, we are recording from the Conference for the Association for Software Testing, with a quorum of our regular guests. So we also have Justin Rohrman. You know Justin from the show, but you might not know that he’s the President of the Association for Software Testing.

JUSTIN ROHRMAN: I’m President today. Tomorrow, we’ll see. We’re having our annual elections, so…

MATTHEW HEUSSER: Which also means he’s been kind of busy with the conference.

JUSTIN ROHRMAN: Yeah, very busy.

MATTHEW HEUSSER: He wore a sport coat and fancy shoes.

PERZE ABABA: Oh, yes, he did [laughter]

JUSTIN ROHRMAN: I have to admit this is the first time in my life I’ve owned a sports coat.

MATTHEW HEUSSER: Looks alright, looks alright… and of course, we also have Perze Ababa.

PERZE ABABA: Hi everyone.

MATTHEW HEUSSER: All the way from New Jersey.

PERZE ABABA: Yes, that’s correct.

JUSTIN ROHRMAN: All the way from beautiful Princeton, New Jersey.


MATTHEW HEUSSER: But he’s here, we’re in the same room. We also have a very special guest today. Michael Bolton, the… principal consultant at DevelopSense?

MICHAEL BOLTON: The “only” consultant at DevelopSense [Laughter].

MATTHEW HEUSSER: Only consultant. I’ve known Michael a long time, he came to Grand Rapids, Michigan ten, twelve years ago. Spoke at the Grand Rapids Programming User’s Group. We’ve worked together on a few things since. So, welcome to the show, Michael.


MATTHEW HEUSSER: And you did a tutorial yesterday on “Testopsies”. What’s a testopsy?

MICHAEL BOLTON: A testopsy is a dissection of a session of testing, and one of the things we discovered yesterday, is that actually we could apply that kind of analysis to the type of thing that Rob Sabourin likes to talk about, such as task analysis of what’s going on during a testing session. And so, with the group, we collected things to look for, things to observe, ongoing processes, moments, skills, tactics, activities, and cataloged those, and applied them to a couple of sessions of testing. It was a lot of fun for me, and I learned a lot, myself, and I think people there did, too.

JUSTIN ROHRMAN: So testopsies, to me, sounds like a way to begin an anthropology experiment, to some degree, because you’re collecting massive amounts of qualitative data about how people actually do testing, and then you can pick it apart and find patterns in it, to some degree.

MICHAEL BOLTON: Absolutely, although I don’t want to overstate my skills as an “anthropologist”; I’m a tyro when it comes to that sort of stuff. But that’s the general principle, yes.

JUSTIN ROHRMAN: Is that the intention behind developing the “testopsy”, or is it more of a teaching aid?

MICHAEL BOLTON: It’s both, of course. We’re trying to understand the activities that are going on so that we can analyze them, and discuss them, and improve them. The idea is to do something, and to study it, and to see where it can be done again with intentionality. One of the things that is kind of interesting, as part of the process, is trying to figure out how to name certain things so that we can talk about them, so that we have a handle to grab on to, in order to discuss them.

JUSTIN ROHRMAN: What are examples of things you are trying to name?

MICHAEL BOLTON: Some of the names exist already, of course, in the discourse. One example is “factoring”, identifying things that might matter to a test condition, to something that’s happening in the product, also called a “variable analysis”. But it’s not just about variables, necessarily; we might be trying to break down things in a function or a feature, things that are happening.

JUSTIN ROHRMAN: Ah, OK, so it makes sense then to call it “factoring” instead of “variable analysis”, so you’re overloading the term just a little bit less.

MICHAEL BOLTON: Oh yeah, absolutely! We’re trying to go deeper on each one of those things. Recording as we’re testing; what happens is, we record something as we’re testing, we break that down, we notice it, we become aware of it, we put it in the forefront of our consciousness, we edit, we narrate, we justify as we’re doing testing work in general and as we’re doing something as simple as writing a note. Managing tempo, the pace of what we’re doing. One of the things that came up yesterday was the notion of the role of pauses in testing work, giving the opportunities for critical thinking faculties to wake up.

One of the things that came up in today’s keynote, for example, was when we’re engaged in a task, there is this amazing thing, that never occurred to me… of course, in retrospect, it’s completely obvious. When Nicholas Carr was talking this morning during the opening session, one of the things he pointed out was that we have very little space in our short-term memory; it’s like we don’t have that many registers. So one of the things that can easily get displaced from our working memory is the goal of the task we’re engaged in. In retrospect, well, that sounds completely obvious, but it had not occurred to me quite that dramatically before he said so.

We looked a little at the value of note-taking in testing as a means of deliberately introducing pauses into the tempo of your work, which gives you just a little moment to start asking the critical thinking kinds of questions. Am I still on mission? Is what I’m seeing really what it looks like? What am I noticing? What might I be missing? I believe pretty strongly that when I watch people who are managing the tempo of their note-taking, it affords that opportunity to look at what they’re doing just a little more reflectively than they might otherwise.

JUSTIN ROHRMAN: That’s an interesting idea. I’ve heard that referred to as “slack time” before, adding space into the work so that you have time to reflect.

MICHAEL BOLTON: I think that’s true. However, slack time is often thought of in a larger sense. Adding to the schedule. I was interested yesterday in the micro sense of adding that kind of slack.

JUSTIN ROHRMAN: Like adding pauses in the actual flow of testing work, hands off the keyboard to a note pad.

MICHAEL BOLTON: Or even to another screen, where you’re taking notes or building a mind map or something. So there’s a great big long list that has developed over the years. I mean, it used to be a list that originated with James and Jon Bach, called Exploratory Skills and Tactics, but then we started to realize, if testing is fundamentally an exploratory activity, and I believe it really is, then that list should be generalized into just general skills and tactics. So managing focus, managing tempo, moments of collaboration, interviewing, questioning, modeling, comparing, conjecturing – all these things are happening all the time as we’re testing, and it’s interesting to look at them at varying levels of granularity. James did a session of seven minutes of testing with Alexander Simic, recorded it, and then looked at what was happening in two-second increments; went over the game film and looked at it.

Another thing that we’ve done, and this has been around for a little while, is looking at an entire testing session, or a tester’s day, or a tester’s week, in anything from two- to ten-minute chunks, and trying to find out what the dominating activity is in each one of those things. One of the things we realized, via thought experiment (it turns out, whenever we’ve collected actual data, it’s a good deal worse than this), but via thought experiment, we looked at a plausible week’s worth of testing. We broke it down into ten-minute increments. Things that might happen. You know, chats with the developer and chats with coworkers, sometimes on-mission, sometimes off-mission. Test design and execution, whereupon we obtain coverage; bug investigation and reporting; setup; lunch; coffee breaks; new builds… well, we looked at that, and in a thought experiment that we ran in our heads, when we broke it all down, laid it out, we realized that one week’s worth of testing work might easily be as little as one day’s worth of coverage. We’d keep showing this to people and they’d say “oh yeah, that looks like my week, except it looks a little optimistic!”

JUSTIN ROHRMAN: When you say “coverage”, you mean actual software testing?

MATTHEW HEUSSER: You mean “testing progress”.

MICHAEL BOLTON: Yeah, well we’ve got to be careful about “actual testing”, because it’s all testing work.

JUSTIN ROHRMAN: It is all testing.

MICHAEL BOLTON: So what we’re trying to do is we’re trying to… I think you used the word “overload”… recognize that the word “testing” is overloaded, and try to look at those activities at a finer level of granularity. One of the things that we noticed, right away, is that testers actually spend a relatively small amount of time learning about the product, actively engaged in interacting with the product and learning about it and increasing test coverage. Another insight was – again, something that, in retrospect, is kind of obvious, but until you notice it it’s lurking there, below the surface – the bugs that we find are, by definition, the shallowest bugs in the product.

When I looked with a client, where I did some of this on-site work, what I found was a tester who essentially spent an entire afternoon getting ready to test, preparing to test, doing setup work, trying to get the machinery connected, trying to get the test environment up and running, recognizing “oh, there’s another problem”, and it’s off to the server room. Each walk to the server room took three minutes or so, and she had to take eleven trips to the server room in one afternoon of testing. So, to the manager, who’s not looking very carefully, that’s four hours’ worth of testing work, but if you look a little more closely, there is actually half an hour of that four hours walking… [laughter]… which allows us, of course, to ask some questions about that, and say “well, might it be a good idea to situate the testers closer to the server room? Might it be a good idea for there to be someone in the server room with whom the tester could interact and connect?” This particular kind of work required physical plugging in of machinery and so on and so forth. When we look at things in detail, we get to find out all these little ways that we could have of optimizing things.

MATTHEW HEUSSER: So, in Lean Software Testing, when we talk about the walking back and forth, we call it “transportation waste”… and we can, at some other time, argue about the meaning of the word “waste”, but…

MICHAEL BOLTON: Yeah, well, yeah, for instance

JUSTIN ROHRMAN: It’s heuristic, it’s not waste for everyone.

MICHAEL BOLTON: It is heuristic, sure.

MATTHEW HEUSSER: Right, it’s heuristic. Or we talk about “touch time”, which is hands-on-keyboard productive work, versus waiting for a build, or waiting for a server to be provisioned, or waiting for a decision, or “well, I sent the email, so… until the product owner tells me what he wants, I can’t file the bug or not, so I’m blocked”. I think that’s all kinda’ in some of the same waters.

PERZE ABABA: I was in Michael’s session yesterday. I was in a group with Neil Studd and Steven Woodie, and I met Ward Dymer, and we came up with a mnemonic [laughter] for the things that you can observe testers doing, so we call it the CHRISTMAS mnemonic. C stands for Conversation/Collaboration; you see people talking, as Michael was talking about earlier. H is for times when you apply Heuristics or discover new heuristics. R is for Recording results and note-taking. I is for Interacting with the product. S is for Speculation and looking through your hypotheses. T is for Tool usage. M is for Modeling and refining that same model. A is for Articulating assumptions, and S is for Setup, configuration, and orientation. It’s just a small subset of what the list was.

MICHAEL BOLTON: I was totally thrilled by that, and the reason I’m totally thrilled by it is that it’s an instance of something James and I keep talking about in our work; how important it is for people to invent testing for themselves. Now, these are four really skilled people, so I’m not at all surprised on one level. And yet it was so wonderful to see that little mnemonic getting developed there.

MICHAEL BOLTON: It was terrific, and that’s the kind of thing that spending a day looking at things on kind of a micro level like this affords. That was one of the great moments of yesterday, for me.

PERZE ABABA: Thank you.

MATTHEW HEUSSER: I used to struggle with mnemonics; I didn’t get a ton out of them, but then, when you start to make up your own, they become real for you.

MICHAEL BOLTON: Gotta’ own ‘em.

MATTHEW HEUSSER: Yeah, that’s such a huge difference. And you can get a lot of value out of COPFLUNGGUN or SF DEPOT or… there’s lots of value there, but when you start to make it up for yourself, you own the process.  You take personal responsibility for the work. That’s when I think the magic starts to happen.

MICHAEL BOLTON: There was a moment as well yesterday when we looked at what happens when we get work invalidated by new information and by new stuff, the degree to which, sometimes as testers, we really do have to backtrack and do sanity checking of revised builds and so on and so forth. That is part of the cost of doing business, in a sense, but it also affords the opportunity for even more learning and more recognition of what’s going on in the product. I’m suddenly reminded of Jerry Weinberg’s observation that “it’s not waste if you’re learning something, if something happens.”

With the transportation waste example, Matt, that you gave, my ex was a manager of configuration and builds, also responsible for internationalization back in the day at Quarterdeck. She had a really interesting strategy. She had a corner office, and she set up her default printer to be on the opposite side of the office. Now, that sounds initially crazy; why would you do that? The answer was it afforded her the mandate to walk to the other corner of the building, and encounter people all the way through it because, of course, it’s a rectangular building. You have the option of going through on a sort of diagonal path, or around the edges, or some combination of the two. That took her past, by one route, the reception and CEO’s office, and the sales office and the marketing people, and then by the other route you go past testers and programmers and that sort of stuff. So whenever she printed something, she deliberately introduced something that, on the surface of it, would look wasteful, but actually turns out to be this incredibly valuable social thing.

JUSTIN ROHRMAN: That is ridiculously clever [laughter].

MICHAEL BOLTON: There’s a little bit of genius, right there. For the same reason, or a similar kind of reason, anyway, I almost took up smoking back in those days. I didn’t… it just makes me throw up [laughter], but then I realized, “oh, you can hang out with the people who are smoking, and just drink coffee”.

JUSTIN ROHRMAN: That was in the era of programmers surviving on coffee and nicotine, right?

MICHAEL BOLTON: Did that change, ever? I don’t remember that changing [laughter]. In any case, when we look at stuff in a one-week-length kind of testopsy – and testopsy is, as I say, a name for what is probably known in anthropology as task analysis… ethnography, I suppose – when we look at what people are actually doing, we see an immense amount of stuff that looks like waste but is actually very important socially.

MATTHEW HEUSSER: Yeah, I want to add to that, because that is a real risk, in that the sort of prescriptive listing of the wastes and then going after them is like the bottom of the hierarchy. So we teach that “flow trumps removal of waste”. You may want to fill up a buffer at a station that you want to keep busy, and that will actually improve flow, and “value for the customer” trumps them both… and even those are heuristic.

MICHAEL BOLTON: When you’re speaking of efficiency there, I think we get messed up about that sometimes, and it flows directly from Nicholas Carr’s keynote this morning. The observation that Toyota made, realizing that efficiency is not the same thing as effectiveness. The assembly line and robots and stuff that they’d been using are terrific if you want to get out a large number of products all at the same time, but then that takes a hit on the human capacity to observe, and to refine the designs of things, and to notice problems in the designs of things and so forth. I think that’s directly transferrable to the world of software development as well. We see things that are frightfully efficient, but they don’t necessarily allow us to maintain engagement with what we are actually doing. So there were a lot of really terrific notes in his talk on that particular topic, with everything we do – not just with automation, but with every activity that we do, or we name, or that we engage in. We lose some things on the swings as we gain on the roundabouts. And it’s important for us to observe those and study them, recognize them for what they are.

MATTHEW HEUSSER: So let’s talk about three of them… substitution fallacy, automation complacency, and automation bias, and if we have time we could cover alert fatigue. I thought it was impressive that he gave a label to things that we have all observed, and had very hard conversations about. Substitution fallacy: the idea that we can just plug automation into a process that a human used to do, and nothing else will be changed, and it’ll do it exactly the same.

MICHAEL BOLTON: That’s certainly not new. McLuhan said at one point, “we shape our tools, and thereafter our tools shape us”, but before him… I don’t remember, Jerry Weinberg told me Emerson or Thoreau said that [T.S. Eliot?] “we have become our tools”. That has been true ever since we picked up a stick; the stick changes us. It extends us, it enhances us, it enables us to do things. It also obsolesces other things that might have been important. It reverses into the opposite of what it was originally intended to do if we overuse it. Those are three of McLuhan’s four laws of media. I love the idea that it doesn’t replace the work, it changes the work, in various interesting ways.

That reminded me very strongly of Harry Collins’ abstract for EuroSTAR 2013, the talk he never gave, in which he said computers and their software are two things. He said, “as machinery, interacting cogs, they have to be checked to make sure that the teeth spin together and that the wheels spin nicely”, but then he also points out that “machines are also social prostheses, fitting into human life where a human once fitted”. I would add to that “or where a human with superpowers could fit, even if a human had never actually been there before”. But it’s a characteristic of medical prostheses, he says, like artificial hearts; they don’t exactly replace the thing that they were designed to replace. The surrounding body has to compensate. He says contemporary computers can’t do the things that humans do, because they were not brought up as social agents; they don’t participate in society. They’re not socialized. Therefore, we need a complex social judgment in order to determine whether humans will happily repair the difference between what the machine does and the person that it replaces, and he says that’s far more than a matter of making sure whether the cogs spin right. So it’s nice to have that handle, the substitution fallacy. The hook to hang that idea on is nice.

MATTHEW HEUSSER: The one insight that I thought was really good: the human tendency, with automation, to trust it and turn your brain off leads to two situations. It leads to complacency where you’re just not even looking out the window anymore as a pilot on auto-pilot. Then, when something goes wrong and [snap] the [snap] alarms [snap] start [snap] going [snap] off, it’s not doing what you expected, it injects a humongous amount of stress, because you haven’t really been flying so much, so you actually have had skill atrophy and you are injected into a situation of high stress.

PERZE ABABA: I think it goes as far as questioning the notion of authority, whether that is a person, or an automaton, or an API result. Personally, for me, from the culture I grew up in, it’s not very easy to question authority. I come from the Philippines. There’s a hierarchy of elders, where you pay respect to them, and when they say something, that goes. It gets a bit difficult to translate that into the workforce, because behavior ends up emanating from that, and when I came to the United States, that was something I really struggled with, because I was encouraged to question things. Just because your boss is screaming at you doesn’t mean he’s mad at you. He’s mad at something, and you can respond to that.

MATTHEW HEUSSER: I’m kind of the other side to that, because I went to this hippy-dippy, question-everything school, and when I got into business it was “Matt needs to work on his social skills”, and what does that mean? I read all the books. I read “How to Win Friends and Influence People” and “The Seven Habits of Highly Effective People”, and I read everything, and really it was, I was just supposed to say “yes” in the meeting where I knew the factual answer was “no”. You can’t people-skills your way out of… I mean, you could try. There’s a definite “we’re both being pushed toward the middle”.

MICHAEL BOLTON: Well, it’s… one way we have of getting around that issue, sometimes, is to apply safety language.

“Can you log in at the moment?”

“I have not been able to log in.”

Just expressing things in terms of a precise expression can often be helpful. Not always, of course; nobody really likes bad news.

MATTHEW HEUSSER: Certainly, we can talk about techniques to wiggle on the hook, but when what they want you to say is “yes”, and the answer is not “yes”, it can be problematic.

MICHAEL BOLTON: This was a great Jerry Weinberg lesson; “it’s fine for people to want things.”


MICHAEL BOLTON: It’s OK, they can want things. I’ve often said this; I want to play in the NBA. Trouble is, I’m a week away from my 55th birthday, and I’m not in terribly good shape; I’m healthy, but… also, I’m 5’7”, have no discernible skills in basketball [laughter], don’t even actually really like the game… come to think of it, I don’t want to be in the NBA, I just want the money [laughter]. Well, then, of course, it’s OK for me to want the money, but I’m not likely to get it, and so that’s just reality [laughter], so it’s OK to want stuff. It’s not OK to assume that they’re going to be able to get it and, if you recognize that they want something, if you acknowledge that, it’s OK to respond to that desire, and to say “well, I get that you’d like to know that it’s working. I’m afraid I’ve got some bad news, based on my observation, anyway [mhh], for the moment.” All situations are temporary. That’s the other thing. “Well, I can’t log in right now”… that can soften the blow a little bit.

MATTHEW HEUSSER: So, Carr’s solution to this automation tendency to make us complacent (the plane is crashing and the alarms are going off) is, if I understood him correctly, to turn the automation off every now and again and keep running things by hand, so that the skills don’t atrophy; that’s one of the things he recommended. How would that apply to the way most companies build software today?

MICHAEL BOLTON: It really does come down to that, for me. Let us remember that we are developing software to accomplish human purposes. Let us engage in those human purposes, every now and again, specifically because it’s far more than a matter of making sure the wheels spin right. The funny thing about that, to me, is software is very deterministic. It will do the same thing over and over again, and what we seem to be doing in many places – certainly many of the clients I’ve been to – seems not to be engaging with the goal of testing the software, which is to learn surprising new facts about it. That’s why we test. We learn stuff about the product that other people haven’t noticed before. That’s one of the reasons that I am excited about the use of tools, about the use of automation as an exploratory device, and that is by no means a new idea. Cem Kaner has been talking about that for years. Brian Marick brought it up in the late 90s; in 1998 or so, he wrote a super paper on the principle that one of the things we might choose to do is to make our products testable in a way that makes them friendlier, more amenable to exploration, aided by tools. I think it’s a terrific thing to do.

So much of the checking that is being done right now, I think, really could be done at the developer level, and should be done at the developer level, because it’s developer-oriented risk. It’s the kinds of risks that developers are, in fact, very good at finding. So let’s do that, and as testers, we can certainly assist with that process. But the other way we can assist with it, which I think might be incredibly powerful, is by using some of the testopsy-like stuff, the examination of how testers actually use their time, to ask “well, wait a second, are some of these bugs that the testers are finding low-hanging fruit in developer space?” Because it takes far longer to investigate and report a bug after you’ve performed a test than it does to perform the test, in lots of cases – especially for those low-level, very highly functional, rather than para-functional, coding-error-ish kinds of bugs.

Let’s point out that finding those things represents a loss of coverage for the organization, because every moment we spend investigating and reporting a bug, by and large, is another moment that we are not spending on extending and deepening and broadening our test coverage. That means that testers can help developers make the case for something that a lot of developers seem to be calling out for, which is a little bit more time to produce stuff that is quite a bit better. That was a big point in the development of XP.

MATTHEW HEUSSER: Our term for that is “first time quality”, in that if the first time quality is low, you end up spending a lot of time in these “find, fix, retest” loops. Then your touch time is going to be low because you’re waiting for a new build.

MICHAEL BOLTON: That sort of stuff reminds me of the “minimum viable product” thing. The emphasis appears to be on “minimum” and not so much on “viable” [laughter].

MATTHEW HEUSSER: That’s a good point. It’s a funny phrase.

MICHAEL BOLTON: I know, by the way, that I’ve received the minimum viable product, lots of times [laughter], but yeah, let’s emphasize viable and not so much minimum. I’d like to see a bit more of that. We’re all in this business to get stuff done and to get stuff done quickly. I think it might be important to see how we trip ourselves up in that effort. We can’t necessarily say that any particular activity – setup, bug investigation and reporting – is “wasteful”, but if we look at it a little more closely, we can see opportunities for tuning it or changing it in little ways that provide us with some big benefits.

MATTHEW HEUSSER: Alright, well, I think I’ve used enough of your time. Thanks for the half hour, Michael. Anything you want to talk about, conferences you’ll be speaking at?

MICHAEL BOLTON: I’ll be at STARWEST and I’ll be at EuroSTAR in the fall, and lots of other work around the world.

MATTHEW HEUSSER: And they can find out more about you at…

MICHAEL BOLTON: People can find me. Michael at developsense dot com. I’m happy to hear from people. Really, we get such immense value out of people contacting us and asking us questions, or contributing experience reports. That sort of stuff, it’s great to hear from people in the community, so keep those cards and letters coming, kids.

MATTHEW HEUSSER: And you can email The Testing Show at thetestingshow at qualitestgroup dot com. I’m going to give Perze the last word.

PERZE ABABA: All right, so in October, TestBash is coming. It’s a good time to be a tester in the Northeast, and of course, as a fellow organizer of the NYC Testers meetup group, we do have our monthly meetups, so look out for those.

MATTHEW HEUSSER: All right, thanks guys, appreciate it.


PERZE ABABA: Thank you.