
The Testing Show: Talking About Risk

June 9, 2021

Today’s show gets away from the typical risk and testing approach, as in what techniques to use. Instead, Jenny Bramble, Director of Quality Assurance at Papa, joins Matthew Heusser and Michael Larsen to discuss the more challenging aspects around talking about risk, specifically how to talk to people who don’t want to recognize that it exists or is possible.

Michael Larsen (INTRO):

Hello and welcome to The Testing Show.

Episode 101.

Talking About Risk

This show was recorded on May 7, 2021.

On today’s episode, we welcome Jenny Bramble to the show to talk about the ways in which we talk about risk, and also how we don’t. What are ways we can better have uncomfortable conversations and be better advocates for our customers and organizations?

… and with that, on with the show.

Matthew Heusser (00:00):
This episode, we have Jenny Bramble, who I first met at SauceCon, back when you were allowed to do stuff like shake hands. She is the director of quality engineering at Papa. Papa’s based in Florida. She’s in North Carolina. So we have that wonderful working remote effect that we’re seeing happen more and more. She just spoke at SauceCon last week. In an hour or so, she’s speaking at SpearTrack. A lot of her work has been in the area of risk management as a formal exercise. You’ve done a lot of things. So why don’t you tell us a little bit more about that?

Jenny Bramble (00:37):
My absolute favorite thing about testing in general is that we take humans and we take machines and we smash them together and something beautiful happens. So I like to say that my work is a lot around the impact that we can have on people and the impact that our software has on other people. So that comes into play with risk. It comes into play with creating Agile teams. It comes into play with doing weird things with automation to make things better and/or worse. That’s all about impact for me.

Matthew Heusser (01:10):
Thanks, Jenny. And anybody who’s listened to the podcast more than once knows Michael Larsen, who’s a lot more than just the show producer who edits the audio file. He’s a practicing software person with a focus that, I’d say, is moving more toward quality, but there’s still a ton of testing in there. Is that still right, Michael? You’ve reinvented yourself a couple of times.

Michael Larsen (01:33):
Yeah, it depends on when you catch me, but yes, that would be a fairly accurate assessment. At the moment, my official title is I am a senior automation engineer, which, yeah, I mean, that’s part of what I do, but there’s a whole bunch of other things that I’m associated with right now. I would say the biggest thing that I’m involved with is data transformations. And I guess, change management, if that makes sense. That’s what I happen to be doing at the moment. There’s going to probably be a lot more focus on more adaptive automation and probably even helping with development as things go forward. So it’s fluid and I like it that way. So it’s cool. Call me what you want to.

Matthew Heusser (02:14):
Data transformation and change management. I don’t know that we’ve done a podcast on that in a while, but today we’re going to talk about risk, with maybe a little less emphasis on the techniques of how to enable continuous deployment at scale in the enterprise, and maybe a little more on talking to people who don’t want to recognize that risk exists or is possible. I think DeMarco and Lister, in one of their books, Waltzing With Bears, talked about this thing that happens where everybody is absolutely certain everything is going to be fine until they wake up one morning and it isn’t. And then they’ve got a new plan and they’re absolutely certain again. It’s better to be certain and wrong than uncertain and right when it comes to the planned expectations for what’s going to happen over the next sprint, or two sprints, or the project, or whatever. I’m curious, Jenny, have you run into that kind of thinking?

Jenny Bramble (03:13):
I run into a lot of different perspectives on risk. We were talking just before this about how sometimes it’s hard to bring up risks. People don’t want to hear about it. They don’t necessarily want you to sit down and be like, so I’ve got some news and you’re not gonna like it. And I feel like that’s something that test engineers do a lot of. We spend a lot of time telling people things they don’t want to hear. You’ve talked a little bit about uncertainty and being right about that, or being certain in general. And I think, especially in software, there’s this desire to be good at what we do. With developers in particular, and with product owners, being good at what we do means it works great. The users love it. Nothing terrible happens when we deploy. And frankly, that’s not how software works. I was talking to a friend of mine recently about a deploy that was going out, and he’s like, yeah, I’m super confident in it. I’m like, cool. That’s great. I’m not! Everything you’ve told me about this, these are the concerns that I would have. And he was like, well, I don’t think those concerns are valid. That’s fine that you don’t think that, but we need to start talking about mitigating them. We need to say, okay, how can we prove that you’re right or that I’m right? What can we do after the deploy to ensure that we have managed the risks that we’ve been talking about? And it’s a hard conversation, actually stepping up to someone and saying, I know you’re very good at your job. I know that you want to be seen as a subject matter expert. I know you were hired to do this. I know people put a lot of confidence in you, but guess who is going to act like she doesn’t have confidence in you? It’s me, the test engineer. That’s kind of hurtful in a lot of ways. You’re telling them their baby is potentially ugly. So how do you start having that conversation? How do you step into a space where you can say, hey, I think your baby’s ugly?

Matthew Heusser (05:18):
Well, I’ll tell you one way not to do it, that I think might help us explore a different area. And that is earlier in my career when they said this is going out today, I would say, so you think so, do ya’? And then I would get the login rights and get the story and pull it down, pull the build down and find a bunch of bugs that were showstoppers. Not so much with the going out today, huh? It portrays testing and development as opponents in some kind of conflict. I’ve had enough of conflict. I don’t want it to be a fight. So how can we reframe this to be collaborative?

Michael Larsen (05:58):
I was just about to weigh in on this, in the sense that I actually have been having this conversation, because… when I say new, I’ve now been on this team for a year, and a year is almost a blip on the timeline. This group has been actively involved with and working with each other, I think, for a decade plus. So I am still by far the new guy learning all sorts of different things, but that works to my advantage. The benefit is that I keep being able to say, well, you know, I’m the new guy here. There’s just so much to what we do, because the team that I’m working with right now is very different from a more traditional product team. I don’t have a UI that I’m poking around in. I don’t have an end product that is delivered, except in the sense that my end product is: what is the data that you are sending to me (and by me, I mean my team), and once we get that data, what do we do to it so that it can go out the other side and be used by whatever product that happens to be? We have lots of products, lots of potential inputs. And because of that, there are a lot of things that we can potentially be doing to make sure that something is working correctly. And that opens up a lot of avenues for things to go horribly wrong if you are not paying attention. The nice thing about this… and I’ve got to give the developers on my team a lot of credit for this… each time that we’re discussing these things, or we go through and review something, and I’m realizing, there’s this thing we’ve got to consider, there’s this thing we’ve got to consider, they’re like, yeah, you know, we’ve been dealing with this stuff for 10, 15 years, and a lot of it is just implicit knowledge that we take for granted. Of course you’re not going to know what this is. We apologize for that. Let’s go take a look at this so that you understand what’s happening here. Oh, here’s a corner we haven’t really talked about. Let’s go discuss this.
This is this old customer that does things in this unique way that hasn’t been done by anybody else in a long time, but because of the size of them, we still have to support it. Yeah, we’re going to have to deal with where some bodies are buried here. Let’s tread carefully. So I think my team actually deals with risk probably better than any other team I’ve ever worked with.

Matthew Heusser (08:38):
Sure. Yeah. They fought through a lot of these issues, and you as the new guy are just sort of being briefed in. One of the techniques that I’ve used when I’m told, “Yeah, I’m pretty confident in it. It’s going to be fine. You’re wasting your time,” is to say, well, I’ll go work on something else. Could you just send me an email confirming that I brought the risk up and you told me not to investigate it? I’ve had mixed reactions to that kind of commentary. A lot of it depends, honestly, on my way of being when I say it. What do you think about that, Jenny? Or do you have a better idea?

Jenny Bramble (09:16):
I love that we are moving into a space where we’re tying developer relations into risk, but you’re right. There are a lot of times when the tester will bring something up and she’ll say, hey, I think this is risky. This could be a problem. And we get shut down. I don’t think that’s fair, because testers are the subject matter experts. That’s what we were hired to do. It’s what I’ve done my entire life. I’ve been a subject matter expert in testing and therefore in risk. There is an element of CYA whenever you’re doing this: essentially, sign off that you told me not to look at this.

Matthew Heusser (09:54):
I do that for two reasons. It’s much less CYA, and more that when their eyes open wide and they realize they’re personally responsible for the quality now, I’ll often actually get a different result. They’ll say, you know what? It can’t hurt for you to take a couple hours, take a look at it. Or…

Jenny Bramble (10:11):
I’m going to argue that is CYA because they were trying to cover theirs by pushing it off on you.

Matthew Heusser (10:17):
Oh, you’re right. That’s true. Interesting. You say the thing and you don’t get listened to, but it’s documented in some way. And the bad decision happens. Six weeks later… on an Agile team, six days later… you’re about to make the same mistake again. And you say, you know, this just feels a lot like the fizzle-bob widget project. Remember that story? And the organization, as an organism, has an opportunity to learn from its mistakes. In the event that we can’t get any change made, we can present the organism the opportunity to learn from its own mistakes. And then if that doesn’t work, when people at your annual eval say, why was fizzle-bob late? You can tell the story.

Jenny Bramble (11:04):
That can definitely be a valuable strategy. I will say that as a test lead on a project, not that long ago (honestly, though, it feels like forever), I and the test team made a bad decision. We decided not to do a certain thing that we had been talking about doing. We didn’t do that thing because, like, it’s going to be fine. Spoiler alert: it was not fine. But we documented it. We were like, well, we documented that we weren’t going to do this, and a couple other people agreed. And at the end, when the project went poorly because we didn’t do this one particular thing, we actually all had to sit down and go through why this had gone badly. Even though we had pushed some of the responsibility off onto the team, and it was a quote unquote team decision, it was still our place as subject matter experts to make sure that the most risk possible was mitigated. And because we didn’t do this one thing, we didn’t mitigate a lot of risk. So I think there’s also an element of, even if it’s a team decision, the test engineers are the main stakeholders for quality, the main people who should be the subject matter experts and should be listened to.

Matthew Heusser (12:19):
Thank you. Yeah. The other thing that I’ve done when I’ve been in that situation is I’ve elevated the decision to an executive or a sponsor. You might disagree with that, because a lot of the Agile stuff is let the whole team decide, don’t delegate decisions. But I can say, “We found this one thing. We think it’s a big risk. This is what would happen. Testing for it and creating the environment is going to take us three days,” and they say, “No, go ahead and go forward.” And then when production burns down and we bring that person into the room, instead of the blamestorming, I’ve found we typically get, “Yeah, Matt told us about the risk and I made a calculated decision. What’s next?” Which is a very different conversation than risk management after the fact: risk realization, blamestorming, which we’ve probably all experienced a time or two.

Jenny Bramble (13:12):
I don’t actually disagree with you. I think there are times when bringing someone in who has a lot of role power and a lot of potential responsibility is absolutely amazing for not necessarily making sure that the right decision gets made, but that someone who’s got the biggest picture makes the decision. I think that’s really important.

Matthew Heusser (13:34):
Yeah, I think so. It’s that someone makes a decision and it’s an informed decision. Yeah.

Jenny Bramble (13:41):
A decision. Any decision. Just make a decision (laughter).

Matthew Heusser (13:43):
Any other risk conversation techniques you’ve seen? Or you could just tell a story if you want.

Jenny Bramble (13:50):
I said earlier that I think it’s really important to talk about risk, and talking about risk is super, super vital to being able to put a good product out there. I recently stepped into the role of director of quality engineering. One of the things I’ve been doing is meeting with a lot of our product owners, and they’ve been saying things like, “Oh, I’m so excited you’re here. The team is going to move so much faster now that you’re here.” And I do have to keep telling them that for the first little bit, we are going to slow down, because when you have a very small company and you’re moving very quickly, your considerations are different. When you start to get bigger, when you get to a place where you can step back and actually start looking at the whole process, you can start talking about risk in a more reasonable sort of way. Actually, let me tell one of my favorite stories about risk. I tell this almost every time I give any of my risk presentations. We are all in one of those conference rooms. You know, back when we were all in conference rooms, the big ones with the giant tables that look like you could never fill them up. There is a product owner who looks super happy. There is a developer who looks depressed. There is another developer who’s crying in the corner. And then there’s me, with the dark circles under her eyes and the thousand yard stare. And the product owner goes, “Oh man, I’m so excited. We’re going to get this product out. It’s been in beta for like six months, man. We’re going to get it out.” And the rest of the team just droops. He’s like, “What’s wrong, y’all? We’re going to finally get this product out of beta.” And I said, “Carson, we can’t. It’s too risky.” And he looked at me and said, “Jenny, it’s been in beta for six months. How risky can it be?” And I started taking him through the product, and I started taking him through this list of defects that we had been collecting.
And his face started to droop. It turns out this product was on our Q2 roadmap; we had to get it out to meet our Q2 goals. But there were so many defects that we weren’t able to actually release it safely. The risk of some of these defects was potentially catastrophic, because it dealt with money. And he looked at us and said, how on earth are we going to talk to the VP of engineering? How are we going to tell the CEO of the company that we can’t put this thing out, that’s on our Q2 roadmap? And I said, “Well, we could talk about risk. We could do a risk assessment.” He’s like, “Wow, if only we had someone who’s an expert in that.” Carson, I’m literally an expert in this. So what we sat down and did is we started making a spreadsheet. We listed all of these defects. We listed a bunch of technical debt. We talked about the impact if we let these defects in, if we went to production with them. We talked about how likely people were to hit them. We talked about a couple other things, and we really just made this gigantic risk matrix. We ended up presenting this huge spreadsheet to the VP level, and we were able to convince them that this was an incredibly risky thing, that there was a lot of potential impact to our customers and to the reputation of the company, and that it was very likely that people were going to start hitting these things and having these problems. That is the only time that I’ve actually used a spreadsheet to convince somebody to let me delay a project by three months. It worked great, but actually starting to have that conversation was super difficult. It takes a lot of, we’ll say, social karma to walk into not just your boss’s office, but your boss’s boss’s boss’s office, and say, “Hey, so we can’t do this thing that you have staked some of your reputation on.” Having a spreadsheet really helped, because he liked numbers, and I could point to these things and say, this is the impact. This is the probability.
This is what’s probably going to happen if you don’t give us another quarter to work on those projects. That’s my favorite story about communicating risk so far.
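The exercise Jenny describes, listing each defect, scoring its impact and its likelihood, and sorting by the combined score, can be sketched in a few lines. The defect names and the 1-to-5 scales below are illustrative assumptions, not figures from the episode:

```python
# A minimal sketch of the risk-matrix exercise described above:
# score each defect by impact and likelihood, combine the two,
# and sort so the conversation starts with the riskiest items.
# Defect names and 1-5 scales are made up for illustration.

defects = [
    {"defect": "Payment total miscalculated", "impact": 5, "likelihood": 3},
    {"defect": "Report export times out",     "impact": 2, "likelihood": 4},
    {"defect": "Tooltip text truncated",      "impact": 1, "likelihood": 5},
]

# A common simple scoring: risk = impact x likelihood.
for d in defects:
    d["risk"] = d["impact"] * d["likelihood"]

# Highest-risk items first, like the spreadsheet presented to the VP level.
for d in sorted(defects, key=lambda d: d["risk"], reverse=True):
    print(f'{d["risk"]:>2}  {d["defect"]}')
```

The point of the artifact is less the arithmetic than the conversation it enables: an executive can scan the sorted list and own the go/no-go decision.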

Matthew Heusser (18:12):
Did you get your three more months?

Jenny Bramble (18:14):
I quit the job, (laughter) … but rumor has it, from the people that were still there, that they did get it pushed back to, I think, Q4, and by Q1 the next year they had started releasing it to the public.

Matthew Heusser (18:30):
Wow. It’s interesting there, because it’s more the risk that the customers won’t accept it, the customers won’t like it, that kind of a thing. It’s not a risk; risk is the potential for something to go wrong. It’s more like, that’s what’s going to happen, and we know the status of the software. So maybe there’s a risk of market rejection, but we know this is not going to work. In similar situations, I’ve just sort of drawn up a list of the core bugs, a narrative statement, and then a little more detail on the bugs that are not at the “you can’t log in” level but the next level down. It’s sort of, “given that… executive, do you want us to go live with this?” When I was at Socialtext, working on SocialCalc, the spreadsheet product, we actually made a grid with green, yellow, red, and the CEO said, “This is like a Christmas tree. You guys can’t release this.” I was like, “Okay, boss, if you say so, if you insist.” So we let them own the decision. Those are two things that I’ve done when I knew the status of the software: let executives decide if the thing should go out or not, and don’t frame it like it’s a fight. “You need to get this thing done. I’m not going to get this thing done. You need to get…” Instead of a fight, we frame it as, “it’s up to you.” That’s what I’ve done in similar circumstances. Michael, do you have a story or two?

Michael Larsen (19:57):
It’s interesting that you mentioned Socialtext. Granted, it’s changed a bit. The team is no longer that core team anymore; it’s been absorbed, and Socialtext’s DNA is now spread into other products. But one of the key things that I really valued, and thought was amazingly cool about the way that it operated, was that Socialtext… and I’ve said this on the podcast a number of times, so for those who’ve heard this and go, “okay, here’s the Socialtext three story”… yep, you’re going to hear it again. Number one, if we need to do something, can we do it in Socialtext? Number two, if we can’t do it in Socialtext, can we make it so we can do it in Socialtext? And number three, if we can’t make it so that we can do it in Socialtext, do we need to do it at all? That was so that we basically used every aspect of our product the same way that our customers did. And then some! We did everything on our product. So if there was going to be a risk, it wasn’t just going to be that, “oh, our customers are going to suffer something”. We would suffer it! It was dogfooding at a really advanced level, to the point where, for every business function, like you had mentioned, we made our own spreadsheet to work with our product. And we used that spreadsheet for everyday interactions, not just for using the product; our office survived on it. That’s how we did our books. If we had anything having to do with software development processes, our Kanban board was built into the Socialtext product. All of that was something that I thought was really kind of wild. I’d never seen a company that was that dedicated to making sure that whatever their customers were going to experience, we made absolutely sure we experienced it first. So that’s taking risk assessment and management to an interesting level that I don’t think a lot of organizations do.
And I miss that. But yeah, exactly as you were saying, hey, if we’re going to be able to verify that something is happening, we have to show it, not just in our testing; it has to survive our everyday work. And I thought that was really neat, and something that, I would dare say, a lot of companies would be humbled and impressed by if they had to live by the same rules. Not practical for some things, but for Socialtext, for that decade-long period of time, we could, we did, and I’ve got to say, it worked really well.

Matthew Heusser (22:40):
There’s a couple of things in there, Michael, but one of them is, if you can, use your own product, like you are your own customer, and if it annoys you and blocks you so that you can’t do your job, it is probably a risk to your customers, too, and now you have a motivation to fix it. So you can do your job. And that might work for companies like Microsoft or Socialtext or Atlassian, who makes JIRA, or Confluence. Some subset of companies are really going to be able to do something with that. Software product companies, mostly.

Jenny Bramble (23:18):
We did it at Red Hat. When I worked at Red Hat, everyone had Red Hat Linux on their boxes, or Fedora. And it was interesting how quickly some things got fixed.

Matthew Heusser (23:30):
It reminds me of one of the lean guys. I think he won the Shingo Prize for his book, Personal Kanban. He said that in consulting assignments, when he goes into a big IT shop, there’s always a set of trouble tickets. You take the trouble tickets and you sort them by age, from when they were first introduced. In some of these big companies, tickets can bounce between teams, and it can be a year (laughter), or easily 90 days, 45 days, 30 days, for someone to get a password reset or something. They can’t really do their job, and they’re blocked. He would find a way to present that in a dashboard sorted by age and show it on the wall in a meeting with all the executives. When they’re talking about quality, he would make sure that engineering-level people could walk by the war room and see it. He said that when he did that for a three-day meeting, by the second morning the stuff at the top of the list would start to disappear and be auto-refreshed. What gets the attention of senior executives gets fixed. Or even what can be made visible to the whole team, to go, “gosh, that corner of the room is really ugly. I should vacuum that”. But if you put a chair over it, you don’t even see it.

Jenny Bramble (24:50):
I think that’s a really valuable point. What gets seen gets fixed. And when you’re talking about raising risks or concerns or anything like that, if no one sees them, they’re never considered. They’re never fixed. They’re never mitigated. They’re never brought up. And even if it’s difficult and it is difficult, it’s one of our responsibilities when we see something to speak up and say, “Hey, have we thought about this? Have we considered it? Could we mitigate it? Does anyone hear me at all out there?”

Matthew Heusser (25:26):
Oh, I’ll throw in one more. We should probably start wrapping this up; you’ve got a conference talk to prepare for. Thank you so much for the time. Maybe we can each throw one more in, and then our closing thoughts.

Michael Larsen (25:38):
So are we looking to say, what is that one thing that you can take to your team that can…

Matthew Heusser (25:43):
One more technique… and technique sounds too harsh. It’s just a way of having a conversation to help the other person realize something they might not otherwise. “This is going to do that. No, it won’t. Yes, it will. No, it won’t.” Let’s not argue about a fact. I want to get away from the fighting. I’ve got to find ways to help us all be on the same team.

Michael Larsen (26:04):
Okay. So let me put in my little word here. And of course, as Matt is so fond of me doing, because he knows I’m going to bring him up every time we get into these kinds of conversations, because we shared a mentor, he knows exactly what I’m about to say: I just put on my Ken Pier hat. Whenever I have one of these kinds of conversations, Ken would present the information. He would say, “We’re coming up on this situation. We’re coming up on this release, or we’ve done X, Y, and Z. Now, I want to make very clear here, we’ve been able to confirm or refute these numbers of situations here. These are things that we do not have any clear metric, understanding, or detail for, and there’s just no way for us to. So if you want me to sign off on this and verify that we are good to go, that is beyond my pay grade. That’s not something I can do for you. What I can do is tell you where we stand, and how this is going to impact us if we choose to roll with it, based on the information that I have and the experiences that I have. The ball’s in your court; how would you like to proceed?” I find that wonderfully useful. Ken is the kind of guy who will look you straight in the face and say, “That’s beyond my pay grade. I can’t do that,” and he had the history and the gravitas to back that up. I’m not quite sure I do, but I use a lot of that. And I use a lot of the whole thing of, “Here’s what we can do. Here’s what we can’t. Yeah, there’s a fuzzy area here. Are you comfortable going out with that?”

Matthew Heusser (27:45):
We’re often asked to do the impossible. Can you certify that nothing will ever go wrong with it? Hey man, I can’t certify that I’m going to get into work today and my car doesn’t have a flat tire. I can’t guarantee it. I like Ken’s idea: these are the things we’ve tested for, and these are the things we haven’t tested for. It’s your decision to go live, or you can say, “No, I insist on giving you a couple more days to test for more of those things at the bottom edge of your paper.” That’s fine too. Having the list is more helpful than having the awkward conversation where we say, “We tested it. It’s pretty good. It’s okay, I guess.” Don’t put yourself in that situation.

Jenny Bramble (28:27):
Oh, I could talk for hours. (laughter) Two things come to mind. I love everything about making it less about conflict. In a lot of our human interactions, we’re looking to win. We don’t have to win every conversation. The other piece is that risk is about context. If someone doesn’t see the risks that you see, or they don’t see the issues that you see, they may not have the same context that you have. A great way to start conversations when someone tries to shut you down is to say, “Let me add some context. Let me talk to you about what the user group in the 60-plus range is actually doing with their phones.” By adding that extra layer of context, you’re not saying, “you are wrong and I am right.” You’re saying, “I have context that I would like to share with you to make both of us better, to make us better at our jobs, to make our software better, and to make our user base like us a little more.”

Matthew Heusser (29:26):
Thanks, Jenny. And unfortunately we do have to wrap this thing up, but this is the part of the show where we talk about where you can find more about us and what we’re working on lately. So I know you’re doing a lot of talks, but they tend to get scheduled just in time. Is there a website people can go to, to read about your work? I know you’ve got your slides up on a website.

Jenny Bramble (29:53):
Sure! Most of my slides are up on Speaker Deck. You can also find me on my website. It’s not a hundred percent up to date; I forgot to put my 2020 conferences up there, but they’ll be up there at some point. You can also find me on a dozen different Slacks, or on Twitter at jennydoesthings. I would love for people to follow me, tweet at me, and agree with me or disagree with me.

Matthew Heusser (30:21):
Hey everybody, thanks for being on the show. And we’ll be back in a couple of weeks.

Michael Larsen (30:25):
Thanks Matt. Glad to be here as always.

Jenny Bramble (30:28):
Thanks. Y’all this was great. I love talking to you.


Michael Larsen (OUTRO):
That concludes this episode of The Testing Show.

We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts, and we are also available on Spotify.

Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show.

Talk to us about what you like and what you’d like to hear, and also to help us shape future shows.

Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.