The Testing Show: A Tester Walks Through Product Ownership

December 21, 2022

Many individuals on the team have a vested interest in what the tester provides and reports on. One such important team member is the product owner, who can wear many hats in an organization and represent many different perspectives. In this episode, Holly Bielawa joins Matthew Heusser and Michael Larsen to talk about the world of the product owner, their involvement with the organization, and their view of the world and their interaction with QA and testing.

Michael Larsen (INTRO):

Hello and welcome to The Testing Show.

Episode 130.

A Tester Walks Through Product Ownership.

This show was recorded on December 6th, 2022.

Many individuals on the team have a vested interest in what the tester provides and reports on. One such important team member is the product owner, who can wear many hats in an organization and represent many different perspectives. In this episode, Holly Bielawa joins Matthew Heusser and Michael Larsen to talk about the world of the product owner, their involvement with the organization, and their view of the world and their interaction with QA and testing.

And with that, on with the show.

Matthew Heusser (00:00):
Welcome back, everybody. We are recording this just as the holiday season is in swing after Thanksgiving, before Christmas. I think you’re gonna get it before New Year’s. This time we wanted to bring in someone ancillary, someone connected, tangential to testing that has insights into how testing can be perceived as effective or not effective. We might not realize what they need for their role to be successful and how we can help. So I was talking to Holly Bielawa a few weeks ago and we thought, “Wow, this is a pretty good fit for the show”. I’ll tell you how we met when she was the implementation lead for what would become a very, very large Agile transformation company at a Fortune 50 company in the automotive industry that everybody’s heard of. Every year I see her at the Agile and Beyond Conference in the Detroit area. Surprisingly, not only is she great on product, but Holly knows a lot about testing, has a different, maybe, perspective that I thought we should bring on.

One more tidbit from her career. There’s a company called Menlo Innovations I really respect out of Ann Arbor and they have a role called High-Tech Anthropologist where before they write the software, they actually go into the workplace and see how people do the work and model it so they can design a user interface that actually reflects the work people do, which is kind of a unique thing. And Holly was one of the early, I don’t know if I’d say you defined that role, but you were certainly one of the first people doing that with Menlo that always impressed me.

Holly Bielawa (01:36):
Thank you.

Matthew Heusser (01:36):
So what did I miss from that brief introduction?

Holly Bielawa (01:39):
Yeah, it’s interesting. Not very many people know that I was in startups before all of that. I had my own startup in the late nineties; it was written in ColdFusion, for the construction industry. I was in several startups up until the mid-two-thousands. My introduction to Agile actually was in 2002, through an early-stage company. We launched in 2005 with a cross-functional team. But then I had young kids and I stopped traveling, and that’s where I ended up at Menlo. So my startup experience is why I’m so interested in getting product right: I was responsible for all of that in my own startup and others.

Matthew Heusser (02:20):
So that’s fascinating, ’cause that’s product. Literally, what are we gonna build so that we can have wide adoption?

Holly Bielawa (02:27):
Yeah. And make money before we all run out.

Matthew Heusser (02:30):
<laugh> <laugh>, that seems important.

Holly Bielawa (02:32):
Uhhuh <laugh>, absolutely.

Matthew Heusser (02:34):
So, between your talks and your public writing, you’re helping build a practice with Jeff Patton.

Holly Bielawa (02:42):
Jeff, yeah.

Matthew Heusser (02:43):
Which is product coaching, right?

Holly Bielawa (02:46):
Well, yeah, I mean I think it’s interesting. I’ve known Jeff for a very long time and helped him and I’m acknowledged in his book from 2014, “User Story Mapping”. It’s an interesting add-in to the conversation just because when you’re talking about trying to get all the way from product through to QA and out to the market, you really have to be able to see what you’re doing end to end. I’ve always been a fan of user story mapping because it helps to do that and keep the customer in mind. So I’m a big fan from a product perspective. Yeah, so I’m helping build that consulting and coaching practice and we do a lot of different things, but product coaching is a big part of it. There’s quite a demand for that right now as well.

Matthew Heusser (03:28):
So we hear these terms thrown around. There are a number of managers and types of managers, and the field has exploded in the past 10 years. We’ve got traditional line management, we’ve got project management, we’ve got process management consulting, we’ve got product management. What should testers know about product management?

Holly Bielawa (03:50):
I think it’s too bad that we’re siloed off in different areas of the organization and don’t understand each other. But when we’re talking about product management, we’re really also talking about product ownership, and people are always confused about the difference between those two roles. Product owners tend to work a little more closely with the team, but QA tends to be back-ended, at the very back of the process. Especially in some of the large companies I’ve worked with, QA is a completely different organization than engineering. What do testers need to know? They almost need to know everything, the intent and the strategy, to make sure that they’re testing end to end before things go out. And that is not how things are working once you get into any size of company.

Michael Larsen (04:46):
First of all, Holly, hey, welcome to the show. I,

Holly Bielawa (04:48):
Hey Michael.

Michael Larsen (04:49):
I didn’t get a chance to pop in here real quick. Anyway, I just wanted to add on this and give a little context from my side that that will hopefully set up this question effectively.

Michael Larsen (04:59):
So I’ve often touted myself as being a lone tester, which isn’t always true, but it does definitely hold up when it comes to the projects and the products that I end up working on. Though I officially am part of a larger QA organization, more times than not I tend to be that special operative that goes in and works with a team or a project by myself. And so in a lot of ways I have this weird mixed mode where I can end up being a product owner myself in a manner of speaking. In other words, I own the testing aspect and I own the advocacy aspects of a given product that may change.

And then, as I’ve frequently said, I think it’s important for every tester to know not just, “Hey, what do I need to do to test a product?” but “What do I need to understand about the product I’m working on, whatever it is, to be as effective as possible for everybody that’s going to interact with it?” So in a way, product ownership isn’t just, “Well, here’s the product management team and they are the product owners.” In a lot of ways, product ownership also filters into testing. Having said all that, is there something I’m missing there? Is there something we as testers should know about product ownership, or vice versa, that product owners should know about us as testers and how we interact in that space?

Holly Bielawa (06:36):
Wow! When you talk through that, it brings a lot of things to mind. So I always think about it as “What are we trying to do here?” And in larger companies we’ve talked about, and I’ve coached about, shifting QA left because you can end up in a game of telephone, and you say you play a product ownership role, which is great just as long as you are also let in on the strategy, who the users are, what we’re trying to accomplish, those types of things. What I’ve seen in large companies… I don’t know, does anybody remember user stories? I haven’t seen a good user story with good acceptance criteria in seven years now.

Matthew Heusser (07:17):
So a user story would be like, “As a type of user, I want some activity so I can get some benefit”; there’s a classic template for it. And then you would have acceptance criteria, which would be, “How will we know that the story has been accomplished?”
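
As an aside for readers, one way to make that template concrete is to write the acceptance criteria as executable checks. This is only a hedged sketch: the story, the `Cart` class, and the check names below are all invented for illustration, not taken from the conversation.

```python
# Story (the classic template Matt describes; this example story is invented):
#   As a returning shopper,
#   I want to save items to a cart,
#   so that I can check out later without searching again.

class Cart:
    """Minimal stand-in for the feature under development."""
    def __init__(self):
        self.items = []

    def add(self, sku):
        self.items.append(sku)

# Acceptance criteria answer "how will we know the story is accomplished?"
# Here each criterion is an executable check rather than prose.
def test_new_cart_starts_empty():
    assert Cart().items == []

def test_saved_item_is_still_there():
    cart = Cart()
    cart.add("SKU-123")
    assert "SKU-123" in cart.items

if __name__ == "__main__":
    test_new_cart_starts_empty()
    test_saved_item_is_still_there()
    print("acceptance criteria met")
```

The point is not the toy class; it is that good acceptance criteria are specific enough to be checked, which is exactly the skill (not the template) the conversation goes on to discuss.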

Matthew Heusser (07:31):
And that gets into something we seem to talk about often on the podcast; maybe you can speak to it. It’s the difference between templatized process and skill. If you don’t have writing skills, if you don’t have analysis skills, you can do the template, but it’s not gonna be very good. What I find

Holly Bielawa (07:51):
mm-hmm <affirmative>

Matthew Heusser (07:51):
is that companies say, “We need a solution to our poor requirements problem. Look! The template! We’re gonna do user stories.” They don’t even try <laugh>, they just use tickets in Jira <laugh>. Like, “We’ve even given up on the template!” is the state of the practice.

Holly Bielawa (08:06):
Yes. But it’s really interesting to me to talk about the root cause of why user stories really got dumped. And I totally hear what you’re saying about them being a template, and we sort of never wanted to use them that way. But the fact is, when we start talking about Agile and Scrum teams and Scrum adoption, what happened was that so many teams got functionally aligned. It’s useless to have user stories in individual backlogs at the team level. And that’s a big thing for me to say, but I’m gonna stick by it. If you’re a database team and you’re just making calls or putting metadata on the data, then why would you work from a user story? It’s not appropriate for your team; you’re a team that’s working on data and databases. So the user story did become very ineffectual as soon as you no longer had cross-functional teams, and that happened with a lot of Scrum adoptions. So you can’t necessarily blame the user story format or the idea of templates or that mindset. It was really driven from, “Well, wait, this doesn’t really fit; our team doesn’t touch the user. We don’t see a user in our team.” And my view on that is that the team’s set up wrong.

Matthew Heusser (09:31):
That makes a lot of sense to me. I did one project where we had, I don’t know, hundreds of stories, and they were all “As a user, I want to…”; the first eight words were the same. So when you looked at them in Jira, they’d all look the same. But that project at least had a user interface and a real customer. And like you say, there are a lot of data transformation projects, ETL projects, and other kinds of software where the customer is abstracted away from the person doing the work. It would make more sense to just give requirements in a table format: with these inputs, get this output. So yeah, I can agree there are other kinds of software where it’s just less relevant.

Holly Bielawa (10:21):
Yeah, and this also is some of the reason that QA can be at the back end and is only involved once integration happens. So Michael, I’d love to hear a little more about your world with what I’m saying,

Michael Larsen (10:39):
Sure. Yeah.

Holly Bielawa (10:41):
Right? But being asked to make sure that nothing blows up in QA after a lot of different teams code, and then finding an error or finding something that’s wrong, when there hasn’t been any testing done up until that integration point. Is that something that you deal with?

Michael Larsen (11:05):
Oh yeah. Mm-hmm <affirmative>. Well, what has frequently come up is… interestingly enough, I’ve actually made a bit of a transition, because the work I was brought in to do a couple of years ago, with the team that I currently work with, was specifically, as we were talking earlier, data transformation. So who is the end customer? Well, the end customer isn’t really, in this case, a user interacting with an interface. All of the data that was part of one system is being channeled through another system and converted into… I don’t see any reason why I can’t mention this… it’s being converted into XML, and then that XML comes out the other side. And my job a lot of the time is to say, okay, here’s our starting point, here’s our ending point. Let’s run them through a variety of scripts that I have and verify that what we see is correct on both ends.

And, mind you, correct on both ends does not necessarily mean that you run a diff and they match. That’s not what’s going on here. In some cases, the formats are completely different. In some cases, what you used on system A is not what system B uses. So you can’t sit there and say, “Well okay, this matches this and this matches that.” You get this fall-through state that you have to be aware of. And there could be 50 to a hundred differences, but as long as those 50 to a hundred differences match what you expect at the end, then you’re on good ground. If they don’t, you’ve got a problem. So you see what I mean? These are things where it’s not a simple user story. As a <laugh>, what would I say? “As a tester, let’s make sure that these 50 things that I start with turn into these 74 things that I end up with” <laugh>. Like, how?
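
For readers, the kind of check Michael is describing, where source and target are expected to differ in known ways rather than match byte for byte, can be sketched roughly like this. Every field name and transformation rule here is invented for illustration; the real mappings would come from the system’s own specification.

```python
# Record from system A and the transformed record in system B. A plain
# diff would flag every transformation as a failure, so instead we encode
# the *expected* differences and verify against those.
source = {"emp_id": "1001", "name": "Ada Lovelace", "dept": "R&D"}
target = {
    "employee_id": "1001",
    "full_name": "Ada Lovelace",
    "department": "Research & Development",
}

# Expected differences: where each source field should land, and how its
# value should be transformed along the way.
FIELD_MAP = {
    "emp_id": ("employee_id", lambda v: v),
    "name": ("full_name", lambda v: v),
    "dept": ("department",
             lambda v: {"R&D": "Research & Development"}.get(v, v)),
}

def verify(source, target):
    """Return a list of (field, expected, actual) tuples for every source
    field that did not arrive in the right target field with the expected
    transformation applied. An empty list means the mapping checks out."""
    problems = []
    for src_field, (tgt_field, transform) in FIELD_MAP.items():
        expected = transform(source[src_field])
        actual = target.get(tgt_field)
        if actual != expected:
            problems.append((src_field, expected, actual))
    return problems

print(verify(source, target))  # → []
```

The idea scales to Michael’s “50 things in, 74 things out”: the test encodes what should change, not just what should stay the same.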

Holly Bielawa (12:59):
Yeah. Oh, it’s a simple data mapping.

Michael Larsen (13:01):
Yeah. Right. Yeah, exactly. But that’s not something you’re ever gonna put into a user story. So yeah, you have to get into end-to-end systems, and these are things that a tester needs to be aware of and needs to know about. If you’re gonna be effective at all with this, you almost have to approach it with a world-building, gamer’s sort of strategy. Think about if you’ve ever played Dungeons and Dragons and you’ve had to keep track of all sorts of stats as to how a character interacts with another character. I know that sounds like, “What would that have to do with testing?” Actually, a lot <laugh>.

Holly Bielawa (13:41):
Right? And then you also have to know, what is the quest you’re on? Right, exactly. What is the…

Michael Larsen (13:48):
Exactly, yes.

Holly Bielawa (13:49):
What’s the ultimate quest here? I think that that’s the narrative that gets lost. So when we’re talking about shifting QA left, and having QA also know what is the ultimate use of all of this data, right? Because in organizations, it is a game of telephone like I said, but there’s also sort of this element of culture and people thinking that they know when they actually don’t know. And there isn’t a good, right now, there isn’t a really good path from strategy all the way through to QA that’s well understood, unless people are actually in the meetings.

And they’re actually let in on what executives are thinking, what product managers are thinking, what our actual strategy is to make money, and what the vision is for the future. And that’s where the silos come in: product management living on the business side, and engineering and QA living basically under the CIO on the technology side. The business looks at technology like a service provider, and technology looks at the business as its customer. Whereas business people are not the customer. The customer is the one that’s actually going to fulfill the strategy. The customer…

Matthew Heusser (15:18):
The customer is the customer, right?

Holly Bielawa (15:20):
The customer is the customer.

Matthew Heusser (15:22):
I’ll tell you the reason I didn’t come on that automotive project, it wasn’t you. Everything you said,

Holly Bielawa (15:27):
Oh, thank God!

Matthew Heusser (15:28):
made me really wanna do it. With the other folks that I talked to that day, we never talked about a customer once. We talked about requirements and backlogs; we didn’t talk about someone driving a car at any point. And we didn’t even talk about the executives championing the projects that would become the thing for the person driving the car. Maybe that’s the way it had to be because the organization was so big, but it was so abstracted away from the customer. And I think you make a really good point with Michael’s work, and I think this happens all the time in testing, correct me if you disagree. He’s got this checklist of 50 things: data A needs to go to data B. He doesn’t know which of those matter most. Well, he probably does, because his organization’s relatively healthy, but it’s very common that we don’t know which of those are the most important.

Matthew Heusser (16:26):
So testing just kind of tests everything the same way, instead of understanding. E-commerce is a little better; we have things like the path to purchase. We’ve gotta hit this, because if the customer has a problem on this path, we don’t make any money. But the more abstracted away you get, the more you’re just doing data transformation, and the more siloed you are, the less ability you have to understand which things are important, that you should spend more time on, and which are less important, that you should spend less time on. And what I find is teams just test everything about the same and get sort of this lukewarm testing result. Does that make sense?

Holly Bielawa (17:03):
Oh yes, it really does. And one of the words you used without actually using it was prioritization. We think prioritization is somebody’s individual job, but it’s actually kind of everyone’s job, and everybody does it every day. And we know that if everything’s important, nothing is important. I have heard from QA folks that I’ve worked with that they are just trying to figure out what to test, because they can’t test everything. So what is it they’re actually going to test? And this is end to end. So what we would call UAT, user acceptance testing, happened after all the code was written and integrated, and if nothing blew up, there was some testing at the end. It just came down to, “Okay, what exactly are we going to test?” And there was no guidance. This team had no guidance, and there was also an offshore QA team that had no contact with the larger organization, even technology.

Matthew Heusser (18:07):
Yeah, I’ve seen UAT work when they actually bring in the people doing the job and say, “Try to do your job with this software.” But again, there’s only a small percentage of projects where that really makes sense, where you’re literally building software for customer service to use internally.

Holly Bielawa (18:22):
Right? Where you have better access to the users and you can live with them for a bit. It gets a lot more challenging when… you know, I worked in finance for probably the last six or seven years, and it got more and more difficult to get ahold of traders or get ahold of people who would be end users. Even worse, sometimes in this process it wasn’t clear what code had changed in the end.

Michael Larsen (18:54):
Let me give a little bit of additional detail here. I think I can be safe to talk about this, ’cause I’m not gonna go into anything where you’d say, “Hey, you’re giving something away here.” For example, right now, one of the things that I am working on is specifically in an API space, but what’s wild about this is that it’s an API space that has to do with importing and exporting of data <laugh>.

Holly Bielawa (19:17):
Mm-hmm. <affirmative>.

Michael Larsen (19:18):
So we’re back to it. A lot of the time you would say, “Hey, if I wanna actually understand how an API is working, or how it’s supposed to work, I want to be able to be on the system and see how the system works.” Well, yes, I can do that to a certain extent. I can get onto the main machine and I can see what that machine does in regards to how it imports the data and what type of jobs it runs.

But really, that’s not nearly the same thing as, say, somebody in an HR department who is running some training and compensation scenario and wants to be able to map this user to this thing. That’s something that is a little bit more tangible; it’s easy to understand. That’s more of a user doing something. In this case, me as a user doing something is going through and making sure that I can send these calls so that this particular flag in this particular transformation can take place. But because I’m dealing with customers’ data, of course, I’m not going to be able to go into their systems and see how they’ve actually set this up, for context. I can get a very small view of that, but no organization is gonna say, “Oh yeah, we’re gonna let you into this hospital’s database…

Holly Bielawa (20:35):
Mm, exactly.

Michael Larsen (20:36):
…to be able to see how everything is mapped.”

Holly Bielawa (20:38):
That’s right.

Michael Larsen (20:39):
So there is a part of what I do that I have to take on faith, and I have to step back and just say, “I generally understand the ramifications of what’s happening here if something is not mapped correctly or something’s going wrong.” And the difference, of course, is risk. If we’re dealing with compensation data, and for some reason that compensation data gets transformed where the math doesn’t line up and suddenly you’ve got people who’ve gotten a pay cut, that’s a big deal. But if it’s something where somebody is in one organization and, because of a numerical thing, they get allocated to another, and it’s, “Oh goodness, let’s just flip this over,” not a big deal. That’s not something where somebody’s gonna say…

Holly Bielawa (21:26):
Mm-hmm <affirmative>

Michael Larsen (21:27):
That’s the end of the world if we get this wrong.

If you mess with people’s money allocation and that has to do with how their paycheck’s generated, that’s hugely important. <laugh>. So yeah, that comes down to what do you prioritize? Well, to prioritize, you have to understand what the system is ultimately doing. And again, it’s different for each organization. One customer may be doing a data transformation related to compensation, whereas a different company may just be doing a data transformation because they’ve incorporated two teams and they just want to have everybody be part of the same team, which is much lower in both risk and, some would maybe even argue, priority. Does that make sense?

Holly Bielawa (22:12):
Yes. I was just going to say that you hit on risk there, and then you said it: prioritizing by risk is just as important as prioritizing by money or value. Matt, were you gonna say something?

Matthew Heusser (22:27):
Yeah, kind of reading between the lines of everything you’ve been saying about the problems that happen in large organizations that separate roles. And it’s kind of neat, because that first time we met, you were literally brought in to break some of that up and get people to talk to each other. I think that’s really what you’re saying. If we can just blow up the silos and get people to actually talk to each other and understand each other, then product can communicate what the critical success factors are, what really matters. And if product can communicate that, and development can test for that and start thinking about risks as early as possible, so when we build the walking skeleton that is the system, we can test it end to end before we add on all of the bells and whistles to make it pretty, we’re gonna have a better outcome. I don’t wanna say that’s the ball game, but if you don’t have that, you’ve greatly increased your risk, and if you do, you’ve greatly decreased it. A lot of the Agile stuff boils down to, “Hey, man, if we could just talk to each other, instead of trying to create these artifacts written on paper that are imprecise and vague and ambiguous and shove them between each other so we don’t have to talk, we’re gonna have a better outcome.” Is that true?

Holly Bielawa (23:43):
It is. And it’s a very, very tough problem, because we met in what must have been 2012 or something like that, and it’s 10 years later and this is still an issue and we’re talking about it. I’ve talked to a lot of people in technology companies or going through digital transformation, and the problem’s gotten worse. I’ll give you a very small, team-level example of how incentives really run counter to everybody getting together and talking. I was working with a team that was trying to settle trades. It used to be that trades settled in three days, and this effort was about removing a day, having trades settle in two days, and it was a small team working on it. So we had engineers working basically in a one-week Scrum cycle, a cross-functional team with engineers and a QA person.

So the engineers were getting things to dev-done, moving on to the next thing, getting things to dev-done, and there was one QA person on the team, and she was drowning. She asked to have coffee with me and said, “I don’t know what to do.” So I got the team together and I said, “Look, how is this working for you?” And the devs were like, “We feel great, we’re getting everything done.” And she fessed up to the group that her soul was leaving her body every sprint. So I said, “Well, look, I think what we should probably be doing here is considering things done only once they’re all the way through QA.” The engineers were so worried about getting in trouble from the product owners that we had to call the product owners in and get permission to do a two-week test, two iterations, where the developers couldn’t work on anything else until what they had just done was all the way through QA.

This is Kanban, if anybody’s familiar with that; this is sort of taking more of a Kanban view. Well, they got more done. At the end of the second week they said, “We couldn’t believe it. We actually got more done.” That was a single cross-functional team. But imagine trying to do that when you can’t pull everyone together, they’re in different parts of the world, and they don’t normally talk to one another. I think this is an incredible problem that we’ve got with organizational silos, between engineering and QA, between the business and engineering. When I work with product managers, which is a lot, they do complain about the wrong things being built and it taking too long. So we’ve got a lot of organizational wait times; as soon as you’re documenting anything, that takes time. And we wonder why things aren’t getting out into the world faster. But that’s part of the problem, Matt, that you bring up: we’re not talking, and the incentives do not run that way. The incentive is, “I got my thing done.” So people are driven to get things done and off of their own to-do lists.

Matthew Heusser (26:37):
Yeah, like if I could just go to the meetings and say, “Oh, my things are done,” and point to somebody else, then that’s great. And that creates so many problems for performance. I’m pretty sure, Holly and Michael, you’ve both read “The Goal” by Eliyahu M. Goldratt, the Theory of Constraints stuff, or at least you’re familiar with the theory.

Holly Bielawa (26:54):
Mm-hmm. <affirmative>. Yep.

Matthew Heusser (26:55):
And I see that so much. We’ve got six different steps in the chain, each optimized on price, so we’re only paying a very small amount per hour. But that guarantees that whatever link in that chain is the slowest (in that case it was a single tester) will slow everything down and become the bottleneck for the whole delivery process, and those bottlenecks will ship. So you always get the worst outcome of all of your six teams. And I think he wrote that book in a different century, so <laugh>…
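
The arithmetic behind that observation is simple: a chain delivers at the rate of its slowest link, no matter how well the other links perform. A toy sketch for readers (the stage names and weekly rates below are made up, purely to illustrate the Theory of Constraints point):

```python
# Items per week each step of a hypothetical six-step delivery chain can
# process. Each step was "optimized" individually, but the chain as a
# whole delivers at the rate of its slowest link.
stages = {
    "analysis": 30,
    "design": 25,
    "coding": 40,
    "review": 20,
    "testing": 5,   # one overloaded tester: the bottleneck
    "release": 50,
}

# The chain's throughput is the minimum across stages, so speeding up
# any step other than the bottleneck changes nothing.
bottleneck = min(stages, key=stages.get)
chain_throughput = stages[bottleneck]

print(bottleneck, chain_throughput)  # testing 5
```

This is why the WIP-limit experiment Holly described worked: pausing the faster stages until work cleared the bottleneck raised the throughput of the whole chain, even though individual developers were "less busy."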

Holly Bielawa (27:23):
Yeah, it’s sort of everything. Everything old is new again, right? Uh, if you stick around long enough.

Matthew Heusser (27:29):
Yeah. Well, we try not to talk about Qualitest too much, but in our work, in our sort of ecosystem, both Qualitest’s and my own, there are tools where you can measure throughput at the different steps, and then you can try to figure out what the bottleneck is and elevate it. If it is testing, then the things you’re talking about, collaborating and adding resources to test the right things, can actually improve the performance of the whole organization. We’ve done the modeling; then it’s like, a hundred grand here would deliver two million of value, assuming, whatever, we spent two million last year and we doubled the speed. Those kinds of things.

Holly Bielawa (28:11):
I think bringing that data and bringing those views is very valuable, because it can get the conversation headed in the right direction. There are also tools that you can use to see what code is never used. How much legacy code do you have sitting around that never gets touched? Those tools exist to present that data, yet you see migrations where companies will spend years migrating. I may have worked at one of these companies that spent years doing a migration and never leveraged those tools, never leveraged that data, and ended up doing basically a one-to-one rewrite of all the apps so that they would work on Linux, instead of taking the time to do analysis. Half of what they were migrating was never going to be used.

Matthew Heusser (28:59):
Yeah. And we find that a lot of times some people somewhere in testing have access to that information. If they could find some way to collaborate with product, they could say, we could get you 90% of your testing value for 20% of your effort. Something like that. And these are the risks that would be open if we did it this way. Someone in product could say, “Yeah, let’s do it.” But you gotta have the communication links to be able to have the conversation.

Holly Bielawa (29:26):
Absolutely. I mean, I think that really gets into the idea, well, not the idea, the fact that organizational hierarchies are not built to bring that data forward. In a lot of cases you might say to your boss, “Here’s this data,” your boss might present it to their boss, and then as soon as it hits sort of a political barrier, it’s like, “Oh well, you know, we’ll talk about that next quarter.” <laugh> I can’t tell you how many times that has happened.

Matthew Heusser (29:57):
We just, can we just get this project done and we’ll fix it later?

Holly Bielawa (30:00):
Yeah, yeah. Okay. Can you just, or you know, what’s worse is, can you do your job please?

Holly Bielawa (30:06):
You’re not getting paid to bring this information. You’re getting paid to actually do your job, so please do your job.

Holly Bielawa (30:13):
I think that’s… we can all have this conversation; we’ve been around a while, and we’re brought in as consultants whose job it is to bring this type of stuff forward. But being an employee, and I’ve done both things, right? I’ve even been at the executive level where my purview is product; why am I talking about QA? We talk about being cross-functional and all of that, but that doesn’t play very well with organizational hierarchy. You’re not really paid to be cross-functional.

Matthew Heusser (30:47):
Well, I don’t want to end it on a low note.

Holly Bielawa (30:50):
Oh <laugh>

Matthew Heusser (30:50):
It’s time for us to wrap up. So, two things for you. One, where can people go to learn more? And two, do you have a final thought?

Holly Bielawa (30:58):
I guess my final thought… there are a lot of good resources. I’m sure you have so many on the QA side, but really, user story mapping. If you want to know more about the struggles of product management, the two best books I know are “Escaping the Build Trap” by Melissa Perri and “User Story Mapping” by Jeff Patton. And there’s a new one called “Real Flow” by Brandi Olson; it just came out a month or two ago. That’s also a great book about flow through an organization. My final thought is really this: we’ve got to do what we can to break down these silos and to collaborate where we can, which means we have to not just do our jobs, but also talk to people outside of our realm. It’s been great to be part of this conversation today, because I just love talking to other people who share this mission of improving how we deliver valuable software to market while working in different parts of our companies.

Matthew Heusser (31:59):
Thanks, Holly. And you’re on LinkedIn. If people send you a LinkedIn connection request and say “friend of the show,” mentioning the show by name, would you connect with them? Or are you a little more reluctant with…

Holly Bielawa (32:09):
The connection? No, I absolutely would. And I’m also always willing to talk to people, or have a coffee Zoom if we’re not in the same area, to help people realize they’re not crazy. They’re not the crazy one <laugh>; they’re just trying to make change in places where it’s really hard.

Holly Bielawa (32:27):
Absolutely. You know, hit me up on LinkedIn. I’m at… Feel free to reach out.

Matthew Heusser (32:33):
Thanks. I will add two little things that this conversation reminded me of. The first is “The Goal” by Eli Goldratt, which is set in a fictional organization. It’s a fantastic little story, well written, that draws you in, and it talks about this bottleneck problem. And Jim Benson also has a book called “Why Limit WIP”, which is about work in progress. It’s a tiny little book with some exercises in it that I’ve done with people. You can buy ten copies for fifty or a hundred bucks and just leave them on desks around the holidays <laugh>, which I would consider.

Matthew Heusser (33:07):
Michael, do you have anything you want to add before we go? I’ll let you talk us out.

Michael Larsen (33:11):
Much of it comes down to, and I’ll say this again as before: we often talk on this show about how, if you don’t understand the context of what you’re working on, you’re going to miss a lot. That’s the nature of what I was bringing up with my examples. When you are working on a product, you do, in a sense, need to make some ownership decisions regardless of where you are in the process. To that effect, a book that’s been out for a number of years now, but that I still think is worth reading, albeit with a narrative that’s a little grossly simplified, is “The Phoenix Project”. Granted, it’s geared toward the DevOps community, but in narrative form it shows how many of these concepts, these ownership aspects, and being able to take advantage of them can make a pretty decent world of difference in the work you’re doing. I’ve mentioned it before, but I think it’s still relevant even today. So there you go.

Matthew Heusser (34:10):
<laugh>. All right, well thanks Michael. Thanks Holly.

Holly Bielawa (34:13):
Oh, thank you.

Matthew Heusser (34:14):
We’ll see you soon. And thanks everybody for listening.

Michael Larsen (34:16):
All right, thanks for having us.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts, and we are also available on Spotify. Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us. Also, we want to invite you to come join us on The Testing Show Slack channel as a way to communicate about the show. Talk to us about what you like and what you’d like to hear, and help us shape future shows. Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group. The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen. Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.