The QA Summit Pre-Game Show

September 22, 18:02

Panelists

Matthew Heusser
Michael Larsen
Gwen Iarussi
Rachel Kibler

It’s been quite some time since we have been able to attend in-person conferences. For a brief window, vaccination rates and easing of travel restrictions allowed Matthew Heusser and Michael Larsen to attend the Xpanxion sponsored “QA Summit 2021”, held in South Jordan, Utah, at the end of July, 2021. As part of the speaking group at the conference, Matt and Michael met up with Gwen Iarussi and Rachel Kibler to discuss their talks and the conference in general.

Transcript

Hello, and welcome to The Testing Show
Episode 105:

The QA Summit: Pre-Game Show

This episode was recorded live in South Jordan, Utah, on July 27, 2021.

It’s been quite some time since we’ve been able to attend any in-person conferences.
Matthew Heusser and Michael Larsen flew out to attend the Xpanxion sponsored “QA Summit 2021”, held in South Jordan, Utah. As part of the speaking group at the conference, Matt and Michael met up with Gwen Iarussi and Rachel Kibler to discuss their talks and the conference in general. Also, just a side note: the interview was recorded in the lobby of the Embassy Suites where the conference was being held, so if you are thinking to yourself, “Is that a waterfall I hear in the background?”… yes. Yes it is!

And with that… on with the show.

Matthew Heusser (00:00):
Well, hey, thanks, Michael. And we are here the day before the conference, the first in-person conference that I’ve been to since COVID hit… has anybody else been anywhere?

Michael Larsen (00:14):
Nope, this is my first travel trip.

Matthew Heusser (00:16):
And the last time I saw Rachel was at the last conference I was at in 2019.

Rachel Kibler (00:22):
Yeah. It’s been a while.

Matthew Heusser (00:23):
Super excited to be here. We’ve got long-time friends of the podcast. We have Gwen… what’s your last name now?

Gwen Iarussi (00:30):
It’s Iarussi. Or [E-o-roo-ee]… I’ll go by anything. That’s fine (laughter).

Matthew Heusser (00:37):
Welcome back. And Rachel Kibler.

Rachel Kibler (00:40):
Hi. So happy to be here.

Matthew Heusser (00:42):
And, hopefully, you know them by now, they’ve been on the show more than once or twice, and they’re both speaking at the conference. So we thought we’d just talk about what their topics are and turn that into sort of a roving conversation, because it aligns with my interests and passions. [Let’s] see if I’m going to get this right. There’s been a lot of changes in the world of software quality. And what does it mean? We talked recently on the show about testing versus quality and having more of a quality emphasis. Some organizations are moving toward a hybrid model where they don’t have “testers”. They have maybe analysts or technical contributors or technical staff members, but there’s no testers. So Rachel is going to be talking about the kinds of questions that testers ask on projects, because those questions still need to be asked to manage risks. Did I get that right?

Rachel Kibler (01:29):
Yes. My talk is called “Testers or none, the work has to get done”. A lot of times, companies and people who aren’t working directly with testers think that what testers do is just…

Matthew Heusser (01:43):
press buttons…

Rachel Kibler (01:44):
press buttons after the code is written, and that that’s our major contribution. But there’s a lot more that we do as companies are trying to…

Matthew Heusser (01:52):
Click mouses?

Rachel Kibler (01:52):
Yes, click mouses, too (laughter), but as companies are trying to shift focus, the work that testers do still has to get done, earlier in the process, too. So I’m talking about those questions that we just think of, because that’s how testers think.

Matthew Heusser (02:12):
So, particularly when we talk about shifting left, if you don’t have testers to do that, who’s going to be doing the things that they would do in that role?

Rachel Kibler (02:18):
Exactly!

Matthew Heusser (02:18):
And Gwen is talking about building a culture of quality… did I get that wrong?

Gwen Iarussi (02:24):
Well, I mean, that’s part of it. Quality mindset.

Matthew Heusser (02:27):
Quality mindset, which I think would have some overlap.

Gwen Iarussi (02:32):
Yes, it does. Anybody can have a quality mindset, so it applies to environments where there are testers and environments where there are none.

Matthew Heusser (02:39):
And what kinds of questions you would ask. Usually when we talk quality, we mean more on the building side, but what if this goes wrong? What about this? What about that? So now that I’ve done my not-terrible job of introducing their topics, maybe you could tell us more. Let’s start with Rachel. What kind of questions do testers ask?

Rachel Kibler (02:57):
A lot of what we do is analyze risk. There are risks in the code that we write, security risks and things, but then we can also talk about reputational risks and financial risks and legal risks, things that expose us in ways that we don’t want. Bringing out those questions during refinement or grooming… I have a picture of a horse being groomed, because I always think of horses being groomed when I think about grooming. Am I the only one?

Michael Larsen (03:26):
Good reference.

Rachel Kibler (03:28):
Okay. But then even earlier in the process, making sure that there’s not too much noise or distraction, helping the team get to where we know what problem we’re trying to solve and how we’re trying to solve it, and not taking everything as a given. I talk about bias a lot, too, in other talks. Especially early in the process, we can get anchored pretty quickly. And so, as testers, one of our jobs is to get us out of that anchoring bias, to get us deeper and deeper, to figure out what it is that we’re actually trying to do and how we want to go about doing it, and to try to strip away the stuff that doesn’t matter as much. The noise. Like the waterfall.

Michael Larsen (04:14):
(laughter) So let’s gear this toward a quality mindset. I find this interesting because it leads into a little bit of the talk that I’m doing, which we’ll cover in another part of this. But I am definitely interested in how you are going to sell that to people who maybe have a different view of what testing should be, or what they think testing is, and it may not jibe with what we know testing to be.

Gwen Iarussi (04:44):
Yeah, that’s a good question. I wish I was prepared to answer it now (laughter). No… I mean, when we talk about mindset, we’re taking a step back from the practice itself and we’re looking at the types of factors that influence how we look at software, how our customers look at applications and their interactions with the application, and we’re bringing it all together and trying to improve outcomes. Because at the end of the day, no matter who it is, whether it’s a tester or a testing organization or a software delivery team that has no testers and just embeds testing as part of their engineering efforts, the work still needs to get done. Those questions still need to be asked. And so when we approach testing and we approach quality, and we look at trying to scope that out and define what it is, we need to recognize all of the different factors at work in the people that are playing in that space. One thing I’ve learned, being in quality for as long as I have, is that every single tester brings their own unique perspective to the table and views testing in their own unique way. They may ask questions that no one else asks. There’s something to be celebrated in that, and it’s something to be aware of, so that as we’re building teams, as we’re coming to the table and planning out testing initiatives, we are aware of those factors and can get the results that we want. The other piece of this is the bias that Rachel was talking about, in terms of understanding how our minds work. All of our experience is basically a sum of everything we’ve experienced, everything we’ve read, everything we’ve seen, our interactions, and we bring that to the table. How we use that in a way that gets those quality outcomes is what we’re all focusing on.

Michael Larsen (06:27):
Awesome. Thank you.

Rachel Kibler (06:29):
I’m glad that our talks are at different times because I’m obviously coming to yours (laughter)

Michael Larsen (06:33):
I would like to attend it as well.

Matthew Heusser (06:36):
Gwen said something I really like, and that is: we’re trying to improve outcomes. When my role has been under attack, instead of trying to defend it or support it, I have offered to quit, or said I could also do this other job for a project, and we could just see what happens. Hopefully, if you’re going to make such a bet, everyone will go, “Oh my gosh, everything will fall apart.” So articulating “what would the world be like without your role?” is, I think, a really powerful way to explain what it is you do. Which brings me to Rachel. What’s one of the questions that testers would ask on a project? Can you give an example? Do you want to give us some context?

Rachel Kibler (07:19):
Sure. So I figured out the exercise that we’re going to do… there’s going to be an exercise at my talk… it’s going to be very exciting…

Matthew Heusser (07:31):
I don’t do yoga.

Rachel Kibler (07:31):
(laughter) …where we’re going to talk about redesigning a loyalty program. So I’ll have the old loyalty program and the new loyalty program, and we’ll talk about how we’re going to implement it. Questions that should be asked at this point are: “What is the problem that we’re actually trying to solve, and how does this try to solve that problem? Is there too much noise? Is this simple enough? Can it be scaled back?” That’s the beginning stuff. And then the next step is: “Where are the risks? Where are the risk points? Are we going to lose a bunch of customers because of this? Is there a risk in our technology as we migrate users?” All of these things. And then, at the planning stage: “What’s the smallest chunk that we can code and test simultaneously?” Things like that. So there are a bunch of questions. I’m basically giving the entire talk right now. It’s not a very long talk.

Matthew Heusser (08:23):
That’s kind of what I was hoping we’d do. That was the plan. Like Royal Caribbean… I like boats… when Royal Caribbean, a few years back, shifted their loyalty rewards program from number of cruises to number of overnights, because people were going on a bunch of three-night cruises to get points, and they didn’t like that. One of the key questions, if you do that as an exercise… I would hope we could leave some obvious questions there. Here are the requirements. Here’s the new plan. What’s the transition? If I was Platinum before, how many points do I have under the new system? How do you come up with that?

Rachel Kibler (08:58):
Yep.

Michael Larsen (08:59):
I would think it would be an interesting experiment to literally leave, out in the open, methods by which the system could be gamed, and see who picks up on them.

Matthew Heusser (09:09):
I’m going to put a bug on one of my slides tomorrow. It’s a car insurance problem, right? If your car is worth $40,000 to $50,000, then you do this. But if the car’s worth $50,000 to $60,000, you do that. Well, what if the price was exactly $50,000? It’s in both categories; it’s a requirements bug, if you’re not paying attention. I would submit, if you’ve ever worked on significantly complex software, you could take the requirements document or JIRA stories, put them all together, and build tables: under these situations, you should do this. You can make a lot of boxes, and there’s a big combinatorial explosion. You can’t see me; I’m waving my hands. But if you did that modeling and you made the table, you would find a bunch of question marks where the requirements don’t tell you what to do. And that’s one of my interests: I want to find the times where the developer, the product owner, and the tester would all argue about what the software should do and come up with four different answers and…

New Speaker (10:06):
Well, I was going to say… that would *never* happen (laughter).

Matthew Heusser (10:08):
Get a single answer before they write the code. And then we can save some of these stupid arguments and rewrites.

Rachel Kibler (10:15):
Yes.
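
To make Matt’s slide bug concrete: here’s a minimal Python sketch of the overlapping-boundary problem, with hypothetical rule strings standing in for his car insurance example.

```python
# Hypothetical rules, transcribed literally from the "requirements":
#   worth $40,000 to $50,000 -> do this
#   worth $50,000 to $60,000 -> do that
# Both rules claim $50,000, so whichever branch is checked first wins.
def coverage_rule(car_value: int) -> str:
    if 40_000 <= car_value <= 50_000:
        return "do this"
    if 50_000 <= car_value <= 60_000:
        return "do that"
    raise ValueError("no rule covers this value")

# A developer, product owner, and tester can each read the requirements
# differently and all be "right" -- until someone asks the question.
print(coverage_rule(50_000))  # "do this" -- but was that the intent?
```

Forcing a single answer for the $50,000 case before the code is written is exactly the kind of question the panel is describing.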

Matthew Heusser (10:17):
And it sounds like both of you have different ways to accomplish that result than what I would do. Some parts of your talks are going to overlap.

Gwen Iarussi (10:25):
Probably a little bit. I mean, mine’s more, it’s a little philosophical. I’m waxing philosophical this time.

Rachel Kibler (10:32):
I like it.

Gwen Iarussi (10:33):
Yeah. I’m going cerebral. We’re going to talk about the brain and how it learns…

Matthew Heusser (10:37):
What about the brain? How does the brain learn?

Gwen Iarussi (10:39):
I’m not giving that up. That’s spoilers (laughter).

Matthew Heusser (10:39):
This isn’t going to come out for maybe a couple of weeks, and it’ll have been live for a week and a half.

Gwen Iarussi (10:45):
I go in and talk about pattern recognition and how we use that to not only be able to make the right decisions, but also make decisions more quickly and how that’s a really useful thing, but how it can also lead us astray, because there’s nothing to say whether that builds assumptions that are correct or incorrect. And so you can have logic flaws where you have false positives, you have false negatives, and so you can get yourself into trouble. Raising awareness of some of that stuff and understanding perspective and understanding how important it is to be asking questions and bringing people together and bringing those different perspectives to the room and bouncing ideas off of one another. And how important that collaboration is to getting to the right solution so that you don’t have to rework.

Matthew Heusser (11:29):
So I would argue that stereotyping, the good kind of stereotyping, is where we make broad generalizations that are accurate enough that they’re kind of a brain shortcut, so we don’t have to think about it every time. And I think what you’re arguing for is: okay, you’ve got your biases and your stereotypes that you’re probably not even consciously aware of. We get more people in the room, we have a norming function…

Gwen Iarussi (11:54):
Right!

Matthew Heusser (11:54):
we can eliminate the kind of false, “that doesn’t really fit here” stereotype.

Gwen Iarussi (11:59):
Right, you can qualify some of those assumptions. You can make sure that the gaps that you see, or the priorities that you see, are the same priorities that everyone else sees, and bounce those ideas off of one another, to make sure that focus is where it should be and that we’re not leaving some massive gap in our testing because we assume something’s going to work a certain way, or we assume that the conversation has been had, or we assume that because all of these work this way, this one has to work that way, too. That’s a challenge, I think, that we are facing today more than ever, because we’re looking at globalized applications where there may be different rules for every single country, and that just adds to the complexity. So those discussions become much more important, because our bias, the way that our minds work, will get in the way and will cause those issues if we’re not careful.

Matthew Heusser (12:48):
Kind of like how, for estimation or budgeting or quoting, we have a tendency to get a number of quotes, and then we kind of throw out the bottom one, throw out the top one, and average the rest. It’s a common strategy for measurement.

Gwen Iarussi (13:05):
Right!

Matthew Heusser (13:05):
We can do that with features, requirements, quality, test approaches.

Gwen Iarussi (13:11):
Yeah.
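
The throw-out-high-and-low averaging Matt describes is essentially a trimmed mean; a minimal sketch:

```python
def trimmed_average(quotes: list[float]) -> float:
    """Throw out the lowest and highest quote, then average the rest."""
    if len(quotes) < 3:
        raise ValueError("need at least three quotes to trim both ends")
    kept = sorted(quotes)[1:-1]
    return sum(kept) / len(kept)

# One wildly high quote no longer drags the estimate around:
print(trimmed_average([3.0, 5.0, 6.0, 21.0]))  # 5.5
```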

Matthew Heusser (13:12):
What is it… “many eyeballs make bugs shallow” is the common expression?

Michael Larsen (13:18):
I don’t know if it’s a common one but I’ll go with it.

Matthew Heusser (13:20):
I’m not quite quoting it right. It’s open source software.

Michael Larsen (13:23):
Okay. It certainly makes sense that the idea is that if you have many people who are looking at something…

Matthew Heusser (13:29):
Maybe it’s many eyes, I think many eyes…

Michael Larsen (13:31):
Which again, makes sense, because if you have a lot of people looking at something, you’re able to have multiple viewpoints on it and it does make it so that you flush out more things.

Matthew Heusser (13:40):
That was the theory with open source software was anybody could look at the code and find the bugs. And it hasn’t quite worked out that way because not everyone has the skill set to do that.

Michael Larsen (13:50):
True.

Matthew Heusser (13:50):
But most people in the organization that are stakeholders have the ability to have a conversation. They might not read the code, but they can have a conversation about what the software should do and say, “Oh, my stakeholder community, you really have to look at the way rounding works, ’cause we need to do banker’s rounding this time, right?”
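
For reference, banker’s rounding (round half to even) happens to be what Python 3’s built-in round() does, which makes it an easy thing to demo in that stakeholder conversation:

```python
# Banker's rounding sends ties to the nearest even digit, so repeated
# rounding doesn't systematically drift upward.
print(round(0.5))  # 0, not 1
print(round(1.5))  # 2
print(round(2.5))  # 2, not 3 -- the tie goes to the even neighbor
```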

Michael Larsen (14:08):
This reminds me of something that I just went through this past week. We had this one story that we were doing because of a custom treatment that we were doing for a particular customer. We had this particular bug that had been introduced, and they came back and said, “We also have to do this custom date functionality.” So what I’m guessing is, if you define what the date functionality is and you feed something in, it’s going to convert it to that. Mind you, there was nothing in the conversation that we had that was telling me what the custom date function was. No, that wasn’t what it did. What it did was: if you define a custom date, you then have to provide the date in the format in the profile agreement that you have, to match it. If you don’t, you end up getting really unusual results, like the dates just don’t work right. Let’s say, for example, you have day, month, year, which makes perfect sense if you’re a European client…

Rachel Kibler (15:03):
Right.

Michael Larsen (15:04):
Versus if you were an American client, you do month, day, year. Well, if you don’t have two separate agreements in place, how’s it going to be handled? And how’s it going to be handled when you are using day, month, year? Say you have 30-10-2020. It’s going to go, “Wait, there is no such thing as a month of 30. How do I deal with this?” And it blows up.
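
Michael’s date bug is easy to reproduce. A minimal sketch, with hypothetical format strings standing in for the profile agreement:

```python
from datetime import datetime

EU_FORMAT = "%d-%m-%Y"  # day-month-year, e.g. a European client
US_FORMAT = "%m-%d-%Y"  # month-day-year, e.g. an American client

print(datetime.strptime("30-10-2020", EU_FORMAT).date())  # 2020-10-30

try:
    # Same input under the wrong agreement: "there is no month of 30".
    datetime.strptime("30-10-2020", US_FORMAT)
except ValueError as err:
    print(err)  # time data '30-10-2020' does not match format '%m-%d-%Y'
```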

Matthew Heusser (15:28):
That’s really an interesting internationalization problem that we’ve probably run into a time or two in our careers, and that we don’t talk about much in testing. It’s a real pain.

Michael Larsen (15:36):
But it’s one of those things where, because of the way that it is coded, you might think to yourself: okay, we have a custom date format; a custom date format means that if we put that in, it’s going to convert to that. No, you’re saying you have to put it in that way…

Matthew Heusser (15:49):
you have to put it in the format.

Michael Larsen (15:50):
It’s just one of those things that, again, if more people were talking about, we would have been able to say, “Oh, that’s what that is.” Instead, because just two people were working on this, I spent a day working on it before we got to, “Oh no, that’s not what that means. It means this.”

Gwen Iarussi (16:05):
And the interesting thing at work there is that’s one area where having a bit more emphasis on why that story is important would probably have flushed that out.

Michael Larsen (16:15):
Yeah, absolutely.

Gwen Iarussi (16:16):
So, I mean, there are all kinds of things that you can do to bring that in. Hopefully you’re not finding it after things have been coded.

Rachel Kibler (16:24):
I do love a good date bug.

Gwen Iarussi (16:26):
Oh yeah.

Rachel Kibler (16:26):
Those are amazing. But Michael, what’s your talk about?

Gwen Iarussi (16:29):
Yeah.

Michael Larsen (16:30):
Thank you for asking. I’m doing something very much out of my typical wheelhouse. Now, I laugh about this, because my title is Senior Automation Engineer, which I always find hilarious, because I think about 75% of my time is doing actual testing and 25% of my time is doing actual automation. But yet I am a senior automation engineer, because that’s how they code my paycheck. It’s just the way it is. A few months back, I did have an experience getting into the guts of what self-healing automation is. My realization, after I went through it, was that it’s nothing of the kind. There really is no such thing as self-healing automation…

Matthew Heusser (17:10):
What’s being called…

Michael Larsen (17:11):
What’s being called…

Matthew Heusser (17:12):
The accepted label for self-healing automation is a very particular kind of test that is able to recover from certain types of errors. It’s very limited.

Michael Larsen (17:26):
Exactly. And so self-healing automation is really opportunistic multi-choice automation. In other words, if you code something where you’re looking for a particular locator, and you only are looking for that particular locator, if it finds it, it’ll pass, but if it doesn’t find it, it’s going to fail. It can fail for a number of reasons. You could fail because of latency. You could fail because what appears in one section of the page doesn’t appear the next time you load that page. I looked for a very particular piece of equipment on Amazon. Now, if you’re looking at it from a web page’s perspective, if you see it, you know you’re seeing it, and it might be worded any number of ways. But if you enter “I’m looking for this thing”, where’s it going to show up? Even if you’ve gone and said, “Okay, I want to have Prime, and I want to have Amazon’s Choice, and I want to look for these exact words.” Okay, cool. Is it going to be your first item or is it going to be your ninth item? Is it going to be at the top of the page or is it going to be on page two? Because there are other things that kind of match it, and Amazon just might like to say, “Hey, you know what? We’d kind of like to push these people because they’re a featured seller…”

Matthew Heusser (18:39):
Something might be sponsored…

Michael Larsen (18:41):
or something might be sponsored.

Matthew Heusser (18:42):
they could change the rating factors…

Michael Larsen (18:44):
It literally doesn’t show up the same way…

Matthew Heusser (18:47):
Top seller.

Michael Larsen (18:47):
So my top slide… when I say “self-healing automation”, I’ve run this test for months with the same parameters, and you’ll see greens and reds, because of exactly that. Even with the self-healing, even with the idea of artificial intelligence and machine learning… when we get right down to it, what artificial intelligence and machine learning is here, is a list of variables and counters. The more times you run that test, if a particular counter gets the test across the finish line, it gets incremented, and over time your weighting starts to change. You’ve now built up multiple layers of locators that it can try, and then it can go, “All right, good enough.” And I’m just showing that, even with my self-healing test, it still fails.

Gwen Iarussi (19:36):
Right.
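
A toy model of the counters-and-weights mechanism Michael describes might look like the sketch below. The class and locator strings are illustrative, not any vendor’s actual implementation:

```python
from typing import Callable, Optional

class MultiLocator:
    """Opportunistic multi-choice lookup: try several locators, and
    reinforce whichever one gets the test across the finish line."""

    def __init__(self, candidates: list[str]):
        # locator -> number of times it has succeeded so far
        self.weights = {locator: 0 for locator in candidates}

    def find(self, lookup: Callable[[str], Optional[object]]) -> object:
        # Historically successful locators are tried first.
        for locator in sorted(self.weights, key=self.weights.get, reverse=True):
            element = lookup(locator)
            if element is not None:
                self.weights[locator] += 1  # the weighting shifts over time
                return element
        # Even "self-healing" runs out of options eventually: a red run.
        raise LookupError("no candidate locator matched")

# A fake page where only one of the recorded locators still resolves:
page = {"css:#buy-now": "<button>"}
finder = MultiLocator(["id:buy_now", "css:#buy-now", "xpath://button[1]"])
print(finder.find(page.get))  # "<button>", and css:#buy-now gains weight
```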

Matthew Heusser (19:38):
I think the theory is they’ve got 10,000 customers, each of which are running 10,000 assertions a day, which represent a hundred discrete tests. So as people go in and fix it, they can then track: oh, this changed; it was red, and now it’s green, because they changed the index number in the table of this thing. So now we can look at all the fixing, run that through an ML algorithm, and say: it failed… oh, is that in a table? It is in a table. Let’s walk the table and see if something else works. From the research that I’ve done among the companies that are advertising self-healing automation, they’re not doing nearly as much ML as just “let’s walk the whole DOM and see if it works.”
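
And a rough sketch of that brute-force “walk the whole DOM” fallback, written here with Selenium purely for illustration: if the recorded locator fails, scan every element for matching visible text.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.common.exceptions import NoSuchElementException

def find_with_dom_walk(driver: WebDriver, css_selector: str, expected_text: str):
    """Try the recorded locator; on failure, walk the whole DOM."""
    try:
        return driver.find_element(By.CSS_SELECTOR, css_selector)
    except NoSuchElementException:
        for element in driver.find_elements(By.XPATH, "//*"):
            if element.text.strip() == expected_text:
                return element  # a candidate "heal" -- worth logging for review
        raise  # nothing matched: the test goes red after all
```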

Michael Larsen (20:21):
And I will be warning people in my talk: if you’re looking for somebody who is a super expert on self-healing automation, that’s not what this talk is about.

Matthew Heusser (20:30):
Part of the promise is supposed to be: this is going to make your life easier. If we say I had to become a super expert and study and get a master’s degree in this, that’s not what we want.

Michael Larsen (20:41):
The point that I’m showing with the tool that I picked… and mind you, I picked the tool that I picked for one very specific reason: it was free, and anybody who wants to use it can do so and test it out for free. That’s my reasoning for why I’m using that particular tool. But in the process of doing it, it shows all of the things that they usually look at. And here, it gives you a stack and says: based off of our interactions, here’s your list of locators. Here are your really good ones. Here are your not-so-good ones.

Matthew Heusser (21:09):
So they give you a list. That’s really interesting. So it’s not self-healing in the sense of “we’re going to actually change the code”; it’s “if any of these things work, we’re going to call it good.”

Michael Larsen (21:19):
Yeah. So my point is, it’s not self-healing in the way we would look at a biological organism that’s been hurt going in and doing what it needs to do to heal itself. What I look at it as is a multiple-opportunity test: it gives you as many possible options to pass as it can. That doesn’t sound nearly as cool as “self-healing testing”, but it’s much more accurate.

Gwen Iarussi (21:46):
That’s much more difficult to market (laughter).

Matthew Heusser (21:47):
So let me ask you this: in order for a company to do that, they’re basically a codeless, usually record-playback, tool?

Michael Larsen (21:57):
Yes. There are some other libraries that you can run that do something similar. Again, I didn’t want to get too much into that. I only have 30 minutes, so I think trying to do a deep dive on those would be very hard.

Matthew Heusser (22:09):
With all of the ones that are commercial… there might be some open source ones where you really can get under the hood and play with them, but I think with all the commercial ones, you usually go to a web browser inside of an iframe or something, or, with the plugin, you can record your tests, and then it’ll create it in something like a grid and you visualize it.

Michael Larsen (22:30):
You can block out your flow. You can create conditions, you can create loops. There’s a whole bunch of things that you can do with it that don’t require you to write every single line of code, and that’s exactly what they’re targeting. If you want to export your tests as, say, a Java test, or pick the language of your choice, you can do that, and then you can manipulate it any way you see fit. My whole point, though, was that I wanted to keep it as simple as possible and demonstrate the concepts for as many people as possible, so that even if somebody wasn’t a really keen programmer, they could still go, “Oh, I understand what this is doing.”

Gwen Iarussi (23:05):
Matt, you’ve got a couple of talks coming up, too.

Matthew Heusser (23:06):
I do. Well, I want to focus on you guys. I mean, I talk on the show all the time.

Gwen Iarussi (23:13):
Yeah. But we want to know. I want to know.

Michael Larsen (23:16):
As you love to say, Matt, okay, fine. Give me your elevator pitch on what you’re talking about.

Matthew Heusser (23:20):
I’ve got two talks. The first talk is: what do testers actually do? They don’t just push buttons. They ask questions. It’s not just clicking. And we have all this automation stuff; does the automation do the same thing as the testers? Well, we have to understand what the testers do. And when you go to a conference, your boss gives you the day off. Usually they pay some money. Maybe they pay for a hotel, pay for airfare. You come home from the conference in three days, and the boss says, “What should we do? What’d you learn?” And you say, “We need to hire a consultant and we need to go through a transformation.”

ALL (23:52):
(laughter)

Matthew Heusser (23:53):
We need to buy some tools. And then we’re going to slow down to speed up. And then, you know, in a couple of years it will be better… and our boss says, “The only thing I know is we’re a week behind on every project that you were working on.” What I want to give here are techniques that you can use tomorrow to explain what testing does. You should run an exercise with your whole team that will materially impact the bugs that you find; you’ll find more bugs more quickly, and people will understand what you do. So we’re going to do a classic testing problem that I’m pretty sure Rachel’s seen before. It’s a simulation. Michael’s probably seen it before. I don’t know about Gwen… It’s a classic simulation that you can do better or worse at based on how skilled you are as a tester, with very little experience required. It’ll put you under time pressure. It’ll put you under conditions of uncertainty and ambiguity, where you won’t know enough to do your job. And if you’re a good tester, you can do it anyway, which is the situation… it’s like that in software development: we don’t have enough time to do all the things. What can you do with your minimum? And then you can defend what you did under scrutiny: this is what I did, and here’s why. We talk all the way through all that. The second talk: okay, great, that was very micro, that was very hands-on. Here’s a program, actually go test it, find me some bugs. Cool. And we talked about maybe some models to come up with that. So let’s pretend for a moment I want to offer the audience half an hour of consulting. It’s going to be like this, but I come in as a consultant, or you hire me as your vice president of engineering, maybe, I don’t know. And you bring me in, and, okay, well, I know we have not enough time and too much to test. This is an hour-long talk that I’m shrinking down, so if you disagree, afterwards we can have a beverage and we can go through the slides on, like, you did not have enough time to do all the testing. If you think you can do all the testing in the time that you have, you’re probably relatively junior, and you don’t have the creativity, the ideas, the questions, the curiosity. That’s cool, because I can teach you how to find those problems; you should have come to my talk earlier. So: okay, I agree, we don’t have enough time. Well then, what we have to do is be able to defend the choices we make for our risk management. How do we do that? We visualize them. So create a visualization for this product: here’s where we are, here’s the test strategy visualization, here are the test approaches, here are the risks that remain with the time that we have. Here’s the cut line for when time runs out, or here are the priorities for when we can rack and stack our risk. Here’s how we’re going to approach it, with room for creativity, too. And then let’s do that across the portfolio. So if I came into the organization as Matt the consultant and said, “Great, we want to improve testing,” that’s where we’d start. Show me your risk diagrams. Who’s got one they can put up on the slide share? Who’s got… what would you say? Question number one: what would you say?

Gwen Iarussi (26:42):
In most organizations?

Matthew Heusser (26:45):
Yeah.

Gwen Iarussi (26:45):
Well, we don’t have one.

Michael Larsen (26:46):
Nobody’s got them.

Matthew Heusser (26:49):
It’s going to be four hands in the back that go up.

Rachel Kibler (26:51):
Right?

Matthew Heusser (26:52):
Those are going to be former clients. You guys can’t talk. So then we’ll talk about the benefits of that, and then I’m going to blow through some visualizations that I’ve made for real clients, real customers, that are different ways to do it. And half an hour is going to be my limit. That’s my talk.

Gwen Iarussi (27:08):
I like it. The problem is, I want to call out, like, someone we’ll see tomorrow. It’s going to be interesting.

Michael Larsen (27:15):
And for all of us, we should add, in case it wasn’t clear: this is our first in-person conference in a very long time. We’re all gathered around a table, talking into one microphone, which usually we don’t get the chance to do.

Matthew Heusser (27:29):
I actually took some video. I wasn’t just taking pictures. I was taking it so we can maybe put that in from YouTube or whatever.

Michael Larsen (27:36):
Good. Yeah, absolutely.

Matthew Heusser (27:37):
It’s just random five second shots. There’s a shot of the waterfall.

Rachel Kibler (27:42):
Nice, nice.

Speaker 3 (27:43):
So with that, though, if we want to make sure that this tops out at a half-hour show, we’re going to have to say goodbye. So for those of you who are listening, thank you very much for joining us for The Testing Show. We will see you again in a couple of weeks, quite possibly with a brand new topic, or, who knows, maybe we might have a continuation on stuff we’re learning here. So with that, thank you. Take care, everybody.

Rachel Kibler (28:05):
Thanks for having me. So excited to see you all again.

Matthew Heusser (28:09):
Thank you guys. This is great.

Michael Larsen (28:10):
All right. Thank you everybody.

Michael Larsen (OUTRO):
That concludes this episode of The Testing Show.

We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts. We are also available on Spotify.

Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show.

Talk to us about what you like and what you’d like to hear, and also to help us shape future shows.

Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.
