The Testing Show: The BIASED Podcast

February 25
Qualitest
Transcript

As software testers, we are often told that we need to check our biases at the door, to not go in with preconceived notions, to look past logical fallacies, and prevent them from entering into our daily work. As our guest Rachel Kibler helps us see, that is easy to say but much harder to do.

 


 

 

Panelists:

Matthew Heusser
Michael Larsen
Rachel Kibler (guest)

References:

The Oatmeal: “The Backfire Effect” comic
David McRaney, “You Are Not So Smart” (book and podcast)
“Drum Buffer Rope at Microsoft” (Kanban case study)
Stephen Covey, “The 7 Habits of Highly Effective People”
Rachel Kibler’s website: https://Racheljoi.com


Transcript:

Michael Larsen (00:00):

Hello everybody. And welcome to The Testing Show. It is August. We are really excited again, because now The Testing Show is back to twice a month. So we are delivering you a show two weeks after our last one. At least that’s our hope. I’m Michael Larsen. I am your show producer, occasional moderator, sometimes smart-aleck. We also would like to welcome our guest to the show today. Miss Rachel Kibler, making a return appearance. How are you, Rachel?

 

Rachel Kibler (00:29):

I’m doing great. Really excited to be here. Thanks.

 

Michael Larsen (00:32):

And of course you all know our master of ceremonies, Mr. Matthew Heusser. Matt. Floor’s yours. Have at it.

 

Matthew Heusser (00:41):

Thanks, Michael. And this is The Biased Podcast, in which we’re going to talk about people acting irrationally, things that don’t quite make sense, or responding incorrectly to evidence. Maybe you find a problem and people get mad at you, and we’re trying to solve a problem here. So much of the testing literature is based on this idea that humans are rational creatures that make good decisions based on data. It’s hard for me to read that and not laugh. So we’re going to get real and talk about some examples of ways that people don’t act rationally, and maybe a little bit about how to deal with it to get to better outcomes, which is really the magic piece. Rachel, you’ve been doing research on cognitive bias and you wanted to talk about it. Did I get that right?

 

Rachel Kibler (01:29):

Yeah, that’s exactly right. I just want to differentiate that we’re not talking about unconscious bias. We’re not talking about the isms: sexism, racism. This is about how we think we think rationally, and we don’t. The scientific method is built to remove bias, and we don’t quite take the same approach in testing.

 

Matthew Heusser (01:49):

Or certainly not often enough.

 

Rachel Kibler (01:53):

Very true. Very true.

 

Matthew Heusser (01:55):

For those who haven’t met Rachel before, we first met three years ago, I think, at the Conference of the Association for Software Testing, and you keynoted at Agile Testing Days last year, and you’ve just been doing more and more in the space. How would you describe your expertise? I mean, you’re a thinker, that’s no secret, but how would you describe where you fit into the constellation of testing people?

 

Rachel Kibler (02:21):

Honestly, I still consider myself a jack of all trades when it comes to testing and a master of none. I have a lot of interest in how we think about testing and how we talk about testing. I like talking about the way that we approach testing.

 

Matthew Heusser (02:37):

Yeah. I would think that… I would say you’re a relatively new face that is not aligned with a particular school. Because of that, you’re not aligned to a particular orthodoxy. You don’t have a particular definition of the word “test” that you argue about, but instead are open to new ideas, which brings us to this idea of biases. What’s an example of a bias that actually happens in software?

 

Rachel Kibler (03:02):

One of the most prevalent ones that I find is confirmation bias. I really believe that we build better software by forming relationships with our developers and with our product owners, with the people who have an influence on our software. But sometimes that can get in the way. When we have a developer who always, or almost always, writes really good code, we go in there and we want them to have written good code. And so sometimes we miss things. That’s one example that comes up a lot in my own testing, just because I have great developers and I have to challenge myself repeatedly: no, they did not write perfect code. Where did they not write perfect code? And then finding those examples.

 

Michael Larsen (03:48):

I can pop in on this one. I know that feeling. The truth of the matter is you’re absolutely correct. I’m a little bit more diligent with developers I don’t know well. Maybe not consciously so, but I’ve noticed I do give the benefit of the doubt to people with whom I have a long-term relationship. Sometimes I can find something very fast because of that knowledge, like, “Oh, I know that they’re usually good at doing X, so I can poke at Y a little bit here,” but I don’t necessarily say, “Well, I know they’re really good at X, so therefore I’m going to grill them on X.” I have been bitten by that in the past. Any time we talk about a bias, the only way to address the bias is to be aware that it’s there. No developer wants to be seen as being foolish or missing something obvious, but they also don’t want to be shown up because we didn’t call them on something that could have been easily found.

 

Matthew Heusser (04:44):

Let’s talk about a couple of those. See if you disagree, Michael. Say Joe is a really good dev and Sarah is a terrible dev. Joe gets all the easy assignments, the ones that are influential, that are the new technology, and Sarah just does maintenance because she’s not very good. And then when it comes to the annual review, we say, “All Sarah did all year was bug fixes.” Meanwhile, Joe is jumping from technology stack to technology stack, creating a bunch of bugs that Sarah has to go fix. When we look at them, we confirm our beliefs. Is that another example of confirmation bias?

 

Rachel Kibler (05:22):

Yes, I would say so. And then we can also talk about cognitive dissonance as kind of a companion to confirmation bias, where if we have the example of Joe and Sarah and Sarah says, “But Joe is creating these bugs, you can’t punish me for fixing his mistakes”. There can be a lot of hesitance to accept that. And that’s what we call cognitive dissonance, where something doesn’t conform with our belief about the world. And we don’t want to accept that.

 

Michael Larsen (05:57):

Perfect place to jump in here. I’m going to make a suggestion on this front because there’s a beautiful way that this is illustrated, and I’m going to give a caveat up front because some of the language used in it is not G-rated, so parental guidance is suggested. I don’t know if anybody needs that prompt, but I’d like to include it in the show notes, so I’m letting you know in advance. Don’t come back and yell at me going, “Oh my gosh, how could you share that?” Well, I’m telling you why. There’s a comic strip/infographic-style site on the web called The Oatmeal, and The Oatmeal covers this exact topic of the reverse of confirmation bias; in other words, being presented with information that really bothers you, and how you react to it. The term that he uses, and that’s also used in a book that I really love (and in a podcast I love called “You Are Not So Smart”), is “The Backfire Effect.” He goes through a number of examples explaining what happens. As we get information that challenges our view of the world, we fight back against it because we don’t want to believe it. We put up excuses: “No, that can’t be right. No, that’s not at all possibly the case,” rather than sitting back and going, “Huh, you know what? Maybe we’ve set up an unfair system here. We’re punishing somebody and saying they’re not really adding much value, when in reality they’re fixing the mistakes of the person that we are looking at positively because, in our mind, they’re doing all sorts of cool things.” And now we have to step back and say, “Is the developer doing cool things but making a lot of mistakes more trouble than they’re worth, and is the developer that’s actually fixing the problems a more valuable member of the team?”

 

Matthew Heusser (07:49):

I’m going to add to that. What person do we have doing the same job? What can we compare them to? “The tester on the team isn’t doing so great.” Okay, you have one tester on the team. Compared to whom? Put another person in the testing role and see if they do any better. “Oh, we can’t do that.” Then how do you know they’re doing poorly? And you can’t really say it that way. Testers tend to be factual, detail oriented, put it all out there. What I will say is, to get someone who is criticizing you to wake up to that reality, you’re not just going to be able to tell them. You’re going to have to do something like send them a funny Oatmeal cartoon.

 

Michael Larsen (08:28):

I think it would start a great conversation, frankly. Yeah.

 

Matthew Heusser (08:32):

So along those lines, Rachel’s done a lot of the research for us and she put together the start of what I think could be a good paper. We’re going to get that out; there’ll be a link in the show notes for this. But there are a couple of constellations of fallacies that I think work together to cause real damage. If you could talk about them, I’m really interested in your take on the planning fallacy, the primacy effect, and anchoring.

 

Rachel Kibler (08:56):

Perfect. So the planning fallacy… we see it all of the time in software. It is where we assume that something is going to take 20 minutes or two hours or half a day, or we assume that a project is going to take two weeks… and then it takes half a day, or a week, or a month to get the project out the door. That’s the planning fallacy. A big part of it goes to the framing effect, where when something is framed for us — when something is introduced to us as an exciting possibility or as kind of a slog — that will change the way that we approach it and the way that we think about it. So the framing effect and the planning fallacy work hand in hand: we take what we have and we assume that it’ll take less time than it does, and our attitude towards it can influence how much time we think it will take. The primacy effect also works in here. Primacy is where we take the first thing that we hear and assume that it’s the most important, and we remember it the most. So, like in a list of words for recall, we tend to remember the words towards the beginning much more than the words at the end. In software, this can be where we tend to remember the first thing that we talk about, whether that is the framing, the excitement, or the first important thing about testing, and we may forget about the other stuff. Security, if it’s talked about later in the conversation, may not be remembered as well.

 

Matthew Heusser (10:32):

I think those tie together in a really interesting way. Whenever we give an estimate, we tend to give the lower number first. Maybe you have an Agile shop that measures things in hours or points; it doesn’t matter. You say that’ll take two to four weeks, or that’ll take two to four days, or that’ll take two to four hours. People hear the “two.” In fact, I’ve been in meetings where someone said… let’s say it’s October 1st. My manager asks me for an estimate. “If we drop everything and start today on this project, full time, doing nothing else, it should take four to eight weeks.” And that was October 1st. That shows up as a bullet point on a slide a week later to senior management saying, “It’s going to be delivered November 1st.” Like, how did that happen? We didn’t start on it full time. I said four to eight weeks… What?! I think it’s a combination of planning, primacy, and anchoring. The first number we hear is the one we treat as right, and we always think it’s going to be easier, and we forget about all the other variables and just go for that. But there’s an interesting opportunity here, I think, when it comes to framing. I come to a dev and I say, “I’m working on this really interesting problem. I’m having a challenge isolating this one variable in this system. I think that’s where the bug is, but I can’t reproduce it. I wonder, might it be cool if we could kind of isolate it in the debugger or something like that, to figure out if this particular variable at a particular setting is causing the bug?” Or I go and say, “This crap never works. You’ve got some problem with your code where there’s a bug and I can’t reproduce it because who knows what it’s doing. It’s driving me nuts. I’ve spent a week on it. It’s a waste of my time. I can’t get anywhere.” In which of those two do you think we’re going to get more help?

 

Rachel Kibler (12:23):

That’s the age-old debate of whether dev and test are opponents or partners. That’s a really great example.

 

Matthew Heusser (12:31):

It’s as simple as the language we use. If we frame it as, “Will you help me with this boring slog of a horrible thing that is no fun, that is beyond my skills, that is too hard?” versus, “Let’s pair on this awesome thing.” And we say, “Oh, well, of course you just say, ‘Let’s pair on this awesome thing.’” But if I were to take a video camera around the desks or the home offices of corporate America for 40 hours and take a sample of a thousand people, what’s the breakdown going to be? Is it going to be 50/50?

 

Michael Larsen (12:58):

Not even close.

 

Matthew Heusser (12:59):

Not even close. 90/10… and the 90 is going to be wrong. I really think there’s an opportunity here, if we’re thoughtful, to genuinely improve outcomes. It doesn’t cost any money. We don’t need to change a policy. We don’t need to limit work in progress or go to a Kanban class or buy any technology. We just have to manage these biases better. If we manage biases better as a team, and as teams of teams in the organization, what kind of potential improvement in velocity or quality do you think is possible?

 

Michael Larsen (13:35):

Oh, that’s a big one. So we’re talking about the anchoring fallacy or the framing effect; they both touch on each other. And the idea is — and this is a phrase that I’m going to wholeheartedly steal from David McRaney again, of the “You Are Not So Smart” book and podcast — “reframing.” In other words, making it a point to deliberately step back from whatever it is you might have a bias for, identify that you might have that bias, and deliberately enter into conversations where you may hold opinion “A” but you have to talk to somebody who holds opinion “B.” The question is, are you prepared to go into that conversation to speak to the “B” opinion? Can you take your commentary and reframe it for the “B” opinion? It’s not an easy thing to do. I apply that when I test, and the answer is, yes, I do believe that it gives a gain in velocity. Now, it’s not going to give you a gain as in you’re going to get more stories done; I have no way that I can honestly say how we could even measure something like that. But this I’ll go to the mat for: take the time to understand how to do reframing — or, if you want, use Stephen Covey’s old Seven Habits of Highly Effective People line, “seek first to understand, then to be understood”; it’s the same idea — and then say, “All right, with that, let’s take this approach to explore this.” Most people don’t stop to listen to what people are saying. They don’t want to internalize what they’re saying or how they’re speaking or how they’re approaching a problem. They want to get just enough information out of that person so that they can make a comment and say what they want to say. That’s the reality of people; it’s just how we interact with things. Listening is hard. And being able to reframe gives you at least a toolkit to force you to ask, am I really listening?

 

Matthew Heusser (15:43):

A couple of things. You really made me think there, and I appreciate it. When I hear reframing, I usually think of a verbal technique where you get someone to change their perspective. I really appreciate that you said no, no, no — first listen to them to understand their perspective. First, repeat their perspective back. Make sure that they understand that you’re aligned, because reframing can go bad. When they say, for instance, “There’s no way we can do this on time. It’s not possible for us to do this. There’s no good reason we should do that,” you can reframe that with, “Oh, so you don’t know of a reason we should do that. You don’t know how to hit that deadline. You don’t know why we would want to do this — instead of it being impossible, you just don’t know how.” That’s a reframing. If you do that, you may win the argument but lose the person. Someone who is not socially adept will feel very awkward. Their face may get red. You may win in a meeting, but you’ve lost the relationship with the person. Someone who is socially adept will say, “What just happened? What? No! What? I don’t!” So if you’re going to try to reframe in a helpful, positive way, I think it actually takes a fair bit of social skill and nuance and seeking first to understand. So I’m really glad you hammered on that. Michael, thank you.

 

Rachel Kibler (17:04):

I would like to argue a little bit with Michael. I think that if we’re aware of our biases and have more conversation and do the reframing and work together more up front, then we can move faster. It’s a higher cost up front of time and energy — a lot of emotional energy and social skills — but we end up building better software and building it faster. How many times have we had to go back and do rework over and over and over again when it could have been dealt with in a planning meeting or in refinement? Had we just had a little more conversation and maybe a little more attention within the team — trying to understand, and trying to see that we all have the same goal — it could have saved a lot of time in the end.

 

Michael Larsen (18:00):

That is a very good point. I… Darn it. I don’t have any way to argue that. In a way you are able to quantify what I was putting out as a qualitative thing. And that’s totally cool.

 

Matthew Heusser (18:12):

I didn’t hear a number yet. Did I hear a number yet?

 

Rachel Kibler (18:16):

No. And you’re not going to get a number.

 

Matthew Heusser (18:19):

I’ll give you a number. When we’re talking about biases, if your number’s too high, you will scare people. If I said twice the work in half the time, a 400% improvement, executives love that, but a lot of team members would say, “Oh, so you’re saying we’re screwing up.” It’s also really, really presumptuous. There are organizations that maybe could get that high of a benefit. If you want to Google it, there’s a paper called “Drum Buffer Rope at Microsoft,” by David… I forget his last name, the Kanban guy. We’ll get a link; he sees that kind of improvement there. But 10%, 5%? Those don’t really threaten people. And in my experience, except for maybe two high-functioning organizations I worked with — Socialtext was probably one — I haven’t seen many organizations yet that couldn’t see a 5% productivity bump just by communicating better, eliminating biases, eliminating conflict and friction. And that’s almost free, because you can do it with good leadership. And I know that because it happened when I was at Socialtext.

 

Rachel Kibler (19:21):

I like that. I think that’s a very good point, and it is nonthreatening. One of the things that we have to talk about when we talk about more conversation and more collaboration up front is the problem of conformity bias. Sometimes we talk about deindividuation with it too. It’s hard to say no. Testers are often the ones to say no, and we raise questions and we’re kind of Debbie Downers, like, all the time — it’s our job. And so sometimes when we’re having those conversations, it’s just easier to go along with the team, because there’s excitement and there’s flow and you want to get stuff out. But you have to be careful of the conformity bias.

 

Matthew Heusser (20:04):

Yeah, absolutely. We had a tester we were working with on a training assignment who just never found any bugs. They’d just been promoted to senior tester… errr, what is happening? We were doing simulations. This person found one bug and kept bragging about it all day. Great, it’s an internationalization bug, that’s cool, but keep doing other stuff. Their manager — it was a federated organization, so testers were on lots of teams, and this was the one senior tester — said, “Well, she never held a project up.” Every project that she works on is on time. She never finds any serious problems that would hold the release up, and they fix it in post; they fix it after it’s live. So she never hurts any deadlines, so some people maybe get more bonuses, and she’s easy to work with and she’s friendly. So what’s not to like? And the answer is material outcomes: bad software for customers, unhappy customers. But that’s really hard to measure. So yeah, there absolutely can be pressure to just say there are no problems, go to the movies, don’t test too hard. But I really have to wonder, if you’re doing that as a tester, and that’s what you’re asked to do, how long until somebody wakes up and realizes there’s an expense on the balance sheet they can just eliminate and get the same outcome? So how do we fix it?

 

Rachel Kibler (21:19):

That’s a hard one. One of the things that I do — I’ve been on my current team for just a few months, and I didn’t want to make waves up front. I wanted to do my job. So I would do a lot of testing later on in the process instead of speaking up during our planning and during our refinement. Now I’m there and I’m ready to engage a lot, and I have to temper it. So I’ll say, “Okay, I’ll be enthusiastic, but this week I’m going to ask this many questions and just trust that they’ll understand that I’m doing a good job,” even if it kind of derails the conversation a little bit, even if we have to go over time in a meeting. I make challenges for myself. I also have checklists of things to make sure that we address as a team, or that I address in my testing. I’m trying to prove that it’s faster in the end if I challenge in the beginning, but that’s hard. It does take a lot of emotional effort, especially as a tester who doesn’t really write a lot of code, and as a woman, but that’s an entirely separate topic. I want to talk about the availability heuristic. It kind of goes along with confirmation bias, where we see what we want to see. The availability heuristic is that we think about the things that we’ve been thinking about. So if we’ve seen a lot of bugs in one area, then in the next story that maybe touches that code, we’ll be more likely to look for similar bugs in that same area, and we may not look so hard at other areas. It ties in closely with confirmation bias, but it is a separate thing: we think about the things that we’ve already been thinking about. And when we’re confronted with a new challenge that may be unfamiliar, if we can find ways to make it more familiar by going back to what we know, we tend to do that.

 

Michael Larsen (23:31):

So, kind of like the old minefield hypothesis: if you’ve sent something through a minefield and you’ve detonated a number of mines, where are you most likely to walk the next time? Where you’ve already detonated. You’re not going to say, “Hey, let’s go around this area.” Right — because in the real world of bugs, of course that’s what we would want to do, because that’s where we’re going to find the mines that haven’t been detonated. But human nature is going to say, “We’re going to focus on the area we’ve already walked.”

 

Matthew Heusser (24:00):

Or, you know, there are a couple of different ways we could take that metaphor. One is, we always find bugs in this sub-module, so let’s test this sub-module harder — which could be a rational decision, presupposing the same people who always make the same bugs haven’t learned anything or are still working on it. So, Rachel, how does that become a common, irrational decision? I just want to make sure I understand what we mean by availability.

 

Rachel Kibler (24:25):

One of the canonical examples of availability is that if you ask a group of people to name three animals, most people will name a dog. If you ask a group of people to name three tools, most people will name a hammer. It’s the things that we are familiar with that we tend to accept and name and find. So in testing, when something has worked well in the past, we may not look in that area, like you mentioned. But if we’ve found clusters of bugs in other software, we may spend more time on that, looking for similar bugs.

 

Matthew Heusser (25:08):

So I’m okay with saying we’re going to adjust our strategy. Look, the development team hasn’t changed, and the way they address problems hasn’t changed. They always introduce bugs over here, so we’re going to look over here. But to say we’re going to forget that other categories of risk even exist, that we’re not even going to think about them any more — we’re going to forget about the iPad because we looked at it two years ago and it was fine — you know, that’s availability bias.

 

Rachel Kibler (25:35):

Yes. Let’s just talk about this.

 

Matthew Heusser (25:39):

Go ahead. Sorry.

 

Rachel Kibler (25:40):

Can we talk about the sunk cost fallacy?

 

Matthew Heusser (25:43):

We’ve already spent so much time on the podcast. I mean, I guess, off balance sheet…

 

Michael Larsen (25:49):

Ba-Dum. Ching!!! Okay. Well played, Matt, well played.

 

Rachel Kibler (25:59):

Companies will go all in on an automation tool, and it is terrible to work with, but we stick with it because we have literally put money into it — and we’ve also put a lot of sweat and tears and probably cursing into it, and we don’t want to give it up because we’ve put that effort in. It also happens with stories. We take an approach to a story when there were a couple of different approaches — maybe we didn’t do a POC, a proof of concept; maybe we just chose an approach — and we’re in testing and it’s so buggy, and part of us wants to say, “No, we should do it a different way, but we’ve already spent so much development and so much testing time on this way.” But we don’t say that. We just push on and release something that may not be the best approach.

 

Matthew Heusser (26:55):

I think we get the sunk cost fallacy any time we get feedback from customers like, “I really don’t like this,” and we’ve already built it. That feedback is ignored, the product just goes out anyway, and the adoption rates are poor.

 

Michael Larsen (27:10):

The sunk cost fallacy tends to come into play when your emotions are tied up in this. A number of years ago, when my daughter was first learning how to drive, a friend of ours called us up and said, “Hey, I have this opportunity if you’re interested in it.” He happened to have my daughter’s favorite car. It was a 1997 Mazda Miata, an MK-One, gorgeous car, little convertible. She loved it. It had 180,000 miles on it. The back of my mind was sitting there saying, “This is going to be a money pit.” I bought it for $800, you know, because, “Oh my gosh, my daughter’s going to love this.” No sooner did we get two or three months into owning the vehicle than the transmission needed work, and that cost us $3,000. Well, okay, I only spent $800 on the car, $3,000 into it — that’s not the end of the world. You know, it’s still reasonable. It’s okay. Besides, it’s a Miata, it’s so cute, she loves it. So we fixed it. Then she brought it into the city and, because it’s a small car and she’s not used to working with that, she backed into the grill of a pickup truck and it bent her trunk. That cost us $1,900 to fix, to get the trunk back up. I’ve already put the… but she loves it. And you get where I’m going with this, right? This $800 car that was this real love of my daughter’s life. Long story short, within a year of owning this vehicle, on a nice little Saturday drive down the coast, the head gasket blew.

 

Matthew Heusser (28:49):

But she loves it!

 

Michael Larsen (28:50):

But she loves it. But at this point she looked me in the face and said, “Dad, I get it. You’ve been really sweet about this. Stop! Don’t stick with this! I even understand that this doesn’t make any sense. I would love to keep the car, I would love to do that, but we don’t have the $5,000 it’s going to take to fix this.” And then, even if it’s an MK-One, now we’re talking about being in for $10,000 or $12,000, and I could practically buy a — not a new car, but a much, much more reliable used car than this. And it broke both of our hearts. It really did. We had to just admit, “Nope, sorry, we’ve got to let this go.” There’s my classic sunk cost fallacy story. The point being, I reacted impulsively because my daughter loved this car, but I’m willing to bet that had I gone back and said, “Okay, do you love this car enough to put your own money into fixing the transmission?” that probably could have stopped it right there with, “Yeah, you’re right. No, move on.” And so that’s really it: the sunk cost fallacy tends to be most disastrous when you are emotionally attached to something, or you cannot separate yourself from that emotion.

 

Rachel Kibler (30:08):

I love that example.

 

Matthew Heusser (30:12):

Also, in American business, often we would rather say, “It’s a little bit late, so let’s just push it two more weeks.” Then it’s two weeks late, two weeks later it’s two weeks late again, and okay, now we can ship it. The aggregate of that is three months. But to cancel a project — you know, now we’re talking about a “needs improvement” on our annual reviews. So the incentives for the individual may be different from what’s actually best for the company. That’s kind of meta, probably outside the realm of what a tester can influence on any given Tuesday. But it’s a problem.

 

Michael Larsen (30:46):

No question. We could riff on this stuff all day long, but unfortunately the podcast does have kind of a set time limit that we need to stick to, so I think this is probably a really good place to wrap it up. But before we do that, I do want to ask: Rachel, because you are the guest of honor on this, how can people learn more about you? How can people see what you’re up to? In this time of COVID, you know, “Where are you going to be?” is probably not the right question, but maybe: virtually, where are you going to be?

 

Rachel Kibler (31:14):

Well, you can find me on my website, which is https://Racheljoi.com — “joi,” with an I at the end instead of a Y. I blog there infrequently, but I think it’s good content when I do blog, and all of my public appearances and links to writing that I’ve done for other companies are on there too. Yeah, I am looking forward to next year. I was going to keynote at CAST this year, but that was postponed. Right now, in COVID, I’m not very active. I’m trying to blog more and do more writing, but public appearances are fairly limited; you can see all of those on my website. Also, if I can put in a plug for my company: I work for 1-800-CONTACTS, and it is a fabulous company to work for. They really care about their employees and they care about their software. They really care about their customers, and customers come first. So if you need contacts, use them. And if you’re interested in software jobs, hit me up; we usually have something going on.

 

Michael Larsen (32:16):

Excellent. All right, Matt, you got anything in the hopper?

 

Matthew Heusser (32:19):

So Excelon Development has been doing a lot of writing in collaboration with Subject-7, to sort of move the needle forward on how good test tooling can be done. I’m sure it’s wrong, but I think it’s an order of magnitude better discussion than what’s been happening in software, which has been “Test automation is stupid, you can’t do it, stop saying that” versus “It’s awesome and amazing and unicorns and rainbows,” and the truth is in the middle. We need a nuanced way to have that discussion, and I think we’ve finally gotten some of those things written down. So I’ll try to get some links into the show notes so you can check that out. I’m going to be in Chicago more this fall, so if you’re in that area, look me up. We can maybe have a meeting, or do coffee, or something.

 

Rachel Kibler (33:04):

I want to put in a plug for Matt. Conversation with him is amazing. He is so wonderful about talking with people and talking about testing. So if you have a chance to talk with Matt, you should absolutely do that.

 

Michael Larsen (33:18):

I concur. I am going to be doing something at the beginning of September. I’ll put the details in the show notes rather than stop right now and go dig up the exact dates, but getting into September, I’m going to be doing something with Testing Guild, Joe Colantonio’s organization; I will be talking about testability. And then in October, at this point in time, I am in discussions about doing a virtual workshop on accessibility. How we ultimately deliver that is a question, and we’re kind of working it out. Watch this space; there may be more about that in future podcasts. As of right now, I think it’s a good time for us to put a pin in it and say thanks to everybody for participating, and to those who are listening: thank you for listening. We look forward to seeing you in two weeks. Take care, everybody.

 

Matthew Heusser (34:02):

Thanks Rachel. Thanks Michael.

 

Rachel Kibler (34:04):

Thank you for having me today.
