The Testing Show: De-Risking Enterprise QA with Simon Evans
As organizations get larger, or as infrastructure and software become more relied upon, testing itself can have its share of risks.
For this episode, Matt and Michael chat with Simon Evans about how to approach larger systems (and perhaps not-so-large systems), make the needed decisions to focus on important areas, and make QA a less risky proposition altogether.
- QualiTest Group Acquires Experior Group, Positioning Itself as a Leader in SAP Testing
- Adopting Enterprise CI/CD: More Speed, Accuracy & Releases
- A heuristic for regression testing
- PNSQC 2020: Add Some Accessibility To Your Day
Michael Larsen (00:00):
Hello, and welcome to The Testing Show. Glad you could join us. This show should be appearing early in October, 2020. As you are probably still aware if you are hearing this as it comes out, we are still dealing with the COVID situations of differing natures, depending upon where you are. That’s probably going to be our reality for the foreseeable future. If you hear the show a few months or a couple of years in the future, the background for this probably isn’t going to matter all that much. At least we hope it’s not going to matter all that much. In any event, we would like to welcome to the show our guest, a recurring guest. He’s been here before, Mr. Simon Evans. Hello, Simon. Welcome to the show.
Simon Evans (00:40):
Hey, thanks for having me back on.
Michael Larsen (00:42):
Glad to have you. And of course, Matt Heusser, our MC, moderator, guy that kind of keeps track of everything. Matt, just do your thing.
Matthew Heusser (00:52):
Thanks, Michael. And everybody knows Michael Larsen, our show producer, and so much more. Simon’s a friend and previous guest of the show, and as a Senior Vice President for Enterprise at Qualitest… Been around forever. But where were you before you were at Qualitest, Simon?
Simon Evans (01:10):
I was at a company called Experior in testing and QA, but specifically focused on the SAP world. So we were acquired by Qualitest getting on for three plus years ago now.
Matthew Heusser (01:22):
So you’ve been developing some material. I’m not sure if it’s gone out as a conference talk yet, but I’m sure there have been some papers developed around the area of de-risking enterprise test, or enterprise delivery, even. It’s a testing show, but a lot of what we’re trying to accomplish is reducing risk. It really piqued my interest when I heard about that. When you say de-risking enterprise QA, can you tell us a little bit more about that, Simon?
Simon Evans (01:51):
Enterprise itself is a funny area. Obviously, there’s some big players in there like SAP, Oracle, et cetera. And we also have companies like Salesforce and Workday who are really making some big moves in this world. As a whole, certainly around the QA piece, it’s still really old fashioned in the way that things are done. There’s still a huge amount of manual testing that goes on. People, in trying to gain some kind of efficiency or make things quicker, turn to automation, but almost a little bit blindly. So there’s a number of issues around that side of things. And really, what we’re just trying to do is help organizations break down some of these barriers. The phrase that we’re using (and I know it gets overused in other areas) is that it’s all about velocity. How do we gain the speed, but in the right direction and on the right things?
Matthew Heusser (02:41):
So how do we gain velocity in the right things? What are the pieces of that when you’re looking at how to do it, where do you start?
Simon Evans (02:51):
There’s a couple of different threads that we look at. Obviously, identifying what velocity is required for a customer is a key element. And then going through some of the traditional things in terms of people, processes, and tools to work out where we can get those biggest efficiency gains. There’s a number of things that we do that will gain 15% or 20% efficiency just through changing the model that they’re working to. And that will depend on: are they still working in Waterfall? We still see a huge amount of Waterfall on the enterprise side. Or is it sort of Agile? Where are they on their journey towards this DevOps scenario? And then, what can we do with things like automation? Automation is used a lot within enterprise, but it tends to be only around regression testing. They tend to be big, cumbersome test packs running end to end. It’s not easy just to pick up part of the pack and run with that. Executing these tests tends to take a long time: a lot of setup, a lot of data to prepare. The tests themselves run for a long period of time. It’s really just not very dynamic at all. So there’s a huge amount we can do in that space. And then also using intelligence as well. If we look at any enterprise landscape, there’s lots of applications sitting in there. Quite a lot of our customers will have over 200 applications sitting in that stack and talking to each other. So working out what to test is really hard, and it relies on business users, it relies on historical knowledge, and ultimately the main approach these days that we see, even if someone’s going through a big transformation, is turning back to the business and saying, “what should we test?” That whole process is manual. That whole process is slow. And actually, if we use the power of the tools that we have these days, we can target exactly what to test, have the intelligence behind it, have the business impact behind it. What is actually changing?
Let’s look at the technical analysis that’s coming through, and let’s look at historical information, what’s been bad or good historically, and start to become pinpoint accurate. So we can dig deeper where we believe there is risk. We can do light-touch testing across this broad spectrum, just to make sure that we’ve got all of our bases covered, and then have a feedback loop in that to say, “well, how well did we do?” Are we accurate enough? And are we managing risk in the right way? Ultimately, all of this information is sloshing around everywhere; pick through it and provide business-ready reports, dashboards, however you want to badge them. But things that someone in the business community can look at and say, “do I need to worry?” There’s detail behind it. They can dig into it if they want to. So they can make decisions around what’s about to get deployed, what’s not. And then obviously we have the technical information that’s there for the rest of the team to pick apart if there are defects or things they need to look at.
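[A rough sketch, in Python, of the risk-targeting idea Simon describes: combine the change-impact flag, historical failure rates, and business impact into one score, then split areas into a deep-dive tier and a light-touch tier. The weights, threshold, and field names here are illustrative assumptions, not Qualitest's actual model.]

```python
def risk_score(area):
    """Score 0..1: higher means dig deeper. Weights are assumptions;
    in practice they'd be tuned through the feedback loop Simon mentions."""
    changed = 1.0 if area["changed"] else 0.2       # from technical change analysis
    history = area["historical_failure_rate"]        # 0..1, from past test results
    business = area["business_impact"]               # 0..1, from business modelling
    return 0.4 * changed + 0.3 * history + 0.3 * business

def plan_testing(areas, deep_threshold=0.6):
    """Map each area name to a testing tier based on its risk score."""
    plan = {}
    for area in areas:
        score = risk_score(area)
        plan[area["name"]] = "deep-dive" if score >= deep_threshold else "light-touch"
    return plan
```

With inputs like a changed billing area that fails often, this puts billing in the deep-dive bucket while an unchanged, stable reporting area stays at light touch; the value comes from feeding "how well did we do?" back into the weights.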
Michael Larsen (05:38):
Simon, I’d like to throw in something here just to get your take on it, and also to kind of sneakily set up the next part of the topic. [laughter] So as a hypothetical, let’s take my organization. We’ve got a group of products that are all meant to interact with each other, but they’re, for all practical purposes, separate groups. In some cases they’ve been acquired. They’re all meant to be presented together, but they all have different technology stacks. The only thing that really unites them, if you will, is a widget platform that can be displayed in one place. And we want to look and say, “Hey, how do we make it so that our process is less risky?”, because right now there’s a fair amount of hand-holding that goes on, there’s a fair amount of good faith and “Hey, this group is doing this and this group’s doing that”, and at the end of the day we pull it off, but I’m sure there’s gotta be some suggestions as to how to limit the heartache.
Simon Evans (06:33):
Yeah, it is complex, and no two landscapes are the same. We can take two identical or very similar companies competing against each other. They’ll be doing the same things, but the landscape will be very different. It’s peeling back the layers of the onion. The main thing that we end up looking at is their business. It sounds like a really simple thing to say. When we’re looking at such a huge landscape, we have to make sure that the last thing we lose sight of is what they’re actually trying to achieve. So let’s not forget: everything that we do in the IT world, involved as we are and working hard on all these things, its purpose is to drive competitive advantage for these organizations. You know, all those different kinds of systems, all the different bits of data flying around, are there to support business functions. Some of which are not the most exciting in the world, if we’re talking about finance processes or whatever. Other ones are on the sort of cutting edge on the consumer side, and we’re into apps and all sorts of cool stuff. Ultimately, it’s all there to drive some kind of competitive advantage in the business for these organizations. And it’s that layer that quite often gets missed off. So we do need to look at code, and we need to look at how the things hang on to each other at a technical level. But when we start looking at risks, we really have to get into the business modeling side of it to understand what that business impact is. One of the phrases that we use is this concept of “fit for use” versus “fit for purpose”. When we work with organizations, quite often they’ll have partners in there who are doing a great job, but they are contracted in there to develop and deliver a solution and potentially do the testing around it. And they will be delivering what they’ve been contracted to. And that’s what we badge as “fit for purpose”. It’s a solution. It works. You can deploy change to it.
They can manage that, and it will still work. But does it really work in the context of the business? That last little nuance is what, when we look at enterprise, we’re really, really interested in. I’m not sure if that answered your question.
Michael Larsen (08:42):
I think that works pretty good for me. Thanks.
Matthew Heusser (08:46):
What I heard: risk-based, truly risk-based, based on “Hey, business, what matters to you? How do we manage it? How do we measure it? What’s the impact if it doesn’t work? What’s the probability that it’s going to go wrong? How stable has it been historically?”, to make our best use of time. And that feeds into the tooling: are these tests so valuable and so repeatable and so straightforward that we could get the computer to run them all the time? And how do we then make that actually a good investment as opposed to a boat anchor? And then, what is the change model, or what is the model for development, and are there efficiencies to be gained by changing the way we think about delivery away from Waterfall (which never really worked in the first place, but we just didn’t know anything better) to something that is a more complex adaptive system like Agile or DevOps? These all tie together, right? Because if you’re doing DevOps, you probably have to do automation smarter. And if you’re doing Agile or DevOps, you have to do risk-based smarter.
Simon Evans (09:48):
Yeah. And maybe if I try and pull an example together: so, a customer that we work with, a big global customer, they have a number of different SAP landscapes as well as hundreds of applications that interact in and out of it. And they’ve been living with that for a number of years. When they deploy any changes, and they do this on a frequent basis, they historically used big quarterly drops of code. And they wanted to get into a weekly mode. But the biggest thing holding them back is the testing side of it. It just takes too long. When they do big deployments, they’re using several hundred people to test. A lot of those come from the business. You have to go to different regions, different countries; that’s a huge organizational challenge, just even doing it. So they said, “Hey, can you come in and try some stuff with us?” And what they wanted to do is better deploy their change. I’ll just focus on the SAP side for the time being. Deploy some change into their SAP landscape, promote that through to their quality environment, be able to analyze that and say, technically, what’s changed, and therefore, what should we test? Can we apply that risk-based element that you just mentioned on top of that? Ideally, most of this stuff is automated. It can run, and our release manager can go around getting approvals from the different business people, because they’re all happy that the testing has been done. So, conceptually simple, nothing amazing in that. Actually, in terms of how you do that in that SAP environment, it is very difficult. They’re deploying their changes. There’s some tooling inside SAP to do that initial change impact analysis. That’s all churning and gives us a lot of technical information around what’s happening. And we’re able to then extract that, and using some of the tools that we have within Qualitest, Qualisense being the main one, we can start to analyze that information even further.
The sort of things that we’re looking at are, one, putting those business layers in. So that historical risk-based testing stuff, let’s put some intelligence behind that as well. So we’ve got the risk analysis happening within Qualisense. We can obviously update that depending on whether there are changes within their business. They’ve got years of testing results that we can analyze. What historically has failed, what historically hasn’t failed. And actually, we’ve tested some areas thousands of times, but we never get any kind of negative results. Are those two areas equal, or actually, should we be focusing on the one that tends to fail more often? There’s all sorts of data we can analyze to improve that. We can look at service desk information, even, and see, you know, what are the users complaining about? Are they genuine issues or not? And pull all that together to really refine exactly what to test. It will either flag up and say we haven’t got anything automated around that, so it needs some manual attention, but again, we can target the right person to go and do that. Or, where we’ve got the automation assets set up for them, we can either run the whole thing, or it will automatically select a subset of those tests and trigger it via Jenkins for us. The tests run. Lovely. We get the results back in a nice pretty format, and that whole approval process works. Now, this isn’t anything new outside of enterprise, taking that kind of approach, but it absolutely is in these kinds of SAP and Oracle environments. And what we’re trying to do with all of this stuff is build it in a tool-agnostic way. Because if I speak to one company over in the UK, they may use the Micro Focus products, historically. Anyone familiar with the SAP and Oracle world at the moment will know that Tricentis are really making great strides. So if I speak to another customer, they may have that over there. It’s a real mishmash.
So what we’ve pieced together for customers is something that you can plug any of these tool sets into. With one of the customers, rather than just the traditional Micro Focus UFT, we’re actually using LeanFT. I think they’ve changed the name of it again now. So there’s almost a modern framework approach to doing traditional SAP, which historically has not been achievable. So we’re really trying to open up the flexibility for customers and just find ways in which they can deploy change and get this testing done and dusted super fast, so they can focus on what’s being deployed. Do we need to worry about it? Yes, no. Okay. Off we go. And with that particular customer that I started talking about, we’ve got them successfully on to regular weekly deployments. The age of these big cumbersome quarterlies or half-yearlies is well and truly dead.
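[A minimal sketch of the selection step described above: take the objects the change impact analysis flags, run only the automated tests that cover them, and route changed objects with no automation to targeted manual attention. The coverage map and object names are hypothetical; in practice this data would come from the SAP impact-analysis tooling and something like Qualisense, with the selected subset then handed to Jenkins.]

```python
def select_tests(changed_objects, coverage):
    """Pick automated tests touching any changed object.
    coverage: test name -> set of objects that test exercises (assumed schema).
    Returns (tests_to_run, changed_objects_needing_manual_attention)."""
    changed = set(changed_objects)
    # Any test whose covered objects intersect the change set gets selected.
    to_run = sorted(test for test, objs in coverage.items() if objs & changed)
    # Changed objects no automated test covers are flagged for a person.
    covered = set().union(*coverage.values()) if coverage else set()
    needs_manual = sorted(changed - covered)
    return to_run, needs_manual
```

So a change touching an order object would pull in only the order tests, while a changed custom exit with no automation around it gets flagged rather than silently skipped.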
Matthew Heusser (14:13):
Great. So what I would say is that across all of the large traditional customers I have, I see this desire to move toward more frequent releases, like the SAFe, Scaled Agile Framework stuff. So they want to release every, you know, eight sprints, which are two weeks long, so 16 weeks. That’s four releases a year or something like that. I do see, and I agree with you on, this risk: when you start coordinating and regression testing in a very limited time on a bunch of different continents, stuff just gets missed. And I want to amplify that the traditional risk-based methods, which sound great on paper, are not great at handling this multiple-continent, distributed development environment. Each group can come up with their own risk statement, but it’s unlikely that those risks will be integrated in a meaningful way so that you actually can come up with a single view of what the risks are and how you’re going to tackle them. Well, a lot of people on the show know, I’m a big fan of Karen Johnson. I think her RCRCRC heuristic was innovative for its time and relevant today, which is, if we want to figure out what to test: what’s recent, what’s core, what’s risky, what’s configuration sensitive, what’s repaired recently, and what’s chronic? We can use those as guidelines to create a list. A problem people have with RCRCRC in practice, I think, is they can’t get the data. Like, that’s cool, I’ve got this list, but the project’s too big. There’s no single source of what’s chronic. We don’t have the data. We can’t coordinate across teams to figure these things out. And I think what I’m hearing from you is, whether it’s JIRA, or service desk tickets, or a Wiki, there’s various ways to get the data; put it into something, and out will pop something that is better than any individual person could get working on a whiteboard and typing in Microsoft Word by themselves. Am I in the right direction?
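[One way to make RCRCRC data-driven, along the lines Matt suggests: score each area by how many of the six dimensions the available data sources flag, e.g. "recent" from version control, "repaired" from the bug tracker, "chronic" from service desk history. The flag names and inputs are illustrative assumptions.]

```python
# Karen Johnson's RCRCRC dimensions: Recent, Core, Risky,
# Configuration sensitive, Repaired, Chronic.
RCRCRC = ["recent", "core", "risky", "config_sensitive", "repaired", "chronic"]

def rank_areas(areas):
    """areas: name -> dict of boolean RCRCRC flags (assumed schema).
    Returns area names, highest score first, ties broken alphabetically."""
    scores = {
        name: sum(1 for dim in RCRCRC if flags.get(dim, False))
        for name, flags in areas.items()
    }
    return sorted(scores, key=lambda name: (-scores[name], name))
```

The point is exactly the one made above: once the flags come out of JIRA, tickets, or a wiki instead of memory, the ranking is better than any one person's whiteboard view.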
Simon Evans (16:16):
Yeah, a hundred percent. Within QA, one of the things that we’re really lucky with, and it’s only the last few years, I suppose, that we’ve really been tapping into it, is data. We see everything that goes on. There’s business input that comes in, there’s technical input, et cetera. We’ve got historical information. There’s so much information out there that can tell us what the health of IT as a whole is, because we see all the different projects and programs running as well. So we should be tapping into that information. And there’s a couple of things that happen there. One, we can do that whiteboarding example. There’s still value in that, but what we’re getting is a relatively subjective viewpoint. So for anyone that’s done risk-based workshops before, you know that you’ll end up in various bun-fights with people who are saying, “my area is more important”, or “all my stuff is priority one, because it’s so important”. Someone else is saying that no, it isn’t, my area is more important, and it takes a while to really get all that mapping out. But there’s good, valid information in there. What we want is also to overlay the objective information: that data that we can get hold of from a service desk, or from that historical test information. As an example, there’s another customer of ours. They were having troubles with one of their platforms. It was a customer-facing platform, regularly having outages and issues. They were saying, “What is going on? We need to test more. We need to grow the testing.” By the way, we weren’t testing it at this point; just to throw that in. But, you know, “We need to test more. We need a bigger test team. We’ve got to fix this issue.” So we ran some sessions with them. We did some analysis. We did the subjective stuff. We interviewed various people and did workshops, et cetera. And then we started using our AI engine, Qualisense, to start chewing through various different data sources.
And in one of the areas it spotted, just a relatively simple example, is that in 35,000 tests, they had only identified 4% of the defects that had gone through. So they’re doing this huge volume of what they think is testing the right things to try and solve this problem, but it was clearly not working. We identified that, obviously, the testing was ineffective, no surprise there, but as we went deeper into the data, we could identify exactly why. And ultimately we took over that team and reduced the number of people in that team quite significantly. We halved the testing team for that particular application. We introduced new frameworks around automation. We used this analysis to pinpoint exactly what to test. We still use some of that business subjective information as well, because they’re stakeholders in this. They need to feel comfortable. Instantly, as we took over: no critical incidents in the live environment. They’ve now gone, I’d have to double-check what the number is, but 18 months or so, maybe getting on for two years, where they haven’t had any of those outages, and they’d never achieved that before. I think it even got mentioned at board level, because suddenly the issue was resolved. Of course we have amazing people that do this testing, but it’s actually just about utilizing the data that’s out there to really hone in on precisely what to test. In the case for them, it’s like saying, “You’re testing too much. We’re going to test less, but we’re going to get more effective results.”
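[The 4% figure above is a defect detection rate; a sketch of computing it from a merged defect log, with an illustrative field name.]

```python
def defect_detection_rate(defects):
    """Fraction of known defects caught by testing rather than escaping
    to the live environment. defects: list of dicts with a 'found_by'
    field, either 'test' or 'live' (assumed schema)."""
    if not defects:
        return 0.0
    caught = sum(1 for d in defects if d["found_by"] == "test")
    return caught / len(defects)
```

A suite of 35,000 tests with a rate this low is strong evidence the effort is pointed at the wrong things, which is the argument for shrinking the team and re-targeting what gets tested.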
Michael Larsen (19:30):
So I’m hoping I can jump in here real quick with a simple scenario that I’ve dealt with recently. This has to do with my daughter and her having to deal with the fact that an account she had got deleted through no fault of her own, and the cartwheels we had to turn just so that we could get that reinstated. It was a royal pain in the neck. Why do I bring this up? I bring this up because I think that there are a lot of times that we end up having issues, or there are things that we test that, like you were saying previously, don’t really hit the main thing that we want to be looking at. My point with this, and I’ll condense this a little bit, is: how do we address those areas? How do we address the areas that are less cash focused? Less “Oh my gosh, we have to make sure that the payments work!” I mean, yeah, we have to make sure that that works, and that’s great, but there’s all the other -ilities that I think tend to get a very backseat treatment. How does that fit into this equation?
Simon Evans (20:31):
Yeah, that’s a good one. I’m torn in my mind as to how I answer it. Because there’s part of me that goes, we want to talk about customer experience. We want to make sure that our customers are having the best time with whatever interaction they’re having, and therefore we should put specific focus on to that. There’s another part of me that’s going, what is the cost impact? Whilst your example there is frustrating and unfortunate, if that’s happening to lots and lots of customers and it’s really impacting their brand and reputation, then obviously something has gone wrong; their own analysis isn’t picking up on these kinds of things. But if it’s only happening to a few people, you then get into that kind of cost-benefit question. What is the impact to them, as a service provider, of your daughter’s case? How much time do you invest in that, versus making sure that the things impacting their money are the things that actually get tested? That may be a slightly controversial view, but it’s really just looking at it in black and white. It doesn’t matter whether you’re a social media company or a manufacturer. The focus has got to be on “what makes you money?” What is it that is either going to drive revenue, or drive profits, or protect the customer experience? But having said that, and you mentioned at the beginning of all this that we’re still in the COVID scenario, there are a number of companies, big global retailers out there, who have been getting it really wrong. If you’re Amazon, congratulations. But for a lot of the other retailers out there that relied on bricks-and-mortar shops, and yes, they all have ecom, very few of them actually really focused in on the ecom side. Their setups were pretty basic, not very robust, and didn’t really have a totally joined-up supply chain behind them. So we saw these big spikes, but they hadn’t invested in it, let alone the actual testing side, and the real focus.
Sit and think about the number of brands that maybe you’ve bought from historically, or you’ve heard about, where you just couldn’t get hold of anything. There are places, I know, where it’s still impossible to get hold of anything. That’s where they’ve got it wrong.
Matthew Heusser (22:34):
OK, let’s talk about CI/CD. Michael and I are both big fans. I know Michael was basically a part-time build master for a number of years for at least one of his customers, and you’re actually advocating enterprise CI/CD, and there are lots of different ways to do that. Maybe you could do a generic case study from a real customer, one you don’t have to go into too much detail on. How do they even move in the direction of enterprise CI/CD? What does that mean?
Simon Evans (23:02):
I think the honest answer to that is, they don’t really know yet. There are a number of organizations that are trying to get there. They’re really struggling with the fact that (I’ve said it enough times) there are multiple applications sitting in there. I mean, we haven’t even talked about the fact that they’ll have cloud-based applications, such as Workday, sitting in there, and of course those are deploying change on a regular basis. Customers are trying to deal with their on-premise landscape as well as what’s cloud-based. And obviously some of them are shifting their on-prem into cloud as well. It’s a really complex landscape, and everyone’s sitting around going, “this is the right thing to do”, and I’m yet to be convinced that I’ve seen a true enterprise answer for it. From a quality perspective, we can build in speed, and we can build in flexibility and ways in which we can identify issues. We can get testing done earlier and earlier in that process, so they’ve got the flexibility to say, “Right! We’re deploying some change. We’re testing some targeted bits later on.” We’ll start overlaying that business information, et cetera. But we’re yet to really see anybody truly nail it. So it’s an area that I’m fascinated by at the moment, because we see a number of organizations promoting CI/CD within enterprise, and they tend to be pushing products that supposedly support it. But I’ve yet to see a joined-up answer that really deals with everything, from the environment management, to the data, to actually managing the changes going through, let alone before we get on to talking about the whole testing side. That may have sounded like a bit of a cop-out to your question. But my whole perspective, when I see CI/CD working really well, is that it tends to be in quite a focused area, as opposed to covering this huge set of landscapes. So what do you guys see out there?
Michael Larsen (24:59):
Well, what you’re describing is actually the challenges that we have been facing. We are in the process, I would say, of condensing things down, converging things to the best of our ability, making it so that software that is resistant to running with CI/CD is less resistant to it. For example, the group that I’d worked with for a long time, they had very well done software. In my opinion, it worked very well, but it was built on a framework and software libraries that were difficult to upgrade and update. I don’t think I’m talking out of turn here; I think it’s probably okay to say we took some time to figure out what strategy we were going to use and how we were ultimately going to approach it. We reached a point where we just said, okay, it’s time for us to really make a conscious effort to bring this in line with some of the other products, so that we can eliminate this particular issue of it being its own environment that’s being built, its own structured format that doesn’t share anything with any of the other products in that regard. Especially since it’s the main platform many of the other products were going to be presented on, it was important to integrate with fewer issues. It’s a chicken-and-egg thing. Before we can say, yeah, we’re going to move on to a new CI/CD, or one that’s going to work with the rest of the organization, we have to get our libraries in line with the rest of the organization. Then you can actually go the next step and say, okay, we’re going to do this radical rebuild of our CI/CD system so that it’s more in line with everybody else’s. It’s a work in progress, and my guess is it will be a work in progress for a while.
Matthew Heusser (26:37):
Yeah, I think that’s pretty vulnerable and honest. Not enough people say, “we’re working on this piece.” These things are like GI Joe action figures; they’re all sold separately. There have been some attempts to come up with the one true way to do enterprise CI/CD. Usually that means you have an enterprise CI/CD team. They typically take one of two options. Either they come up with their own system, which doesn’t work for anybody for specific reasons, or they go through the infinite regression analysis of requirements, and a year later they haven’t delivered anything, because they’re still trying to figure out what they should build. The third one, which I haven’t seen anybody try yet, is: this is our number one big customer. Like, we’re a large accounting system and we have a hundred different teams, but 60 of them support the main product, and it’s a web and mobile based product. So what we can do now is build CI/CD for those 60 teams first, using them as reference requirements. And then we can expand to include the old legacy stuff. And, you know, we might never get to the COBOL systems, and that’s okay. But then you have to create a whole team. If you’ve got a hundred teams, maybe spinning up one new team is not that hard to do during the annual budget cycle. Short of that, you have all these individual teams creating their own CI/CD systems, and you have some sort of governance oversight project. Maybe you bring in a consultant, and they try to figure out: which of these teams is like the others? Where are the synergies? How do we build the community of practice? How do we get some emergent order? The language I would use, as a consultant, is: how do I help the emergent order emerge in a way that will come together instead of creating conflict? And that’s going to be based on the customer. You have to be accessing the right levels, because it’s possible you do all the right things, things start to happen, and there’s a reorg.
And now all that work just went to some other continent because it’s five bucks an hour cheaper. Those kinds of things happen. There’s a lot more on the political side there than the technical side, getting people to align incentives so they work together. But that’s what we do, right? We come into these organizations, if we’re invited and we help them make change happen.
Simon Evans (28:49):
Yeah. This kind of loops us back to what we were talking about earlier. The information that we have as part of QA is a key part of this for me. When we start to try and address this challenge with organizations, my starting point is always to say: right, what is it that they need to do? What information do they need to help speed up some of these processes? So being able to present information in a clear, concise way is a key part of this for me. And then we start working back into some of the detail, as opposed to just starting at the beginning, saying “we must automate, and let’s automate everything”, and it’s…
Matthew Heusser (29:26):
Oh my gosh. [laughter]
Simon Evans (29:29):
As opposed to: actually, what is it that we need to show people? How do people make decisions to get everything flowing through quickly? That will start to instruct us on what we automate, where we automate, whether we even automate certain things, et cetera.
Matthew Heusser (29:43):
I’ve got a line for you. I think you said it, but I’m kind of pulling it out and piecing it together: we might be doing the wrong testing; we don’t know. How about we automate the collection of the data to figure out what the right testing is, first? That’s not happening enough. People don’t know how to do it.
Simon Evans (29:59):
A hundred percent. Couldn’t agree more.
Michael Larsen (30:01):
All right. I think that we probably want to put a pin in it at this point; we’re coming up pretty close to the end of our allotted time. I think it’s time for us to shift into “what are y’all up to?” mode. So Simon, as our guest, is there anything that you are currently doing? If there are any upcoming appearances, conversations, writings… this is your chance to shine.
Simon Evans (30:21):
We’ve actually just run a session, I think it was just last week, last Wednesday, talking about this exact topic and bringing it to life in terms of how we’ve built the agnostic tooling around it. We’re showing off some of the tools that we used, but it could be any of the products that get used in enterprise. And obviously we’re plugging in Qualisense to actually show you how we drive some of that decision making. It’s available back on the Qualitest website, qualitestgroup dot com. We’ve got a whole heap of information specifically around enterprise and how we tackle these things. And of course, if anyone wants to reach out directly, I’m happy to engage in conversations around this topic. I’ve been in it for far more years than I care to admit; it’s a little passion of mine. So I’m keen to get some of these challenges solved for customers. So yeah, just reach out.
Matthew Heusser (31:13):
Thanks, Simon. We’ll probably try to put some things in the show notes. I have a talk that I’m giving over Zoom; it’s for the Lansing QA group and for the Kitchener/Waterloo QA group, and I believe it’s complimentary for both. Basically, I’m presented with an app I’ve never seen before, and someone says, “We need you to test it, but it needs to go out today.” How do you deal with that? Can you do it in a way that stands up to scrutiny? It’s a skill. It can be taught. You can hire for it. You can train it. And I’m going to be talking about it for an hour. So that’s my big thing that’s coming up. The other thing is, I suspect I might be talking to Simon about some of this enterprise risk, because it overlaps with some of the work I’ve been doing recently, thinking differently about how we think about testing and the whole picture of testing that we present.
Michael Larsen (32:02):
Fantastic. And I will throw in my little bit right here. I am giving a workshop at the Pacific Northwest Software Quality Conference on “Add Some Accessibility to Your Day”. That is the formal title. And I will be talking about how to do accessibility testing, what kind of tools you can use, and how to get actively involved in it, get your hands dirty kind of thing. You don’t have to be a pro at it. You don’t have to know anything about it. Just be interested in coming in and participating. And if you are participating at PNSQC this time, the workshops are not an extra fee; they’re part of your registration. So you just come out on Wednesday, pick one, and hang out, and I’d love to see you there. And with that, I think it’s time for us to say goodbye. So I want to say thanks to everybody. Thanks for listening. We look forward to talking to you again in two weeks. As always, be excellent to each other and party on, dudes.