Usability is a big deal for applications and application “stickiness”. The better the usability, the more likely an application is to be successfully adopted.
This week, Smita Mishra of QAZone Infosystems and Fandoro (and a lot of other things) joins Matthew Heusser, Jessica Ingrassellino, Michael Larsen, and Gerie Owen to discuss what makes applications painful to use and methods for making them less painful. We also discuss a variety of ways that testers can get a leg up on usability as an advocacy platform and ways to encourage good UX.
Also, in our news segment, we address five possible usability issues with Electronic Health Records that go beyond frustration issues with applications. These can literally be a matter of life or death!
Panelists:
- Matthew Heusser
- Michael Larsen
- Jessica Ingrassellino
- Gerie Owen
- Smita Mishra (guest)
References:
- Smita Pandey Mishra (LinkedIn)
- Software Test Professionals
- QAZone Infosystems
- Fandoro
- Test Practitioner’s Club (Delhi, India)
- What is an Electronic Health Record (EHR)?
- Human Factors and Ergonomics (Wikipedia)
- Save Our Scrum (Book)
- A Wearable’s Story
- Test What sells more – UX
Transcript:
[Begin Intro Music]
This show is sponsored by QualiTest. QualiTest Software Testing and Business Assurance solutions offer an alternative by leveraging deep technology, business, and industry-specific understanding to deliver solutions that align with clients’ business context. Clients also comment that QualiTest’s solutions have increased their trust in the software they release. Please visit QualiTestGroup.com to test beyond the obvious!
[End Intro]
MICHAEL LARSEN: Hello, and welcome to The Testing Show. Happy May to everybody. I’m Michael Larsen, your show producer, and we are happy to have our regular host and moderator, Mr. Matthew Heusser.
MATTHEW HEUSSER: Good morning or whatever time it is as you listen to this.
MICHAEL LARSEN: Also, we’d like to welcome back to the show Jessica Ingrassellino.
JESSICA INGRASSELLINO: Hi there.
MICHAEL LARSEN: Making a serial appearance (just worked out that way) Gerie Owen.
GERIE OWEN: Good morning, everybody. Good afternoon. Good evening.
MICHAEL LARSEN: We’d like to also welcome our special guest, Smita Mishra.
SMITA MISHRA: Good morning, everyone.
MICHAEL LARSEN: Matt, you know the drill. I’m going to turn it over to you, and let’s make this happen.
MATTHEW HEUSSER: Thanks, Michael. If you’re a regular listener of the show, you probably know Gerie. You certainly know Jess, who is now the Director of QA at salesforce.org. Did I get that right?
JESSICA INGRASSELLINO: Yes.
MATTHEW HEUSSER: So, you’ve been busy?
JESSICA INGRASSELLINO: So busy. [LAUGHTER].
MATTHEW HEUSSER: So, tell us a little bit about what you’ve been up to. We haven’t seen you on the show lately. It’s for a good reason.
JESSICA INGRASSELLINO: Yeah. It’s been really exciting. As the Director of QA, I now have six direct reports. So, we’re building out our exploratory testing teams for every product. We really value making sure that we have exploratory testing as our main means of engaging with the product, and we are also building out our automated testing because we don’t want exploratory testers spending all of their time doing regression. We’ve always had these things, and what we’re trying to do is build out our teams so that we can balance the work a little more appropriately. We’ve been a little stretched. So, it’s really fun to be building out a team. In addition, we’re building out security, and we’re building out accessibility testing.
MICHAEL LARSEN: Yay!
JESSICA INGRASSELLINO: Yes. [LAUGHTER]. It’s really fun, because I am doing less actual testing. Although, I’m still finishing up some of the projects that I was on. I started on March 26th, so there are still some projects that I’ve been working on and bringing other people into as we’ve hired my backfill, but I’m also getting to do more strategy work, which is really exciting for me. I’m getting to do a lot of learning, digging into a bunch of things that are just new areas. So it’s also a little scary, but in that good way. Really excited, really pleased that I’ve been given the opportunity, and having a lot of fun working and building a team.
MATTHEW HEUSSER: Great. We’re glad to have you here and want to hear about it. We also have Smita Mishra. Is that close?
SMITA MISHRA: Yes.
MATTHEW HEUSSER: Okay. [LAUGHTER].
SMITA MISHRA: [LAUGHTER].
MATTHEW HEUSSER: Smita is another one where I couldn’t even… You know, I try. I give introductions and I say, “We first met here,” and I think we probably first met in the STP world and at conferences. Smita has had such a long career in testing, and she’s kind of a serial entrepreneur. She’s been a test consultant, a QA lead type of person, a community builder. I think you’re in Bangalore, right?
SMITA MISHRA: Delhi. [LAUGHTER].
MATTHEW HEUSSER: Oh, okay. [LAUGHTER].
SMITA MISHRA: I’m in Delhi. [LAUGHTER].
MATTHEW HEUSSER: She’s in Delhi, which is a very large population center, but it doesn’t get press (American press); Bangalore and Mumbai usually get more. There are lots of exciting things happening, and she runs a bunch of product companies. So, I can’t do it justice. Smita, fill in the gaps here. Tell us about yourself.
SMITA MISHRA: Thank you, Matt. Thanks for this generous set of comments. I think we met first through Software Test & Performance Magazine. That was a magazine that we used to have, and I guess that’s where we started to talk. Of course, you and Michael were my mentors. I did a lot of learning with you both. So, that’s where we started. Coming to my introduction, let me start with what is most important and then move on gradually to things which are not so important. I’m a mother of two kids, which is the most important role in my life right now. I enjoy problem solving, and I love meeting new people. I started a company in 2010, QAZone Infosystems, where I do test consulting and hands-on testing with my teams. I also have another tech platform that we’re building, a SaaS platform for retail outlets. It’s Fandoro, and it’s more on the gifting side of things. We have APIs that people can connect to as well as enterprise packages. Then, I have a Test Practitioner’s Club Meetup that I run, which has close to 1,000 members; 943 as of today on meetup.com, to be precise. We do regular meetups there for testers in the NCR region. I enjoy mentoring and coaching college students, and I have given tech talks to over 5,000 students so far. I’m a computer science engineer with 17 years of experience in the industry. Yeah. So, that’s what I do.
MATTHEW HEUSSER: Great, and you’re now on the show.
SMITA MISHRA: Yes. [LAUGHTER]. Very excited to be here. Thank you.
MATTHEW HEUSSER: Yeah. We’re sponsored by QualiTest. Gerie is currently VP of Knowledge Innovation for the US at QualiTest, and testing keeps changing. When I started my career, I actually tested Windows-based products, which was mid-range computing. We quickly moved to the Web, then quickly moved to mobile. Along the way, we’ve had to invent and rediscover new paradigms for, “What’s intuitive, and how do we make it easier?” So, the news item today is actually on Electronic Health Records. I was surprised. The five major usability issues of electronic health records. One of the ones they listed was the “auto-refresh mix up,” where you have a list of patients sorted by last name or bed number, and then auto-refreshing brings back the default settings; you lose your sorting, and things are in a different order than you expected. If you knew your patient was fifth down the list, you’re going to have the wrong person. That just sounds like a straight-up bug to me. That’s just not right. Any thoughts?
JESSICA INGRASSELLINO: Yeah. [LAUGHTER].
MATTHEW HEUSSER: Go ahead, Jess. Maybe I should point at a person. I like the panel.
JESSICA INGRASSELLINO: So, I have two thoughts. One is, “Yep, seems like a bug.” The other is (and again, this does point to usability), “When and how are the actual users being brought into these life-critical products?” This is a life-critical thing. If I were to do human subject research, just qualitative research, going in and observing, I would have to have permission and get signatures from all the participants to go in and sit and watch them, but it seems like we have kind of a lower standard or a different set of rules for how we create products for people. It’s weird to me that we don’t require, in life-critical applications, that we bring in users earlier, that we say, “We’re trying this out. This is what this could do. This is the potential damage, and this is what could happen. Do you accept? Yes or no?” [LAUGHTER]. Before the apps are allowed to be released. I think if we did that, we might think a little bit more about potential bugs or potential problems of different use cases, because we would want our product to be accepted in the same way that qualitative researchers and quantitative researchers have to go through and really think about and outline all the potential problems and issues. You know, they’re not testers. They’re researchers. So, this would have to include product, developers, and pretty much every aspect of the business. I see a usability issue, but I also see kind of a broader issue that doesn’t seem to be addressed.
MICHAEL LARSEN: Yeah. The one that caught my attention (and I’m not at all surprised by it; in fact, it seems to be just a general aspect) is the number one usability issue that they mentioned: alert fatigue. You know, ignoring alerts due to the overwhelming number of them could cause clinicians to overlook important life-saving alerts. Yeah. I can certainly agree with that. I deal with that with my e-mail and with my Skype updates and things. Even when I try to filter that all out and say, “I just want to get it down to the bare minimum so I’m only dealing with the most important stuff,” text messages to me and only me directly not only go to my phone but also ring on my Fitbit. Even then, I find at certain times I get to a point where I’m getting this buzz on my wrist like, “For crying out loud, I don’t have time for this,” and I just shuffle it off. At the worst of that, it’s possible that I might lose a couple of minutes and miss my daughter saying, “Hey, I’m not feeling well. Can you pick me up from school?” “Oh, okay. I can go take care of that.” Versus, “Yeah. I need to make sure that I get a life-saving asthma medication to a patient who’s going under in a few minutes, and, uh, whatever, I’ll get to that later.” That’s scary. [LAUGHTER].
MATTHEW HEUSSER: Yeah. Absolutely. There’s a movie where the characters come into a hospital, and they say they “need this STAT and this STAT and that STAT,” and the guy says, “Wow. You really seem to know what you’re doing.” He’s like, “No. You just throw the word ‘STAT’ in a lot and people take you seriously.” Of course, in a hospital environment, “STAT” means “ASAP. Right now. It’s really important.” Well, if you say that all the time, then everything is equally important and nothing is that important. It reminds me very much of when I bought my house. With the home inspector guy, everything was, “Critical, has to be fixed right now. Big deal. Oh, my gosh. What are you going to do?” All of it, even putting a railing in. We didn’t have kids yet; how could that be that important? Then you realize that if he said, “Oh, it’s not a big deal. It’s okay. Don’t worry about it,” and then someone did fall and break their leg, we could sue the home inspector guy who said it “wasn’t a big deal.” So, he had to say everything was a “big deal,” and I suspect alert fatigue in a hospital environment is similar. A lot of this seems to me to be stuff good testing could have and should have found. I really think there’s a blurred line. I’m more interested in quality than I am in “functional testing; we just do functional testing here.” I’m like, “Well, it’s not going to work for your customers.” Unintended consequences of customization is another one. There’s a customized version of the electronic health record to fit their workflow, and the custom system might have missed the proper testing. That’s just buggy. [LAUGHTER]. That’s just, “We didn’t test it.” I guess my question then is, “Where does testing end and usability begin?” I’ll ask that to Smita.
SMITA MISHRA: Yeah. I’m not an expert on the healthcare EHR usability discussion that you just mentioned. So, I’m not an expert to be commenting on it, but—
MATTHEW HEUSSER: Let me push back on that.
SMITA MISHRA: Yeah. Yeah.
MATTHEW HEUSSER: This is all the same. Yes. There is some expertise, and there are some words to know. Electronic health records are extremely similar to large enterprise resource planning systems.
SMITA MISHRA: You know.
MATTHEW HEUSSER: You are kind of an expert.
SMITA MISHRA: They are. Yeah. Yeah, Matt. I would agree with you to a certain extent, but I’ll tell you why I said that. There’s a site called Healthcare IT News, and when Michael mentioned this particular podcast topic, I was trying to look at the usability reports on IoT and EHR and everything. What I figured out was that there are reports (specifically, studies and research) done on EHR vendors, products, and the services they provide where there has been permanent harm and, in some cases, temporary harm caused to patients specifically because of usability issues, which is very grave in nature. When I think about it, I’m like, “I don’t even know how they would have truly done the usability testing of something like that.” I would really like real scenarios, real users, to be used for certain things. As long as it’s just software that is mostly on the invoicing side or the admin side, I’m still okay. Even the reporting side, I’m okay with. But the moment it comes to devices, that could really get a little trickier. So, coming back to your point of, “Where should it start?” The entire industry keeps pushing that, “Testing has to shift left, shift left.” That’s where I would say that usability also, [LAUGHTER], needs to shift a little left, and it has to be applied at the minimum viable product. So, if you want to build a product or a service, you first have to build a skeleton prototype, a wireframe of the product, and that’s what we call a “minimum viable product,” which is functional in itself and represents all the core functionalities. At least the most important things, the main goal, the main purpose of the application for the users. Introduce usability at that stage, right at the design and analysis stage, rather than building the product and then throwing it out for the users to then decide. Much like performance: once you build the product and the architecture is wrong, you’re coming back all the way to fix issues which are not just at the UI level. A lot of people think that usability is just the user interface, but user experience is so much more. It is how we actually design our content, how we actually lay out our visuals. The visual is just the top layer of it, but there’s so much more the user needs. Then, “What is the content requirement or the functional specification?” Then, on top of it, “How does the user interact with it? What kind of information architecture do we want?” On top of that, we design the information. So, yeah, that’s what I would say. You have to do it as early as possible.
GERIE OWEN: Yeah. I totally agree. Usability defects often tend to be rooted in design. So, if you don’t get in early, it’s almost like security defects: they’re more expensive and more difficult to fix.
MICHAEL LARSEN: So, in my earlier testing days (I’m going back almost a couple of decades), we had a term that we used quite frequently called “human factors testing,” and that was where we did actual usability studies. As part of every project that we were working on, we would take random people in the company and ask them, “Hey, would you like to sign up for our human factors team? It doesn’t take a huge amount of time. It just means we’re going to grab you at some point, bring you in, and run you through some scenarios with our product (new things that we’re developing and that we’re going to release).” They would videotape the screen, or us in the room, especially if it was a physical device, to see our reactions and how we would interact with it. Then, they would go through and analyze those interactions and reactions and come up with some pretty high-level and also some low-level changes that we needed to make based on that. Now, I don’t hear about that very much anymore. Is it just because I’m old, or has the vocabulary changed and I’m just not aware of it? Is human factors still a thing?
MATTHEW HEUSSER: Allow me to be the tester on that. I hope I’m doing this in a good spirit; I’m getting information, not just trying to be the big jerk. I have issues with human factors that I think a good human factors expert could answer. The claim was, “We’re going to spend all this money, and we’re going to bring these users in. Not every employee is representative; we have to go out and get real users. We’re going to turn a video camera on, and we’re going to study their eye movements. We’re going to figure out where they click, and then we’re going to make this awesome design based on that. It’s going to be empirical and data based.” What really ended up happening seemed to me like a whole lot of just kind of talking to people and guesstimating. I worked with companies that did formal usability studies with that sort of method, and it seemed to me the outcome that we got was pretty similar to an outsider, objective tester paired with bringing in employees who are not in that department, not on that team, who kind of have a job description like your end users, and just walking them through the software over some pizza, over lunch. Pizza is great to walk people through with, by the way, because your hands get sticky. It’s awkward and you’ve got to take a bite while you’re trying to move a mouse, and that’s what you want. So, yeah. I used to hear about human factors a lot. I talked to people for whom that was going to be their life’s mission, and I don’t know that their outcomes were that much better than what Gerie and Smita have been talking about, where you have a whole product team that gives this a lot of thought before you actually build the thing and you plan to throw one away. Does anybody disagree with me on that, because it’s kind of [UNINTELLIGIBLE].
GERIE OWEN: Well, yeah.
SMITA MISHRA: Kind of.
GERIE OWEN: I think there’s a reason why those weren’t successful: you didn’t really define your user. I think now we’re using a lot of [UNINTELLIGIBLE].
MATTHEW HEUSSER: Yeah. That’s definitely a different way to do it.
SMITA MISHRA: I would also have a little different view on this, Matt. Michael, you aren’t that old. So, yes, we do have things like human factors in usability testing. We just call it by slightly different names, like user interviews. What method you use for usability, whether it is done in your office with the real users, or remotely under some moderation, or remotely and unmoderated, all depends upon exactly what your purpose is and what kind of constraints and resources you have. For example, I very recently did an exclusive usability test project for a network marketing team from Germany. Their users were in India, and they had very specific rules on the kind of incentives they had, the kind of money they made, and the kind of content (offline and online) they wanted their salespeople, their network marketers, to have access to. So, there were a lot of things. When I looked at that application, I was like, “I don’t know.” I would have to really study and understand, “What is the whole purpose of this application?” to be able to, you know, click buttons and know what I really need. Whereas, at the other end, there are applications (let’s say the Facebooks and the Gmails) where they throw in their features with A/B testing or beta testing, wherein they let their users test it, totally unmoderated, and say, “Okay, let’s see how many of them actually adopted it, how many of them comfortably use it.” Could we have done what Gmail does for our network marketing team? No, we could not, because the people who would actually be using it were a very specific group of people. “What are their priorities? What would they want out of that application?” It was only known to them. So, it was very important for us to select them. It also depends upon how much qualitative insight you want. Is it just a small feature change? Maybe you don’t need everybody to be in the same room testing it, and we don’t need the video recordings; but, if it is a very specific app coming up altogether for a specific user group, you do need a sample from that user group to be able to say if it’s going to have any benefit for them.
MATTHEW HEUSSER: Yeah. I think it depends on the kind of application. One of the examples I was thinking of was a health insurance company where every single employee had health insurance. We could have taken someone who was hired last week (almost anybody off the street), put them through some conversation, and they would be a reasonable approximation of our user base, versus something like that where you can’t even begin to understand the domain. A lot of times, in companies like that, you end up hiring employees from the domain and then teaching them technology and testing. But, yeah.
SMITA MISHRA: Um-hum.
MATTHEW HEUSSER: I hear you.
SMITA MISHRA: It also depends on, “What is your focus? Do you want to do more qualitative analysis just to understand the good and bad at a broader level, or are you looking for something very specific and quantitative? What is the time it takes for the humans to move through the application from this point to that point? What is the error rate? How many users actually get it right, and how many users actually fail at so many tasks?” You want to capture those patterns and trends. Also, there are sometimes things as simple as the color contrast. In my case, I saw that some of the buttons were completely ignored by the users because of their location and the color contrast. They were not even very visible to the users, and those were important buttons. So, we had reports like, “I couldn’t find this.” Now, a user who doesn’t know that domain would never report it. They won’t even be looking for something like this.
MICHAEL LARSEN: Yeah. I feel very much the same, that often we think we’re putting together something that looks slick and minimal and really easy to interact with, but for a broad swath of the population, we’re not taking into account how they interact with applications or how they would need to be able to do so. If you have mobility challenges or certain cognitive challenges or, for that matter, just color blindness, if you can’t tell the contrast difference between things, yeah, you’re likely to miss something really important that somebody who is “normative” just takes for granted: “Oh, yeah. I interact with it this way.” Then, somebody else will say, “Well, I don’t even see it.”
SMITA MISHRA: Um-hum. Absolutely.
MATTHEW HEUSSER: So, let me ask a tough question then. We can all come up with the “right way” to do it: “Oh, we just need to get involved upfront. We need to build prototypes. We need to have someone thinking about usability from an outsider’s perspective. It’s a role, just like design is a role. You know, I’ll pontificate on the beauty of what a good quality process would be.” Then, you actually have a job and a customer, and they say, “Well, you’re a tester. What are you talking about? We have someone to build the user interface. Stay in your lane. Go do your job, and I’ll do mine. I’ll give you a build tomorrow morning. It’s going to have these three features in it, and you can go test it then.” Now, what do you do, if your goal is to improve quality? I would argue there are lots of things I can do to make the product better in that position. In fact, I can get really good at that. Arguably, that’s how context-driven testing started. It was, “Let’s stop telling everybody else to change and let’s just get really good at testing.” But our effectiveness will be limited in that situation, and you basically have to do it twice: you’re going to build the wrong thing, and then you’re going to test the bugs out of it, including the usability bugs. How do you influence the organization to change, to innovate in their process, so that they can adopt some of these more progressive ways of thinking about delivery?
GERIE OWEN: Absolutely. That’s the question for usability, for performance, for practically any type of testing. For UX particularly, the way you convince them, from a services company’s point of view, is that usability is one of the most important things because it’s customer facing. This is where your customers’ perceptions are critical. You can have a great product on the backend; it may have awesome functionality; but, if it’s difficult for the user, they won’t like it and they may not even use it. For companies that are producing online retail products, customers are very picky. If they don’t like it, they’ll move on to the next company. Not only that, they will plaster social media with their poor experience.
SMITA MISHRA: Particularly, when we think of usability, the first thing that comes to mind is trying to narrow down whom you are building the product for so that you have a better understanding. Now, when I have to convince my management or my development team or an organization to adopt this process earlier in the cycle so that we are not too late in the game, I definitely think I wouldn’t be wrong to say to them, “If you are able to identify a band of target users, forget about me testing it for usability at the end; just think about it from the overall business perspective.” Firstly, you get to know your early adopters easily, because you know your band of target users; and, since you know the specific functions and features that a particular set of users is going to use, that speeds up your development time. That also guides your testing, of course, usability testing and functional too, and it gives the marketing people early insight into user acquisition strategies, like how they could acquire these early adopters. It narrows down so many variables, not just for testing but for the overall organizational process, and it saves a lot of money if you do it early. Like Gerie said, “If the users don’t use it, it’s of no use.” So, if you follow this process, you actually save a lot of money. I think that would make sense to the business people.
JESSICA INGRASSELLINO: I was thinking about the ways that we talk to business. How do you actually get to the point where you are the tester who is saying, “Okay, and this is the value proposition?” How do you do the thing? What if you go and you say to the business, “This is the value prop,” and the business says, “Be quiet, go test”? The very, very practical side of that. Once you have identified the value proposition, if you can’t bring it to the business, what we can do as testers, aside from being very good at testing, is to identify the other people in the business (and there are these people in every company I’ve worked for) who agree with us, who have a similar viewpoint and say, “Yes. I agree. I see this value proposition.” So, identifying a developer, identifying one of the product people, even if it isn’t somebody who’s necessarily on the project you’re working on, and getting their viewpoint. Once you identify those people, have lots of conversations with them and experiment on your own as a tester to implement things. Yeah. It’s definitely taking on a bigger piece of responsibility; but, every time that I’ve experimented with a technique or with implementing something new and I’ve taken on that responsibility with the input of others in the business, I’ve been able to present my stronger tests and my stronger test cases with some buy-in that has been organically grown. Sometimes I feel like it’s “ask forgiveness, not permission” on a small scale to show efficacy, and then broader adoption gets easier.
MATTHEW HEUSSER: We’ve had a great conversation—right?—a nice rolling conversation, but unfortunately we all have to get back to our day jobs. So, before we go, I’d like to give everybody a chance to talk about, “What are you up to lately? What’s happening with you? What’s going on? What are you publishing?” I’ve talked a lot, so I’ll just say that I’ve been adding more chapters to the book, Save Our Scrum. Despite the fact that Scrum is moving into the late-adopter stage and everybody is super excited about big new scaling frameworks, there’s still a lot of work to be done to get to high-functioning teams, and I think that attention to the basics is one way to get there. If you don’t have a copy, you can find it on Leanpub, Save Our Scrum, and I’m back to adding new chapters. Let’s have Michael up next.
MICHAEL LARSEN: Okay. This is coming out in the middle of May; and, with that, one of the things I’m working on that I’m quite excited about is a series I’ve been doing for Ranorex, the producer of TestRail. I’ve been writing on broader automation themes and things to think about when you are doing this. I asked them, “Hey, we don’t really talk about how to test things related to accessibility and inclusive design; would you be interested in me doing a series on that?” They said, “Yes.” So, in the next few months, look forward to those coming out. I’m excited to be writing these, and I’m excited people want me to keep talking about this.
MATTHEW HEUSSER: Gerie?
GERIE OWEN: I just want to congratulate QualiTest on being the number one pure-play testing company. We’re all about testing. Some of the innovations we’re looking into, of course, include UX, and I’ll be doing my wearables presentation at Agile Testing Days in Boston, coming up in June. We also will have communities of practice in DevOps, automation, and performance. I’ll also be doing a presentation, a workshop, at SASQuest coming up. I’ve also been doing a lot of writing and work in continuous testing.
MATTHEW HEUSSER: Thanks, Gerie. Jess?
JESSICA INGRASSELLINO: No speaking engagements coming up. Mainly, I’m working on a book and another book proposal, and doing lots of learning as I keep moving forward in my new role.
MATTHEW HEUSSER: Smita?
SMITA MISHRA: I will be speaking at SeleniumConf Bangalore this year, and the topic is usability. So, I’ll be doing that. I’m also speaking at a couple of Agile conferences in and around Delhi and in the northern part of the country. The major chunk of my time these days is going into doing enterprise sales for Fandoro, because we may be going for a short round of funding very soon, so we’re trying to make ourselves investor ready. [LAUGHTER].
MATTHEW HEUSSER: That’s exciting. I hope to hear more about that soon.
MICHAEL LARSEN: Yes.
MATTHEW HEUSSER: Okay, everybody. It’s been a while. I appreciate your time. We’ll get together real soon to talk about Testing on The Testing Show. Thanks everybody for coming.
MICHAEL LARSEN: Thanks for having us.
GERIE OWEN: Thanks.
SMITA MISHRA: Thank you.
JESSICA INGRASSELLINO: Thank you.
MICHAEL LARSEN: That concludes this episode of The Testing Show. We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us.
Also, we want to invite you to come join us on The Testing Show Slack Channel as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and also to help us shape future shows. Please email us at TheTestingShow(at)QualitestGroup(dot)com and we will send you an invite to join the group.
The Testing Show is produced and edited by Michael Larsen, Moderated by Matt Heusser, with frequent contributions from Perze Ababa, Jessica Ingrassellino and Justin Rohrman as well as our many featured guests who bring the topics and expertise to make the show happen.
Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to BE a guest on the podcast, please email us at TheTestingShow(at)qualitestgroup(dot)com.
Thanks for listening and we will see you again in June 2018.
[End Outro]
[END OF TRANSCRIPT]