Testing AI and Machine Learning

October 13, 04:31 AM
/

Panelists

Matthew Heusser
Michael Larsen
Jen Crichlow

In this episode, Matthew Heusser and Michael Larsen welcome Jen Crichlow of SAVVI AI to demystify Artificial Intelligence and Machine Learning, discuss developments in the AI and ML space, and explore how testers can approach testing these technologies.

Transcript:

Michael Larsen (INTRO):

Hello, and welcome to The Testing Show.

Episode 124.

Testing AI and Machine Learning

This show was recorded Tuesday, September 20, 2022.

In this episode, we welcome Jen Crichlow of SAVVI AI to join us and help us demystify the worlds of Artificial Intelligence and Machine Learning, to discuss developments in the AI and ML space, and to make sense of how to test these technologies.

And with that, on with the show.

Matthew Heusser (00:00):
Well, thanks everyone for listening to one more episode of The Testing Show. This week, we wanna talk about artificial intelligence, particularly machine learning, and how to test it. To do that, we have Jen Crichlow, who is VP of Program Delivery at SAVVI AI. Welcome to the show, Jen.

Jen Crichlow (00:22):
Thank you so much for having me.

Matthew Heusser (00:24):
You're welcome. As always, we've got Michael Larsen back, who you've already heard in the intro to the show. Hey, hi Michael.

Michael Larsen (00:31):
Well, you always know when I'm around or not around. In this case, I'm here. Hey, hey, how y'all doing? <laugh>

Matthew Heusser (00:38):
So we wanna talk about… in particular, when it comes to machine learning, the classic example is a recommendation engine: your Amazon, your Netflix, your… any eCommerce app. Users have bought a few things. They've clicked on a few things. You have a huge amount of data for "people that bought your thing also bought", "people that liked what you're looking at also liked", and "we want to make a recommendation". There's a couple of ways to do that. If you could get all the data into an Excel spreadsheet, you could pass that to some hand-rolled code using some ML library. You might be able to get a tool to help you with it, which I think is part of what SAVVI does. You could try to write a very, very, very precise SELECT statement. But when you get an answer back, is it right? And how can you continuously improve your recommendations to get better? I think that's the problem. And for testers, we have to figure out: were those the best recommendations possible, using an algorithm that is an opaque black box we don't understand and can't see into? Did I frame that right, Jen? Is that the problem?
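
To make the "people that bought your thing also bought" idea concrete, here is a minimal sketch of a co-occurrence recommender in Python. The data and item names are invented for illustration; this is not any particular product's implementation.

```python
from collections import Counter, defaultdict

# Illustrative purchase history: user -> the set of items they bought.
purchases = {
    "alice": {"kettle", "tea", "mug"},
    "bob": {"kettle", "coffee", "mug"},
    "carol": {"tea", "mug", "honey"},
}

# Count how often each pair of items shows up in the same basket.
co_bought = defaultdict(Counter)
for basket in purchases.values():
    for item in basket:
        for other in basket - {item}:
            co_bought[item][other] += 1

def recommend(item, n=3):
    """Return the n items most often bought alongside `item`."""
    return [other for other, _ in co_bought[item].most_common(n)]

print(recommend("kettle"))  # e.g. ['mug', 'tea', 'coffee']
```

The testing problem Matt raises starts exactly here: the code runs and returns something, but whether those are good recommendations is a separate question.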

Jen Crichlow (01:47):
Yeah, I think that is the problem. And it’s an ongoing problem, one we wanna keep asking ourselves over and over and over again, especially when we’re thinking about quality assurance.

Matthew Heusser (01:58):
Yeah. And it’s iterative, right? Like you might make it better today, but you wanna make it even better tomorrow.

Jen Crichlow (02:02):
Absolutely. New things are gonna come into the landscape and we have to react to them.

Matthew Heusser (02:08):
So people know, you're a Vice President of Program Delivery. That could mean a lot of different things. I have the impression SAVVI AI is not a huge organization.

Jen Crichlow (02:17):
No, we’re quite small and young.

Matthew Heusser (02:21):
Tell us a little bit about the day-to-day, and maybe how you got into that role or how you got into ML.

Jen Crichlow (02:26):
Yeah, absolutely. SAVVI AI is Software as a Solution. It's intended to be part of, be it a startup team, or an existing organization of varying sizes. But essentially what we're dealing with is event-driven systems. And so our objective as a business is to provide a platform in which software development teams can quickly spin up machine learning algorithms based on their existing business logic or emergent business logic, and to pipe those in, either as predictions or decisions, directly into their system. So it's using low-code options. We have no-code options, too, if you just wanna validate that the algorithm is working as intended. Yeah. It's a machine learning, MLOps-esque platform, intended to help you get end-to-end from testing all the way into production, and then continue using the system in production and enhancing your customer's journey.
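
As a rough illustration of "piping predictions or decisions directly into your system": an event-driven service might call out to a model endpoint for each event. The URL, payload shape, and response fields below are invented for the sketch and are not SAVVI's actual API.

```python
import requests

# Hypothetical prediction service; the endpoint and field names are made up.
PREDICT_URL = "https://ml-service.example.com/v1/predict"

def decide_on_event(event: dict) -> dict:
    """Send one application event to the model and return its decision."""
    response = requests.post(PREDICT_URL, json={"features": event}, timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"decision": "approve", "confidence": 0.87}

# Example call with an invented event payload (endpoint above is fictional):
# decision = decide_on_event({"amount": 120.0, "customer_tenure_days": 400})
```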

Matthew Heusser (03:21):
And so how did you… tell us a little bit about your role in that group and then…

Jen Crichlow (03:27):
yeah.

Matthew Heusser (03:27):
How you fell into machine learning and AI?

Jen Crichlow (03:30):
So my role on the team in terms of program service delivery is largely with the internal team, making sure that we are coordinating the work across all of our different work streams. So that includes customer success program and product road mapping. That includes day-to-day engineering. But I would say the bulk of my time is largely spent in quality assurance. So that is testing and finding new ways to flex the system. So sometimes that means trying to hack it. Sometimes that means trying to overwhelm it. Other times it can mean trying to outsmart it, if that’s a word or a way to think about it. So, yeah, it’s a lot of mind-bending kind of work. A lot of context switching. And the way that I got into machine learning, it’s kind of funny. I went to undergrad and graduate school for artistic studies and art history, arts administration,

(04:24):
and I got really interested in how, at the time, nonprofit organizations were adopting software solutions for their own operations. So not just iPads in a gallery, but think: how do you maintain and manage your membership? And so, a lot of database work. That was my intro, <laugh> as you can imagine, into some complex databases, some of which were clean, some of which were not. And along the way, that journey just exposed me to more and more systems. I worked at a Fortune 500 company, worked at a bank, just really trying to get a broader skill set. So shoot ahead to today, where I'm at SAVVI, and I have the opportunity to work across industries by building software that is intended, to some extent, to be pretty industry agnostic, and yet service teams that understand their systems to be event-driven. So yeah, fun journey. <laugh>

Michael Larsen (05:14):
That's awesome. Hey, if it's okay, I'm sure that a number of people are gonna be curious as to, of course, what machine learning is when we get right down to it and how it relates to AI. I hope this isn't too much of a tangent, and I'm mostly interested in this from a perspective of QA, but there was a talk that really impacted me a number of years ago by Carina Zona. The talk is called "Consequences of an Insightful Algorithm" and, without going into too much detail, I encourage everybody in the audience to seek it out… I've talked about it before, it's in numerous show notes in the past, but I wanna bring it up today, cuz I feel it really resonates: if we're talking about testing AI and machine learning, that talk would be a very good place to start. Hence why I'm referencing this. I would be very curious: where do you balance the… you're testing AI, you're testing machine learning, but you're also looking at it from a perspective of, "what should AI and machine learning be doing?" And just to give an example, one of the key pieces of Carina's talk was that a major retailer was using data-driven marketing and they accidentally revealed to a teenager's family that that teenager was pregnant.

Jen Crichlow (06:33):
Goodness. Yeah. I’m familiar with that one.

Michael Larsen (06:36):
So, with that <laugh>, with that little hammer hit, what is something that we could maybe relate to, to get our hands around what machine learning is and how we in QA can approach it?

Jen Crichlow (06:50):
Yeah. Thank you for that. Yes.

Matthew Heusser (06:52):
Before, before you answer that, can I give a little bit more context? I know that Jen might be familiar with this talk, but our audience might not. There was something about how they were spying on what you were clicking on, what websites you were going to. So then they presented you with ads saying, "Here's some recommended maternity clothes for you." So if the parents were logged on as the kid's account and went to target.com or whatever, they'd say, "Okay, who in the family is pregnant? Because Target has figured this out!" I think that was how it was working.

Michael Larsen (07:25):
Similar. Yes.

Matthew Heusser (07:27):
It was more than just, they happened to be looking at it, but they actually had ways whether it was a Google search history or something that they knew and then their ad display indicated things they knew about the daughter. And that was how the parents figured it out.

Jen Crichlow (07:43):
Yeah. So I wanna say broadly what AI is, and truly as a blanket statement. There are other ways to define this, but as I understand it, it's the observation of intelligence demonstrated by machines. And so if I look at that as an umbrella of different disciplines, I think we include robotic process automation in that, we include natural language processing, machine learning, VR and AR. And of course, as we continue looking into the future, these different disciplines and applications can also be used in parallel with each other to create different types of experiences. When we're looking at machine learning specifically, we're talking about the use and development of computer systems such that they're able to learn. And we would like for them to adapt over time, using algorithms and statistical models. I think the big takeaway for me, as I was transitioning from managing databases into actually leveraging machine learning, is that it can identify patterns a lot faster than I can.

(08:51):
I think that is the huge benefit that everyone is trying to figure out how best to leverage in their own business, and to do it in a way that… it's challenging, because you're flagging some ethical questions. I think that is absolutely something we also need to consider. There's also: is it even usable? Is this functional? Does this meet our business objectives? Is this aligned with how we're evolving? Is this really just a decision tree? <laugh> You know, there are a number of ways that I think we have to look at it and question it and ensure that it's being utilized in our own businesses in a way that helps us move whatever our needle is in that direction. But that being said, when we think about it in quality assurance, especially because oftentimes we're looking at tabular data, there are labels, at a very high level, that we need to consider as useful to us.

(09:42):
So I think one of the challenges there is, of course, date and time stamps; when we're thinking about the system being event-driven, they're gonna play a huge role, and we wanna ensure the things that are being populated are correct and accurate. As we're looking at some of the legal implications, too, we need to consider what data -be it labels, be it sources, be it proxy information- falls under information that actually needs to be protected in some way, or perhaps cannot be used by the AI and ML in varying ways. Especially if we're talking about systems that are intended to be deterministic in certain ways. To answer your question about what AI and ML are for business,

(10:23):
I think one useful way to perhaps consider it is a new team member. And by that, I mean it’s a part of the work that we’ll continue seeing. It’s something we need to continue learning about. We need to find ways to integrate it into how we work. And yet also understand that there are constraints just like rules and guidelines that each of us follows as team members and employees amongst a team. Does that help a little bit?
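
The tabular-data concerns Jen lists (event timestamps being populated correctly, and labels or proxy fields that may need protection) translate into very ordinary checks. A small sketch with pandas; the column names and data are made up.

```python
import pandas as pd

# Made-up event table standing in for real training data.
events = pd.DataFrame({
    "event_id": [1, 2, 3],
    "event_time": ["2022-09-01T10:00:00", None, "2022-09-02T08:30:00"],
    "zip_code": ["60601", "60614", None],  # a possible proxy for protected info
    "amount": [19.99, 5.00, 42.50],
})

# Event-driven systems live and die by their timestamps: they must parse and be present.
events["event_time"] = pd.to_datetime(events["event_time"], errors="coerce")
missing = int(events["event_time"].isna().sum())
if missing:
    print(f"{missing} event(s) missing a usable timestamp")

# Flag columns a human should review before the model is allowed to use them.
needs_review = [c for c in events.columns if c in {"zip_code", "birth_date", "gender"}]
print("Review before training:", needs_review)
```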

Michael Larsen (10:50):
Yes, definitely. Thanks for that. I appreciate the clarification of what it is and what it isn't. So with that, we talked about what it is, but now, how would you go about doing that? I guess the better way of saying it is: how would you define QA for machine learning?

Jen Crichlow (11:09):
Yeah. Yeah. The key is starting with: well, what are you using it for in your system? And so, as with anything that we're gonna perform a battery of tests on, the very first question is, is it functional? <laugh> Does it meet our initial expectations? That's a question we wanna keep asking all along the way. Part of the iterative process is coming back to that first one. What is its performance? How is it working? I think, as a rule set for our own team, we think of QA in ML as just a continuous process of checking that the functional requirements are met, and that the performance has improved and is improving. So both of those things: that we can see visual markers that it has happened and that it is continuing to happen. And then I think the third is starting to identify a runway for future enhancements.

(11:58):
So sometimes that means finding that certain labels or data sources are not necessarily the best sources of truth, or maybe we need to update their formatting, or maybe we need to ingest that information differently. It often triggers a conversation, but all of those things ultimately rotate back to doing that process again. And so I don't think it stops once you've moved your ML engine into an environment that you can test and begin to validate in. But also, once it's in production, we don't want this to be a set-it-and-forget-it tool <laugh>, if that makes sense.
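
"Performance has improved and is improving" can be made checkable with a small amount of bookkeeping: record a validation metric for every retraining run and refuse to promote a model that regresses. The metric values and tolerance below are placeholders, not a prescribed process.

```python
# Placeholder validation scores of previously deployed models, oldest first.
history = [0.71, 0.73, 0.74]
candidate_score = 0.72   # the newly retrained model's score on held-out data
tolerance = 0.005        # allow measurement noise, not a real regression

current_best = max(history)
if candidate_score + tolerance >= current_best:
    print("Promote the candidate model")
    history.append(candidate_score)
else:
    print(f"Hold back: {candidate_score:.3f} regresses from {current_best:.3f}")
```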

Matthew Heusser (12:34):
If I hear you right, we're particularly focused on ML, which means we throw a data set at a problem to give us a predictive answer, whatever that is. I think a recommendation engine is probably the most thoroughly understood; everyone has seen one of those. We're watching Netflix and it says you should like this movie. And there's a lot of factors involved there. And you have, you've alluded to it, that loop back. You could write an algorithm that gives a certain weight to the star ratings given by the people that watched this movie and by the people that watched similar movies. You could write a straightforward algorithm to weigh things, to make recommendations, but the next step would then be looking at, "Okay, of the people that clicked through to the things we recommended, did they like 'em, and how can we improve it? For the people that didn't click through, the recommendation was bad.

(13:30):
How can we improve it?" You can throw all that at the algorithm. So what do testers do? What's the quality role there? It sounds to me, based on this conversation, like there'd be a significant advantage to being able to check these things in production. There are different strategies: the golden master, where we know the entire data set and we never change it, so we should get the same answers all the time. But realistically, practically, I'm gonna want to get access to some production data, whether it's anonymized or not. A human being is gonna look at this and say, "Hey, I just watched a James Bond film. I only watch James Bond films, and it just recommended some Christmas comedy." A human would know that difference, but it would be very difficult to write an algorithm to do that.
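
The James Bond example suggests one way to check a black box in production: assert a property of the output rather than an exact answer. A rough sketch; the genres, history, and threshold are all invented.

```python
# Property-style check on a black-box recommender: we can't say what the
# "right" recommendation is, but most of them should sit near the viewer's
# dominant genre. All data here is illustrative.
watch_history = ["action", "action", "action", "thriller"]
recommendations = ["action", "action", "holiday-comedy", "thriller", "action"]

dominant = max(set(watch_history), key=watch_history.count)
related = {"action", "thriller"}          # genres accepted as "close enough"
match_rate = sum(g in related for g in recommendations) / len(recommendations)

if match_rate < 0.6:                      # the threshold is a human judgment call
    print(f"Suspicious: only {match_rate:.0%} of recommendations are near '{dominant}'")
else:
    print(f"Plausible: {match_rate:.0%} of recommendations are near '{dominant}'")
```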

Jen Crichlow (14:24):
Absolutely. There are methods for that, especially when you're looking at things in production. One method that we like to use is setting aside a certain amount of the data set to be used specifically for training and another data set used specifically for testing, and thus validating that ML. The fun thing is, of course, it's an iterative process. So having your engine, your machine learning engine, in production is key. One thing that SAVVI's intending to do that might be a little different (I think there are others that are doing this as well) is that it's doing that regularly. It can do things like setting different thresholds: a certain amount of data needs to be added to this data set in order to then run this again. And in certain instances you might end up with the same algorithm that you already had. We call those champion/challenger events, where essentially you're allowing the ML to not only look at the data set and identify a new best model, but then you put that up against the existing model that you already have.

(15:25):
So that's another layer that you can do. Another approach is, of course, synthetic data: throwing synthetic data into the mix and kind of seeing how that performs. Synthetic data is tricky because, at that point, there's a <laugh> little bit of an art to it. And that is to say, you put some intended outcomes in and essentially see if the machine learning system can identify the pattern that you put into that data. So yeah, I think that helps clarify some methods. But obviously for the day-to-day folks, some of them might use Python in their tooling, but I think increasingly we're gonna see platforms provide observability that is tool-agnostic and yet specific to that data set. So that is: the details around the types of patterns that were identified, the types of models that were compared against one another, things like that.
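
A minimal sketch of the holdout-and-challenger idea Jen describes, using scikit-learn on synthetic data. The models, features, and scoring here are illustrative and are not SAVVI's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Made-up tabular data: 500 rows, 4 numeric features, binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Hold out part of the data purely for validating the models.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Champion: the model already "in production". Challenger: the newly trained one.
champion = LogisticRegression().fit(X_train, y_train)
challenger = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

champ_score = accuracy_score(y_test, champion.predict(X_test))
chall_score = accuracy_score(y_test, challenger.predict(X_test))
print(f"champion={champ_score:.3f} challenger={chall_score:.3f}")
print("Keep champion" if champ_score >= chall_score else "Promote challenger")
```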

Michael Larsen (16:16):
Very cool. So if I wanted to get into testing with ML or doing AI… now, granted, of course we could use SAVVI, and I think that would be neat. We want to, of course, encourage people to explore that. But let's just say that I am somebody who is coming to ML with very little knowledge about it, but I want to get into it and I wanna learn more about it. We're always looking for tools, right? We're always saying, "Hey, what's something that we could plug into immediately and get some familiarity?" Like, if you wanna do performance, hey, downloading JMeter will give you a boost in that. Absolutely. Is there something equivalent to that with ML, or is ML basically, "if you don't know how to program with machine learning algorithms and work with them, that's where you gotta go first?"

Jen Crichlow (17:01):
No, I think there are many of these entering the market. Of course, you can learn each of the tools and how they work. <laugh> There are many out there that you can use. The most basic ones, though, and I think sometimes we forget this, are quite manual <laugh>, but you can also download these data sets and go back to the basics. You can use SQL. You can use Excel, to some extent. You can use Python. You can use Postman for some of these things. You can use… goodness, there are a lot of templates that are coming to mind. Basically, I think the way to approach one's intro into machine learning (especially as far as data validation goes) is, first and foremost, to actually look at the software that you're dealing with and its sources. Something you're going to need, which I think often gets lost when we're validating data, is understanding its source information.
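
Jen's "go back to the basics" point can be as plain as profiling whatever table you already have before reaching for specialized tooling. A small pandas sketch; the columns and values are invented.

```python
import pandas as pd

# Made-up sample of the kind of tabular data a team might already own.
df = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "plan": ["basic", "pro", "pro", None],
    "monthly_spend": [9.99, 29.99, 31.50, 0.0],
    "source_system": ["crm", "crm", "billing", "billing"],
})

print(df.dtypes)                              # are the types what we expect?
print(df.isna().mean())                       # how much is missing, per column?
print(df["plan"].value_counts(dropna=False))  # label distribution, including gaps
print(df.groupby("source_system")["monthly_spend"].describe())  # does each source look sane?
```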

(17:52):
Sometimes there are data labels that seem like, "yeah, it's okay, just go ahead and use those", when in all actuality (and it sounds like this may have happened in the story we were referencing before), there's information that can be collated back together that might not do exactly what we need it to do. And so I wanna caution people, before they dive solely into the tools, to perhaps begin by understanding the data that they want to leverage within the business that they're working in or within the team that they're working with. There are plenty of platforms to source synthetic data, if you wanna play with that. But again, sometimes that means somebody else has scraped it and might not have included all the information about the source material. There are platforms where you can begin to pipe in the data on its own, but maybe they'll limit the types of models you can get. I think it's important to explore those things, but I think it's key to begin with the data itself and where it's coming from, because that's going to inform the type of tooling you use and thus how you validate it. Do you mind if I pause and flip it back to you guys?

Matthew Heusser (18:53):
Oh, go ahead. Sure.

Jen Crichlow (18:54):
How do you think about machine learning, especially with tooling and how people who maybe already are dealing with data should begin their journey into leveraging machine learning?

Michael Larsen (19:06):
Well, from my end… I gave a little bit of a talk about this last year in regards to "self-healing automation". That's where my initial foray into machine learning came from and how I came to understand how to use it. And again, when it's broken down and you start to look at what machine learning in that capacity actually is, it's a little bit more along the lines of, "Hey, you have a document object model, you have page objects and everything else, and multiple ways you can define something on the page." So when something changes and you wanna go and make sure that you haven't broken your app, or you haven't broken your tests, having "self-healing" (and I'm putting that in big air quotes) automation allows you to do that. Well, what is self-healing automation? Ultimately it's a pool of choices/options: every time you run a test and get a successful outcome from running that test, the agent that was used increments its counter.

(20:12):
So, if you scan the options and put them in order, it knows, "Hey, this one we trust more because it's been right more often. And this one we trust less because we either haven't gotten to it, or we just don't know its viability." Now, if the one on top doesn't work for some reason, it says, "Oh, okay, I guess we have to go to the next agent. Can we see if that works? Hmm. Maybe not. How about this one? How about this one? Oh, hey, this one worked great." Now the counter for that agent gets incremented. And when you look at it that way, it's fairly dry, it's fairly straightforward, but it helps to make sense of it and say, "Okay, machine learning really just keeps track of multiple things that have happened. And if it sees that something has gotten a positive result, it increments the counter. And if it sees that it didn't get a positive result, it either doesn't increment, or it decrements if it actually fails." So that is, in a nutshell, most of what I know and understand about machine learning and what I've personally done with it, which I realize, compared to what you're talking about, is barely scratching the surface.
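
What Michael describes boils down to a pool of candidate locators, each carrying a trust counter that goes up when it works and down when it fails. This sketch is framework-agnostic; the locator strings and the stubbed element_exists function are invented stand-ins for real driver calls.

```python
# Counter-based "self-healing" locator selection, as described above.
locators = {"#submit-btn": 12, "button[type=submit]": 5, "//form//button[1]": 0}

def element_exists(locator):
    """Stand-in for a real lookup such as driver.find_element(...)."""
    return locator == "button[type=submit]"   # pretend only this one still matches

def find_with_healing(pool):
    # Try the most-trusted locator first, then fall back down the list.
    for locator in sorted(pool, key=pool.get, reverse=True):
        if element_exists(locator):
            pool[locator] += 1    # reward the locator that worked
            return locator
        pool[locator] -= 1        # penalize the one that failed
    return None

print(find_with_healing(locators))  # 'button[type=submit]'
print(locators)                     # trust counters updated accordingly
```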

Jen Crichlow (21:20):
Oh no, no, no, no. What you're describing is so true, because it's the feedback loop, that moment where it's finding out, "Okay, but was that the right answer?" And sometimes there's a human that jumps in and is like, "mm-hmm, no <laugh>, I have to guide that a little bit." I like that you're saying self-healing in air quotes, too, because I do think that's the trajectory the discipline is going in, for the most part: that it already begins to identify when there's a divergence, so to speak, from whatever the intended goal is. But in both cases, it's still ingesting a feedback loop of some kind. And I believe that that starts with the QA teams, especially right now, because it's still quite a hands-on process. The human hand is still involved in it.

Matthew Heusser (22:04):
So for me, what I'd say is that there's a lot of confusion and mythology, particularly around ML, in that we use phrases like big data, and really a lot of the practical applications I've seen for ML, when you listen to the vendor give their little pitch, boil down to data that would probably fit in a spreadsheet. And you can do things like look at averages and standard deviations. There's an Excel method you can call that'll try to figure out what the line looks like for your data, and then you can predict what's gonna come next. That's a huge part of supervised ML; whether it's a SELECT statement or whether it's doing some work in Excel, that's what a lot of data scientists are really doing. What I would think of as machine learning is just the next step: selecting a library, usually in Python, to look at something too big to fit into Excel, usually a spreadsheet or a database, and sucking that data into memory or passing it into the algorithm to figure things out.
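
"Figure out what the line looks like and predict what comes next" is the same move whether it's an Excel function like FORECAST or LINEST or a couple of lines of Python. The monthly numbers below are made up.

```python
import numpy as np

# Made-up monthly order counts.
months = np.array([1, 2, 3, 4, 5, 6])
orders = np.array([120, 135, 150, 160, 172, 190])

# Fit a straight line (degree-1 polynomial) and extrapolate one month ahead.
slope, intercept = np.polyfit(months, orders, 1)
next_month = 7
print(f"Expected orders in month {next_month}: {slope * next_month + intercept:.0f}")
```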

(23:13):
If you're doing a lot of number crunching already to kind of figure out who your typical customer is, or what your common customer archetypes are, how many there are, and what they do, even looking at what the most popular features are and how much money is flowing through each of those features to create a test strategy that's actually based on real risk, based on use, instead of just sort of testing everything the same, you're already swimming in the right waters. And then, from there, you say, "We don't really know, but we're going to create an algorithm to make recommendations. We want it to be based on what people actually like or are actually doing, instead of creating a weighted mathematical algorithm." Then, I think your question was how to get started or where people start; I think that's the on-ramp to ML for testing.

(24:05):
And really at that point, we start talking about the things that this show is interested in, like data quality or quality of software. Another one would be: you go to the doctor, you give 'em your symptoms, and they tell you things to check for. There's a term for that; it's automated decision support systems. You can do that in just straightforward code. And the next step beyond that is to actually say, "When a test comes back, how many problems did people actually have?" And then you can feed that back in. So then, in addition to the table-driven "check them for a thyroid condition or these other three things", it becomes, "Oh no, based on those exact conditions, based on this big, huge pile of data, we want you to check these other things." I think that with the data privacy piece, which we alluded to earlier, when you start letting the computer make decisions for you and go places kind of automatically, you run into unintended consequences that could be problematic.

(25:05):
Things like privacy. And I think that the systems thinking testers can apply is desperately needed. You know, when we start to say, "No customer would ever think of that," often those are security holes, because that's exactly what hackers are gonna do to try to get in. If you say, "No user would ever do that," and it results in not just an unintended consequence but a security problem, that's a serious security problem. It's a long answer to say, I think there are three or four different ways that tester skills can be turned sideways and applied to machine learning and AI.
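
The decision-support example Matt gives above (table-driven checks, plus feeding confirmed outcomes back in) can be sketched in a few lines. The symptoms, checks, and counts are entirely invented for illustration.

```python
from collections import Counter

# Table-driven decision support: symptom -> candidate checks (invented data).
rules = {
    "fatigue": ["thyroid panel", "iron levels", "sleep study"],
    "headache": ["blood pressure", "vision exam"],
}

# Feedback loop: how often each suggested check has actually found a problem.
confirmed = Counter({"iron levels": 14, "thyroid panel": 9, "sleep study": 2})

def suggest(symptom):
    # Rank the table-driven candidates by how often they have paid off so far.
    return sorted(rules.get(symptom, []), key=lambda check: -confirmed[check])

print(suggest("fatigue"))   # ['iron levels', 'thyroid panel', 'sleep study']

# When a new finding is confirmed, feed it back so the ranking adapts over time.
confirmed["sleep study"] += 1
```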

Jen Crichlow (25:41):
Right on. I agree. And it's an interesting point you're flagging, too, about the kind of pivot in mindset, because I don't think machines know that stuff yet. <laugh> We are still constructing what the ML engines are going to do for our businesses. We're still trying to implement guardrails. There's a human hand that's still a part of this process, guiding what should and should not be. To your point,

(26:06):
control over this portion of the system is still key. It's still how we ensure quality; that does not go away. That's inherent in all software that we're building. We wanna make sure that we understand our systems, that we can truly manage them, that we know how they work and what they're intended to do. And then of course, meeting business objectives: it's evolving with the business, as the business should be evolving in the real world.

Michael Larsen (26:30):
So I guess, if I could just throw out a pie in the sky: what do you think we're gonna see in the future? What do you think, functionality-wise or feature-wise, might we ultimately see coming from machine learning?

Jen Crichlow (26:45):
Yeah, I feel like right now the big ticket across industries, no matter where you're using ML, is, of course, observability. Everyone wants to know, what is this thing doing? And eventually we're gonna have more and more conversations of, can you explain how it's doing it? And again, QA is a part of this process. <laugh> We are the first ones to kind of touch it and validate it. But feature-set-wise, I think some of the things we're gonna see are a lot more third parties validating a model. That is to say, if you've spun up your own model, you've worked with another team, there's going to be more of that: okay, somebody else has gotta look at it and kind of affirm that it's doing what it's supposed to do. But also, as we look at more government oversight, that'll also come into play, ensuring that as a business, each of us is responsible for what we are outputting with machine learning. I think we'll see a lot more simulations. I think we'll see a lot more democratization: more industries using it, not just the big names, the FANGs of the world, so to speak, or the big logistics companies. I think we'll start seeing more small businesses being able to leverage it. And in that same vein, I think we'll see a lot more templates to help teams kind of onboard into those tools. So I'm excited about it. How about you guys? What do you foresee?

Michael Larsen (27:54):
Well, from my end, again, I'm still fairly much on the test automation side of the world as far as machine learning goes, and I've looked at a lot of the things that have been coming out of it. But I would like to actually see some ways… this is more along the lines of a user's perspective, I guess. What I would like to see from machine learning is some way to have a say, if that makes sense. As in, if I'm going to be going online, if I'm gonna be able to do certain things, I'd like for there to be some kind of unified field in the tools that we work with. And that might be a much broader discussion. I don't necessarily mind you mining for some of this data, because sometimes it's really helpful, but also sometimes it can get to be excessive. Is there a way that I could say, "Look, you can remind me once, maybe twice, but if I start getting 25 or 30 mentions that this is something I might be interested in, I'm gonna quickly get annoyed with whoever you are that's getting this information"? Yeah. So that's an area. And again, I don't know if that's something that comes from a QA perspective or from a software development perspective, but as an advocate for accessibility and usability and such, it seems to me that that would be something that would be very welcomed by a lot of people.

Jen Crichlow (29:10):
Right on.

Matthew Heusser (29:10):
Well, for me… I don't think we talked about it on the show, but I went to a conference a few years ago where the speaker said they were gonna write a test AI that would look at a test run, would see where the software failed, the automation failed, the GUI failed, would write a bug report, would change the production code, recompile, rerun, and see the software pass. And he predicted that as something that was going to happen. So I kind of raised my hand and said, "Did I hear you correctly? Do you think AI tools are gonna do this?" And he said, "Yes." And I said, "Thank you," and walked out. That was kind of a big deal. I should say that there were some people that were very upset with me for that behavior, but I can assure you the alternative, where I challenged him at his own presentation, would've been much more embarrassing. My point is that there weren't enough people in that room willing to say, "This is not gonna happen; it's not right." There wasn't even (at the time, that was 2016) a knowledge of what this stuff was capable of, for people to know the claims weren't realistic. So I think our next step is getting to a broad understanding of where these tools actually can work, what the tools are, how to use them, how to develop them, and then an increase in the applicability of the tools so that they're easy to use and integrate. Right now, ML is kind of like penetration testing in that it is a magical art. There are a few certificate and college programs in it, but it's kind of hidden… with penetration testing, that seems to be intentional, frankly. It's a black hat thing: let's all hang out in Vegas and talk about our cool tricks. But with machine learning, there just haven't been enough public examples that people can learn from. And I think that's what's gonna come out next, and that's great. So we're gonna demystify it. We're gonna take away the magic from it. And we're gonna find the broad applicability. What I really want to get to is a place where an executive says, "Yeah, that's not really a fit here," instead of, "I read this magazine article, go do it." You know what I mean?

Jen Crichlow (31:29):
Yeah. Like a full understanding of what it actually is and its benefits, but also perhaps where best to place it and where not to place it. 'Cause I agree with you. I think there are a lot of instances where there might be a simpler solution <laugh> to the problem.

Matthew Heusser (31:43):
Or we don't even have the data right now, but that'd be really good data to get. Let's go get data. Let's put it in Excel. Let's put it in a database. Once we have the data, we can do a lot. And once we have the data, actually using it for purposes of ML becomes a possibility. One of the teams I'm working with right now, we've got some really good opinions. We ain't got no data! <laugh> We gotta get the data! Excuse my language.

Jen Crichlow (32:10):
Right on. I love that. Thank you. I just wanna say thank you so much for hosting this conversation. <laugh>

Matthew Heusser (32:17):
And you’re welcome. Where can people go to learn more?

Jen Crichlow (32:20):
Yes. So I am all over LinkedIn these days. That is the best place to reach me if it's anything machine learning related. You can always email me; I'm Jen (AT) SAVVIAI.com, J E N. But yeah, find me online. I'm in the Chicago area too. So if you're in the circuit and driving around, getting to know some of the other teams, let's hang out, let's meet up.

Matthew Heusser (32:41):
<laugh> Maybe we can. To really do it right, it'd probably have to be the winter, but maybe we could do some kind of Midwest Chicago Testing Show meetup in that area.

Jen Crichlow (32:51):
Yes.

Matthew Heusser (32:51):
Don’t know, December, something like that. January, somewhere in there.

Jen Crichlow (32:55):
We’ve got space. <laugh> Happy to host.

Matthew Heusser (32:59):
Nice. We’ll talk to the Qualitest folks. See if we can make it official. We’ll see what we can do. And with that, closing thoughts for Michael?

Michael Larsen (33:05):
Well, I mean, again, I think a lot of this has been really interesting. There were some areas where I thought, "Yeah, okay, I feel kind of secure about my knowledge of this," but we hit on a bunch of new areas, so I'm really happy to have some new food for thought to consider. And again, we tend to look at things with our own limited vision, so it's nice to have a little bit of a broader conversation. So Jen, thanks so much for joining us for this. And Matt, if you want, I'll be happy to take us out here, and to everybody else say thank you so much for joining us on The Testing Show. Please look forward to hearing from us again with a new episode in a couple of weeks. Thanks, everybody, for being here.

Matthew Heusser (33:40):
Hey, stick around to hear Michael talk about the Slack. We’d love to have you on the Slack and keep the conversation going. Thanks, everybody.

Michael Larsen (33:47):
Thank you.

Jen Crichlow (33:47):
Bye everyone. Thank you, again.
