
The Testing Show: Is It Testable?

October 27, 2021

One of the great challenges of software testing is the fact that software is often designed to accomplish goals for a user but is developed in a way that is resistant to testing. In addition to knowing what to test and where to test, we should be asking how we want to test and if our testing goals are even possible. Matthew Heusser and Michael Larsen welcome Gil Zilberfeld to discuss how we can make our products more testable and how we can leverage those capabilities.

 


 

 

Panelists:

 

 

 

References:

 

 

 

Transcript:

Welcome to The Testing Show:

 

Episode 107:

 

Is It Testable?

 

This show was recorded September 29, 2021.

 

In this episode, Matthew Heusser and Michael Larsen welcome Gil Zilberfeld to discuss how we make our products more testable and how we can leverage those capabilities.

 

And with that, on with the show.

 

 

Matthew Heusser: (00:00)

Hey, thanks, Michael, for that great introduction. This week, we’re talking about testability, and we’ve actually got two experts, one being Michael Larsen, show producer and co-host, whom you know so well. Morning, Michael.

 

Michael Larsen: (00:18)

Good morning. It’s interesting to be a panelist in the formal sense. Thanks.

 

Matthew Heusser: (00:23)

Well, good… good time zone, Michael, I should say.

 

Michael Larsen: (00:26)

Thank you.

 

Matthew Heusser: (00:27)

And then we’ve also got Gil Zilberfeld who I’ve known for years. I think mostly we hang out in Germany at Agile Testing Days.

 

Gil Zilberfeld: (00:35)

Yeah.

 

Matthew Heusser: (00:36)

But I think I’ve seen you at the Agile conference and such, too. Right?

 

Gil Zilberfeld: (00:40)

I’ve been to STAREast in the U.S., sometime in the last decade or so.

 

Matthew Heusser: (00:47)

It’s interesting. So Gil gave a talk on testability at the last Agile Testing Days. You were at TypeMock for a long time, building a tool to help developers with unit mocking skills. You went independent. I’m not sure… a while ago.

 

Gil Zilberfeld: (01:07)

Yeah, that’s from 2007 and for seven years, something like that.

 

Matthew Heusser: (01:12)

You gave a talk on testability with kind of a little tweak in it, in that when developers hear testability, they hear exposing the innards of the application to everyone in the world. And maybe that’s bad design. I don’t know that I’ve heard that argument before, but I certainly have heard people who don’t see the value in it and don’t want to do the extra work. And testing comes in late and says, “Where’s the testability? Can you run fast to stay in place and add testability for us?” And there’s a lot of pushback against that.

 

Gil Zilberfeld: (01:49)

Yeah.

 

Matthew Heusser: (01:49)

Your talk promised to clear up some myths about testability, right?

 

Gil Zilberfeld: (01:54)

Yeah. You see, I was a developer for a long time and I’ve been told how to develop software in a certain way. You kind of create diagrams and you say, okay, this will solve the problem. And you build it and you say, yay, great. It works. And you give it to a tester, and the tester doesn’t agree, but sometimes they can’t prove it. That means if I give you, like, an app and all you have is the UI, all you can do is do stuff with the UI. If I don’t expose anything as a developer to you, you can’t set things up, like scenarios or data, in order to run those scenarios. Therefore, if I don’t prepare the code to be testable, it won’t be testable either for the developer or for the tester. What is the tester’s job? The tester’s job is to say, well, this is what we know about the application, as much as we can. And if I’m hiding part of it, some of this is obscured and therefore we can’t report on it. This is something that has become very interesting in the last few years, because a lot of the pushback that you were talking about is because developers were taught to build software in a certain way, like I was. And then testability is something that, if it’s not there, you need to patch it on. And then it becomes kind of homework. They don’t want to do it. They already did it, right? They created the perfect software. And apart from that, they’re developers. If they’re going to go back into their code without tests, they’re going to break something. So that’s another reason why they don’t want to do it. So, unless the whole development group agrees that this is something that’s very important, and therefore we need to code in a certain way in order to make things testable, it will not come out of thin air. And we all lose from that.

 

Matthew Heusser: (03:50)

Well, there’s a couple of mixed things that are overlapping there. I think they’re all bad. And you’re recognizing all of them. One of the anti-patterns I have seen is that all we can do is test through the UI, and all we can do is set up data through the UI. So the test automation does the same thing, 20 times in a row.

 

Gil Zilberfeld: (04:10)

Yeah.

 

Matthew Heusser: (04:11)

That takes 15 minutes so that search will be able to find the search results that we expect, for 20 users that are set up. And that’s just one test. There’s no way, even if you parallelize it, continuous integration is going to take long enough that we’ve got a slow enough feedback loop that it’s not effective. And another one is we don’t even have labels on the user interface elements, so the XPath is like “the fifth row of the fourth table column of the third div”. And, uh… then the UI changes and everything breaks, and we have to debug and fix it. So those are the two most common things I’ve seen. Um, pause here for a second, ’cause you’re agreeing. Michael, do you want to weigh in on what we’re talking about? Does this make sense to you?
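As an editorial aside, the stable-identifier fix Matt alludes to can be sketched in a few lines of Python. The page snippets, the `data-testid` attribute, and the helper names are invented for illustration; the point is that a lookup keyed on a test id survives a layout change that would break a positional XPath like “the fifth row of the fourth table column of the third div”:

```python
from html.parser import HTMLParser

class TestIdFinder(HTMLParser):
    """Collects the text of elements carrying a data-testid attribute."""
    def __init__(self):
        super().__init__()
        self._capture = None
        self.found = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-testid" in attrs:
            self._capture = attrs["data-testid"]

    def handle_data(self, data):
        if self._capture is not None:
            self.found[self._capture] = data.strip()
            self._capture = None

# Version 1 of the page, and version 2 after a redesign adds a wrapper div.
# A positional XPath would break between these; the data-testid lookup does not.
PAGE_V1 = '<div><div><span data-testid="order-total">42.00</span></div></div>'
PAGE_V2 = '<div><div><div><span data-testid="order-total">42.00</span></div></div></div>'

def total_by_testid(page):
    finder = TestIdFinder()
    finder.feed(page)
    return finder.found.get("order-total")
```

The same idea applies to WebDriver locators: a `[data-testid=...]` CSS selector is insulated from layout churn in a way positional paths are not.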

 

Michael Larsen: (05:00)

Oh yeah, definitely. In fact, I’m laughing a little bit about this, because as I’m hearing what Gil is saying, I’m inwardly chuckling, because last night at the time of this recording, I did a presentation for the Pacific Northwest Software Quality Conference meetup. And it was my talk, “Is This Testable?”, and everything that you are saying are points that I reiterated last night. I was like, “Okay, this is interesting.” So I want to add a little twist to this, if I can. One of the key points that I made when I talked about testability is I took it back to science, and I said, like many things, we have to start with first principles. The first principle I always encourage for anybody, if they’re going to get into software testing, is that they have to have a firm understanding of, and be willing to actively utilize, the scientific method. The point being that you set up a situation, you ask a question, you create a hypothesis. That hypothesis needs to be experimented on. That experiment will produce data. You need to look at that data and you need to determine if that data either affirms or refutes your hypothesis. Lather, rinse, repeat. These are things that we oftentimes take for granted, but they’re very fundamental when we want to discuss anything related to testability. And the key factor is being able to say, what details do we need to have going in? And how can we be as objective as possible with the testing that we are doing, versus oftentimes hand-wavy subjective stuff? And if you’re using a UI, yes, in some ways you can say, well, we can break everything down into a specific place. If you’re using WebDriver or some other libraries to be able to run an application, sure, there’s things you can do. But as was just mentioned, a lot of those things are inefficient. They’re not necessarily the most direct way to do something. And you are forced into a biased position.
In this sense, you’re biased by the UI and what it will or will not show you, and that may not be super efficient. It may not be something that will actually give you the raw data that you really need to make a decision, especially over multiple iterations. Being able to make something testable might mean you have to break out of those basic parameters, because again, you want to deemphasize that if possible, and you want to be able to make sure that the data that you can collect is data that you can either repeat or examine multiple times and fairly quickly. It’s important to communicate as early as possible. I know that, hey, we’re working on a UI product, or hey, we’re working on getting this new feature in. I do a lot of accessibility advocacy, and a lot of the stuff that we would do in, say, UI automation, where you mouse click here and click over here and do this… you don’t even use a mouse in many of the accessibility aspects. You use the keyboard exclusively. Is your application going to utilize many of the keyboard shortcuts and methods that make a product accessible to begin with? Your UI won’t tell you that. Really, this is just fresh in my brain from last night. So I’m kind of just like, “Oh yes! Oh! And we talked about that! And that! And that!” So I’m going to step back now and let Matt take it to…

 

Gil Zilberfeld: (08:53)

I actually wanted to build on what you said, Michael. There’s two things I want to add. First of all, we’re kind of sticking to the UI as a method of testing, which kind of leads us, well, for me, to think about two things. First, a lot of the time we use the term untestable… we don’t really mean it’s untestable, but it takes a lot of time to do everything around testing, including automation as well. Matt, you mentioned going in through the HTML and the XPath and saying which element we actually want. In a practical sense, if this is the kind of work that we need to do in order to test, well, let’s face it, people won’t do it. Untestable really becomes not tested. We need to think not in binary terms of whether something is testable or not, but about making testing easy. Things come out of that as well. Second thing, you mentioned the bias. It leads me to one of the things that I mentioned in the talk: in order not to be biased toward an interface, testers need to learn about architecture. How the thing is built. What does it expose? Am I asking the right questions? Does it expose an API? Can I add something to the database or not? Can I replace the database? Can I have tools to do that? Can I run tools in that environment? It’s not the old way of “here’s an app, test it.” Rather, it’s “let me know the whole ecosystem of how it works, how it is deployed, how it is built.” And now that I have this information, I can invoke many things in my tool belt to test.

 

Matthew Heusser: (10:36)

Thank you. To backtrack for a minute. Michael is calling in from California on Pacific time. I’m on Eastern time. Where in the world are you, Gil?

 

Gil Zilberfeld: (10:47)

I am in Israel. The internet is connecting all tired people, wherever they are (laughter).

 

Matthew Heusser: (10:52)

So, what, Greenwich Mean Time plus something?

 

Gil Zilberfeld: (10:56)

GMT plus three.

 

Matthew Heusser: (10:58)

And Michael is GMT minus seven or something.

 

Michael Larsen: (11:04)

Minus 8.

 

Matthew Heusser: (11:04)

11 hour difference. That’s just fantastic. 11 hours. Yeah. So you’re before the business day and Gil is…

 

Gil Zilberfeld: (11:12)

Towards the end, yeah.

 

Matthew Heusser: (11:13)


Fantastic. We’ve been doing this podcast for years now, and it’s amazing how that’s just normal today. Yeah. Maybe we should do a podcast on that.

 

Matthew Heusser: (11:25)

I think most of us can say, ah! This is an anti-pattern. This is an anti-pattern. And oh my gosh, this is terrible. This is terrible. One thing that I do recommend is asking questions during development, because I find that if product owners and developers give a different answer to the same question, you’ve just identified a bug that would have been created. And then you can get them to talk and you can prevent it. And it costs us, what, 15 minutes of time? Let’s say that you’re in the bad situation. Let’s say you’ve been hired as a tester for a buggy piece of software, and you come in and, surprise, it’s got a bunch of testability problems. What do you do? And you could make up any scenario within your experience, because there’s a bunch of ways this can go wrong. But I would say the classic: you have to test through the user interface. There aren’t clear identifiers, so you can’t write code in order to do tooling. Maybe even the user interface is unpredictable in some way. You click submit and an order comes back, and that order number is unpredictable. What are you gonna do?

 

Gil Zilberfeld: (12:25)

I can tell you about my experience. I work mostly with developers who want to automate the tests, so they have access to the code. When I teach them, train them, to write unit tests or integration tests, API tests, whatever, they usually have to change their code for the tests to work. This is testability. It wasn’t testable before; now we can write tests for it. Sometimes the change to the code is so simple, and doesn’t have any real problems with it, that they are willing to do it. Now, if you have the same situation with a tester, unless the tester knows how the thing is built, they will not understand, or point the developer to, hey, maybe you can do this kind of modification and I can do it more easily. And they certainly cannot communicate the value of that. That’s a problem that we have: unless we’re working together, we cannot explain why this is valuable. Like I said, the developer is kind of under an impression, I know I’m generalizing, but I was a developer, so I know. Like I said, the developers thought that the testers are going to test everything because they have all the time in the world and all the resources. There’s not a very clear picture of what everything is, because everything could be anything, right? Unless the discussion takes place, and the discussion goes into details, and the code, the architecture, is exposed to someone with some testing experience, the question is what comes up, and then we go back to, yeah, this app, it doesn’t work. It sucks. I have a list of 10,000 bugs that you need to go fix, but I can’t really point you as a developer to what to fix in order for me to test better. One of the things that surprises people when I say it is that I came to the conclusion that testers need… I won’t say every tester needs to code or learn how to code, but they do need to learn the developer language. What are classes? What are code lines? What is the representation of conditions in the code?
Because otherwise, for them, it’s like a giant black box, and they can’t point the developers to, hey, I need this to be available to me, because if it’s a giant black box, they don’t know what it is and what it relates to, a variable sitting like three levels down. Testers need to speak that language. Developers can learn the testing language, but most of them are not directed toward this funnel. So I think the responsibility is on the tester side. I’m optimistic about this, but it will take a while.
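As an editorial aside, the kind of small, low-risk code change Gil describes developers making for the tests to work is often a simple case of dependency injection. A minimal Python sketch, where the function names and the discount rule are invented for illustration:

```python
import datetime

# Hard to test: the result depends on "today", which a test cannot control.
def weekend_discount_untestable(price):
    today = datetime.date.today()
    return price * 0.9 if today.weekday() >= 5 else price

# The small change: let callers inject the date, defaulting to today
# so existing production call sites do not have to change.
def weekend_discount(price, today=None):
    today = today or datetime.date.today()
    return price * 0.9 if today.weekday() >= 5 else price
```

A test can now pin the date to a known Saturday or Monday and assert an exact result, with no clock patching, which is exactly the “it wasn’t testable before; now we can write tests for it” shift Gil mentions.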

 

Michael Larsen: (15:18)

Hey, Gil, I’ve got one thing that I can possibly comment on here. During my talk last night, I called this a possible blinding flash of the obvious, but this is something I think, honestly, if more organizations where you have the ability to do it could do it, it would help considerably with what you’re referring to. How can you tell whether the main thing is the main thing, or what interacts with what? One key area where we lose, or don’t exploit all we can, is log files. If we have an application, we don’t just create one log file. If you’ve got a front-end application and it’s running some kind of wrapper to keep everything together in the UI and make it look good, that’s one thing that might be doing a log. If you have an NGINX server, for example, that’s doing a log. If you have a search engine that’s a component of your product specifically, that’s creating a log. If you have a microservices gateway or an API that you’re running, that’s creating a log. And so you have all of these logs that you could potentially be looking at. But the biggest challenge, of course, is either the dearth of data from certain logs or the overwhelming amount of data from others. So one of the things that I like to do, and again, this is where I call this the blinding flash of the obvious: I like to use a screen multiplexer, something like screen or tmux or byobu, which a lot of people use for pairing purposes, but you don’t have to. You can also just take a session in bash or whatever, and you can split it into multiple panes. I like to run mine horizontally; if you want to run them vertically, fine. But by doing that, you can then tail a different log file in each one of those panes. As you are running a test, you have six or seven panes that you can look at and see what’s touching what. What’s actually responding. Who’s going where.
If you notice that a certain thing that you’re doing is hitting a database really hard, or it’s hitting a search engine really hard, that gives you insights as to, oh, hey, here’s some possible performance areas that we can look at, or hey, here’s a potential vulnerability, or, oh gosh, this is a third-party thing that we have no control over. Which, by the way, if you are dealing with log files, another thing that I would encourage, if you have control over the development of it, is to get each of those teams to unify on a log file format so that, you know, hey, I’m looking at this file. It’s constructed this way. It can give me this kind of a message, this kind of a priority, and the module or component where it’s actually happening. Those things can allow you to, I don’t want to say homogenize, it’s the wrong word, but I mean really get an insight as to what’s happening. And then you only have to worry about those areas. It’s like, oh, NGINX log? I can’t do anything about that, but maybe I can wrap something to help me pull it in and make it a little bit more usable or relatable to the other log stuff that we do. So that’s an example there.
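As an editorial aside, the unified log format Michael suggests can be sketched with Python’s standard logging module. The field names here are invented; the idea is that every component emits the same JSON shape, so the log files tailed in those panes can be merged, time-sorted, and filtered uniformly:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit every record as one JSON object with a shared set of fields,
    so logs from different components speak the same language."""
    def __init__(self, component):
        super().__init__()
        self.component = component

    def format(self, record):
        return json.dumps({
            "ts": record.created,        # epoch seconds: easy to time-sort across files
            "level": record.levelname,   # shared priority vocabulary
            "component": self.component, # which service or module wrote this
            "message": record.getMessage(),
        })
```

Each team would attach this formatter to its own handlers; a wrapper like the one Michael describes for the NGINX log could translate foreign formats into the same shape.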

 

Gil Zilberfeld: (18:36)

If it’s possible to do it, great. It really depends a lot on the type of the organization, and who’s in charge of which logs; the ability to unify them or create a specific format really depends on organizational structure. And if it’s possible, great. However, I would like to go back to testability. What you described is something like looking at the crime scene, maybe as the crime is happening, but you don’t understand what is happening, because what you see is text running on the screen. That’s cool, but sometimes not all the text will run on the screen and not all the hints will be there. You might figure out things that are working, but if the application doesn’t expose data, statuses, and so on, written into the logs at the rate that you want, it just leaves you like a detective looking for scraps inside logs which you were given. Testability, or improving testability, means better logs, and especially specific logs, unified logs, so I can find something in the whole heap of data. Developers kind of take a blanket approach, like, log everything out so you can find it if you’re looking for it. And like you said, it’s overwhelming if I have, like, 10 logs that I have to sync in terms of time and understand, with each one in a different language. Again, I’m looking at the crime scene after the fact. If we want to make better sense and make our time a lot more effective, we need, like you said at the beginning, to create a system that exposes the data that we need. It might not be built with everything that we need, but it needs to be changeable, modifiable, in a way that when we do it, it is easy to ask the developers, hey, I need this type of information. And if it’s a bigger development organization, I need to know who’s in charge of which component, service, whatever, so I can ask them to log more data there.
Again, if we want to get better at this, if you ask me which way I would go, I would make the system create and expose better data rather than teach the tester where to look for data that is available.

 

Matthew Heusser: (20:59)

Thanks, Gil. One thing I want to go back to, if it’s all right, is this issue of “is it testable.” When you say it’s not testable, I think you mean: I am unable to write a computer algorithm that can run through all of the things and produce a result. I mean, a human can probably test it.

 

Gil Zilberfeld: (21:19)

Given enough time and resources. Yes, probably. (laughter)

 

Matthew Heusser: (21:22)

Okay. Yeah. That’s fair. Even there are some cases where we say a human could test it, but it wouldn’t be economical.

 

Gil Zilberfeld: (21:29)

Can I give you an example of something that became testable, but with a lot of effort? We were working with a team in a bank. They were working with a drop of sanitized production data, but it’s a mainframe, and it’s going to be a mainframe for, like, the next 50 years, so they can’t really change it. The mainframe doesn’t have any capabilities for deletion or modification and so on. It works as it’s working in production, and that’s cool, but first, they don’t know when they’re going to get a drop of a new version. Data may change every night, a drop can occur, and they can’t fill in the data that they need. So basically, you don’t have control of the database, the data there can change anytime, and you can’t really edit data. If you want to run a query about a person without credit history, I’m simplifying it, I know… it won’t be the same user every time. Okay, so that’s a restriction. So they built a tool that connects to the automation part, but also works for the manual testing as well, that finds a user like you’re asking for. So: give me a current user in the system that doesn’t have credit history, and it will give you a user ID, which you can query and do stuff with. Let’s give this guy a loan. This is a scenario. Now this guy has a loan in his history. So when you run the test again, it won’t come up as the same user, because you changed things in the database. It’s not that the things are not testable. You have a system; it works. They just needed to create a very, very sophisticated tool to help them test all kinds of scenarios that otherwise would be harder to even perform. Like, who’s this user? I need to test with a user that doesn’t have credit history, so I need to go into this database. If I want to test it, I need to find a user that will answer correctly to the API I’m going to invoke, because after that, when I ask for a loan, I need to continue the same scenario.
Before that tool, it was essentially untestable, unless you created a user from scratch and jumped over all kinds of hurdles until you actually got to the point where you were actually starting your actual test. So it’s not that the system was not testable, but in practical terms, it wasn’t, because who would do that in their spare time? And they don’t have spare time. The effective situation was that a lot of scenarios were not tested. So it’s not that the system was untestable; it was untestable given their resources. And what they decided to do, which is kind of a complex solution, I think, was to build this tool that would help them test both in a manual and an automated way. So “untestable” has a lot of meanings. The solutions for that can have different meanings as well, different operations and implementations. That was kind of a smart solution, what they did. It was also a very costly solution. This tool was built by one person, the resident genius there, who is still there. What happens when he leaves? Somebody needs to maintain that. When I asked this question about what would happen, they didn’t know. Everybody I talked with was using the tool, but they were not developing it. It’s a very complex question. It doesn’t come down to just code. It has impacts on a lot of facets.
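As an editorial aside, the core of the “find me a user that matches” tool Gil describes is a predicate search over the live data, rather than a hard-coded user ID that tomorrow’s data drop would invalidate. A miniature Python sketch with invented records and names:

```python
# Invented records standing in for the mainframe data drop.
USERS = [
    {"id": "u1", "credit_history": ["loan-2019"]},
    {"id": "u2", "credit_history": []},
    {"id": "u3", "credit_history": []},
]

def find_user(predicate):
    """Return the id of the first current user matching the predicate.
    The caller states a condition, not a specific user, so the test
    still works after the underlying data changes."""
    for user in USERS:
        if predicate(user):
            return user["id"]
    return None

def no_credit_history(user):
    return not user["credit_history"]
```

The real tool presumably queried the mainframe instead of an in-memory list, but the contract is the same: describe the scenario’s preconditions and let the tool locate a user that satisfies them.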

 

Matthew Heusser: (25:16)

Wow. That’s a great question. So we are running out of time. Couple of things I wanted to cover before we go, I did check YouTube, Gil. And there’s a version of your talk from the Heisenbug Conference.

 

Gil Zilberfeld: (25:28)

Yeah, I did it also as a webinar. So Heisenbug was a version of that; you can check it out there, and you can get it on my TestingGil YouTube channel.

 

Matthew Heusser: (25:38)

We’ll include that link then. And Michael was your talk recorded that you gave yesterday?

 

Gil Zilberfeld: (25:43)

Yeah, I’d like to watch it.

 

Michael Larsen: (25:46)

I believe my talk was recorded yesterday, so it probably will be uploaded at some point. However, if you want to get the gist of what my talk is, the paper that I initially wrote for this two years ago for the Pacific Northwest Software Quality Conference is available. I will link to that in the show notes, and you can see all the details of what I covered there. That paper basically was the core of my talk. Also, I want to add one additional component, because the whole point of my “Is This Testable?” talk came out of my participating in the Ministry of Testing’s “30 Days of Testability” challenge that they offered a couple of years ago. And I want to definitely make sure to give credit where credit is due, because I learned a tremendous amount about testability from that particular 30 Days challenge. It gives you a checklist and an opportunity… and when I say checklist, I mean a checklist of assignments and readings and things that you can do so that you can actually bone up on testability approaches and knowledge and methodologies and books and research, so that you can be effective in utilizing it. So I strongly encourage that. That will also be in the show notes.

 

Matthew Heusser: (27:04)

That’s awesome. Yeah. Hopefully, if we do it in time, maybe the video is up before this goes live and we can include it; otherwise we’ll have the paper. Wow. So, sorry we just don’t have the time to cover this in more depth. But before we go, we should probably do plugs. Do you want to plug anything? Is there a website? A Twitter? Some way people can find what talk you’re giving next? Both of you guys get a chance to do that. But when you do that, can you throw in, if you have one, a single testability tip? I’ll go first. It’s the taxi cab testing, where you have automation that sets up the data so a human can jump in and do the testing. That can be very easy, because all you’ve got to do is write a script that runs some insert statements into a database, and then maybe it reads a text file and blows your data in. And then you can log in and you can do all the things. That can balance speed and human interaction. So if you don’t have that, that might be something to look into. Gil?
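As an editorial aside, Matt’s taxi-cab setup script, insert statements that stage data so a human can jump straight into testing, might look like this minimal Python/sqlite3 sketch (the table and rows are invented for illustration):

```python
import sqlite3

def seed_test_data(conn):
    """Stage the data a human tester would otherwise have to
    click through the UI to create, one record at a time."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users "
        "(id INTEGER PRIMARY KEY, name TEXT, has_history INTEGER)"
    )
    conn.executemany(
        "INSERT INTO users (name, has_history) VALUES (?, ?)",
        [("alice", 1), ("bob", 0)],
    )
    conn.commit()

# An in-memory database stands in for the real test environment here.
conn = sqlite3.connect(":memory:")
seed_test_data(conn)
```

In practice the script would point at the test environment’s database and read its rows from a text file, as Matt suggests, but the balance is the same: automation handles the slow setup, the human does the interesting testing.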

 

Gil Zilberfeld: (28:07)

Okay. So first of all, I’m giving my next webinar on TDD and legacy code on the 12th of October. So check out my site over at everydayunittesting.com or follow me on social media; you’ll see the webinars coming up. I’ll be very happy if everybody comes. And I mean everybody. A testability tip: one for developers and one for testers. For the developers: duplication of code is okay, everyone will tell you. It’s not. The reason is that if I need to fix something, I need to remember all the places where I made this mistake and go over there and fix every one of them. But it also has an impact on testing, because if code appears in different places, I, as a tester, might need to cover all kinds of code in different places. So that’s extra work for the tester. Removing duplication helps the testers as well. For the tester: ask the question, “Can I mock/simulate/replace that?” When you ask that of the developer, they’ll start thinking about, A, is it possible? B, how is it going to work? And C, if there is a need for that, how can I do it? So just triggering the idea that something needs to run differently than it is going to run in production creates a discussion, and ideas of how to create something that at some point we can replace by plugging other stuff in.
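As an editorial aside, Gil’s “can I mock/simulate that?” question has a direct expression in Python’s unittest.mock. The service, function name, and score value here are invented for illustration:

```python
from unittest import mock

def fetch_credit_score(client, user_id):
    """In production, `client` would talk to an external service.
    The testability question is whether it can be swapped out."""
    return client.get_score(user_id)

# A Mock answers "can I simulate that?" with yes: the dependency is
# replaceable, so the code around it can be tested without the real service.
fake_client = mock.Mock()
fake_client.get_score.return_value = 700
```

Because `fetch_credit_score` accepts the client as a parameter rather than constructing it internally, the mock plugs in without any patching, which is exactly the design discussion Gil wants the question to trigger.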

 

Matthew Heusser: (29:44)

Cool. Thanks. Michael?

 

Michael Larsen: (29:46)

Okay. So my kind of takeaway from this is, if I was going to give you any one hint that can help you, not just with testability, but also with potentially finding bugs, that’s to go in and be on the lookout for what’s being updated and what’s being submitted into the code repository. And as you do that, if it’s reasonable to do so, look to see which components that you’re working on, like Gil was saying, you can mock out and work with, but also look to see what other components they directly interact with and touch. I call this feeling the heat. You want to look for the radiant heat source, so to speak, of your changes and how they are actually interacting with other components. That’s one. But for a more broad testing example, I would just say there is no easy fix to testability. It’s an ongoing process. It requires focus. It requires consistent communication. Be willing, as a tester, to be there in those earliest meetings. Be there at those story workshops, be there at the time they’re discussing requirements, and, to borrow a Jon Bach line, “provoke the requirements,” as in, start testing as you’re discussing it, because that’s where you’re going to be able to determine: how am I going to test this? And are we making this so that it is as easy to test as we possibly can?

 

Gil Zilberfeld: (31:17)

Cool tip.

 

Matthew Heusser: (31:18)

Great. Thanks. Okay, guys. I know Michael’s got to get going. Gil, thanks for making this work on short notice. Lots more to talk about if you want to come back.

 

Michael Larsen: (31:28)

Excellent.

 

Matthew Heusser: (31:29)

All right. Thanks everybody.

 

Gil Zilberfeld: (31:30)

This is fantastic. Thanks again.

 

Michael Larsen (OUTRO):

That concludes this episode of The Testing Show.

 

We also want to encourage you, our listeners, to give us a rating and a review on Apple Podcasts or Google Podcasts, and we are also available on Spotify.

 

Those ratings and reviews, as well as word of mouth and sharing, help raise the visibility of the show and let more people find us.

 

Also, we want to invite you to come join us on The Testing Show Slack channel, as a way to communicate about the show.

 

Talk to us about what you like and what you’d like to hear, and also to help us shape future shows.

 

Please email us at thetestingshow (at) qualitestgroup (dot) com and we will send you an invite to join the group.

 

The Testing Show is produced and edited by Michael Larsen, moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen.

 

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to be a guest on the podcast, please email us at thetestingshow (at) qualitestgroup (dot) com.