The Testing Show: API Testing
In today’s fast-paced and ever more complex software development landscape, Application Programming Interfaces (APIs) are more important than ever, and testing them comes with its own unique challenges. This week, Matthew Heusser and Michael Larsen talk with Beth Marshall, Ben Dowen, and Andrew Knight about the unique challenges testers face when working with and testing APIs, and the variety of methods to make those challenges a little less daunting.
- Beth Marshall (LinkedIn)
- Ben Dowen (LinkedIn)
- Andrew Knight (LinkedIn)
- Beth The Tester’s Tales
- Full Snack Tester
- Automation Panda
- JQ: Command Line JSON Processor
- What is Contract Testing?
- Postman Platform Overview: Be API-First, Not API-Last
- Test Automation University
- API Java Path
- Boa Constrictor
- Exploring Service APIs through Test Automation
- Tester Of The Day
- I’M WRITING A SOFTWARE TESTING BOOK!
Michael Larsen (INTRO) (00:00):
Hello and welcome to The Testing Show. Episode 96. API Testing. This show was recorded on February 26, 2021. Matthew Heusser and Michael Larsen welcome Beth Marshall, Ben Dowen and Andrew Knight to talk about some of the unique challenges and opportunities that come with developing and testing APIs as well as resources and tools to help with the process… and with that, on with the show.
Matthew Heusser (00:00):
This episode, we wanted to talk about API testing, which is a big part of modern software testing on most of the projects I’ve been on, for anything but the smallest of websites or the smallest of handheld apps. And I don’t think we talk much about how to do it well. We don’t talk much about what the challenges are with API testing, what the anti-patterns are. Just as a community, I haven’t heard those things come up much. It’s much more, “Here’s SoapUI. It’s great,” and that’s sort of the end of the conversation. I think we’re going to go a little bit deeper today, and to do that, we’ve got quite the panel. So we’ll start with Beth Marshall, who’s a Senior Test Lead at SmoothWall in the UK. Welcome to the show, Beth.
Beth Marshall (00:45):
Pleasure to be here.
Matthew Heusser (00:45):
And tell us a little bit more about what you do at SmoothWall. What that means. I see it’s an Edutech…
Beth Marshall (00:51):
That’s right. So we work in the kind of safety space. We keep millions of children safe online every day, which at the moment is a really big deal and very important. I love my job. I love doing some good in the world and we certainly do rely on APIs and API testing quite heavily as an organization.
Matthew Heusser (01:10):
And you’ve been doing more and more of that in your day job. If I heard that right. I reached out for people that were API experts and your name came back.
Beth Marshall (01:19):
It did, yes. Chris Armstrong, one of my testing peers, reached out to me as a potential source of API expertise. I guess you’ll be the judge of that, right?
Matthew Heusser (01:32):
Well, I think it’s good that you’re not someone who’s been doing it for 15 years. Experts sometimes lack beginner’s mind; they aren’t even aware of the challenges, and they can’t articulate them. So we’ve got the full panel here. Also, Ben Dowen is a QA engineer at PA Media, which is a media services company, also in the United Kingdom. And your Twitter handle is @FullSnackTester. Could you tell us a little bit about that and your API testing experience?
Ben Dowen (02:01):
Yeah, absolutely. Full Snack Tester is a bit of a tongue-in-cheek take on the idea of being a full-stack tester, of which API testing is of course a part. I see that a lot of modern testers actually could put themselves forward as full-stack testers, and it’s a play on the joke of “full snack developer” that I saw being thrown around for a number of years, but I didn’t see anyone else really owning the idea of being a full-stack, or full-snack, tester. And I love my snacks. I’ve been working at PA Media in the racing and betting squad for about a year now. We have a big estate, lots of APIs, lots of services that integrate with one another: one processes the data, changes the format, outputs it somewhere else. Very few UIs, lots of services.
Matthew Heusser (02:56):
That sounds like a person we want to listen to, then. So I’m sure you’ve seen some of the problems that I see in my consulting work, where things are just gnarly and nasty and hard to debug and pull apart when the tool was just supposed to make it magically work fantastically well, or am I wrong? Am I just like a terrible tester?
Ben Dowen (03:16):
Oh, I’m sure you’re not a terrible tester. There are a lot of challenges in API testing, and some of them are shared across other parts of testing and test automation. I tend to find there’s a huge challenge in setting up the environments and the configuration correctly. And especially with those interactions with third parties, sometimes you end up having to rely quite heavily on mocking. Otherwise, you just don’t have the predictability and the repeatability that you need to run that automation through.
Matthew Heusser (03:48):
Yeah, that’s been my experience too. So finally for the panel today, we have Andy Knight, I saw him on Twitter as @AutomationPanda. Based on the amount of information he was putting out there and how he was framing himself, I assumed this was kind of an independent consultant, but Andy has a day job as a Lead Software Engineer in Test at Precision Lender. Now, welcome to the show, Andy, thanks for coming on.
Andrew Knight (04:13):
Thank you for inviting me.
Matthew Heusser (04:14):
And, of course, we have, as always, Michael Larsen, our show producer and contributor, who has to deal with me every week.
Michael Larsen (04:25):
Well, thank you very much. Happy to be here. Hey, actually, this is very timely. I figure I’ll throw this out as an intro question because it’s what I’m currently dealing with right now, and it might be something that could set the tone. A lot of what I’m doing right now is more API related, much more shell-scripty, less GUI, if you will. But the one thing that kind of drives me crazy, and maybe there is a nice, simple way of doing this… I struggle with the idea that you have an API, of course, you have a service, you run a call, whether it be through a tool like SoapUI, or through Postman, or if you’re old school like me and you want to use curl and a bash script. But what oftentimes ends up happening is I can send all this stuff and I get this massive reply back, and I might want to go through and say, “Oh, you know what? I only need this little tiny piece of this big JSON chunk that I’m getting back, or HTML,” and then I have to go through and do this overt manipulation of everything I get, just so I can get that little chunk of data that I actually need to do something else. My question is, am I just not hip and savvy enough with this? Or is this really something that a lot of people struggle with? And if somebody’s getting into doing API testing and having to do that kind of manipulation, are they setting themselves up for a long learning curve?
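To make Michael’s pain point concrete: the show notes link to jq, a command-line JSON processor built for exactly this, and the same idea can be sketched in a few lines of Python. The response shape and the `dig` helper below are invented for illustration, not taken from any real service.

```python
import json

# The "reply" here is a canned stand-in for a massive API response.
# The point: parse the full reply once, then pull out only the field
# you need, instead of string-munging the raw text.
raw_reply = json.dumps({
    "order": {
        "id": "A123",
        "items": [{"sku": "tea-001", "qty": 2}],
        "shipping": {"status": "dispatched", "eta_days": 3},
    },
    "metadata": {"page": 1, "trace_id": "xyz"},
})

def dig(payload, path):
    """Walk a dotted path like 'order.shipping.status' into a nested dict."""
    node = payload
    for key in path.split("."):
        node = node[key]
    return node

reply = json.loads(raw_reply)
status = dig(reply, "order.shipping.status")
print(status)  # the one small piece we actually wanted
```

The equivalent jq one-liner would be something like `jq '.order.shipping.status'` piped from a curl call; either way, the extraction is one expression rather than hand-rolled manipulation.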
Ben Dowen (05:55):
Matthew Heusser (08:06):
So I’m going to respond and make sure I understood you correctly. I think you did a great job explaining, from sort of a technical level, all the moving pieces. There are a lot of different ways to do it. What you’re recommending is kind of a common open source, RESTy, JSONy way of doing things. Would you agree there is a problem, when you’re testing, with getting the right data and knowing the right things are in the system to test against? The terms I would use for that are test data setup and test data management. So you’ve got to make a bunch of calls to create the object in the system so that you can make the call to see that it returns the object you just set up, or you have to have a persistent data store with a Golden Master, where you can make the call, get the result, and check it to make sure that it returns the right results. Golden Masters are problems when it comes to running tests in parallel, because the data gets corrupted, and they tend to age out for various reasons. You can’t use them anymore because the data is old and there’s some rule that your order has to happen within the last 30 days. Are those problems to which the other ideas you recommended are the solution? Or did I just misunderstand?
Ben Dowen (09:18):
You’re absolutely right there: ultimately there are many ways to solve the problem, but the problem still exists, and you have to figure out your strategy for how to deal with that. Sometimes that’s a compromise. One of the difficult things is understanding the data that your service needs and how you’re going to tackle that. Are you going to, as you say, get the data there up front? Are you going to have the data living with each test? Are you going to seed that data? How do you manage that? That can actually be a bigger challenge than making the calls themselves. Ultimately, making a call and asserting on the stuff that comes back from the individual calls is relatively straightforward stuff. Understanding how those connected series of calls work together, which data you need from where, that is a challenge. Oftentimes the way that I meet that challenge is a lot of exploring before I get as far as automation. It’s about sometimes taking a step back and trying a few things out before you get anywhere near understanding how you’re going to codify the thing. I appreciate that doesn’t give you an out-of-box solution, because you’re obviously right: there’s a lot of problem there to unpack.
Matthew Heusser (10:36):
I think realistically the goal for today might be understanding the problem and the challenges that people have. So if they’re in the middle of it, they can go, “Oh yeah, everybody’s got that problem. Okay.” Or before walking down that road, they can know what sort of pitfalls to look out for. Thank you, Ben. Beth, do you have anything along those lines?
Beth Marshall (10:56):
I guess, to take it back to something slightly more simple in response to the original question, about ways of passing data from one source to another: something that I have used in the past… we actually use a tool called Taiko. It’s Node.js based; Gauge with Taiko is our kind of platform. We can quite easily take part of the string from a JSON response body, whatever that part is, and then pass that as a variable to a subsequent request. We work in quite a microservices environment, and we find doing that is quite a neat way, in a very basic, simple case, to be able to use APIs to string requests together and use one variable in another. And obviously something that most testers are aware of, but not all, is this concept of Contract Testing. With a very straightforward test, you can confirm the whole JSON schema is as expected. For some, that can cut down their testing by quite a lot. It’s a contentious thing, and some people don’t like to wholly rely on it, but I think if you want a straightforward solution to checking all of the values in a response without having to write an endless number of test cases, contract testing is certainly worth looking into.
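The request-chaining Beth describes, pulling a value out of one JSON response body and passing it as a variable into the next request, might look like this in plain Python. The login response and header names below are invented for illustration; in a real suite both “responses” would come back from live HTTP calls.

```python
import json

# Canned stand-in for the first response body (e.g. from a login endpoint).
login_response_body = json.dumps({"token": "abc123", "expires_in": 3600})

# Step 1: extract the piece of the response we need as a variable.
token = json.loads(login_response_body)["token"]

# Step 2: chain the extracted value into the subsequent request,
# here shown as the headers dict that request would carry.
next_request_headers = {"Authorization": f"Bearer {token}"}
print(next_request_headers["Authorization"])
```

Tools like Postman, Gauge/Taiko, or Insomnia do the same thing with environment variables between requests; the mechanics underneath are this simple.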
Matthew Heusser (12:30):
And the tool I’m familiar with for that is Pact, where you can have an analysis of what the response should look like compared to what it actually is. What are the variables? Are they numbers, or are they text? Have you used Pact, or do you use something else?
Beth Marshall (12:46):
I first looked at contract testing when pairing with a complete stranger to solve a testing challenge in a kind of friendly hackathon, and contract testing was something that they recommended doing in Postman. I found it really straightforward to do in Postman. A problem that people have with contract testing is the enormous bulk of text that they get when they copy the JSON schema into the test section in Postman, and there are ways of baking that into other areas so that your test area is left nice and clean. You can store that at a folder level or collection level and do those tests there. Yeah, Postman’s what I’ve been using for my contract testing.
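As a rough illustration of what a contract check actually does, here is a toy, hand-rolled version in Python. Real suites would lean on Postman’s built-in schema validation, a library like jsonschema, or Pact rather than this sketch; the schema and response below are made up.

```python
import json

# A "contract": each field the consumer relies on, with its expected type.
# Real contracts (JSON Schema, Pact) also cover nesting, optionality, formats.
schema = {"id": int, "name": str, "active": bool}

# Canned stand-in for a response body from the provider.
response_body = json.loads('{"id": 7, "name": "smoothwall", "active": true}')

def matches_contract(body, contract):
    """Check every field in the contract is present with the expected type."""
    return all(
        key in body and isinstance(body[key], expected_type)
        for key, expected_type in contract.items()
    )

assert matches_contract(response_body, schema)
print("response honours the contract")
```

The appeal Beth describes is visible even here: one check covers the whole response shape, instead of an endless list of per-field test cases.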
Matthew Heusser (13:34):
I think that makes sense. Postman is a lot more dynamically oriented, a lot more “call the API, get the result back, look at the results, make sure the JSON matches my expectation of what I’m going to get,” and Pact is much more source code oriented, from what I’ve seen.
Michael Larsen (13:49):
I just realized I was a little selfish; I jumped the gun here because I had a very specific question on this. There’s a good chance that there are some people listening to this show for whom we might want to do a little bit of API 101, in the sense of, “What is an API? What are we actually doing with this? Why do we care?” What if we as testers have never set foot into even dealing with an API? Which, I realize, in this day and age might be rare, but there might be some people who have never had to test an API before. What are we really talking about here, and, to put it bluntly, how do we make those first little connections?
Beth Marshall (14:28):
I am definitely coming from a place of getting into and learning more about API testing in the not-too-distant past. So, how I would explain it: obviously API stands for Application Programming Interface, and it is how two applications speak to each other. It’s kind of peering behind the curtain of an application or a website, whatever it is, and that can be scary to people who are new to testing. An analogy that I like to think of: when you think of another person, say your mom or your partner, you think of their face first. We as humans tend to focus on what we can see. In that way, testers often are drawn to the UI side of things because it’s safer. When they hear “API”, that can be a bit intimidating, because they can’t necessarily see it. It’s not something that they can interact with very easily. But a great place to start is to just look at your dev tools. Just open a browser, go to those dev tools and check the Network tab. When you’re clicking through a website, you can see those interactions happening behind the scenes, and it helps you to make sense of that puzzle.
Andrew Knight (15:41):
An API is just a code-like response from an application. If you think about it in the abstract, the difference between an API and a UI is that a UI shows you something visually, and an API gives you something textually, or in binary. They’re one and the same in that they’re both interfaces. You interact with them in a certain way: you do an interaction, you get a response. As for why we have APIs versus purely UIs: first of all, they’re typically faster. Secondly, from a computer standpoint, they’re simpler in terms of the type of information that is returned. The reason why we use APIs is so that we can have fast data messages going back and forth.
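Andrew’s point, that an API interaction is just a request and a textual response with no pixels involved, can be made concrete with a tiny self-contained sketch: a throwaway local HTTP “API” served and called in the same script. The endpoint and payload are invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class TinyApi(BaseHTTPRequestHandler):
    """A minimal local "API": any GET returns a small JSON body."""
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), TinyApi)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "UI-free" interaction: one request out, one textual response back.
with urlopen(f"http://127.0.0.1:{server.server_port}/greet") as response:
    payload = json.loads(response.read())

server.shutdown()
print(payload)  # text and structure, not pixels
```

This is also roughly what the browser’s Network tab is showing you, as Beth suggests: the same request/response exchanges, just triggered by clicks instead of code.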
Matthew Heusser (16:27):
I wanted to ask anyone to comment on the state of API testing today. Does anybody have any broad general comments they’d like to make about the state of API testing?
Beth Marshall (16:39):
I can speak very broadly to a trend that started some time ago. I know it’s not brand new, but Kin Lane did a keynote about this at Postman Galaxy just a few weeks back. He was talking about this concept of “API First”. He gave a slide that showed the number of companies that consider themselves API First, and what that tends to mean is that the first thing that gets designed, considered, planned for, and written is the APIs, and everything else, all the code, kind of fits around that. It’s certainly something that I had heard of. Every big company that you can think of was on that slide, so I certainly think that is a big trend; API First seems to be where the market is. To add to what Andrew was saying, I think this allows for atomic testing to take place, to really focus your testing efforts on the smallest possible tests that you can have. APIs really help with that. What that allows you to do is use things like parallelization and CI to really get fast feedback on automated API testing.
Matthew Heusser (17:52):
Andrew Knight (19:07):
I think the pyramid can be a helpful visual, but every single test strategy, for whatever kind of product you have, is going to be inherently dependent upon the type of things that you’re testing. If you’re testing a system that has a rich web service base and then a front end on top of that, where a lot of the stuff, so to speak, is happening at the API level and the front end is thin, yeah, the pyramid is going to be a great model for that. I can talk about the company where I work, Precision Lender. The Precision Lender web application is a commercial banking pricing and profitability tool; it helps bankers price loans for businesses. The way that it’s architected, it’s an ASP.NET application. There are web services behind it, but honestly, a lot of those web services are simply chucking ridiculously high amounts of data into the front end, and the front end is very, very heavy. That front webpage has tons of logic and tons of views. For our case, it doesn’t really make sense to directly test a lot of those APIs behind the web application. If I were to say something like that without giving the context of what the web application is, someone who thinks the pyramid should be applied to every situation would say, “Oh, that’s horrible. All of your black box tests are web UI tests? You don’t really have any service level tests? Why would you do that? That’s so against the pyramid.” Well, when you look at what the application is, the burden of need for testing is actually in the front end, and having tested this application for coming up on three years, I can handily say we’ve almost never had a bug in that API. Almost all the bugs have been front-end bugs. Your mileage will vary with the pyramid, is what I’m trying to say.
Matthew Heusser (20:54):
Yeah, and it depends on what the API is doing. If all the API is doing is wrapping services that have existed for a very long time, and you’re using a very standard template that you have demonstrated to your satisfaction worked once, you might spend less effort on that middle layer. Also, it was a test *automation* pyramid, not a testing pyramid; I think people forget that a lot. You can do a whole bunch of stuff with the front end; you might not use a tool to drive it as part of CI. We didn’t have much API testing… well, no, actually we had a bunch of API testing, and it was mostly at the level of “the REST payload contains this, given this authentication.” One of the neat things about that environment, I’ll say, is we had really good setup. We had command line setup where you can say, create a user: this is the username, this is the password. And it was totally integrated with the test system. Then you could say, change the middle name, all with like two lines of code. And then you could say, get the profile, and you can say it should contain the new middle name, and we’d see it. I think that kind of scriptability is lacking a lot. Here’s what I see: I worked at a retail company not that long ago, and you would have to find a product that was in the test database. You could say, “Hey, look at this product,” and, “Oh, it needs to have so much time to get here, to be on the shelf. And if it’s a fresh product, then you need to order it here and get it here, and it needs to be sold by here or else it goes bad,” and you could run all those tests by hand against a product you actually found in the database with a SQL command. But taking that to the next level, where it ran as part of CI and found all of the information, that was a problem. Has anybody else run into a similar problem? And what did you do about it?
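The “two lines of code” scriptability Matthew praises, create a user, change the middle name, get the profile and check it, might be sketched like this. `UserApi` is a hypothetical in-memory stand-in for the kind of helper layer he describes, not a real client; a real one would wrap the same three HTTP calls.

```python
class UserApi:
    """Hypothetical thin helper over a user service, so tests read as
    setup, action, check. In-memory here; a real one would make HTTP calls."""
    def __init__(self):
        self._users = {}

    def create_user(self, username, password):
        self._users[username] = {"password": password, "middle_name": None}

    def set_middle_name(self, username, middle_name):
        self._users[username]["middle_name"] = middle_name

    def get_profile(self, username):
        return self._users[username]

api = UserApi()

# The test itself stays tiny, exactly as described on the show:
api.create_user("beth", "s3cret")
api.set_middle_name("beth", "Quality")
assert api.get_profile("beth")["middle_name"] == "Quality"
print("profile reflects the new middle name")
```

The value is in the helper layer: once setup is one call, the CI-friendly automated version of the “check it by hand with SQL” workflow becomes cheap to write.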
Ben Dowen (22:41):
A lot of the API testing that I do, I don’t have a front end, but I do have exactly the problem you describe, in that you put the data in, and if you didn’t have an isolated environment, all kinds of things could be updating it. The way that we are currently isolating our environment is by containerizing our tests. If we had a standard old-style shared environment, I probably couldn’t run a lot of the tests that I’m currently running. In order to get some sanity, to get our tests even into CI/CD, we’ve got to use Docker. We’ve got to get these containerized.
Michael Larsen (23:17):
So we’ve been doing a lot of talking here about the various options and ways that we can look at API testing: pluses, minuses, benefits… Just to kind of close it out for anybody who’s listening to this fresh and new: “Okay, cool. I want to be able to start with this. I want to get into playing around with API testing, but I’m not sure how to get started with it.” I mean, yes, we could say, hey, here’s SoapUI, here’s Postman. Those are the two tools I usually use, or if I want to be generic about it, I say go in and dig into curl and start piecing together commands that you can run. It’s very limited in that regard, or maybe it’s not limited and I’m just looking at it from too short a view. If I was going to bring up my team, or some new, fresh testers, and say, “I want to get you up and running and effective with doing API testing. I want to give you three tools to focus on and get to know so that you can be game,” what would those three things be?
Ben Dowen (24:17):
The first tool I would reach for is Insomnia, which is a REST API client. It does a lot of similar things to Postman, which is another great tool, but I find it really, really quick just to get going with your absolute basics. That’s the first thing I do in my workflow: I reach for Insomnia, get the endpoint for an API, start with a GET request, and go from there.
Beth Marshall (24:43):
The first thing I would recommend is not so much a tool as documentation. Swagger, I guess. I know one of the major challenges for a lot of folk when testing APIs is the quality of the documentation they receive. It’s sometimes very hard to second-guess exactly what an API shouldn’t do, as well as guess what it should do. The documentation really should be up to par. So I would say Swagger, or similar, as a tool to make sure that documentation is correct, as a starting point. You can start to test just by looking at the documentation; be prepared to challenge it, and start shifting your testing left, looking at things right from the start. Also Postman. Postman would be my absolute go-to tool. It’s very straightforward.
Michael Larsen (25:40):
Fantastic. Thank you for that. Andrew? What are your thoughts?
Andrew Knight (25:43):
To be honest, I wouldn’t recommend a tool, because if we’re talking about a team or individuals who don’t really know about APIs or API testing, sending them down the path of the tool could mislead them. Honestly, what I’d recommend is learning what it means to have an API and what it means to build good API tests first. So instead of a tool, I would recommend a resource, something like Test Automation University, where there are multiple courses on APIs and API testing to help people get started understanding what the thing is. And from there, depending on what direction they want to go with APIs, then I could recommend tools. If we’re talking about historically more manual testers, who aren’t going to be so much into programming, something like Postman is probably going to be very, very helpful for them. You could have tool wars, you know, Postman versus SoapUI versus other things; in my opinion, if the tool is good, I don’t really care too much about what is being used. Maybe there are some simple advantages or whatnot, but I don’t want to get into those turf wars. If you’re looking at more of a traditional programmer type, someone who’s going to have their hands on and coding test automation, then the question is not so much which tool, but which language, world, ecosystem are you entering? And from there it could go to, you know, if you’re doing Python, of course you’re doing requests, and in C# you’re doing something like RestSharp. I’ve been working on an open source project called Boa Constrictor, which is an implementation of the Screenplay Pattern, and we have WebDriver-based interactions as well as RestSharp-based interactions. So that can be a quick way to really help you automate your REST API testing. So yeah, start with learning about it. Get good resources; TAU is a great place to start. And from there, let your needs and design decisions determine what tools you should be pursuing.
Beth Marshall (27:25):
I just wanted to add to that at this stage, a bit of an exclusive for you. Angie Jones has been in contact with me. I am in the early stages of writing an API course for Test Automation University. I did check with her that I could speak to you about that, and her words were, if you tell them you’re doing it, you really have to do it, because…
Michael Larsen (27:56):
(Laughter) Oh, that sounds so much like Angie! I Love it. Exciting! Right On!
Beth Marshall (27:57):
I’m really thankful for some of these topics that have given me some cool chapters for the API course that I’m going to start to write. So thank you.
Michael Larsen (28:06):
I’m looking forward to seeing it.
Ben Dowen (28:07):
That’s super exciting, Beth. I was just going to absolutely second Test Automation University. I’m really glad that you mentioned that. The course on there, “Exploring Service APIs through Test Automation” by Amber Race, is absolutely fantastic, and I would definitely recommend it as a good starting point, even for people who have used a REST API tool before, and even for people who know a bit about test automation. The course is fantastic. Of course, when it’s out, I’ll be recommending Beth’s course; super excited about that as well.
Michael Larsen (28:38):
I will not argue there! For those folks listening, Matt had to unfortunately run away, so I’m going to close this out here. So just final thoughts from everybody and what are you working on and where can people learn more about you?
Beth Marshall (28:51):
So I am currently, obviously, working on starting this Test Automation University course, and I am all ears for suggestions for what you would like to learn, what challenges you’d really like to deep-dive into a little bit more. So please feel free to get in touch with me. My Twitter handle is @Beth_AskHer and my blog is BeTheTester.
Michael Larsen (29:18):
Ben Dowen (29:20):
Yep. You can find me on Twitter at @FullSnackTester, and at FullSnackTester dot com. I am currently working, as I have been for a little while now, on a project called “Tester of the Day”, at TesterOfTheDay.com, if you want to go take a look. Every day we’re celebrating the test automation community; we’ve now awarded this some 160-plus times. If you get a moment, go onto the site and nominate a tester for their contribution to the testing community.
Michael Larsen (29:51):
That is a great initiative. I will be happy to do that. Andrew, how about you?
Andrew Knight (29:57):
So I’m busier than ever, it seems, these days. At work, I am building a team of elite software engineers in test to support our company. It’s been exciting, because I joined the company three years ago as the first software engineer in test, and now we’re a team of four, plus we’re getting a manager soon. So I’m enjoying raising them up, mentoring them, and turning them into beastly awesome engineers. I’m also working on an open source project called Boa Constrictor, which is the .NET implementation of the Screenplay Pattern. Right now it supports WebDriver as well as REST API interactions, and I believe it is a much, much better alternative than using something like raw calls or the Page Object Model. You can find me on Twitter @AutomationPanda, you can read my blog at AutomationPanda dot com, and hopefully sometime next year I will be publishing a book entitled “The Way to Test Software”.
Michael Larsen (30:46):
Exciting. I look forward to seeing that. Definitely. All right, well, thank you very much, everybody. I appreciate your time, your talents, and your energies in helping us make The Testing Show for this go-around. And for everybody who is listening, we are very grateful for your time and for being part of our show today. We look forward to catching you on another episode of The Testing Show very soon. Take care, everybody.
Beth Marshall (31:11):
Ben Dowen (31:13):
Andrew Knight (31:13):