The Testing Show: Meaningful Performance

February 25 / Transcript

Today, as serverless architecture, cloud-based implementations, and more distributed services come into play, the way we think about performance and performance testing is changing. Nevertheless, performance is still critical and can be a make-or-break element in a site or application's success or demise.

To talk about the ways performance impacts testers, as well as the broader implications of the changing performance space, Scott Barber and Caleb Billingsley join Matt Heusser and Michael Larsen to chat about how to have these conversations with the people who make the larger decisions.

Also, the conversation covers ways to give those who are not as familiar with the performance space a point of reference to get in and start tuning up their performance game.

 


 

 


Transcript:

MICHAEL LARSEN:    Hello and welcome to the Testing Show, Episode… 62.

[Begin Intro Music]

This show is sponsored by QualiTest. QualiTest Software Testing and Business Assurance solutions offer an alternative by leveraging deep technology, business, and industry-specific understanding to deliver solutions that align with clients' business contexts. Clients also comment that QualiTest's solutions have increased their trust in the software they release. Please visit QualiTestGroup.com to test beyond the obvious!

[End Intro]

MICHAEL LARSEN: Hello, and welcome to The Testing Show.  It is November.  We are winding down this year.  Excited to be with you all.  First and foremost we would like to welcome our guests.  We’d like to welcome Scott Barber?

SCOTT BARBER: Hey!  How are you doing today?

MICHAEL LARSEN: Excellent!  We’d also like to welcome Caleb Billingsley?

CALEB BILLINGSLEY: Hello.

MICHAEL LARSEN: And just to make things interesting and unique, all the way from Boston, from the SmartBear Connect Conference, let’s welcome our MC, Mr. Matt Heusser.

MATTHEW HEUSSER: [Laughter].  Hi, Michael, good to be here.  I think I found the quietest corner at the conference location, so I think we’re going to be okay.

MICHAEL LARSEN:  Awesome.  All right.  Let’s go ahead and let’s get the show on the road.

MATTHEW HEUSSER: So, this month’s episode is going to be on Performance Testing, and we have both Scott Barber, who, if you’ve ever done a Google search on “performance testing,” you hopefully know of, and Caleb Billingsley, the performance testing practice lead for QualiTest.  So, welcome to the show, Scott?

SCOTT BARBER: Hi, Matt.

MATTHEW HEUSSER: So, tell our audience a little bit about what you’ve been up to lately.

SCOTT BARBER: Well, as some people may be aware, I kind of dropped off the public conference and teaching circuit, and I went back into the trenches with a couple of roles.  Currently, I’m working for Ferguson, which is, for those who aren’t familiar with it, think Lowe’s and The Home Depot for people who like $20,000.00 refrigerators.  I’m working in the IT shop there as a performance architect for their services environment.  So, over the last several years I have focused on the performance aspects of web services in Agile and continuous integration/continuous delivery-type environments.

MATTHEW HEUSSER: That’s probably the best two-minute introduction we could give, Scott.  I’ve been following and working with Scott for a real long time.  When it comes to a broad diversity of experience with performance testing, you were one of the cofounders of WOPR, the Workshop on Performance and Reliability?

SCOTT BARBER: Right.  It’s been a very interesting and dynamic last 20 years in this space.  Yeah.  Trying to keep up with all of it.  There’s been a lot going on.

MATTHEW HEUSSER: Let’s get to know Caleb a little bit.  Caleb is the performance test lead for QualiTest.  Tell me a little bit more about that, Caleb?

CALEB BILLINGSLEY: So, I started out 20 years ago in performance testing.  I have been in the trenches myself a number of years, but recently it’s really been a leadership role, more on managing the business side: helping with business continuity and making sure that companies’ investments in digital modernization scale and perform at the levels they need.  The focus has really been working with our technical architects, people like Scott, who are actually delivering the solutions for clients to ensure that these new systems are going to scale when they’re deployed to replace (a lot of times) legacy systems.  Sometimes it’s Cloud migration projects.  Sometimes it’s just in-house rewrites.  So, it’s a nice variety of projects out there that business leaders are being asked to embark upon.

MATTHEW HEUSSER: It’s fascinating how you added that little bit of subtlety there, in that we are replacing systems, transitioning systems, reconfiguring systems today, as opposed to creating our first website.  We’re not doing that anymore.  That adds a little bit of complexity to the picture, and it also makes things easier, because we can predict load based on historical data.  Let’s talk about starting a performance testing program from the management perspective.  Have you got a couple of common errors people make, common mistakes you’ve seen, and lessons learned, so people can avoid them?

CALEB BILLINGSLEY: I think, from a management standpoint, starting a practice, definitely find good advice: somebody with experience and a background, whether that’s a company like QualiTest or somebody with the experience of a Scott.  You want to look at your situation, and every situation can be a little bit unique.  The second thing is, look at it early.  A lot of times, performance is looked at toward the tail end of projects, and by then it can be very difficult, if not impossible, to sort out data and environment.  Those are probably the two pieces that people underestimate or don’t think enough about in the early planning stages.  It’s, “What environment am I going to use to get a meaningful test?”  That has gotten a little bit easier with Cloud-based deployments, in that you can oftentimes spin up a Cloud environment of similar size, if not identical, but there are still tricks there, in that a lot of times we have integrations into other companies’ systems, maybe tax systems, maybe shipping systems.  Those may not allow full-scale loading of those environments.  So, you need to have a plan there.  Then, the data: understanding how you’re going to generate enough meaningful data while also handling PII and protecting your data.  If you do these things right, they’re tremendous assets.  But, if you don’t do them correctly, then you’re going to have a test that may not give you an accurate answer.
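
[Editor's note: As a concrete illustration of the test-data point above, here is a minimal sketch of generating synthetic, PII-free load-test data with the Python Faker library. The field names, record count, and output file are hypothetical assumptions; any generator that keeps real customer PII out of the test environment serves the same purpose.]

    # Sketch: generate synthetic, PII-free test data for a load test.
    # Assumes `pip install faker`; field names and volume are hypothetical.
    import csv
    from faker import Faker

    fake = Faker()
    Faker.seed(42)  # seeded, so every test run gets the same data set

    with open("test_customers.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "email", "street", "city", "zip"])
        for _ in range(10_000):  # enough variety to avoid cache-friendly repeats
            writer.writerow([
                fake.name(),            # synthetic values, not real customers
                fake.email(),
                fake.street_address(),
                fake.city(),
                fake.postcode(),
            ])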

MATTHEW HEUSSER: Yeah.  I think those are big things to consider upfront.  While we’ve got you on the line, Scott, with your perspective, I’m curious, what do you think is new and exciting in the world of software testing?

SCOTT BARBER: Well, I’ll tell you.  There are almost two faces to that question, Matt.  There’s the one face of that question that says, “There’s nothing new under the sun.  Testing is testing.  Everything else is context.”  To a degree, I fall back on that a lot, because what’s changing is technologies and organizational practices.  For instance, one of the things I’m seeing a lot of today is folks shifting from groups of testers testing end-user functionality to testers getting in and testing what they used to think of as the developer’s job: testing APIs and services and getting embedded further into the development lifecycle.  “Does that really change the thought process of testing?”  To me, I say, “No,” because what we’re trying to accomplish is the same.  The way that you think about “How do I exploit potential problems?” is the same.  What’s different is, “Maybe I’m not doing it for a user interface anymore, or maybe I’m doing it side-by-side with the developer on their local machine now instead of off in another room after the developer has moved on to another piece of functionality.”  I think that the big changes we’re seeing are less about the act of testing software and more about the context, the environments, and the technologies: re-teaching ourselves where the right focus areas are, to find all those “gotchas” as we’re going through and developing and deploying applications.

MATTHEW HEUSSER: Speaking of “gotchas,” do you have a favorite tip or trick either to monitor load or to generate load?  If you’re coming into an organization and you don’t know much about it, you just know it’s going to be a web app that runs over HTTP, what’s in your tool belt right now?

SCOTT BARBER: It’s very interesting, because there was a time when the starting point was begging for log files.  Today, I find that log files come a whole lot later.  What’s funny is, even as an employee now, I’m in an organization that has hundreds of web services, which to someone who works in services might not sound like a ton.  To somebody who has never worked with it, it’s a phenomenal number.  But, it takes a long time to find and to learn all of these.  The one thing that all these Cloud and services technologies kind of come built in with is some monitoring.  Now, it’s not the detailed monitoring you might want, but there’s some there.  Everything has got an admin panel.  So, I find that you can get a long way if you can just figure out where these panels are.  If you’re in the Cloud, AWS and Azure have panels.  If you’re doing it all in-house; I happen to currently be working with Red Hat Fuse and AMQ, and they’ve all got admin panels with all kinds of information in them.  Again, it’s not going to be the nice, clean, fantastic, traceable, AppDynamics-type monitoring solution that you might want, but hey, it’s a great place to get started.  It’s already there somewhere.  We’re talking about jumping in, and that’s the key: figure out what exists, whether that’s some kind of in-house solution, whether it’s by talking to the folks in your network/operations group, whether it’s finding these admin panels, whatever it is.  Step one is to get access to the information that exists, because if you’re walking in cold, asking somebody to implement something is not going to get you very far very fast.  Right?  You need some information before you ask people for action.

MATTHEW HEUSSER: So, if I hear what you’re saying, you might be surprised how much monitoring is right there from the native tools for whatever you’re using, and you can go grab those instead of going to an ops guy and giving him a list of what you need to have monitored, which might or might not go so well.

SCOTT BARBER: That’s exactly right.  The key is, even though this information exists, it’s probably in multiple places, and people probably are not yet putting those pieces together for the purpose of looking at load and performance.  So, there’s a lot of information there that’s just not being parsed together in a way that tells the load and performance story yet.
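
[Editor's note: To make that concrete, here is a minimal sketch of pulling one of those already-existing metrics programmatically, using AWS CloudWatch through the boto3 library. The instance ID and metric are hypothetical placeholders; the same idea applies to whichever built-in admin panel or monitoring API your stack exposes.]

    # Sketch: read monitoring data that already exists (AWS CloudWatch here).
    # Assumes `pip install boto3` and configured AWS credentials.
    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Hypothetical instance ID; pull the last hour of CPU utilization.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,                 # one data point per five minutes
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1), "%")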

MATTHEW HEUSSER: Speaking of the “load and performance story,” when we were kicking around this podcast, we came up with a bunch of questions that are sort of common questions, and they were often uncomfortable questions.  They were often unfair questions.  They were questions that put you on the spot in your role in quality in the organization.  There often isn’t enough time to give the right answer, the subtle answer, the nuanced answer, the “it depends” answer.  So, what we like to do is ask those questions of our experts and have them give the sound-bite answer: just good enough to save face, buy you some time, and create an environment where you can get the data you need and give the detailed, nuanced answer later.  So, I’m going to throw some pretty hard questions like that at you.  I’d like to see what you come up with.  Is that fair?

SCOTT BARBER: Looking forward to the challenge.

MATTHEW HEUSSER: Great.  Question number one.  I’ll throw it at Scott then.  Isn’t performance testing irrelevant in the age of Cloud computing?  I mean, you just spin up a new server anytime you get load?

SCOTT BARBER: If all you’re worried about is load and you’ve got (say) a file server, hey, spin up another one.  I’m in.  Here’s the challenge though:  While that may be true, what folks tend to overlook when they make that assumption is the fact that systems talk to systems and that processing occurs behind all this file serving and webpage serving.  Somewhere your data gets consolidated, and maybe you can spin up another database server or get a bigger one or whatever.  But, sooner or later, if you’ve got multiple instances of that data that needs to coordinate across multiple servers or you’re spreading it out dynamically in another way, sooner or later, you’re going to end up spending more “time” simply keeping your data in sync than you are spending serving that data.  Now, that’s just one example.  The point is that, while it’s true, static load, more servers.  Go for it.  Once you get beyond static, you’ve got communication across multiple machines, environments, Clouds, you need to keep an eye on that, because sooner or later that communication is going to bite ya.

MATTHEW HEUSSER: Thanks, Scott.  Caleb, do you have a different answer?

CALEB BILLINGSLEY: Yeah, I do.  My answer comes from a little different angle that I see often, which is that a lot of companies are modernizing their current systems.  Typically, they pick up a technology partner or a hub technology that they’re going to modernize around, and that could be a third-party vendor.  It could be a Cloud technology.  But, at the end of the day, that technology has to plug into their ecosystem, and to do that, they’re going to have (as Scott was talking about) these thousands of APIs to integrate with.  When you put load against that system, it may scale fine, just like the vendor claimed when they sold you that system, but it’s all those integrations that you start to see failing to perform at the level you need.  The analogy I would draw here is, the Cloud may allow you to add lanes, like on an interstate, but at some point you’ve got to get off the interstate, and you have to have traffic flowing at those exits and junction points as well.  So, adding servers in the Cloud will oftentimes give you more lanes on that interstate, but it doesn’t give you a higher-performing intersection or exit ramp.  We’ve seen projects, unfortunately, $10 million, $20 million, hundreds-of-millions-of-dollars projects, that either got cancelled or seriously delayed because the core technology they were modernizing around was fine and was living up to the claims of the vendor, but the integration points, whether to another third-party vendor through a web service API, or to an in-house system, or a link into an in-house mainframe or other component (much like what Scott is talking about with all this distributed data), did not perform at the level they needed.  They couldn’t even replace their legacy system, because the performance was worse than the legacy system.  So, they bought this modernization project based on (in a lot of cases) saving money or saving time, and they couldn’t find that value toward the end of the project and had to delay or re-architect the entire project due to performance problems.  They were lulled into complacency that performance wasn’t going to be a problem, because of the Cloud and the idea that you could just spin up some more servers.

MATTHEW HEUSSER: Okay.  Next question.  You’re in a room.  You’re starting a performance test program, and the CSO (in an offhand way) says, “You know, Target got hit with a DDoS attack last week (a distributed denial-of-service), we’ve got to be prepared for it.  Are you prepared for a DDoS attack?”  He just looks at you, how do you respond?

CALEB BILLINGSLEY: Well, hopefully we have a plan in place to simulate that.  If not, it’s worth talking to the CSO about their approach and then about how the quality organization can work with them.  A lot of times, it’s a tandem team when you talk about something like a DDoS attack; it is often considered part security, part performance.  A lot of times, you do need security software to detect those types of attacks and shut them down before they reach the application.  Of course, you’re not trying to scale to handle a DDoS so much as you are trying to protect the application from the attack.

MATTHEW HEUSSER: Great.  Scott, do you have anything to add?

SCOTT BARBER: I’ll only add this.  A long time ago, I remember when I was trying to simply set up and configure a load generation environment, just to make sure that I had all my agents working and generating load, I whipped up some quick little scripts to just call the homepages of some very common websites at the time.  Things like Google and ESPN.  High-volume stuff.  I figured it was no big deal and just started cranking out a ton of load, just to see what my load generation environment could handle.  Turns out, very shortly thereafter, there were lots of complaints in my entire office building.  Folks couldn’t access their favorite websites.  Luckily, it all just kind of came back in a day or two and the police didn’t show up, but what had happened was that I had inadvertently generated denial-of-service attacks against those sites.  So, the one thing that’s on our side is that generating the traffic to simulate a denial-of-service attack is really pretty easy, which is kind of scary from the perspective of the folks trying to protect against them.
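
[Editor's note: Scott's cautionary tale holds up because generating load really is that easy. Below is a minimal sketch of a naive load generator; the URL, request count, and thread count are hypothetical. Point something like this only at systems you own and have permission to test, for exactly the reason Scott describes.]

    # Sketch: a naive load generator. A few lines like this, aimed at a site
    # you don't own, is effectively a denial-of-service attack.
    # Assumes `pip install requests`; target and volumes are hypothetical.
    from concurrent.futures import ThreadPoolExecutor
    import requests

    TARGET = "https://test-env.example.com/"   # your OWN test environment
    REQUESTS_TOTAL = 1_000

    def hit(_):
        try:
            return requests.get(TARGET, timeout=10).status_code
        except requests.RequestException:
            return "error"

    with ThreadPoolExecutor(max_workers=50) as pool:   # 50 concurrent "users"
        codes = list(pool.map(hit, range(REQUESTS_TOTAL)))

    print({code: codes.count(code) for code in set(codes)})   # crude summary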

MATTHEW HEUSSER: Yeah.  Of course, it depends on your organization.  As Caleb points out, “Have you talked to the chief security officer?”  Either there will be software to detect when someone is trying to flood us with traffic and shut down our website, or there isn’t.  If there isn’t, then we say, “That’s a conversation we need to bring security in on.”  If there is, we can say, “We simulated enough load to trip the circuit breaker, and in test, the software shut itself down.”  In fact, I have had that happen too.  We were testing a pivotal product for the Software Testing World Cup, and it had an open-source web server layer that had a bit flipped on for “protect from DDoS,” and it shut itself down in response to the users from all over the world who were testing it for the Software Testing World Cup.  So, I’ve tripped one of those things with human testing, so how about that?

SCOTT BARBER: That’s awesome.

MATTHEW HEUSSER: Yeah.

SCOTT BARBER: Or scary.  Depending on how you want to look at it.

MATTHEW HEUSSER: A little bit of both.

CALEB BILLINGSLEY: That’s actually great planning.  Not only can we trip DDoS protection, but a lot of times, when we’re doing a good load test, we need to make sure we don’t trip it falsely.  That can mean distributing load across nine or ten cities, geographical IP distribution, and I’m sure Scott has run into that a lot in the field.  Right?  If there’s a planning process, you can be intentionally helping the CSO validate whether the software they’ve purchased is going to protect against a DDoS, or the inverse of that, which is that we need to consider it in our planning.  We could have a test with 20 very well paid people on a bridge call whose time is wasted because we can’t run the test at the volume we’re attempting, because we haven’t planned for how to get past the DDoS protection software that may be in place and might stop the load test from running.

MATTHEW HEUSSER: So, let’s try something a little bit different.  How about micro-services?  How does the change to micro-services impact performance testing, Scott?

SCOTT BARBER: That’s pretty much the life I’m living right now.  To be fair, the group I’m working with right now hasn’t quite fully embraced “micro-services.”  They call them “mini-services,” because they’re just not quite micro yet.  But, the concept here is fantastic, and it’s a wonderful opportunity.  Because, think about it this way: for every small service, a developer basically follows a pattern and puts a little bit of code inside a thing that we used to call a “function” or a “procedure.”  It’s kind of encapsulated.  It’s just got inputs and outputs and a little bit of code.  What we’re doing is building straight into the build-and-deploy process a mechanism where each one of these services gets (I call it) a performance screening at the local level, at dev/test, at performance, and then it moves on to prod.  These performance tests are very quick and simple to create.  When I say “quick and simple,” I mean it literally takes minutes for either the developer or an engineer to generate these scripts, run some load for 5-10 minutes, and have the ability to say, “Yes.  My piece of code is performing as expected.”  That’s fantastic.  We’ve shifted that whole part of the performance evaluation, performance testing, way left.  Literally, it can be running along with unit tests.  You can plug it into your continuous integration, build, and deploy.  But, the challenge here is this: while all that is fantastic, and by the time you move through environments and get to prod, you’re able to say, “The service is performing well,” it’s very important to remember that doesn’t necessarily mean that the overall performance, or the end-user perspective of performance, is still good.  Because, if you’re placing an order on a website, for example, the move to micro-services means that when you click that order button, you’re not calling “a service” that processes that order.  You’re calling a service that calls a service that throws some stuff in a queue that writes some stuff to the database.  It calls another service that calls another service that calls another service.  By the time you string those things together, you can still end up with some pretty serious performance impacts that don’t identify themselves when you’re testing the micro-services in isolation.  So, it turns the whole piece into kind of a two-part process, and that might sound complicated.  But, to be clear, anyone who has done this will remember this pain.  Testing the services independently on their way through to production means that when we do these larger, more traditional, end-user-facing load and performance tests, we have already been able to eliminate a whole bunch of potential problems.  So, when we do see something, we have a much clearer indication of what to look for, and it makes the whole isolation, identification, and troubleshooting scenario move a whole lot faster.
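
[Editor's note: Here is a minimal sketch of the kind of minutes-to-write "performance screen" Scott describes for a single service, runnable alongside unit tests in a CI pipeline. The endpoint, sample count, and p95 budget are hypothetical assumptions; the point is the shape of the check, not the numbers.]

    # Sketch: a quick performance screen for one small service.
    # Fails the CI step if p95 latency of a simple call exceeds its budget.
    # Assumes `pip install requests`; endpoint and budget are hypothetical.
    import statistics
    import sys
    import time
    import requests

    ENDPOINT = "https://dev.example.com/api/orders/health"
    SAMPLES = 200
    P95_BUDGET_MS = 250.0

    timings_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        timings_ms.append((time.perf_counter() - start) * 1000)

    p95 = statistics.quantiles(timings_ms, n=100)[94]   # 95th percentile
    print(f"p95 = {p95:.1f} ms (budget {P95_BUDGET_MS} ms)")
    sys.exit(0 if p95 <= P95_BUDGET_MS else 1)   # nonzero fails the build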

MATTHEW HEUSSER: I can see that.  Thanks, Scott.  Caleb, from the business side, do you think (it’s possible the answer is “not really, testing is always testing”) micro-services have changed the game when it comes to planning, organizing, and supporting performance testing for management?

CALEB BILLINGSLEY: Well, I think it brings a new, interesting angle that would benefit businesses on cost optimization, and that’s part of the micro-services promise; I don’t know that I’ve seen it fully realized in any client.  I’ve seen some good case studies, but it’s the promise of being able to right-size the hardware perfectly to the compute load required by each micro-service, so we can get the best price for the highest performance for the business.  So, definitely plan for that in the performance cycle.  When Scott has architected and his team has built out the performance testing, make sure there are adequate cycles to do some different what-if scenarios in the Cloud, so that you can map each micro-service and, as Scott was saying, tie them together to make sure, (A), they work and perform at an acceptable level, and then, (B), optimize the footprint of the underlying virtual machines that are running in the Cloud to get the best cost value.  It’s a really unique opportunity, but if you don’t plan for it, you’re not going to have time to make sure it’s going to work well before you put it into production.

MATTHEW HEUSSER: Okay.  Great.  Final question, which kind of bookends with the other one.  Again, you’re in the meeting, and an executive says, “We’re going to DevOps, so we’re just going to monitor production.  If there’s a performance problem, we’ll just roll back to the previous stable version.  I don’t think we need performance testing if it’s working in production now.  We’re just going to do continuous deployment.  With config flags, we can change anything.”  How do you respond, Scott?

SCOTT BARBER: So, the first thing I’ve got to say is: when that works, it’s a beautiful thing, and I’ve seen it work.  However, that is not a small step.  Let me say that again, “That is not a small step.”  What we’re talking about is trusting that, every step along the way, the right mitigation and monitoring or observation tools and techniques are in place to ensure that, as you’re doing your continuous deploys and you’re monitoring in prod, you’re not introducing catastrophic problems.  So, things like I mentioned with micro-services, those fully integrated performance tests, or screens, the build validation tests even, those will keep a lot of your problems from showing up for the first time in prod.  It will keep you from having to roll back every other day.  The other thing is, to really make that work, what you really need to have in place is kind of a staged rollout in prod.  You don’t want to go big-bang on things to millions of customers.  You need to roll out to maybe one cluster, or a group of clients, or whatever, so that you can see what each of these changes does, because the thing that folks tend to overlook is that (I don’t know) 8 times out of 10 it is not the individual change that causes a major problem.  What it is, is how this change interacts with that change, and interacts with that old legacy system, in the bizarre and unpredictable world of production.  So, if you’re not paying attention, things will cascade on you very, very quickly.  If you don’t have the mechanisms in place to catch all the “oops,” you’re going to find yourself in a world of hurt.  This is not to say that I’m against DevOps and continuous deploys.  I’m a huge fan.  What makes me nervous is the folks who haven’t thought all the way through it and don’t implement the rest of the process: making sure they are monitoring basically everything along the way and building in proactive (call them) alert systems.  That’s the performance testing that I feel is most important in that environment.  It’s not the, “Is it okay to go to prod?”  It’s the performance testing that allows you to say, “When this goes bad…”  Someday the database will get overloaded.  When that happens, what’s going to be the leading indicator, so that we can set up that monitor to tell us, not when it’s already gone bad, but when that leading indicator starts to pick up, so that we can take action before it falls over?  That’s the key.
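
[Editor's note: A minimal sketch of the "leading indicator" idea Scott closes on: instead of alarming when a metric is already bad, watch its trend and alert while there is still time to act. The metric, window, and slope threshold are hypothetical; the samples would come from whatever monitoring source actually exists.]

    # Sketch: alert on a leading indicator's trend, not its current value.
    # The indicator here is hypothetical: database connection-pool usage,
    # sampled once a minute by some existing monitor.
    from collections import deque

    WINDOW = 15          # minutes of history to trend over
    SLOPE_ALERT = 1.5    # alert if usage climbs faster than 1.5 points/min

    def slope(samples):
        """Least-squares slope of evenly spaced samples (units per minute)."""
        n = len(samples)
        mean_x, mean_y = (n - 1) / 2, sum(samples) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
        den = sum((x - mean_x) ** 2 for x in range(n))
        return num / den if den else 0.0

    history = deque(maxlen=WINDOW)

    def on_sample(pool_usage_percent):
        history.append(pool_usage_percent)
        if len(history) == WINDOW and slope(history) > SLOPE_ALERT:
            # Fire while there is still time to act, before it falls over.
            print(f"LEADING INDICATOR: pool usage rising {slope(history):.2f}/min")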

MATTHEW HEUSSER: Caleb, do you have any thoughts to add?

CALEB BILLINGSLEY: I think just (and Scott hit on this a little bit) the thought that I can just click a button and roll back to the previous version assumes that the code change caused the performance problem, and that may not be a valid assumption.  It could be that we’re growing our business, which I think everybody on this podcast is probably interested in doing.  We’ve got more volume and more traffic.  We’ve captured more market share, and we’ve passed the scalability point of our architecture.  If we weren’t testing, if we weren’t doing the things Scott was talking about, we don’t know that.  So, the idea that I can just click a button and roll back to a “safe state” may not be true if the problem is that we just have more business, and our site is experiencing heavier load, and our architecture is now hitting a ceiling that’s there and maybe was put in 20 code deploys ago, not 1 code deploy ago.  So, it’s a great concept; A/B testing and feature toggles work very well for some businesses.  I know Facebook has talked a lot about how they’ve done that in their business, and other businesses have done it too, and that’s a great strategy for recovering quickly in an Agile, quick, DevOps deploy strategy.  But, it still doesn’t do everything you need.  You need to have that performance testing practice to know where the headroom is in your architecture, where you’re going to hit a wall and not be able to scale or perform at the level your client base expects.

MATTHEW HEUSSER: That’s all the big questions I have.  Before we get going, Michael, do you want to throw a question out there?

MICHAEL LARSEN: So, I guess some of the things that I would throw out:  I know that this has been targeted primarily towards the execs and the big “gotcha” questions.  I’d like to tailor this back also to those of us who are actively working in the trenches.  With the way that things have changed, what are some of your favorite ways of looking at tooling up and tuning performance?  What are your go-tos?  What are some of the things that you would recommend somebody who wants to make sure that they’re getting their performance mojo on?

SCOTT BARBER: So, I’ll take a stab at that, because I have a pet thing that I have tried to do for many, many years, and it’s the simplest thing ever, in a way.  It’s all about getting everybody involved with building and delivering the software to just bake into their brain the thought about performance.  Not just, “Does it work?”  But, “Does it perform?”  At every level, whether you’re thinking “requirements,” whether you’re thinking “testing,” whether you’re thinking “development,” if you spend just 1 percent of your time, 15 seconds a day, to ask, “How does performance relate to what I did today?”  Then, based on your answer, maybe take some action.  All of a sudden, what you’ve done is bake in the monitoring, the observation, the tooling involved, and you find crazy things.  Like, with one group that I worked with a couple of years ago, every time there was a deploy of a piece of code, one of the acceptance criteria was looking at the performance trend graph for that code, previous version versus current version, in the various environments it was deployed to along the way.  It just became automatic.  So, the key there is, that’s not a big end-to-end performance test.  What it is, is every individual doing a little bit, and every other individual expecting to see a little bit, about performance every day.  That mentality will lead to all of the big stuff that we’ve been trying to protect against, all the big risks, for years, and it will keep us from having to react to them when it’s too late.  To me, that is the key.  It’s a little bit every day from everybody.
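
[Editor's note: A minimal sketch of the per-deploy acceptance check Scott mentions: compare the current build's transaction timings against the previous version's and flag regressions. The JSON file format and the 10 percent tolerance are hypothetical assumptions.]

    # Sketch: compare this deploy's timings with the previous version's,
    # as a per-deploy acceptance criterion. Assumes each build writes a
    # JSON file of {"transaction_name": mean_ms}; the format is hypothetical.
    import json
    import sys

    TOLERANCE = 1.10   # flag anything more than 10% slower than before

    with open("timings_previous.json") as f:
        previous = json.load(f)
    with open("timings_current.json") as f:
        current = json.load(f)

    regressions = {
        name: (previous[name], ms)
        for name, ms in current.items()
        if name in previous and ms > previous[name] * TOLERANCE
    }

    for name, (old_ms, new_ms) in regressions.items():
        print(f"REGRESSION: {name}: {old_ms:.0f} ms -> {new_ms:.0f} ms")

    sys.exit(1 if regressions else 0)   # a regression fails the deploy gate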

CALEB BILLINGSLEY: Yeah.  I think just a little different take on your question.  I think for the practitioner, the performance engineer, today, what I see really changing in this new landscape of Cloud distributed systems is probably three things:

  1. End-point or service virtualization is mandatory, so that should be in your practice.
  2. Cloud is here to stay, so getting a Cloud certification, whether it’s Amazon, Azure, or your other favorite Cloud provider, is really (I think) essential for the modern performance engineer.  You don’t have to be an expert in every system (perhaps), but understanding all the offerings of, say, an AWS, and how you would performance test at least the big ones, like S3, is important.  Different providers have their different acronyms, so I won’t go into that.  But, really, getting a certification there would be a great asset as you help your organization plan for next-generation performance.
  3. Being comfortable and being trained on some of the modern (what I call) APM systems and monitoring techniques.  Scott hit on a lot of this earlier in the podcast, talking about how every tool has a dashboard.  Get to know those.  There’s built-in monitoring in the Cloud systems.  There are also commercial packages that are very modern and very nice, such as AppDynamics or Dynatrace.  You know, really, what I find is that the day and age of somebody just throwing a bunch of users against a site until it crashes, dusting their hands off, and saying, “Well, it doesn’t scale.  I’m done,” is over.  That’s not working anymore.  I mean, really, Scott and our teams are being asked to help solve these problems, plan for the future, and architect for high-performance systems.

So, you know, kind of adding those three things to your tool bag, I think is essential for the next generation (if you will) of performance engineers that help companies to scale and to meet their business demands.

MATTHEW HEUSSER: Thanks, Caleb.  That’s a great insight, and I think that’s probably a good way to end the show.  I would just ask:  If people wanted to know what you’re up to lately or maybe a resource they could go to, to learn more, Caleb, what do you think that’d be?

CALEB BILLINGSLEY: I think there are so many resources that you can easily find on the web around performance.  There are some great low-cost educational sites that are good to go to.  Get a trusted partner.  Get somebody like Scott, or a company like QualiTest, to come in and talk to you about your goals, and really make sure you’re right-sizing and doing the right level of spend to assure your business, because, at the end of the day, performance is business assurance.  It’s making sure your business can deliver the value to your customers so that you can capture the revenue you need to capture.

MATTHEW HEUSSER: Okay.  Scott?

SCOTT BARBER: So, really, there are two places, aside from all the educational material, that I would go:

One is exactly what he just said: understanding the ecosystem, the architecture.  You can’t truly understand the performance of a system unless you truly understand the architecture.  You need to be able to think about a request and map in your head all the different places it goes and the servers it hits.  You know, it used to be pretty easy, because you’d think about three boxes sitting in your datacenter in the basement, but it’s not like that anymore.  So, you really do need to take a step back and make sure you understand, “What is actually going on behind the scenes?”  Not necessarily at a code level, but understand that the request comes in, it goes through the firewall, it goes to an API gateway that distributes to a services environment, and something is going to a queue and something is going to the database.  Right?  Being able to trace that out in your head is really key.  That kind of leans into my next place to go, which is to just sit and spend some time with your devs and your architects and your designers and really understand what they’re doing and how it impacts things.  Whether it’s a micro-service, whether it’s a database index, whatever it is, get that insight from the bottom up.  When you start to build that picture, that’s where you can really tune what you’re looking for: what’s going to really add value when you go out and start looking for the things that have been written, or which training classes to take, or where the gaps are to really fill in and add value to your organization.

MATTHEW HEUSSER: Well, thanks, Scott.  I think you hit the nail on the head.  With that, I’m going to call it a show.  You’ve been generous with your time.  I think we’ve made some good progress here.  Thanks, everybody.  Let’s talk again soon.

[Begin Outro]

 

MICHAEL LARSEN:  That concludes this episode of The Testing Show. We also want to encourage you, our listeners,  to give us a rating and a review on Apple Podcasts. Those ratings and reviews help raise the visibility of the show and let more people find us.

Also, we want to invite you to come join us on The Testing Show Slack Channel as a way to communicate about the show, talk to us about what you like and what you’d like to hear, and also to help us shape future shows. Please email us at TheTestingShow(at)QualitestGroup(dot)com and we will send you an invite to join the group.

The Testing Show is produced and edited by Michael Larsen, Moderated by Matt Heusser, with frequent contributions from our many featured guests who bring the topics and expertise to make the show happen.

Additionally, if you have questions you’d like to see addressed on The Testing Show, or if you would like to BE a guest on the podcast, please email us at TheTestingShow(at)qualitestgroup(dot)com.

Thanks for listening and we will see you again in December 2018.

[End Outro]

[END OF TRANSCRIPT]
