DevOps is a common term, yet one that is hard to pin down or define. Additionally, for many, the idea behind DevOps seems to mean the elimination of testers or testing.
Our guests Lisa Crispin of mabl and Jessica Ingrassellino of Salesforce.org are here to discuss how software testers are indeed important to the process of DevOps, and some ways we can get involved that we may not have considered. A key element is observability (#o11y), and we get into the details of considering observability as part of the DevTestOps equation.
Panelists:
References:
- “Monitoring and Observability”, Cindy Sridharan, 2017
- Sarah Wells’ keynote from the European Testing Conference (ETC)
- Charity Majors’ white paper on observability
- “A Practical Guide to Testing in DevOps”, Katrina Clokie
- “Principles for testing in DevOps”
- “You Can’t Buy DevOps”
- “Why DevOps gets lost in translation”
Transcript:
Michael Larsen: Hello and welcome to The Testing Show. It is March 2019, glad to have you with us. This month we are going to be talking about Agile with DevOps, and we are happy to welcome Lisa Crispin, who is a testing advocate with mabl, and Jessica Ingrassellino, who is a director of quality assurance and testing with salesforce.org. And with that, you already know who I am, I’m Michael Larsen, your show producer and sidekick, but let’s go ahead and get into it all and let Matt take it away.
Matthew Heusser: Thanks Michael. Woot. Welcome to the show everybody.
Jessica Ingrassellino: Thank you, I’m so happy to finally be here, I’ve been listening to you all for forever, seems like.
Matthew Heusser: Oh, so excited. Glad to have Jess back. You’ve been super busy right?
Jessica Ingrassellino: Yeah.
Matthew Heusser: I keep hearing about you, people keep saying great things. You both spoke at the European Testing Conference, right?
Jessica Ingrassellino: We did, and we were both at the exploratory testing peer workshop, which was really awesome, so yeah, a great few days.
Lisa Crispin: Yeah it was.
Matthew Heusser: So you’re both going to be in Chicago for Agile Testing Days?
Jessica Ingrassellino: Yay!
Matthew Heusser: That’s going to be fantastic, that’ll be in June and I’ll be there too.
Matthew Heusser: So let’s dive into advanced Agile with DevOps. Lisa and I were talking the other day with Michael. What is the difference between DevOps and Agile? Can you make an argument that, like, it’s all the same stuff, we’ve been doing it for X years? I think I’d like to start with Lisa’s tutorial on transforming your culture with DevOps. Where are you going to be giving that tutorial?
Lisa Crispin: So Ashley Hunsberger and I will be doing that next week at Booster Conference in Bergen, Norway.
Michael Larsen: Wow, you guys get around.
Lisa Crispin: We do.
Michael Larsen: What are you going to say?
Lisa Crispin: Well, DevOps and Agile are both all about culture, right? It’s about building a quality culture. Ashley’s dad is an expert in the automobile safety industry and has made a lot of inventions and innovations over the decades to make our cars safer. She’s going to use some stories from that: what innovations the car industry made, how that was reflected in the statistics of how many people were killed in car accidents, and so on. Then she’ll make parallels to software development and risk, and all the good things we deal with every day, and use the iterative and incremental nature of Agile development to build that quality culture. Lots of hands-on exercises around identifying what our processes are today, what our biggest challenges are, how we are organized and how we should be organized, how we can set up to focus on quality, really collaborate, and enhance that communication, because we know it always ends up being a people problem when we don’t communicate well. I just like the angle that Ashley takes with it, looking at another industry and seeing what that could teach us about our own.
Michael Larsen: Hey, I want to pop in here real quick and ask a question. For those who are listening to the show, who hear the term and think they know what it means: what is DevOps and how does it work?
Lisa Crispin: I attended DevOps Days New York in January and this was a topic of conversation. Basically, the people at the DevOps conference didn’t want to try to define it. Sounded like Fight Club. For me, it is a culture. It grew out of the Agile movement, if you want to call it that. A lot of teams were having good cross-functional collaboration between developers, and testers, and product people, and designers, and those kinds of things, but they were ignoring the people who were over there watching production and taking care of the infrastructure, the people over in Ops. So this was an effort to reach out and include them: get developers feeling responsible for monitoring what they built once it’s released to production, and get operations folks learning to code and getting into infrastructure as code, so we can test it, so it’s reliable, so we can quickly get things to production and practice continuous delivery.
Lisa Crispin: So that’s how I define it; it is more a culture. You hear people say “DevOps team” or “DevOps person”. To me, those are maybe the people who are helping the whole team learn what they need to learn to collaborate together, but I think it’s an anti-pattern to keep a DevOps team once you’re supposed to know what you’re doing. It’s labels; we like labels as humans. It’s all about doing a better job of getting a quality product out to our customers.
Jessica Ingrassellino: You know, listening in, I agree with Lisa and think of DevOps not as a person or a team of people even, but as she said, a culture. And we can even think of it as a practice, a way of doing, a way of being in our software development organizations. Sharing the ideas of making sure that we get our software out the door in a way that really embodies quality throughout the process and a lot of other principles that we will be talking about very shortly on the show.
Matthew Heusser: So, have we already asked this question? How is that different than Agile? It’s about culture, Agile is about culture. It’s about working together and it’s about collaboration. How is that any different than Agile software development focusing on individual interactions over processes and tools?
Jessica Ingrassellino: I don’t think it is any different, other than lots of people think they’re practicing Agile and are leaving people out, maybe like operations. If they are doing that, and this gets them to include more people, I think that’s a good thing. I’m kind of outside looking in at the DevOps “community”, but it does feel like a separate community has grown out of it that’s way more focused on operations than the Agile community is. In the early days of the Agile community, all we talked about were features and functional testing; we hardly even thought about our infrastructure, apart from continuous integration. If it’s a different way for people to look at it and do a better job, I’m all for it.
Matthew Heusser: So, how can testers get involved or contribute to DevOps?
Jessica Ingrassellino: I think it really depends upon the interest of the tester, if there is not a DevOps culture. Part of it is the ability to build a relationship with the people who are doing Ops. Start to ask questions, and investigate the processes that are actually happening in your company. There are so many ways to get involved that, rather than recommending any individual starting point, I’d say get to know those people. Get to know their pain points, get to know the work that they are doing. Figure out how your skills can actually help them do the work that they need and want to do for the team. That will feed back: if there are testing issues that would be helped by some Ops processes, you’ll then have that relationship where you can go and ask and collaborate.
Matthew Heusser: You know, I did a little bit of digging as we were preparing for the show, and lo and behold, I found something very interesting, and it seems to have a certain person’s name attached to it. So, yes, this is what we call a baited lead. Let’s talk a little bit about devtestopsmanifesto.org, can we?
Lisa Crispin: Yes, soon to be testingindevops.org or testinginCD.org because for me continuous delivery is really what it’s about and DevOps is a supporting thing for that.
Lisa Crispin: At mabl we’re very eager for testing and testers to not be left out of DevOps. We want to make that more visible, because testing is at the heart of continuous delivery; it’s really hard to do continuous delivery without testing. DevOps is about building an infrastructure to do that testing.
Lisa Crispin: We wrote an e-book on our views of DevTestOps, we’re trying to coin this new term, which turns out to have been around since 2010. So, so much for our originality. And we came up with “well, let’s do a manifesto” because everybody does manifestos. And we came up with these thoughts around what would be…
Matthew Heusser: Manifestos are tight.
Lisa Crispin: Yeah. We came up with this manifesto. Then we asked for feedback on it, and long story short, people said, you know, these are great principles, and they gave us feedback to improve them. What we really need, in my opinion, is a way to share information, because this is pretty new to most people. I’m on all these various great testing Slack channels that we have, but they’re all free Slacks. So people post links to great articles or books or podcasts, and then they disappear because you can’t store very much on a free Slack. So my goal is to have a place where we can all share those links and they remain in a permanent spot. I’m also hoping to get people to contribute blog posts; everybody get on there and share information. How can testers get involved? Jess had some great recommendations there. And what kind of testing needs to happen? How can you be more confident to do continuous delivery, be confident about all these changes you’re releasing to production? That’s the goal of it. It’s an experiment. We’ll see if it works or not, but got to try.
Matthew Heusser: Awesome, and there is one thing that I definitely want to point out here which I think is a good, big observation, you’ve put it in bold type and I really appreciate this. It’s keeping testing visible. Read the manifesto, yes, OK. Above and beyond that, how can we keep testing visible?
Jessica Ingrassellino: In my teams, the way that we’re set up right now, we have one tester on each scrum team, sometimes two for our larger scrum teams. Something that we do, and I’ll be giving a talk on this later in the year, is we use test charters as a collaborative tool that everybody on the team sees, uses, and comments on, from the product owner to all of the developers. Everybody is welcome to look at that document, suggest links to other documentation, and talk with the tester about the different test ideas that they have. That has really resulted in our teams having a ton of visibility into what’s going on, where the tester is, and getting through not only acceptance criteria but, when they’re doing exploratory testing, seeing those issues as they come up. Testing at .org is a really collaborative process. I think that’s helped us have a lot of visibility among the other directors that I work with; they know how we test. They understand the general processes because they’ve seen it in action. They’ve even commented on test charters.
Lisa Crispin: I just love that story. It prompts me to give a shout out. I don’t think you can have a talk about testing in DevOps without referring to Katrina Clokie’s awesome book. She devotes a lot of that book to what Jess said, how do we build those relationships with other people in other roles, people in other teams? How do we know which other teams we should build relationships with? And I just love the idea of helping everyone on the team, testers or not, learn exploratory testing and using things like charters to make that visible. I’ve done similar things with my teams as well.
Matthew Heusser: When you said we’re making it visible with charters, Jess, that means we’ve got a description of how we’re going to invest some period of time for the tester to go test, and they should ideally find bugs or reduce risk around that topic. Is that a paired exercise, or is it just that the charter is visible, and maybe it’s a story or something, it’s tracked in a workflow system, and the bugs that fall out of it, we know where they came from? Or is the developer or other staff member somehow involved in the effort?
Jessica Ingrassellino: It really depends. An example that I’ve talked about is from when I was the only tester, a long time ago when I first started. I was the only exploratory tester, I’d only been there for seven months, and I didn’t have much background at all on how that area of the system operated. It also happened to be pretty specific. So I laid out a charter with what I wanted to look at and the areas of risk that I could think of. We have open source community managers who interact directly with our open source contributors, and we had some developers, and they both actually looked at the charter. They helped me develop test cases that would be very specific to some of the things that we were handling. And then we all tested it together. There are other cases where there is less collaboration in terms of the testing. But it really depends on the circumstance and the situation.
Matthew Heusser: Just for a little controversy: do you sometimes feel, though, in this whole world of DevOps, that one of the underlying messages we seem to be getting is: Testers? We don’t need no stinking testers.
Lisa Crispin: [Laughter] Yeah, I was really kind of shocked when I went to DevOpsDays that there were no testers that I could find and no talks about testing. I know other DevOpsDays do have them. But I still kind of got the spidey sense that I did when I got into extreme programming back in 2000: they’re oversimplifying it, they know they need quality, it’s obviously all about testing, but they just don’t use those words. So how do I elbow my way in? It’s a lot harder, because the extreme programming community was tiny and the DevOps community is huge. We were at the European Testing Conference, as we were talking about, and I was talking to Abby Bangser, who is really on the leading edge, I think, of testing in DevOps. She said that she feels like the bridge over to Ops for testers is observability. For those of you not familiar with observability, it’s related to production monitoring. With monitoring, we instrument our code for things we think might happen, so we can have an alert when certain errors happen or a certain error budget gets exceeded. Observability takes that further: instrumenting every event in our code and logging all the things into a tool, so that if something goes wrong in production that we do not expect and do not get an alert for, we can quickly identify what exactly that problem was. Let’s revert really quickly what we just put out there, or let’s get a hot fix into production. The reason Abby said that is because testers are really good at risk analysis, we’re really good at assessing the impacts of things that might happen in production, and we’re really good at seeing patterns in data. So we can be observing, creating these dashboards for ourselves with all this amazing big data we have and all these amazing tools we have to make sense out of it, and we can really contribute. Sarah Wells did a great keynote on quality and testing in cloud-native applications. She stated that, for her, chaos engineering is simply exploratory testing. If we’re going to rip a table out of the production database for our chaos engineering, is that a realistic test to do? Is that really something that we should spend time on? I think testers can help make those decisions. I’ve been really excited to learn more about observability. I do think we can learn so much from learning about production. It’s kind of like the Modern Testing principles from Alan Page and Brent Jensen, same kind of deal.
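(For readers who want to picture what “instrumenting every event” can look like, here is a minimal sketch, not from the episode, using only Python’s standard library. The event names, fields, and request ID scheme are hypothetical, and real observability setups typically ship these structured events to a queryable backend rather than plain logs.)

```python
# Minimal sketch of emitting structured, queryable events from application code.
# All event names and fields below are made-up examples, not a real schema.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("events")

def emit_event(name, **fields):
    """Log one wide, structured event with enough context to ask questions later."""
    event = {
        "event": name,
        "timestamp": time.time(),
        "request_id": fields.pop("request_id", str(uuid.uuid4())),
        **fields,
    }
    logger.info(json.dumps(event))

# Record what actually happened, not just whether a predefined alert fired.
emit_event("checkout.completed", user_id="u-123", cart_items=3, duration_ms=842)
emit_event("checkout.failed", user_id="u-456", error="card_declined", duration_ms=130)
```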
Matthew Heusser: Great, that’s a great example. We start talking about chaos engineering. One of the things that I think testers can contribute to is observability. I’m working with a company on tracking their APIs, which have dependencies of dependencies, and some things take too long. Why is that taking too long? Well, we don’t really know. We just know that it takes two minutes in production to press this button on your phone and then have it actually take effect on your Internet of Things device. There are data centers in the middle. We can’t observe. There are a bunch of questions around what it would take for us to be able to observe. From our earlier talks, I think mabl has actually taken this a lot further; they’ve done a lot of work on creating that infrastructure. I think I’m going to let Lisa talk about it instead of pontificating. Tell us how testers can contribute to observability.
Lisa Crispin: I had my colleagues at mabl walk me through the whole infrastructure: how the site is served up, all of the microservices, and especially all the metrics that they’re gathering. The theory is that you can’t have too much data. It’s cheap to store the data. The analysis tools are there, and they’re easy to use. They’ve just done a great job of producing all these reports, and they do have great monitoring. You know, they’ve set all these different alerts up, and they have error budgets and visible charts on monitors in the office, and they are also able to respond quickly. The goal is to find a problem in production before the customer does. So if somebody happens to notice an anomaly in a log, or sometimes a customer starts contacting customer support, let’s quickly fix that so they’re only suffering for a short amount of time. And I just thought, yeah, that’s awesome. That is exactly what I should be trying to contribute to. From my different perspective as a tester, what else could I want to learn from production that might help us keep our customers happy?
Matthew Heusser: So, if I can butt in here, with a greater appreciation for observability. I don’t know if it’s a new acronym, but I love all these 11Ys, those are great. Since I post on a11y all the time, I’m happy to start posting on o11y, awesome. But is observability a job?
Lisa Crispin: Personally, I think it’s a whole-team effort, just like testing is to me. It’s something everybody can contribute to. If you think about it, the product people and the designers are going to want to know: what features are people using, how are they using them, should we just remove this feature because nobody is using it? Or are people struggling with this feature because we designed it poorly? I think everybody has a stake, everybody has something they want to learn from it. And it’s not just about production failures; it’s about learning things that feed back into our, sort of, infinite loop of thinking about new features and getting learning releases out there so we can get feedback.
Matthew Heusser: What’s a good hard example? Because I could sit here and say, well observability sounds like a cool concept. But I’m sitting here thinking, how would I describe this to someone?
Lisa Crispin: Well, it happens at mabl less frequently now, but some people start contacting support because something isn’t working for them. So the engineers kick into gear and start looking at the log data and identify, okay, here is the error, here is the person who had the error. Let’s go look at Mixpanel and see exactly what that person was doing. And pretty soon they can reproduce the problem, they can write a test, write a fix, get it out to production, or just simply revert what they just put out there, if that was the cause of it. Within anywhere from a few seconds to a few minutes they’ve been able to fix the problem. That’s fairly new; we couldn’t do that a few years ago. We didn’t have the pipelines to get things out to production that quickly, we didn’t have these huge amounts of data because we had no place to put them, and we didn’t have all the great tooling to help us analyze and look at the data and query it really easily. So, to me, that’s what kind of hit home when I started hearing about observability: you know, we never can test all the things, and our test environments never look like production. Some things are always going to happen differently in production, and we need to be able to respond to that. And there’s so much infrastructure we need to put in place for that to happen.
Jessica Ingrassellino: It’s funny, because right now I’m working in a really different environment. We don’t have the direct-to-web stuff; we’re working in managed packages that go out on top of a platform that we have no control over. So what we do have control over is taking a look at the systems that are set up to be observed by the core platform. When I’m thinking about this, about observability and how it’s different from monitoring: observability is sort of the thing that we can do to build the ability to monitor better into our product, in whatever way that might be happening. As a tester, being able to say, for the past three issues that we’ve had to do hot fixes for, I really have needed to get this information. Is there a way that we can surface this more quickly? Whether it’s a change in tooling on the monitoring side, or a change in the way that code is happening or logs are being output, I think those are all ways that we can build observability in. And as testers, we have the power to ask for what we need to have surfaced so that we can do our jobs more easily. That has been my experience in past companies too. Particularly at Bitly, we released anywhere from three to seven times a day, with multiple features behind different feature flags, A/B testing, and all that kind of thing. Being able to ask the “DevOps team” for different log information or different output was a really big part of being able to more quickly identify problems, help get towards solutions, and help get fixes out faster.
Matthew Heusser: Yeah, I’ve written articles; there is an article on StickyMinds called “You Can’t Buy DevOps”, or something like that. Creating a DevOps team creates another dependency. But what if you’re doing what Jess implied, where the role of the DevOps team is to enable the development teams to do more cool stuff? So now, from the command line, I can bring up a test environment, or now I have feature flags that I can use. So now the tester is using the feature flags, or the tester is doing the research into the log files. The developers are injecting more data into the log files so that you have traceability. Back in the Windows days, when the application would crash, we would get this stack dump and we would know exactly what went wrong where. To be able to take that functionality and bring it into production, so when someone says, “My user ID is [email protected], that’s my e-mail address, and the application did something funny,” we can basically play back their session and look at all the messages that went back and forth, that’s really powerful. But someone’s got to do that, and they might need to be a higher-level person than a customer service rep. Not only can testers do that, but we can design data so that the customer service rep can do the tracing later, because we know the nooks and crannies of the app and we know the weird questions to ask. I’m a fan of letting a thousand experiments bloom and letting every team figure out what they’re doing, and then you create a community of practice and discuss what’s actually working. You get a lot more experience that way, and you get better outcomes than with premature standardization.
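(As a rough illustration of the kind of enablement Matt and Jess describe, here is a minimal, hypothetical Python sketch of gating a change behind a feature flag so it can be released dark and switched on or off without a redeploy. The flag name, the environment-variable source, and render_dashboard are invented for illustration; teams doing this at scale usually read flags from a flag service or shared config store.)

```python
# Hypothetical sketch: a feature flag read from the environment.
# Real systems typically use a flag service or shared config store instead.
import os

def flag_enabled(name: str) -> bool:
    """Return True if the flag is switched on, e.g. FLAG_NEW_DASHBOARD=on."""
    return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

def render_dashboard(user_id: str) -> str:
    if flag_enabled("new_dashboard"):
        return f"new dashboard for {user_id}"    # new path, released dark until the flag is flipped on
    return f"classic dashboard for {user_id}"    # safe fallback while the flag is off

if __name__ == "__main__":
    print(render_dashboard("u-123"))
```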
Jessica Ingrassellino: Yeah, I like what you say about trying a small experiment to see what works, especially because all of this is so new, so who knows if we’re doing it the best way?
Lisa Crispin: I think of it a lot like jazz improv. There’s a shared agreement about certain things that have to take place. The chord structure is there, the chord progression is there, and then each team is the soloist: they take the scale, they take the aspects where they have freedom, and they use that for experimentation, but there is a shared agreement about what this is, where we are in this space, what tools we have, and what things we’re going to do. It’s a similar idea in jazz improv; you can get twelve people together on stage who have never played a piece together before, but they have those loose agreements made, so they can improvise and co-create with each other. I think that attitude is what helps to generate some of the better ideas, and some of the better tooling and practices, in engineering teams.
Matthew Heusser: Yeah, and just to be clear, my strong preference is for each of the teams to just experiment and find their own way. When you get to the level of a massive number of teams, trying to coordinate deploys on a large number of systems, and people say, “you have to have a DevOps team in order to enable the development groups,” I go, “Ah, OK, I’m not going to argue with you on that one.”
Jessica Ingrassellino: [Laughter]
Matthew Heusser: Awesome, thank you Jess, I think Jess is great. Maybe it’s time to transition for Lisa to tell us a little about mabl and what it does and what you’ve been up to lately.
Lisa Crispin: OK, mabl is, I guess you could call it a scriptless test automation tool, although within it you may write a lot of JavaScript. [Laughter] So I think that’s somewhat of a misnomer. The idea is to make it accessible for people who are not coders as well. It runs in the cloud, so it runs all your tests in the cloud. It’s all UI testing, and you can run it in multiple browsers at the same time. Before I worked at mabl, I was actually a trial user of the product, and what attracted me to it was that each UI test runs in its own container, in all the browsers that you want it to run in, and all your tests run at the same time. So your test wait is only as long as your longest test. If you have a bunch of UI tests and the longest one takes three minutes, all of your test results are back in three minutes. Compare that to when you’re struggling with your production deploy pipeline because your UI tests take 14 minutes and they’re super flaky, so when they flake out you have to start the whole process over again; the whole thing takes 40 minutes, and it’s really hard to get out multiple releases in a day, or even one. A lot of what mabl does is try to get past that flakiness. mabl captures all the information about your DOM and where all the elements you’re interacting with are on the page, and if some element changes, like it changes its name or changes its location, mabl knows enough about that element to find it again and say, well, I think if I click here I can keep on going. And it will let you know that it did that. We call that auto-healing. At the same time it’s using machine learning to check for visual changes on the page, like parts of the page that it thinks should never change, based on what it’s seen in the past ten runs or so. If they change, it’s going to let you know about that. What I do there is I’m a testing advocate, which means my job, it’s great, I just get to work with the software community, the testing community, try to find out what people’s pain points are, and try to help them with it. Is it something our product could help with? In any case, just providing blog posts and webinars and going to conferences and trying to help people keep pushing these things forward. Our goal is really to help people with continuous delivery, that’s what it boils down to. To succeed with continuous delivery we need reliable UI tests, we need to feel confident, things need to happen quickly, and it needs to be easy for everybody. We don’t want to spend all our time maintaining automated tests, because we might as well be manually testing at that point. Yeah, it’s really fun. It’s really different for me to not be a hands-on tester. But I keep my hand in it, don’t worry. [Laughter]
Matthew Heusser: I don’t have much more, what do you want to talk about?
Lisa Crispin: Well what’s Jess up to?
Jessica Ingrassellino: Ha, so aside from the directoring thing, where I have a team now of 17, which is 12 more people than I had at this time last year, we’re really growing quite a lot. I have conference talks coming up; I’m going to be at the Heisenbug conference giving a property testing talk in May. And then I’ve got three other talks coming up, but they’re not formally announced yet so I can’t say at what conferences, but I’ll be talking about learning, differentiated management, and collaborative test charters. And I’m working on two more academic papers around testing and performativity. Just doing a lot of stuff, getting into a lot of leads.
Matthew Heusser: Could you give us just a 30 second story of what is property testing?
Jessica Ingrassellino: Sure, yeah. Property testing is like unit testing, except instead of testing a single data point you are verifying a set of data points within that same test. So instead of asserting that the weight of the luggage is less than 35 lbs and using 34 as your number, you can have a tool that actually sends the numbers 1 through 34 into that same test. So it’s still one line of code. For the data generation, I’m looking at Hypothesis in Python, so you can actually narrow the data down, finely, to be pretty much exactly what you need in that circumstance. So you’re able to get more coverage in areas where you want to test multiple inputs in a single assert, without the overhead of having to write extra tests for every single assert. It’s pretty interesting, to me.
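(Here is a minimal sketch of the luggage example above using Hypothesis in Python. The check_luggage function is a made-up stand-in for the code under test; the property test generates many weights in the range rather than one hand-picked value like 34.)

```python
# Minimal Hypothesis sketch of the luggage example; check_luggage is hypothetical.
from hypothesis import given, strategies as st

def check_luggage(weight_lbs: int) -> bool:
    """Pretend business rule: luggage is accepted if it weighs less than 35 lbs."""
    return weight_lbs < 35

# One assert, many generated inputs. Hypothesis will also shrink any failing
# case it finds down to a minimal counterexample.
@given(st.integers(min_value=1, max_value=34))
def test_luggage_under_limit_is_accepted(weight):
    assert check_luggage(weight)
```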
Matthew Heusser: Cool, thanks. Uh, I think we’ve more than used our time, but it’s been a great conversation. So, last words? Let’s talk to Michael.
Michael Larsen: Well again, just listening to a lot of what’s been covered here, it sounds like a lot of the reality that I’m currently dealing with. I would say, frankly, if you’re a software tester who is wondering about DevOps and how you can fit into it, a lot of it probably comes down to discovering how to get your environment into that space, what you can do with it, and how to be part of the monitoring process. Look into observability, get a better feel for it. Like I said, #o11y; my guess is there’s stuff out there for that.
Matthew Heusser: I’m going to look for it, you better believe it, and make a case for that. And I am really interested now in looking at a shift towards property testing. I have not heard that term before. I like it! I want to learn more about it, I’m excited. This has been fun. Thanks everybody.
Lisa Crispin: Yeah, it sounds like what I’ve heard called generative testing, and I’m eager to learn what the difference is. I think they’re both awesome.
Michael Larsen: Well great. Lisa, final thoughts:
Lisa Crispin: Well, thanks for including me in it. Thank you, I learned a lot just listening to Jess. I love her examples, and she’s kind of ahead of the curve with the things she’s doing. It’s really exciting, and we need to give that more visibility too. So, I’m glad she’s able to share that on your awesome podcast.
Matthew Heusser: Then I’ll let Jess have the last word. I don’t have much, I just enjoyed being here. Thanks everybody, and I’ll see you next time. But Jess, your thoughts:
Jessica Ingrassellino: Thanks for having me in the conversation. It’s been a while, but it’s great having my first conversation back be on the show with Lisa, because I really enjoyed our time sitting together and listening at the peer conference and especially at the European Testing Conference. And I say to listeners, follow the #o11y hashtag and follow Lisa on Twitter; she’s always saying great things and amplifying great messages from other testers too. So if you really want to keep a pulse on things that are happening, do that.
Lisa Crispin: Oh well thank you. Right back at you.
Matthew Heusser: Okay thanks everybody and we’ll see you, probably, on the Twitters.
Michael Larsen: Excellent, thank you very much.
Lisa Crispin: Happy testing.
Matthew Heusser: [Laughter] Happy testing.
Jessica Ingrassellino: Bye!
Matthew Heusser: Bye.