The Testing Show: Continuous Delivery, Part 1

February 25, 01:32 AM
Transcript

Continuous Integration, Continuous Deployment, Continuous Delivery, and a host of other continuous options abound out there. Do you know the difference? Would you like to? We asked Adam Goucher to come out and discuss with us the variations on the theme between these three distinct disciplines, what they mean, how they are implemented, and where testing and testers fit into the processes. This is the first part of a two-part interview. We will conclude this interview in our next episode.

Also, in the news segment, do you trust the idea of an open source autonomous automobile? Also, what happened when stock prices for many big technology companies all read the same stock price at the exact same time? If you are guessing a bug in the system, you’d be right, and we definitely have a few things to say about it.

 


 

 

Panelists:

 

 

 

 

References:

 

Transcript:

MICHAEL LARSEN: Hello everyone, this is Michael. What you are about to hear is a two part show on Continuous Delivery with Adam Goucher. We had such a good time with Adam and covered so much ground that we decided a two-parter was in order. Therefore, what you will hear this week is our news segment and the first half of our interview with Adam. Our next episode will have the conclusion of that interview. With that, on with the show.

[SHOW INTRO MUSIC]

 

MICHAEL LARSEN: Hello, and welcome to The Testing Show. I’m Michael Larsen, the show Producer and today we would like to welcome Perze Ababa?

PERZE ABABA: Hello, everyone.

MICHAEL LARSEN: Jessica Ingrassellino?

JESSICA INGRASSELLINO: Hello, everyone.

MICHAEL LARSEN:  Matt Heusser?

MATTHEW HEUSSER: Thanks for listening.

MICHAEL LARSEN:  And we would like to welcome our special guest, Mr. Adam Goucher. Hey Adam!

ADAM GOUCHER: Hey.

MICHAEL LARSEN: Good to have everybody here with us this morning, and with that, we’ll hand it over to Matt Heusser. Take it away, Matt!

MATTHEW HEUSSER: Thanks, Michael.  So, most of you know, by now, Jess and Michael and myself and Perze pretty well.  Adam is new to the show.  I think I’ve known Adam for exactly ten years now.  I think it was in 2007 that you came to GLSEC down from—I want to say—Toronto.

ADAM GOUCHER: Toronto Suburbs, but let’s go with Toronto.  Yes.

MATTHEW HEUSSER: Did you drive?

ADAM GOUCHER: I flew.

MATTHEW HEUSSER: Yeah.  Then, I saw you again at STAREAST a couple of times and seen you around and you referred me some work over the years, which I deeply appreciate.  We’ve kept in touch over the Twitter, but the testing world has been a little fragmented lately and we’ve all sort of blown our separate ways, I think.  You’ve been more in the area of Selenium and WebDriver.  What particular languages?  You’re not a .NET guy?  You’re a Java guy?  I can’t remember.

ADAM GOUCHER: Well, I can do them all technically.  Any time I’m doing work, I’ll turn away work that’s not Python, PHP, or Ruby.

MATTHEW HEUSSER: Okay.  All right.  So, Open Source scripting language-y stuff.  Lately, with the rise of DevOps you’ve been working on continuous delivery, continuous deploy pipelines.  Is that accurate?  Am I missing anything?

ADAM GOUCHER: Yeah.  That’s probably a good high-level summary.

MATTHEW HEUSSER: You’ve gone back and you’ve been independent.  You’ve had a job, and I think you’re independent again.  Are you running a company?

ADAM GOUCHER: I have a job these days.  I’m a CTO at a mobile marketing startup called, “The Mobile Experience Company[i]” where they deliver surveys, contesting, and such using SMS as the delivery agent.

MATTHEW HEUSSER: Now that everybody knows Adam, we should talk a little bit about what’s going on in the world of software delivery and development.  There’s two things that I want to talk about.  The Chinese Search Engine, Baidu, apparently they really see themselves as a Google competitor; and not only do they have autonomous vehicles, they’ve made them Open Source, and they are making that Open Source code available[ii].  So, if you buy the right kind of Honda or something, you can simply have an autonomous vehicle running, is the claim.

JESSICA INGRASSELLINO: Okay.  My opinion is that, for the record, it sounds like the worst plan ever.  Open sourcing the autonomous stuff, I mean, autonomous vehicles, I still think have a long time ahead of them in terms of testing and ensuring that algorithms are making decisions based on more than one or two or even three factors because there’s a lot of complexity on the road, but the idea that somebody could just, [LAUGHTER], take that and modify their own car with no idea and no algorithm and no testing and just wing it, is terrifying.

MATTHEW HEUSSER: Would you trust it?  Some other country does some stuff and they’re like, “Oh, yeah.  It’ll be fine.  Just buy this particular kind of car and then install our software on it through the USB port.  It’s going to be great.  You’re going to be able to, you know, connect with your iPhone, and drive stuff around.”  There’s the problem of knowledge, evidence, and trust there.  Like I want to know, “How did you test it?  I want to know, “Who’s using it successfully?”  I want to know, “What are the risks?  What happens if this gets in some kind of accident?”  The whole thing is crazy.  It reminds me of the companies that won’t let you work from home but they’ll hire a team of offshore developers.

ADAM GOUCHER: But you’re not going to get any of those three factors of confidence out of a Tesla either.  They’re not going to tell you how they tested it.  They’re not going to tell you any of that stuff.

MATTHEW HEUSSER: I guess.  But, in the United States, (maybe it’s sort of naïve), don’t we have regulatory bodies and stuff that say, “It has to be this tall to ride the ride,” or Consumer Reports, who then publish that stuff?  “Yeah, we do.”  Epistemology is the philosophy of how we know stuff.  I would think that with a Tesla or a Google autonomous car, if it crashes and I broke my arm in a Google autonomous car, “Google, give me money.”  Whereas if I installed this Open-Source thing and then it crashes, then I have to give whoever it is money.  I’m responsible.  I would think the responsibility would shift.  No one’s going to go after Baidu if your autonomous car crashes.  They’re going to say, “Why did you trust them?  You shouldn’t have done that.”

JESSICA INGRASSELLINO: I’m not necessarily convinced that Google or any large company—Tesla, here—would let you take an autonomous car without having you sign some fine print that negates their responsibility in the whole thing.  I have no faith in our litigious culture, [LAUGHTER], and I do think, (let’s face it), if I were running a large company, I’d make sure that it was tested, and when I felt that it was tested enough, I would have somebody sign fine print.  I don’t want my company being liable if somebody does something wrong.

MATTHEW HEUSSER: This is really interesting.  Reading the article further, “The platform is clearly in its early days.  The crucial elements of autonomous driving like perception and motion planning are still lacking.”

JESSICA INGRASSELLINO: [LAUGHTER].

MATTHEW HEUSSER: “Support for basic sensors like LiDAR, RADAR, and Camera are still under development.”

MICHAEL LARSEN:  Which in and of itself is saying, “Yeah, you’ve got this great open platform, but many of the technologies that are necessary for you to be successful are not supported.”  So… I think right now, there’s not a whole lot of “there” there, but it’s been under development for quite a while, so… hmmm [LAUGHTER].

MATTHEW HEUSSER:  My honest guess, at this point, reading through, is that it’s an Enterprise Service Bus that you can hook components to, that’s Open Source, that you can plug some C++ or Python into; and what they’re hoping is that a company like Ford Motor Company, which has its own facility where you can test drive—where they know there are no humans and there are big concrete barriers that you can’t drive through—will start hooking up these components for them and then contributing back to the Open Source Project.

PERZE ABABA: You know, I’ve been digging through the Source Code ever since you mentioned it, Matt, and I still can’t quite find the moral dilemma module.

 

MICHAEL LARSEN:  [LAUGHTER]

PERZE ABABA: You were talking about the whole idea of perception.  If you put an autonomous car into a situation where it recognizes something that it thinks is people and, on the other hand, there’s a wall, would it just choose to hit the wall and possibly injure the driver? [iii]  Yeah.  I share Jess’ skepticism around this thing, because there are a lot of possibilities for damage.  But, I am pretty excited by the idea that it’s Open Source and everybody can see what people are introducing into this thing, but still there are a lot of questions that need to be answered.

MATTHEW HEUSSER: Yeah.  I’m just not so excited about the Open-Source Model for real-time life-critical systems, and there are lots of problems with the Closed-Source Model too.  There’s lots of really crappy code driving a Boeing 747.  Dr. Kaner and I were going back and forth about whether the avionics software had to be good.  That was my position.  This is ten years ago.  I don’t know what I was talking about.  He said, “Yeah.  I used to feel that way too, until I was on the ground and realized they were rebooting the airplane because it was exceeding its memory limits or something.”  [LAUGHTER].  I was like, “Why would they ever have to do that?  That’s not so good,” and it’s all Closed.  So, that’s challenging.  Moving on.  There was a bug where, all of a sudden, in some of the feeds for stock market pricing, the prices of Google and Amazon and Apple dropped from $900.00 a share to $123.47[iv] because some sort of Mock or Stub third-party library, which was designed to make the prices predictable, accidentally, “Oops,” found its way to production users.  Then anybody who had a system that said, “If the stock drops below $200.00, start buying it,” suddenly did.  There are lots of reasons why that could’ve happened.  In a Continuous Deployment Model, you actually deploy your code to production.  It’s hidden behind configuration flags, so only your testers or your test users or your UAT people actually see it, and then you test on production.  Then, when you say, “That’s good,” you change the config flag, and it rolls out to everyone.  It’s very possible that there was some sort of testing in production happening and some sort of flag was screwed up.  I think that’s pretty bad.
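The flag-gated rollout Matt describes can be sketched in a few lines of Python. This is a minimal illustration, not any particular vendor’s flag system; the flag name, roles, and prices are all invented:

```python
# Sketch of config-flag gating: new code ships to production, but only
# flagged roles (testers, UAT) see it until the flag is widened.

FLAGS = {"new_pricing_feed": {"enabled_for": {"tester", "uat"}}}

def flag_on(flag_name, user_role):
    """True if this role should see the feature behind the flag."""
    flag = FLAGS.get(flag_name)
    return flag is not None and user_role in flag["enabled_for"]

def legacy_feed_price(symbol):
    return 900.00   # stand-in for the proven production feed

def new_feed_price(symbol):
    return 123.47   # stand-in for the new (possibly stubbed!) feed

def get_price(symbol, user_role):
    # Everyone outside the flag keeps the old code path until rollout.
    if flag_on("new_pricing_feed", user_role):
        return new_feed_price(symbol)
    return legacy_feed_price(symbol)
```

“Rolling out” is just widening `enabled_for`; shipping with the wrong set flipped is exactly the class of flag mistake suspected in the pricing incident.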

ADAM GOUCHER: But, really, who hasn’t pushed the wrong config flag into production?  Hypothetically, when you push a payment gateway to production and you have it mocked out and you run your automated test in production and everything says “great,” until the CFO lands at your desk 45 minutes later saying, “Are we down, because we haven’t made any sales?”  Hypothetically, of course.  That’s never happened in any of our careers.  [LAUGHTER].

MATTHEW HEUSSER: I was at an e-commerce company that did that.  It was very similar, anyway.  The checkout process brought you to https://test.company.com.  It looked to you like you were actually checking out, and then it would redirect your order to /dev/null because it was just a test system.  It wasn’t hooked up to anything.
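Both mishaps share a shape: “real or mock” hangs on a single config value, and the safe-looking default is the mock. A minimal sketch, with the class and key names invented for illustration:

```python
# How a mocked payment gateway can leak into production: the gateway is
# chosen by one config value, and a missing key silently picks the mock.

class RealGateway:
    def charge(self, amount):
        # A real implementation would call the payment provider here.
        return {"charged": amount, "real": True}

class MockGateway:
    def charge(self, amount):
        # Succeeds instantly: every automated check passes,
        # and no money ever moves.
        return {"charged": amount, "real": False}

def make_gateway(config):
    # A missing or stale "payment_gateway" key silently selects the mock.
    if config.get("payment_gateway") == "real":
        return RealGateway()
    return MockGateway()

prod_config = {}  # oops: the key never got set during the deploy
gateway = make_gateway(prod_config)  # everything "works," no sales
```

A safer inversion is to fail loudly when the key is absent, so the problem surfaces at deploy time rather than when the CFO lands at your desk 45 minutes later.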

MICHAEL LARSEN: Is it trying to purchase that stock at that price, and wouldn’t, at some point, some system say, “Uh, no!  [LAUGHTER]  You can’t do that!”?  Or is it actually trying to initiate trades to do so, and you are actually committing real money?  If that’s the case, then I’m really worried about these algorithms [LAUGHTER].

MATTHEW HEUSSER: It’s just the display logic.  You’re reading from a feed.  You’re going to an API.  You’re polling it every minute, and you’re getting the numbers back.  You have your own rules.  So the part that’s broken is, all of a sudden, you get back, “123.47.”  Now, if you have your own rules set up, “Hey, Scottrade.  If it drops below $200.00, buy the stock,” you’d pay less than $200.00.  If you don’t have that third piece in there, which is an optional field, then yeah, you’d be buying the stock, and your software would be working correctly.  The problem is the stuff that you’re reading from that’s giving you bad data.
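Matt’s buy rule, as code: the “third piece” is an optional limit that sanity-checks the feed. A hypothetical sketch, with numbers mirroring the incident above:

```python
# A user-side trading rule fed by a polled price feed. Without the
# optional limit (the "third piece"), bogus feed data like 123.47
# triggers a buy, and the rule is technically "working correctly."

def should_buy(feed_price, threshold, limit=None):
    """Buy when the price drops below threshold, unless it looks bogus."""
    if feed_price >= threshold:
        return False
    if limit is not None and feed_price < limit:
        # Implausibly low: treat the feed data as suspect, don't trade.
        return False
    return True
```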

ADAM GOUCHER: Yeah.  We can get into the nuances of financial markets, but with that purchase order, if you’re trying to buy it at $200.00, nobody is going to complete that transaction.  So, you’re unnecessarily flooding the system with bogus orders that will never be received.  The more dangerous part is when you start playing with options, because those are zero-sum games and you could lose your shirt on that one.  Hopefully, these big automated trading environments have multiple feeds running, in the same way that, if you build a datacenter, you have two different power companies supplying you power and three different Telcos providing you bandwidth from three different entry points into the building.  You have confirmatory data feeds to run your algorithms.

MATTHEW HEUSSER: You mentioned earlier, “Who hasn’t messed up a payment gateway or screwed up a config flag and all of a sudden dropped all of their orders?”  That seems kind of a big deal.  Should we take a step back and talk about CI versus CD versus the other CD?  It seems to me that a lot of companies are rushing headlong into, “We’re going to do continuous delivery[v].”  They don’t have the processes in place to do that super successfully.  There’s not a lot of content in what that means.  Only, “We’re going to be deploying all the time.”  Maybe Adam can walk us through, in his words, “What is CI?  What is the gap between where most companies are and the ideal, and how do you get there?”  Then, we can talk about what that means for testers.

ADAM GOUCHER: So here’s where I start to split hairs, but I think they’re important hairs.  There’s the notion of continuous integration, which is the easiest step in this whole process; because, in theory, you just hook up Bamboo or Jenkins or CircleCI or something to your Version Control System, and I think we’re safely at the point where every company is now using Version Control.  That wasn’t always the case, but I think we’re in a good state as an industry now.  Every time you do a Commit, you run a bunch of tests.  It could be one test.  It could be a thousand tests.  But, you’re just running whatever you have as part of the check-in.  That could be your classic unit tests—no database, no network—all the way up to your monstrous WebDriver tests.  Such is CI[vi].  The build passes, and then you’re ready to think about whether or not you are doing continuous deployment[vii] or continuous delivery.  So, continuous deployment really had its heyday in the media a decade ago.  Timothy Fitz was the big proponent of that, and that’s basically where it’s all automation, all the way through, from the time you check in your code.  The machines are making decisions, and it pops out into production.  It’s live.  Every single Commit goes through that pipeline.  Continuous delivery, though, is automation where automation makes sense and humans where humans make sense.  So, at my company, we do continuous delivery.  We have machines doing things all over the place.  Critical button-pushing needs to have a human and human judgment coming into play, and not every build goes into production, but every build could go into production.  So my little sound bite/snark—I know you’re shocked that I have a snark involved in this—continuous delivery is continuous deployment but with ethics, because humans are in there making the decisions around whether or not we should push this code live out into production and affect our users.
Now, most of us aren’t in life-critical industries.  You know, I do marketing.  Nobody is going to die if we don’t deliver a survey.  So, there’s not too great an ethical problem there.  But, you can imagine, in some of these healthcare startups, if you did continuous deployment and something goes out and it really shouldn’t have and could have been caught by a human looking at it, then, you know, people could die, which is an ethical decision that needs to be made from the top down but also from the bottom up when you’re building out these pipelines.  CI is just the server that runs all tests.  Continuous deployment is check-in and then it pops out into production at some point later without humans involved and continuous delivery is humans are making the decisions around when to move things through the pipeline ultimately to production.
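Adam’s three definitions differ only in the last step, which a sketch makes concrete. Real CI servers like Jenkins or CircleCI do the webhook and worker plumbing; this just shows the control flow, with invented stand-in suites:

```python
# CI: every commit runs whatever tests exist. The difference between
# continuous deployment and continuous delivery is only who makes the
# final call: the machine, or a human.

def run_pipeline(commit, test_suites, auto_deploy):
    """Run each suite against a commit; decide what happens next."""
    for suite in test_suites:
        if not suite(commit):
            return "build failed"       # stop at the first red suite
    if auto_deploy:
        return "deployed"               # continuous deployment: no human
    return "ready to deploy"            # continuous delivery: human decides

# Stand-ins for real suites, from unit tests up to WebDriver runs.
unit_tests = lambda commit: True
webdriver_tests = lambda commit: commit != "bad-commit"
```

With `auto_deploy=True`, a green build pops out into production on its own; with it off, the build waits for a human to promote it.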

MATTHEW HEUSSER: Yeah.  So when we say, “CI is super easy, you just hook up Jenkins or CircleCI to your code base,” I mean, we’re like doing automated builds all the time.  Anytime there’s a check-in or every hour or so, we check out the code and we do an automated build.  That’s like the very basics of it.  So, if you introduce a new dependency on a code library which you forgot to make available to the build system, you know about it.  It eliminates, “It works on my machine” for builds.

ADAM GOUCHER: That’s the main benefit you’re getting.

MATTHEW HEUSSER: Minimum.  Minimum.  If someone breaks the build, you know about it very shortly after they do it.  Most clients now have a few unit tests.  Anybody working on a Legacy System is going to have—I’ve said this on the show before—the “unit tests” that are 50 lines of setup and then a function call and then 50 lines of teardown, one assert or two, to make sure that when I create this object, this thing is that thing, and it’s just incredibly painful and hard and awful and no one wants to do it, and maybe you have to mock out the whole universe or maybe you have to actually instantiate the whole universe.  With this Greenfield stuff, you have some company out of Silicon Valley that uses CircleCI, and they put their code in GitHub and they use Amazon Web Services, and all of a sudden, “This is super easy.”

MICHAEL LARSEN:  With some Docker container that’s already set up someplace that has everything already in place to be spun up.

MATTHEW HEUSSER: Right.

MICHAEL LARSEN: Which again, to its benefit, it does work, but then at a certain point, if something’s not happening with your CI pipeline… we use Jenkins, so… do I go into the Jenkins config to figure out what’s going on, or is this something that’s happening in Docker?  Do I have to go through and traverse a number of things to find out why we’re having some error?  Is there some contingency that requires a particular binding inside of our Selenium?  Yearrgghhhh!!!

MATTHEW HEUSSER: This happened, I think, on the last podcast, off-air.  I was talking to Michael about how his company did some small component and where that happens, and we spent about 15 minutes trying to find where in the code that happened, right?

MICHAEL LARSEN:  Exactly, right!  That’s [LAUGHTER] that’s why it reminded me of this, because we were going through exactly that scenario, and, “You know, I’m sorry, I really wish I could tell you exactly where this happens, but at least from what I can tell, there’s no ‘there’ there, but I know it’s happening; I just can’t pinpoint exactly where it is.”  So what ended up happening was I contacted our CI guru and said, “Hey, if I want to look at this given component…”  “Oh, yeah, you need to go into the Docker container.  We’ve already set all that up.”  When it was explained to me, I was like, “OK, I get why we’re doing that.”  It saves time.  We’re able to spin up multiple instances, and those instances come up and they’re ready to go, but if you have to chase down a problem, there are a lot of little siloed places that you have to go and pick through to see, “Where is my problem really?”

MATTHEW HEUSSER:  So, let’s talk about the next piece that I find really fascinating that I don’t hear about much, that I call “Automated provisioning[viii].”  When a build run succeeds, I want to be able to go into a user interface, either at a terminal or in a web app, and say, “Give me a webserver running that build,” and I want it to be superfast.  Now, if we’re actually running Selenium automation as part of the build process, you have to do that anyway, because we have to stand up the webserver to run the Selenium against it to get the results back.  I don’t see this happening much.  It confuses me.  I would think that would be kind of important.

ADAM GOUCHER: There is an entire startup waiting to be written just around the provisioning of things, specifically in the context of testers, for just that: “I would like an environment that looks like this with this build on it, and I would like it now.”  But, from a tooling standpoint, you can say that’s part of the continuous delivery build pipeline because, again, you are going to be, in this context, believing that a human should be testing the code.  So, the human needs access to the code.  A lot of this comes down to the decisions that are in your pipeline.  For instance, we do everything on Master.  So, we don’t need to worry about, “I need to have it from Branch X or Branch Y or Branch Zed.”  It’s all in Master, so we have our humans’ environment that things get pushed to, and then the humans test it.  When the humans have validated it, then certain people can promote it.  But, from a larger provisioning perspective, from the scenario that you have lined up, “I want this build, and I want it now,” there is a large business opportunity for a startup to do something like that for manual testing environments, or even “run this set of tests in this configuration” environments.  Saying all this is kind of amusing; because, when I was using TestDirector, back in the glory days of like 1998 and 1999, [LAUGHTER], you had that sort of functionality in the systems of the time, but we seem to have lost that with the movement to the Cloud.

MATTHEW HEUSSER: Yeah, and it seems like we shouldn’t.  Type in a code and do the thing and, [FINGER SNAPPING], here it comes.  Docker kind of, sort of, gets us there.  In theory, you could have your build create a Docker instance that goes into some sort of artifact collection-y thing, and then you can just say, “Take this Docker image, and spool it up.”  There are problems with all of this, because usually I’m talking about the webserver; and, if there are database changes, then that’s somewhere else.  Nine out of ten new user stories that I work on don’t require changes to the database.
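If the build publishes a Docker image per commit, the “give me a webserver running that build” request reduces to one command. A sketch; the registry name, tag, and port mapping are assumptions for illustration:

```python
# Provisioning an environment for one specific build: run the image
# that CI published for that commit.

def provision_command(image_repo, build_sha, host_port=8080):
    """Build the docker command that starts an environment for one build."""
    image = f"{image_repo}:{build_sha}"
    return ["docker", "run", "--rm", "-d", "-p", f"{host_port}:80", image]

cmd = provision_command("registry.example.com/webapp", "abc123f")
# subprocess.run(cmd, check=True) would actually start the container.
```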

ADAM GOUCHER: We have solved the database problem.  Database changes are a problem.  They are arguably the most difficult part in the whole continuous delivery lifecycle, but there are patterns for these things.  I can walk along the office and reach for the book—there’s the book Refactoring Databases by Scott Ambler and Pramod J. Sadalage[ix]—and we know how to solve those problems.  The webserver is the easy one.  You just drop the code on it, but so is the database part at this point.  If it’s not the easy part of your deployment or your testing regimen, then that’s what you need to work on.  I’ve turned down business when I was doing web automation stuff because they weren’t ready for the web automation stuff.  They still had to solve the database problem.  They still had to solve their centralized logging problem.  They still needed to solve their configuration management problem.  They still needed to solve their fleet deployment problem.  There were so many other things that bringing web automation into the mix would not have addressed any of their problems.  They had greater pipeline problems.  I’m like, “I’m happy to take your money two years down the road from now, but you need to solve these things first.”

MATTHEW HEUSSER: I think that’s huge.  I don’t think the Refactoring Databases is nearly that easy.  It’s weird and tricky and hard to put in place.  It doesn’t always work.  It might involve rewriting all of your code.  But, aside from that, it should solve the problem.

ADAM GOUCHER: Well, everything is hard in implementation.  But, if you are going to do this, it’s something you have to do.  You cannot get away with using old-school database management processes with modern software deployment practices.  They don’t jibe.  I worked at one company where they had everything in stored procedures.  They largely are still doing crappy, high-risk, only-off-business-hours deployments because of their database constraints, and they have not yet been willing, from a business perspective, to actually address this problem.  They know the steps they want to take, but they’re like, “You know, this is hard, and we’ve got a decade’s worth of business logic there.”  You do.  But, you are also going to have a decade-old or more software development lifecycle.  If that’s what you want to do, then that’s a business call that you have to make.

PERZE ABABA: The one pattern that I’m really seeing, in all of the years that I’ve been part of teams that have tried to employ continuous integration or delivery or deployment, is that we sometimes skip over the idea that there’s discipline that’s necessary for this to work.  As we dig deeper into, you know, what needs to be done to be able to build something, we still need to go back and understand that everybody, [LAUGHTER], in the team needs to understand what’s actually required to be able to pull this off.  You know, there’s a lot of knowledge-based work that we think already exists, but that automatically becomes an assumption that the team can automatically do it.  But, that’s not really the case.  Continuous integration, in this case, is a practice.  It’s not a tool.  It really does require some degree of commitment and discipline from everybody in the team for us to be able to put this forward.  Of course, after we establish that, with the culture that we have, then we can move forward with using these tools effectively.  The biggest factor here is not the tools; it’s the people that take part in this.

MATTHEW HEUSSER: Well, I totally agree with that.  I see organizations where, what I would say, “the organizational bottleneck” is process and people and skills, but, “We’re going to go to continuous delivery in five years, so we’re going to Jenkins and Docker all the things.”  We don’t need more tools, because they will just make the things that take five minutes take five seconds, and the things that take five days are still going to take five days.  We are still printing out pieces of paper and walking them around for physical signoff.  [LAUGHTER].  We’ve got to do something about that.  I was about to disagree with Adam about database practices until I caught the stored procedure part, because Etsy was doing something similar.  They had Migration Mondays, and if your SQL changed the database, you could only deploy it on a Monday, and it had a rigorous review process.  They did deploy them dark.  So, they would deploy it and your changes would go into the system.  It’s very much like the Scott Ambler stuff.  Your changes would go into the system and populate both fields for a while, and then eventually you’d set the config flag and turn off the old field.  Those are relatively old-school database processes, but they were making them work.  Not as old-school as putting business logic in stored procedures, I guess.
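The dark-deploy database pattern Matt describes—populate both fields for a while, read with a fallback, then cut over—can be sketched directly. Field and function names here are invented for illustration:

```python
# During a migration, writes populate both the old and the new field,
# so old and new versions of the code can run against the same data
# in production at the same time.

def write_contact(record, email):
    record["email"] = email          # old field, still live
    record["contact_email"] = email  # new field, populated in parallel

def read_contact(record):
    # New code prefers the new field but tolerates rows written
    # by code that predates the migration.
    value = record.get("contact_email")
    return value if value is not None else record.get("email")

# Once every row has contact_email and the config flag flips,
# writes to "email" stop and the old column can be dropped.
```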

ADAM GOUCHER: All software practices are, to some degree, old.  Jerry Weinberg was writing unit tests, you know, in the 1960s.  [LAUGHTER].  You know, “Oh, we’ve got a unit testing framework.  These are great things.  Yeah.”  It’s a known thing.  The way Etsy was doing their database—if you have a large codebase that’s distributed across multiple systems with multiple datacenters, that’s kind of how you’ve got to do it.  But they also had—when I last knew people at Etsy—an IRC bot through which you could deploy the code at any point.  The database changes possibly went out only on a particular day, and you always had to factor in that there are going to be multiple versions of the code running in production, so your code had to be pretty darn robust to be able to handle it.  The old value is set and the new value is not, so you have to read from multiple fields and make the decisions.  It’s complicated, but, you know, as Perze said, it’s discipline.  That’s an XP practice, which predates Agile.  It certainly predates Scrum and the various other so-called Agile frameworks.
