The Testing Show: Dealing with Deadlines

February 25, 03:55 AM
Transcript

You know the feeling. Someone is breathing down your neck, saying that we have to get the release out on this date at this time or else… well, it won’t be pretty, let’s just leave it at that! Sound at all familiar? Yeah, we feel your pain, and we talk about it quite a bit. Deadlines are a reality. Sometimes they are essential and necessary. Often they are nebulous and ill-defined. Regardless, testers deal with them, and the Testing Show panel shares a few of our experiences and how we managed, or didn’t manage, those expectations.

Also, eClinicalWorks got to see firsthand that untested, buggy, and underperforming software can cost more than lost sales. In this case, it got them into $155 million worth of legal trouble.

 


 

 


Transcript:

MICHAEL LARSEN: Hello, and welcome to The Testing Show.  I’m Michael Larsen, your show producer, and today it is an all-panel show, meaning it’s just your regulars.  Let’s welcome Perze Ababa?

PERZE ABABA: Hello, everyone.

MICHAEL LARSEN: Jessica Ingrassellino?

JESSICA INGRASSELLINO: Hello.

MICHAEL LARSEN: And our host, Mr. Matt Heusser?

MATTHEW HEUSSER: Hey, thanks Michael.  Great to be here.  Our first piece of news: I remember when Sarbanes-Oxley came out, and there was this whole thing about how the CEO was legally liable for the numbers in the financial statements, and we did all these ridiculous and silly things because of it.  But, no CEO has ever gone to jail for that—that I’m aware of.  It just woke them up, and we stopped seeing Enrons.  But, this company had a $155 million fine for making electronic medical record software that was buggy.  It’s eClinicalWorks.  Apparently, not only was it buggy, but they paid kickbacks in exchange for promoting the product.  A couple of people are a little familiar with this story, and I’m going to go to Perze first.  Tell us what you know?

PERZE ABABA: Well, this is essentially second-hand information for me.  Apparently, on the East Coast, there are a ton of EMR companies, medical record companies; and, from a reputation perspective, theirs is… essentially not as good.  I know some people who know, [LAUGHTER], some doctors who switched over to the product.  It wasn’t really as good as they claimed it would be, and that seemed to be the start of those challenges.  Of course, we can continue the conversation toward what was reported in the article, and things kind of line up a little bit.  I would love to have a little bit more detail, but I don’t have it right now.

MATTHEW HEUSSER: Yeah.  One thing I noticed in the article, it says, “The government charged that eClinicalWorks falsely obtained certification for its software by concealing faults in compliance.  For example, the company entered in its programs only the limited number of drug codes required for testing rather than programming the capability to retrieve any drug code from a complete database, according to the Justice Department.”  So, the first time I read it, I thought it meant, “They only tested the limited number they were required to test and there were bugs outside of it and they should have done more.”  I’m thinking, “That’s bizarre.  If they tested what they’re required to test, it’s fine.”  The second time I read it, I realized, “They didn’t actually write the code to do anything except the examples.”  The specification, the test plan, the test cases: they wrote the test cases, and the software doesn’t do anything more than that.  It’s impossible to interpret this as anything but, “They didn’t actually write the code for it to do what it was supposed to.  They just wrote the code to make the test pass.”  The third thing is, “concealing faults in compliance,” which means they knew there were bugs, but the bugs weren’t on the test plan.  So, they just showed the test plan to the FDA or whoever was auditing them, put it out, knowing that it was buggy, and that’s criminal.  The CEO is going to be liable for the penalty, and one of the programmers is going to pay $50 grand.  The project managers are going to pay $15,000.00 each.  So, it’s actually going down to the individual-person level, although not as big as the huge fine.

MICHAEL LARSEN: Well, also, we can look at this one telling line in the story itself.  It says, “The government’s complaint alleged that eClinicalWorks failed to adequately test its software or to fix bugs for months or even years after they were detected, while paying at least $392,000.00 to influential customers to recommend eClinicalWorks products to prospective customers.”  That just sort of stands out as, “failed to adequately test its software or to fix bugs.”  How many of us have been in that situation where, “Yeah,” something has been reported and it hasn’t been fixed?  Without some more context, though, it’s like, “What is the bug that they’re talking about here?”  They didn’t necessarily code anything to do anything, other than to pass the tests that they were asked to pass, you know?  “Okay.”  [LAUGHTER].

PERZE ABABA: It looks like it’s not “just passing the test that it’s required to pass” either, but it seemed to me, from the paragraph that Matt was reading earlier, that they even tweaked the test plan so that it would actually match how the system works.  For me, that is very, [LAUGHTER], very dangerous from that perspective.

MATTHEW HEUSSER: When you read it, “The company entered in its programs only the limited number of drug codes required for testing rather than programming in the capability to retrieve any drug code from a complete database.”  That means that they just got the sample of 30, 40, or 50 codes that you could enter, and that’s all their software supported.  That’s bizarre.

PERZE ABABA: This reminds me of a project, [LAUGHTER], that I had six years ago.  We had an upgrade to a new Solr infrastructure for our search engine and none of our predefined search terms (none of them) worked, and the only data that was loaded in the database was essentially anything about chicken—any articles about chicken, recipes about chicken.  It was kind of funny, but it was also frustrating, [LAUGHTER], at the same time that you pretty much were limited—

JESSICA INGRASSELLINO: Wow.

PERZE ABABA: —according to, [LAUGHTER], what you’re able to find.  You know, of course, this was brought up, and it was remediated right away.  Apparently, they were in a rush.  They wanted to show that there were search results.  They didn’t think that people would look into the quality of the search results.

MATTHEW HEUSSER: That tells me that somebody built them a test database and that’s what they shipped.  “Got the test database running.  Look, you can search for ‘chicken’ and get a ton of results.”  “Cool.  Ship it.”  “Boss, you don’t understand.  This is just the test database.  Someone should make the real database now.  I’m just the tester.”  [LAUGHTER].  “Ship it.  It’s good.  That’ll buy us a week.  Then, we can have it actually working.”  I’ve seen that taken to absurd lengths in the past.  A common trick that I have mixed feelings about is, “Do this hacky thing.  It’ll buy us a week so we can do it right.”  Before I was hired at McGraw-Hill, there was a company called “2020,” later acquired by McGraw-Hill, and they had a contractual obligation to deliver a CD on a certain date and it wasn’t done.  So, they third-class mailed a broken CD to the customer, and the customer got it four days later and said, “It’s broken.  I can’t use it.”  “Oh, gee.  We’re sorry.  We will get you a new one.”  That bought them four days to actually make it work.  That’s just terrible.  [LAUGHTER].  That’s terrible.

JESSICA INGRASSELLINO: Wow.

MATTHEW HEUSSER: Yeah.  Yeah.

MICHAEL LARSEN: Dang.  [LAUGHTER].  Oh, geez.

MATTHEW HEUSSER: It worked.  The horrible thing is it worked; but, I mean, couldn’t they have just called them up and said, “We need four days?”  I mean, wouldn’t you rather do that?  [LAUGHTER].  I don’t know.

JESSICA INGRASSELLINO: Yes.  My answer is, “Yes.”  But again, I am “just the tester.”  So, I might recommend that, but it wouldn’t mean I would necessarily be listened to, would it?

MATTHEW HEUSSER: Yeah.  So, some other time we can talk about human nature and, I mean, the benefits of integrity and how to do integrity well.  But, today, I was thinking we could talk about The Impact of Deadlines.  So, in this story, with our news item, I looked the company up.  I looked up all their testers, and they’re all on a different continent.  Right?  Especially when you outsource, but even if you have employees, if you have physical distance and you have a different culture—whether that’s San Francisco or New York or wherever it is—I think it’s easy to fall into the trap of, “I’m just doing this thing for this guy and I’m going to do the bare minimum to be compliant and he is going to hit his deadline,” right?  “We’re going to get him a CD on the date that he needs it.  It’s going to be broken and won’t work, but we’re getting him the CD.”  The goal of meeting customers’ needs and collaborating and customer satisfaction gets subsumed by the goal of hitting the deadline.  Somebody was talking about this.  Alan Page.  Was it Alan Page?  It was either Alan Page or Curtis Pettit who was recently saying that, “At Microsoft, the goal of customer satisfaction gets subsumed by pleasing your manager too often.”

That was that person’s experience, which makes sense at a big company with multiple layers between the technical staff and the customer.  Right?  So, it ends up being, “Hit the deadline, shipping is a feature,” whether or not that’s good.  So, what do you do if you’ve got this killer deadline approaching and not enough time to test, and how do testers respond?  I don’t think it’s unique.  I worked with a vendor that claimed on their website to do “enterprise CMMI-5 work,” and they turned in software that created reports for a medium-sized insurance company.  It had 400,000 members, and the reports wouldn’t finish.  There were things like nested select statements inside of a select statement.  So, you have to get all 400,000 members and then query each member individually.  The code was correct, but the SQL was so bad that when you put it on a big database, it would just time out.  Other people went to the vendor and said, “This stuff doesn’t work;” and, the vendor said, “Yeah.  Your requirements didn’t say how big the database needed to be.  So, we couldn’t possibly have known that we needed to have 400,000 members.”  They came back and said, “What are we going to do, Matt?  The outsourcer said, ‘The requirements were insufficient.’”  “I’ll tell you what we’re going to do.  We’re going to explain to them that there’s a practice in CMMI, Level 2, called, Requirements Review, and they should’ve caught that.  ‘Performance’ is on the checklist.  How fast does it have to go?  What kind of data do we need in our databases to have adequate performance?  They should’ve done that.  That’s what you tell them.”  So, they did.
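The performance trap Matt is describing is the classic correlated-subquery shape: a SELECT nested inside a SELECT gets re-evaluated once per outer row, so the cost grows with the size of the member table.  Here’s a minimal sketch in Python with SQLite.  The `members` and `claims` tables are made up for illustration (the insurer’s real schema and the vendor’s actual SQL are unknown); the point is only the contrast between the nested form and an equivalent single-pass JOIN:

```python
import sqlite3

# In-memory database with a hypothetical members/claims schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE claims (member_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO members VALUES (?, ?)",
                [(i, f"member{i}") for i in range(1, 101)])
cur.executemany("INSERT INTO claims VALUES (?, ?)",
                [(i, 10.0 * i) for i in range(1, 101)])

# The slow shape: a SELECT nested inside a SELECT.  The inner query
# runs once for every member row.
slow = cur.execute("""
    SELECT m.id,
           (SELECT SUM(c.amount) FROM claims c WHERE c.member_id = m.id)
    FROM members m
    ORDER BY m.id
""").fetchall()

# The equivalent JOIN with GROUP BY: same answer in one pass.
fast = cur.execute("""
    SELECT m.id, SUM(c.amount)
    FROM members m JOIN claims c ON c.member_id = m.id
    GROUP BY m.id
    ORDER BY m.id
""").fetchall()

assert slow == fast  # identical results, very different scaling
```

On 100 rows both finish instantly; with 400,000 members and an unindexed inner table, the per-row re-scan in the first query is exactly the kind of thing that makes a report time out while the JOIN form still completes.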

They went back and said, “Hey, CMMI Level-5 outsourcer.  It says ‘Enterprise CMMI-5’ on your website, but you didn’t do Requirements Review.  I think that’s not on us.”  The response, which blew me away, was that “CMMI Level-5 work is more expensive” and they “only do it when it’s specifically requested.”  So, for those of you who don’t know, CMMI is the Capability Maturity Model Integration, a process-improvement path with five levels.  Supposedly, Level 5 is the awesome, top-of-the-line, fantastic, amazing one, but it requires you to do a bunch of extra stuff.  So, although they publicly claimed to be “Enterprise CMMI,” which I think means, “our entire enterprise is CMMI-5,” they only actually did it once, for one project, and they got the little checkbox and put it up on their website.  That’s not really much to do with deadlines, but I think it speaks to the misrepresentation we get when we confuse our goal of delivering valuable software to the customer with fulfilling the contract.  It’s a problem.

PERZE ABABA: When we’re dealing with deadlines, there’s got to be a story before that.  Over-commitment, for example.  Because, I mean, in going back to the article that we were talking about, these guys were funded.  They’ve been doing this for years, and there’s definitely that pressure for them to be able to release something.  In order to make that deadline, human behavior kicks in.  We have this sense of—the one term that I can remember is—“go fever” really where, “We have to release.  We have to release.”  Because everybody is in that space or “we want to be able to collectively decide this course of action so that everybody’s happy.”  That “everybody” doesn’t include compliance or how your customers are actually going to be affected.  It results in something just as bad as what was published in the article.  I guess my question there is, based on everybody’s experience, how do you guys go counter to that line of thinking, especially when everybody is already at that level where, “Yeah, we really need to release.  We’ve spent everything that we need to spend, and we need to have a product out there?”

MICHAEL LARSEN: I sometimes run into this, even now.  I get the benefit of the fact that, as a release manager, I do get the opportunity to say, “I’m looking through this, we’re having problems getting this stuff done.  Our burn-down has discovered this; and, no, I can’t put this out.”  Because, frankly, I do make the decision; I can’t post the code up on, [LAUGHTER], the website if it’s got a problem with it.  But we have toyed with this idea in the past, because we’re somewhere in between on continuous delivery.  We’re not a pure software‑as‑a‑service play.  We have customers with software that they physically install behind a firewall on their own site, and we have to make that software available to them.  So, we don’t really get to make that case that, “Hey, we can push whatever we want to and just put up a single thing and everybody’s happy.”  Because of that, we do try to have a regular cadence for our releases, and that regular cadence right now is once a month; and, with that regular cadence, there’s this tremendous desire to say, “We want to hit that date.  And, if we don’t, well, some nebulous doom is going to fall upon us.”  And, frankly, that nebulous doom never happens.

Every customer basically says, “We would rather wait a couple of weeks to get a new feature, if it means that you’re making sure that the new feature works the way it’s supposed to and doesn’t break something that we count on,” because that happens, frankly.  So, we’ve got a deadline.  It’s an arbitrary deadline, just because we’ve said we wanted to make a release available.  Now it’s often a case of, “We go when we go.”  If it’s two weeks after we said we were going to release on a regular cadence, so be it.  At least that way we know that we’re not rushing something out the door to meet some self-imposed date, just to show, “Look at us.  We’re getting it out on this date, on this regular time period.”  I think there’s a benefit to doing that.  But the other challenge, of course, is that when you set yourself up for a “regular release cycle” and something does go wrong that requires a fairly large amount of time and effort to fix, it may not even be your direct problem.  It may not be that your code is broken; it’s that a dependency your code is built with has changed, and now you have to fundamentally change something about the way your implementation is put in place.  That takes time, and you just can’t say, “Well, we need to fix that, and we need to make sure that we get it out by the 15th.”  There’s no rationale for that.  You have to make sure that you’ve done all the due diligence necessary, so that you know that, now that you have this updated component that you don’t have as much control over, it’s working correctly and everything you need is in place.

MATTHEW HEUSSER: You know, an interesting point about deadlines is, “Who feels the pain when the deadline isn’t met?  What happens?  The deadline is blah, or what?”  The worst project I’ve ever worked on, the one that hurt my sense of well-being as a human, was the one where we had to ship it on January 1st so the CIO could get his bonus.  They brought me in to do the code review and I said, “This is horrible.  This is terrible.  This is the worst code I’ve ever seen.  You hired a VB guy to write Perl code.  It’s not going to work.”  Then, my boss said, “Matt, you don’t understand the purpose of code review.  You don’t get to pass/fail it.  You have done the code review.  So, you need to check the checkbox that you did the code review, because it’s going to production so the CIO can get his bonus.  It has to go to production before January 1st.”  And I was like, “There are hardcoded test directories and test databases in here.  If you put it in production, it won’t work.”  And he said, “Fine.  Write down all your showstoppers, and we’ll see.”  I’m like, “Yeah, showstoppers I can see by looking at it, [LAUGHTER], but there are going to be other ones that I can’t.”  That was a horrible thing, but the deadline was, “It has to go for the CIO to get his bonus.”

There was another one where we had to hit a deadline for government compliance or we couldn’t sell this product, and it was just the end of the world.  It’s actually documented in the Preface for The Clean Coder.  I wrote it up.  We had to have it on Friday afternoon.  All the technical pieces were done, except Legal hadn’t approved the forms.  “So, great.  Let’s go upstairs and talk to Legal.”  “Oh no, Matt.  They’re professionals.  They can’t.  They’ll get to it when they get to it.”  But, it was a deadline.  We had to do it.  We couldn’t actually put the PDFs up on the website so that people could print them out and enroll in Medicare.  So, since we didn’t have that, we weren’t going to put the website up; we’d just wait a couple more days.  But, we had to do it or we weren’t in compliance.  There was another one where—another website—for legal compliance, we had to have 12-point font, which doesn’t make any sense at all because, like, you can’t really measure font that way on a computer screen.  But, we had to go through all this stuff.  In the end, we realized that it wouldn’t fit on the screens.  So, some of it had to be just a tiny bit smaller.  The same people who had said it “had to be 12-point font,” that I didn’t understand, that I needed to make it work, were now saying, “It’s okay.  The spirit of that regulation has no fine print.”  It blew me away.  But I think there’s a point there.  It’s, “Who feels the pain?”  I’m sorry, but this is a human‑nature thing.  If you don’t matter to some people, if they don’t care about you, then, “You’ve got to do it.”  They’ll put your feet to the fire and, “It’s a deadline.”  And, you know, “Give up your nights and weekends, and do it.  Just do it.  You’ve got to do it.”  And, if the shoe were on the other foot, not so much.
My question then is, “What is that effect where you stop being looked at as a human and start being looked at as a cog in a machine, and how can we undo that?”  Because as soon as you undo that, people realize, “Oh, yeah.  Right.  Your brother’s getting married this weekend and you would be giving that up so we could have a little widget on a checkbox on the website three days earlier.  Yeah, just go to the wedding.  It’s okay.”  Like, how do we get that humanism back into software?

JESSICA INGRASSELLINO: Maybe I’m just really lucky.  I only worked at one company where it was really intense like that, where we had to make ourselves available for a Sunday night push every single week.  It sucked.  I didn’t like it.  [LAUGHTER].  But, at most of the other companies I’ve worked at, there are kind of like the Deadlines and then there are the deadlines.  I found that by negotiating and talking about the deadlines, if those start to stack up, like, “Oh, we missed this and then we missed that,” it opens up a conversation for, “Hey.  Is it realistic for us to want to take a look at doing this thing in a month or two months; and, if not, how can we either make a V1 that we can talk about as being realistic for the kind of Deadlines, or, if we can’t do it at all, what’s happening that’s making this not a realistic goal?”  I feel like I’ve been lucky enough to work where we’re delivering products kind of in that spirit and in that way and doing our reviews—our, you know, mid‑sprint reviews—at companies where we do a two-week cadence or companies where we’ve released multiple times a day.  Because we were releasing behind feature flags, we had some different flexibility in testing.  So, we knew a little bit more about whether our kind of end‑goal was reasonable.  It sounds to me like—and I don’t know if this is true, because I haven’t worked in these highly-regulated environments—some of them really just have that end goal in mind and kind of don’t do the deadlines.

It’s all about, “Well, two months from now, we’re going to deliver the PNIF and that’s it.”  What I wonder is, “Is there a way to build in some smaller checkpoints so that there are, maybe, more red flags raised earlier in the process that can help people who are working on those teams address issues before it’s, ‘Okay.  Well, we’re supposed to deliver in 1-1/2 weeks and you might have a wedding to go to, but we don’t care because we need to deliver in 1-1/2 weeks.’”  I don’t know enough about very large companies with highly-regulated environments to say if that’s a possibility or not, but I do know that having the smaller, more regular conversations around deadlines and adjusting our scope and adjusting sprint requirements along the way has definitely helped, and sometimes it has changed the decision about, “Go.  No go.”  Or whether we wait a week or two to actually deliver something.  So, I’ve found that to be a little more humane also, because none of the companies have ever said, “Well, you have to work overnight and on the weekend because we have to get this thing done,” because we have had those checks in place.

MATTHEW HEUSSER: Yeah.  I think that smaller batches are a huge change.  “You’re not going to hit the deadline, and you’re going to be late by an hour, because it’s a four‑hour estimate.  It’s not a big batch.”  It’s just so much better than when we release twice a year and we’re going to be late by three months.  “Oh, no.  A big-stupid war room, a big-stupid triage meeting.  What bugs can we not fix, because we never fixed them before?  Now we’re running out of time.”  Certainly, Agile and Scrum and all those things have helped: the smaller the batches you work with, the more you’re predicting your performance and the less you’re guesstimating it a year out.  When I was at Socialtext and they had the big second round of layoffs, I started working every other Saturday for about four hours to get the release out.  But that was a lot of, “Run a script, go away, come back, and check to see if there were errors.  Debug the errors and then run another script and go away.”  It wasn’t that bad, and I think small batches help.

All right.  Now, our mailbag is:  [email protected].  If you have questions and you want us to talk about some particular issue that you’ve struggled with, maybe you’ve struggled with deadlines, and you thought, “This episode wasn’t that helpful,” and you want to come on and tell us how to do it or something else, please drop us a line.  Now, it comes to the part of the show where we talk about what’s going on, and I’ll start.  This will be up and out, and I think you will be able to register and get the videos.  I don’t know if there’s a fee after the fact.  So, by the time you’re listening to this, Online Test Conference should be over, and I’ve got a talk with Chris Kenst on Getting Git, specifically for testers.  So, that’s, “What is Git?  How would you use it?  Where could you use it?  Uses beyond code?”  By the time you’re done, you will actually have your own Git repository that you added some things to, and you’ll be able to have a much more cogent discussion with the developer about “version control,” which is the goal.  I’m excited about people learning technical things that are not just writing code.  So, www.onlinetestconf.com, I think.  We’ll have notes in the Show Notes.  That’s my one new thing.  Anybody else have anything new and exciting?

MICHAEL LARSEN: Well, I don’t know if I can say that this is “new and exciting,” but this is what I will call “an after-the-fact call to arms and attention.”  I want to give a special plug to the Ministry of Testing, and I want to encourage people—if they have not already taken the time to do so at some point over the past two months—to go back into their Dojo area and take a look at the various 30 Days of Testing Challenges.  They’ve done a few of them; and, just this past month, I finished 30 Days of Accessibility Testing.  I ended up going through and writing a blog post, roughly speaking, for each of the 30 days.  Toward the end, I did a “fourfer,” where I put four of them together.  But there are a lot of really good insights, and this is coming from somebody who considers himself a pretty active and somewhat polished accessibility tester.  There were a lot of things that I learned from going through that process, and I’m not going to rehash the entire thing.  But I will put in the Show Notes the original listing for the 30 Days of Accessibility Testing, and I will say that there are some pretty neat insights and areas that you can discover and learn about.  You can up your game tremendously just by going through these examples.  So, that’s my plug.  The 30 Days of Accessibility Testing.  It’s already over, but that’s no excuse to say, “Ugh, it’s done.  I don’t have to deal with it.”  Go take a look at it, and take a look at some of their previous challenges.  Get a chance to play around with them and see how you can up your game.

MATTHEW HEUSSER: Thanks, Michael.  Anybody else?

PERZE ABABA: Well, for me, it’s my evergreen encouragement to check out NYC Testers, if you are on the East Coast.  We do have, you know, regular MeetUps.  We are also scheduling workshops where testers can be hands-on and you’re not just listening in on someone else’s experience.  So, please check us out:  https://www.nyctesters.org and https://www.meetup.com/NYC-Testers.

JESSICA INGRASSELLINO: By the time the show is on air, I will have already spoken at Python Day Mexico; and, other than that, I’m laying kind of low with external activities for the next month or two.

MATTHEW HEUSSER: I think that’s it for today.  We’ll be back in two more weeks.  We should be talking about—if we haven’t already, depending on the order—(we either just talked about or we’re going to talk about), Testing for E-Commerce and Retail, and I think it’s going to be a lot of fun.  But I think deadlines are still real, and we’ve got to deal with them.  So, thanks everybody.  We’ll see you in a couple of weeks.

MICHAEL LARSEN: All right.  Thanks for having us.

JESSICA INGRASSELLINO: Thanks.

PERZE ABABA: Thank you.

[END OF TRANSCRIPT]
