
The Testing Show: State of the Testing Practice (Part 1)

February 15, 2017

It seems that 2017 is shaping up to be the year of the two-parter, as we are back with another two-part episode. This is Part One, in which the Testing Show regulars chat with Alex Schladebeck and Joel Montvelisky about the way that testing is practiced globally.

Joel has some insights on this in that he steers the State of Testing Questionnaire that runs in January and February each year, and gathers statistics about how testers actually work. We look at some issues that were discovered with the survey, such as how many organizations claim to do automation versus how many actually are making a solid go at it, as well as where those organizations choose to, or choose not to, apply their efforts.

Also, in the news, what happens when TSA’s computers go out on one of the busiest travel days of the year (the day after New Years)? The Testing Show panel and their guest weigh in, and they have plenty to say, both on the outage and the process in general.


[What you are about to hear is Part One of a Two Part show on the State of the Testing Practice. Part Two will be posted in two weeks. For now, though, sit back, relax, or do whatever you like to do when you listen to your podcasts, and now… on with the program]


MATTHEW HEUSSER: Hello everyone, welcome to The Testing Show, the absolute best podcast on software testing that I’m going to put out this week. This week, we have a kind of a different and special show. We’ve got international guests to talk about the state of testing worldwide. We have Justin Rohrman returning to the show.




MATTHEW HEUSSER: Jessica Ingrassellino, welcome back.




MATTHEW HEUSSER: But we also have from Braunschweig, Germany, Alex Schladebeck… was that close?


ALEX SCHLADEBECK: Thank you. It was close.


MATTHEW HEUSSER: All right, and what have you been up to lately, Alex?


ALEX SCHLADEBECK: Getting back into the new year after Christmas break, and the new year’s going to start off well.


MATTHEW HEUSSER: Your job has been changing.


ALEX SCHLADEBECK: Yes, it has, and you know this better than I do. I have just acquired the role of Head of Quality and Test Consulting at the company, so now in charge of making sure that our testers are doing cool stuff for themselves and for their projects. Our customers are benefitting from that. Still doing some hands on stuff where I can, but more looking at how everyone else is working and how I can help them do that. Exciting times.


MATTHEW HEUSSER: I knew you were doing more management-y stuff. I thought you were a dev manager for a while.


ALEX SCHLADEBECK: I’m a product owner. I still am. It’s going to be one of the things that are probably going to get less and less over the next year with other stuff coming up, but I’ve been doing a lot of product management for the last seven years as well.


MATTHEW HEUSSER: Right, you’ve been doing product management, and now it’s going to become secondary to your quality management work. Wow, congratulations. That’s awesome.




MATTHEW HEUSSER: Finally, we have Joel Montvelisky from PractiTest.


JOEL MONTVELISKY: That’s correct.


MATTHEW HEUSSER: And they make Unit Testing-ish tools, I think.


JOEL MONTVELISKY: It’s actually QA Management, end to end requirements management, bug tracking integration…


MATTHEW HEUSSER: Wow, that’s a lot of stuff.




MATTHEW HEUSSER: I don’t know why I had you associated with…


JOEL MONTVELISKY: Well, internally we do unit testing, but not so much externally. We do integrate, though, with Jenkins and a lot of other CI systems, so we have a little bit of unit testing in us, but it’s mostly, again, to follow the flows of manual testing; mostly the QA part of the house is working with us.


MATTHEW HEUSSER: And what do you do at PractiTest?


JOEL MONTVELISKY: I actually have a couple of roles. I’m one of the co-founders; I’ve been in the company for a long time. One of my hats (I wear a couple of them), one of the ones that I like the most, is Head of Testing, which basically means that I tell the testers as well as the developers (all of our developers also test) how to test and what to test. I’m mostly in charge of product, so I’m working all the time talking with tons of customers about how they’re using the system, how we can improve it, getting ideas, and again, mostly all around the testing community, to try to understand how people are working.


MATTHEW HEUSSER: Well, it sounds like you’d be a good fit for the show then. Some news items, a couple of things. It’s the beginning of the new year right now. One of the news items is that, in the United States, the Transportation Security Administration, specifically Customs and Border Protection, apparently had an error in their software on January 2nd, which is one of the busiest travel days of the year. I was flying home on January 2nd from Oregon, but didn’t leave the country, so apparently, it was only people that were coming into the United States that got held up for several hours. As someone that flies into the United States reasonably often, it’s really annoying to be held up more than fifteen minutes. There’s no bathroom, the lines are super awkward. It feels like a hallway that you are in until you can get all the way through. You can’t go left or right; you have to walk in a specific direction. I can’t imagine being there for hours. It seems like what happened is, this is a software problem, and we’ve become so used to the automation and tooling doing it for us, that it was barely possible for them even to process these passports without the software working. The guy that spoke at CAST last year wrote a paper…






JUSTIN ROHRMAN: I was just thinking about that very thing.


MATTHEW HEUSSER: He wrote a book on this that Justin has done some commentary on. It worries me. I don’t know if you guys remember when we used to use credit cards, we used to have the little thing which you could pull out and make carbon copies with. The ta-ching, ta-ching machine, in case the credit card machines go down; I don’t think we even have those anymore. I just think you couldn’t take credit cards. Am I right about that?


JOEL MONTVELISKY: There are a couple of places that actually do, but I’m not sure that I’ve seen them in the U.S. I do see them quite a lot abroad. In Israel where we work, some places might still have them. Going back over to this outage, it’s incredible. I don’t know how many of you travel with small kids; I do travel; I just got back to Israel from the U.S. with three small kids, and it sounds terrible. One of the things that I noticed, even when these systems are working, I don’t know how many of you have passed through customs lately, but they’re trying to implement a new system that is basically passport recognition, stuff like that, so you need to stand in a line in order to do that process. I travel with three kids, and the kids basically don’t work there, because their faces are not recognized, because they change so much. After we actually do that “automatic path”, they send you to a manual path, and even that path takes a lot of time. Again, I think we are trying to rely too much on automation, even in these places, without really understanding if it works or what is, again, the customer perspective on that. It’s something we really should take a look into: not only whether it works or doesn’t work, but for whom it works, and how we should have the correct process next to the correct technology there.


ALEX SCHLADEBECK: Actually, I have an anecdote towards that, because I fly quite a lot as well. I have a passport that can work, theoretically, with these machines. They tell you to take your glasses off, because that’s when the facial recognition works. Unfortunately, then I can’t read the instructions it is giving me, because I have pretty bad eyesight. In terms of who this is designed for, I think it’s designed for the people working at the airport, so that they don’t have a great deal of pressure, things to do, or long queues, but it’s actually not that good of an experience for the customer, because you usually end up spending twice as long in queues, and you get frustrated because it didn’t work.


MATTHEW HEUSSER: Yeah, it seems to me that the line is not particularly faster with this new system than it used to be. It used to be that you would hand your passport to the guy, and they would look at you, and they would look at your passport, and they would stamp it, and you would leave. Now you have to go through the thing with the glasses, you have to get your picture taken by a machine, which then does the “yes, you are who you really are”, and then you still have to walk by the person and give them your piece of paper, and they look at it and it says “no red flags” and you walk out.


JOEL MONTVELISKY: And if you’re an international passenger…


JUSTIN ROHRMAN: I feel we’re making an assumption here that the people going through customs are the customer, and I’m not really sure that’s true.


All: [laughter]


JUSTIN ROHRMAN: I don’t think TSA cares too much about how long the lines take or what people in the lines feel like. They’re more concerned about how many people they have to staff and pay for to make customs operate, so the automation is probably some sort of attempt to, like, cut staff.


MATTHEW HEUSSER: So if I paid careful attention, I would probably notice there used to be six booths in the Detroit airport, and now there are five staffed booths. The line is about as long as it always was, but now they can get by with one less person.




MATTHEW HEUSSER: We’re pretty far afield from Customs and Border Protection went down for a couple of hours on January 2nd, but that’s OK. So we should probably move on to the next one, which is the State of Testing Survey that PractiTest is doing right now! [Editor’s Note: The survey has completed, but you can sign up to receive the State of Testing Report for 2017 when it is published].


JOEL MONTVELISKY: We’re actually doing it with the guys from Tea Time With Testers, people who know Lalit. Justin actually helped us a little bit; he was one of the people helping us review the questionnaire. Well, it’s basically an ongoing project. We started it about four years ago. It came out of… I was working on my blog and I was trying to get some real data about how people were working. How many are doing what (back in the day it was Agile versus Waterfall and other types of metrics), and there was nothing I could actually get my hands on. I went to Lalit, who actually runs Tea Time with Testers, and I said “Do you know something about it?” We both realized there was nothing around that. Not a single survey that was worldwide in trying to give that data, and so we put the first one out. We got, I think it was about 400 people answering it, and the second year we got about 600, and last year we got a little bit over 1,000 people. From being released only in English, we’re now doing it also in a number of languages, so it’s become kind of a project of its own, and it’s really, really cool, because you get all sorts of people trying to collaborate and talk about it, so it’s actually very, very nice. I don’t know if some of you have actually seen some of the last versions that we sent.


ALEX SCHLADEBECK: I don’t think I did, actually, so I should make sure I see it this year [laughter].


JOEL MONTVELISKY: Yeah, again, just look for the “State of Testing” and I guess you will be taken to the QA Intelligence Blog. You will be able to take it there, and again, the whole idea is to [not] drive it based on any biased agenda. We’re really making an effort, trying to involve people obviously from outside of our company, and trying to actually get a good idea of what’s going on out there. Again, there are some things where people say “Hey, you know what? Duh! Everyone knows about it”, but all of a sudden, you get, here and there, information that makes you want to say “Hmmm…” For example, about 80% of people, I think, say that they have automation in their company, but out of that number, there are 15% or 20% who are not even aware of how much automation they have, or even where the automation is going. So yeah, we might be automating more, but what are we doing with it? Does automation mean we have three scripts, or 80% of our tests actually covered in scripts? There are a lot of things in there, a lot of nuances that we start seeing. Obviously, we hope to see more, as we’re able to have more and more data points on similar questions that were asked over the years.


MATTHEW HEUSSER: Yeah, that’s probably a good place to start: “we have automation”. I actually was working with a company in New York. It’s part of the test community, and someone actually said, when they got their new CIO in, he was pro automation, so they trotted out the Selenium IDE stuff, and said “Look at that, woo! Screen flies by, woo!” Right? And he believed that they had automation. And that was probably one out of eight applications that that guy supported, and of that, of course you couldn’t just press the button and get a green bar and release it. There were other steps, but now the CIO thinks that they’re doing automation. They literally just don’t talk to him about the other stuff, because he would ask awkward questions, say “Why haven’t you automated it?”, and they’d have to explain and there’d be a project, and the project would maybe fail and who knows… you don’t bring that up. It’s not like he’s going by your cube every day and saying “What are you doing?” Is that experience just them, or am I right that there is this sort of “the things we talk about” and “the way things actually are”, and there’s a difference between the two?


ALEX SCHLADEBECK: I’ve seen a lot of projects that I’ve gone into to consult or to talk about problems that they are having because they want our help, and they say yes, they do automate, and it turns out, after asking a few more questions, that what they have is something like Quality Center (or ALM as it’s called now). They have ALM and they’ve got lots and lots and lots of test cases written in that, and all that needs to be done now is automate it, and they have the tool for it. They’ve bought a tool and installed it, or they have one test case that they recorded three years ago to prove that it could work. It’s there to check a box, and I think there will have been something like what you just said: somebody has seen something going across the screen and has gone “Oh, obviously, they know what they’re doing.” That counts as them automating, whereas the way that we understand it is “No, that hasn’t helped you at all, and it’s not integrated, and it’s not doing anything. Actually, you don’t have any automation.” So I think that there is this idea of checking some boxes to satisfy some manager who had some crazy idea that’s never going to work for us, whether that’s true or not being another thing.


MATTHEW HEUSSER: I’m also interested in: how long does it take to get a build? How often do you get a build? How often can you get that build installed on a test server? How long does your regression test process take? I think those are all sort of interesting questions. And then there are some people that would yell at me for using the term regression test. I think it’s a reality for most of the companies I work with. Sorry! Companies like Google and Microsoft are good examples of “they have the resources to actually be successful at this, because they see the whole board”. As a percentage, how many companies are trying something like this, and then as a percentage how many companies are going to be successful three years from now? Still running with this tool, haven’t said “Oh, we’ve realized that was wrong as we’re trying something new”, haven’t shoved it in a drawer, haven’t rewritten the user interface so that the automation falls apart, the champion didn’t leave, but actually institutionalized it long term. What’s the difference between “How many people are trying it?” and that? I’m going to throw this one at Alex. I think she’s got a good handle on what is happening in Europe. How many people are trying it as a percentage, and then how many will be successful, long term?


ALEX SCHLADEBECK: My feeling is that (this isn’t a percentage, I’ll try to get to one in a second) a lot. Based on… we’re involved in an open source automated test tool, so I see people who are downloading that, but that’s not a percentage. I think about 90 to 100% of people who are in some way involved in testing are in some way involved with “Yeah, actually, we need to be doing some more automation,” whether that’s as part of the pipeline or test automation or wherever.


MATTHEW HEUSSER: Let’s limit that to GUI driving-y stuff, so Unit tests don’t count, Jenkins doesn’t count, automated builds don’t count. All of those things are good.




MATTHEW HEUSSER: But the loud voice telling us we need to automate is telling us we need to drive a GUI.


ALEX SCHLADEBECK: Yep. Okay. So again, I think that that is a lot of people, probably maybe not the 90 to 100%, maybe 70 to 80%. I think that the people who aren’t doing it are maybe people that have been burned before by it. One of the things I spend a lot of time talking to people about is that driving tests through the UI, yes, it is a bit more difficult than doing it on some other levels, and you should have other levels, too, like you said, but the bad rep that it gets comes mostly from people doing it wrong, or trying to use it for the wrong things. That leads into the answer of “how many people are going to be successful”: probably not very many, because they probably won’t go about it in the right way. It’s the most frustrating thing ever to be able to say to somebody that you’re consulting, “look, if you do it like this, I can 100% guarantee that we will be speaking again in two years”, and they’ll be like “Yeah, we didn’t actually get very far with that”. It’s not about the tool at that point. Sure, if the tool doesn’t support 90% of your widgets, then you’re going to have a problem. Usually that gets filtered out by the proof of concept phase. It’s usually that people are writing tests badly, or writing tests for the wrong things, or not integrating them. Sadly, a lot of people are still failing on that.


MATTHEW HEUSSER: Yeah, I think integration is huge.


JESSICA INGRASSELLINO: I just want to bring up… maybe this is helpful for the audience. I work at the Salesforce Foundation, which is Salesforce.org, on GitHub. We build on top of .com. They release three times a year, and we have zero control over what happens in what we are building on top of. If something changes on that base layer, we all have to work around it, but at .org, our code base is open source, so people can go and they can look at our build system, they can go and look at our browser tests and stuff like that. We are kind of building to two things. We’re building to our product, which we release and work on every two weeks, but we’re also building to the .com product, which we have no control over, and which is released three times a year. It may provide some interesting insight for people to know that they can go, they can contribute if they want to, they can pull from GitHub, it’s all there. It’s open [laughter].


JOEL MONTVELISKY: By the way, I do want to take a step back, and maybe look at it from another lens. Because of my work I do work with all sorts of organizations that are actually implementing PractiTest. I see those customers who have automation, and it might be open source tools, we see a lot of Selenium, we also see a lot of Ranorex and the functional test tools and stuff like that, but we still see quite a number of organizations that simply do not have automation, and I think that because automation is such a cool topic, because so many people are talking about it, either those companies who don’t have automation are not in the spotlight, or maybe they are even afraid of saying “Hey, you know what? We don’t have automation”. But we’re talking here about, again, a very big guess, between 25 and 35% of organizations that are not even interested when you talk with them about integrating automation with manual testing; they say “hey, we don’t have automation here”, and I think that’s something that we need to take into account. Out of the 100%, it may be even more, because I guess that there are tons of companies that don’t even have test management, much less automation. It’s something that I think a lot of people should realize. Automation is not a magic cure, and not everyone is using it. Matt, something that you said, and I guess that this is one of the problems here, is “Hey, you know what? Let’s forget about CI, let’s forget about Jenkins, stuff like that.” From my experience, whenever I see automation working, it’s automation that’s been integrated through the CI. It’s automation that is being run… again, it doesn’t need to run on every single build, because it would take you too much time to build the system and get that CI result, but you can have a subset of your UI and API tests from the QA running based on that CI, and then have a nightly build.
I see that quite a lot: you have a very small sanity subset that is being run on every CI build, but you still do have your nightly build that takes three hours to run and provides that report. So I do believe that if you want to have successful automation in place, it needs to run as much as possible. Trying to say “Hey, you know what? Let’s forget about CI” in the context of automation is a very big mistake that a lot of people make. It’s something that I’d like to point you to.
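[Editor’s Note: the pattern Joel describes, a fast “sanity” subset on every CI build plus a full nightly run, can be sketched in a few lines of Python. Everything below is invented for illustration; real suites would use their test framework’s own tagging mechanism, such as pytest markers selected with `-m`.]

```python
# Hypothetical sketch of tag-based test selection: every CI build runs
# only the fast "sanity" subset, while the nightly job runs everything.
TESTS = [
    {"name": "login_smoke",          "tags": {"sanity"}},
    {"name": "checkout_smoke",       "tags": {"sanity"}},
    {"name": "full_report_export",   "tags": {"nightly"}},
    {"name": "bulk_import_10k_rows", "tags": {"nightly"}},
]

def select(tests, tag=None):
    """Return the tests to run: the whole suite, or only those carrying `tag`."""
    if tag is None:
        return list(tests)
    return [t for t in tests if tag in t["tags"]]

per_commit = select(TESTS, "sanity")  # small subset, runs on every CI build
nightly = select(TESTS)               # full three-hour suite, once a night
```

The point of the split is the feedback loop: the per-commit subset stays fast enough to gate every build, while the nightly run still exercises the long tail.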


MATTHEW HEUSSER: Don’t get me wrong there. My first question that I ask when people say “We want to automate” is “Do you have unit tests, and do you run CI?” If the answer is no I say “Great, we can start with unit tests.”


JOEL MONTVELISKY: Exactly! Start with the framework.


MATTHEW HEUSSER: And I get “Oh no, no, we want to drive the GUI!” so often, that I wanted to speak to that and how error prone it is.


ALEX SCHLADEBECK: I think it’s a good conversation starter with customers to find out what the actual problem is. I’ve spoken to customers, or potential clients, multiple times where they’ve called me in with “Okay, so you guys do a lot of UI testing.” Okay, yes, but what are your problems? It actually turns out that the problem is that the developers aren’t talking to the testers, who only work half the week from halfway across the world, and that they have no idea what they are supposed to be testing, anyway. Some of those problems somewhere, at some point, might be able to be helped with some automation on some levels, but they don’t start there [laughter]. That’s not going to be the magic sauce. It’s useful if customers come to you with problems, instead of the solution they have bought. That’s part of our job: to find out what the actual problem is.


JOEL MONTVELISKY: Oh, you are so right! When we come to a customer and they say “Hey, we need a test management tool, because we’re not able to manage the testing”, and then you say “Hey, you know what? I want to talk to the QA Leader, to the test leader, to the QA Manager” and they say “Hey, we don’t have…” Maybe your problem starts from the fact that you have testers who are reporting to developer managers who don’t really know what testing is about, and on top of that you are hiring testers who have never done testing in the past. Maybe look a little bit before where the problem starts in order to try to solve it.


JUSTIN ROHRMAN: It sounds to me right now like all of these automation problems are actually organizational dysfunction problems. People are trying to implement things where they clearly don’t make sense, so their teams are fighting each other and trying to build tooling for the wrong reasons, maybe to impress a CIO or whatever to get a bonus. None of these are striking me as technical problems right now.


MATTHEW HEUSSER: It’s almost always a people problem. That’s Weinberg, right? I think Weinberg says “always”, and I say “almost always”.


JOEL MONTVELISKY: It rings the bell of “A fool with a tool is still a fool”. I guess that it happens. We have fools with a lot of money, and fools that are actually setting up our MBOs saying “hey, you know what? You need to have 60% automation if you want your bonus”, and you go ahead and you do a lot of the automation and you say “Hey, it’s close to 60%”, and people are not actually asking the right questions: “Where’s the value? Are you releasing quicker? Are you able to provide the visibility that you need to your stakeholders? To your developers? Are you working together?” I think that we are trying to optimize or to maximize the wrong functions here.


ALEX SCHLADEBECK: Someone said something at the Agile Testing Days. “We tend to measure things that are easy to measure, not the things that we should be measuring.”


JUSTIN ROHRMAN: Yeah, and I think that’s exactly true with automation. People tend to automate the things that are easy, rather than the things that actually need to be done, which is where tools like BDD come in. They’re easy to use and they provide this shallow sense of progress, but they don’t necessarily show anything about the products we’re testing.


MATTHEW HEUSSER: Well, there’s two things that I want to speak to there. The first one is that automation stuff is definitely jumping to a solution, and I ask the customers “What do we get if we have it?” and they’re like “Automation”. I’m like “Yeah, but what does that buy you? Do you want to release more often? Do you want to release with less effort? Do you want to shift time so that people are working on different things? What specifically does it buy you?” and I get a deer in the headlights, like “What, are you stupid? It’s automated!”


ALEX SCHLADEBECK: The tie in to doing work as a product owner when I’m in situations like that is so useful. Anyone who does anything with requirements knows that you have to ask “why” five times and actually get to the root cause of it. When you’re consulting people on their test problems, the same thing applies. They will have come up with some kind of solution, via tool, via “we should automate” or whatever. Finding out why they think they need that, really why they think they need that, is usually the first step.


MATTHEW HEUSSER: Yeah, it can be challenging, and sometimes, we don’t get the work. I wanted to mention BDD, it’s another bug-bear of mine. I see a lot of people writing Gherkin, and then they kind of just use it as a requirements language. Is that just me?


JUSTIN ROHRMAN: Meaning there’s no code behind it?


MATTHEW HEUSSER: Yeah, it’s like this 19th Century announcement. “Given, When, Then” is really awkward, super contract-y language, which is only good because you can then parse it with a regular expression, turn it into Ruby code, and then execute it.
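[Editor’s Note: for listeners who haven’t seen the mechanism Matthew describes, here is a minimal, hypothetical sketch of how a BDD runner matches “Given, When, Then” lines against regular expressions and dispatches them to code. It is not the API of Cucumber or any real framework; every name below is invented.]

```python
import re

# Registry of (compiled pattern, step function) pairs, in the spirit of a
# BDD framework's step definitions.
STEPS = []

def step(pattern):
    """Decorator that binds a regex pattern to a step-implementation function."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"the cart contains (\d+) items?")
def set_cart(ctx, count):
    ctx["cart"] = int(count)

@step(r"I add (\d+) more")
def add_items(ctx, count):
    ctx["cart"] += int(count)

@step(r"the cart should contain (\d+) items?")
def check_cart(ctx, count):
    assert ctx["cart"] == int(count)

def run_scenario(text):
    """Strip the Gherkin keywords, match each line to a step, and run it."""
    ctx = {}
    for line in text.strip().splitlines():
        body = re.sub(r"^\s*(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            match = pattern.fullmatch(body)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise LookupError(f"no step matches: {body!r}")
    return ctx

scenario = """
Given the cart contains 2 items
When I add 3 more
Then the cart should contain 5 items
"""
run_scenario(scenario)
```

The contract-like phrasing exists precisely so the regexes stay simple; the conversation that is supposed to produce those lines is the part the tooling cannot supply.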


JUSTIN ROHRMAN: That might be helpful in the sense that Scrum is helpful, because it gives people in absolute chaos a framework to talk to each other. Once you get that down you can probably wing it a little bit.


MATTHEW HEUSSER: Oh, I’m not saying that they’re talking to each other.


All [Laughter]


JUSTIN ROHRMAN: Oh, they’re not talking to each other? They’re just writing the stuff down? Ugh!!!


JOEL MONTVELISKY: You know what? It reminds me back in the 90s, when everyone started to work with UML. We used to send our product managers to learn UML, and these guys really didn’t have a clue what they were doing, but they knew UML, and whenever someone needed to put up a requirements document they were expected to at least write some UML in order to define, together with the developers, the features. They did it because, again, it was the trendy thing to do, but there was not a lot of value in that. It was not helping communication at all. It was not helping design. But it was cool, so people did it.


MATTHEW HEUSSER: So if I come to an organization and I say “Let’s talk about getting really good at testing”, it’s likely they will say “Well, what do you mean ‘really good’? We’re really good now! Are you saying we suck?! Maybe he’s wrong! Maybe we’re good now! He’s wrong!” It challenges the ego, but if you can invent an abbreviation… “You’re not doing TDD!” That’s just something to learn, we just need to check the box, we can go learn it. The problem with that is that you get very shallow understanding. “Oh yeah, we do TDD.” It’s expertise; understanding expertise is just a real challenge in software development in general, I think. Any thoughts on how we can help people get better at that?


JOEL MONTVELISKY: I think that it goes back to measuring, meaning “Are people even aware why they are testing?” Or are they testing because “well, you need to have testers.” I think that it really depends on the industry. If you are working in a regulated environment, you have such a strict set of test cases that you need to run, and you know what you need to run, and you run it because it needs to be run because if it’s not documented you won’t get that FDA permit or FAA permit. At least you know [why] you are doing it. Go out of those regulated environments, and you have testers that are there because the development manager, or maybe the CEO, knows that you need to have testing. They start writing scripts, they start testing, and they don’t really know what they are doing, so how can I be a good tester if, by definition, I don’t know what I am here to do? What value am I here to provide my team? If you know the value, then you can measure against that value. I think, and I really believe, that most testers don’t really have an idea of “why are they testing?”, and if you don’t know what’s expected of you, how can you actually be good at that?


JUSTIN ROHRMAN: How do you measure against value?


JOEL MONTVELISKY: I’ve done a couple of things. For example, we do not measure a tester, obviously, based on the bugs that they are finding. We try to measure them based on how they are actually helping developers develop better. If we are telling the testers “Go ahead and test with the developer” in order to teach them how to test, then you can start counting how many bugs the developer found by himself. The point is that you are not calculating things based on the easy-to-calculate stuff: the number of tests you run, the number of tests you write, the number of tests you automated. You’re trying to say “Hey, overall, the amount of user stories, the amount of features that we are able to release quarter by quarter, is increasing because we’re more stable.” Meaning, a tester should provide overall stability on what’s going on in the project, so it’s an indirect value that you are providing, but that is the value that you are providing. You’re not here to test, you’re here to make sure that you can release better code, faster, to the field. I’m sorry if it’s hard to measure, but measuring other stuff won’t really help.


ALEX SCHLADEBECK: I really like the approach, Matt, about how to get better, how to realize where we are. I think that it should be (I’m just going to go ahead and say it) mandatory that every company should send their testers to at least one good conference a year. That is always where I get my feeling of “Oh! That person just said that, and we do that better, or I do that better, or I learned that already; and this person just mentioned that, and I have no idea about that, or I’ve never tried that, or we’re still having that problem.” Getting out and spreading your wings and listening to people and sharing with people, I think, is one of the best ways of being encouraged to learn more, and also of getting some feedback on where you are right now relative to other people. I love going to conferences and I love it when I get to send team members there as well. I think that’s the best thing you can do for learning.


MATTHEW HEUSSER: Yeah, me too.


JOEL MONTVELISKY: I think, also, if you’re going on the conference road, I do believe that you should strive not only to attend a conference, but maybe even to talk at a conference. Obviously there are fewer slots to talk than to attend, but I remember I was working with a very large company, and in order to be made a manager, you needed to have either published in StickyMinds or something like that (it was back in the day), or have spoken either at an Israeli or an international conference. Not because you needed to have the ego trip of doing that, but because you needed to prove that you could actually prepare yourself in order to speak, at a place where people actually wanted to listen to what you had to say. Having attended many conferences, and having spoken at some of them, wow, when you need to speak, you really, really do your homework.


MATTHEW HEUSSER: You can sometimes hear a talk where the person hasn’t done their homework, and it’s usually “this thing worked for us, therefore it should always work for everyone else all the time”, or very light on specifics.


JOEL MONTVELISKY: I remember being at CAST at one of those talks and that guy was buried alive.


MATTHEW HEUSSER: [Laughter] well, I would hope so.


JOEL MONTVELISKY: [Laughter] Yeah, I don’t remember exactly who was in the room, but I guess that, after the first fifteen minutes, he just wished he had stayed home. But yeah, you need to know that there’s a responsibility when you are going to be standing in front of people. They’re going to be critical of you, and actually that’s what I like about the CAST format of conferences, where it’s not about me just exposing what I’m doing and people applauding at the end, but people asking and participating even during the sessions. Whoever is listening to this and has never been to a CAST conference, I really, really recommend you go to one. They are simply incredible, and you learn so much, even though they are not as big as the STAR conferences or stuff like that. So again, just something to recommend.



[End of Part 1]