Sensemaking with Cynefin
What happens when software development takes a cue from disciplines like law enforcement, counter-intelligence, and military operations? What do we do when we need to look at complex systems to find clues about issues we didn’t even know existed, even though the data shows them plainly? How can we harness the gut feelings of testers in a more scientific manner, and “make sense by sense-making”? Confused? Dave Snowden wants to help with that.
Dave Snowden is the creator of the Cynefin Framework, which has been applied across a broad array of fields, including government, immigration, counter-intelligence, and software development. Cynefin is making inroads into the world of software testing, and Anna Royzman is possibly the person in the testing community most familiar with it. We are happy to have a conversation with both Dave and Anna about Cynefin and its implications for software testing.
[Note: Due to challenges with Trans-Atlantic communications, the audio breaks up in various places. We have done our best to work around this, but there are places where audio will be spotty.]
- Cynefin Framework
- Youtube Presentation on Cynefin
- Snowden, David J. and Boone, Mary E., “A Leader’s Framework for Decision Making”, Harvard Business Review, November 2007
- Evolutionary Biology
- Kahneman, Daniel, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011
- Keogh, Liz, An Introduction to Cynefin and Related Awesomeness Thinking Tools
- Cognitive Edge: Dave Snowden and Cynefin Site
- Upcoming Events for Cognitive Edge/Dave Snowden
MATTHEW HEUSSER: Hello and welcome to a very special episode of The Testing Show. This episode we have Dave Snowden, who is the lead author and creator of the Cynefin Framework, which is a sensemaking model. Welcome to the show, Dave.
DAVE SNOWDEN: Hi. Good to speak to you.
MATTHEW HEUSSER: And we also have Anna Royzman, who is probably the tester who has worked most closely with Dave on understanding Cynefin. Now, Anna is usually based out of New York. Is that where you are at the moment?
ANNA ROYZMAN: Yes.
MATTHEW HEUSSER: And Anna was just elected to the Board of Directors of the Association for Software Testing, so congratulations, Anna.
ANNA ROYZMAN: Thank you.
MATTHEW HEUSSER: And we also have Justin Rohrman who is, I believe, the sitting president of AST, if you won the last re-election.
JUSTIN ROHRMAN: Yep, I was re-elected, and then re-nominated as President.
MATTHEW HEUSSER: So, Cynefin… it’s a Welsh word, I think. I first became aware of it as a framework in a Harvard Business Review article, and it’s really snowballed since then. It’s been used for lots of different applications, from government to immigration to software and systems… so I should probably just let Dave talk about Cynefin for a couple of minutes first, maybe from the software angle. Dave, can you tell us what Cynefin is and how a software team can use it?
DAVE SNOWDEN: Mary Boone, who was the co-author of the Harvard article and lives in Connecticut, is actually with me in Wiltshire at the moment, so that’s a real coincidence. Cynefin, basically, is a model which originates in natural science, which I think has been part of its strength. It identifies ordered systems, with repeating relationships between cause and effect, which are either obvious, i.e. everybody can see what they are and everybody buys into them, or complicated, in that they require analysis or some kind of investigation or expertise. Then there are complex systems, which are non-causal; they’re dispositional. There aren’t repeating relationships between cause and effect; the sheer number of things interacting means you haven’t got repeatable patterns that you can rely on to any degree. And then finally the chaotic systems, which are ones with an absence of constraints. If they happen accidentally they’re a disaster, but if you can create them deliberately, they can be used for mass sensing and sophisticated decision support. The principle behind Cynefin is to say that you need to, first of all, work out what type of system you’re in; that’s called ontological awareness. Then what type of knowledge you can have, and how you make decisions, changes according to the system. That’s actually very revolutionary, because for the past twenty or thirty years of systems thinking, people have assumed that all systems are ordered: they can be engineered, they can be designed, they can be predicted.
MATTHEW HEUSSER: So once you’ve got the Cynefin Framework (and we’re going to include the video that’s on YouTube), what do you do with that as a tester?
ANNA ROYZMAN: Well, first of all you have to recognize what environment you’re in, and as a context-driven tester, this is the first thing I would do: I would need to understand the context. I’ve been a big proponent of the complex environment when I talk to testers about Cynefin. One of the differentiators of a complex system is that whatever you do does not guarantee your results every time, meaning that success is not repeatable, but failure is. Testers have to live in this environment for so long; we’re mainly operating in this environment. Understanding this framework really helps testers to have a conversation with the rest of the team, because a lot of the time the tester is asked, “When are you done testing? Is your testing complete?”, and we are working with unknown unknowns, and testers have not been able to explain the type of environment they’re in. There’s really not enough information (the tester’s been immersed in that for so long), and now, in Cynefin, I see hope [laughter]. I see hope for the testing profession, because now they can have a conversation with the management and with the team, and an appreciation of testing, and part of that is not having false expectations of what testing can and cannot do.
DAVE SNOWDEN: I remember when I first presented Cynefin to a National Security Advisor in Washington. He said, “That explains fifty years of failure in American foreign policy. We’ve been treating the complex as if it was ordered.” As Anna said, recognizing the different types of systems is stage one, and critical. Once you do that, though, there’s a whole body of tools and techniques. For example, in the complex domain, we focus on what’s called anticipatory alerts. What we did on counter-terrorism work for DARPA was to actually trigger alerts, and we want to move that capability across into software testing: alerting humans to look at things more closely, rather than assuming that structured processes will produce predictable outcomes.
MATTHEW HEUSSER: OK, I think I get that. People who don’t understand testing assume that it’s an ordered system, that you can just break it down into its component parts and test them and be done, and there will be no surprises because we have done all of the testing. It’s not like that, and I find this idea of an alert intriguing. Can you give us some examples of what those alerts might look like?
DAVE SNOWDEN: So, let’s say that, when we build this for software testing (this is a new program), we take the weak signals, the small things that people observe, and we gather those; in software testing we keep continuous records at the observational level. We actually create new workbooks for this. Now we’re looking at this in software testing, but also in project management, so basically allowing testers and project managers to keep continuous records. This means we get the sort of micro-data we need, based on multiple past patterns. Again, the issue is not to create a predictive model, but to say, “Look at this more closely.” With Anna, we’re working on a collaborative project, run over a couple of years, which will actually formalize that in terms of methods and software.
JUSTIN ROHRMAN: Can you be a little more explicit about what you mean when you say “alerts”? It sounds like there’s data being captured and then something that happens based on this data, but I don’t quite understand the mechanism there.
ANNA ROYZMAN: So, we collect the narratives, which are the little observations that our testers are good at. Sometimes testers have a gut feeling that something is not right, and there is no way of communicating that. There is a lot of observation that testers do on an ongoing basis. Sometimes there is something going on with the process, or something going on with the communication, and at some point big issues come up. Those issues can be in your work right now, and there’s no way to capture something like that. If you have a large pool of those little signals, at some point, as you get more data, you’re going to see that something’s evolving here. So if you want to pay attention to a certain area because the data alerts you that there might be a potential problem, then you can prevent those problems before they happen. Harnessing testers in the framework will really help, because testers naturally observe a lot of things. For example, me as a tester, when I work with a team, I can predict what kind of bugs this team is going to have. Not because I know the software they’re going to produce, but just by looking at the process, knowing the people, knowing what kinds of bugs usually happen in this environment, I can predict a lot, and I can create my testing strategy accordingly. This is me; I’ve done it for many years, and I didn’t know what to do with this type of understanding. When I talk to people and I predict certain things, I have no basis for why my predictions are right. Now, with the project that Dave and I will be working on, I see a lot of potential for exposing some risks.
MATTHEW HEUSSER: So, I’m going to give an example that I think is concrete, and hopefully you’ll tell me that it makes sense, or you’ll tell me that I’m totally off, and then refine it. One of the common problems I’ve seen is: I am testing search, and to do that I need a custom catalog, so I load a custom catalog from disk, or something. There’s some way I have of making it work… and my test passes, all my search functionality works, but I’m like, “That’s not right.” So I go and talk to the release engineer, and he says, “Oh yes, that’s a known issue in the test environment, and it’s because of this data-basey thing; don’t worry about it, it’s going to get fixed tomorrow.” Often that was a real bug, and when I go to the meeting and say, “Hey, this thing happens”, I am [growls] told, “Don’t worry about it.” Actually, one of the biggest bugs we had at Socialtext (I was there three years), one of the biggest problems that ever reached production, was exactly that. “Oh, it’s a dev environment; oh, it’s a staging environment; no big deal, it’s not going to happen in production”, and then we had a performance problem in production where the indexer ran wild, and the system was down for a day before we could figure it out. What I’m not quite clear on is: are we hoping that people are going to gather all this data, we’re going to have lots of these little micro-notes, and someone is going to compare the micro-notes and say, “Hey, I noticed that four different testers were blocked by this issue that everybody keeps saying isn’t a problem”?
DAVE SNOWDEN: You’re close, but not quite there. Basically, you’re testing software, and every time you see something or notice something, you make a note, but you also interpret it. This is key. This is human meta-data, and this is the stuff we developed around counter-terrorism. Basically, you would position it into what are called high-abstraction structures. You’re describing, you’re not evaluating, and that’s key, because it means you’re more likely to spot weak signals, it’s less pejorative, etc. Over the course of a year or so, we’d have multiples of those micro-narratives, micro-observations, which means if the software starts to trigger an alert, you’ve got an objective statement. That’s the second element. Regardless of what you think about it going operational, we’ve got an overall pattern here which is similar to previous failure patterns. We can’t just take it for granted. We’ve actually got to go and do something.
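[Editor’s note: the record Dave describes, a raw descriptive note plus the tester’s own interpretation captured as meta-data, might look something like the following minimal Python sketch. This is purely illustrative; the field names, the triad labels, and the use of barycentric weights are our assumptions, not Cognitive Edge’s actual software.]

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a tester's micro-observation, self-signified
# against a triad (three labelled corners). The position is recorded
# as barycentric weights that sum to 1.0 -- the tester describes where
# the observation sits, rather than evaluating it as pass/fail.
@dataclass
class MicroObservation:
    text: str                            # the raw, descriptive note
    triad: tuple[str, str, str]          # corner labels of the triad
    weights: tuple[float, float, float]  # where the tester placed it
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Guard: a triad position must be a proper barycentric point.
        if abs(sum(self.weights) - 1.0) > 1e-9:
            raise ValueError("triad weights must sum to 1.0")

# Example: a note that leans mostly toward "environment".
obs = MicroObservation(
    text="Search index rebuilt twice during the run; nobody noticed.",
    triad=("environment", "process", "code"),
    weights=(0.6, 0.3, 0.1),
)
```

The point of the structure is that the `text` stays descriptive while the interpretation lives separately in the `weights`, so later pattern detection works on the human meta-data rather than on keywords.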
The visualizations we use for that come from evolutionary biology; they actually measure the fitness of an ecosystem, and the ability of an ecosystem to be resilient. Effectively, we’re bringing that biological metaphor across as well, in terms of the visualizations and the evidence base. It’s not just lots more observations as you go along; rather, very specific concerns can be recorded and then looked for, building previous histories of failure, where we know the early-stage weak signals which were ignored, into the system, and creating an evidence base. So when the screen goes red: “Guys, yeah, you can’t say we can ignore this. We’ve got to look at it.”
MATTHEW HEUSSER: So I see some sort of mechanism or funnel to gather all this together.
DAVE SNOWDEN: Yeah, and the idea is, what we’re going to do is create workbooks, like we did with the U.S. Army in Afghanistan. We said, “You don’t have to write patrol reports if you keep multiple observations while on patrol.” That gives us better data than a sort of retrospective; we’re using this as an alternative to retrospectives in Scrum. Basically, we create specific tools for testers, using the core software, in which they continuously record observations, notes, etc., and then you don’t have to write formal reports. That data can be integrated across multiple testers, you know, both user testers and techy testers, constantly accumulating that data at a meta-data level, so we can see and detect patterns much faster.
JUSTIN ROHRMAN: So what constitutes a pattern? Is it just a repeated word or a…
DAVE SNOWDEN: No, no, it’s not. We deliberately don’t search on that, so you can actually, literally, make an observation and have it link to the code. It’s the human interpretation. We will, for example, give you four triangles with labels on the corners of each triangle, and you place the observation into those four triangles. There’s a whole bunch of cognitive neuroscience behind why we do this. You’re increasing the cognitive load on a tester’s brain so that they can’t just go into a sort of “business as usual” response; they’re forced to evaluate different things. Of course, that creates the meta-data, and then we can look at patterns over multiple observations. Human interpretation is key to this, because human beings see things that machine algorithms can’t.
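[Editor’s note: as a rough sketch of how triad placements might accumulate into an alert, consider the following Python illustration. The averaging rule, the threshold, and the minimum-count cutoff are our assumptions for illustration only, not the actual Cognitive Edge algorithm.]

```python
from collections import defaultdict

# Hypothetical sketch of the "alert" mechanism described above: each
# observation carries triad weights (the human meta-data); when enough
# observations on the same triad cluster toward one corner, that triad
# is flagged so a human can look at it more closely.
def corner_alerts(observations, threshold=0.5, min_count=5):
    """observations: iterable of (triad_name, (w0, w1, w2)) pairs.
    Returns {triad_name: (corner_index, mean_weight)} for triads whose
    mean position leans past `threshold` toward one corner, given at
    least `min_count` observations on that triad."""
    by_triad = defaultdict(list)
    for triad, weights in observations:
        by_triad[triad].append(weights)
    alerts = {}
    for triad, ws in by_triad.items():
        if len(ws) < min_count:
            continue  # too few signals to call it a pattern
        n = len(ws)
        mean = tuple(sum(w[i] for w in ws) / n for i in range(3))
        corner = max(range(3), key=lambda i: mean[i])
        if mean[corner] > threshold:
            alerts[triad] = (corner, mean[corner])
    return alerts

# Six testers independently placed notes toward the same corner:
signals = [("known issue vs. real bug", (0.7, 0.2, 0.1)) for _ in range(6)]
flagged = corner_alerts(signals)  # that triad is flagged for review
```

No single note here is a defect report; the alert emerges only from the repeated human interpretation, which matches the point that the pattern lives in the meta-data, not in keyword matches.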
JUSTIN ROHRMAN: That reminds me a lot of Kahneman’s System 1 and System 2 thinking. It sounds like you’re forcing people to slow down.
DAVE SNOWDEN: Thinking fast and thinking slow is a good theory based on observation, but we already knew about that in neuroscience; it’s called autonomic and novelty-receptive processing. Basically, if the autonomic side of the brain can’t cope, because it’s facing something unusual, the novelty-receptive side kicks in, and that thinks deeper, which is the “thinking slow” side. When you increase the cognitive load on the brain, by not giving it an easy answer, you force it into novelty-receptive mode, which means you get deeper interpretation, which has more value. We’re using this in a lot of fields. Anna, she’s the experienced tester; she said, “I feel this”, so we’re allowing people to actually put the human cognitive layer of meta-data over the observational data. Actually, it’s become easier for us to solve this since the Internet of Things came along, because what we’ve got is a human sensor network to overlay the mechanical sensors. And that makes it more effective.
JUSTIN ROHRMAN: Can I get you to repeat something real quick? You said you were putting the human cognitive layer over something, and I didn’t quite understand what you were saying.
DAVE SNOWDEN: Over mechanical sensors. So, for example, if you look at public transport, big data can tell you what journeys people took, but it can’t tell you why they took them. So we add that layer over the sensor data. We’re doing work on obesity, where we’ve got fitness trackers, but now we can capture the human aspect of how people see their condition to go with that material. It’s the same for software testing. It will actually allow us to build databases over multiple projects and multiple contexts, and use that to create a wisdom-based approach, to trigger alerts even on a small project. As the system becomes more and more intelligent, it will be able to trigger more and more alerts, or trigger alerts that are more sensitive and focused.
JUSTIN ROHRMAN: So I guess you have case studies of companies or small projects or contexts that are actively using this right now?
DAVE SNOWDEN: We’re moving into software testing for the first time. We’re now taking something which has been proven in a range of fields, and applying it to a new field. That’s what we’re doing with Anna. If people want to be part of that, it’s going to be announced as a collaborative program, so companies can join and get early access, and be part of the creation, but the technology and the approach have been established now for over a decade.
MATTHEW HEUSSER: Great! So, how much progress have we made here? Have we tried this on a project? Is this documented? Is there a case study we can read about?
DAVE SNOWDEN: Not on testing yet, no. We’ve done our first ones on user requirements capture, and we’re about to start on project management. There are multiple cases in the defense and security sectors that you can look at, but you’ve got to move those sideways to software, in terms of the way you think. We’re doing the same, for example, with user requirements: a sort of continuous capture of users’ fragmented observations and experiences. Once they start to produce a discernible pattern, you can allocate a programming team to build a prototype. So we get rid of the user requirements spec as a pre-Scrum process within Agile. There’s a body of new tools and techniques being moved sideways into software, but all of them come originally from work in counter-intelligence.
MATTHEW HEUSSER: Is there a write-up of this approach somewhere that we can learn more about?
DAVE SNOWDEN: We’re doing this through the university where I just set up a new research center, so there will be a formal invitation to join, which will have a basic write-up available in a week or so’s time.
MATTHEW HEUSSER: Great! Please! Anna, let us know when that’s up, and we’ll link to it in the show notes. Sounds fascinating. Justin, do you have any questions?
JUSTIN ROHRMAN: No, I think you’ve answered most of my questions. I am only vaguely familiar with the Cynefin model right now, so what I’m trying to do is make sense of the sensemaking framework and figure out how we can apply it. I guess that’s the exact study y’all are doing right now.
DAVE SNOWDEN: You’ll find it’s been hugely used within the software development community, within DevOps, and with Agile. It’s probably the dominant sense-making framework now. If you do a quick search on Cynefin and Agile, you’ll find a whole bunch of stuff. Anything with Liz Keogh in it is worth looking at. She really gets it, but she’s not the only one. Anything I’ve re-tweeted I’m OK with.
MATTHEW HEUSSER: And where can our listeners go to learn more about you? CognitiveEdge.com, I think?
DAVE SNOWDEN: Yeah, that’s the website.
MATTHEW HEUSSER: And what are you up to now? Are you speaking at any major events where they might be able to see you?
DAVE SNOWDEN: I’m doing stuff with Cisco next week, in San Jose, but that’s in-company. I’m with the European Commission on leadership this week. In San Jose, in a month or so’s time, we have a full four-day training course. I’m speaking at Agile in Riga, speaking in Bosnia, and there’s a big software simulation conference I’m speaking at in Melbourne in a few weeks’ time, so that’s where I’m hanging about.
MATTHEW HEUSSER: We’ll link our readers to your “About” page, or your calendar, in our Show Notes; I’m sure you’ve got a public calendar.
DAVE SNOWDEN: Yeah.
MATTHEW HEUSSER: Any parting thoughts before we say goodbye? I’m sorry, we’ve got a hard stop at the top of the hour.
DAVE SNOWDEN: The thing that’s worth thinking about is that what we’re doing here is taking a natural science, not a social science, approach, and we’re shifting to an ecological metaphor from an engineering metaphor. The basic argument is that software development and software testing are much more a sort of ecosystem than an engineering problem, and they need to be treated as such. The engineering is a hygiene factor; it’s the wider ecological issues we need to address.
MATTHEW HEUSSER: All right. Thanks, Dave. Anna, thanks for being on the show. We’ll have to have you back soon to talk about what you’re up to.
ANNA ROYZMAN: OK
DAVE SNOWDEN: Cheers.
MATTHEW HEUSSER: Have a great day. Thanks, guys.
JUSTIN ROHRMAN: Later.
DAVE SNOWDEN: Bye.
ANNA ROYZMAN: Bye.