Introduction

What is agile software development, and what changes does it require of a tester? How does a tester become more effective in an agile environment? This white paper runs through the evolution of software development and compares agile techniques to Orange project management methodologies, as well as discussing whether the SMaRT methodology is relevant in an agile environment.

A Brief History of Software Engineering

Ever since the so-called “Software Crisis” of the mid-1960s, the IT community has been searching for a process model or methodology to provide a miracle cure. Originally, a lack of productivity was used to characterize this crisis, but soon it became clear that a lack of quality was actually the problem because many software projects ran over their budget and schedule, some caused property damage, and a few even caused loss of life.

The early solutions, such as the Waterfall Model, attempted to follow the formal and rigorous processes of other engineering disciplines. Such models advocate the process of requirements capture, analysis, design, coding, and testing carried out in a strict, pre-planned sequence. Progress is generally measured in terms of deliverables such as requirement specifications, design documents, test plans and the like. The controls embedded within the process offer high levels of user accountability and reduce the risk of failure. However, this can result in substantial testing effort toward the end of the project cycle. There are also criticisms that this contradicts the way software engineers actually work, and that it is bureaucratic and slow. The slow project progress associated with this method consequently increases the chance of requirement churn – the changing of requirements whilst the project is still ongoing. These factors are often touted as causes of waterfall project failure.

In reaction to this, James Martin developed the Rapid Application Development (RAD) model. This is an iterative approach which has five core elements: prototyping, iterative development, time boxing, team members, and management approach. The prototype helps the user draw out requirements, and the software is then created in iterations with short development cycles. After each iteration, the user requirements are re-evaluated; the application becomes increasingly feature-rich with each version. The goal of time-boxing is to keep the timescales of iterations down by deferring feature additions to future versions, thereby reducing the possibility of requirement churn. There should be a small number of team members who are experienced and able to perform multiple roles; management should focus on keeping development cycles short, enforcing deadlines, and clearing bureaucratic or political obstacles. It is argued that this method increases speed through the use of CASE tools and the lack of bureaucracy and documentation, and that quality increases as well because the user has more input in the analysis and design stages. However, there are also several criticisms, namely that it lacks rigor and so produces fragile systems that do not scale, whilst raising user and manager expectations to unrealistic levels.

The Waterfall method is an example of a “Heavyweight” method, whereas RAD is an example of a “Lightweight” method. An alternative way to see this is to view these two techniques on a scale spanning from “Predictive” methods to “Adaptive” methods. Predictive methods focus on planning the future in detail; features and tasks are planned for the entire length of the development process and the plan is typically optimized for the original destination. Changing direction can be difficult or cause completed work to be thrown away and so a change control board is often appointed to ensure that only the most valuable changes are considered.

Adaptive methods, on the other hand, adjust quickly when the needs of a project change, but have difficulty describing exactly what will happen further in the future; they may be able to report exactly what tasks are being done next week, but only which features are planned for next month. For targets further away than that, the mission statement of the release or a cost/benefit analysis is probably the only semi-realistic measure of timescales that can be provided.

The descendant of the Waterfall Model, the V-Model, is not exactly common today but still stands as a good example of a predictive method. The V-Model starts testing earlier in the project lifecycle and associates a test phase with each design phase, thereby spreading the testing effort more evenly over the whole project cycle. However, it is still characterized as bureaucratic, slow, and open to requirement churn.

In the late 1990s, new Lightweight or Adaptive models such as eXtreme Programming (XP) emerged. Although they attempted to address the problems associated with Heavyweight or Predictive methods, they continued to be misleadingly criticized as unplanned or undisciplined. Therefore, in 2001, members of the community met and adopted the name Agile Methods. They created the Agile Manifesto (https://agilemanifesto.org/), a canonical definition of agile development, together with the accompanying agile principles.

Agile Software Development

Agile Software Development breaks down one product release into small packages and plans to release these packages in short time windows or iterations. These iterations generally last one to four weeks, depending on business need, and are viewed as small projects in their own right. Consequently, each one has all the normal project phases: planning, requirements analysis, design, coding, testing, and documentation. This is one of the main differences between agile methods and past iterative methods: the synergy of an adaptive fast-development process, and the repeatability and rigor provided by the predictive project structure. It is predictive and accountable in the short term and adaptive to change over the medium- to long-term. Each iteration is not required to add enough functionality to justify releasing the product, but the aim is to release some software at each stage.

Agile Software Development recognizes that requirement churn is almost inevitable during the project lifecycle and mitigates the associated drawbacks by accepting this fact and including it as part of the planning phase. After the release of an iteration, the overall project goals are reviewed to see whether it is necessary to alter the requirements. Another key aspect of this methodology is its focus on face-to-face communication over the written form. This is not to say that there is no documentation, but more importance is placed on keeping information current than on standards and formal processes. As a result, the majority of agile teams sit together and comprise the people necessary to complete the delivery. This can be a combination of programmers, customers, testers, design authorities, technical writers and managers, but at minimum is a programmer and a customer. A “customer” can be an actual customer, but is more likely to be someone who can provide business sign-off, e.g. product managers, business analysts, project sponsors, etc. One final point to make about Agile Software Development is that the emphasis is firmly placed on exercising the software as the main measure of progress; this, coupled with the short timescales of the delivery release cycle, means that automation is often a key factor in agile environments. The methodology that emphasizes this most is XP, the most prominent of several agile methodologies.

Like all agile methods, XP has an iterative lifecycle, team collaboration, and significant customer involvement. The first step in planning is the creation of user stories. These are similar to use cases, but are also used to create time estimates. Acceptance tests are created from the user stories and are used as black-box system tests. The customer is responsible for verifying the validity of the tests, including whether or not they were passed. Criticisms still surround the agile methodology: the relative lack of documentation when compared with more predictive methods encourages “cowboy” coding, and there is no record of alterations. It relies heavily on the individuals within the agile team being of high quality, and mistakes are less visible than they would be in a more predictive, document-intensive methodology. Furthermore, it seems that it would not apply to large, complex systems with interactions between many components. This is because of the face-to-face communication that is one of the bedrocks of the agile method: the size of team needed for such systems would hamper communication and also make it less logistically feasible to sit the team in one location.
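
To illustrate the user-story-to-acceptance-test step described above, consider a hypothetical story such as “As a subscriber, I can top up my account balance with a voucher.” A minimal sketch of an acceptance test derived from it is shown below, written in Python with the pytest framework; the Account class and its methods are illustrative assumptions, not part of any real system.

    # Hypothetical sketch: Account and its API are assumptions for illustration.
    class Account:
        def __init__(self, balance=0):
            self.balance = balance

        def top_up(self, voucher_value):
            self.balance += voucher_value

    def test_top_up_adds_voucher_value_to_balance():
        # Acceptance criterion drawn directly from the user story.
        account = Account(balance=10)
        account.top_up(5)
        assert account.balance == 15

The customer’s role is then to confirm that tests like this capture the intent of the story, and to verify whether they pass.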

From a testing perspective, agile methods don’t even define a mandatory role for a tester, the minimum team being a developer and a customer. XP defines a role for an expert customer/product-specialist tester, but this would be a significant role-shift for the many existing system and product independent testers; it seems better suited to a customer specializing in testing than the reverse. So what is the role of a tester in an agile environment?

Testing in an Agile Environment

It could be argued that the customer can fulfill the role of a tester in an agile environment. After all, they should know how the product is supposed to work and can therefore provide all the use cases and draw acceptance tests from these. Furthermore, there is usually a higher degree of automated unit testing carried out by the developers, which decreases the chance that defects make it past the unit-testing phase.

It is often said that developers and testers view the world from two different perspectives. Developers are happy-go-lucky, glass-half-full optimists who assume that, most of the time, things work. Testers are world-weary, glass-half-empty pessimists who assume that there will always be something wrong. Customers are also generally never happy with what they get, so why can’t they replace the tester? The answer is that testers are more likely to apply critical thinking. This critical thinking can be used to assist the customer in drawing out product requirements and then to derive test cases from those requirements. Whereas a customer will create scenarios for normal use, exception cases must also be created, and the customer may not have the experience or the right abilities to accomplish this.
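
Building on the hypothetical top-up example above, the sketch below contrasts the customer’s happy-path scenario with the kind of exception cases a tester would add; the rule that invalid vouchers raise a ValueError is an assumed requirement, used purely for illustration.

    import pytest

    def top_up(balance, voucher_value):
        # Assumed rule for illustration: vouchers must have a positive value.
        if voucher_value <= 0:
            raise ValueError("voucher value must be positive")
        return balance + voucher_value

    def test_normal_use():
        # The scenario a customer would naturally write.
        assert top_up(10, 5) == 15

    def test_zero_value_voucher_is_rejected():
        # An exception case a tester's critical thinking would add.
        with pytest.raises(ValueError):
            top_up(10, 0)

    def test_negative_voucher_is_rejected():
        with pytest.raises(ValueError):
            top_up(10, -5)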

Also, a defect-reporting system, and a protocol for its use, is necessary in any development cycle; again, the customer may not have the skills or experience for this. Testers often occupy the ground between developers and customers, with a greater technical knowledge of how the system works and a focus on making sure it is fit for purpose. This technical knowledge means that they can raise more accurate defect reports that cut down the investigatory work required of the developers. Furthermore, testers are more likely to create, and use more efficiently, a better defect-reporting system.
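
As a sketch of the kind of structured, technically detailed defect report meant here, the fields below are one plausible minimal set; the structure is an assumption for illustration, not a prescribed Orange or SMaRT format.

    from dataclasses import dataclass, field

    @dataclass
    class DefectReport:
        # A hypothetical minimal structure; real schemes will vary.
        summary: str                    # one-line description of the failure
        steps_to_reproduce: list = field(default_factory=list)
        expected_result: str = ""
        actual_result: str = ""
        environment: str = ""           # build, test-bed, configuration
        severity: str = "medium"        # impact on the customer
        suspected_component: str = ""   # tester's technical knowledge narrows
                                        # the developer's investigation

It is fields such as the environment and the suspected component where a tester’s technical knowledge cuts down the investigatory work required of developers.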

Lastly, the automation required for an agile approach will require effective configuration management. Although this is not conventionally a role that a tester performs, perhaps in an agile environment testing should subsume it to give the release a degree of independence from the developers. This should help discourage cowboy coding and last-minute, undocumented code bodges.
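
A minimal sketch of the kind of independent release gate meant here, assuming a git-based project and a pytest suite; the tag name, flags and layout are illustrative assumptions, not part of any real Orange tooling.

    #!/usr/bin/env python3
    # Hypothetical release gate: tag a release only if the full suite passes,
    # keeping the release step independent of individual developers.
    import subprocess
    import sys

    def main(version):
        result = subprocess.run(["pytest", "--quiet"])
        if result.returncode != 0:
            sys.exit("Test suite failed; refusing to tag release.")
        subprocess.run(
            ["git", "tag", "-a", version, "-m", f"Release {version}"],
            check=True,
        )

    if __name__ == "__main__":
        main(sys.argv[1])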

The Orange Test Process

There are many test processes used within the companies under the Orange brand umbrella, but for the purposes of this article, the process defined for projects that fall under the scope of Validation and Testing Support will be examined. The Orange Project Delivery Process that falls within this scope draws heavily from PRINCE2: there is a defined organization structure for the project management team, a product-based planning approach is used, and projects are divided into manageable and controllable stages. The process is split into five phases: Idea, Concept, Initiation, Delivery and Close, which are separated by Decision Points (DP). Testing activities commence after DP2, the transition from Concept to Initiation, and finish at DP4, the transition from Delivery to Close.

The Test process also follows a product-based planning approach and hence the following are mandatory deliverables in their specified stages:

  • Initiation: Test Strategy, TTRM, Environment Delivery Plan
  • Delivery: Test Plans, Test Cases, Test-bed Environment, Test Exit Report

Transition into the next stage is not possible until all these have been baselined and accepted by the Project Sponsor or the Project Manager. The Delivery stage of testing is normally split into three phases: Preliminary Acceptance Testing (PAT), Integration Testing (INT) and Orange Acceptance Testing (OAT). These phases are based on the V-Model process as all can be mapped to development stages; therefore, all the criticisms leveled at the V-Model can also be leveled at the Orange test process.

The recent acquisition of Wanadoo into Orange Group will provide a challenge to this methodology because Internet providers have traditionally been associated with a more agile approach.

A SMaRTer way to Test

SMaRT is an acronym for Structured, Managed and Realistic Testing. It is a risk-based approach built heavily on PRINCE2 fundamentals and the testing V-Model. The test phase of a project is split into five stages: Test Strategy, Test Planning, Test Scripting, Test Execution, and Test Closure. Each stage relies on inputs from the business and from previous stages, and requires a set of deliverables before progression to the next stage is permitted. This is assessed at exit and entry meetings between each pair of stages, where the criteria attached to both are reviewed.

Like the Orange test process, this is a predictive approach and the criticisms discussed earlier can be leveled here as well. So is it possible to use the SMaRT approach in an agile environment?

SMaRT and Agile? You don’t see that very often!

The first stage of SMaRT sets out a test strategy for the whole project; there is only ever one strategy per project. This could lend itself to an agile method by defining a test strategy for the project as a whole, containing the project goals and the main requirements currently in scope. It would still define environments, resources, test approach, projected timescales, etc., but after an initial baseline at project initiation, it would be open to change at the review meeting following the delivery of each iteration. When the requirements are updated and timescales and scope altered, these would also be updated in the test strategy. After the initial creation of the document, these changes would require only a small amount of time and effort.

The second stage produces detailed plans for each testing phase or stage within the project, and there will likely be more than one. This being the case, rather than writing a test plan for each test phase, one could be written for each iteration. This would be a slight adaptation of the SMaRT method: normally a plan deals with only one phase of testing, whereas in this case it would, as a minimum, have to cover the unit test phase, the acceptance test phase, and whatever other test phases might exist. However, as the unit test phase will be mostly controlled by the developers, the main functions the plan would serve for that phase are to specify the unit tests developed and to set entry and exit criteria. For the acceptance phase it would be of more importance; here it would be used to ensure that the acceptance test cases satisfy the customer that the requirements of the iteration in question are met. After the first test plan has been created for the first iteration, plans for future iterations should be based heavily on the plan for the previous one.
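
As an illustration of how an iteration plan’s entry and exit criteria might be captured and checked mechanically, the sketch below uses an invented set of criteria; both the criteria and their thresholds are assumptions for illustration only.

    # Hypothetical exit criteria for one iteration's acceptance test phase.
    EXIT_CRITERIA = {
        "all_acceptance_tests_run": True,
        "pass_rate_at_least": 0.95,
        "open_severe_defects_at_most": 0,
    }

    def exit_criteria_met(all_tests_run, pass_rate, open_severe_defects):
        # The exit meeting assesses each criterion in turn.
        return (
            all_tests_run == EXIT_CRITERIA["all_acceptance_tests_run"]
            and pass_rate >= EXIT_CRITERIA["pass_rate_at_least"]
            and open_severe_defects <= EXIT_CRITERIA["open_severe_defects_at_most"]
        )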

Test automation is essential in an agile environment, and so the next two stages of SMaRT, test scripting and test execution, would simply become the creation or updating, and subsequent running, of automated scripts in an appropriate tool. Results would be collated and distributed as per project guidelines.
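
A minimal sketch of this scripting-and-execution step, assuming pytest as the “appropriate tool”; the report naming and the choice of JUnit-style XML for collation are assumptions for illustration.

    #!/usr/bin/env python3
    # Hypothetical iteration test run: execute the automated suite and
    # collate the results into a JUnit-style XML file for distribution.
    import subprocess
    import sys

    def run_iteration_suite(iteration):
        report = f"results-iteration-{iteration}.xml"
        result = subprocess.run(["pytest", f"--junitxml={report}"])
        print(f"Results collated in {report}")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_iteration_suite(sys.argv[1]))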

The final phase of SMaRT, test closure, would have to adapt to the needs of the project. The meeting after each iteration delivery could be seen as a test closure, but having defined one strategy for the whole project, it goes against SMaRT to have a closure for each iteration. Also, if a project consists of updating and maintaining a system, there would be no closure until the system is taken out of service. Perhaps the solution is to define two types of closure: iteration closure and project closure.

Conclusion

The use of SMaRT in an agile environment would have many benefits. Deliverables such as quality analysis, failure mode analysis and business process walkthroughs lend themselves very well to small agile teams comprising all the people necessary to complete the delivery, because everyone needed to conduct them should be within touching distance. This counters the time and effort normally spent getting all the necessary people together for such exercises. Also, with the rise of companies needing to comply with legislation such as Sarbanes-Oxley, the audit trail that SMaRT provides would allow an agile method to be used while still achieving compliance. However, SMaRT is predictive in nature; to use it in an adaptive development process, it would need to become more adaptive. As part of project initiation, an analysis would have to be conducted of which deliverables are necessary to complete the project whilst keeping the risk within agreed parameters. It is probably necessary to create a version of the SMaRT methodology specifically suited to agile environments, yet still adaptable to the requirements of the project.