Introduction

These days, test teams adopting one of the various Agile methods face many challenges. Scrum, for example, combines activities such as daily meetings, sprints, and so on, and there is a variety of terms that the average tester and test manager must contend with at different levels.

Let’s assume that we understand a particular activity: what the managers are looking for and what concerns the Scrum Master. Moreover, let’s assume that the test engineers, TLs, and test managers understand why the Scrum approach was appropriate. Now we are facing a totally different set of problems, some of which are:

  • How are the scrum teams built, and how are team communications handled?
  • How does the team run in everyday life, when parts of the team (both developers and testers) are in different parts of the world?
  • What do we do when several teams test multiple products during the same time frame?

The following article provides examples of how we, as a team, dealt with the problems above, and describes mistakes we made and lessons learned. Obviously, there are many more ways to deal with this set of problems. Where relevant, we will share with you the different considerations we weighed before making a decision.

Background

Our test team was assembled from teams in the US, Israel, and China, while development was divided among teams in these countries as well as outsourced to Ukraine.

  • The product line the QA department was in charge of included 4 different “off-the-shelf” software products, each with a distinct life cycle, its own targets and goals, release versions, etc. Moreover, there were various one-time projects during the year (5-6 per annum).
  • Between the different products, there were many overlapping areas, meaning test engineers could efficiently test more than one product.
  • The tests were divided into 3 major sections:
    • Manual tests, including requirement analysis, test designs, etc.
    • Automation development “on the go” for newly added functionality.
    • Customer simulation labs, which were established to simulate the environment, working procedures, configuration, and up-to-date data of main customers.
  • All of the test engineers were divided among 5 scrum teams. History shows that “team formation procedures” happened about once a year. The team formation process consisted of assigning all developers, testers, analysts, etc. to scrum teams in order to fulfill the backlog as efficiently as possible.
  • To uphold company standards, each scrum team had to include at least one QA engineer. No engineer could be part of more than two scrum teams at once; due to the overlap between products, it made sense for some engineers to test two different products in those overlapping areas.

When we met to form the scrum teams according to the products’ future needs, we could either disregard geography, forming teams based on capabilities and knowledge alone, or treat geography as a constraint guiding the teams’ structure. The second alternative basically meant that a given scrum team wouldn’t have members from two different countries.

We chose the first option as our model, meaning capabilities were the leading factor. Multiple scrum teams were formed with engineers from different parts of the globe. However, we did manage to assemble the teams so that members resided in no more than 2 different time zones, to facilitate communication (for example: US and Israel/Ukraine, Israel/Ukraine and China, or US and China).
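
A quick sanity check of these formation rules (at least one QA engineer per team, an engineer in at most two teams, and members spread across at most two time zones per team) can be sketched in code. The roster, names, and structures below are invented for illustration, not our actual teams:

```python
from collections import Counter

# Each engineer: role and home time zone; each team: list of engineer names.
# All data below is illustrative, not the actual team roster.
engineers = {
    "alice": ("qa", "US"),
    "boris": ("dev", "Ukraine"),
    "chen":  ("dev", "China"),
    "dana":  ("qa", "Israel"),
}

teams = {
    "team_a": ["alice", "chen"],   # US + China: two zones, one QA engineer
    "team_b": ["boris", "dana"],   # Ukraine + Israel
}

def validate(teams, engineers):
    problems = []
    for name, members in teams.items():
        # Rule 1: every scrum team includes at least one QA engineer.
        roles = {engineers[m][0] for m in members}
        if "qa" not in roles:
            problems.append(f"{name}: no QA engineer")
        # Rule 2: members reside in at most two time zones.
        zones = {engineers[m][1] for m in members}
        if len(zones) > 2:
            problems.append(f"{name}: spans {len(zones)} time zones")
    # Rule 3: no engineer belongs to more than two teams.
    membership = Counter(m for members in teams.values() for m in members)
    for eng, count in membership.items():
        if count > 2:
            problems.append(f"{eng}: assigned to {count} teams")
    return problems

print(validate(teams, engineers))  # -> [] : all constraints satisfied
```

Running such a check during the yearly team-formation meeting is cheap, and it makes the constraints explicit rather than tribal knowledge.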

The major disadvantages of this approach are the time-zone challenge of daily communication among team members and the difficulty of tracking the team’s status throughout the sprint.

We overcame these problems by using a few techniques, most of which were simple and easy to put into practice:

  • Daily meetings: At first, the local team (wherever most of the team members were) held the daily meeting alone. The remote parts of the team would send daily updates by email and receive updates the same way after the meeting. As time passed, we realized that despite the comfort of this offline approach, moving to an online meeting with ALL team members prevented issues from dragging on and improved communication and relationships between team members. Issues were also raised more frequently. In the absence of a sophisticated video conference mechanism, most daily meetings were held by phone, or partly over Skype with a webcam. The teams had some difficulties during the first few weeks, but after 2-3 weeks, meetings were held as if everyone were in one geographical place. Some teams fared better than others, but on average most found this approach more than satisfactory.
  • Personalized communication: team members tried to communicate face to face using video chat and phone, and less through documents and email. This made information transfer significantly easier despite language barriers.

Beyond that, there are a number of difficulties to consider:

  • Time differences: In our case, US - Israel/Ukraine is 10 hours, Israel/Ukraine - China is 6 hours, and US - China is 16 hours, meaning parts of the team are talking on different calendar days, which by itself is hard to grasp. This needs to be taken into consideration with every step the team makes.
  • Cultural differences: The minute the team is formed, both the QA engineers and the developers are required to work smoothly with peers who may belong to an entirely different culture. The difference can show up anywhere: from the way they talk (even if all sides speak English, different accents or colloquialisms may be hard to understand at first) to the way they think, their working habits, and much more. We approached this issue with a cultural-differences workshop that stressed the key differences and where everyone needed to tread carefully. Over time it became much easier, although it did take a significant amount of time.
  • Method of communication: At first, communication between team members was problematic. A simple message, conveyed in speech or writing, can actually be relatively complicated and take a long time to puzzle out. The effect of this, which I have personally encountered many times over years of practicing Scrum, is that many team members try to avoid making contact at all. In my opinion, the name of the game here is talking directly by video, phone, or chat rather than offline (e.g. email).
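
To make the time-difference problem concrete, here is a minimal sketch that computes which UTC hours fall inside everyone’s working day. The UTC offsets and working hours are illustrative (and ignore daylight saving); they roughly match the gaps mentioned above:

```python
# Which UTC hours fall inside everyone's working day (8:00-18:00 local)?
# Offsets are illustrative and ignore daylight saving.
SITES = {"US": -8, "Israel": 2, "Ukraine": 2, "China": 8}
WORK_START, WORK_END = 8, 18

def shared_meeting_hours(sites, start=WORK_START, end=WORK_END):
    """Return the UTC hours at which every site is inside its working hours."""
    return [
        utc_hour
        for utc_hour in range(24)
        if all(start <= (utc_hour + offset) % 24 < end for offset in sites.values())
    ]

print(shared_meeting_hours(SITES))                      # -> []
print(shared_meeting_hours({"Israel": 2, "China": 8}))  # -> [6, 7, 8, 9]
```

With all three regions in one team there is no shared working-hour slot at all, which is exactly why we capped each team at two time zones; Israel/Ukraine plus China, for instance, yields a comfortable four-hour window, while a US and Israel pairing only gains a slot once someone stretches an hour or two beyond the standard working day.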

How do you operate Scrum development and testing when parts of the team are spread around the world?

Beyond online communications, Scrum teams need to maintain some helpful aids to manage their joint information.

  • Scrum board: Initially the teams managed it in an Excel spreadsheet stored in a shared location, accessible to the entire team regardless of physical location. Today it is managed in scrum management software (detailed below), in which the teams can view and update tasks, tests, and user stories easily. This eliminated the need for multiple physical scrum boards per team (one at every site where part of the team was located).
  • Scrum management software: With this, the teams have online access to all of the data relevant to managing their tasks - the product backlog, user story details, scrum boards, STPs, STDs, impediments, the sprint backlog, etc. Every detail updated by the product owner, the scrum master, or a team member is visible on the spot, and some daily meetings are held in front of the boards within the software. One advantage of such a tool is that all the material regarding a user story (requirement documents, any discussion about it, attached material, and of course its development state) sits in one centralized, convenient place. Examples of such software are Rally, VersionOne, GreenHopper, etc.

Real World Experience

When the company went through this transition, we knew we wanted to run the functionality tests as close to the development stage as possible, and at the same time deliver the relevant automated tests while the first cycle of tests was being run. Up until that point, there had been many cases with a time gap between the end of development and the first testing cycle.

We realized we had a gap to bridge in the first few sprints: we needed to fully test the functionality that had been developed during the preceding months, plus the new functionality that would be developed during those first sprints. To overcome this, a special task force was created with testers, CS engineers, test-oriented developers, and more. The goal was achieved after only two sprints. Today, the testers in the different scrum teams provide tests covering approximately 80% of the functionality developed within the same sprint, and within that sprint they develop automated tests for about 50% of that functionality; a large gain, though there is still room for improvement. Naturally, if a given function is supposed to be potentially shippable at the end of a certain sprint, it is developed, tested, and fixed within that same sprint (luckily, this was rarely required).

During the first few months, it was hard for the team members to move away from the traditional phases of software development: develop, test, fix, re-test, and so on. As we all know, even traditional methods have parallel processes (for example, writing an STD during the design phase), and yet it seemed that despite trying to make our processes as parallel as possible in order to reach the finish line faster, we were still under the influence of the “old” way of thinking. Step by step, our methods changed. For example, an early design for a function, done by a developer, turned out to be test-oriented: much thought went into developing it so that some parts would be testable while other parts were still being developed, without hurting development efficiency.

This approach went even further. A year after we adopted Scrum, a complex project was launched for the leading product and a dedicated scrum team was assembled. The project required extensive infrastructure work, new complex algorithms, and a wide, complex GUI, all within a very short time frame. The team was assembled from engineers in the US and China. Only one of the team members had relevant application knowledge, so there was a learning curve for everyone. The team got to work and made a list of development tasks and tests, such that every module in the project would be testable from the user level (that is, through its dedicated GUI).

From the moment the planning phase concluded to the end of the project, it seemed that at every daily meeting a team member asked, “I just finished the last part of this phase, what should I move on to?” The decision about what to do next was made by the entire team. To prevent bottlenecks, the next task was often unfamiliar to the chosen team member, and extra learning was necessary. In many cases, for example, the developers helped with testing (with assistance and support from the testers and TLs), partly based on a written STD and partly exploratory, according to that developer’s knowledge and expertise. Conversely, testers helped developers perform better unit tests where necessary. In effect, anyone on the team became capable of doing any task after a bit of instruction and exploration. The team’s primary goal was simply to complete the project on time, with the desired quality, and to reach the milestones defined for every sprint. On a personal note, as the one in charge of the testers, as an observer, and as a manager, this was the project I had the most fun participating in, and that is probably true for everyone involved.

What did we do when several testing teams tested a number of products in parallel?

As you know, every sprint begins with planning, at the end of which the relevant scrum teams commit to deliver certain functions. In our case, many test engineers, due to their capabilities and the company’s products, were required to take part in more than one scrum team during a given sprint; for example, when two different products had identical functionality to be developed for both. In fact, when the testers and TLs came to plan a sprint, they faced a few dilemmas:

  • Given two products, and therefore two different backlogs, how do you prioritize tasks from one backlog against tasks from the other?
  • What is the required capacity for each product? How does a tester or a TL determine it for each product?
  • Which scrum team, given the tasks which are its responsibility, needs a particular tester more than another for the next sprint?
  • When the sprint begins, how can we make sure a tester doesn’t move between teams too many times, causing unnecessary context switches?

Our way of solving the above dilemmas consisted of three steps, done sequentially:

  • Understand the overall picture in terms of constraints and the developers’ plan: the QA manager publishes the priorities, i.e. which versions of which products are about to be released and, accordingly, what the priorities between the products are (for example, version X of product one is first priority, version Y of a second product is second priority, and so on). In parallel, every tester sits down with his scrum team to understand the developers’ plan: what the team expects to work on during the sprint, which developments are riskier and what that means, etc. Leftovers from previous sprints are also identified and reported to the scrum team, the TL, and the QA manager.
  • Have each tester/TL prepare an individual plan: everyone prepares a plan as a baseline for changes, since during sprint planning things are certain to change. This plan takes into consideration leftovers from the last sprint (if there are any), relatively risky developments ahead, the different backlogs that need to be considered, and of course the constraints from step one.
  • Synchronize all the personal plans, provide feedback, and plan the tests: if a given tester belongs to only one scrum team, the issue is relatively simple, since only the relevant product’s needs and his scrum team’s plans affect his sprint preparation. Otherwise, we gather opinions from all the testers and prepare the plans together, asking:
    • Will any product have large gaps that we can foresee before sprint planning, meaning that due to lack of resources it won’t get the tests it needs during the coming sprint? The required coverage includes testing every function developed, releasing another version where needed, and developing automation to ease future tests.
    • Is a resource allocation different from the testers’ recommendation needed in order to cover such gaps? After giving the testers this feedback and making adjustments, with the testers taking full part in the decision making, every tester has a general picture of the tests he will probably run within the sprint. Of course, this is not a final plan but a baseline, since things will probably change during sprint planning, at least to some extent.
  • The purpose of the three steps above was to make sure that the resource allocation between the products was sufficient for the coming sprint, including the following sprint’s sniffing period.
  • Timing of the tests within the sprint: We tried to schedule risky tests first, along with tests involved in dependency chains. In addition, we tried to group tests that required similar actions, to streamline the whole process. The main idea is to map out the sprint so that you don’t get stuck with half of the tests in the last few days of the sprint.
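
As a rough illustration of the three planning steps, resource allocation across products can be sketched as a greedy fill in priority order. All product names, capacities, and hour figures here are invented for illustration, not our real numbers:

```python
# Hypothetical inputs: products as (name, priority, test hours needed), with
# priority 1 highest, and each tester's sprint capacity plus testable products.
PRODUCTS = [("prod_x_v2", 1, 60), ("prod_y_v1", 2, 40), ("prod_z_v3", 3, 30)]
TESTERS = {
    "dana":  (80, {"prod_x_v2", "prod_y_v1"}),
    "ming":  (60, {"prod_y_v1", "prod_z_v3"}),
    "alice": (40, {"prod_x_v2", "prod_z_v3"}),
}

def allocate(products, testers):
    """Greedily fill each product's test effort in priority order."""
    capacity = {name: cap for name, (cap, _) in testers.items()}
    plan = []  # (tester, product, hours)
    for product, _priority, need in sorted(products, key=lambda p: p[1]):
        for name, (_, skills) in testers.items():
            if need <= 0:
                break
            if product in skills and capacity[name] > 0:
                hours = min(need, capacity[name])
                plan.append((name, product, hours))
                capacity[name] -= hours
                need -= hours
    return plan

for tester, product, hours in allocate(PRODUCTS, TESTERS):
    print(f"{tester}: {hours}h on {product}")
```

If a product’s need cannot be filled, the shortfall is exactly the “foreseeable gap” of the third step; and because the greedy pass fills one tester before moving to the next, each tester ends up on few products, which mirrors the two-team rule and caps context switches.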

As a personal note, I might add that the process above didn’t always look like this. It took us some time to get the people abroad to follow this procedure, and until recently there was still much room for improvement.
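
The test-timing approach described earlier (risky tests first, dependencies respected, similar tests grouped) can be sketched as a small scheduling routine. The test names, risk scores, and prerequisites below are hypothetical, not taken from our actual sprints:

```python
import heapq

# Hypothetical sprint tests: name -> (risk 1-5, prerequisite tests).
# Higher-risk tests run earlier; a test never runs before its prerequisites.
TESTS = {
    "install":        (5, []),
    "upgrade":        (4, ["install"]),
    "new_algo":       (5, ["install"]),
    "gui_smoke":      (3, ["install"]),
    "gui_regression": (2, ["gui_smoke"]),
}

def schedule(tests):
    """Order tests so prerequisites come first; break ties by risk, descending."""
    remaining = {name: set(deps) for name, (_, deps) in tests.items()}
    ready = [(-tests[name][0], name) for name, deps in remaining.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)
        # Release tests whose last prerequisite just completed.
        for other, deps in remaining.items():
            if name in deps:
                deps.discard(name)
                if not deps:
                    heapq.heappush(ready, (-tests[other][0], other))
    return order

print(schedule(TESTS))
# -> ['install', 'new_algo', 'upgrade', 'gui_smoke', 'gui_regression']
```

Grouping tests that share a setup could be added as a secondary tie-break in the heap key. In this sketch the highest-risk unblocked test always runs next, so the riskiest work surfaces in the first days of the sprint rather than the last.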