Some applications are easy to test and automate; others are significantly less so. It is a well-known fact in the software industry that the earlier a bug is found, the cheaper it is to fix. The question, then, is how to find bugs as quickly and efficiently as possible. A good answer is to design and write code in a way that is friendly to testing. The measure of such friendliness is usually called “testability”, and it can be summed up in four principles.
“Understandability” is the measure of how clearly the tester understands what they should be doing. The biggest part of understandability is documentation of both the code and the requirements. Clear requirements help the testers design test cases with the best possible coverage, and also serve as a checklist for the developers themselves. Clear documentation lets the testing team quickly look up how any particular piece of functionality works, which reduces the number of questions they need to ask the development team and speeds up the test team's familiarization with the product. Also, whenever the application under test violates implicit requirements that the test team and users bring with them from prior experience, a clearly documented set of requirements will tell them whether the violation was a conscious design choice or an accident that should be reported as a usability or accessibility bug. Clear code documentation, on the other hand, allows for faster and better unit test development as well as faster problem localization, and even reduces the likelihood of introducing bugs when adjusting legacy systems.
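As a minimal sketch of what such code documentation can look like (the method, the requirement number, and the dollar thresholds are made up for illustration), consider a Javadoc comment that states both the requirement a method implements and its contract. A tester can derive unit test cases, such as negative inputs, the eligibility boundary, and the zero floor, straight from the comment without asking the developers:

```java
/**
 * Applies a customer discount to an order total.
 *
 * Requirement 4.2 (hypothetical): a discount never reduces an order
 * below zero, and orders under $10 are not eligible for discounts.
 *
 * @param total    the order total in dollars; must be non-negative
 * @param discount the discount in dollars; must be non-negative
 * @return the discounted total, never negative
 * @throws IllegalArgumentException if either argument is negative
 */
public static double applyDiscount(double total, double discount) {
    if (total < 0 || discount < 0) {
        throw new IllegalArgumentException("amounts must be non-negative");
    }
    if (total < 10.0) {
        return total; // not eligible for a discount, per the requirement
    }
    return Math.max(0.0, total - discount);
}
```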
“Visibility” is the degree to which a tester can see the progress the application is making. The primary source of visibility is logging. Logs should be both prolific and customizable, so the test team can filter them to show only the events that are currently relevant. If logging is configured properly, the developers can also use the same tool to see the entire debug flow when trying to fix an issue. The other large part of visibility is data sources, wherever they are an applicable part of the application. Information duplication in databases should be minimized, and the databases themselves should be accessible to testers, so they can help verify where a problem occurs – before, in, or after the database.
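As a rough sketch of such logging, here using java.util.logging (any framework with severity levels works the same way; the class name and order ID are invented), testers filter by level rather than by editing the code:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {
    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    public void placeOrder(String orderId) {
        LOG.fine(() -> "Validating order " + orderId); // debug-level detail
        LOG.info("Order placed: " + orderId);          // normal progress event
    }

    public static void main(String[] args) {
        // Testers raise or lower this level (typically via a
        // logging.properties file rather than code) to filter the log
        // down to the events that are currently relevant.
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.ALL);
        LOG.addHandler(handler);
        LOG.setUseParentHandlers(false); // avoid duplicate console output
        LOG.setLevel(Level.FINE);        // show the entire debug flow
        new OrderService().placeOrder("A-1042");
    }
}
```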
“Automatability” is the degree to which your application can be automated. This is typically the largest hole in development, since very few developers are actually aware of what an application needs in order to have high automatability. There are three big requirements.
The first requirement is that application behavior should be consistent, or should have a mode in which it becomes consistent. Advertisement popups are a good example. Suppose an application has a 30% chance of loading an advertisement on each page load. If the chance were 100%, the testers could easily factor it into the test and move on, losing minimal time and spending practically no effort. If the chance were 0%, that would require even less effort, obviously. However, since the chance is there, the testers have to check for the advertisement on every page load, which typically adds 10-20 seconds per page, slowing both the tests and test development. It also makes issues harder to reproduce: if multiple events can each occur by chance, every combination of their states may need to be identified and then replicated. This plays havoc with coverage as well, since testing all combinations of randomly occurring effects is frustrating and takes much longer than it should, not to mention the mess it makes of test report organization. Now, you don't want to commit to 100% or 0% in all test builds, since the different configurations do need testing; the best approach is a configuration tool or interface that lets the testers set the chance of each random event to always, never, or whatever the production value will be (as they will want to verify that it actually occurs with that probability).
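One possible shape for such a hook, sketched here with a JVM system property (the "ad.chance" name and the AdController class are assumptions for illustration; a config file or a debug menu serves equally well):

```java
import java.util.Random;

public class AdController {
    private final Random random = new Random();
    private final double adChance;

    public AdController() {
        // Production default is 0.30. Testers override it at launch with
        // -Dad.chance=0.0 (never), 1.0 (always), or leave the production
        // value in place when verifying the probability itself.
        this.adChance = Double.parseDouble(System.getProperty("ad.chance", "0.30"));
    }

    public boolean shouldShowAd() {
        return random.nextDouble() < adChance;
    }
}
```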
The second requirement is object visibility. Automation is done through tools that typically use the Windows API to find and manipulate objects in the application. If the developers use standard objects (such as those in Java's Swing library or .NET), those objects have most of the important functions exposed to Windows and, by extension, to the test tools. However, if the developers choose to use custom objects, as they are wont to do, they have a choice. They can either extend a standard object and implement the functions that expose its state, or they can declare it as some generic type and leave it opaque. The former is much better, but sadly, the latter is much more common.
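Swing makes the difference easy to illustrate. In this sketch (FancyButton is a made-up name), the custom control extends a standard one, so it inherits the plumbing that UI automation tools rely on:

```java
import javax.swing.JButton;

// Extending a standard control keeps its exposed behavior: automation
// tools can still locate the button by name, read its text, and invoke
// its action through the AccessibleContext it inherits from JButton.
public class FancyButton extends JButton {
    public FancyButton(String text) {
        super(text);
        setName("fancyButton"); // a stable ID for test tools to search by
        // Custom look-and-feel tweaks go here; visibility is inherited.
    }
}

// By contrast, a "button" drawn by hand onto a plain panel with mouse
// listeners is opaque: a test tool sees one undifferentiated rectangle
// and is reduced to clicking by screen coordinates.
```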
The third requirement is quantifiability. This does not apply to all applications, but it is a significant issue when it does. If the application has a slider that sets something, or if it generates an image, testing that automatically can be a real pain, as the testers are reduced to either image comparison or working from screen coordinates. An even worse case is when a slider moves in response to some action and that movement needs to be verified. If there were a debug label containing the numerical value of the setting, the testers could simply compare numbers. Without it, they have to figure out some way to get the coordinates of the slider, which is ugly, time-consuming, and therefore undesirable. The same goes for custom color changes of elements: if the testers cannot read the exact color value, they have to rely on image comparison, which is slow. So remember: always provide a way to extract a numerical value from an analog or pictorial output. The same applies to making it easy to extract the values of a complex object as it is passed around.
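A minimal Swing sketch of the debug-label idea (all names here are invented): the label mirrors the slider's numeric value, so a test tool can read a number instead of computing the thumb's position from a screenshot:

```java
import java.awt.BorderLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JSlider;

public class VolumePanel {
    public static void main(String[] args) {
        JSlider slider = new JSlider(0, 100, 50);
        slider.setName("volumeSlider");

        // The debug label tracks the slider; automation asserts on its
        // text (or calls slider.getValue()) rather than on coordinates.
        JLabel debugValue = new JLabel(String.valueOf(slider.getValue()));
        debugValue.setName("volumeSlider.debugValue");
        slider.addChangeListener(e -> debugValue.setText(String.valueOf(slider.getValue())));

        JFrame frame = new JFrame("Volume");
        frame.add(slider, BorderLayout.CENTER);
        frame.add(debugValue, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}
```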
“Isolation” is the final principle of testability, and it is relevant both to manual testing and to automation. The premise is that the application should be divisible into units that can be tested more or less independently of each other. For manual testing this matters mostly when things go wrong and the testing team wants to narrow down the cause of an issue; the fastest and simplest way to do that is to start segmenting the application. Lack of isolation makes issue localization take longer and may limit what the testing team can do for the development team, but it is in automation that its true importance comes to light.
Take, for example, unit tests. If the isolation of units in the application is low, then the smallest units that can be tested will be huge, and not very helpful. Low isolation may also mean that each module requires a lot of unnecessary information, making it difficult to invoke. A good way of ensuring isolation is to make gateways or wrappers for all calls going out from the application under test to third-party interfaces. It is then possible to insert debugging code and/or logging into the gateway, and to stub it out when appropriate. Another good pattern is to separate the making of a logic decision from its execution. Decision-making tends to be significantly simpler than execution, and when a problem occurs in a non-separated block, it is usually in the execution portion of the code; so when the problem actually lies in the decision-making, the search for it starts in the wrong place. If the two were separated, the appropriate area of focus would be obvious. There is an added benefit: it makes stubbing the execution simpler, in case the execution takes a long time and is not currently of interest.
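Both patterns are easy to sketch together (PaymentGateway, BillingService, and their methods are invented names, not a prescription): the gateway wraps every outgoing third-party call so it can be logged or stubbed, and the decision is kept separate from the execution:

```java
// Gateway: the single choke point for calls out to a third-party API.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Test stub: swapped in for the real network-backed implementation, it
// logs every outgoing call and succeeds instantly.
class StubPaymentGateway implements PaymentGateway {
    @Override
    public boolean charge(String account, double amount) {
        System.out.printf("charge(%s, %.2f) stubbed, returning success%n", account, amount);
        return true;
    }
}

class BillingService {
    private final PaymentGateway gateway;

    BillingService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    // Decision: pure, simple logic, unit-testable with plain assertions.
    boolean shouldCharge(double balance) {
        return balance > 0;
    }

    // Execution: the slow, failure-prone part, isolated behind the
    // gateway so it can be stubbed when it is not of interest.
    boolean settle(String account, double balance) {
        return shouldCharge(balance) && gateway.charge(account, balance);
    }
}
```

With this shape, a failing test points immediately at either the decision or the execution, instead of leaving the team to dig through both.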
As long as the development team and the people writing the requirements keep these four principles in mind, they are sure to see significant improvements in their testing turnaround time. And that time, as we know, is money.