In the new world of network functions virtualization (NFV), testing changes dramatically. It becomes a core discipline, central to the goal of meeting customer needs as inexpensively and efficiently as possible for network owners.

The virtualization of a network allows the agile, real-time shifting of assets to where they are needed. This fluidity means that the entire landscape must be treated as a cohesive whole. The idea of testing individual purpose-built devices and the connections between them is antiquated. The emphasis shifts from testing hardware to making sure that software is robust, correctly deployed and otherwise doing its job. The interdependencies among elements of an NFV-based network are far greater than in the legacy world.

Testing is continuous and involves many parameters that aren’t tested in a legacy network. Gap analysis – identifying and analyzing differences between what should be happening on a network and what actually is – is constant. In addition, the standardization of the environment allows more non-functional testing of elements that don’t directly impact current operations but are vital nonetheless. These parameters – stress, load, performance and usability, for instance – can raise yellow flags on potential problems or areas where improvements are possible.
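To make the idea concrete, here is a minimal sketch of what one such continuous, non-functional check might look like. The endpoint URL, client count and latency budget are all hypothetical values chosen for illustration, not part of any NFV standard.

```python
# Minimal sketch of a continuous non-functional (load/latency) check.
# VNF_URL, CONCURRENT_CLIENTS and LATENCY_P95_BUDGET_MS are hypothetical.
import concurrent.futures
import statistics
import time
import urllib.request

VNF_URL = "http://vnf.example.internal/health"  # hypothetical endpoint
CONCURRENT_CLIENTS = 50
LATENCY_P95_BUDGET_MS = 200  # illustrative performance budget

def timed_request(_):
    """Issue one request and return its latency in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(VNF_URL, timeout=5) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

def main():
    # Drive concurrent load against the element under test.
    with concurrent.futures.ThreadPoolExecutor(CONCURRENT_CLIENTS) as pool:
        latencies = list(pool.map(timed_request, range(CONCURRENT_CLIENTS * 10)))
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
    print(f"p95 latency under load: {p95:.1f} ms")
    if p95 > LATENCY_P95_BUDGET_MS:
        print("YELLOW FLAG: latency budget exceeded; investigate before it bites")

if __name__ == "__main__":
    main()
```

A check like this doesn’t fail current operations; it flags a trend worth investigating, which is exactly the role of non-functional testing described above.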

Obviously, just about everything changes in an NFV environment. At the end of the day, however, the network must meet or exceed the performance of the legacy network. The methods and metrics of gauging a network’s success, failure and status – which have existed for decades – must be carried over. Operators won’t transition to a system that uses unfamiliar metrics. Performance must be proven against traditional service level agreements (SLAs) and other familiar key performance indicators (KPIs).
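As a simple illustration, the sketch below compares measured KPIs against SLA-style thresholds. The KPI names, measured values and thresholds are invented for the example; in practice they would come from the operator’s monitoring system and contracts.

```python
# Sketch of proving performance against SLA-style thresholds.
# All KPI names, measured values and thresholds are illustrative.
SLA_THRESHOLDS = {
    "availability_pct": 99.95,  # minimum acceptable
    "packet_loss_pct": 0.10,    # maximum acceptable
    "mean_latency_ms": 50.0,    # maximum acceptable
}
HIGHER_IS_BETTER = {"availability_pct"}

measured_kpis = {  # in practice, pulled from the monitoring system
    "availability_pct": 99.97,
    "packet_loss_pct": 0.05,
    "mean_latency_ms": 62.0,
}

for kpi, threshold in SLA_THRESHOLDS.items():
    value = measured_kpis[kpi]
    ok = value >= threshold if kpi in HIGHER_IS_BETTER else value <= threshold
    print(f"{kpi}: {value} vs {threshold} -> {'PASS' if ok else 'FAIL'}")
```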

Heart and Soul

The heart and soul of NFV is Management and Orchestration (MANO). As the name implies, MANO coordinates the multiple levels of real and virtualized computing and networking elements. These levels escalate from the base non-virtualized (“real”) computing, storage and networking assets, through the virtualization layer, to the integration of those virtualized elements into discrete virtualized network functions (VNFs). These VNFs can be firewalls, load balancers, customer premises equipment or other networked elements. The orchestrator oversees the linking of the VNFs and their deployment into virtualized networks.
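The sketch below models that layering in miniature – plain data classes standing in for infrastructure resources, VNFs and a composed network service. It is an illustrative abstraction, not the ETSI MANO information model.

```python
# Highly simplified model of the MANO layering described above.
# These classes are illustrative abstractions, not the ETSI MANO model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InfrastructureResource:
    """Base layer: non-virtualized compute, storage or networking."""
    kind: str       # e.g. "compute", "storage", "network"
    capacity: int   # arbitrary units for the sketch

@dataclass
class VNF:
    """A virtualized network function built on virtualized resources."""
    name: str
    resources: List[InfrastructureResource] = field(default_factory=list)

@dataclass
class NetworkService:
    """What the orchestrator produces: VNFs linked into a service."""
    name: str
    vnf_chain: List[VNF] = field(default_factory=list)

# The orchestrator's job in miniature: link VNFs into a deployable service.
firewall = VNF("firewall", [InfrastructureResource("compute", 4)])
balancer = VNF("load-balancer", [InfrastructureResource("compute", 2)])
edge_service = NetworkService("edge-service", [firewall, balancer])
print([vnf.name for vnf in edge_service.vnf_chain])  # ['firewall', 'load-balancer']
```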

It’s very complicated, but the bottom line is clear: NFV involves very complex integration of various software elements, piece by piece, from the most granular (like virtualized storage) to the most complex (like a virtualized load balancer working with a virtualized firewall on a virtualized network). It is impossible to do this without a highly targeted, sophisticated and comprehensive testing regimen.
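A testing regimen of that kind works its way up the same ladder. The pytest-style sketch below uses trivial in-memory stand-ins, invented purely for illustration, to show the shape of piece-by-piece testing; real tests would exercise actual VNF instances through the orchestrator’s interfaces.

```python
# Pytest-style sketch of piece-by-piece testing with in-memory stand-ins.
# FakeVirtualStorage and FakeFirewall are invented for illustration only.
class FakeVirtualStorage:
    """Stand-in for the most granular element: virtualized storage."""
    def __init__(self):
        self._blocks = {}
    def write(self, key, value):
        self._blocks[key] = value
    def read(self, key):
        return self._blocks.get(key)

class FakeFirewall:
    """Stand-in for a composite element: a VNF filtering traffic."""
    def __init__(self, blocked_ports):
        self.blocked_ports = set(blocked_ports)
    def allows(self, port):
        return port not in self.blocked_ports

def test_storage_alone():
    # Granular check: the storage element behaves correctly in isolation.
    store = FakeVirtualStorage()
    store.write("cfg", b"v1")
    assert store.read("cfg") == b"v1"

def test_firewall_and_storage_together():
    # Composite check: two elements interoperate as a virtualized service.
    firewall = FakeFirewall(blocked_ports={23})
    store = FakeVirtualStorage()
    if firewall.allows(443):
        store.write("session", b"tls-ok")
    assert store.read("session") == b"tls-ok"
    assert not firewall.allows(23)
```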

Examples of what is tested include the “onboarding” of virtualized network functions into the network, the creation of a virtualized service from those onboarded elements, the networking of those virtualized services, the ability to check the status of VNFs in real time and the ability to scale the capacity of the network service over time.
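For instance, a real-time status check might look like the following sketch, which assumes a hypothetical REST endpoint returning a JSON state for each VNF; actual MANO platforms expose their own APIs.

```python
# Sketch of a real-time VNF status check against a hypothetical REST
# endpoint; STATUS_URL and the JSON shape are assumptions for illustration.
import json
import urllib.request

STATUS_URL = "http://mano.example.internal/vnfs/{vnf_id}/status"  # hypothetical

def vnf_is_healthy(vnf_id: str) -> bool:
    """Return True if the orchestrator reports the VNF as running."""
    url = STATUS_URL.format(vnf_id=vnf_id)
    with urllib.request.urlopen(url, timeout=5) as resp:
        status = json.load(resp)
    return status.get("state") == "RUNNING"

# Poll a few VNFs and flag anything that is not running.
for vnf_id in ("firewall-01", "load-balancer-01"):
    print(vnf_id, "OK" if vnf_is_healthy(vnf_id) else "DEGRADED")
```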

There is another dimension to the NFV challenge: things happen more quickly. The premise of NFV – why it is of interest to network owners in the first place – is that services must be added, dropped and upgraded in real time. The software development life cycle is a fraction of what it was even a few years ago. Once a service is deployed, tweaks and changes – fixes for things missed in the truncated development cycle and desired changes based on customer reaction and other new data – must be made.

This is a complicated environment in which things will go astray. The message is simple: don’t think of an extensive platform of testing, monitoring and oversight as a nice thing to add later. In fact, if testing isn’t seen as necessary, central and important on day one, it is perhaps best not to consider NFV at all.

Testing used to be the key to finding out if something is going wrong. It still is. In this transformed environment of deep interdependencies and lightning-fast technology introductions and changes, testing has a second and equally important function: it is the quality gatekeeper.

Once the importance of testing is accepted, the next logical challenge is to create a test methodology. In our next NFV blog, Qualitest will discuss this vital question, including MANO testing, compliance testing for virtual network functions launched into the network and other key areas.