
White Paper

Disaster Recovery with Cloud

The reality is that even the most robust Disaster Recovery plan is only as good as your ability to test it.

Kelvin Kam



Cloud computing refers to the use of, and access to, server-based computational resources over a network. In a cloud model, applications are provided and managed by the cloud provider, and data is stored remotely as well. Users do not download and install applications on their own devices; processing and storage are handled by the cloud servers.

What about Disaster Recovery with cloud computing? They seem a perfect match. With all of the servers virtualized, instead of just backing up the data we can now back up entire servers off-site. It’s easy to take a snapshot of a server every night, send it off-site, and then spin that entire server up fairly quickly when needed. This approach opens up many possibilities and opportunities, but it also carries inherent issues and risks.


The reality is that even the most robust Disaster Recovery plan is only as good as your ability to test it. IT environments are dynamic; over time, changes made to the protected environment may not be identically reflected in the recovery environment. Since manually testing for all of this drift on an ongoing basis is virtually impossible, problems may go undetected, leading to outright failure when the plan is invoked. At the moment, users can typically only replicate data to servers located in a third-party provider’s data center; they cannot mirror their full systems, including the operating system and applications. So if disaster strikes, they would first need to rebuild those systems manually before recovering the data stored in the cloud.

Because the market is still immature, very few large or even mid-sized organizations have been willing to take that risk. Small businesses, in particular, have much to gain from a cloud-based data protection plan: by storing backup data in a remote location and updating it regularly, a company can reduce the time needed to recover from a disaster and resume business. More importantly, budget constraints often prevent smaller companies from investing the time and money needed to build their own DR systems; using the cloud instead has proven both time- and cost-effective for these businesses.


Most cloud-based DR vendors provide infrastructure based on Windows or Linux systems and databases. Consequently, these vendors may simply be unable to replicate data and update databases from older, non-web-based, or bespoke applications without developing a customized system. And although it may be possible to port such applications to a cloud environment, packages not written specifically for the cloud tend to run very slowly or may even crash. Additionally, there are no cloud interoperability standards, or even best-practice documentation, covering issues such as data migration. So once a vendor has been chosen, moving a potentially large amount of data away from it will not necessarily be easy or cheap.


As mentioned previously, most of the users are small businesses. They will not, and cannot, afford a 10Gbps leased line, so bandwidth is very much a concern. Most are likely ordinary ADSL users with a typical download speed of 4Mbps to 8Mbps, but an upload speed in the region of only 500Kbps to 1Mbps. Uploading large amounts of backup data over such a link is painfully slow.
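To see why the upload speed dominates, a back-of-the-envelope calculation helps. The sketch below assumes decimal units and a link running flat out with no protocol overhead, so real transfers would take even longer:

```python
def upload_time_hours(data_gb, uplink_mbps):
    """Rough time to push data_gb of backup data over a given uplink speed."""
    bits = data_gb * 8 * 1000**3              # decimal GB to bits
    seconds = bits / (uplink_mbps * 1000**2)  # Mbps to bits per second
    return seconds / 3600

# An initial 100 GB backup over a 1 Mbps ADSL uplink:
# upload_time_hours(100, 1) ≈ 222 hours, i.e. more than nine days non-stop.
```

At typical small-business upload speeds, even a modest initial backup is measured in days, which is exactly the seeding problem discussed next.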

Some consider incremental backups a viable option, but you still have to complete an initial full backup first, and that remains a massive job. So how do you get your data from your building to the provider’s servers quickly and efficiently? I’ve seen people on forums saying that they’ve been backing up their data for six months and they’re still not finished. Another worry is what happens if the internet connection goes down, either in your building or in your local area. Protecting against that requires using at least two telecom providers or ISPs whose equipment does not sit in the same telephone exchange, which obviously adds to the total cost.
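The incremental step itself is conceptually simple; the hard part is the initial seed. As a minimal sketch (assuming file modification times can be trusted, which production tools do not), an incremental pass only selects files changed since the last run:

```python
import os

def changed_since(root, last_backup_ts):
    """Return files under root modified after the last backup's timestamp.

    A minimal incremental-selection sketch; real backup tools also track
    deletions and compare checksums rather than trusting mtimes alone.
    """
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_ts:
                changed.append(path)
    return changed
```

Only the returned files need to cross the slow uplink each night; the catch is that the first run returns everything.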


Speaking of costs, cloud services are usually charged on a per-user and per-GB basis, which makes estimation difficult if you have no previous experience or examples to draw on. A further consideration is the potentially high cost of migrating data if you decide to switch vendors. For instance, if a company already has five years of archive data (e.g. emails, invoices, stock records) stored with one cloud supplier, moving it to a new supplier would be a massive job, and if the data isn’t in a standard format, importing it will incur another substantial fee. So if you’re going to do anything in the cloud, it’s important to do it right the first time, with proper research and cost estimation; otherwise it’s very easy to get locked into an unprofitable arrangement for a very long, costly time.
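A simple model makes the per-user and per-GB estimate concrete. The tariff below is purely illustrative (the rates are invented, not any vendor’s real pricing):

```python
def monthly_cost(users, stored_gb, per_user_fee, per_gb_fee):
    """Estimate a monthly bill under a simple per-user plus per-GB tariff."""
    return users * per_user_fee + stored_gb * per_gb_fee

# Illustrative only: 20 users and 500 GB at $4 per user and $0.10 per GB.
# monthly_cost(20, 500, 4.0, 0.10) -> 130.0
```

Note that stored_gb grows every month as archives accumulate, so the per-GB term, small at first, is the one to project forward over several years.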


Some things will never change.  Any effective DR effort needs a good strategy, plenty of planning, and, most importantly, a lot of testing to validate the plan once it’s established. Your disaster recovery options always come down to two closely related things: your RTO (Recovery Time Objective), which is how quickly you need to recover from a disaster, and your budget. If, for instance, you only need to recover your data within a 7-day time frame, that will be far more cost-effective than guaranteeing that both your data and your infrastructure can be recovered within a 5-minute time frame.
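That trade-off can be framed as a tiny selection problem: pick the cheapest option that still meets your RTO. The option names and figures below are invented for illustration only:

```python
def cheapest_meeting_rto(options, rto_hours):
    """Pick the cheapest DR option whose recovery time satisfies the RTO.

    options: list of (name, recovery_hours, monthly_cost_usd) tuples.
    Returns None if no listed option can meet the target.
    """
    feasible = [o for o in options if o[1] <= rto_hours]
    return min(feasible, key=lambda o: o[2]) if feasible else None

# Hypothetical tiers with made-up recovery times and prices:
OPTIONS = [
    ("offsite tape restore",   7 * 24, 100),
    ("cloud snapshot spin-up", 4,      500),
    ("hot standby site",       0.1,    5000),
]
```

With these made-up figures, a 7-day RTO is satisfied by the cheapest tier, while a 1-hour RTO forces the most expensive one, which is the point of the paragraph above: the tighter the RTO, the larger the budget.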

Cloud-based DR services are maturing at an increasing rate; soon, they may become the most commonly used approach to disaster recovery, thanks to the convenience and cost savings associated with them.



