Your business is only as resilient as the integrity of the data stored on your servers. For companies that serve customers in the cloud, if you cannot offer 99.9999% uptime and absolutely guarantee data backup and restoration, you might as well not be in business.
There are a few issues at hand here. Not only must you ensure that the data is accurately and securely backed up, with every packet and byte accounted for, but you must also ensure that when the time comes, the data is “clean” enough to be plugged back into the system without a hiccup. It is that hiccup companies are trying to avoid when they look for ways to back up their data in the first place, yet the results are not always as dependable as they expected.
There has to be a defined process for taking backups. Recent advances in network and backup technologies have improved performance, making it easier to back up data over the network. The traditional process, by contrast, involves tape drives at branch servers: an end-of-day tape backup is taken and physically sent to the head office. This, by the way, is not what anyone meant by the term “data in transit.”
There are obvious problems with physically moving data: the unavailability of qualified technical staff to properly handle the backups, the difficulty of verifying the backed-up data before shipping it to the head office, and the risk of tape damage, data loss, or theft during transportation; the list is nearly endless. More importantly, because the backup in this scenario is done on an ad hoc basis, the IT administrator often discovers, when trying to sync the data into the data center, that the backup was never successful to begin with. Most of these issues are revealed beyond the point of no return, when a restoration or a DR (Disaster Recovery) drill is performed.
Thanks to these technological advances, solutions are now available that support a wide variety of operating systems and applications and can take optimized backups over WAN links.
The solution in this case is actually a paradigm shift, using modern data protection technologies to cope with ever-increasing backup volumes relative to available bandwidth. It uses fingerprint technology to identify unique file segments and keep track of all the redundant data at the remote sites. The solution includes a storage pool at each remote location, which then replicates the data over the WAN, along with thin software agents deployed on the remote servers. An agent sends only new, unique file segments to the local storage pool, which automatically reduces the size of the transfer. It is then the pool’s job to check the uniqueness of each segment across all local agents and replicate only unique file segments to the main storage in the centralized data center. This minimizes WAN bandwidth requirements and allows the system to scale, because storage capacity requirements are reduced as well.
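The fingerprint-and-replicate flow described above can be sketched in a few lines of Python. This is an illustrative sketch, not the vendor’s implementation: files are split into fixed-size segments, each segment is fingerprinted with SHA-256, and only segments the local pool has not seen before are stored and queued for WAN replication. The segment size and class names are assumptions made for the example.

```python
import hashlib

SEGMENT_SIZE = 4096  # illustrative segment size; real products tune this


class StoragePool:
    """Toy local storage pool: stores each unique segment exactly once."""

    def __init__(self):
        self.segments = {}     # fingerprint -> segment bytes
        self.pending_wan = []  # fingerprints queued for central replication

    def backup(self, data: bytes):
        """Split data into segments and keep only previously unseen ones."""
        for i in range(0, len(data), SEGMENT_SIZE):
            segment = data[i:i + SEGMENT_SIZE]
            fp = hashlib.sha256(segment).hexdigest()
            if fp not in self.segments:      # a new unique segment
                self.segments[fp] = segment
                self.pending_wan.append(fp)  # only this crosses the WAN


pool = StoragePool()
pool.backup(b"A" * 8192)                 # two identical segments, stored once
pool.backup(b"A" * 4096 + b"B" * 4096)   # only the "B" segment is new
print(len(pool.segments))     # → 2 unique segments in the pool
print(len(pool.pending_wan))  # → 2 segments ever queued for the WAN
```

Note that the second backup contributes only one new segment to the WAN queue even though it is 8 KB of data; that reduction is the whole point of fingerprint-based deduplication.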
A storage pool at the remote site optimizes the data across all of the branch’s clients by identifying unique content and storing the backup data locally. This shortens backup and restore tasks and enables synchronization with central locations: instead of being written to tape, files are backed up over the WAN to central storage.
The solution also addresses one of the most important aspects of data security over the WAN by encrypting every file segment it sends to the storage. The “data in transit” discussed earlier comes into play here: the data is encrypted before it is sent over the WAN to the storage, ensuring it remains secure during communication. This architecture eliminates the risk of accidental loss of tapes and of unauthorized access to data, both in transit and at rest.
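The per-segment encryption step can be illustrated with a short sketch. To keep the example self-contained, a toy keystream built from SHA-256 stands in for a production cipher such as AES; this construction is for illustration only and is not what the product uses, nor is it suitable for real security. The point it shows is the flow: each segment is encrypted at the branch, only ciphertext crosses the WAN, and the data center reverses the operation.

```python
import hashlib


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key + nonce + counter (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def encrypt_segment(key: bytes, nonce: bytes, segment: bytes) -> bytes:
    """XOR the segment with the keystream; applying it again decrypts."""
    ks = keystream(key, nonce, len(segment))
    return bytes(a ^ b for a, b in zip(segment, ks))


key, nonce = b"shared-branch-key", b"seg-0001"   # hypothetical values
plain = b"branch ledger segment"
wire = encrypt_segment(key, nonce, plain)        # what crosses the WAN
assert wire != plain                             # ciphertext, not plaintext
assert encrypt_segment(key, nonce, wire) == plain  # data center decrypts
```

A real deployment would also authenticate each segment (for example with an HMAC) so that tampering in transit is detectable, not just eavesdropping.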
The solution also delivers a very tangible return on investment. It is cost effective because the alternative would be to deploy separate tape drives, media, backup software, and technical support at each site, with additional costs for administering each site and managing off-site storage of the tape media. Since the solution removes the need for on-site tapes, the customers who have deployed it have been able to justify their investment in a relatively short span of time.
Bank Islami – A Case Study
Innovative Integration recently deployed the solution at Bank Islami, giving the bank a scalable, high-performance data protection architecture for its Linux environment. That environment includes more than 100 HP servers and a centralized pool of Network Appliance storage. The bank’s challenge was consolidating the data from its more than 100 branches, located in major cities all over Pakistan. The institution had been relying on a local data protection solution built on open-source backup software. Almost all of the bank’s PCs and servers run SUSE Linux, and this extends to the branch network as well.
Bank Islami Setup at a Glance
“Each Bank Islami branch contains a file server which holds files from the specific branch users,” says Asad Alim, Head of Information Technology at Bank Islami. “At that time, there was no option other than to place a tape drive in each of the branches and use conventional scripts to perform the requisite backup.”
Using the traditional method, it could take up to a day to manage a recovery. As long as the tape was readable, a restore could always succeed, but only after the tapes were retrieved from the remote location, the right tape containing the right block of information was located, the restoration itself was performed, and the tape was sent back to the site. “Now none of this hassle is involved,” explains Asad.
Talking about bandwidth constraints, Asad Alim commented, “We were concerned about the pressure the backup would put on the network, but despite the 256Kbps bandwidth connectivity, the performance has proven to be stable.” Changes in the data are first compressed at the source and then sent across the WAN to the central storage, where they are decompressed and stored in deduplicated form. The solution also employs an intelligent algorithm for data transfer: if a backup fails mid-process due to link failure or connectivity loss, it resumes from the point at which it was interrupted.
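The compress-at-source and resume-from-checkpoint behaviour described above can be sketched as follows. This is a simplified illustration, not the product’s protocol: changed data is compressed with zlib before transmission, streamed in chunks, and, if the link drops, the transfer later resumes from the last checkpointed offset instead of starting over. The chunk size is deliberately tiny so the simulated failure is visible.

```python
import zlib

CHUNK = 4  # unrealistically small chunk so the simulated link drop is visible


def transfer(compressed: bytes, link: list, start: int, fail_after=None):
    """Stream compressed bytes from `start`; optionally drop the link after
    `fail_after` chunks. Returns the offset reached (the resume checkpoint)."""
    offset, sent = start, 0
    while offset < len(compressed):
        if fail_after is not None and sent >= fail_after:
            return offset                    # link dropped: checkpoint here
        chunk = compressed[offset:offset + CHUNK]
        link.append(chunk)                   # stand-in for a WAN send
        offset += len(chunk)
        sent += 1
    return offset


changes = b"changed branch data " * 500     # only the changes are shipped
compressed = zlib.compress(changes)         # compressed at the source

link = []
cp = transfer(compressed, link, start=0, fail_after=2)  # WAN link drops
assert cp < len(compressed)                 # transfer was interrupted
cp = transfer(compressed, link, start=cp)   # resumes from the checkpoint
assert zlib.decompress(b"".join(link)) == changes  # intact at the data center
```

Because only compressed deltas cross the link and interrupted transfers pick up where they left off, even a 256Kbps branch connection can sustain regular backups, consistent with the experience described in the quote above.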
“The cost savings are a great endorsement but what is most important is that the branch data is now secure. You have to remember that in the past, we could never be 100% certain that we could restore a lost file. Now we are. With PureDisk, we are reducing our reliance on tapes for Disaster Recovery with secure replication of the data we backup,” says Asad Alim.
Please visit www.innovativeintegration.net for more details.