IT admins tasked with restoring servers or lost data during a disruption are consumed with a single-minded purpose: successful recovery. But it shouldn't take an adverse event to underscore the importance of recovery as part of an overall backup strategy. This is especially true with large datasets. Before you consider how you're going to back up large datasets, first consider how you may need to recover the data.
Variables abound. Is it critical or non-critical data? A simple file deletion or a system-wide outage? A physical server running onsite or a virtual one hosted offsite? These and a handful of other criteria will determine your backup and disaster recovery (BDR) deployment.

What do we mean by large? A simple question with a not-so-simple answer. If your total data footprint is 5 TB or more, that's considered large. But what kind of data is it? How many actual files are there? How frequently do they change? How much can they be compressed? It's likely that two different 5 TB environments would require different data protection schemes if they were composed of different file types that changed at different rates.
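To make that concrete, here is a minimal back-of-the-envelope sketch. The change rates and compression ratios below are illustrative assumptions, not measurements; plug in figures from your own environment.

```python
# Hypothetical sketch: why two 5 TB environments can need different
# protection schemes. All figures are illustrative assumptions.

def daily_backup_gb(total_tb: float, change_rate: float, compression: float) -> float:
    """Estimate the compressed size of a daily incremental backup.

    total_tb     -- total data footprint in TB
    change_rate  -- fraction of data that changes per day (0.0-1.0)
    compression  -- size ratio after compression (0.4 = shrinks to 40%)
    """
    return total_tb * 1024 * change_rate * compression

# Environment A: mostly documents -- low churn, highly compressible.
env_a = daily_backup_gb(5, change_rate=0.02, compression=0.4)

# Environment B: databases and media -- high churn, poorly compressible.
env_b = daily_backup_gb(5, change_rate=0.10, compression=0.9)

print(f"Env A daily incremental: {env_a:.0f} GB")   # ~41 GB
print(f"Env B daily incremental: {env_b:.0f} GB")   # ~461 GB
```

Same 5 TB footprint, yet environment B pushes more than ten times the data each night, which is why one backup scheme rarely fits both.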
On the other hand, bandwidth capacity restrictions are a common denominator for all environments. The question boils down to this: How should you back up data so that it can be reliably recovered through a process that doesn't interfere with daily workloads traveling across the network? IT pros on the frontlines have no single tool for determining the impact that backing up large datasets will have on bandwidth. It's a process of trial and error, even for the experts who do it daily. You can only protect as much data as your network will allow. And there's little use backing up data that can't be recovered in a timely fashion.
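A rough transfer-time estimate is a sensible starting point for that trial and error. The sketch below assumes a hypothetical 500 GB nightly backup over a 100 Mbps link, with half the link reserved for daily workloads; your own numbers will differ.

```python
# Hypothetical sketch: will a nightly backup fit the available window?
# Link speed, reserved share, and backup size are illustrative assumptions.

def transfer_hours(backup_gb: float, link_mbps: float, backup_share: float) -> float:
    """Hours to move backup_gb over a link, using only a fraction of it.

    backup_gb    -- compressed backup size in GB
    link_mbps    -- raw link speed in megabits per second
    backup_share -- fraction of the link the backup may consume (0.0-1.0)
    """
    usable_mbps = link_mbps * backup_share
    gb_per_hour = usable_mbps * 3600 / 8 / 1024  # Mb/s -> GB per hour
    return backup_gb / gb_per_hour

# 500 GB nightly backup, 100 Mbps link, capped at 50% so daily
# workloads keep the rest.
hours = transfer_hours(500, link_mbps=100, backup_share=0.5)
print(f"Estimated transfer time: {hours:.1f} hours")
```

In this example the transfer takes roughly 23 hours, far longer than a typical overnight window, which is exactly the kind of result that forces a rethink: more bandwidth, better deduplication, or local-first backup with offsite replication.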