Where Did My Data Go?

As we find newer, faster and more efficient ways to store, access and manipulate data, we can’t keep up with the growth of the data itself. Worse still, we struggle to find ways to protect that data from being lost to the abyss.

Backups exist for one purpose (no, it’s not to cause a nightly headache for your storage admin): to restore data when it goes missing. Data can disappear in many ways, and whether the cause is accidental user deletion, data corruption, failed disks, a power outage or a natural disaster, the result is the same… users scream, “Where did my data go?!”

Many companies have complex backup schedules that use technologies such as disk staging, data de-duplication, virtual tape libraries, and physical tape libraries. But if the data itself can’t be restored, what good are the underlying technologies? Not much at all.

Many of the organizations I talk to focus all their attention on the “backup” process, but very few ever want to discuss the “restore” process. They spend thousands of dollars on nifty software that supports things like:

  • Data De-duplication – The ability to reduce data sets by storing only one copy of each block of data or file (see the sketch after this list)
  • Object Consolidation – The ability to create and amalgamate data sets from different dates into one “synthetic/virtual” backup job, which allows an “incremental forever” policy
  • Granular Recovery Functions – Very important in virtual environments, as this allows administrators to recover full VM hosts, VMs within a host, folders attached to a VM, or even single files within a VM folder
  • Zero Downtime Backup – The ability to integrate onsite storage arrays with the application and backup stacks to provide fully application-consistent backups through the use of array snapshot technology
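To make the de-duplication bullet a little more concrete, here is a minimal sketch of the idea behind block-level de-duplication: split files into blocks, fingerprint each block, and store any given block only once. It is illustrative only; the fixed 4 KB chunk size and the simple in-memory store are assumptions, and real products use far more sophisticated (often variable-size) chunking and persistent indexes.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size blocks for simplicity; real products often chunk variably


def dedupe_store(paths):
    """Store each unique block exactly once and keep a per-file 'recipe' of block hashes."""
    store = {}    # sha256 digest -> block bytes (each unique block stored once)
    recipes = {}  # file path -> ordered list of digests needed to rebuild the file
    for path in paths:
        digests = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                store.setdefault(digest, chunk)  # duplicate blocks add nothing new
                digests.append(digest)
        recipes[path] = digests
    return store, recipes


def restore_file(store, recipe, out_path):
    """Rebuild a file from its recipe -- the restore path is the part that really matters."""
    with open(out_path, "wb") as out:
        for digest in recipe:
            out.write(store[digest])
```

The restore_file function is there on purpose: a de-duplicated store is only as good as your ability to reassemble the original data from it.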

All these tools help clients reduce backup windows and add flexibility, speed and even granularity to their backups. They also increase automation and reduce user intervention. So isn’t technology a wonderful thing? And haven’t backups come a long way over the years? The short answer is YES. But unless you can restore that data successfully from the archive medium being used, the backups really aren’t all that useful, and the money spent on the bells and whistles quickly becomes a poor investment.

So as one of your trusted architecture advisors, I implore you to test those backups by actually restoring them, and not just once a year, but on a regular basis. I often see organizations shrug off a “failed backup” as no big deal, with even their senior management taking that point of view. So the next time you are discussing business continuity and backup with upper-level management, try changing the term “failed backup” to “failed restore.” How would management react to that? While the underlying definition may be the same, the perception will be far different, and all of a sudden it will become a big deal.
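To put that advice into practice, here is a minimal sketch of what a scheduled restore test might look like: restore one known “canary” file from last night’s backup and verify its checksum against a known-good value. Everything specific here is a placeholder: the backup-tool command, the file paths and the expected digest are hypothetical stand-ins for whatever your backup product actually provides.

```python
import hashlib
import subprocess
from pathlib import Path

# All of the following are hypothetical placeholders -- substitute your own
# backup product's restore command, a sample file you back up nightly, and
# the known-good checksum of that file.
RESTORE_CMD = [
    "backup-tool", "restore",
    "--item", "/data/canary.txt",
    "--to", "/tmp/restore-test/canary.txt",
]
RESTORED_FILE = Path("/tmp/restore-test/canary.txt")
EXPECTED_SHA256 = "replace-with-known-good-digest"


def verify_restore() -> bool:
    """Actually restore a known file, then verify its contents."""
    subprocess.run(RESTORE_CMD, check=True)  # raise if the restore itself fails
    digest = hashlib.sha256(RESTORED_FILE.read_bytes()).hexdigest()
    return digest == EXPECTED_SHA256


if __name__ == "__main__":
    if verify_restore():
        print("Restore test: PASS")
    else:
        print("Restore test: FAIL -- this is a failed restore, not just a failed backup")
```

Run something like this from a scheduler every night and alert on a FAIL, and a “failed restore” gets noticed long before a user has to ask where their data went.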

About Adam Wolfson

Adam Wolfson is now an HP Technical Architect here at Softchoice and holds more than 8 enterprise technical architecture certifications from HP.