2 Principles Of Data Backup That Save $12,500 Per Hour


According to a recent survey of over 2,000 SMBs, the average cost of a data center outage is $12,500 per hour for an SMB, and up to $60,000 per hour for a mid-sized enterprise. The same report indicates that fewer than 20% of SMBs back up all of their data, and that 88% of businesses have lost critical data within the last two years. And those figures don’t even account for businesses that schedule backups but have never attempted a restore.

Even when backups run on schedule, many organizations don’t know whether those backups are valid until an audit, or a restore request outside their internal backup policy, forces the question.

Here are the only 2 data backup principles you need to follow: [Read more…]

Feeling The Pressure Of Big Data?


Over the years, data centers became fragmented, with numerous types of proprietary software living in silos inside specialized hardware components – making them complex and frustrating to manage. Today, virtualization helps absorb and minimize this challenge, but it creates another: server and application sprawl driven by explosive data growth.

Server and application sprawl will cost you

A sprawl of uncontrolled and poorly managed application deployments leads to application unavailability that endangers an organization’s profitability. Examples continue to sprout up all around us – remember the Amazon data center failure? It always brings to mind an article I once read:

“As business becomes increasingly dependent on technology and information, availability is a universal concern for every business, in every industry… And globalization means there are no more periods of ‘acceptable’ downtime. At any time of the day or night, somewhere in the world, customers and vendors need access to your corporate information. If they can’t get it, they’ll go elsewhere – creating an opportunity for your competition.”

– David M. Fishman, Sun Microsystems, Application Availability: An Approach to Measurement

I was young, and I have to admit it struck a chord and created a sense of urgency – maybe that’s why I am so passionate about what I do today. Given the problem of server and application sprawl, automation and ease of management are no longer a ‘nice to have’; they are a must. With this in mind, where do you start?

[Read more…]

Automated Storage Tiering: 5 Reasons You Should Consider It

Organizations of all sizes implement virtualization to gain efficiencies for both IT and the business. Typically, the server admin group leads this charge by turning existing physical servers into “virtual hosts” that run multiple virtualized servers. In other words, it is possible to implement server virtualization without ever changing the server, storage, or networking hardware. From a cost perspective, this is a huge benefit! However, before you get started you must analyze your server and storage hardware to find out:

  • Are your server and storage hardware compatible?
  • Is a more efficient solution available? [Read more…]

Why Data Backup And Recovery Systems Are Like Your Insurance Policy

 

Your data backup and recovery systems are like your insurance policy. Having a process and system in place to keep your data secure and quickly, reliably recoverable means you can sleep at night, knowing that your valuable corporate assets are safe.

But wait – have you tested your tape backup lately? If you have legacy tape backup systems in place, can they cope with the demands of a consolidated and virtualized infrastructure? What about data volumes growing exponentially year over year?

Challenges of a Tape Backup Environment

Tape has long been the baseline backup medium for most businesses, but with the arrival of consolidated infrastructures and virtualized environments, the demands on legacy systems may be too complex or costly to manage, depending on the applications and recovery point objectives in scope for the backup. [Read more…]
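The recovery point objective mentioned above is easy to reason about numerically. As a hypothetical sketch (the timestamps and the 4-hour RPO are invented for illustration), checking whether the latest backup still satisfies an RPO is just a timestamp comparison:

```python
from datetime import datetime, timedelta

def within_rpo(last_backup: datetime, rpo: timedelta, now: datetime) -> bool:
    """True if the potential data-loss window since the last backup fits the RPO."""
    return now - last_backup <= rpo

# Hypothetical example: a nightly tape job measured against a 4-hour RPO.
last = datetime(2024, 1, 1, 0, 0)            # last successful backup at midnight
check = datetime(2024, 1, 1, 10, 0)          # checking 10 hours later
within_rpo(last, timedelta(hours=4), check)  # False: a nightly cycle misses a 4-hour RPO
```

Checks like this make it obvious when an application’s RPO is simply out of reach for a once-a-day tape cycle.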

It’s Spring Cleaning Time for Your Data Center

 

Spring may mean warmer weather, but for most of us it also means getting in touch with our inner hoarder: seriously digging into our closets, cupboards and drawers and parting with stuff we don’t use anymore. The alternative, of course, is watching helplessly as our dens, basements, hallways and garages simply become makeshift storage rooms.

As frustrating as spring cleaning can be, it’s a piece of cake compared to the hoarding that’s going on in the average data center. Consider the astronomical growth in data that’s causing organizations’ storage needs to rise by 40% a year – all while IT budgets remain flat and data center resources are stretched to the limit. You can dig into your closet and toss your acid wash jeans from 1993 or that gaudy bowl you got from your aunt in Albuquerque, but how do you toss gigabytes and terabytes of data you can’t see? Where do you start? [Read more…]
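That 40% figure compounds quickly. A back-of-the-envelope projection (the 100 TB starting point is an invented example, not from the article) shows why flat budgets can’t keep up:

```python
def projected_storage(current_tb: float, annual_growth: float, years: int) -> float:
    """Project storage needs under compound annual growth."""
    return current_tb * (1 + annual_growth) ** years

# Hypothetical: 100 TB today, growing 40% per year.
projected_storage(100.0, 0.40, 5)  # ~538 TB: more than a 5x increase in five years
```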

Juggling Storage Challenges with Unified Management: How to Avoid Dropping the Ball

I don’t know about you, but I find juggling one ball hard, let alone three or 43. But keeping all those balls from crashing all around you is a little like the challenge organizations face as they try to store and manage their ever-increasing volumes of data.

And I do mean ever-increasing. Because Great Recession or not, data growth has continued unabated – thanks to the digitization of infrastructures worldwide, the need to keep more copies of data for longer periods and the rapid increase in distributed data sources.

When it comes to managing this tidal wave of data, there is no shortage of products and approaches to choose from. Unfortunately, most of these traditional offerings have not kept pace with the many new and complex requirements of storage, nor do they address the need for a single management view. [Read more…]