It’s Spring Cleaning Time for Your Data Center

Spring may mean warmer weather, but for most of us it also means confronting our inner hoarder: seriously digging into our closets, cupboards and drawers and parting with stuff we don’t use anymore. The alternative, of course, is watching helplessly as our dens, basements, hallways and garages simply become makeshift storage rooms.

As frustrating as spring cleaning can be, it’s a piece of cake compared to the hoarding that’s going on in the average data center. Consider the astronomical growth in data that’s causing organizations’ storage needs to rise by 40% a year – all while IT budgets remain flat and data center resources are stretched to the limit. You can dig into your closet and toss your acid wash jeans from 1993 or that gaudy bowl you got from your aunt in Albuquerque, but how do you toss gigabytes and terabytes of data you can’t see? Where do you start?

Organizations should expect more from their data centers: more efficiency to reduce costs, more intelligence to increase application performance and more automation to boost administrator productivity.

After all, the growth in data should be driving innovation, not slowing it down.

Unfortunately, too many IT managers – faced with double-digit increases in data year after year – are making cost-driven decisions that may hurt their business in the long run. Here are the three most common missteps, and how to avoid them:

1. Don’t reduce storage by eliminating data

As I mentioned, you can dump the old jeans or the ugly wedding gift, but you can’t play fast and loose with critical data – for instance, by having employees store all email locally to preserve space on the company email server. The risk, of course, is that critical company information becomes vulnerable to local disk crashes that can put employees or entire departments out of commission for days.

The bottom line: When space is tight, don’t stop storing important data. Instead, maximize your existing capacity through storage reduction technologies like compression and deduplication.
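
To make the idea concrete, here’s a minimal sketch of how deduplication works under the hood: data is split into chunks, each chunk is fingerprinted with a hash, and duplicate chunks are stored only once. The 4 KB chunk size and the sample data are purely illustrative; real storage systems use more sophisticated (often variable-size) chunking and layer compression on top.

```python
# A minimal sketch of block-level deduplication: split data into fixed-size
# chunks, fingerprint each chunk with a hash, and store each unique chunk
# exactly once. Chunk size and sample data are hypothetical.
import hashlib

CHUNK_SIZE = 4096  # illustrative; real systems tune this, often per workload


def dedupe(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Return a store of unique chunks plus the recipe to rebuild the data."""
    store: dict[str, bytes] = {}
    recipe: list[str] = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicates resolve to one stored copy
        recipe.append(digest)
    return store, recipe


def rebuild(store: dict[str, bytes], recipe: list[str]) -> bytes:
    return b"".join(store[digest] for digest in recipe)


if __name__ == "__main__":
    data = b"same old email attachment " * 10_000  # highly redundant payload
    store, recipe = dedupe(data)
    assert rebuild(store, recipe) == data
    stored = sum(len(chunk) for chunk in store.values())
    print(f"logical: {len(data)} B, stored: {stored} B, "
          f"ratio: {len(data) / stored:.1f}x")
```

On highly redundant data – think the same attachment saved in a hundred inboxes – even this naive approach stores only a fraction of the logical bytes.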

2. Don’t skimp on backup and restore

To stem the exponential growth of multiple backups, it may seem smart to limit the number of database snapshots you take, or the number of redundant files you keep. But if your system is compromised and your data is not current, your customers, employees, vendors and partners will be inconvenienced or worse.

The bottom line: Be sure your storage strategy supports backup and restore of data without impacting performance. Implement a tiered storage solution that gives you the right price/performance for data at each stage.
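
As a rough illustration of what a tiered policy looks like, here’s a sketch that demotes data to cheaper storage as it ages. The tier names and age thresholds are hypothetical; production policies typically also weigh access frequency, recovery-time objectives and business value.

```python
# A sketch of an age-based tiering policy: recent, hot data stays on fast
# (expensive) storage; older data is demoted to cheaper tiers. Tier names
# and thresholds are hypothetical.
from datetime import datetime, timedelta

TIERS = [  # (tier name, maximum age before demotion to the next tier)
    ("ssd", timedelta(days=7)),      # hot: recent snapshots, fastest restores
    ("disk", timedelta(days=90)),    # warm: older data, still online
    ("archive", timedelta.max),      # cold: everything else
]


def tier_for(last_accessed: datetime, now: datetime) -> str:
    """Return the fastest tier this data still qualifies for by age."""
    age = now - last_accessed
    for name, max_age in TIERS:
        if age <= max_age:
            return name
    return TIERS[-1][0]  # unreachable given the catch-all tier, kept for safety


if __name__ == "__main__":
    now = datetime.now()
    for days in (1, 30, 400):
        snapshot_time = now - timedelta(days=days)
        print(f"{days:>3}-day-old snapshot -> {tier_for(snapshot_time, now)}")
```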

3. Don’t simply throw hardware at the problem

Storage may be less expensive than it once was, but it’s not free – nor is the physical space it consumes, nor are costs like maintenance, power and cooling. What’s more, indiscriminately loading more and more data onto more and more hardware can degrade application performance, costing you dearly in lost productivity.

The bottom line: The answer isn’t more storage – it’s making your existing storage work more efficiently by investing in a virtualization solution that dramatically increases capacity utilization of your existing storage hardware.

The latest storage virtualization systems are designed to consolidate block and file workloads into a single storage system for simpler management, reduced costs, scalable capacity, smart performance and high availability. By virtualizing and reusing existing disk systems, they can deliver greater potential return on investment – not to mention a lot more room to store your shoes.
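
To see why utilization is the lever, here’s a back-of-the-envelope sketch. All the inputs – 500 TB of raw capacity, 35% utilization in siloed arrays versus 70% once pooled behind a virtualization layer – are assumptions for illustration, but the arithmetic shows how pooling hardware you already own can absorb years of growth before you buy another array.

```python
# A back-of-the-envelope model of what pooling siloed arrays behind a
# virtualization layer buys you. All inputs are assumptions for illustration.

def reclaimed_capacity_tb(raw_tb: float, util_before: float, util_after: float) -> float:
    """Usable capacity gained by raising utilization on hardware you own."""
    return raw_tb * (util_after - util_before)


if __name__ == "__main__":
    raw = 500.0                  # raw capacity already on the floor, in TB
    before, after = 0.35, 0.70   # assumed siloed vs. pooled utilization
    growth_rate = 0.40           # the 40% annual data growth cited above
    gained = reclaimed_capacity_tb(raw, before, after)
    used_now = raw * before
    years = gained / (used_now * growth_rate)
    print(f"Pooling frees {gained:.0f} TB of existing capacity – roughly "
          f"{years:.1f} years of growth absorbed before buying new hardware.")
```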

Related post: A unified front against the threat of data overgrowth.

About Ferrol Macon

For more than 20 years, Ferrol Macon’s career has lived at the intersection of people and technology. He has helped design storage and data management solutions for companies large and small in almost every industry.