Why Data Backup And Recovery Systems Are Like Your Insurance Policy

Your data backup and recovery systems are like your insurance policy. Having a process and system in place to ensure your data is secure and can be recovered quickly and reliably means you can sleep at night, knowing your valuable corporate assets are safe.

But wait: have you tested your tape backups lately? If you have legacy tape backup systems in place, can they cope with the demands of a consolidated and virtualized infrastructure? Can they keep up with data volumes growing exponentially year over year?

Challenges of a Tape Backup Environment

Tape has long been the baseline backup medium for most businesses. But with the arrival of consolidated infrastructures and virtualized environments, legacy systems may prove too complex or costly to manage, depending on the applications and recovery point objectives in scope for the backup.
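
If you haven’t actually tested a restore, you don’t really know what your insurance policy covers. As a minimal sketch of the idea – assuming a recent backup has already been restored to a staging directory, with the paths below purely hypothetical – a restore test can be as simple as comparing checksums against the live data:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir, restore_dir):
    """Compare every file in the source tree against its restored copy.

    Returns the relative paths that are missing or differ.
    """
    source, restore = Path(source_dir), Path(restore_dir)
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        restored = restore / rel
        if not restored.is_file():
            problems.append("missing: %s" % rel)
        elif sha256_of(src_file) != sha256_of(restored):
            problems.append("checksum mismatch: %s" % rel)
    return problems

# Hypothetical paths -- point these at live data and a test restore.
issues = verify_restore("/data/finance", "/mnt/restore-test/finance")
print("restore verified" if not issues else "\n".join(issues))
```

Your backup software’s own verify step, if it has one, does this more efficiently – but an independent spot check like this is cheap and catches configuration drift.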

Juggling storage challenges with unified management: How to avoid dropping the ball

I don’t know about you, but I find juggling one ball hard, let alone three or 43. But keeping all those balls from crashing down around you is a little like the challenge organizations face as they try to store and manage their ever-increasing volumes of data.

And I do mean ever-increasing. Because Great Recession or not, data growth has continued unabated – thanks to the digitization of infrastructures worldwide, the need to keep more copies of data for longer periods and the rapid increase in distributed data sources.

When it comes to managing this tidal wave of data, there is no shortage of products and approaches to choose from. But most of these traditional offerings have not kept pace with the many new and complex requirements of storage, nor do they address the need for a single management view.

Don’t let data recovery times keep profits down

You’re an IT decision maker, and business continuity (BC) is an important component of your IT infrastructure. You understand that accidental or malicious data loss, unplanned system outages, user error, hardware theft or failure, power failure, software failure, fire, flood, earthquake, landslide, hurricane, tidal wave or tornado can blow your company’s data into oblivion.

Have you considered refreshing your backup architecture and processes with short recovery windows as the primary objective?

Bigger, Faster and “More Efficient” Doesn’t Always Mean Better

In today’s dynamic and ever-changing IT landscape, there is a lot of emphasis on purchasing technologies that do more with less, increase performance and make existing approaches more efficient. Clients are turning to their trusted advisors and asking them to sift through all the stories, FUD and hype, in the hope that their solution providers will help them architect a strategy that uses the newest technologies to increase competitiveness while reducing total cost of ownership.

The single greatest advance in this area, at least in my opinion, is server virtualization, which has helped clients consolidate siloed resources and management structures while increasing performance and availability and massively reducing TCO.

Another area in which massive savings have been found is the deduplication of data within an IT environment. The tactic reduces the amount of data residing in an environment – both on primary storage systems and in the backup stack – easing the strain on networks and cutting the time and money spent on expensive disk technologies.
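
For readers who haven’t seen deduplication up close, here is a toy sketch of the underlying idea in Python – not any vendor’s implementation. It splits a file into fixed-size blocks, stores one copy of each unique block keyed by its content hash, and reports the reduction; production systems typically use variable-size chunking and far more careful metadata handling. The file name is a stand-in:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size blocks; real products often chunk variably

def dedupe(path):
    """Index a file's blocks by content hash; report logical vs. unique bytes."""
    store = {}    # content hash -> single stored copy of the block
    logical = 0
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(CHUNK_SIZE), b""):
            logical += len(block)
            store.setdefault(hashlib.sha256(block).hexdigest(), block)
    unique = sum(len(b) for b in store.values())
    return logical, unique

logical, unique = dedupe("backup.img")  # hypothetical file name
if unique:
    print("%d logical bytes -> %d stored bytes (%.1f:1 reduction)"
          % (logical, unique, logical / unique))
```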

While both of these tools can provide massive savings in capex/opex to clients when implemented in the right way, they can also cause as many issues as they solve if not properly thought out and managed through their life cycle.

Was That VM Ever Really Needed?

When working with clients who have been virtualized for a few years and have moved on to standardizing the virtualization of every application that supports it, the very ease of creating new services (VMs) can become an issue in itself.
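
A first step toward reining in this VM sprawl is simply producing a list of reclamation candidates. The sketch below assumes you have exported an inventory with power state and a last-activity timestamp from your hypervisor’s reporting tools; the field names and the 90-day threshold are illustrative assumptions, not any product’s schema:

```python
from datetime import datetime, timedelta

# Hypothetical inventory export -- in practice this would come from your
# hypervisor's API or a reporting tool, not be hard-coded.
inventory = [
    {"name": "web-01",      "powered_on": True,  "last_io": datetime(2012, 5, 1)},
    {"name": "test-legacy", "powered_on": False, "last_io": datetime(2011, 8, 20)},
    {"name": "dev-scratch", "powered_on": True,  "last_io": datetime(2011, 11, 2)},
]

def sprawl_candidates(vms, idle_days=90, as_of=None):
    """Flag VMs that are powered off or have shown no I/O within the window."""
    as_of = as_of or datetime.now()
    cutoff = as_of - timedelta(days=idle_days)
    return [vm["name"] for vm in vms
            if not vm["powered_on"] or vm["last_io"] < cutoff]

print(sprawl_candidates(inventory))
```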

Virtual storage capacity management: an admin’s worst nightmare?

Most hear “server virtualization” and think: efficiency, ease of management, high availability and flexibility. But these benefits – the aim of sound IT planning – really only extend to the server (and in some cases application) layer. Administration, it turns out, is a whole other kettle of fish.

That’s because the complexities of introducing server virtualization force administrators to spend far more time than in the past planning the overall capacity requirements of an environment and how to lay down data, to ensure that the benefits virtualization brings to servers aren’t offset by problems in the storage environment.

Here are the three most common technology features created to help alleviate this pain point – as well as some of their pitfalls:

Thin Provisioning: Thin provisioning allows administrators to present an OS, application or hypervisor with an amount of storage it can grow into, without actually allocating the physical space on the SAN/NAS. The array allocates storage only as data is written to it, so administrators can spend far less time planning – and only need to purchase and allocate what’s needed now, rather than in six or 12 months.

While thin provisioning provides a lot of value – extending the lifespan of existing capacity and reducing the number of tasks needed to manage virtual machines and data – it also causes issues.
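
You can see the same mechanism at the filesystem level with sparse files, which are essentially thin provisioning in miniature. A small sketch – assuming Linux or macOS and a filesystem that supports sparse files, such as ext4 or XFS – creates a file with a 1 GB apparent size, shows that almost nothing is actually allocated, then watches allocation grow only as data is written:

```python
import os

path = "thin_demo.img"

# "Promise" 1 GB without writing anything -- the apparent size grows,
# but the filesystem allocates almost no blocks yet.
with open(path, "wb") as f:
    f.truncate(1 << 30)

st = os.stat(path)
print("apparent size: %d bytes" % st.st_size)            # 1073741824
print("allocated:     %d bytes" % (st.st_blocks * 512))  # near zero

# Write 4 KB of real data; allocation grows only by roughly what was written.
with open(path, "r+b") as f:
    f.write(b"x" * 4096)

st = os.stat(path)
print("allocated after write: %d bytes" % (st.st_blocks * 512))
os.remove(path)
```

The pitfall is the same one arrays face: every consumer believes it owns the full promised capacity, so oversubscription has to be monitored.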

Get in Front of Growing Data

You’re storing more data than you need to. Cut the excess in your infrastructure, and you can benefit from efficiency gains, cost savings and budget that can be redeployed to high impact IT initiatives. All it takes is a little data deduplication. Lower the amount of data you store without sacrificing what you need, and maintaining the status quo becomes less expensive and more efficient.

Your organization generates a lot of data, yet not all of it needs to be stored. There are redundancies in systems throughout your data center, and by identifying them and eliminating the duplicates, you can cut your storage footprint, invest less in equipment and streamline your data center operations. Ultimately, the cost savings can be redeployed to other IT initiatives, particularly those with a compelling ROI proposition.

Using a data deduplication solution, you can reduce the amount of data you store by up to 60X. This translates to backup times that are 90 percent faster and a drop in bandwidth consumption of up to 98 percent. Quite simply, data deduplication delivers a clearly defined storage management advantage. These are results you can see … and measure.
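
Ratios like these are best-case figures and will vary with your data, but the capacity arithmetic behind them is easy to sanity-check. A quick sketch, with the 120 TB input purely an assumed example:

```python
def physical_footprint(logical_tb, dedup_ratio):
    """Disk actually consumed once redundant blocks are stored only once."""
    return logical_tb / dedup_ratio

logical = 120.0  # assumed TB of backup data
for ratio in (5, 20, 60):
    print("%d:1 -> %.1f TB on disk" % (ratio, physical_footprint(logical, ratio)))
# 5:1 -> 24.0 TB, 20:1 -> 6.0 TB, 60:1 -> 2.0 TB
```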

As your company continues to generate data (which needs to be stored), data deduplication lets you make room in your existing storage infrastructure rather than purchase new storage equipment. Imminent budgetary commitments can consequently be deferred.

Few IT investments deliver the sort of ROI you can realize with data deduplication as part of a virtualized storage infrastructure. The benefits are tangible and rapidly realized. Get rid of the extra data you’re paying to store, and the cost to operate your IT environment – and power the business – drops substantially. At the same time, you extend the value of your existing storage environment well into the future.

Data deduplication isn’t just a technology decision – it’s a financial one. Implement a deduplication solution, and you’ll succeed from both perspectives.