Where Did My Data Go?

It seems that as we find newer, faster and more efficient ways to store, access and manipulate data, we can't keep up with the growth of the data itself. Even worse, we seem to be at odds with finding ways to successfully protect that data from being lost in the abyss.

Backups exist for one function (no, it's not to cause a nightly headache for your storage admin): to make it possible to restore data when it disappears. That disappearance can happen in many ways, and whether it's accidental user deletion, data corruption, failed disks, a power outage or a natural disaster, the result is the same… users scream "Where did my data go?!"

Many companies have complex backup schedules which utilize technologies such as disk staging, data de-duplication, virtual tape libraries, and physical tape libraries. But if the data itself can’t be restored, what good are the underlying technologies? Not much at all.

Many of the organizations I talk to focus all their attention on the “backup” process, but very few ever want to discuss the “restore” process. They spend thousands of dollars on nifty software that supports things like:

  • Data De-duplication – The ability to reduce data sets by storing only one copy of each block of data or file (a simplified sketch follows this list)
  • Object Consolidation – The ability to create and amalgamate different data sets from different dates into one "synthetic/virtual" backup job, which allows them to run an "incremental forever" policy
  • Granular Recovery Functions – Very important within virtual environments, as this allows administrators to recover full VM hosts, VMs within a host, folders attached to a VM, or even single files within a VM folder
  • Zero Downtime Backup – The ability to integrate onsite storage arrays with the application and backup stacks to provide fully application-consistent backups through the use of array snapshot technology
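
To make the de-duplication item concrete, here's a minimal Python sketch under a few assumptions of my own (fixed 4 KB blocks, SHA-256 fingerprints, and an in-memory dictionary standing in for the block store). It isn't any particular product's engine – just the core idea of storing each unique block once and keeping a "recipe" that can rebuild the original data:

```python
# Simplified block-level de-duplication sketch (illustrative only, not any vendor's engine).
import hashlib

BLOCK_SIZE = 4096   # fixed-size chunking; real products often use variable-size chunks


def dedup_store(data: bytes, store: dict) -> list:
    """Split data into blocks, keep one copy per unique block, return the recipe (list of hashes)."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:        # only previously unseen blocks consume space
            store[digest] = block
        recipe.append(digest)
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its recipe – the part that actually matters."""
    return b"".join(store[digest] for digest in recipe)


store = {}
original = b"A" * 8192 + b"B" * 4096   # two identical 'A' blocks plus one 'B' block
recipe = dedup_store(original, store)
assert restore(recipe, store) == original
print(f"{len(recipe)} blocks referenced, {len(store)} unique blocks stored")   # 3 referenced, 2 stored
```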

All these tools help clients reduce backup windows and add flexibility, speed and even granularity to their backups. They also increase automation and reduce user intervention. So isn't technology a wonderful thing? And haven't backups come so far over the years? The short answer is YES. But unless you can restore that data successfully [Read more…]

Times are a-changin’ in the network

Networking is a word that, for many, causes trepidation and excitement at the same time, and I'm not necessarily talking only about networking equipment in IT; I'm also talking about human networking. We all know how nerve-racking it can be to step into a room full of strangers with the goal of branching out to meet new people.

As society has become more digitally focused in the way we interact, human networking has become a lot less scary for many (while scarier for others!). Today, through tools like Facebook and LinkedIn, people can learn about each other's most intimate details, interact in both a personal and professional capacity, and in many cases become closer to one another with far less effort or time than in the past.

Many people may be wondering what the above has to do with IT, and may be thinking that I'm just off on another one of my tangents with no real direction for this blog post… Well, here comes the meat and potatoes… [Read more…]

The color of your cloud

The term "cloud" in IT today evokes many responses, feelings and ideas on its purpose, makeup, and overall value to an organization. Some believe "cloud" is exclusive to IT services fully residing in an externally owned and run data center, in which an organization rents resources through an on-demand model. Some people believe cloud to be the creation of IT as a service within the organization, with the hopes of creating true utility computing. And some people think cloud is nothing more than hyperbole, clever marketing and vendors trying to hawk more of their gear to unsuspecting punters.

I’m here to tell you that in my humble opinion “cloud” is none of this, and all of the above at the same time. Sound contradictory? It is, and isn’t… Confusing? It doesn’t have to be.

To lend credence to my approach to explaining cloud, a bit of a history lesson may be needed (so bear with me). In the heady days when the mainframe ruled the corporate IT landscape, the idea was to provide a centralized computing model that could allocate resources to services as needed. Mainframes provided a stable, highly available and scalable platform that IT could count on to run and support an entire business. While proprietary, expensive and about as easy to manage as a room of 30 toddlers, they provided stability, some flexibility and the resources needed for even the most demanding services that companies required to be competitive.

The industry chugged along, mainframes were the way to go and Unix was king… Then it all changed with the creation of the personal computer [Read more…]

Bigger, Faster and “More Efficient” Doesn’t Always Mean Better

In today's dynamic and ever-changing IT landscape, there is a lot of emphasis on purchasing technologies that do more with less, increase performance, and make existing approaches more efficient. Clients are turning to their trusted advisors and asking them to sift through all the stories, FUD and hype, in the hope that their solution providers will help them architect a strategy that uses the newest technologies to increase competitiveness, all while reducing total cost of ownership.

The single greatest advance in this area, at least in my opinion, is the virtualization of servers, which has helped clients consolidate siloed resources and management structures while increasing performance and availability and reducing TCO in massive ways.

Another area in which massive savings have been found is the de-duplication of data within an IT environment. This is a tactic employed to reduce the amount of data that resides in an environment, both on primary storage systems and in the backup stack, in an effort to reduce the strain on networks as well as the time and money spent on expensive disk technologies.

While both of these tools can provide massive savings in capex/opex to clients when implemented in the right way, they can also cause as many issues as they solve if not properly thought out and managed through their life cycle.

Was That VM Ever Really Needed??

When working with clients who have been virtualized for a few years and have moved on to standardizing the virtualization of all applications that are supported in a virtualized state, the very ability to create services (VMs) so quickly can be an issue in itself. [Read more…]

Virtual storage capacity management: an admin’s worst nightmare?

Most hear “server virtualization” and think: efficiency, ease of management, high availability and flexibility. But these benefits – the aim of sound IT planning – really only extend to the server (and in some cases application) layer. Administration, it turns out, is a whole other kettle of fish.

That's because the complexities of introducing server virtualization force administrators to spend far more time than in the past planning an environment's overall capacity requirements and how to lay down data, to ensure that the benefits virtualization brings to servers aren't offset by problems in the storage environment.

Here are the three most common technology features created to help alleviate this pain point – as well as some of their pitfalls:

Thin Provisioning: Thin provisioning allows administrators to show an OS/app/hypervisor an amount of storage it can grow into, while not actually allocating the physical space on the SAN/NAS. The SAN/NAS only allocates storage as data is written to it, so administrators can spend far less time planning – and only need to purchase and allocate what's needed NOW, versus in 6 or 12 months.
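
As a rough illustration of that accounting, here's a minimal Python sketch under my own assumptions – the class name, sizes and exhaustion check are invented for this example, not any array vendor's API. The point it shows: logical capacity is promised up front, physical blocks are consumed only as data lands, and the overcommit ratio is what the administrator has to keep an eye on.

```python
# Minimal sketch of thin-provisioning accounting (hypothetical names, not a vendor API).
# Logical capacity is promised to hosts up front; physical space is consumed only on write.

class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb      # what the SAN/NAS actually has
        self.provisioned_gb = 0             # what hosts have been promised
        self.written_gb = 0                 # what has actually been written

    def provision_lun(self, size_gb: int) -> None:
        """Promise a LUN of size_gb without reserving physical space."""
        self.provisioned_gb += size_gb

    def write(self, gb: int) -> None:
        """Writes consume real capacity; the pool can run out while LUNs still look half empty."""
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("Pool exhausted: hosts still think they have free space!")
        self.written_gb += gb

    @property
    def overcommit_ratio(self) -> float:
        return self.provisioned_gb / self.physical_gb


pool = ThinPool(physical_gb=10_000)      # 10 TB of real disk
for _ in range(5):
    pool.provision_lun(4_000)            # five 4 TB LUNs promised to hypervisors
print(f"Overcommit ratio: {pool.overcommit_ratio:.1f}x")   # 2.0x – fine until data growth catches up
```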

While thin provisioning provides a lot of value – extending existing capacity lifespan and reducing the number of tasks needed to manage virtual machines and data – it also causes issues. [Read more…]