Bigger, Faster and “More Efficient” Doesn’t Always Mean Better

In today’s ever-changing IT landscape, there is a lot of emphasis on purchasing technologies that do more with less, increase performance, and make existing approaches more efficient. Clients are turning to their trusted advisors and asking them to sift through the stories, FUD and hype, in the hope that their solution providers will help them architect a strategy that uses the newest technologies to increase competitiveness, all while reducing total cost of ownership.

The single greatest advance in this area, at least in my opinion, is server virtualization, which has helped clients consolidate siloed resources and management structures while increasing performance and availability and dramatically reducing TCO.

Another area where massive savings have been found is the de-duplication of data within an IT environment. By reducing the amount of data that resides in an environment – both on primary storage systems and in the backup stack – de-duplication eases the strain on networks and cuts the time and money spent on expensive disk technologies.
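To make the idea concrete, here is a minimal sketch of fixed-size-block de-duplication: fingerprint each block and count how many are unique. This is an illustration only – the block size, hashing scheme and file path are assumptions, and real arrays use far more sophisticated chunking and metadata handling.

```python
# Minimal illustration of block-level de-duplication: hash fixed-size blocks
# and count duplicates to estimate a dedup ratio. Assumes 4 KiB blocks and
# SHA-256 fingerprints purely for the sake of the example.
import hashlib

BLOCK_SIZE = 4096  # assumed block size for this sketch

def dedup_stats(path):
    """Count total vs. unique blocks in a file to estimate dedup savings."""
    seen = set()
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            total += 1
            seen.add(hashlib.sha256(block).digest())
    unique = len(seen) or 1
    print(f"{total} blocks, {len(seen)} unique -> {total / unique:.1f}:1 dedup ratio")

# Example usage (hypothetical path):
# dedup_stats("/var/backups/nightly.img")
```

The same principle applies whether the de-duplication happens inline on a primary array or post-process in the backup stack: only one copy of each unique block needs to land on disk.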

While both of these tools can provide massive capex/opex savings to clients when implemented the right way, they can also cause as many issues as they solve if not properly planned and managed through their life cycle.

Was That VM Ever Really Needed?

When working with clients who have been virtualized for a few years and have moved on to standardizing on virtualization for every application that supports it, the very ease of spinning up new services (VMs) can become an issue in itself. [Read more…]
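One practical way to get ahead of VM sprawl is a periodic audit of the inventory. Below is a hedged sketch that flags candidate “zombie” VMs from a hypothetical CSV export (the column names, threshold and file name are all assumptions for illustration – a real environment would pull this data from vCenter or a similar inventory source).

```python
# Sketch: flag VMs that are powered off or idle beyond a policy threshold,
# reading from a hypothetical inventory export with columns:
# name, power_state, last_logon_days
import csv

IDLE_THRESHOLD_DAYS = 90  # assumed review policy

def find_stale_vms(inventory_csv):
    stale = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            powered_off = row["power_state"].lower() == "poweredoff"
            idle = int(row["last_logon_days"]) > IDLE_THRESHOLD_DAYS
            if powered_off or idle:
                stale.append(row["name"])
    return stale

# Example usage (hypothetical export file):
# print(find_stale_vms("vm_inventory.csv"))
```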

Virtual storage capacity management: an admin’s worst nightmare?

Most hear “server virtualization” and think: efficiency, ease of management, high availability and flexibility. But these benefits – the aim of sound IT planning – really only extend to the server (and in some cases application) layer. Administration, it turns out, is a whole other kettle of fish.

That’s because the complexity of introducing server virtualization into an environment forces administrators to spend far more time than in the past planning the environment’s overall capacity requirements and how to lay out data, so that the benefits virtualization brings to servers aren’t offset by problems in the storage environment.

Here are the three most common technology features created to help alleviate this pain point – as well as some of their pitfalls:

Thin Provisioning: Thin provisioning allows administrators to present an OS/application/hypervisor with an amount of storage it can grow into, without actually allocating the physical space on the SAN/NAS. The SAN/NAS only allocates storage as data is written to it, so administrators can spend far less time planning – and only need to purchase and allocate what’s needed now, versus in 6 or 12 months.
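As a rough filesystem-level analogy (an assumption for illustration, not how a SAN/NAS implements it), a sparse file behaves the same way: it appears to be its full logical size, but only consumes physical blocks as data is actually written.

```python
# Analogy for thin provisioning: a sparse file that "presents" 10 GiB but
# only consumes blocks as data is written. Works on Unix-like filesystems;
# the path and sizes are arbitrary for this sketch.
import os

path = "thin_volume.img"          # hypothetical backing file
apparent_size = 10 * 1024**3      # 10 GiB presented to the consumer

with open(path, "wb") as f:
    f.truncate(apparent_size)     # sets the logical size; no blocks allocated yet
    f.write(b"actual data")       # physical blocks allocated only on write

st = os.stat(path)
print(f"apparent size : {st.st_size / 1024**3:.1f} GiB")
print(f"space consumed: {st.st_blocks * 512 / 1024:.0f} KiB")  # blocks actually allocated
```

The gap between the apparent size and the space consumed is exactly where the planning risk lives: if every consumer grows into its presented capacity at once, the physical pool can run out long before the logical numbers suggest.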

While thin provisioning provides a lot of value – extending the lifespan of existing capacity and reducing the work of managing virtual machines and data – it also causes issues. [Read more…]