Not all data is created equal.

SANs and Storage Tiering: Difficult Situations Inspire Ingenious Solutions.

“We hold these truths to be self-evident, that all men are created equal…” – Thomas Jefferson, US Declaration of Independence 

While it is self-evident that people are created equal, it is equally self-evident to IT professionals managing application infrastructure that not all data is created equal.

In the late 1980s through the 1990s, Information Technology served only a small number of distinct applications – Financial Systems, Document Creation, and one or two custom applications, all with dedicated hardware.  Data sets were small.  For most companies, the number of applications that pushed the performance envelope could be counted on one hand.  The primary purpose of consolidated storage platforms in that era was enabling high availability for applications by providing redundant storage architecture and supporting clustering at the server level.

As database technologies, ERP applications, Electronic Data Interchange, Web-enabled e-commerce applications, Business Intelligence, multimedia, and now social media have been integrated into all aspects of the enterprise, both the performance and capacity limits of storage architectures have been challenged.  [Read more…]
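That challenge is exactly what storage tiering addresses: match each data set to the least expensive tier that still meets its performance needs. Here is a minimal sketch of the idea – the tier names, thresholds, and Dataset type are all hypothetical illustrations, not any vendor’s API:

```python
# A toy tiering policy: place each dataset on the fastest tier
# its access rate justifies. Thresholds are illustrative only.
from dataclasses import dataclass

TIERS = [
    ("flash", 1000),    # accesses/day needed to justify flash
    ("fast_disk", 50),  # accesses/day needed to justify fast spinning disk
    ("capacity", 0),    # everything else lands on capacity disk
]

@dataclass
class Dataset:
    name: str
    accesses_per_day: int

def place(dataset: Dataset) -> str:
    """Return the first (fastest) tier whose threshold the dataset meets."""
    for tier, threshold in TIERS:
        if dataset.accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]  # unreachable while a zero-threshold tier exists

for ds in (Dataset("orders_db", 50_000), Dataset("archive_2009", 2)):
    print(ds.name, "->", place(ds))
# orders_db -> flash
# archive_2009 -> capacity
```

Real tiering engines weigh far more than access counts, but the principle is the same: unequal data earns unequal storage.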

With storage, what’s old is new again.

Open Systems Availability: Virtualization’s First Frontier

What has been will be again, what has been done will be done again; there is nothing new under the sun… – Ecclesiastes 1:9

Server virtualization has brought great advances in flexibility, efficiency, and responsiveness to today’s always-on, web-enabled data centers.  Although that is what most people think of today when they hear the word “virtualization,” it is simply a current application of an old principle.

When open systems technologies, UNIX and Windows in particular, came on the scene in the late 1980s and early 1990s, they represented a much more affordable way for businesses to use technology.  How we work today is a direct result of the productivity advances made possible by these technologies.  However, they had one great downfall.

Compared to the much more expensive mainframe and mini-computing technologies of their day, early open systems were less reliable and full of single points of failure.  Disk drive reliability was addressed by [Read more…]

How not to get stung by storage virtualization: Part 2

In Part 1 of this post, I blogged about how more and more companies are catching on to the value and importance of virtualizing their data storage. Storage virtualization offers the flexibility to simplify and modernize IT systems and control costs with as little disruption to a business’s data availability as possible. But while it can be a boon to a company’s ability to use available resources efficiently and cost-effectively, there are some risks.

I’ve already touched on two risks – failed implementation and challenges with interoperability. Here are three more:

Risk 3: The challenge of complexity

An important objective of virtualization is reducing and hiding the complexity associated with managing discrete devices. But while a virtual storage infrastructure benefits from a single point of logical disk and replication service management, there can still be complications. According to InformIT.com:

“…although shared storage represents a major technological advance over direct-attached storage, it has introduced its own complexity in terms of implementation and support.”
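To make the abstraction concrete, here is a minimal sketch of the kind of logical-to-physical mapping a virtualization layer maintains behind that single management point. Every name here is hypothetical; this illustrates the concept, not any product’s implementation:

```python
# A toy storage virtualization layer: hosts address one logical volume,
# and the layer translates logical blocks to (array, physical block).

class VirtualVolume:
    def __init__(self, extents):
        # extents: ordered list of (array_id, start_block, length),
        # concatenated to form one contiguous logical address space
        self.extents = extents

    def resolve(self, logical_block: int):
        """Translate a logical block address to (array_id, physical block)."""
        offset = logical_block
        for array_id, start, length in self.extents:
            if offset < length:
                return array_id, start + offset
            offset -= length
        raise ValueError("logical block beyond volume size")

vol = VirtualVolume([("array_A", 0, 1_000_000), ("array_B", 500, 2_000_000)])
print(vol.resolve(1_500_000))  # ('array_B', 500500) – the host never sees this
```

The host-facing simplicity is real, but as the quote notes, the mapping itself is new machinery that must be implemented, monitored, and supported.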

Plus: [Read more…]