Your DIY infrastructure is costing you…here’s why Hyper-Converged Infrastructure (HCI) promises a better way


Modern business has accelerated. Most organizations realize they need to change the way they consume IT resources. They recognize the challenge: Adjust consumption to make sense at a given time for a given project, or risk falling behind.

The “build-it-yourself” infrastructure model is still relevant for most organizations. That’s what feels comfortable. It uses familiar concepts and offers loads of references to emulate.

Today, an unprecedented gap has opened. Traditional IT can no longer deliver a customer experience that meets business demands. To close the gap, users step outside the IT department to find alternatives that can.

But this build-it-yourself approach comes at a cost.

Embracing non-sanctioned "shadow IT" solutions exposes the organization to security gaps, along with an attendant decline in performance. Softchoice surveyed over 900 CEOs on the issue. The result? More than 90% felt hybrid IT was the best approach to closing the gap and eliminating shadow IT.

Nonetheless, the term "hybrid IT" is open to interpretation. From our perspective, it means:

  • Adopting public cloud to increase flexibility, lower costs and reduce time to market.
  • Modernizing the data center to improve performance and make operational gains.
  • Ensuring the network is secure and ready to handle mobile, remote office and cloud services.

When it comes to transforming the data center, the goal is often to close the gap between supply and demand. This means evaluating other consumption models, like pay-as-you-go and converged infrastructure.

Hyper-converged architecture refers to infrastructure with 100% virtualized components. This option allows organizations to simplify operations while delivering applications to market faster.

IDC predicts this approach will define 60% of server, storage and network deployments by 2020. It's no surprise: Hyper-converged infrastructure (HCI) addresses the core needs of most modern businesses:

  • It offers a tight integration between server, storage, and networking. Then, it adds the advantages of a software-defined data center. It’s the best of both worlds.
  • Organizations deal with a single hardware vendor, meaning a smooth, cohesive support experience.
  • It’s flexible around deploying existing architecture and expanding storage and compute resources.
  • It offers a “single pane of glass” for deployment and management.

Companies looking to catch up would do well to get on board. HCI is the fastest-growing category in the data center market. Revenue should reach $10.8 billion by 2021, a 48% compound annual growth rate from 2016.

Despite its many positives, HCI has often seemed "too good to be true." The typical HCI offering comes with flaws that knock it out of consideration.

Many converged and HCI offerings failed to deliver consistent quality of service (QoS) across applications. As business needs changed, overloaded system resources created the infamous "HCI tax." Many more mandated fixed ratios for resource expansion, making adaptation slow and difficult. Streamlining stack management was often impossible, due to product immaturity and shallow integration with existing hypervisor platforms.

But we’ve reached the next step in the evolution of hyper-converged infrastructure. Today, many of the elements lacking in “HCI 1.0” have arrived in version 2.0 from NetApp HCI.

Gartner recognizes NetApp as the leader in network-attached storage and calls its NetApp HCI solution an "evolutionary disruptor."

NetApp HCI is the first enterprise-scale hyper-converged infrastructure solution. It evolved from the SolidFire all-flash storage product and, as such, delivers integrated storage-efficiency tools, including always-on deduplication. Also on the menu: integrated replication, data protection, and high availability. VMware provides a mature and intuitive control interface for the entire infrastructure.

NetApp HCI sets itself apart from competitors in 4 ways:

  1. Guaranteed Quality of Service (QoS): NetApp HCI allows granular QoS control regardless of the number of applications. This eliminates “noisy neighbors” and satisfies every performance SLA.
  2. Flexibility and Scalability: Unlike its competitors, NetApp HCI allows independent scaling of computing and storage resources. This cuts the 30% “HCI tax” from controller VM overhead. Get simpler capacity and resource planning with no more over-provisioning.
  3. Automated Infrastructure: NetApp HCI’s deployment engine automates routine manual processes of deploying infrastructure. Meanwhile, VMware vCenter plug-ins make management much simpler. The full NetApp architecture goes from zero to fully-operational in under 30 minutes.
  4. Integration with NetApp Data Fabric: Take full advantage of NetApp’s storage offerings. Access and move your data between public and private cloud with zero downtime. Don’t compromise on high performance and security.
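The guaranteed-QoS model in point 1 is easier to see with a small sketch. NetApp HCI, via its SolidFire heritage, expresses per-volume QoS as minimum, maximum, and burst IOPS. The toy scheduler below is a hypothetical illustration of how such caps tame a "noisy neighbor" — the names, numbers, and allocation logic are assumptions for teaching purposes, not NetApp's actual implementation:

```python
# Toy model of per-volume QoS caps (min/max/burst IOPS), illustrating how a
# "noisy neighbor" gets throttled. Purely illustrative; not NetApp code.
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    name: str
    min_iops: int    # floor guaranteed under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling (not modeled in this sketch)

def grant(demands: dict, policies: dict, cluster_iops: int) -> dict:
    """Clamp each volume's demand to its max; if the cluster is still
    oversubscribed, shrink grants proportionally, but never below each
    volume's guaranteed minimum."""
    grants = {n: min(d, policies[n].max_iops) for n, d in demands.items()}
    if sum(grants.values()) <= cluster_iops:
        return grants
    # Oversubscribed: scale down only the portion above each volume's floor.
    floors = {n: min(policies[n].min_iops, grants[n]) for n in grants}
    spare = cluster_iops - sum(floors.values())
    excess = {n: grants[n] - floors[n] for n in grants}
    total_excess = sum(excess.values())
    scale = spare / total_excess if total_excess else 0
    return {n: floors[n] + int(excess[n] * scale) for n in grants}

policies = {
    "db":    VolumeQoS("db",    min_iops=5000, max_iops=15000, burst_iops=20000),
    "noisy": VolumeQoS("noisy", min_iops=500,  max_iops=3000,  burst_iops=4000),
}
# The noisy neighbor asks for 50,000 IOPS but is clamped to its 3,000 max,
# so the database volume still receives its full 12,000-IOPS demand.
print(grant({"db": 12000, "noisy": 50000}, policies, cluster_iops=20000))
```

The key point the sketch captures: with per-volume caps and floors, one runaway workload can never crowd out another volume's guaranteed minimum, which is what eliminates the noisy-neighbor problem.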

If you're intrigued, we encourage you to tap into Softchoice's experience with NetApp. Learn more by downloading this brief, which dives a little deeper into the key benefits of NetApp HCI.

On the road with NetApp HyperConverged Infrastructure


Keith Aasen, NetApp

This is a guest article written by Keith Aasen, Solutions Architect, NetApp. Keith has worked exclusively with virtualization since 2004, designing and building some of the earliest VMware and VDI environments in Western Canada. He has designed and implemented projects of varying sizes, both in Canada and the southern US, for some of the largest companies in North America.


I recently completed a six-city roadshow to talk about the announcements that made up NetApp's 25th-anniversary celebration (if you missed it, you can watch the recording here). Although the payload for the 9.2 release of our ONTAP operating system was huge, I have done announcements like this before and was ready for the questions posed by customers in each city.

This roadshow, however, was the first opportunity I had to present the new NetApp HyperConverged Infrastructure (HCI). I was less sure how this was going to go over. With this offering, we are breaking the mold of HCI version 1.0, enabling true enterprise workloads on an HCI platform. As such, I was not sure how the attendees would respond. Would they understand the purpose and benefits of such an architecture? Would they understand the limitations of the existing offerings and how the NetApp HCI offering was different?

I shouldn’t have been worried.

As far as understanding the purpose of such an architecture, they definitely got it. Our partner community has done an excellent job of explaining how this sort of converged infrastructure is an enabler for data center transformation. What is it about converged infrastructure, hyper-converged in particular, that enables this transformation? In a word, Simplicity. HCI simplifies the deployment of resources, simplifies the management of infrastructure and even simplifies the teams managing the infrastructure.

This simplicity, and the unification of traditionally disparate resources, allows customers to optimize the available resources, reducing cost and increasing value to the business.

So every city I visited got this: Simplicity was key. What about the limitations of the existing solutions?

The missing element of HCI version 1.0 solutions was Flexibility. These solutions achieved simple deployment but were wildly inflexible in how they were deployed, used and scaled. Here are some examples:

1. Existing Compute.

I asked the audience how many customers already had HCI deployed (very few) and then asked how many already had hypervisor servers deployed. Of course, everyone had that. Wouldn't it be nice to leverage the existing investment in those servers rather than having to abandon it? You see, with NetApp HCI you can purchase the initial cluster loaded toward the storage components and then use your existing VMware hosts. Then, as those hosts age, you can grow the HCI platform as it makes sense. This reduces redundant compute in the environment and allows customers to move to an HCI platform on a timeline that makes sense for them.

2. Massive Scalability.

The means by which most existing HCI vendors protect their data tends to limit the size of each cluster to a modest number of nodes to preserve performance. This results in stranded resources (perhaps one cluster has excess CPU while another is starving) and increases management overhead and expense, since stranded resources cannot be used. The NetApp HCI platform can scale massively with no performance impact, so no islands of resources form. We isolate and protect different workloads through the use of Quality of Service policies.

3. Part of a larger Data Fabric.

In a hybrid cloud data center model, it is critical to be able to move your data where you need it, when you need to. Some data and applications lend themselves to the public cloud model; others do not. Perhaps you have data created on site that you want to leverage the cloud to run analytics against. The NetApp HCI platform is part of the NetApp Data Fabric, which allows you to replicate data to ONTAP-based systems near or in major hyper-scale clouds such as AWS and Azure. This ensures you can have the right data on-prem and the right data in the cloud without being trapped.

I want to thank everyone who came out for the roadshow and everyone who took the time to watch the recording of the webcast. If you want to hear more about the simplicity of HCI and the flexibility of the hybrid cloud model, please reach out to your technology partner.

Why more clients will refresh W2K3 servers with Cisco UCS

Cisco UCS for Microsoft Applications

What? A refresh? But servers last longer than ever! That's true, but unless innovation slows down (ha ha), they will reach an inevitable point where they are more venerable than valuable. In the data center, this means tough and expensive decisions must be made.

In this blog post, I argue for Cisco's UCS as a stable platform for Microsoft applications, especially if you're looking at Windows Server 2012 and especially if you need to migrate from Windows Server 2003.
