On the road with NetApp HyperConverged Infrastructure

Keith Aasen, NetApp

This is a guest article written by Keith Aasen, Solutions Architect, NetApp. Keith has been working exclusively with virtualization since 2004. He designed and built some of the earliest VMware and VDI environments in Western Canada, and has designed and implemented projects of various sizes, both in Canada and in the southern US, for some of the largest companies in North America.

Article:

I recently completed a six-city roadshow to talk about the announcements that made up NetApp's 25th-anniversary celebration (if you missed it, you can watch the recording here). Although the payload for the 9.2 release of our ONTAP operating system was huge, I have done announcements like this before and was ready for the questions posed by customers in each city.

This roadshow, however, was the first opportunity I had to present the new NetApp HyperConverged Infrastructure (HCI). I was less sure how this was going to go over. With this offering, we are breaking the mold of HCI version 1.0, enabling true enterprise workloads on an HCI platform. As such, I was not sure how the attendees would respond. Would they understand the purpose and benefits of such an architecture? Would they understand the limitations of the existing offerings and how the NetApp HCI offering was different?

I shouldn’t have been worried.

As far as understanding the purpose of such an architecture, they definitely got it. Our partner community has done an excellent job of explaining how this sort of converged infrastructure is an enabler for data center transformation. What is it about converged infrastructure, hyper-converged in particular, that enables this transformation? In a word, Simplicity. HCI simplifies the deployment of resources, simplifies the management of infrastructure and even simplifies the teams managing the infrastructure.

This simplicity, and the unification of traditionally disparate resources, allows customers to optimize the available resources, reducing cost and increasing the value to the business.

So every city I visited got this: Simplicity was key. What about the limitations of the existing solutions?

The missing element of HCI version 1.0 solutions was Flexibility. These solutions achieved simple deployment but were wildly inflexible in how they were deployed, used and scaled. Here are some examples:

1. Existing Compute.

I asked the audience how many customers already had HCI deployed (very few) and then asked how many already had hypervisor servers deployed. Of course, everyone had that. Wouldn't it be nice to leverage the existing investment in those servers rather than having to abandon it? You see, with NetApp HCI you can purchase the initial cluster weighted toward the storage components and then use your existing VMware hosts. As those hosts age, you can grow the compute side of the HCI platform. This reduces redundant compute in the environment and lets customers move to an HCI platform on the timeline that makes sense for them.

2. Massive Scalability.

The way most existing HCI vendors protect their data tends to limit each cluster to a modest number of nodes in order to preserve performance. This results in stranded resources (perhaps one cluster has excess CPU while another is starving), which increases management overhead and cost because those stranded resources cannot be used. The NetApp HCI platform can scale massively with no performance impact, so no islands of resources form. We isolate and protect different workloads through Quality of Service (QoS) policies; see the sketch after this list.

3. Part of a larger Data Fabric.

In a hybrid cloud data center model, it is critical to be able to move your data where you need it, when you need it. Some data and applications lend themselves to the public cloud model; others do not. Perhaps you have data created on site that you want to run cloud analytics against. The NetApp HCI platform is part of the NetApp Data Fabric, which allows you to replicate data to ONTAP-based systems near or in major hyperscale clouds such as AWS and Azure. This ability ensures you can have the right data on-prem and the right data in the cloud without being trapped.
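
To make the Quality of Service point in item 2 a little more concrete, here is a minimal Python sketch of how per-volume QoS limits might be set on the Element storage layer that underpins NetApp HCI. The endpoint, credentials, volume IDs and IOPS numbers are placeholders, and the ModifyVolume call and its qos parameter names reflect my reading of the SolidFire/Element JSON-RPC API rather than a tested recipe; check the Element API reference before relying on it.

# Hypothetical sketch only: per-volume QoS so one workload cannot starve another.
# Endpoint, credentials, volume IDs and IOPS values are placeholders.
import requests

ELEMENT_API = "https://mvip.example.com/json-rpc/9.0"   # cluster management VIP (assumed)
AUTH = ("admin", "change-me")                           # placeholder credentials


def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Give a volume a guaranteed IOPS floor, a sustained ceiling and a burst limit."""
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor, even under contention
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term burst allowance
            },
        },
        "id": 1,
    }
    response = requests.post(ELEMENT_API, json=payload, auth=AUTH, verify=False)
    response.raise_for_status()
    return response.json()


# Example: protect a latency-sensitive database and cap a noisy batch workload.
set_volume_qos(volume_id=101, min_iops=5000, max_iops=15000, burst_iops=20000)
set_volume_qos(volume_id=202, min_iops=500, max_iops=2000, burst_iops=4000)

Because every workload gets its own floor and ceiling, the cluster can keep growing without one tenant's burst degrading its neighbours, which is what makes a single large cluster practical.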

I want to thank everyone who came out for the roadshow and everyone who took the time to watch the recording of the webcast. If you want to hear more about the simplicity of HCI and the flexibility of the hybrid cloud model, please reach out to your technology partner.

Why micro-segmentation security makes SDN safer

I can explain why modern data centers need micro-segmentation in two words: Trojan Horse.

Not the malware, but that timeless story of wooden-horse-riding saboteurs. In it, we see that even the most powerful perimeters fall short. The bad guys always find a way in.

With virtualized data centers and desktops, this notion is particularly troubling. What if someone breaches the firewall protecting your virtual environments? Once inside, malware and attackers move laterally (east-west) at will, causing mayhem and tons of financial damage.

Micro-segmentation is the solution – but it’s not without its share of confusion and challenges. So before you jump in, consider why it’s the right choice, and some of the common sticking points slowing down its adoption.
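
To make the concept concrete, here is a toy Python sketch of the policy model behind micro-segmentation. It is purely illustrative and is not any particular SDN product's API: workloads carry tags, a short list of explicit allow rules connects those tags on specific ports, and every other east-west flow is denied by default.

# Toy illustration only, not any vendor's API. Workloads carry tags, explicit
# allow rules connect tags on specific ports, everything else is denied.

# Explicitly allowed flows: (source_tag, destination_tag, destination_port)
ALLOW_RULES = {
    ("web", "app", 8443),   # web tier may call the app tier's API
    ("app", "db", 5432),    # app tier may reach the database
}


def evaluate(src_tag: str, dst_tag: str, dst_port: int) -> str:
    """Default-deny: a flow passes only if an explicit allow rule matches."""
    return "ALLOW" if (src_tag, dst_tag, dst_port) in ALLOW_RULES else "DENY"


# A compromised web VM trying to jump straight to the database is stopped,
# even though both VMs already sit inside the perimeter firewall.
print(evaluate("web", "app", 8443))  # ALLOW
print(evaluate("web", "db", 5432))   # DENY -- lateral (east-west) move blocked

In a real SDN platform, rules like these are typically enforced at each workload's virtual NIC, so the policy travels with the VM instead of living only at the perimeter.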


Get 16 upgrade FAQs for Veritas Backup Exec 16

If you're considering Backup Exec 16, this article has (almost) every detail you need to know, in 16 answers. This release has Veritas's many technology partners very excited, and for good reason.
