On the road with NetApp HyperConverged Infrastructure


This is a guest article written by Keith Aasen, Solutions Architect, NetApp. Keith has been working exclusively with virtualization since 2004. He designed and built some of the earliest VMware and VDI environments in Western Canada, and has designed and implemented projects of varying sizes, both in Canada and in the southern US, for some of the largest companies in North America.

Article:

I recently completed a six-city roadshow to talk about the announcements that made up NetApp's 25th-anniversary celebration (if you missed it, you can watch the recording here). Although the payload for the 9.2 release of our ONTAP operating system was huge, I have done announcements like this before and was ready for the questions posed by customers in each city.

This roadshow, however, was the first opportunity I had to present the new NetApp HyperConverged Infrastructure (HCI). I was less sure how this was going to go over. With this offering, we are breaking the mold of HCI version 1.0, enabling true enterprise workloads on an HCI platform. As such, I was not sure how the attendees would respond. Would they understand the purpose and benefits of such an architecture? Would they understand the limitations of the existing offerings and how the NetApp HCI offering was different?

I shouldn’t have been worried.

As far as understanding the purpose of such an architecture, they definitely got it. Our partner community has done an excellent job of explaining how this sort of converged infrastructure is an enabler for data center transformation. What is it about converged infrastructure, hyper-converged in particular, that enables this transformation? In a word, Simplicity. HCI simplifies the deployment of resources, simplifies the management of infrastructure and even simplifies the teams managing the infrastructure.

This simplicity, and the unification of traditionally disparate resources, allows customers to optimize the available resources, reducing cost and increasing the value to the business.

So every city I visited got this: Simplicity was key. What about the limitations of the existing solutions?

The missing element of HCI version 1.0 solutions was Flexibility. These solutions achieved simple deployment but were wildly inflexible in how they were deployed, used, and scaled. Here are some examples:

1. Existing Compute.

I asked the audience how many customers already had HCI deployed (very few) and then asked how many already had hypervisor servers deployed. Of course, everyone had that. Wouldn't it be nice to leverage the existing investment in those servers rather than having to abandon it? You see, with NetApp HCI you can purchase the initial cluster weighted toward the storage components and then use your existing VMware hosts. As those hosts age, you can grow the HCI platform as it makes sense. This reduces redundant compute in the environment and allows customers to move to an HCI platform on their own timeline.

2. Massive Scalability.

The means by which most existing HCI vendors protect their data tends to limit the size of each cluster to a modest number of nodes in order to preserve performance. This results in stranded resources (perhaps one cluster has excess CPU while another is starving), which increases management overhead and expense because those stranded resources cannot be used. The NetApp HCI platform can scale massively with no performance impact, so no islands of resources form. We isolate and protect different workloads through the use of Quality of Service policies (a hedged sketch of setting such a policy follows these examples).

3. Part of a larger Data Fabric.

In a hybrid cloud data center model, it is critical to be able to move your data where you need it, when you need it. Some data and applications lend themselves to the public cloud model; others do not. Perhaps you have data created on-site that you want to analyze in the cloud. The NetApp HCI platform is part of the NetApp Data Fabric, which allows you to replicate data to ONTAP-based systems near or in major hyperscale clouds such as AWS and Azure. This ability ensures you can have the right data on-premises and the right data in the cloud without being trapped.
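To make the Quality of Service point from example 2 a bit more concrete, here is a minimal sketch of what setting a per-volume QoS policy might look like against the Element-style JSON-RPC API that backs the NetApp HCI storage layer. The endpoint version, credentials, volume ID, and IOPS values are illustrative assumptions, not details taken from the announcement.

```python
# Hypothetical sketch: setting per-volume QoS limits on the Element-style
# storage layer behind NetApp HCI. Endpoint version, credentials and the
# volume ID below are illustrative assumptions only.
import requests

MVIP = "https://storage-mvip.example.com/json-rpc/10.0"  # cluster management VIP (assumed)
AUTH = ("admin", "password")                             # demo credentials only

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Apply a QoS policy to one volume so a noisy neighbour cannot
    starve other workloads sharing the same scale-out cluster."""
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,     # guaranteed floor
                "maxIOPS": max_iops,     # sustained ceiling
                "burstIOPS": burst_iops, # short-term burst allowance
            },
        },
        "id": 1,
    }
    resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Example: pin a latency-sensitive database volume to a guaranteed IOPS floor.
if __name__ == "__main__":
    print(set_volume_qos(volume_id=42, min_iops=2000, max_iops=10000, burst_iops=15000))
```

The guaranteed minimum is what lets one large shared cluster host very different workloads side by side without one tenant starving another.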

I want to thank everyone who came out for the roadshow and everyone who took the time to watch the recording of the webcast. If you want to hear more about the simplicity of HCI and the flexibility of the hybrid cloud model, please reach out to your technology partner.

Flash = the IT IQ test for 2016


This is a guest blog post from Chad Sakac, Dell EMC President of the Converged Platforms and Solutions Division. Chad leads the Dell EMC business responsible for the broad portfolio of next-generation Converged Platforms & Solutions that allows customers to build their own outcomes by individually sourcing compute, networking, storage, and software, or to buy their outcomes through integrated solutions. Chad authors one of the top 20 virtualization, cloud, and infrastructure blogs, "Virtual Geek," and is proud to be an Executive Sponsor of EMC's LGBTA employee circle.

A BIG list of challenges

The entire IT industry is in a state of disruption that is unlike anything I’ve seen in my IT career. I love it!

When I recently met with the CIO and IT leadership team of a great customer, he commented that the disruption is not only unique in scope, but unique in that it is touching everything all at once.

Think about it.

  • Massive workload movements to SaaS.
  • The new and emerging role of container and cluster managers in the enterprise.
  • Commercial and cultural disruption of open-source models.
  • Continuing shift towards software-defined datacenter stacks as the “new x86 mainframe.”
  • The bi-modal operational reality of ITSM/DevOps coexistence.
  • The new role of IT as the service manager of public/private multiple clouds – and determining the best workload fit.
  • The changing mobile device and client experience.

That’s insane. It’s a HUGE list – and it’s far from exhaustive.

It’s also an INTIMIDATING list.

Frankly, I'm finding a lot of customers are "stuck." They are paralyzed by the sheer number of things they are trying to make sense of, too many to spot patterns or set priorities.

At another recent customer – like most customers – the CIO’s whiteboard had a list of “priorities.” There were 21 of them, which of course means no priority at all.

All those trends, buzzwords, and disruptors are real and are germane. But, wouldn’t it be nice to have one simple no-brainer thing? Something simple to do? Something that while simple, would have a big impact on IT?

Flash is that simple, no-brainer thing. It’s why flash is also the “IQ test” for 2016.

Flash (and here specifically I’m talking about NAND-based flash non-volatile memory) has crossed over in every metric. It’s time to consign hybrid designs and magnetic media to the dustbin of history. Flash is hundreds to thousands of times faster in terms of IOPS (throughput) and tens to hundreds of times faster in GBps (bandwidth). Flash, when measured with a common yardstick like IOPS, is hundreds of times denser and tens of times better on power consumption.

IQ Test #1 = Do you want to be 10-100x faster? For the same price?

In the early days of flash, there was a lot of concern about wear. But this is now also a thing of the past. NAND vendors have figured out a continuum of Writes Per Day (WPD) media grades. Storage vendors have optimized wear-leveling approaches, and any storage vendor worth their salt has a "lifetime guarantee" program of one sort or another to take the concern off the table. Are there differences in drive/media types? Yeah. Are there differences in array approaches to media management? Sure. Are they materially relevant? Nope.

IQ Test #2 = If there is a lifetime guarantee, why would you worry about wear?

Flash is now more financially viable, delivering better economics and better TCO. And that's BEFORE all the advances in data reduction, whether it's 2:1 inline compression, n:1 data deduplication (highly variable depending on data type), or mega-capacity drives (15TB and climbing).

IQ Test #3 = do you want all that goodness? At a BETTER price?
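To put rough numbers on that claim, here is a back-of-the-envelope calculation. The 2:1 compression figure and 15TB drive size come from the paragraph above; the deduplication ratio and the per-TB price are purely illustrative assumptions.

```python
# Back-of-the-envelope effective-capacity math under assumed reduction ratios.
raw_tb            = 15.0   # one mega-capacity flash drive (figure from the text)
compression_ratio = 2.0    # 2:1 inline compression (figure from the text)
dedupe_ratio      = 1.5    # assumed; highly data-dependent
price_per_raw_tb  = 500.0  # assumed $/TB, for illustration only

effective_tb = raw_tb * compression_ratio * dedupe_ratio
price_per_effective_tb = (raw_tb * price_per_raw_tb) / effective_tb

print(f"Effective capacity: {effective_tb:.1f} TB")          # 45.0 TB
print(f"$ per effective TB: {price_per_effective_tb:.2f}")   # ~166.67
```

Even with a conservative dedupe assumption, the cost per usable terabyte lands well below the raw price, which is what closes the gap with hybrid and magnetic designs.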

Furthermore, there are simpler ways to migrate in a non-disruptive fashion: VMAX All Flash supports non-disruptive array-to-array migrations, and VPLEX-fronted systems can pop a new all-flash target behind them and magically migrate. Heck, VMware Storage vMotion is an incredible "any-to-any" migration tool that is totally non-disruptive.

IQ Test #4 = if you could get all the goodness of Flash, a great TCO, way better performance/cost/cooling/power benefits, and migration was simple – why wouldn’t you?
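On the migration point, the Storage vMotion path in particular can be driven programmatically. Below is a hedged sketch using the open-source pyVmomi SDK to relocate a VM's disks to an all-flash datastore with no downtime; the vCenter host, credentials, VM name, and datastore name are placeholders, not anything specific to the products above.

```python
# Hedged sketch: kick off a Storage vMotion with pyVmomi so a VM's disks
# move to an all-flash datastore while the VM keeps running.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next((obj for obj in view.view if obj.name == name), None)

vm = find_by_name(vim.VirtualMachine, "app-server-01")      # placeholder VM name
target_ds = find_by_name(vim.Datastore, "all-flash-ds01")   # placeholder datastore name

# A RelocateSpec with only a datastore set is a storage-only migration (Storage vMotion).
spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec)
print("Storage vMotion started:", task.info.key)

Disconnect(si)
```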

Yes, there are still some cases where magnetic media is preferable, in extremely capacity-dense use cases (object storage, capacity-oriented scale-out NAS). And the drive manufacturers are building all sorts of funky helium-filled drives to keep supporting that environment. But don't let that distract you. In the enterprise, the vast majority of workloads are mostly transactional, and for that universe, 2016 is the Year of All Flash (#YOAF!).

Yes, there are all sorts of interesting new non-volatile media types and interface forms emerging, from 3D XPoint to phase-change memory and carbon nanotube-based memory. Form factors range from SAS (still the most common) to NVMe (expect this to become the standard over the coming years; it will take a little time for things like dual-ported interfaces to go mainstream, meaning the first use cases will be in SDS/HCI) and DIMM-based approaches (lots of interesting stuff on this path that will have commercial mass-market application in 2018). But don't let that distract you. In our industry, things are always changing and moving. The key is to get on the train and not wait.

IQ Test #5 = why wait for some future benefits (which will only be additive) when there is an immediate benefit to moving from hybrid to all-flash approaches, and every day you don't move, you're wasting money?

Now, there are two ways to go all-flash: “Build” approaches and “Buy” approaches.

Some customers like the flexibility of a "Build" approach, where you pick the components of a stack and put it all together. I must say, with each passing day it becomes clearer that this is a waste of precious time and brain cells. But hey, if you want to muck around with building your own stack, you can start with something incredibly small and powerful, like a Dell EMC Unity All-Flash array – starting below $10K – and then add Dell EMC PowerEdge servers. Conversely, if you want to "build" using a hyper-converged approach, you can start with Dell PowerEdge-based VMware vSAN Ready Nodes and load them up with NAND-based flash.

More and more customers every day look at all infrastructure as a commodity and just want it turnkey. This is less flexible than the "Build" approach, but the "Buy" approach is far more outcome-oriented. Customers willing to let go of the wheel and move on to the infrastructure equivalent of autonomous driving (someone else does it for you) can transform their IT organization by freeing up the dollars, hours, and synapses wasted on testing, building, and then doing all the lifecycle tasks (patch, maintain, test, validate, troubleshoot) inherent in infrastructure stacks. The "Buy" version comes in Converged and Hyper-Converged Infrastructure forms: Dell EMC VxBlock 350 (Converged) and VxRail (Hyper-Converged) are both designed for all-flash.

In the end, while all IT practitioners must think about ALL the disruptive things changing and how to prioritize, don’t miss the opportunity for a no-brainer, quick-hit win. Go all-flash.

Great Dell Technologies partners like Softchoice see the same thing we see. Moving to an all-flash datacenter is one of the simplest ways for their customers to move forward. They have developed a "tech check," a quick and easy assessment built on the best practices they have gathered from doing this for many customers. What I love about pointing customers to our partner ecosystem is that many of them cover a diverse portfolio within the tech ecosystem. The best partners specialize, of course, but even then they can act as trusted consultative partners to the customer. The points I've made about the drivers for 2016 being the Year of All Flash are certainly true for Dell Technologies (Dell EMC, and VMware most of all) – but they aren't limited to us. Softchoice's approach is vendor agnostic.

The Softchoice Datacenter Techcheck assesses a customer's specific workloads and environment and quantifies the savings, efficiency gains, and ultimately the TCO of moving forward into the all-flash era.

Flash. It’s the IQ Test of 2016.

Keeping up with the cloud: Embracing a more dynamic data center

Increased adoption of virtualization over the past decade and a recent boost in cloud uptake are causing data centers to evolve at an unprecedented pace.

“The drive to the cloud or hybrid IT changes the dynamic of the data center,” says Chris Martin, Senior Systems Engineer at Softchoice. “Where application data resides and where applications are hosted are very different today than five years ago.”

“And that’s changing traffic in the data center and now you need to ask, ‘How do I support that as an IT organization?’”

This blog is the second in a three-part Cisco 'Ask an Expert' series looking at how hybrid IT is influencing the network, the data center, and unified communications. We asked Martin to outline the issues and trends reshaping the face of data centers and how organizations can better support them today.

What are the trends impacting data centers today?

There's a lot of consolidation. Virtualization has driven much of it, from a dedicated server for each application down to just a single server running them all. And now even that is being stretched between the cloud, multiple data centers, or even just various storage nodes. The architecture has changed so much over the past few years.

Technology trends like big data, analytics, and the Internet of Things (IoT) each have their own impact on the data center as well. As we connect more and more devices, we create more and more data, and that leads to increased storage requirements. For companies with IoT or big data initiatives, how the data center is designed and deployed is changing.

How can IT better keep up with this changing face of the data center and support it?

One big change is around the expectation of availability created by the cloud, and how fast you can turn IT resources up and down. To be cost-effective, organizations need to turn their data centers into a private cloud, where they can treat applications similarly whether on- or off-premises. The way that happens is through automation, whether it's software-defined networking (SDN) or the orchestration of VMware applications.

Basically, it comes down to needing to do more with less. You need to automate, streamline, and bring in orchestration technologies to help manage that. You also need architectures, vendors, and equipment that are very open in order to support that process.

It’s a long migration roadmap, but even as you take small steps towards it, you want to keep the end goal in sight. This is where we help a lot of customers by identifying where they are today and how they can move their networks into the required architecture, such as SDN. We then assist with implementation, and have the ability, through Keystone Managed Services, to support that architecture.

How can organizations better ready their data centers to take advantage of things like IoT and big data?

Don’t overlook the importance of the network. It’s always a critical part of the data center, even if traffic stays within it. It’s not just performance that is crucial, but also security.

Make sure the network is designed to support the different architectures involved and aligns with the technology initiatives being launched.

It's like a town. Think of the network as the roads. Every town is different and is going to have its own roadmap and requirements based on size and throughput. Like a town, you're going to have standardization and things that are true of all data centers, but where and how they happen are different for each.

What key things need to be asked when evaluating data center solutions?

One of the most important questions you must ask yourself is: how do I manage this? What strain is this new piece of equipment going to put on my administrators?

Performance is always the number one thing to look at. The number two consideration is how easily it can be managed. Number three is understanding what support options exist. Number four is keeping in mind the business initiatives, what is needed to complete them, and whether the solution moves you towards them.

Having a high degree of support from vendors and partners is important. With any solution, you need to consider how hard it is to replace an administrator when you lose one.

Remember, the network is key. You could have the best data center technology in the world, but if you don't have the right technology connecting it, everyone is going to suffer. The network is the lifeblood of the data center.

Ask an Expert Webinar

Want to learn more about the role of the network in the changing face of the data center and Cisco infrastructure? Join Chris Martin for 30 minutes on January 24th and register for our Ask an Expert series focused on the data center.