Your DIY infrastructure is costing you…here’s why Hyper-Converged Infrastructure (HCI) promises a better way

Modern business has accelerated. Most organizations realize they need to change the way they consume IT resources. They recognize the challenge: adjust consumption to match the needs of a given project at a given time, or risk falling behind.

The “build-it-yourself” infrastructure model is still what feels comfortable to most organizations. It uses familiar concepts and offers plenty of references to emulate.

Today, an unprecedented gap has opened. Traditional IT can no longer deliver a customer experience that meets business demands. To close the gap, users step outside the IT department to find alternatives that can.


But building infrastructure in-house comes at a cost.

Embracing non-sanctioned “Shadow IT” solutions exposes the organization to security gaps. Then there is the attendant decline in performance. Softchoice surveyed over 900 CEOs on the issue. The result? More than 90% felt hybrid IT was the best approach to closing the gap and eliminating shadow IT.

Nonetheless, the term “hybrid IT” is open to interpretation. From our perspective, it means:

  • Adopting public cloud to increase flexibility, lower costs and reduce time to market.
  • Modernizing the data center to improve performance and make operational gains.
  • Ensuring the network is secure and ready to handle mobile, remote office and cloud services.

When it comes to transforming the data center, the goal is often to close the gap between supply and demand. This means evaluating other consumption models, like pay-as-you-go and converged infrastructure.

Hyper-converged architecture refers to infrastructure with 100% virtualized components. This option allows organizations to simplify operations while delivering applications to market faster.

IDC predicts this approach will define 60% of server, storage and network deployments by 2020. It’s no surprise: hyper-converged infrastructure (HCI) meets the needs of most modern businesses:

  • It offers a tight integration between server, storage, and networking. Then, it adds the advantages of a software-defined data center. It’s the best of both worlds.
  • Organizations deal with a single hardware vendor, meaning a smooth, cohesive support experience.
  • It offers flexibility in deploying alongside existing architecture and in expanding storage and compute resources.
  • It offers a “single pane of glass” for deployment and management.

Companies looking to catch up would do well to get on board. HCI is the fastest-growing category in the data center market. Revenue should reach $10.8 billion by 2021, a 48% compound annual growth rate from 2016.

Despite its many positives, early HCI often proved “too good to be true.” The typical offering came with flaws that knocked it out of consideration.

Many converged and HCI offerings failed to deliver consistent quality of service (QoS) across applications. As business needs changed, overloaded system resources created the infamous “HCI tax.” Many also mandated fixed ratios for resource expansion, making adaptation slow and difficult. Streamlining stack management was often impossible, owing to product immaturity and poor integration with existing hypervisor platforms.

But we’ve reached the next step in the evolution of hyper-converged infrastructure. Today, many of the elements lacking in “HCI 1.0” have arrived in version 2.0 from NetApp HCI.

Gartner recognizes NetApp as the leader in network-attached storage and calls its NetApp HCI solution an evolutionary disruptor.

NetApp HCI is the first enterprise-scale hyper-converged infrastructure solution. It evolved from the SolidFire all-flash storage product. As such, it delivers integrated storage-efficiency tools, including always-on deduplication. Also on the menu: integrated replication, data protection, and high availability. VMware provides a mature and intuitive control interface for the entire infrastructure.

NetApp HCI sets itself apart from competitors in 4 ways:

  1. Guaranteed Quality of Service (QoS): NetApp HCI allows granular QoS control regardless of the number of applications. This eliminates “noisy neighbors” and satisfies every performance SLA.
  2. Flexibility and Scalability: Unlike its competitors, NetApp HCI allows independent scaling of computing and storage resources. This cuts the 30% “HCI tax” from controller VM overhead. Get simpler capacity and resource planning with no more over-provisioning.
  3. Automated Infrastructure: NetApp HCI’s deployment engine automates routine manual processes of deploying infrastructure. Meanwhile, VMware vCenter plug-ins make management much simpler. The full NetApp architecture goes from zero to fully-operational in under 30 minutes.
  4. Integration with NetApp Data Fabric: Take full advantage of NetApp’s storage offerings. Access and move your data between public and private cloud with zero downtime. Don’t compromise on high performance and security.
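The QoS idea in point 1 can be pictured as a simple allocator: every volume is guaranteed its configured minimum IOPS and capped at its configured maximum, so one greedy workload cannot starve the rest. The sketch below is an illustrative toy model only, not NetApp’s implementation; all volume names and numbers are hypothetical.

```python
def allocate_iops(volumes, cluster_capacity):
    """Toy min/max QoS allocator: each volume first receives its guaranteed
    minimum, then leftover capacity is shared, never exceeding any maximum."""
    # Phase 1: reserve each volume's guaranteed minimum (or less, if it
    # demands less than its minimum).
    alloc = {name: min(spec["min"], spec["demand"]) for name, spec in volumes.items()}
    remaining = cluster_capacity - sum(alloc.values())
    # Phase 2: hand out leftover capacity to volumes that still want more,
    # capped at each volume's configured maximum.
    for name, spec in sorted(volumes.items()):
        want = min(spec["demand"], spec["max"]) - alloc[name]
        extra = max(min(want, remaining), 0)
        alloc[name] += extra
        remaining -= extra
    return alloc

# Hypothetical volumes: a latency-sensitive database and a "noisy neighbor"
# dev workload demanding far more than its cap allows.
volumes = {
    "oltp_db":   {"min": 5000, "max": 15000, "demand": 8000},
    "noisy_dev": {"min": 500,  "max": 2000,  "demand": 50000},
}
print(allocate_iops(volumes, cluster_capacity=10000))
```

Even though the dev volume demands 50,000 IOPS, it is capped at its 2,000 maximum, and the database still receives its full 8,000: the noisy neighbor is contained by policy, not by luck.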

If you’re intrigued, we encourage you to tap into Softchoice’s experience with NetApp. Learn more by downloading this brief, which dives deeper into the key benefits of NetApp HCI.

Hyper-Convergence and the HPE HC 250 — An Interview with Nigel Barnes (Part 2/2)


HPE HC 250 enables your disaster recovery and business continuity strategy

In part one of our recent interview with Nigel Barnes, HPE Technical Architect at Softchoice, we covered several use cases for hyper-convergence and the features that differentiate the HPE HC 250 from other options in the market. As we close out our interview, Nigel discusses how the HC 250 fits into a business’ disaster recovery and business continuity strategy.

Hyper-Convergence and the HPE HC 250 — An Interview with Nigel Barnes (Part 1/2)

IT organizations of all sizes are looking to save time and money while delivering the right computing resources quickly and securely.

Hyper-Convergence makes this possible.

We sat down with Softchoice’s HPE Technical Architect, Nigel Barnes, to discuss the benefits of hyper-convergence and the HPE Hyper-Converged (HC) 250.

Q: Nigel, before we get into how your customers are leveraging hyper-convergence and what HPE has to offer, can you start out by telling us a little bit about what hyper-convergence is?

A: Sure. It’s a pretty simple concept, but to fully understand the benefits of hyper-convergence, we need to take a quick look at a traditional data center issue. Historically, data center managers acquired resources such as servers, networking, and storage from different vendors. This led to challenges such as compatibility issues between components. Additionally, IT departments lacked the flexibility and agility to respond to new opportunities or address capacity issues quickly because it wasn’t fast or easy to allocate resources.

Hyper-convergence addresses these issues by combining compute, storage, network, and other resources into one package from one vendor. So, this gives the IT manager some pretty significant benefits such as a smaller footprint, less hardware to maintain, one vendor support, and most importantly, making it easier to provision resources. Of course, the HPE HC 250 that we’re talking about today offers additional benefits above and beyond that baseline.

Q: I know we’ll talk about more detailed examples, but give us a high-level look at what types of businesses might need that level of agility.

A: It’s not so much the size of the organization as it is the type of organization. Let’s say, for example, you’re running a manufacturing operation in a pretty static environment. Your capacity demands are probably pretty predictable. You may not even need a cloud environment.

On the other hand, take an omnichannel retailer with a highly seasonal business. Capacity requirements are less predictable. They may have some idea of when sales are going to start ticking up, but they don’t know how much network traffic they will need to accommodate or what their data storage requirements will be. Plus, in the off-season, their resource needs drop significantly, and they end up paying for capacity they aren’t using.

Software developers are another common example. As they are developing and testing new applications, they may need to provision additional resources. But, once the application is rolled out, those resources are no longer needed.

Q: Great. Let’s get into more of the specifics of what HPE has to offer. You want to talk to us specifically about the HPE HC 250, right?

A: Right. The HPE HC 250 is a 2 to 4 node 2U system, and you can have up to 4 of these HC 250s managed under a single domain. There are a couple of things that make this solution stand out.

First, when it comes to hyper-converged solutions, hardware is becoming almost a secondary consideration. It’s really the software that makes the difference because that’s what allows for the ease of deployment and maintenance. The HC 250 has a shopping cart-style management interface that allows you to provision resources very quickly. It’s built on software-defined storage, HPE’s StoreVirtual technology, and it’s integrated with Hyper-V or VMware.

Q: As I understand it, the HC 250 makes provisioning cloud resources easy because it comes with Azure Pack, but it can also be used as an on-premises-only solution, right?

A: Exactly. The HPE interface makes it easy to deploy resources and provides a consistent management experience whether they are looking at on-prem, within Azure itself or even in a hosted environment from a 3rd party.

And, to clarify a bit on the Azure side, I think Microsoft would prefer a Hyper-V environment integrating with Azure, but that’s not an absolute prerequisite. Microsoft also hosts VMware VMs in Azure. That’s good news for the customer because it gives them a choice.

Q: Besides the user interface, is there anything else that makes the HC 250 stand out?

A: The UI is the biggest differentiator, and it applies regardless of which solution the HPE HC 250 is being compared to; however, there are other differences that sometimes come into play. For example, the HC 250 allows customers to start with a 2-node cluster, whereas some solutions require a 3-node starting point.

Q: That makes it a little more appropriate for a medium-sized business.

A: Right. That’s the sweet spot for the HC 250. It can be appropriate for an enterprise, too, but HPE has other solutions for larger organizations that the customer might want to consider.

In part two of this interview, we’ll ask Nigel for his perspectives on how the HC 250 helps organizations deal with two pressing issues: disaster recovery and business continuity. If you have questions about hyper-convergence or the HC 250, reach out to Nigel or use the comment box below.