Legacy vs. Cloud Environments: How to Determine Workload Placement

Imagine buying a home. Soon after you purchase it, you discover a hidden structural flaw: termite damage, black mold or crumbling clay-tile plumbing. The fix is very expensive. In hindsight, it might have been better not to buy the home at all and to find a place with fewer issues instead.

Similar situations can occur with cloud workload placement. In 2019, IDC noted a continuing trend of reverse cloud migrations: organizations moving workloads that had previously been placed in the cloud back into on-premises environments, often at great expense.

IDC also estimated that by 2020, 75% of enterprises would use a private cloud. The reasons include security and compliance concerns, anticipated cost savings that failed to materialize, and growing interest in hyper-converged infrastructure. Performing due diligence before a workload goes into the cloud is the best way to avoid the technical complications and potentially high costs of making such adjustments after the fact.

In this article, we’ll look at the steps to consider when deciding where to place a workload, so that you can make an informed choice that delivers the right combination of cloud performance and cost-effectiveness. Let’s go through them one by one, starting with what you need to discover during an initial hybrid IT assessment.

What are the performance characteristics of the workloads in question?

By measuring each workload’s usage of CPU, memory, and networks, it’s possible to get a general sense of whether it would be a good candidate for the cloud. 
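As a concrete illustration, here is a minimal Python sketch of this kind of sampling, using the third-party psutil library. The sample count, interval and the 30% mean-to-peak threshold are illustrative assumptions, not a formal methodology; a real assessment would run for weeks with dedicated monitoring tools.

```python
import time
import psutil  # third-party: pip install psutil

SAMPLES = 12          # illustrative; real assessments sample for weeks
INTERVAL_SECONDS = 5  # gap between samples

cpu, mem, net = [], [], []
counters = psutil.net_io_counters()
last_bytes = counters.bytes_sent + counters.bytes_recv
for _ in range(SAMPLES):
    cpu.append(psutil.cpu_percent(interval=1))    # % CPU over a 1-second window
    mem.append(psutil.virtual_memory().percent)   # % RAM in use
    counters = psutil.net_io_counters()
    total = counters.bytes_sent + counters.bytes_recv
    net.append(total - last_bytes)                # bytes moved since last sample
    last_bytes = total
    time.sleep(INTERVAL_SECONDS)

# A rough variability signal: compare average draw to peak draw.
# A low mean with a high peak (bursty) points toward pay-per-use cloud;
# a flat profile near its peak suggests steady demand better served on-prem.
mean_cpu, peak_cpu = sum(cpu) / len(cpu), max(cpu)
print(f"CPU: mean {mean_cpu:.1f}% / peak {peak_cpu:.1f}%")
print(f"Memory: mean {sum(mem)/len(mem):.1f}% / peak {max(mem):.1f}%")
print(f"Network: peak {max(net)/1e6:.2f} MB per interval")
if peak_cpu > 0 and mean_cpu / peak_cpu < 0.3:
    print("Highly variable profile: a likely public cloud candidate.")
else:
    print("Relatively steady profile: price out on-prem options first.")
```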

For example, a highly variable workload such as an intermittently busy e-commerce site or student enrollment system would be ideal for public cloud placement. Because it doesn’t require a steady stream of IT resources, there’s no real reason to purchase and provision them in-house; you would be investing in copious amounts of memory and numerous CPUs that are only occasionally needed at full capacity. Apps with unpredictable demand are also good cloud candidates for the same reasons.

In contrast, an application that sees constant demand will usually be better suited to a legacy/on-prem deployment. Moving it to the cloud could result in major sticker shock, as its steady level of activity accrues significant charges for all of the necessary public cloud services, including snapshots and bandwidth.
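A rough back-of-the-envelope comparison shows why. All figures below are invented for illustration; they are not actual provider pricing or real hardware costs.

```python
# Illustrative only: the hourly rate and on-prem cost are assumptions,
# not real provider pricing.
HOURS_PER_MONTH = 730

CLOUD_RATE = 0.40        # assumed $/hour for a suitably sized instance
ONPREM_MONTHLY = 180.00  # assumed amortized hardware + power + admin

def cloud_cost(utilization: float) -> float:
    """Monthly cloud bill if the instance runs only while it's needed."""
    return CLOUD_RATE * HOURS_PER_MONTH * utilization

# Steady workload (busy ~95% of the time): the meter almost never stops.
print(f"Steady (95% busy): ${cloud_cost(0.95):.2f}/month vs ${ONPREM_MONTHLY:.2f} on-prem")

# Bursty workload (busy ~10% of the time): pay-per-use wins easily.
print(f"Bursty (10% busy): ${cloud_cost(0.10):.2f}/month vs ${ONPREM_MONTHLY:.2f} on-prem")
```

In this toy scenario, the steady workload costs more in the cloud even before snapshot and bandwidth charges are added, while the bursty one costs a fraction of the on-prem figure.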

Does the workload need any modifications before it’s placed in the cloud?

Most of the time, the answer to this question is “yes.” We estimate that at least 80% of workloads require some adjustments before they’re cloud-ready. Sending a legacy workload as-is into a public cloud is typically a recipe for subpar performance and potentially a reverse migration. 

Porting an on-prem workload into a cloud-optimized version is a multi-step process, often requiring: 

  • Applying relevant patches and security upgrades. 
  • Assessing its security and compliance obligations, e.g. HIPAA or GDPR. 
  • Identifying the workload’s dependencies on specific pieces of infrastructure (see the sketch after this list). 
  • Updating or rewriting it for a different OS or framework. 
  • Placing it into a container for increased portability. 
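To illustrate the dependency-identification step flagged above, here is a minimal Python sketch that scans a source and configuration tree for hard-coded IP addresses, fixed file paths and internal hostnames, all common signs that a workload is tied to specific infrastructure. The file extensions and patterns (including the corp.example.com domain) are placeholders to adapt; this is a starting point, not an exhaustive audit.

```python
import re
from pathlib import Path

# Patterns that often signal a hard dependency on specific infrastructure.
PATTERNS = {
    "hard-coded IPv4 address": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "fixed file path": re.compile(r"(\\\\[\w.$-]+\\|/mnt/|[A-Z]:\\)"),
    "internal hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),  # adapt to your domain
}
EXTENSIONS = {".py", ".conf", ".ini", ".yaml", ".yml", ".xml", ".properties"}

def scan(root: str) -> None:
    """Walk a source/config tree and flag lines that look environment-specific."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}: {line.strip()[:80]}")

scan(".")  # run from the root of the workload's code and configuration
```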

It’s also possible that after performing deeper analysis using versioning tools, you’ll discover that a workload isn’t suitable for a public cloud environment. This could be because of its performance characteristics, or because the cloud platform can’t support its requirements. For example, porting an application to the cloud may not guarantee that it passes a regulatory audit requiring you to know the exact locations of application data and to prevent any unauthorized access.

 If a workload isn’t cloud-ready or even cloud-suitable, it should be left in place and perhaps revisited later on to see if the situation has changed or if it could be replaced by SaaS.

Would a hybrid cloud that extends the current environment make sense for the workload?

Using multiple clouds is fast becoming the norm. In 2019, RightScale found that most enterprises had a multicloud strategy in place and that the share of businesses combining public and private cloud deployments rose from 51% in 2018 to 58% in 2019.

A hybrid cloud can sometimes provide the right balance of control and performance for workloads being migrated from a legacy environment. Solutions such as VMware Cloud on AWS allow current data center processes and tools to be copied over into a cloud environment.

Moving to this type of hybrid platform requires no sweeping changes. At the same time, it opens the door to additional services for security and disaster recovery. A hybrid cloud ultimately allows for more streamlined data center operations in support of key workflows such as dev/test.

What is the business impact of migrating this workload to the cloud? 

Even without getting into all of the technical details discussed above, a workload’s suitability for the cloud can be evaluated based on how its migration would affect the organization as a whole. For instance, how would it change the day-to-day experience of its end users?

Keeping a workload in a single main data center might actually degrade its performance if that location is physically quite far from some branch sites. Single-site deployments are also at higher risk from disasters, since all the eggs are in the same technical basket.

Consider what it would be like to have the bulk of your employees or customers in Seattle while your data center was in New York City. That distance would materially affect workload performance, and any failure would immediately put you in a bind. The rise of the Internet of Things, and the corresponding need to backhaul more data with less latency, shows how difficult it will be to maintain certain data center setups going forward.
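The physics alone makes the point. Here is a quick back-of-the-envelope calculation; the distance and fiber propagation factor are rounded approximations.

```python
# Latency floor for a Seattle <-> New York City round trip over fiber.
DISTANCE_KM = 3_900        # rough great-circle distance, Seattle to NYC
LIGHT_KM_PER_MS = 300.0    # speed of light in a vacuum, km per millisecond
FIBER_FACTOR = 0.67        # light travels roughly 2/3 as fast in optical fiber

one_way_ms = DISTANCE_KM / (LIGHT_KM_PER_MS * FIBER_FACTOR)
print(f"Theoretical round trip: {2 * one_way_ms:.0f} ms")
# ~39 ms before any routing, queuing, or server time is added. A chatty
# application that makes dozens of sequential round trips feels this on
# every transaction.
```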

Shifting more of these sorts of workloads into the cloud might provide some much-needed redundancy and greater geographic distribution. However, other physical limitations can come into play: a workload that runs on system memory and flash storage in a data center might theoretically be portable to a public cloud, but it wouldn’t perform exactly the same way once there.

How to make the best decision about workload placement 

We’ve covered some of the biggest considerations for placing workloads, but there are others that will inevitably enter the picture. Looking to learn more about the challenges of cloud migration? Check out this Forrester report, 10 Facts Tech Leaders Should Know About Cloud Migration.

Softchoice can help you navigate them en route to making the best possible choices for your environment. Learn more by contacting our team or explore Softchoice cloud services.

 

Hyper-Convergence and the HPE HC 250 — An Interview with Nigel Barnes (Part 2/2)

HPE Hyper Converged 250 System

HPE HC 250 enables your disaster recovery and business continuity strategy

In part one of our recent interview with Nigel Barnes, HPE Technical Architect at Softchoice, we covered several use cases for hyper-convergence and the features that differentiate the HPE HC 250 from other options in the market. As we close out our interview, Nigel discusses how the HC 250 fits into a business’ disaster recovery and business continuity strategy.

Hyper-Convergence and the HPE HC 250 — An Interview with Nigel Barnes (Part 1/2)


IT organizations of all sizes are looking to save time and money while delivering the right computing resources quickly and securely.

Hyper-Convergence makes this possible.

We sat down with Softchoice’s HPE Technical Architect, Nigel Barnes, to discuss the benefits of hyper-convergence and the HPE Hyper-Converged (HC) 250.

Q: Nigel, before we get into how your customers are leveraging hyper-convergence and what HPE has to offer, can you start out by telling us a little bit about what hyper-convergence is?

A: Sure. It’s a pretty simple concept, but to fully understand the benefits of hyper-convergence, we need to take a quick look at a traditional data center issue. Historically, data center managers acquired resources such as servers, networking, and storage from different vendors. This led to challenges such as compatibility issues between components. Additionally, IT departments lacked the flexibility and agility to respond to new opportunities or address capacity issues quickly because it wasn’t fast or easy to allocate resources.

Hyper-convergence addresses these issues by combining compute, storage, network, and other resources into one package from one vendor. So, this gives the IT manager some pretty significant benefits: a smaller footprint, less hardware to maintain, single-vendor support and, most importantly, easier resource provisioning. Of course, the HPE HC 250 that we’re talking about today offers additional benefits above and beyond that baseline.

Q: I know we’ll talk about more detailed examples, but give us a high-level look at what types of businesses might need that level of agility.

A: It’s not so much the size of the organization as it is the type of organization. Let’s say, for example, you’re running a manufacturing operation in a pretty static environment. Your capacity demands are probably pretty predictable. You may not even need a cloud environment.

On the other hand, take an omnichannel retailer with a highly seasonal business. Capacity requirements are less predictable. They may have some idea of when sales are going to start ticking up, but they don’t know how much network traffic they will need to accommodate or what their data storage requirements will be. Plus, in the off-season, their resource needs drop significantly, and they end up paying for capacity they aren’t using.

Software developers are another common example. As they are developing and testing new applications, they may need to provision additional resources. But, once the application is rolled out, those resources are no longer needed.

Q: Great. Let’s get into more of the specifics of what HPE has to offer. You want to talk to us specifically about the HPE HC 250, right?

A: Right. The HPE HC 250 is a 2U system with 2 to 4 nodes, and you can have up to 4 of these HC 250s managed under a single domain. There are a couple of things that make this solution stand out.

First, when it comes to hyper-converged solutions, hardware is becoming almost a secondary consideration. It’s really the software that makes the difference, because that’s what allows for ease of deployment and maintenance. The HC 250 has a shopping cart-style management interface that allows you to provision resources very quickly. It’s built on software-defined storage (HPE’s StoreVirtual technology) and is integrated with Hyper-V or VMware.

Q: As I understand it, the HC 250 makes provisioning cloud resources easy because it comes with Azure Pack, but it can also be used as an on-premises-only solution, right?

A: Exactly. The HPE interface makes it easy to deploy resources and provides a consistent management experience whether those resources are on-prem, within Azure itself or even in a hosted environment from a third party.

And, to clarify a bit on the Azure side, I think Microsoft would prefer a Hyper-V environment integrating with Azure, but that’s not an absolute prerequisite. Microsoft also hosts VMware VMs in Azure. That’s good news for the customer because it gives them a choice.

Q: Besides the user interface, is there anything else that makes the HC 250 stand out?

A: The UI is the biggest differentiator, and it applies pretty much regardless of which solution the HC 250 is being compared to; however, there are other differences that sometimes come into play. For example, the HC 250 allows customers to start with a 2-node cluster, whereas some solutions require a 3-node starting point.

Q: That makes it a little more appropriate for a medium-sized business.

A: Right. That’s the sweet spot for the HC 250. It can be appropriate for an enterprise, too, but HPE has other solutions for larger organizations that the customer might want to consider.

In part two of this interview, we’ll ask Nigel for his perspectives on how the HC 250 helps organizations deal with two pressing issues: disaster recovery and business continuity. If you have questions about hyper-convergence or the HC 250, reach out to Nigel or use the comment box below.