Legacy vs. Cloud Environments: How to Determine Workload Placement

Imagine buying a home. Soon after you purchase it, you discover a hidden structural flaw – termite damage, black mold or crumbling clay-tile plumbing – that requires a very expensive fix. In hindsight, it might have been better not to buy the home at all and instead find a place with fewer issues.

 Similar situations can occur with cloud workload placement. In 2019, IDC noted the continued trend of reverse cloud migrations, which involve organizations moving workloads that had previously been placed in the cloud back into on-premises environments, often at great expense.  

IDC also estimated that by 2020, 75% of enterprises would use a private cloud. The reasons include security and compliance concerns, anticipated cost savings that failed to materialize and growing interest in hyper-converged infrastructure. Performing due diligence before a workload goes into the cloud is the best way to avoid the technical complications and potentially high costs of making such adjustments after the fact.

 In this article, we’ll look at what steps to consider when deciding where to place a workload, so that you can make an informed choice that will provide the right combination of cloud performance and cost-effectiveness. Let’s go through them one by one, starting with what you need to discover during an initial hybrid IT assessment. 

What are the performance characteristics of the workloads in question?

By measuring each workload’s usage of CPU, memory, and networks, it’s possible to get a general sense of whether it would be a good candidate for the cloud. 

For example, a highly variable workload, such as an intermittently busy e-commerce site or student enrollment system, is an ideal candidate for public cloud placement. Because it doesn't require a steady stream of IT resources, there's no real reason to purchase and provision them in-house. You would otherwise be investing in copious amounts of memory and numerous CPUs that would only occasionally be needed at full capacity. Apps with unpredictable demand are good cloud candidates for the same reasons.

 In contrast, an application that sees constant demand will usually be better suited to legacy/on-prem deployment. Moving it to the cloud could result in major sticker shock, as its level of activity will accrue significant charges for all of the necessary public cloud services, including snapshots and bandwidth. 
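This variability test can be made concrete. The sketch below is a simplified heuristic (the threshold and sample data are illustrative assumptions, not a vendor formula): it computes the coefficient of variation of CPU utilization samples, and flags highly variable workloads as public cloud candidates while steady ones stay on-prem.

```python
from statistics import mean, stdev

def burstiness(cpu_samples: list[float]) -> float:
    """Coefficient of variation of CPU utilization samples (e.g., hourly %)."""
    avg = mean(cpu_samples)
    return stdev(cpu_samples) / avg if avg else 0.0

def placement_hint(cpu_samples: list[float], threshold: float = 0.5) -> str:
    """Rough heuristic: highly variable workloads favor public cloud;
    steady workloads often cost less on-prem. Threshold is an assumption."""
    return "public-cloud candidate" if burstiness(cpu_samples) > threshold else "keep on-prem"

# Intermittently busy e-commerce site: mostly idle, occasional spikes.
spiky = [5, 4, 6, 90, 85, 5, 3, 95]
# Constant-demand application: steady utilization around 60%.
steady = [62, 60, 65, 61, 63, 64, 60, 62]

print(placement_hint(spiky))   # public-cloud candidate
print(placement_hint(steady))  # keep on-prem
```

In practice you would feed this from your monitoring tooling and look at memory and network profiles the same way, but the idea is the same: the spikier the demand, the stronger the case for paying only for peaks.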

Does the workload need any modifications before it’s placed in the cloud?

Most of the time, the answer to this question is “yes.” We estimate that at least 80% of workloads require some adjustments before they’re cloud-ready. Sending a legacy workload as-is into a public cloud is typically a recipe for subpar performance and potentially a reverse migration. 

Porting an on-prem workload into a cloud-optimized version is a multi-step process, often requiring: 

  • Applying relevant patches and security upgrades. 
  • Assessing its security and compliance obligations, e.g. under HIPAA, GDPR, etc. 
  • Identifying the workload’s dependencies on specific pieces of infrastructure. 
  • Updating or rewriting it for a different OS or framework. 
  • Placing it into a container for increased portability. 
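The checklist above can be tracked programmatically. The sketch below is a hypothetical readiness tracker (the class and field names are ours, not any vendor's tooling) that records which steps are done and reports the remaining gaps for a given workload.

```python
from dataclasses import dataclass

@dataclass
class WorkloadAssessment:
    """Hypothetical cloud-readiness checklist; field names are illustrative."""
    name: str
    patched: bool = False              # relevant patches and security upgrades applied
    compliance_reviewed: bool = False  # e.g., HIPAA / GDPR obligations assessed
    dependencies_mapped: bool = False  # ties to specific infrastructure identified
    os_framework_ported: bool = False  # updated or rewritten for target OS/framework
    containerized: bool = False        # packaged for portability

    def gaps(self) -> list[str]:
        """Names of checklist items still outstanding."""
        return [f for f, done in vars(self).items()
                if isinstance(done, bool) and not done]

    def cloud_ready(self) -> bool:
        return not self.gaps()

app = WorkloadAssessment("billing-api", patched=True, compliance_reviewed=True)
print(app.cloud_ready())  # False
print(app.gaps())         # ['dependencies_mapped', 'os_framework_ported', 'containerized']
```

Even a lightweight structure like this keeps the migration conversation honest: a workload doesn't go to the cloud until every box is checked, or a deliberate exception is recorded.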

It’s also possible that after performing deeper analysis using versioning tools, you’ll discover that a workload isn’t suitable for a public cloud environment at all. This could be because of its performance characteristics, or because the cloud lacks support for its requirements. For example, porting an application to the cloud could put a regulatory audit at risk: passing would require you to know the exact locations of application data and to prevent any unauthorized access.

 If a workload isn’t cloud-ready or even cloud-suitable, it should be left in place and perhaps revisited later on to see if the situation has changed or if it could be replaced by SaaS.

Would a hybrid cloud that extends the current environment make sense for the workload?

Using multiple clouds is fast becoming the norm. In 2019, RightScale found that most enterprises had a multicloud strategy in place and that the share of businesses combining public and private cloud deployments rose from 51% in 2018 to 58%.

A hybrid cloud can sometimes provide the right balance of control and performance for workloads being migrated from a legacy environment. Solutions such as VMware Cloud on AWS allow current data center processes and tools to be carried over into a cloud environment.

Moving to this type of hybrid platform requires no sweeping changes. At the same time, it opens the door to additional services for security and disaster recovery. A hybrid cloud ultimately allows for more streamlined data center operations in support of key workflows such as dev/test.

What is the business impact of migrating this workload to the cloud? 

Without getting into all of the technical details discussed above, a workload’s suitability for the cloud can be evaluated based on how its migration would affect the organization as a whole. For instance, how would it change the day-to-day experience of its end users? 

Keeping a workload in a single main data center might actually degrade its performance, since that central location may be physically quite far from some branch sites. Single-site deployments are also at higher risk from disasters, since all the eggs are in one technical basket.

 Consider what it would be like to have the bulk of your employees or customers in Seattle while your data center was in New York City. That distance would materially affect workload performance, plus any failure would immediately put you in a bind. The rise of the Internet of Things and the corresponding need to backhaul more data with less latency shows how difficult it will be to maintain certain data center setups going forward. 
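The Seattle-to-New York penalty is easy to estimate with back-of-the-envelope physics. The numbers below are assumptions (an approximate great-circle distance and light traveling through fiber at roughly two-thirds the speed of light), but they show why distance alone sets a hard floor on performance.

```python
# Back-of-the-envelope latency estimate; both constants are rough assumptions.
GREAT_CIRCLE_KM = 3_870      # approx. Seattle <-> New York City
FIBER_SPEED_KM_S = 200_000   # light in fiber moves at roughly 2/3 of c

one_way_ms = GREAT_CIRCLE_KM / FIBER_SPEED_KM_S * 1000
rtt_ms = 2 * one_way_ms

print(f"theoretical minimum round trip: {rtt_ms:.1f} ms")
# Real-world routing, queuing, and protocol overhead typically push this
# well beyond the theoretical floor -- a serious penalty for chatty applications.
```

No amount of hardware in the New York facility can buy back those milliseconds; only placing the workload closer to its users can.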

Shifting more of these sorts of workloads into the cloud might provide some much-needed redundancy and greater geographical distribution. However, different physical limitations could come into play: a workload that runs on system memory and flash storage in a data center might theoretically be portable to the public cloud, but it won’t perform exactly the same way once there.

How to make the best decision about workload placement 

We’ve covered some of the biggest considerations for placing workloads, but there are others that will inevitably enter the picture. Looking to learn more about the challenges of cloud migration? Check out this Forrester report, 10 Facts Tech Leaders Should Know About Cloud Migration.

Softchoice can help you navigate them en route to making the best possible choices for your environment. Learn more by contacting our team or explore Softchoice cloud services.

How IT Solves the Business Agility Formula


It’s 2018 – it’s time to retire the job title ‘IT Manager’.

When we think of an ‘IT Manager’, a traditional IT department comes to mind: the individuals who “keep the lights on” at an organization. But IT managers and their teams are no longer just those people. They are driving organizations toward continued growth and success in today’s digital, cloud-everywhere, mobile-first world. That’s why we feel the job title today should be “Orchestrator of Digital Innovation”.

Your DIY infrastructure is costing you…here’s why Hyper-Converged Infrastructure (HCI) promises a better way

Modern business has accelerated. Most organizations realize they need to change the way they consume IT resources. They recognize the challenge: Adjust consumption to make sense at a given time for a given project, or risk falling behind.

The “build-it-yourself” infrastructure model still feels like the natural choice for most organizations. It’s comfortable, uses familiar concepts, and offers plenty of reference deployments to emulate.

Today, an unprecedented gap has opened. Traditional IT can no longer deliver a customer experience that meets business demands. To close the gap, users step outside the IT department to find alternatives that can.

On-Demand Webinar: The Next Generation of Hyper Converged Infrastructure

NetApp HCI enterprise-scale solutions let you break free from limitations. Consolidate multiple mixed workloads, scale storage and compute independently on demand, and guarantee performance.

But building infrastructure in-house comes at a cost.

Embracing non-sanctioned “Shadow IT” solutions exposes the organization to security gaps. Then there is the attendant decline in performance. Softchoice surveyed over 900 CEOs on the issue. The result? More than 90% felt hybrid IT was the best approach to closing the gap and eliminating shadow IT.

Nonetheless, the term “hybrid IT” is subject to interpretation. From our perspective, it means:

  • Adopting public cloud to increase flexibility, lower costs and reduce time to market.
  • Modernizing the data center to improve performance and make operational gains.
  • Ensuring the network is secure and ready to handle mobile, remote office and cloud services.

When it comes to transforming the data center, the goal is often to close the gap between supply and demand. This means evaluating other consumption models, like pay-as-you-go and converged infrastructure.

Hyper-converged architecture refers to infrastructure with 100% virtualized components. This option allows organizations to simplify operations while delivering applications to market faster.

IDC predicts this approach will define 60% of server, storage and network deployments by 2020. It’s no surprise: hyper-converged infrastructure (HCI) meets the needs of most modern businesses:

  • It offers a tight integration between server, storage, and networking. Then, it adds the advantages of a software-defined data center. It’s the best of both worlds.
  • Organizations deal with a single hardware vendor, meaning a smooth, cohesive support experience.
  • It’s flexible around deploying existing architecture and expanding storage and compute resources.
  • It offers a “single pane of glass” for deployment and management.

Companies looking to catch up would do well to get on board. HCI is the fastest-growing category in the data center market, with revenue expected to reach $10.8 billion by 2021, a 48% compound annual growth rate from 2016.

Despite its many positives, first-generation HCI often proved too good to be true. The typical offering came with flaws that knocked it out of consideration.

Many converged and HCI offerings failed to deliver consistent quality of service (QoS) across applications. As business needs changed, overloaded system resources created the infamous “HCI tax.” Many more mandated fixed ratios for resource expansion, making adaptation slow and difficult. Streamlining stack management was often impossible, due to product immaturity and limited integration with existing hypervisor platforms.

But we’ve reached the next step in the evolution of hyper-converged infrastructure. Today, many of the elements lacking in “HCI 1.0” have arrived in version 2.0 from NetApp HCI.

Gartner recognizes NetApp as the leader in network-attached storage and calls its NetApp HCI solution an evolutionary disruptor.

NetApp HCI is the first enterprise-scale hyper-converged infrastructure solution. It’s evolved from the SolidFire all-flash storage product. As such, it delivers integrated storage efficiency tools, including always-on de-duplication. Also, on the menu: integrated replication, data protection, and high availability. VMware provides a mature and intuitive control interface for the entire infrastructure.

NetApp HCI sets itself apart from competitors in 4 ways:

  1. Guaranteed Quality of Service (QoS): NetApp HCI allows granular QoS control regardless of the number of applications. This eliminates “noisy neighbors” and satisfies every performance SLA.
  2. Flexibility and Scalability: Unlike its competitors, NetApp HCI allows independent scaling of computing and storage resources. This cuts the 30% “HCI tax” from controller VM overhead. Get simpler capacity and resource planning with no more over-provisioning.
  3. Automated Infrastructure: NetApp HCI’s deployment engine automates routine manual processes of deploying infrastructure. Meanwhile, VMware vCenter plug-ins make management much simpler. The full NetApp architecture goes from zero to fully-operational in under 30 minutes.
  4. Integration with NetApp Data Fabric: Take full advantage of NetApp’s storage offerings. Access and move your data between public and private cloud with zero downtime. Don’t compromise on high performance and security.
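To see why guaranteed minimums eliminate “noisy neighbors,” consider the sketch below. It is our own illustrative model of SolidFire-style per-volume QoS (the class, function, and volume names are assumptions, not NetApp’s actual API): every volume is granted its guaranteed floor first, and only the leftover cluster IOPS are shared out, capped at each volume’s maximum.

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    """Illustrative SolidFire-style per-volume QoS policy (not the actual NetApp API)."""
    min_iops: int    # guaranteed floor, honored even under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling, drawn from accrued credits

def allocate(cluster_iops: int, volumes: dict[str, VolumeQoS]) -> dict[str, int]:
    """Grant every volume its guaranteed minimum first, then split the
    remainder equally, capped at each volume's max. A busy neighbor can
    never push another volume below its floor."""
    grants = {name: q.min_iops for name, q in volumes.items()}
    spare = cluster_iops - sum(grants.values())
    share = spare // len(volumes)  # naive equal split of what's left
    for name, q in volumes.items():
        grants[name] += min(q.max_iops - q.min_iops, share)
    return grants

vols = {
    "oltp-db": VolumeQoS(min_iops=5_000, max_iops=15_000, burst_iops=20_000),
    "batch":   VolumeQoS(min_iops=500, max_iops=10_000, burst_iops=12_000),
}
print(allocate(20_000, vols))
```

However the real scheduler divides the spare capacity, the key property is the same one NetApp guarantees: the batch job can never starve the OLTP database below its performance SLA.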

If you’re intrigued, we encourage you to tap into Softchoice’s experience with NetApp. Learn more about NetApp’s HCI by downloading this brief that dives a little deeper into the key benefits of NetApp’s HCI.