Cloud Success Stories – Part 1

Multicloud has become a popular approach for organizations moving to the cloud.  

Although it isn’t practical in all business cases, RightScale finds 84% of companies already run applications in a mix of cloud environments.[1]

In the last several years, Softchoice has seen many of our customers revisiting their approach to the cloud to gain the distinct advantages of several cloud platforms. At the same time, many are looking to de-risk their cloud strategy by avoiding vendor lock-in.

But today’s measure of success in the cloud isn’t just how much an organization gains in efficiency or agility. Instead, it’s how fast the cloud can drive real business transformation.

Sharing Customer Stories

Many of our customers are turning to one or more clouds to stay true to the same goal: spend more time delivering great products and services and less time maintaining infrastructure.

Nonetheless, each organization – and each application – is on a journey of its own. We wanted to share our experience helping 1,400+ organizations transition to the cloud, so that others can benefit from what they’ve learned.

This series will explore real-life stories on the journey to the cloud. In this article, we’ll look at three organizations and how they integrated Google Cloud into their cloud strategies to deliver the best possible customer experiences.  

Michael Young, Vice President – Technology Strategy, Birch Hill Equity  

The Challenge: Birch Hill Equity wanted to adopt serverless architecture to the greatest extent possible to focus on delivering analytics products rather than maintaining infrastructure.  

“Multicloud certainly ties into our strategy at Birch Hill… Our number one priority is for as much of what we do as possible to be serverless.”

The Journey:

  • Birch Hill started in the public cloud with AWS, using PostgreSQL, Databricks and a data warehouse to support its then-lightweight data center needs.
  • The company’s growing portfolio of analytics products required ever-faster response times, prompting it to adopt Google BigQuery for its sub-second query performance and lack of infrastructure to maintain (a minimal query sketch follows this list).
  • Google identity and secure access through OAuth met some critical security needs while allowing a small team to run data-intensive analytics workloads.  
  • Today, Google Cloud allows Birch Hill to spin up new analytics offerings quickly, while AWS supports heavy workloads with Databricks on EC2 clusters and other infrastructure components.
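
As a loose illustration of that serverless pattern – not Birch Hill’s actual setup – here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table and column names are hypothetical placeholders.

```python
# Minimal BigQuery sketch: run an analytics query with no clusters to
# size, patch or maintain. All names below are hypothetical examples.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")

query = """
    SELECT portfolio_company, SUM(revenue) AS total_revenue
    FROM `example-analytics-project.analytics.quarterly_metrics`
    GROUP BY portfolio_company
    ORDER BY total_revenue DESC
    LIMIT 10
"""

# BigQuery executes the query as a fully managed service; the client
# simply waits for results.
for row in client.query(query).result():
    print(f"{row.portfolio_company}: {row.total_revenue:,.0f}")
```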

“We wanted to focus our time on delivering analytics products, not maintaining our cloud.”   

Next Steps: 

  • Birch Hill still faces challenges in implementing and managing effective security across multiple clouds – a common difficulty for multicloud adopters.  
  • Meanwhile, the company struggles with skills shortages in DevOps, infrastructure and architecture design, preferring to focus on expanding its analyst bench.  

Read the full conversation.

Sergei Leschinsky, Senior Director – Information Services, Polar Inc.  

The Challenge:  Polar needed to minimize delays and embrace a distributed network to support exponential growth and global expansion for its real-time bidding product for digital advertising.  

Distributed geographies became an essential part of the Polar Platform – a distinct advantage of the public cloud.

The Journey: 

  • Polar started its cloud journey as an early adopter, extending some of its production workloads to the public cloud – however, the project was unsuccessful.  
  • Overcoming an early false start, the company began a second migration with AWS, favoring its industry-leading breadth of services and solutions.
  • As its product entered a period of rapid growth, Polar consolidated CDN providers and started bringing its heaviest-traffic workloads to Google Cloud. The result was a 50% savings in egress traffic.   
  • This allowed Polar to take advantage of geo-locations to support expansion in Europe and Australia.
  • Today, Polar is using Google Cloud to support compute, load balancing and MySQL while AWS supports its data storage needs.  

Next Steps: 

  • Polar’s next steps in the cloud are to migrate its remaining high-traffic workloads to its Google Cloud environment.  
  • However, as a customer with a smaller footprint, the company sometimes finds it an uphill battle to get cloud providers’ attention to escalate and resolve issues.
  • The company also sees some difficulty navigating changes to billing structures and programs across several large, complex and innovative service providers.

“Our approach and need for public cloud today are very different than what we were trying to use it for in the past.”  

Read the full conversation.  

Norman Shi, Chief Technology Officer, Gradient.io  

The Challenge: As a startup, Gradient needed to process massive amounts of data in very short periods to support its SaaS tool ranking brand performance on Amazon’s retail platform.  

“Eventually, your application requirements will get to a stage where you require a higher level of infrastructure that offers greater scale, elasticity and processing speed.” 

The Journey: 

  • As a cloud-native company, Gradient started its journey without legacy infrastructure, allowing it to select the cloud provider or providers best able to meet its needs.
  • Although Gradient recognized the strengths of AWS, potential data hosting conflicts with its retailer customers rendered the platform impractical for its needs.
  • The company built its technology stack on Google Cloud to take advantage of its exceptional data collection and processing capabilities.  
  • Gradient also wanted to benefit from Google Cloud’s user-friendly interface and open source services like Kubernetes.  
  • Today, Gradient uses Google Cloud to power and optimize its SaaS dashboard for a fast-growing customer base.  

Next Steps:  

  • As a small but growing company in the cloud, Gradient still struggles with resource constraints and the challenge of accessing Google Cloud-specialized skills.
  • They also have some trouble tracking, managing and optimizing their cloud spend as their offering goes through a period of rapid growth.  

“These services are game-changers for any organization that wants to process terabytes and petabytes of data.”

Read the full conversation.

What’s Next for Your Cloud Journey?

The cloud journey is not always pleasant or a complete success at first.

We’ve covered three real-life journeys that led to successful cloud transitions. However, no cloud transition is ever fully complete. Working with a strategic managed services partner like Softchoice will help you:  

  • Achieve the right mix of cloud services to meet your business needs 
  • Take the risk out of cloud adoption and migration 
  • Optimize your cloud spending across multiple providers 
  • Balance product and service innovation with proper cloud governance  
  • Upskill your team on every aspect of the cloud 

Learn more about how we can help by exploring Softchoice Cloud Services. 

Planning to migrate one or more workloads to the public cloud? First, check out this Forrester report, 10 Facts Tech Leaders Should Know About Cloud Migration. 

 [1] RightScale 2019 State of the Cloud Report from Flexera

 

Legacy vs. Cloud Environments: How to Determine Workload Placement

Imagine buying a home. Soon after you purchase it, you discover that it has a hidden structural flaw – termite damage, black mold or crumbling clay-tile plumbing – requiring a very expensive fix. In that case, it may have been better not to purchase the home at all and instead find a place with fewer issues.

 Similar situations can occur with cloud workload placement. In 2019, IDC noted the continued trend of reverse cloud migrations, which involve organizations moving workloads that had previously been placed in the cloud back into on-premises environments, often at great expense.  

IDC also estimated that by 2020, 75% of enterprises would use a private cloud. The reasons include security and compliance, anticipated cost savings that failed to materialize and growing interest in hyper-converged infrastructure. Performing due diligence before a workload goes into the cloud is the best way to avoid the technical complications and potentially high costs of making such adjustments after the fact.

 In this article, we’ll look at what steps to consider when deciding where to place a workload, so that you can make an informed choice that will provide the right combination of cloud performance and cost-effectiveness. Let’s go through them one by one, starting with what you need to discover during an initial hybrid IT assessment. 

What are the performance characteristics of the workloads in question?

By measuring each workload’s usage of CPU, memory, and networks, it’s possible to get a general sense of whether it would be a good candidate for the cloud. 

For example, a highly variable workload, such as an intermittently busy e-commerce site or student enrollment system, would be ideal for public cloud placement. Because it doesn’t require a steady stream of IT resources, there’s no real reason to purchase and provide them in-house. Otherwise, you would be investing in copious amounts of memory and numerous CPUs that would only occasionally be needed at full capacity. Apps with unpredictable demand are also good cloud candidates for the same reasons.

 In contrast, an application that sees constant demand will usually be better suited to legacy/on-prem deployment. Moving it to the cloud could result in major sticker shock, as its level of activity will accrue significant charges for all of the necessary public cloud services, including snapshots and bandwidth. 
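
As a rough sketch of that measurement step, the snippet below samples CPU utilization and uses the coefficient of variation to separate bursty workloads from steady ones. The sampling window and threshold are illustrative assumptions, and a real assessment would also track memory and network usage:

```python
# Sample CPU utilization over time, then flag highly variable workloads
# as potential public cloud candidates. Thresholds are illustrative.
import statistics
import psutil

def sample_cpu(samples: int = 60, interval: float = 1.0) -> list[float]:
    """Collect CPU utilization percentages at a fixed interval."""
    return [psutil.cpu_percent(interval=interval) for _ in range(samples)]

def classify_workload(readings: list[float]) -> str:
    """Classify demand using the coefficient of variation (stdev / mean)."""
    mean = statistics.mean(readings)
    cv = statistics.stdev(readings) / mean if mean else 0.0
    # High variability relative to the mean suggests bursty demand, which
    # maps well to pay-per-use cloud capacity; steady demand tends to be
    # cheaper on dedicated on-prem hardware.
    return "variable (public cloud candidate)" if cv > 0.5 else "steady (consider on-prem)"

if __name__ == "__main__":
    readings = sample_cpu()
    print(classify_workload(readings))
```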

Does the workload need any modifications before it’s placed in the cloud?

Most of the time, the answer to this question is “yes.” We estimate that at least 80% of workloads require some adjustments before they’re cloud-ready. Sending a legacy workload as-is into a public cloud is typically a recipe for subpar performance and potentially a reverse migration. 

Porting an on-prem workload into a cloud-optimized version is a multi-step process, often requiring: 

  • Applying relevant patches and security upgrades. 
  • Assessing all of its security contingencies, e.g. with HIPAA, GDPR, etc. 
  • Identifying the workload’s dependencies on specific pieces of infrastructure (see the sketch after this list).
  • Updating or rewriting it for a different OS or framework. 
  • Placing it into a container for increased portability. 
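
As one illustration of the dependency-mapping step, the sketch below scans an application’s configuration files for hard-coded IP addresses and internal hostnames that tie a workload to specific on-prem infrastructure. The directory, file extension and domain pattern are hypothetical placeholders; a real assessment would also cover DNS records, firewall rules and shared storage.

```python
# Scan config files for hard-coded endpoints that anchor a workload to
# specific infrastructure. Paths and patterns are illustrative only.
import re
from pathlib import Path

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
HOST_PATTERN = re.compile(r"\b[\w-]+\.corp\.example\.com\b")  # hypothetical internal domain

def find_infrastructure_dependencies(config_dir: str) -> dict[str, list[str]]:
    """Map each config file to the hard-coded endpoints found inside it."""
    findings: dict[str, list[str]] = {}
    for path in Path(config_dir).rglob("*.conf"):
        text = path.read_text(errors="ignore")
        hits = IP_PATTERN.findall(text) + HOST_PATTERN.findall(text)
        if hits:
            findings[str(path)] = sorted(set(hits))
    return findings

if __name__ == "__main__":
    for file, endpoints in find_infrastructure_dependencies("/etc/myapp").items():
        print(f"{file}: {', '.join(endpoints)}")
```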

It’s also possible that after performing deeper analysis using versioning tools, you’ll discover that a workload isn’t suitable for a public cloud environment. This could be because of its performance characteristics, or because there is a lack of support for its requirements. For example, porting an application to the cloud may not ensure it passes a regulatory audit, which would require you to know the exact locations of application data and prevent any unauthorized access.

 If a workload isn’t cloud-ready or even cloud-suitable, it should be left in place and perhaps revisited later on to see if the situation has changed or if it could be replaced by SaaS.

Would a hybrid cloud that extends the current environment make sense for the workload?

Using multiple clouds is fast becoming the norm. In 2019, RightScale found that most enterprises had a multicloud strategy in place and that the share of businesses combining public and private cloud deployments rose from 51% in 2018 to 58%.

A hybrid cloud can sometimes provide the right balance of control and performance for workloads that are being migrated from a legacy environment. Solutions such as VMware Cloud on AWS allow current data center processes and tools to be copied over into a cloud environment.

Moving to this type of hybrid platform requires no sweeping changes. At the same time, it opens the door to additional services for security and disaster recovery. A hybrid cloud ultimately allows for more streamlined data center operations in support of key workflows such as dev/test.

What is the business impact of migrating this workload to the cloud? 

Without getting into all of the technical details discussed above, a workload’s suitability for the cloud can be evaluated based on how its migration would affect the organization as a whole. For instance, how would it change the day-to-day experience of its end users? 

Keeping a workload in a single main data center might actually degrade its performance, because it is centralized in a location that’s physically quite far from some branch sites. Also, single-site deployments are at higher risk from disasters, since all the eggs are in the same technical basket.

 Consider what it would be like to have the bulk of your employees or customers in Seattle while your data center was in New York City. That distance would materially affect workload performance, plus any failure would immediately put you in a bind. The rise of the Internet of Things and the corresponding need to backhaul more data with less latency shows how difficult it will be to maintain certain data center setups going forward. 
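
A back-of-envelope calculation shows why that distance matters. Assuming a roughly 3,900 km path and light traveling through fiber at about 200,000 km/s (both approximations), the best-case round trip is close to 40 ms before any routing or queuing delay:

```python
# Rough assumptions: approximate great-circle distance and the speed of
# light in fiber (about two-thirds of c). Real routes are longer and add
# switching and queuing delay on top of pure propagation.
DISTANCE_KM = 3_900          # approximate Seattle-to-New York path
FIBER_SPEED_KM_S = 200_000   # light in fiber: ~200,000 km/s

one_way_ms = DISTANCE_KM / FIBER_SPEED_KM_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"Best-case one-way latency: {one_way_ms:.1f} ms")  # ~19.5 ms
print(f"Best-case round trip: {round_trip_ms:.1f} ms")    # ~39.0 ms

# A chatty application making 50 sequential round trips per user action
# would spend roughly 2 seconds waiting on geography alone.
print(f"50 sequential round trips: {50 * round_trip_ms / 1000:.1f} s")
```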

Shifting more of these sorts of workloads into the cloud might provide some much-needed redundancy and greater geographic distribution. However, different physical limitations could come into play. A workload that runs on system memory and flash storage in a data center might theoretically be suited to the public cloud, but it wouldn’t perform exactly the same way once there.

How to make the best decision about workload placement 

We’ve covered some of the biggest considerations for placing workloads, but there are others that will inevitably enter the picture. Looking to learn more about the challenges of cloud migration? Check out this Forrester report, 10 Facts Tech Leaders Should Know About Cloud Migration.

Softchoice can help you navigate them en route to making the best possible choices for your environment. Learn more by contacting our team or by exploring Softchoice cloud services.

 

Why We’re Excited for VMworld 2019


VMworld is the marquee VMware event of the year.

The conference showcases the technology and solutions providers that are transforming the IT landscape. From mobility and the cloud to networking and security – VMworld offers a glimpse of what’s happening in IT now – and what’s coming next.

The annual US conference kicks off in San Francisco on August 25 and Softchoice has a team of VMware experts attending. We asked a few of them to share what’s got them most excited about this year’s event.

Here’s what they had to say…

Scott Mathewson, Software-Defined Data Center (SDDC) Practice Lead

There are a lot of exciting announcements coming from VMware this year.

I’m especially curious to see what’s new and upcoming as it relates to VMware’s software-defined networking (SDN) strategy and roadmaps, particularly NSX. Being able to automate network provisioning and have network and security settings wrapped around applications provides efficient, error-free provisioning with “set-once” security policies. This means that IT can reduce the time and effort needed to provision new workloads or move applications to the cloud.

New versions of VMware NSX will also allow for greater flexibility in SDN configuration for high availability and backup across multiple clouds. NSX services, firewalls, routers and load balancers will now be consumable in the cloud as a service. The on-prem and cloud versions of NSX come together to provide the ultimate options for customers when it comes to networking and security. Businesses can now choose the networking and security services that are critical to the business – whether they are in the cloud or on-prem.

Meanwhile, VMware Cloud on AWS is growing. New features like Amazon Elastic Block Store (EBS) storage integrated into vSAN and Amazon Relational Database Service (RDS) support native to the virtual machine environment enable much greater application flexibility. Customers can now truly take advantage of software services to modernize business applications. They can also reduce costs and increase performance and agility for business applications. VMC on AWS has robust global scale. And it’s the only solution that allows companies to get to the cloud within hours instead of months or years, thanks to the ability to immediately drag and drop applications from on-prem to the cloud with VMware.

I’m also looking forward to learning more about announcements relating to VMware on Azure and Google Cloud. I see this development making VMware the provider of choice for many organizations looking for a fast, secure solution for moving applications to the cloud. VMware and the CloudHealth platform will also play an important role in managing applications in the cloud.

John Long, VMware Technical Architect

At this year’s conference, I’m excited for content and sessions that dive deeper into two emerging products: VMware VeloCloud and VMware CloudHealth (whose acquisition was announced at VMworld last year).

VeloCloud is an SD-WAN solution that can be deployed in minutes from remote sites. It has taken the market by storm. It’s arguably the most strategic purchase VMware has made since it acquired Nicira (now NSX) in 2012.

VeloCloud is changing the way we think about internet connectivity and the way customers bridge data center to co-location, data center to cloud, and cloud to cloud. NSX provides the ability to stretch Layer 2 networks over Layer 3. From there, VeloCloud comes in to ensure connectivity through multiple connections that upgrade ordinary broadband links into enterprise SD-WAN.

It also assures optimal application performance – a growing requirement for VoIP, video and other bandwidth-intensive applications. One of the most compelling reasons we are excited about VeloCloud is how the quality of service (QoS) is provided through public and private cloud-based management as a service. Branch deployment is also virtually automated.

But the VeloCloud SD-WAN “secret sauce” is Dynamic Multipath Optimization. This provides continuous path monitoring along with automated bandwidth discovery. In turn, this enables application-aware dynamic per-packet steering as well as on-demand remediation with QoS.

It’s certainly feature-rich as it exists today (with SD-WAN artificial intelligence and machine learning). But we’re excited to see how this offering will evolve in the coming months and years.

At the same time, CloudHealth is fast becoming one of the most trusted multicloud management platforms. Some organizations are transitioning to hybrid cloud strategies, but many are moving to multicloud. With so many clouds to manage, there is a need for a solution to wrangle all of them before a storm happens.

CloudHealth provides customers with cloud visibility, cost management, security and governance. By providing visibility into usage, performance and the way each workload consumes resources, CloudHealth can support decision-making on workload placement. This, in turn, helps to save management resources and enable organizations to make decisions faster. This visibility extends into container orchestrators, such as Kubernetes, Mesos or Amazon ECS and EKS and helps to maximize utilization of container platforms.

CloudHealth also allows enforcement of security policies to minimize risk along with governance to maintain control over cloud environments through policies and workflows.

Organizations building their multicloud strategy should consider the role CloudHealth could play. So should those who are already on this journey.

Learn more about hybrid vs. multicloud and get help with your strategy.

 

Jacy Townsend, Sr. VMware Technical Architect

I’m very excited for VMworld this year! No doubt there’ll be several new product and feature announcements.

In anticipation of this year’s conference, I spent some time going through the catalog of sessions. Many sessions and themes piqued my interest. What I’m most excited about this year is seeing the direction VMware is headed when it comes to supporting the industry shift toward containerization of workloads. In particular, I’m looking forward to “Architecting VMware PKS on HCI Powered by vSAN”.

For those who don’t know, VMware PKS is also known as “Pivotal Container Service.” It’s an enterprise Kubernetes platform that simplifies container orchestration on any infrastructure. Meanwhile, vSAN is the policy-based storage piece of VMware’s hyper-converged infrastructure solution. I’m very curious to see how these play off one another.

See You in San Francisco!

VMworld is full of exciting sessions, labs, keynotes and exhibits all focused on technology that will take IT to the next level. The Softchoice team is excited to see you there! Learn more about VMworld.

Attending VMworld US and want to connect? Reach out at vmware@softchoice.com.