Where to Find Savings in Your Cloud or Data Center Environment

Part 1 of our 2-part series on Driving Efficiency through Infrastructure Optimization. Read Part 2, “How to Add to Your IT Environment without Adding Costs.” 

For IT departments, the mandate to do more with less and get the most out of technology investments isn’t new. But today there’s much more pressure to find and seize immediate opportunities to cut costs.

In addition to rationalizing software and restructuring contracts, on-premises data center and public cloud infrastructure are two high-impact areas for potential short-term savings.

There are some common challenges, however. In the cloud, a lack of visibility and formal governance practices makes it harder to know where and how to find savings. In the data center, the need to avoid new capital expenditures makes it necessary to free up existing capacity to support new projects.

In fact, cloud-consuming organizations waste an average of 30% of their cloud spend on redundant resources (Source: RightScale). At the same time, inactive data accounts for 50% of total storage capacity, taking up valuable space (Source: NetApp).

The best options for making short-term financial impact in the infrastructure environment are:

  • Reducing cloud costs by improving management and visibility
  • Freeing up data center compute and storage capacity to avoid future costs

Below, we go deeper into each of these cost-saving opportunities.

Reducing Cloud Costs through Improved Management and Visibility

Without careful management, public cloud infrastructure costs can get out of control.

Because organizations can procure and consume public cloud resources much more easily than their on-premises counterparts, losing track of workloads and associated spend is a common problem.

Redundant resources, the absence of adequate monitoring tools and a lack of control over who initiates or decommissions workloads in the cloud all contribute to overspend.

The Flexera State of the Cloud Report for 2020 found that 79% of those surveyed cited managing costs as a top cloud challenge, second only to security.  The report also found that enterprise companies overspent their cloud budgets by 23% on average in 2019.

To right-size public cloud infrastructure and drive cost efficiency, consider the following actions:

  • Find and remove overprovisioned or idle resources: Identifying and reviewing accounts with low I/O activity helps you determine which resources could be decommissioned with minimal impact to the business (see the sketch after this list).
  • Implement and enforce formal cloud governance: A formal cloud governance policy helps you better understand the structure of cloud costs, establish accountability and control access and decision-making around cloud resources.
  • Adopt a cloud management platform: A cloud management platform helps enhance visibility into your public cloud environment to promote better forecasting for cloud budgets based on real-time usage. Further categorizing cloud instances by assigning metadata tags related to billing, environments, applicable compliance requirements and more allows IT teams to track usage and associated cost across cloud instances, even in a hybrid or multicloud environment. IT can then augment and automate tagging using cloud native tools for policy enforcement. Together, these ensure that utilization meets requirements while reducing financial risk.
  • Optimize cloud storage: As with on-premise infrastructure, automating the categorization and storage of active and inactive data into performance and capacity tiers in the cloud helps drive further efficiency.
  • Implement automated scaling: Putting automated scaling in place allows you to scale up resources when needed and scale down the rest of the time. This replaces the need to accommodate maximum utilization, which is often a needless expense.
  • Use reserved versus on-demand instances: The leading public cloud providers offer discounts to customers who reserve instances in advance for anticipated future needs rather than paying higher rates for on-demand usage.
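
To make the first action above more concrete, here is a minimal Python sketch (using boto3 against AWS, one of several possible providers) that flags running EC2 instances with consistently low CPU utilization as candidates for review. The seven-day window and 5% threshold are illustrative assumptions, and a real review would also weigh network and disk I/O before decommissioning anything.

```python
# Sketch: flag EC2 instances with very low average CPU over the past week
# as candidates for review. The 7-day window and 5% threshold are
# illustrative assumptions, not recommendations.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,          # hourly samples
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < 5.0:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over 7 days - review for decommissioning")
```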

Looking to learn more about managing in the cloud? Get the guide

Freeing Up Data Center Resources to Avoid Costs

Compared with adding new usage-based public cloud resources, the cost to continue operating an owned data center is often negligible. However, when capacity isn’t optimized for efficiency, the result is additional capital expenditures when the time comes to support new applications or projects.

For instance, many organizations over-provision data center hardware to avoid the problem of running short of capacity within their virtualized infrastructure. Meanwhile, inactive data stored on-premises takes up valuable storage resources that could be tapped for other initiatives.

To free up on-premises infrastructure and avoid unnecessary future spend, we recommend these steps:

  • Optimize virtual machine resources: Optimizing workload placement and right-sizing VM allocations reduces risk and capacity waste by reclaiming resources from oversized virtual machines (VMs). At the same time, increasing VM density by rebalancing VMs helps safely meet workload requirements and avoid resource contention (a minimal sketch follows this list).
  • Optimize on-premises storage: While not a direct cost reduction, optimizing on-premises storage allows you to extend the life of existing storage and defer capital costs. Tiering storage to the cloud automates the categorization of active and inactive data. By moving inactive data to a lower-cost cloud storage provider, you can free up on-premises capacity for new projects and pay for additional storage at a lower monthly rate.
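
As a rough illustration of the right-sizing step above, the following Python sketch reads a hypothetical utilization report exported from your virtualization platform and flags VMs whose average CPU and memory usage both sit well below their allocations. The column names and 20% thresholds are assumptions for illustration only.

```python
# Sketch: flag oversized VMs from an exported utilization report.
# The CSV layout (vm_name, vcpus, avg_cpu_pct, mem_gb, avg_mem_pct) and the
# 20% thresholds are hypothetical, for illustration only.
import csv


def find_oversized_vms(report_path: str, cpu_threshold: float = 20.0, mem_threshold: float = 20.0):
    """Return VMs whose average CPU and memory utilization both sit below the thresholds."""
    candidates = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            avg_cpu = float(row["avg_cpu_pct"])
            avg_mem = float(row["avg_mem_pct"])
            if avg_cpu < cpu_threshold and avg_mem < mem_threshold:
                candidates.append({
                    "vm": row["vm_name"],
                    "vcpus": int(row["vcpus"]),
                    "mem_gb": float(row["mem_gb"]),
                    "avg_cpu_pct": avg_cpu,
                    "avg_mem_pct": avg_mem,
                })
    return candidates


if __name__ == "__main__":
    for vm in find_oversized_vms("vm_utilization.csv"):
        print(f"{vm['vm']}: {vm['vcpus']} vCPU / {vm['mem_gb']} GB allocated, "
              f"avg CPU {vm['avg_cpu_pct']:.0f}%, avg memory {vm['avg_mem_pct']:.0f}% - right-size candidate")
```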

Next Steps to Finding Cost Savings in Your Environment

Finding short-term opportunities and immediate steps to reduce infrastructure spending may require the help of an experienced and specialized solutions provider like Softchoice.

We offer the following solutions to help organizations like yours find and take advantage of these savings opportunities.

  • Cloud Cost Assessment: Analyze your existing public cloud workloads to uncover immediate cost-savings opportunities and improve visibility into cloud cost drivers.
  • Data Center Technology Review: Pinpoint opportunities to optimize infrastructure with the goal of freeing up existing capacity to offset future capital expenses. The review targets server, storage, virtualization, hybrid cloud, backup and file systems.
  • Cloud Data Tiering Accelerator: Identify inactive data stored on-premises that could be moved to lower-cost public cloud storage to free up on-premises capacity.

Our team of licensing and technology vendor experts is ready to help you find efficiencies wherever you are in your journey from response to recovery.

Looking for help to find and address cost savings opportunities in your IT environment?

Connect with an Expert.

Cloud Success Stories – Part 1

Multicloud has become a popular approach for organizations moving to the cloud.  

Although it isn’t practical in all business cases, RightScale finds 84% of companies already run applications in a mix of cloud environments.[1]

In the last several years, Softchoice has seen many of our customers revisiting their approach to the cloud to gain the distinct advantages of several cloud platforms. At the same time, many are looking to de-risk their cloud strategy by avoiding vendor lock-in.

But today’s measure of success in the cloud isn’t just how an organization gains in efficiency or agility. Instead, it’s how fast the cloud can drive real business transformation.  

Sharing Customer Stories

Many of our customers are turning to one or more clouds to stay true to the goal: spend more time delivering great products and services, and less time maintaining infrastructure.

Nonetheless, each organization – and each application – is on a journey of its own. We wanted to share our experience helping 1,400+ organizations transition to the cloud and help others benefit from what they’ve learned.  

This series will explore real-life stories on the journey to the cloud. In this article, we’ll look at three organizations and how they integrated Google Cloud into their cloud strategies to deliver the best possible customer experiences.  

Michael Young, Vice President – Technology Strategy, Birch Hill Equity  

The Challenge: Birch Hill Equity wanted to adopt serverless architecture to the greatest extent possible to focus on delivering analytics products rather than maintaining infrastructure.  

“Multicloud certainly ties into our strategy at Birch Hill… Our number one priority is for as much of what we do as possible to be serverless.”

The Journey:

  • Birch Hill started in the public cloud with AWS, using PostgreSQL, Databricks and a data warehouse to support its then-lightweight data center needs.
  • The company’s growing portfolio of analytics products required ever-faster response times, prompting it to adopt Google BigQuery for its sub-second response times and lack of infrastructure to maintain.
  • Google identity and secure access through OAuth met some critical security needs while allowing a small team to run data-intensive analytics workloads.  
  • Today, Google Cloud allows Birch Hill to spin up new analytics offerings fast while AWS supports heavy workloads with Databricks over EC2 clusters along with other infrastructure components.  

“We wanted to focus our time on delivering analytics products, not maintaining our cloud.”   

Next Steps: 

  • Birch Hill still faces challenges in implementing and managing effective security across multiple clouds – a common difficulty for multicloud adopters.  
  • Meanwhile, the company struggles with skills shortages in DevOps, infrastructure and architecture design, preferring to focus on expanding its analyst bench.  

Read the full conversation 

Sergei Leschinsky, Senior Director – Information Services, Polar Inc.  

The Challenge:  Polar needed to minimize delays and embrace a distributed network to support exponential growth and global expansion for its real-time bidding product for digital advertising.  

Distributed geographies became an essential part of the Polar Platform – a distinct advantage of public cloud.

The Journey: 

  • Polar started its cloud journey as an early adopter, extending some of its production workloads to the public cloud – however, the project was unsuccessful.  
  • Overcoming an early false start, the company began a second migration with AWS, favoring its industry-leading versatility of services and solutions.  
  • As its product entered a period of rapid growth, Polar consolidated CDN providers and started bringing its heaviest-traffic workloads to Google Cloud. The result was a 50% savings in egress traffic.   
  • This allowed them to take advantage of geo-locations to support expansion in Europe and Australia. 
  • Today, Polar is using Google Cloud to support compute, load balancing and MySQL while AWS supports its data storage needs.  

Next Steps: 

  • Polar’s next steps in the cloud are to migrate its remaining high-traffic workloads to its Google Cloud environment.  
  • However, as a customer with a smaller footprint, the company sometimes finds it an uphill battle to get cloud providers’ attention when escalating and resolving issues.
  • They also find it difficult at times to navigate billing-structure and program changes across several large, complex and innovative service providers.

“Our approach and need for public cloud today are very different than what we were trying to use it for in the past.”  

Read the full conversation.  

Norman Shi, Chief Technology Officer, Gradient.io  

The Challenge: As a startup, Gradient needed to process massive amounts of data in very short periods to support its SaaS tool ranking brand performance on Amazon’s retail platform.  

“Eventually, your application requirements will get to a stage where you require a higher level of infrastructure that offers greater scale, elasticity and processing speed.” 

The Journey: 

  • As a cloud-native company, Gradient started its journey without legacy infrastructure, allowing it to select the cloud provider or providers best able to meet its needs.
  • Although Gradient recognized the strengths of AWS, potential data hosting conflicts with its retailer customers rendered it impractical for its needs.
  • The company built its technology stack on Google Cloud to take advantage of its exceptional data collection and processing capabilities.  
  • Gradient also wanted to benefit from Google Cloud’s user-friendly interface and open-source services like Kubernetes.
  • Today, Gradient uses Google Cloud to power and optimize its SaaS dashboard for a fast-growing customer base.  

Next Steps:  

  • As a small but growing company in the cloud, Gradient still struggles with resource constraints and the challenges of accessing Google Cloud-specialized skills.
  • They also have some trouble tracking, managing and optimizing their cloud spend as their offering goes through a period of rapid growth.  

“These services are game-changers for any organization that wants to process terabytes and petabytes of data.”

Read the full conversation 

What’s Next for Your Cloud Journey?

The cloud journey is not always pleasant or a complete success at first.

We’ve covered three real-life journeys that led to successful cloud transitions. However, no cloud transition is ever fully complete. Working with a strategic managed services partner like Softchoice will help you:  

  • Achieve the right mix of cloud services to meet your business needs 
  • Take the risk out of cloud adoption and migration 
  • Optimize your cloud spending across multiple providers 
  • Balance product and service innovation with proper cloud governance  
  • Upskill your team on every aspect of the cloud 

Learn more about how we can help by exploring Softchoice Cloud Services. 

Planning to migrate one or more workloads to the public cloud? First, check out this Forrester report, 10 Facts Tech Leaders Should Know About Cloud Migration. 

 [1] RightScale 2019 State of the Cloud Report from Flexera

 

Legacy vs. Cloud Environments: How to Determine Workload Placement

Imagine buying a home. Soon after you purchase it, you discover a hidden structural flaw – termite damage, black mold or crumbling clay-tile plumbing – that requires a very expensive fix. In that case, it may have been better not to purchase the home at all and instead find a place with fewer issues.

 Similar situations can occur with cloud workload placement. In 2019, IDC noted the continued trend of reverse cloud migrations, which involve organizations moving workloads that had previously been placed in the cloud back into on-premises environments, often at great expense.  

IDC also estimated that by 2020, 75% of enterprises would use a private cloud. The reasons include security and compliance, anticipated cost savings that failed to materialize and growing interest in hyper-converged infrastructure. Performing due diligence before a workload goes into the cloud is the best way to avoid the technical complications and potentially high costs of making such adjustments after the fact.

 In this article, we’ll look at what steps to consider when deciding where to place a workload, so that you can make an informed choice that will provide the right combination of cloud performance and cost-effectiveness. Let’s go through them one by one, starting with what you need to discover during an initial hybrid IT assessment. 

What are the performance characteristics of the workloads in question?

By measuring each workload’s usage of CPU, memory, and networks, it’s possible to get a general sense of whether it would be a good candidate for the cloud. 

For example, a highly variable workload such as an intermittently busy e-commerce site or student enrollment system would be ideal for public cloud placement. Because it doesn’t require a steady stream of IT resources, there’s no real reason to purchase and provide them in-house; you would be investing in copious amounts of memory and numerous CPUs that would only occasionally be needed at full capacity. Apps with unpredictable demand are also good cloud candidates for the same reasons.

 In contrast, an application that sees constant demand will usually be better suited to legacy/on-prem deployment. Moving it to the cloud could result in major sticker shock, as its level of activity will accrue significant charges for all of the necessary public cloud services, including snapshots and bandwidth. 
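
One simple way to quantify this is to score each workload’s demand variability from its historical utilization samples. The Python sketch below uses the coefficient of variation of hourly CPU readings as a rough signal: a spiky profile points toward the public cloud, a flat one toward keeping the workload on-premises. The 0.5 cutoff and the sample data are illustrative assumptions, not a rule.

```python
# Sketch: score a workload's demand variability from hourly CPU samples.
# A highly variable profile suggests a good public cloud candidate; a steady
# profile suggests it may be cheaper to keep on-premises. The 0.5 cutoff is
# an illustrative assumption, not a rule.
from statistics import mean, stdev


def variability_score(cpu_samples: list[float]) -> float:
    """Coefficient of variation (stdev / mean) of CPU utilization samples."""
    avg = mean(cpu_samples)
    return stdev(cpu_samples) / avg if avg else 0.0


def placement_hint(cpu_samples: list[float]) -> str:
    score = variability_score(cpu_samples)
    if score > 0.5:
        return f"variable demand (CV={score:.2f}) - strong public cloud candidate"
    return f"steady demand (CV={score:.2f}) - likely cheaper on-premises"


# Example: an intermittently busy enrollment system vs. a constantly loaded database
print(placement_hint([5, 4, 6, 90, 85, 7, 5, 80, 6, 4]))          # spiky
print(placement_hint([70, 72, 68, 71, 69, 73, 70, 71, 72, 70]))   # steady
```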

Does the workload need any modifications before it’s placed in the cloud?

Most of the time, the answer to this question is “yes.” We estimate that at least 80% of workloads require some adjustments before they’re cloud-ready. Sending a legacy workload as-is into a public cloud is typically a recipe for subpar performance and potentially a reverse migration. 

Porting an on-prem workload into a cloud-optimized version is a multi-step process, often requiring: 

  • Applying relevant patches and security upgrades. 
  • Assessing all of its security and compliance obligations, e.g. HIPAA and GDPR. 
  • Identifying the workload’s dependencies on specific pieces of infrastructure (a first-pass sketch follows this list). 
  • Updating or rewriting it for a different OS or framework. 
  • Placing it into a container for increased portability. 
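
As a small illustration of the dependency-identification step, the following Python sketch does a first-pass scan of application configuration files for hard-coded IP addresses, one common sign that a workload is tied to specific pieces of infrastructure. The directory name and file extensions are hypothetical, and a real assessment would also look for DNS names, connection strings and mounted paths.

```python
# Sketch: a first-pass scan for hard-coded IP addresses in application config
# files, one quick way to surface dependencies on specific infrastructure.
# The directory and file extensions are hypothetical; a real assessment would
# also cover DNS names, connection strings and mounted paths.
import re
from pathlib import Path

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")


def find_hardcoded_ips(config_dir: str, extensions=(".conf", ".ini", ".yaml", ".yml", ".properties")):
    hits = []
    for path in Path(config_dir).rglob("*"):
        if not path.is_file() or path.suffix not in extensions:
            continue
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for ip in IP_PATTERN.findall(line):
                hits.append((str(path), line_no, ip))
    return hits


if __name__ == "__main__":
    for file_name, line_no, ip in find_hardcoded_ips("./app-config"):
        print(f"{file_name}:{line_no}: hard-coded address {ip}")
```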

It’s also possible that after performing deeper analysis using versioning tools, you’ll discover that a workload isn’t suitable for a public cloud environment. This could be because of its performance characteristics, or because the cloud environment lacks support for its requirements. For example, porting an application to the cloud may not ensure it passes a regulatory audit, which could require you to know the exact locations of application data and to prevent any unauthorized access. 

 If a workload isn’t cloud-ready or even cloud-suitable, it should be left in place and perhaps revisited later on to see if the situation has changed or if it could be replaced by SaaS.

Would a hybrid cloud that extends the current environment make sense for the workload?

Using multiple clouds is fast becoming the norm. In 2019, RightScale found that most enterprises had a multicloud strategy in place and that the share of businesses combining public and private cloud deployments rose from 51% in 2018 to 58% in 2019. 

 A hybrid cloud can sometimes provide the right balance of control and performance for workloads that are being migrated from a legacy environment. Solutions such as VMware on AWS allow current data center processes and tools to be copied over into a cloud environment.  

Moving to this type of hybrid platform requires no sweeping changes. At the same time, it opens the door to additional services for security and disaster recovery. A hybrid cloud ultimately allows for more streamlined data center operations in support of key workflows such as dev/test. 

What is the business impact of migrating this workload to the cloud? 

Without getting into all of the technical details discussed above, a workload’s suitability for the cloud can be evaluated based on how its migration would affect the organization as a whole. For instance, how would it change the day-to-day experience of its end users? 

 Keeping a workload in a single main data center might actually degrade its performance, due to it being centralized in a location that’s physically quite far from some branch sites. Also, single-site deployments are at higher risk from disasters, since all the eggs are in the same technical basket.   

 Consider what it would be like to have the bulk of your employees or customers in Seattle while your data center was in New York City. That distance would materially affect workload performance, plus any failure would immediately put you in a bind. The rise of the Internet of Things and the corresponding need to backhaul more data with less latency shows how difficult it will be to maintain certain data center setups going forward. 

Shifting more of these sorts of workloads into the cloud might provide some much-needed redundancy and greater geographical distribution. However, other physical limitations could come into play. A workload that runs on system memory and flash storage in a data center might theoretically be movable to the public cloud, but it wouldn’t perform exactly the same way once there. 

How to make the best decision about workload placement 

We’ve covered some of the biggest considerations for placing workloads, but there are others that will inevitably enter the picture. Looking to learn more about the challenges of cloud migration? Check out this Forrester report, 10 Facts Tech Leaders Should Know About Cloud Migration.

Softchoice can help you navigate them en route to making the best possible choices for your environment. Learn more by contacting our team or explore Softchoice cloud services.