Where to Find Savings in Your Cloud or Data Center Environment

Part 1 of our 2-part series on Driving Efficiency through Infrastructure Optimization. Read Part 2, “How to Add to Your IT Environment without Adding Costs.” 

For IT departments, the mandate to do more with less and get the most out of technology investments isn’t new. But today there’s much more pressure to find and seize immediate opportunities to cut costs.

In addition to rationalizing software and restructuring contracts, on-premises data center and public cloud infrastructure are two high-impact areas for potential short-term savings.

There are some common challenges, however. In the cloud, a lack of visibility and formal governance practices makes it harder to see where savings can be found. In the data center, the need to avoid new capital expenditures makes it necessary to free up existing capacity to support new projects.

In fact, cloud-consuming organizations waste an average of 30% of their cloud spend on redundant resources (Source: RightScale). At the same time, inactive data accounts for 50% of total storage capacity, taking up valuable space (Source: NetApp).

The best options for making short-term financial impact in the infrastructure environment are:

  • Reducing cloud costs by improving management and visibility
  • Freeing up data center compute and storage capacity to avoid future costs

Below, we go deeper into each of these cost saving opportunities.

Reducing Cloud Costs through Improved Management and Visibility

Without careful management, public cloud infrastructure costs can get out of control.

Because organizations can procure and consume public cloud resources far more easily than their on-premises counterparts, losing track of workloads and associated spend is a common problem.

Redundant resources, the absence of adequate monitoring tools and lack of control over who initiates or decommissions workloads in the cloud all contribute to over-spend.

The Flexera State of the Cloud Report for 2020 found that 79% of those surveyed cited managing costs as a top cloud challenge, second only to security. The report also found that enterprise companies overspent their cloud budgets by an average of 23% in 2019.

To right-size public cloud infrastructure and drive cost efficiency, consider the following actions:

  • Find and remove overprovisioned or idle resources: Identifying and reviewing accounts with low I/O activity helps you determine which resources could be decommissioned with minimal impact to the business.
  • Implement and enforce formal cloud governance: A formal cloud governance policy helps you better understand the structure of cloud costs, establish accountability and control access and decision-making around cloud resources.
  • Adopt a cloud management platform: A cloud management platform helps enhance visibility into your public cloud environment to promote better forecasting for cloud budgets based on real-time usage. Further categorizing cloud instances by assigning metadata tags related to billing, environments, applicable compliance requirements and more allows IT teams to track usage and associated cost across cloud instances, even in a hybrid or multicloud environment. IT can then augment and automate tagging using cloud native tools for policy enforcement. Together, these ensure that utilization meets requirements while reducing financial risk.
  • Optimize cloud storage: As with on-premises infrastructure, automating the categorization and storage of active and inactive data into performance and capacity tiers in the cloud helps drive further efficiency.
  • Implement automated scaling: Putting automated scaling in place allows you to scale up resources when needed and scale down the rest of the time. This replaces the need to accommodate maximum utilization, which is often a needless expense.
  • Use reserved versus on-demand instances: The leading public cloud providers offer discounts to customers for reserving instances for anticipated future needs in advance rather than paying higher rates for on-demand usage.
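The first action above, finding idle resources, amounts to filtering utilization data against a threshold. Here is a minimal sketch of that filter; the instance records, field names and thresholds are illustrative assumptions, not a specific provider's API:

```python
# Sketch: flag candidate idle cloud resources from utilization summaries.
# The instance data and thresholds below are illustrative assumptions.

def find_idle_candidates(instances, cpu_threshold=5.0, iops_threshold=10):
    """Return IDs of instances whose average CPU (%) and I/O (ops/s) both
    fall below the given thresholds, making them decommission candidates."""
    return [
        inst["id"]
        for inst in instances
        if inst["avg_cpu_pct"] < cpu_threshold
        and inst["avg_iops"] < iops_threshold
    ]

# Example utilization summaries, e.g. exported from a monitoring tool.
fleet = [
    {"id": "web-01", "avg_cpu_pct": 42.0, "avg_iops": 310},
    {"id": "batch-07", "avg_cpu_pct": 1.2, "avg_iops": 3},
    {"id": "test-03", "avg_cpu_pct": 0.4, "avg_iops": 0},
]

print(find_idle_candidates(fleet))  # → ['batch-07', 'test-03']
```

In practice the thresholds and review period would be tuned per workload, and flagged resources reviewed with their owners before decommissioning.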

Looking to learn more about managing in the cloud? Get the guide

Freeing Up Data Center Resources to Avoid Costs

Compared with adding new usage-based public cloud resources, the cost to continue operating an owned data center is often negligible. However, when capacity isn’t optimized for efficiency, the result is additional capital expenditures when the time comes to support new applications or projects.

For instance, many organizations over-provision data center hardware to avoid the problem of running short of capacity within their virtualized infrastructure. Meanwhile, inactive data stored on-premises takes up valuable storage resources that could be tapped for other initiatives.

To free up on-premises infrastructure and avoid unnecessary future spend, we recommend these steps:

  • Optimize virtual machine resources: Optimizing workload placements and right-sizing VM allocations reduces risk and capacity waste by reclaiming resources from over-sized virtual machines (VMs). At the same time, increasing VM density by rebalancing VMs helps safely meet workload requirements and avoid resource contention.
  • Optimize on-premises storage: While not a direct cost reduction, optimizing on-premises storage allows you to extend the life of existing storage and defer capital costs. Tiering storage to the cloud automates the categorization of active and inactive data. By moving inactive data to a lower-cost cloud storage provider, you can free up on-premises capacity for new projects and pay for additional storage at a lower monthly rate.
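The tiering step above boils down to partitioning files by how recently they were accessed. A minimal sketch, where the file records, paths and 90-day cutoff are all illustrative assumptions:

```python
# Sketch: classify files for cloud tiering by last-access age.
# The cutoff and file records below are illustrative assumptions.

INACTIVE_DAYS = 90  # files untouched this long become tiering candidates

def split_by_activity(files, cutoff_days=INACTIVE_DAYS):
    """Partition (path, days_since_access) records into active files to
    keep on-premises and inactive files to tier to cloud storage."""
    active = [path for path, age in files if age <= cutoff_days]
    inactive = [path for path, age in files if age > cutoff_days]
    return active, inactive

records = [
    ("/data/reports/q1.xlsx", 12),
    ("/data/archive/2017_logs.tar", 800),
    ("/data/projects/design.cad", 45),
]

active, to_tier = split_by_activity(records)
print(to_tier)  # → ['/data/archive/2017_logs.tar']
```

A real tiering tool would read access times from the filesystem or storage array and move qualifying data automatically; the cutoff is a policy decision, not a fixed rule.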

Next Steps to Finding Cost Savings in Your Environment

Finding short-term opportunities and immediate steps to reduce infrastructure spending may require the help of an experienced and specialized solutions provider like Softchoice.

We offer the following solutions to help organizations like yours find and take advantage of these savings opportunities.

  • Cloud Cost Assessment: Analyze your existing public cloud workloads to uncover immediate cost-savings opportunities and improve visibility into cloud cost drivers.
  • Data Center Technology Review: Pinpoint opportunities to optimize infrastructure with the goal of freeing up existing capacity to offset future capital expenses. The review targets server, storage, virtualization, hybrid cloud, backup and file systems.
  • Cloud Data Tiering Accelerator: Identify inactive data stored on-premises that could be moved to lower-cost public cloud storage to free up on-premises capacity.

Our team of licensing and technology vendor experts is ready to help you find efficiencies wherever you are in your journey from response to recovery.

Looking for help to find and address cost savings opportunities in your IT environment?

Connect with an Expert.

Is Your Network Ready to Support a Remote Workforce?

The recent surge in full-time remote workers is putting corporate networks under unusual stress.

More people than ever are connecting through virtual private networks (VPNs), taking frequent video calls or meetings and accessing business applications from outside the office.

Without a LAN/WAN infrastructure designed and optimized for the new all-remote workforce, poor connectivity and degraded performance may be frustrating end users. Over time, these issues could prevent people from being their most productive while working from home full-time.

The keys to improving network performance while supporting remote work lie in alleviating network traffic, better supporting bandwidth-intensive applications and routing traffic intelligently. An assessment-led approach will help you map the traffic patterns in your current networking infrastructure and identify the main areas for improvement.

Below, we’ll look at the three key questions you need to ask to pinpoint problems and remove the barriers to network readiness for a remote workforce.

#1 Are you using best practices for VPN?

A sudden increase in the volume of connections can overwhelm a VPN infrastructure designed to support a limited remote workforce. In some cases, the surge in volume strains VPN concentrators at the edge of the network while in others, the number of VPN circuits isn’t enough to support a much higher-than-usual number of users.

As such, the response to COVID-19 has put many IT departments under pressure to scale their VPN implementations in days or weeks. Consider the following advice to ensure your VPN solution is ready to alleviate the traffic resulting from a massive spike in volume.

  • Upgrading VPN bandwidth: Remember, users expect the same connection speed from a corporate VPN as they have in the office. You may need to upgrade your VPN solution to handle bandwidth usage from a much higher volume of users.
  • Stress testing for stability: The ability to handle 24-hour connectivity requirements is a must for many organizations, especially those supporting essential services. Ensuring your VPN implementation is stable at all hours is critical.
  • Strong encryption and authentication: More users than usual will be connecting over unsecured public internet connections. It’s important to verify that traffic to and from the corporate network is safe. To this end, consider implementing multi-factor or other advanced authentication methods.
  • Cost-efficient licensing: As cost considerations become more important during this period, making sure you can afford to scale your VPN solution to accommodate the entire workforce is a primary concern. Ensure your VPN solution provider will support a cost-effective scale up in user and device counts.

#2 Are you doing everything you can to support bandwidth-intensive applications?

Working from home full-time has prompted a dramatic rise in the number of people participating in video calls and meetings. Meanwhile, users accustomed to using CPU or GPU-intensive applications in the office may need to do so remotely through virtual desktops.

This increase in bandwidth-intensive traffic puts a lot of strain on LAN/WAN infrastructure, leading to degraded performance and user experience.

The first step to better supporting these critical yet bandwidth-intensive applications is to assess the increase in traffic volume across a few categories: voice calls, real-time interactive video, streaming video (such as training content), collaborative applications (such as in-document collaboration tools), and bulk file transfers.

Next, it’s important to consider possible network stress points and remedies, including:

  • Traffic routing and internet access: You may need to consider rerouting network traffic to optimize performance while most or all users are connecting from outside the office. Routers, firewalls and other networking equipment may also need to be reconfigured to carry ingress and egress traffic.
  • Strain on the network edge: A surge in connections will likely strain VPN concentrators on the network edge. Virtualized solutions may be your best option to scale quickly.
  • Conference and video call limitations: Higher demand for video and conference calls may push the physical limits of equipment meant to support these calls in-office. In this case, cloud-hosted solutions may help alleviate connection problems.
  • Advanced virtual desktop requirements: You may need to support virtual desktops for “power user” profiles with CPU/GPU-intensive workflows like CAD drafting or high motion video. Here, cloud-hosted VDI is a fast, cost-efficient option for scaling remote access.
  • Remote phone issues: Over longer paths, remote or “soft” phones may be subject to packet loss or latency issues. Consider diagnostic or testing tools to identify connectivity problems.
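The packet loss and latency checks mentioned in the last item can be sketched from raw round-trip-time probes. The sample values are illustrative, and the jitter figure here is a simple mean of consecutive differences rather than any particular standard's formula:

```python
# Sketch: compute packet loss and jitter from RTT probe samples, the
# kind of check a soft-phone diagnostic might run. Sample values are
# illustrative; None marks a lost probe.

def link_quality(rtts_ms):
    """Return (loss_pct, avg_rtt_ms, jitter_ms) for a list of RTT
    samples where None indicates a lost packet."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    avg = sum(received) / len(received)
    # Jitter as the mean absolute difference between consecutive samples.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return loss_pct, avg, jitter

samples = [42.0, 45.0, None, 44.0, 120.0, 43.0, None, 41.0]
loss, avg, jitter = link_quality(samples)
print(f"loss={loss:.1f}% avg={avg:.1f}ms jitter={jitter:.1f}ms")
```

As a rough rule of thumb, sustained loss above a few percent or jitter above ~30 ms tends to make voice quality noticeably worse, which is when rerouting or QoS changes are worth investigating.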

Other considerations outside the corporate IT environment may also have a hand in degrading user experience as they attempt to connect. These include:

  • Home networking equipment: The networking equipment people have at home is often less advanced than its corporate counterparts. At the same time, interference and bandwidth competition from inside the home (especially from streaming video) may be degrading connectivity.
  • Public ISP congestion: Past increases in the number of remote workers have tended to cause congestion in public ISP exchanges, especially in areas with lower public network quality. With a historic surge, many people may be experiencing added difficulty.

#3 Could SD-WAN help you improve support for critical applications and locations with intelligent traffic routing?

The shift to an all-remote workforce will cause significant changes in the way traffic flows in and out of the corporate network. Meanwhile, most legacy WAN infrastructure was designed on the assumption that employees would connect from a core office environment.

Modernizing the network by adopting SD-WAN could yield benefits, including:

  • Software-driven management and monitoring: With SD-WAN, monitoring and management happen in the cloud while traffic passes through the LAN/WAN infrastructure. This allows the network to remain secure without relying on continuous cloud connectivity.
  • Intelligent traffic routing: The leading SD-WAN vendors offer solutions with application-aware connectivity, which supports segmentation of traffic by differentiating high-priority workloads, such as productivity or collaboration tools, from typical internet usage.
  • Improved quality of experience (QoE): Intelligent routing and more predictable performance in turn support better user experience for end users along with centralized, streamlined administration for IT teams.
  • Cost efficiency: SD-WAN also eliminates the need to back-haul traffic to the data center over MPLS links, a significant cause of performance degradation, especially for cloud-based SaaS applications. As MPLS links are traditionally expensive to operate, the move to SD-WAN also has the potential to drive further cost savings in the long term.
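The cost-efficiency point above is easy to sanity-check with back-of-the-envelope arithmetic comparing per-site MPLS spend against broadband plus an SD-WAN license. All figures below are illustrative assumptions, not quoted rates:

```python
# Sketch: compare monthly connectivity spend before and after replacing
# MPLS back-haul with SD-WAN over broadband. All prices are
# illustrative assumptions.

def monthly_savings(sites, mpls_per_site, broadband_per_site, sdwan_license):
    """Difference between all-MPLS spend and broadband + SD-WAN spend."""
    before = sites * mpls_per_site
    after = sites * (broadband_per_site + sdwan_license)
    return before - after

# e.g. 25 branch sites, $900/mo MPLS vs $200/mo broadband + $150 license
print(monthly_savings(25, 900, 200, 150))  # → 13750
```

Real business cases also weigh migration costs and any MPLS circuits retained for latency-sensitive traffic, so the headline figure is a starting point rather than a forecast.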

Where to Go Next

Most corporate networks were not designed to support a sudden shift to all-remote work.

The related performance issues could be slowing productivity as calls and meetings drop, critical files fail to transfer, or users are unable to connect. Solving these issues may be critical to business continuity. The first step is to assess your current environment to pinpoint problem areas and put the necessary solutions in place.

No matter where your organization is in its response to the global pandemic, our team of experts is ready to help you identify and resolve network performance problems and in turn enable your employees for productive work from any location.

Looking for help to address network performance issues?

Watch our virtual workshop “Performance Meets Demand: Is Your Network Ready to Support a Remote Workforce?” on-demand. Or explore Softchoice Business Continuity solutions.

4 Ways to Improve Data Security in 2020

The stakes surrounding data security and risk mitigation rise with each passing year. Data breach costs continue to increase and potential threats grow more sophisticated. 

According to IBM, the average total cost of a data breach – after accounting for remediation, reputational damage and regulatory issues – has reached $3.92 million. While smaller organizations may not face expenses that high, addressing an incident could cost tens of thousands of dollars or more.

Security issues can also jeopardize the transition of workloads into the cloud. This prevents organizations from taking advantage of this technology and making progress toward full-scale digital transformation.

Organizations should keep data security a high priority in 2020 and use every opportunity to improve their security posture and safeguard databases, systems, applications, networks and other assets. Backup-as-a-Service solutions, along with more intensive security assessments, personnel training and advanced analytics tools, can play a pivotal role in those efforts.

In the article below, we’ll explore four options for boosting data security capabilities and preventing data breaches in the coming year.

1. Perform regular review and testing of controls

To stay secure, every organization needs a well-defined organizational structure for managing data security needs. Having a comprehensive security governance strategy in place removes confusion and ambiguity regarding security responsibilities. 

For that strategy to work, it requires regular updates to address shifting security requirements, emerging threats and changing best practices. It should be well-maintained between tests to ensure the organization is doing everything possible to prevent or mitigate a data breach.

Getting the best results from a security strategy also requires consistent testing to ensure everything is in proper working order and every contingency is covered. To that end, testing security controls should be a key priority. Access management is one of the most important components of modern cybersecurity: compartmentalizing platforms and databases helps prevent unauthorized access to sensitive data and systems.
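The compartmentalization idea above can be illustrated with a minimal role-based access check, where access is denied unless a role explicitly includes a resource. The roles and resources here are hypothetical examples:

```python
# Sketch: a minimal role-based access check illustrating compartmentalized
# access to systems. Roles and resources are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst":   {"reporting_db"},
    "dba":       {"reporting_db", "customer_db"},
    "developer": {"app_repo"},
}

def can_access(role, resource):
    """Grant access only when the role explicitly includes the resource
    (deny by default for unknown roles or resources)."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "customer_db"))  # → False
print(can_access("dba", "customer_db"))      # → True
```

Production access management layers authentication, auditing and periodic entitlement reviews on top of this deny-by-default core.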

Revisiting this governance strategy also creates accountability around both security and workload management. A lack of accountability in these areas is a dangerous financial and security liability. If internal stakeholders don’t understand who’s responsible for data security controls and remediation efforts, organizations may be too slow to respond to a breach and minimize its impact.

2. Conduct security training for all key stakeholders

In the world of data security, your employees can either be a major asset or a huge liability. When staff members understand the malware and security threats facing the organization and know how to distinguish between legitimate and malicious activity, the business is in a far better position to prevent bad actors from penetrating their defenses. 

On the other hand, employees who are unfamiliar with security best practices and common cybercrime strategies put their own organizations at risk. Their accounts make easy targets for gaining unauthorized access to sensitive data and applications.

With that in mind, regular and in-depth security training is an essential component of a robust security posture. As employees undergo such training, they begin to understand how an attacker might try to manipulate them. From here, they can recognize potential attacks and respond as necessary. 

Data security has often focused on external threats. But an organization looking to protect its data needs to pay just as much attention – if not more – to breaches that start from the inside. A 2019 survey of more than 1,000 information security leaders revealed that 69% of respondents reported data breaches stemming from an insider threat.

3. Monitor for internal threats – malicious or otherwise

Not all insider threats are malicious. Many of these result from ignorance regarding proper security measures. Poor security hygiene can be a systemic issue that includes everyone from ground-level employees to C-level executives. That same report found that 78% of CSOs and 65% of CEOs had clicked on suspicious links in the past. Moreover, 43% of business leaders use their personal email accounts to share documents and communicate with their colleagues. 

It should be obvious that this behavior presents major security risks. For instance, people often use the same login credentials for various personal accounts. If one is compromised, the rest will be at risk. By using their personal email for business purposes, employees widen the organization’s threat exposure.

Training for all employees will help create a company culture that values data security best practices. Routine training ensures people adhere to them at every level of the organization.

4. Build in artificial intelligence-based security protection

Data security best practices have shifted from relying on perimeter-focused efforts to crafting strategies around threat remediation and incident response. It’s unrealistic to expect security mechanisms to block every threat and intrusion. Businesses need to prepare for worst-case scenarios. That entails detecting malicious activity after it’s breached perimeter defenses.

Organizations should monitor their networks for any anomalous behavior that could indicate the presence of a bad actor. The next step is to analyze the available data to spot trends that indicate network or security flaws.

Accurate detection of malicious activity requires constant visibility combined with sophisticated analytics. Organizations can augment their monitoring and threat detection capabilities with the help of artificial intelligence-based security protection. 

AI solutions can analyze far more data, with a finer level of precision, than any human operator could hope to match, identifying even the most subtle indications of anomalous behavior. This enables organizations to address cyber threats before they have an opportunity to cause lasting damage. AI-based security tools can also update threat signatures in real time, helping businesses keep up with cybercriminal activity and the rapid release of new malware strains.
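As a toy illustration of the anomaly detection described above, a simple z-score check over a traffic series captures the core idea; real AI-based tools baseline far more signals and adapt over time. The traffic figures and threshold are illustrative assumptions:

```python
# Sketch: flag anomalous network activity with a simple z-score check,
# a toy stand-in for the statistical baselining an AI-based monitoring
# tool performs at scale. Traffic figures and threshold are illustrative.

from statistics import mean, stdev

def anomalies(samples, threshold=2.5):
    """Return (index, value) pairs that lie more than `threshold`
    standard deviations from the mean of the series."""
    mu, sigma = mean(samples), stdev(samples)
    return [
        (i, v) for i, v in enumerate(samples)
        if abs(v - mu) / sigma > threshold
    ]

# Hourly outbound traffic in GB; one hour shows a suspicious spike.
traffic = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 9.5, 1.1, 0.9, 1.0]
print(anomalies(traffic))  # flags the 9.5 GB spike at hour 6
```

A static z-score over a tiny window is far cruder than commercial tooling, but it shows why continuous visibility matters: without a baseline, there is no “anomalous” to detect.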

Build up IT resilience to weather the data security storm

Given the high cost of a data breach, businesses need to make a concerted effort to upgrade their security strategies in this coming year. New threats will continue to emerge and exploit lingering vulnerabilities. Having the support of an expert MSP that constantly monitors your network and adheres to the latest security best practices will significantly reduce the risk of a costly data breach.

To learn how to introduce scalable and reliable data backup solutions into your digital transformation strategy, download our guide “6 Practices for Better IT Resiliency Planning”.

Check our previous articles in this series, “3 Ways Your Infrastructure is Preventing IT Resilience” and “Is Your Risk Mitigation Strategy Resilient Enough?“.

Protect your critical data and applications with our turnkey Backup as a Service solution. Reinforced by our deep understanding of data center and network technologies and enterprise-grade managed services, this offering helps you resolve issues faster and free up IT resources to refocus on business transformation.