3 Best Practices for Reducing and Governing Cloud Spend

The original version of this post was contributed by Rachel Dines, Director of Product Marketing at CloudHealth Technologies, and featured on the CloudHealth Tech Blog.

Amazon Web Services (AWS) started what I like to refer to as the “Cloud Gold Rush” in 2006. They began offering storage at a monthly rate of 15 cents per GB and compute at an hourly rate of 10 cents. This caught the attention of thousands of organizations looking to decrease IT costs. Although prices have dropped drastically over the years – AWS claims over 50 price drops – many organizations discovered that migrating to the public cloud didn’t necessarily result in the expected cost savings. In fact, many companies found their public cloud bills were two to three times higher than expected.

This does not mean migrating to the public cloud is a mistake. There are many benefits that come with utilizing a public cloud infrastructure, such as simplified operations, responsiveness, agility, improved security, and greater innovation. The mistake is assuming that moving to the public cloud without implementing any sort of governance, management, or automation will result in cost savings. It overlooks the fact that people and process have a larger impact on solving problems than technology alone. The speed, flexibility, elasticity, and utility-based cost of the cloud require the capability to consistently optimize and govern environments, and to track and manage changes that occur as you scale.

Have you taken a look at your AWS bill lately and wondered, “How can I get this back in check?” Here are three ways leading cloud innovators maintain control over their cloud spend.

Utilize Reserved Instances (RIs) – and make sure they’re optimized

If you’re not already taking advantage of AWS RIs, you’re missing a big opportunity to save money. Perhaps you are overwhelmed by the range of choices (there are a lot!), or can’t find time for the analysis (who has free time?). RIs have the potential to save you up to 75% compared to On-Demand pricing, making them an obvious choice for organizations with continuous EC2 or RDS usage. For a quick overview of RIs and to familiarize yourself with their benefits, I suggest reading this eBook.
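
If you want a rough sense of the math, here is a quick back-of-the-envelope sketch in Python. The hourly rates below are illustrative placeholders, not current AWS pricing, and the comparison only holds for a workload that runs continuously, since an RI bills for every hour of its term whether you use it or not:

```python
# Back-of-the-envelope RI vs. On-Demand comparison.
# The rates below are illustrative placeholders, NOT current AWS pricing.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10      # $/hour for a hypothetical instance type
ri_effective_rate = 0.025  # effective $/hour for a hypothetical long-term RI

on_demand_annual = on_demand_rate * HOURS_PER_YEAR
ri_annual = ri_effective_rate * HOURS_PER_YEAR

# Note: an RI is billed for every hour in the term regardless of usage,
# so this savings figure assumes the workload runs around the clock.
savings_pct = (1 - ri_annual / on_demand_annual) * 100
print(f"On-Demand: ${on_demand_annual:,.0f}/yr, RI: ${ri_annual:,.0f}/yr "
      f"({savings_pct:.0f}% savings)")
```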

Rightsize your infrastructure consistently

Cloud workloads are naturally dynamic – the capability to scale workloads up and down on demand is a huge benefit of the public cloud. Workloads are also self-service: anyone who needs an instance can spin one up just by logging in. However, this flexibility comes with a pitfall: many companies’ compute instances tend to be highly underutilized, for two primary reasons. Either a workload is no longer as resource-intensive as it once was, or someone has over-provisioned the instance (whether by accident or on purpose). Without proper management, this drives up cost rapidly. Check out this blog post for the easiest way to understand rightsizing, and why it’s so important.
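
If you want a starting point for spotting rightsizing candidates yourself, here is a minimal sketch using boto3 and CloudWatch. It assumes AWS credentials and a region are already configured, and the 14-day window and 20% CPU threshold are illustrative choices, not a recommendation:

```python
# Sketch: flag running EC2 instances whose average CPU over the past
# 14 days is low, making them candidates for downsizing.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if not points:
            continue
        avg_cpu = sum(p["Average"] for p in points) / len(points)
        if avg_cpu < 20.0:           # illustrative rightsizing threshold
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% "
                  f"-> candidate for downsizing")
```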

Terminate zombie instances

Zombie instances are a pertinent, not to mention costly, issue. Just for kicks, go take a gander at your C-type EC2 instances – it is likely that anything below 5% CPU utilization is a zombie. It is also not uncommon to see thousands of dollars of unattached EBS volumes in AWS. These volumes are costing you money, even though they’re not being used for anything. A good rule of thumb is to delete any volume that has been unattached for two weeks, as it is likely a zombie.
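
Unattached volumes are easy to find programmatically. Here is a minimal boto3 sketch, again assuming configured credentials. One caveat: the EBS API here doesn’t record when a volume was detached, so volume age is used as a rough proxy for the two-week rule, and the actual delete call is left commented out because it is destructive:

```python
# Sketch: list unattached ("available") EBS volumes older than 14 days.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=14)

volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for vol in volumes:
    # CreateTime is when the volume was created, not when it was detached;
    # treat it as a rough proxy, since detach time isn't tracked here.
    if vol["CreateTime"] < cutoff:
        print(f"{vol['VolumeId']}: {vol['Size']} GiB, "
              f"created {vol['CreateTime']:%Y-%m-%d}")
        # To actually delete the volume (destructive!), uncomment:
        # ec2.delete_volume(VolumeId=vol["VolumeId"])
```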

Final thoughts

Since the only constant is change, you have to consistently search for ways to optimize your cloud infrastructure, adjust reservations, and terminate zombies, or else you risk leaving serious cost savings on the table. You need a way to get visibility into your cloud costs, drive accountability, and establish governance policies. This will set you up in the best position to control your cloud expenses and accurately forecast future costs.

Learn more about Softchoice Cloud Managed Services to see how you can manage your cloud spend and forecast future costs.

Making Hybrid IT a Reality in 2017

Traditional IT – while continuing to innovate and bring new technology to the business – cannot keep up with the rapid pace of change and innovation the business demands. As a result, organizations are searching beyond traditional IT for new ways to increase agility, improve time to market, and adopt more flexible consumption models that reduce the risk associated with large capital investments.

According to a recent Softchoice study of 1,500 end users, one in three has downloaded a cloud application without letting the IT department know, and the majority don’t understand the risks.

Keeping up with the cloud: Embracing a more dynamic data center

Increased adoption of virtualization over the past decade, along with a recent boost in cloud adoption, is causing data centers to evolve at an unprecedented pace.

“The drive to the cloud or hybrid IT changes the dynamic of the data center,” says Chris Martin, Senior Systems Engineer at Softchoice. “Where application data resides and where applications are hosted are very different today than five years ago.”

“And that’s changing traffic in the data center and now you need to ask, ‘How do I support that as an IT organization?’”

This blog is the second in a three-part Cisco ‘Ask an Expert’ series looking at how hybrid IT is influencing the network, the data center, and unified communications. We asked Martin to outline the issues and trends that are reshaping the face of data centers and how organizations can better support them today.

What are the trends impacting data centers today?

There’s a lot of consolidation. Virtualization has driven much of it, from multiple dedicated computers for each application down to just a single server running them all. And now even that is being stretched across the cloud, multiple data centers, or various storage nodes. The architecture has changed enormously over the past few years.

Technology trends like big data, analytics, and the Internet of Things (IoT) each have their own unique impact on the data center as well. As we connect more and more devices, we create more and more data, and that leads to increased storage requirements. For companies with IoT or big data initiatives, how the data center is designed and deployed is changing.

How can IT better keep up with this changing face of the data center and support it?

One big change is around the expectation of availability created by the cloud, and how fast you can turn IT resources up and down. To be cost effective, organizations need to turn their data centers into a private cloud, where they can treat applications similarly whether on or off premise. The way that happens is through automation, whether it’s software-defined networking (SDN) or the orchestration of VMware applications.

Basically, it comes down to needing to do more with less. You need to automate, streamline, and bring in orchestration technologies to help manage that. You also need architectures, vendors, and equipment that are very open in order to support that process.

It’s a long migration roadmap, but even as you take small steps towards it, you want to keep the end goal in sight. This is where we help a lot of customers by identifying where they are today and how they can move their networks into the required architecture, such as SDN. We then assist with implementation, and have the ability, through Keystone Managed Services, to support that architecture.

How can organizations better ready their data centers to take advantage of things like IoT and big data?

Don’t overlook the importance of the network. It’s always a critical part of the data center, even if traffic stays within it. It’s not just performance that is crucial, but also security.

Make sure the network is designed to support the different architectures involved and aligns with the technology initiatives being launched.

It’s like a town. Think of the network as the roads. Every town is different and will have its own road maps and requirements based on size and throughput. Like a town, you’re going to have standardization and things true of all data centers, but where and how they happen are different for each.

What key things need to be asked when evaluating data center solutions?

One of the most important questions you must ask yourself is: how do I manage this? What strain is this new piece of equipment going to put on my administrators?

Performance is always the number one thing to look at. The number two consideration is how easily the solution can be managed. Number three is understanding what support options exist. Number four is keeping the business initiatives in mind, knowing what is needed to complete them, and making sure you’re moving towards them.

Having a high degree of support from vendors and partners is important. With any solution, you need to consider how hard it will be to replace an administrator if you lose one.

Remember the network is key. You could have the best data center technology in the world, but if you don’t have the right technology connecting it, everyone is going to suffer. The network is the lifeblood of the data center.

Ask an Expert Webinar

Want to learn more about the role of the network in the changing face of the data center on Cisco infrastructure? Join Chris Martin for 30 minutes on January 24th by registering for our Ask an Expert series focused on the data center.