 Virtualize first: The #1 rule for new IT applications.

There are a ton of things you can’t completely predict when it comes to the growth, responsiveness and success of your IT infrastructure. But one thing you can bet on, if you’re going to keep pace with change and become or remain a highly efficient and competitive organization, is the need for new applications. Whether it’s an enterprise resource planning tool, a purchasing tool, an HR tool or anything in between, the number of applications in your data center will continue to grow.

 And no matter what tool, service or application you integrate into your IT infrastructure, one rule should trump nearly everything else: can it be virtualized? Because, as I’ve mentioned, if you’ve adopted a one-server-per-application model, you’re wasting a lot of resources that most organizations these days really can’t afford to waste. You’re stepping backwards in terms of efficiency and the long-term return on your capital expenses.

 So how do you do it? How do you make a ‘virtualize first’ rule work when you’re introducing a new application or service to your IT environment? Start by recognizing that a new application rarely lives on a single physical machine: it typically needs elements like a web server, an application server and a database server. Under a one-server-per-application model, that means deploying three, four or five servers for one application, and the hardware and management costs escalate accordingly. But if you’ve adopted a ‘virtualize first’ rule, you might run three of those components on one server and the rest on a second, then build a high-availability model around your hypervisor to maximize how both servers are used. The result can be significant cost savings.

 But that’s not even the full story, because it’s not simply about saving on the cost of additional servers. Keep in mind that every server brings multiple network connections, multiple switches and multiple wires – a whole rat’s nest of parts. So if virtualization lets you reduce the number of servers, you also save substantially on all that connective tissue.
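To make the math above concrete, here’s a rough back-of-the-envelope sketch. All of the dollar figures, port counts and server counts below are illustrative assumptions for the sake of the example, not benchmarks or real pricing:

```python
# Illustrative comparison: one server per application component
# versus a 'virtualize first' two-host high-availability pair.
# Every number here is an assumed round figure, not real pricing.

SERVER_COST = 5_000      # assumed cost per physical server
PORTS_PER_SERVER = 4     # assumed NICs / switch ports per box
PORT_COST = 150          # assumed cost per switch port plus cabling

def infrastructure_cost(servers: int) -> int:
    """Hardware plus the network 'connective tissue' for a server count."""
    return servers * (SERVER_COST + PORTS_PER_SERVER * PORT_COST)

# One server per component: web + app + database tiers, one box each.
physical = infrastructure_cost(servers=3)

# Virtualize first: the same three tiers run as VMs across a
# two-host pair, where either host can absorb the other's VMs.
virtualized = infrastructure_cost(servers=2)

print(f"physical:    ${physical}")
print(f"virtualized: ${virtualized}")
print(f"savings:     ${physical - virtualized}")
```

Note that the savings scale with every server you avoid, because each one drags its share of switch ports and cabling along with it.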

 With virtualization, it’s a little like going back to the mainframe perspective – one large machine running a lot of applications. But here the one large machine is composed of smaller ones: one big logical unit, a virtualization organism, if you will. As with any organism, you have individual components each working on what they’re responsible for – a nucleus, a mitochondrion, a Golgi apparatus, if you remember your high school biology! – each drawing on the others and on a dynamic pool of resources for support where and when it’s needed. It’s worked pretty efficiently in nature for billions of years, and you might say the IT world is finally catching up.

About Steve McDonald

Steve leads the Server Platform and Virtualization practice at Softchoice where he is responsible for ensuring that Softchoice is delivering the full value of emerging data center and desktop virtualization trends to our customers.