No Silver Bullets: The Case for Enhancing Office 365 Security

All solutions have their pros and cons, and Office 365 has more pros than one can count. For example, Exchange Online pulls Exchange Server into the cloud as a hosted, multi-tenant cousin of its on-premises self. Not having to worry about your email server’s hardware and upgrades is a dream come true for many. Office 365 also bundles a bevy of rich solutions, including SharePoint, Skype for Business, OneDrive, Teams, Office applications and more. In short, if you’re ready to migrate off premises, it’s the ultimate SaaS solution.

But what of the ‘cons’?

I’ve personally worked with on-premises Exchange Server for two decades, and I’ve never deployed Exchange in an enterprise environment and simply walked away. Instead, I’ve bolstered its security through the surrounding third-party ecosystem: security gateways, archiving, monitoring, availability, and backup/recovery solutions, and the list goes on. My team and I have always won praise from Microsoft for these valid and valued additions. We never considered it “anti-Exchange” to say we prefer to enhance it with other software, appliances, or cloud solutions. The same mindset holds true for Exchange Online.

So where are the gaps? What holes are we looking to plug with bolted-on, third-party solutions? We see many of the same gaps we have when Exchange runs on-premises: security, compliance/archiving, backup/recovery, monitoring, continuity and so forth. To be fair, Microsoft has more control over its own Exchange environment, and for that reason it has improved, and continues to improve, its security story in many of these areas. The question for you is whether any of these gaps is wide enough to warrant added budget for third-party help.

There is. Let me make my point using security as the prime example.

The defense-in-depth approach

Today, security is paramount. Ransomware, spear-phishing, and impersonation attacks have increased in both cadence and sophistication. Office 365 includes basic Exchange Online Protection (EOP) for free, which blocks known malicious email before it enters. Beyond that, Microsoft has a for-pay solution called Advanced Threat Protection (ATP), which offers a “safe attachments” capability and a static block list of “safe links.” But even together, EOP and ATP provide less protection than a third-party security gateway like Mimecast.

Only a “defense-in-depth” approach will protect you in this insidious and ever-evolving world. With this method, you locate points of weakness and bolster each of them: the end user, the endpoint, the DNS layer, the infrastructure (patching), and the gateway. Start with your weakest link: your people. Ensure your employees have the right training, and build in redundancies. Keep your infrastructure updated and guarded. And be certain to protect your email solution, because email is the easiest way into your organization for weaponized attachments and links.
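To make the layering concrete, here is a minimal sketch of how a gateway-style filter might stack independent checks. It is illustrative only: the blocklists, function names, and message format are hypothetical stand-ins, not any vendor’s actual API, and real gateways add threat-intelligence feeds, sandboxing, and URL rewriting on top.

    import re

    # Hypothetical layered checks; each returns True if the message should be blocked.
    BLOCKED_SENDERS = {"badguy@evil.example"}        # sender-reputation layer
    BLOCKED_EXTENSIONS = (".exe", ".js", ".scr")     # weaponized-attachment layer
    URL_BLOCKLIST = {"phish.example.com"}            # malicious-link layer

    def check_sender(msg):
        return msg["from"] in BLOCKED_SENDERS

    def check_attachments(msg):
        return any(name.lower().endswith(BLOCKED_EXTENSIONS)
                   for name in msg.get("attachments", []))

    def check_links(msg):
        hosts = re.findall(r"https?://([^/\s]+)", msg.get("body", ""))
        return any(host in URL_BLOCKLIST for host in hosts)

    def quarantine(msg):
        # A message is held if ANY layer flags it; each layer covers
        # weaknesses the others miss -- the essence of defense in depth.
        return any(check(msg) for check in (check_sender, check_attachments, check_links))

    msg = {"from": "alice@partner.example",
           "attachments": ["invoice.pdf"],
           "body": "Pay at http://phish.example.com/invoice"}
    print(quarantine(msg))  # True: the link layer catches what the others miss

Each real-world layer is, of course, far more sophisticated than this; the point is that no single check is ever trusted to catch everything.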

Filling in the gaps

Third-party services will help you make the most of Office 365, whether that means adding advanced archiving capabilities, improving eDiscovery, ensuring 100% uptime or, most critically, augmenting its security to enable a defense-in-depth approach. At this point, you should be somewhat curious to hear the rest of this story. Join me on Tuesday, September 19th at 2:00 pm EST as we dive deeper into “Taking the Defense-in-Depth Approach for Office 365” and how Mimecast’s solutions can help.

Guest blog by J. Peter Bruzzese (Office Servers and Services MVP)

On the road with NetApp HyperConverged Infrastructure

This is a guest article written by Keith Aasen, Solutions Architect, NetApp. Keith has been working exclusively with virtualization since 2004. He designed and built some of the earliest VMware and VDI environments in Western Canada, and has designed and implemented projects of varying sizes, both in Canada and in the southern US, for some of the largest companies in North America.

I recently completed a six-city roadshow to talk about the announcements that made up NetApp’s 25th-anniversary celebration (if you missed it, you can watch the recording here). Although the payload of the ONTAP 9.2 release was huge, I have done announcements like this before and was ready for the questions customers posed in each city.

This roadshow, however, was my first opportunity to present the new NetApp HyperConverged Infrastructure (HCI), and I was less sure how it would go over. With this offering, we are breaking the mold of HCI version 1.0 and enabling true enterprise workloads on an HCI platform, so I did not know how attendees would respond. Would they understand the purpose and benefits of such an architecture? Would they understand the limitations of the existing offerings and how the NetApp HCI offering was different?

I shouldn’t have been worried.

As far as understanding the purpose of such an architecture goes, they definitely got it. Our partner community has done an excellent job of explaining how this sort of converged infrastructure enables data center transformation. What is it about converged infrastructure, and hyper-converged in particular, that enables this transformation? In a word: Simplicity. HCI simplifies the deployment of resources, simplifies the management of infrastructure and even simplifies the teams managing that infrastructure.

This simplicity, and the unification of traditionally disparate resources, allows customers to optimize the resources they have, reducing cost and increasing value to the business.

So every city I visited got this: Simplicity was key. What about the limitations of the existing solutions?

The missing element of HCI version 1.0 solutions was Flexibility. These solutions achieved simple deployment but were wildly inflexible in how they were deployed, used and scaled. Here are some examples:

1. Existing Compute.

I asked each audience how many customers already had HCI deployed (very few) and then asked how many already had hypervisor servers deployed. Of course, everyone had those. Wouldn’t it be nice to leverage the existing investment in those servers rather than abandoning it? With NetApp HCI you can purchase the initial cluster weighted toward the storage components and then use your existing VMware hosts. As those hosts age, you can grow the HCI platform as it makes sense. This reduces redundant compute in the environment and lets customers move to an HCI platform on a schedule that makes sense for them.

2. Massive Scalability.

The means by which most existing HCI vendors protect their data tends to limit each cluster to a modest number of nodes in order to preserve performance. The result is stranded resources: perhaps one cluster has excess CPU while another is starving. Those stranded resources cannot be used, which drives up both management overhead and cost. The NetApp HCI platform can scale massively with no performance impact, so no islands of resources form. We isolate and protect different workloads through Quality of Service (QoS) policies, as the sketch below illustrates.
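For a sense of what those policies look like in practice, here is a hedged sketch of setting per-volume QoS through the Element (SolidFire) JSON-RPC API that underpins NetApp HCI storage. The cluster address, credentials, volume ID, and IOPS values are all hypothetical; verify the API version and method details against your cluster’s documentation before using anything like this.

    import requests

    # Hypothetical management endpoint and credentials.
    ELEMENT_API = "https://cluster.example.com/json-rpc/9.0"
    AUTH = ("admin", "secret")

    def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
        """Pin a volume's performance envelope so workloads cannot starve each other."""
        payload = {
            "method": "ModifyVolume",
            "params": {
                "volumeID": volume_id,
                "qos": {
                    "minIOPS": min_iops,      # guaranteed floor, even under contention
                    "maxIOPS": max_iops,      # sustained ceiling, caps noisy neighbors
                    "burstIOPS": burst_iops,  # short-lived headroom for spikes
                },
            },
            "id": 1,
        }
        response = requests.post(ELEMENT_API, json=payload, auth=AUTH, verify=False)
        response.raise_for_status()
        return response.json()

    # Guarantee a database volume 5,000 IOPS regardless of its neighbors.
    set_volume_qos(volume_id=42, min_iops=5000, max_iops=15000, burst_iops=20000)

With floors and ceilings pinned per volume, a very large cluster behaves like many isolated islands of performance without actually fragmenting the hardware.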

3. Part of a larger Data Fabric.

In a hybrid cloud data center model, it is critical to be able to move your data where you need it, when you need it. Some data and applications lend themselves to the public cloud model; others do not. Perhaps you have data created on site that you want to analyze in the cloud. The NetApp HCI platform is part of the NetApp Data Fabric, which allows you to replicate data to ONTAP-based systems near or in major hyper-scale clouds such as AWS and Azure. This ensures you can have the right data on-premises and the right data in the cloud without being trapped in either.

I want to thank everyone who came out for the roadshow and everyone who took the time to watch the recording of the webcast. If you want to hear more about the simplicity of HCI and the flexibility of the hybrid cloud model, please reach out to your technology partner.

Flash = the IT IQ test for 2016

This is a guest blog post from Chad Sakac, President of the Dell EMC Converged Platforms and Solutions Division. Chad leads the Dell EMC business responsible for the broad portfolio of next-generation converged platforms and solutions, which lets customers build their own outcomes by individually sourcing compute, networking, storage, and software, or buy their outcomes through integrated solutions. Chad authors one of the top 20 virtualization, cloud and infrastructure blogs, “Virtual Geek,” and is proud to be an Executive Sponsor of EMC’s LGBTA employee circle.

A BIG list of challenges

The entire IT industry is in a state of disruption that is unlike anything I’ve seen in my IT career. I love it!

When I recently met with the CIO and IT leadership team of a great customer, he commented that the disruption is not only unique in scope, but unique in that it is touching everything all at once.

Think about it.

  • Massive workload movements to SaaS.
  • The new and emerging role of container and cluster managers in the enterprise.
  • Commercial and cultural disruption of open-source models.
  • Continuing shift towards software-defined datacenter stacks as the “new x86 mainframe.”
  • The bi-modal operational reality of ITSM/DevOps coexistence.
  • The new role of IT as the service manager of public/private multiple clouds – and determining the best workload fit.
  • The changing mobile device and client experience.

That’s insane. It’s a HUGE list – and it’s far from exhaustive.

It’s also an INTIMIDATING list.

Frankly, I’m finding a lot of customers are “stuck.” They are so paralyzed by the sheer number of things they are trying to make sense of that they cannot spot patterns or set priorities.

At another recent customer, as with most customers, the CIO’s whiteboard had a list of “priorities.” There were 21 of them, which of course means no priorities at all.

All those trends, buzzwords, and disruptors are real and germane. But wouldn’t it be nice to have one simple, no-brainer thing? Something easy to do, yet something that would have a big impact on IT?

Flash is that simple, no-brainer thing. It’s why flash is also the “IQ test” for 2016.

Flash (and here I’m talking specifically about NAND-based flash non-volatile memory) has crossed over on every metric. It’s time to consign hybrid designs and magnetic media to the dustbin of history. Flash is hundreds to thousands of times faster in IOPS (throughput) and tens to hundreds of times faster in GBps (bandwidth). And when measured with a common yardstick like IOPS, flash is hundreds of times denser and tens of times better on power consumption.
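Some back-of-the-envelope arithmetic makes the crossover vivid. The per-drive figures below are rough assumptions (a 10K RPM SAS disk versus a mainstream enterprise SSD), not benchmark results:

    # Rough, illustrative drive counts to reach an aggregate IOPS target.
    HDD_IOPS = 200         # assumed for a 10K RPM SAS disk
    SSD_IOPS = 80_000      # assumed for a mainstream enterprise SSD
    TARGET_IOPS = 400_000  # an aggregate workload target

    hdds_needed = -(-TARGET_IOPS // HDD_IOPS)  # ceiling division: 2000 drives
    ssds_needed = -(-TARGET_IOPS // SSD_IOPS)  # 5 drives
    print(f"{hdds_needed} HDDs vs {ssds_needed} SSDs "
          f"({hdds_needed // ssds_needed}x fewer drives)")

Fewer drives means fewer shelves, less power, and less cooling, which is where the density and power-consumption claims come from.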

IQ Test #1 = Do you want to be 10-100x faster? For the same price?

In the early days of flash, there was a lot of concern about wear, but that is now a thing of the past. NAND vendors have figured out a continuum of Writes Per Day (WPD) media grades, storage vendors have optimized their wear-levelling approaches, and any storage vendor worth their salt offers a “lifetime guarantee” of one sort or another to take the concern off the table. Are there differences in drive/media types? Yeah. Are there differences in array approaches to media management? Sure. Are they materially relevant? Nope.
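A quick worked example shows why wear stopped being scary; the capacity, WPD rating, and warranty length below are illustrative assumptions, not any vendor’s spec:

    # How much data can a flash drive absorb over its warranty period?
    capacity_tb = 3.84   # assumed drive capacity in TB
    wpd = 1.0            # assumed full-drive Writes Per Day rating
    warranty_years = 5

    rated_endurance_tb = capacity_tb * wpd * 365 * warranty_years
    print(f"Rated endurance: {rated_endurance_tb:,.0f} TB written")  # ~7,008 TB
    # Even a heavy 2 TB/day workload writes ~3,650 TB in 5 years --
    # roughly half the rating, before wear-levelling even enters the picture.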

IQ Test #2 = If there is a lifetime guarantee, why would you worry about wear?

Flash is now more financially viable than magnetic media, delivering better economics and better TCO. And that’s BEFORE all the advances in data reduction, whether it’s 2:1 inline compression, n:1 data deduplication (highly variable by data type), or mega-capacity drives (15TB and climbing).
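The data-reduction math behind that claim is simple enough to sketch. The raw price and reduction ratios below are placeholders, not quotes, and deduplication in particular varies wildly by data type:

    # Effective cost per GB after data reduction -- placeholder numbers, not quotes.
    raw_cost_per_gb = 0.50  # assumed raw flash price, $/GB
    compression = 2.0       # 2:1 inline compression (per the text)
    dedupe = 3.0            # assumed n:1 deduplication, highly data-dependent

    effective_ratio = compression * dedupe
    effective_cost = raw_cost_per_gb / effective_ratio
    print(f"${effective_cost:.3f}/GB effective at {effective_ratio:.0f}:1 reduction")
    # $0.083/GB: data reduction is what pushes flash past hybrid economics.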

IQ Test #3 = Do you want all that goodness? At a BETTER price?

Furthermore, there are simple ways to migrate non-disruptively: VMAX All Flash supports non-disruptive array-to-array migrations, and VPLEX-fronted systems can pop a new all-flash target behind them and magically migrate. Heck, VMware Storage vMotion is an incredible “any-to-any” migration tool that is totally non-disruptive.
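As a taste of how non-disruptive that last option is, here is a minimal pyVmomi sketch of a Storage vMotion that moves only a VM’s disks to an all-flash datastore while the VM keeps running. The vCenter address, credentials, and object names are hypothetical, and error handling is omitted for brevity:

    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Hypothetical vCenter and credentials; newer pyVmomi versions may also
    # need disableSslCertValidation=True or an explicit SSL context.
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
    content = si.RetrieveContent()

    def find_by_name(vim_type, name):
        """Walk the inventory for the first object of the given type and name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim_type], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "app-vm-01")      # hypothetical VM
    target = find_by_name(vim.Datastore, "allflash-ds-01")  # hypothetical datastore

    # Relocate only the storage; the running VM never powers off.
    task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target))

Storage vMotion copies the disks in the background and cuts over at the end, so the guest never notices the move.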

IQ Test #4 = If you could get all the goodness of flash, a great TCO, way better performance/cost/cooling/power benefits, and migration was simple – why wouldn’t you?

Yes, there are still cases where magnetic media is preferable: extremely capacity-dense use cases such as object storage and capacity-oriented scale-out NAS. And the drive manufacturers are doing all sorts of funky helium-filled drives to keep supporting that environment. But don’t let that distract you. In the enterprise, the vast majority of workloads are transactional, and for that universe, 2016 is the Year of All Flash (#YOAF!).

Yes, all sorts of interesting new non-volatile media types and interfaces are emerging, from 3D XPoint to phase-change memory and carbon nanotube-based memory. Form factors range from SAS (still the most common) to NVMe (expect this to become the standard over the coming years, though it will take time for things like dual-ported interfaces to go mainstream, so the first use cases will be in SDS/HCI) and DIMM-based approaches (lots of interesting work on this path that will reach mass-market commercial application in 2018). But don’t let that distract you either. In our industry, things are always changing and moving. The key is to get on the train, not wait.

IQ Test #5 = Why wait for future benefits (which will only be additive) when there is an immediate benefit to moving from hybrid to all-flash approaches, and every day you don’t move, you’re wasting money?

Now, there are two ways to go all-flash: “Build” approaches and “Buy” approaches.

Some customers like the flexibility of a “Build” approach, where you pick the components of a stack and put it all together. I must say, with each passing day it becomes clearer that this is a waste of precious time and brain cells. But hey, if you want to muck around with building your own stack, you can start with something incredibly small and powerful, like a Dell EMC Unity All-Flash array (starting below $10K) and add Dell EMC PowerEdge servers. Conversely, if you want to “build” using a hyper-converged approach, you can start with Dell PowerEdge-based VMware vSAN Ready Nodes and load them up with NAND-based flash.

More and more customers every day look at all infrastructure as a commodity and just want it turnkey. This is less flexible than the “Build” approach, but the “Buy” approach is far more outcome-oriented. Customers willing to let go of the wheel and move to the infrastructure equivalent of autonomous driving (someone else does it for you) can transform their IT organization by freeing up the dollars, hours, and synapses wasted on testing, building, and all the lifecycle tasks (patch, maintain, test, validate, troubleshoot) inherent in infrastructure stacks. The “Buy” version comes in Converged and Hyper-Converged Infrastructure forms: Dell EMC VxBlock 350 (Converged) and VxRail (Hyper-Converged), both designed for all-flash.

In the end, while all IT practitioners must think about ALL the disruptive changes under way and how to prioritize them, don’t miss the opportunity for a no-brainer, quick-hit win. Go all-flash.

Great Dell Technologies partners like Softchoice see the same thing we see: moving to an all-flash datacenter is one of the simplest ways for their customers to move forward. They have developed a “tech check,” a quick and easy best practice honed by doing this for many customers. What I love about pointing customers to our partner ecosystem is that many partners cover a diverse portfolio across the tech ecosystem. The best partners specialize, of course, but even then they can act as trusted, consultative advisors to the customer. The points I’ve made about the drivers for 2016 being the Year of All Flash are certainly true for Dell Technologies (Dell EMC and VMware most of all), but they aren’t limited to us. Softchoice’s approach is vendor-agnostic.

The Softchoice Datacenter Techcheck assesses a customer’s specific workloads and environment, and quantifies the savings, the efficiency gains, and ultimately the TCO of moving forward into the all-flash era.

Flash. It’s the IQ Test of 2016.