Cloud Heresy

I’m about to commit a bit of cloud heresy as a technology guy writing about cloud: it’s really not all about hypervisors, automation and orchestration.  Sure, you need a measure of these components to deliver on the cloud vision and model efficiently, but does that really solve the problems that are driving the consumers of IT to try to skirt enterprise IT and give their dollars to the public cloud?  I think the number of services being consumed that are called cloud but really aren’t, and the amount of cloud washing going on in the marketplace, clue us in that it’s not the technology per se that is driving the consumption of cloud.  The key thing I am hearing from my customers, and more importantly their customers, is that what drives people to consume these services, some of which are actually inferior from a service management standpoint to what is already offered internally, is ease of consumption.  Consumers are voting with their dollars for quick provisioning, knowing what they’ll pay and the levers that affect that cost, and transparency around what they are getting and using.

Now, I’m not saying that technology isn’t a key part of enabling IT to deliver quick provisioning, transparency and chargeback, but the key to being able to do these things is service management.  I often see companies get bogged down in the mire of trying to push virtualization in their environment above 80%, citing cost savings, power savings, improved agility and other IT-focused hallmarks as their reasoning.  Virtualization is suffering from over-exposure and from the difficulty enterprise IT has putting its benefits in terms that are meaningful to customers.  Telling an application owner that you want to virtualize the servers she’s on so that it’ll be more efficient, or so that it can be consolidated on hardware with other workloads, or because you have an MBO metric keyed on the percentage of the environment virtualized, isn’t exactly compelling for her.  I think I’ve mentioned before that any transformation is as much about marketing the change as it is about actually executing it.  Don’t even get me started about trying to convince her to take downtime so that you can install new tools for automation and orchestration so that the operators can be more efficient and effective.  These reasons are definitely compelling to the operators; they just aren’t to the consumer.

When I ask consumers of enterprise IT what bothers them about dealing with IT, the answers invariably come back that the hardware, software and support seem very expensive; that it takes a lot of work to define and implement exactly what they want; and that they’ve got to fill out all manner of forms, get approvals from a host of managers, and potentially petition purchasing before they can finally get it.  And then, when they do get it, they’ve got to integrate it all together, install software, hope that they requested enough resources, move things from dev to test to pre-prod and finally production, and hope that their equipment isn’t “borrowed” for the latest emergency request in the process.  Consumers of enterprise IT have gotten spoiled by iPads and Android phones and DVRs and services that allow them to consume huge amounts of data at any end point they want at any time.  They then come into the office, where their department is charged tens and hundreds of thousands of dollars for IT services delivered within the next few months.  Let’s just say there’s a growing expectation gap there.

So how do we solve these sorts of problems?  It’s not through trying to virtualize every workload in the environment and automating and orchestrating every process or activity.  That’s seen by the consumer as fixing your problem, not my problem.  The organizations that have been most successful in this transformation are those who start by sitting down with their customers and defining with them the services they actually need.  These are what I call “consumer” or “aggregated” services that combine hardware, software, tools and processes to deliver something that is meaningful in business terms.  It could be a development environment for 10 people, available for 6 months, that includes archiving, backups, filesharing, virtual desktops for access to the environment, source code management, &c.  It may be the provisioning of a new employee with an email account, access to the knowledge management systems, enrollment in direct deposit, an order for a laptop and software, a soft token, &c.  These services need to be disaggregated into their core components, mapped to reference architectures, have service levels and operating levels defined and agreed to, be priced, codified in a service catalog and published via a portal for consumption.
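To make the disaggregation idea concrete, here is a minimal sketch of what one “consumer” service might look like once it has been broken into priced components with a service level attached.  All of the field names, components and prices here are invented for illustration; any real catalog tool will have its own schema.

```python
# Hypothetical catalog entry for the development-environment example above.
# Every component, unit price and quantity is an assumption, not real data.
DEV_ENVIRONMENT = {
    "name": "Development Environment (10 users, 6 months)",
    "service_level": {"availability": "99.5%", "support_hours": "8x5"},
    "components": [
        {"item": "virtual desktop",         "monthly_price": 40.00,  "qty": 10},
        {"item": "filesharing (per GB)",    "monthly_price": 0.10,   "qty": 500},
        {"item": "backup & archive",        "monthly_price": 75.00,  "qty": 1},
        {"item": "source code management",  "monthly_price": 120.00, "qty": 1},
    ],
}

def monthly_charge(service):
    """Chargeback: roll the component line items up into one transparent number."""
    return sum(c["monthly_price"] * c["qty"] for c in service["components"])

print(monthly_charge(DEV_ENVIRONMENT))  # → 645.0
```

The point of structuring it this way is transparency: the consumer sees exactly which line items drive the monthly number, and which levers (fewer desktops, less storage) change it.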

Once the portal is open for business you can use demand, release and capacity management to prioritize what gets automated and orchestrated rather than trying to do it all at once.  Don’t let the perfect become the enemy of the good; incremental progress during transformation is sustainable and improves your relationship with your customers.  You don’t need to define every service you’ll ever offer right up front.  Pick partners amongst your customers and start with them, do some “consumer” services and some “operator” services, spend some time thinking about how this will be marketed and presented to your customers, and then execute ruthlessly to deliver.

Start solving customer problems and the perception and expectation gaps will start closing.  If you limit the scope of the services out of the gate, it is still possible to deliver greatly improved provisioning times without tons of automation and orchestration, but if you’re able to build all of this on a pre-integrated target architecture you will certainly have a leg up, solving your problems and theirs at the same time.  The target platform these services run on doesn’t need to be wholly integrated into every system and tool that already exists in the environment; as long as you are meeting your security and compliance requirements and you map out the interdependencies and relationships, the environment can be somewhat separated.  The goal should be to keep moving more and more of the environment toward pre-defined services, consumed via the portal and run on the cloud or cloud-like architecture, and to continue to improve the maturity of that internal or private cloud as you understand how your customers are consuming the services.  More to come on these ideas . . .

Edward

About Edward

Edward is an unabashed geek currently employed at EMC Corporation as the Global Director of Cloud and Virtual Data Center Services. He spends his free time with his wife and two daughters listening to music, reading, building Lego projects, being dragged around the neighborhood by his 95# bulldog and just generally enjoying life.

This entry was posted in Private Cloud.
