Storage Service Catalogs and Automated Storage Tiering

I just got off the phone with a client and pointed them to my blog to read this article, only to find out after they checked that it was never published.  DOH!  The timing is right, though, to revisit this topic since it keeps coming up, often like this:

“I just bought (insert favorite automated storage tiering technology), so I can avoid building a catalog for my storage services”

This misconception is not unusual, so it bears some examination.  That, and with ViPR almost on the street, this is a reasonably timely conversation.  I’m going to keep it loose and not so technical (so cut me some slack, nerd patrol).  FAST (or any other automated storage tiering technology) is not the same thing as a catalog and doesn’t supplant one, but it DOES make the catalog more powerful.

Defining some terms:

First off: FAST-VP and FAST-Cache are both fantastic technologies for enabling automated data mobility.  Notice I did not mention “Tiering”.  That is the word we too often use to describe what FAST is doing, partly because it is captured in the acronym: Fully Automated Storage Tiering.  We can split that hair later.  Most of the time we are discussing FAST-VP.  FAST requires us to create storage pools that include resources (disks) with varying levels of performance, along with policies (rules) that govern where data should reside at any point in time (on which disks), usually based on access needs.  Hot stuff moves to the speediest resource, cold stuff to the slowest.  FAST-Cache operates a little differently, but together these two technologies allow us to do some interesting things with the architecture and make a proper services catalog around storage that much more valuable.
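
To make the mechanics concrete, here is a toy sketch of what an automated tiering engine does: given tiers ordered fastest to slowest and a per-slice access “heat” score, it decides where each slice of data should live.  The tier names, thresholds, and heat metric here are my own illustrative assumptions, not FAST-VP internals.

```python
# Hypothetical, greatly simplified tiering engine. Tier names,
# thresholds, and the 0..1 "heat" metric are illustrative assumptions.

TIERS = ["flash", "fc15k", "sata"]  # fastest to slowest

def target_tier(heat, promote_at=0.8, demote_at=0.2):
    """Pick a tier for a data slice based on its access heat (0..1)."""
    if heat >= promote_at:
        return "flash"   # hot stuff moves to the speediest resource
    if heat <= demote_at:
        return "sata"    # cold stuff moves to the slowest
    return "fc15k"       # everything in between stays on the middle tier

def relocate(slices):
    """Map each slice id to its target tier, given {slice_id: heat}."""
    return {sid: target_tier(heat) for sid, heat in slices.items()}
```

For example, `relocate({"db01": 0.95, "logs": 0.05, "app": 0.5})` places the hot database slice on Flash, the cold logs on SATA, and the middling app data on FC.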

But what’s a catalog?  What’s “tiering”?

When we build a catalog that includes tiers of service for storage, performance is just one of the 100+ metrics we itemize to develop the reference architecture and configuration.  Scalability, availability, communications channels, storage model (SAN, NAS, Object, etc.), and functionality around OR/DR/Archive all need to be considered.  A good catalog allows the client to minimize expenditure by adding in the services for a particular “tier” that make sense for the price point and client demand.

Tiers of service in this context are not the same thing as “tiers” according to FAST.  FAST is only concerned with performance.  That is NOT a drawback; it is just an operating parameter we (or a customer) can exploit effectively, so long as we understand its limitations.  We design tiers that incorporate the functionality desired.  For example, my old Tier-1 at work almost 10 years ago included local and remote replication, high-speed/low-latency access, high scalability and 99.99999% availability guarantees.  Tier-2 had no replication, lower scalability and 99.999% availability (just a few of the 100+ data points).  If you needed replication but not the high speed, the data profile was Tier-1; I wasn’t offering replication on Tier-2.  Why?  Because I had three vendor platforms (Clariion, IBM, Compaq) that could service Tier-2 requirements just fine at a low cost.  Replication was a pain to manage effectively and I could not justify the labor or hardware costs.  I was able to standardize all of my builds by packaging up each storage service offering (Tier) into defined bundles.
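
Those bundles can be expressed as plain data.  Here is a minimal sketch of a catalog holding two such tiers plus the placement rule from my old shop; the field names and structure are my own assumptions, showing only a handful of the 100+ metrics.

```python
# Toy storage service catalog. Attribute names and structure are
# illustrative assumptions; values mirror the Tier-1/Tier-2 example.

CATALOG = {
    "tier1": {
        "availability": "99.99999%",
        "replication": {"local": True, "remote": True},
        "performance": "high-speed/low-latency",
        "scalability": "high",
    },
    "tier2": {
        "availability": "99.999%",
        "replication": {"local": False, "remote": False},
        "performance": "standard",
        "scalability": "moderate",
    },
}

def place_workload(needs_replication, needs_low_latency):
    """Standardized placement: any replication or latency need forces Tier-1."""
    if needs_replication or needs_low_latency:
        return "tier1"
    return "tier2"
```

So a workload that needed replication but not the high speed still landed on Tier-1: `place_workload(needs_replication=True, needs_low_latency=False)` returns `"tier1"`.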

They don’t sell cars without a heater even if you do live in the Sahara.  I didn’t sell Tier-1 without replication, whether you replicated or not.

Enter FAST, the problem solver

In my case, about 40% of the application data had to live on a Tier-1 stack (15K FC on DMX), about 40% on Tier-2 (10K FC on CX or the equivalent IBM/CPQ), and the rest was Tier-3 (SATA running on whatever we had).  The management problem was almost always performance.  This is where FAST would have been helpful to control costs.  Most of the action on Tier-1 was driven by monthly data loads, month-end processing and replication splits for testing.  The rest of the time that disk just sat idle.  I could tell you exactly which disks would spin up hard every month, too.  We had to tier to the available technology, unfortunately.  If, however, I had been able to construct my Tier-2 with part SATA drives and part Flash drives, and also incorporate both FAST-VP and FAST-Cache, I could have worked some magic.  My Tier-2 costs could have been cut way down, because applications that didn’t need replication and fell on the Tier-1 performance border could read/write to Flash part of the month and then sit idle on SATA the rest of the time, instead of on Fibre Channel.  Applications that occasionally demanded unexpected performance would not be limited by the fact that they sit on Tier-2.  In addition, wacky Tier-1 application spikes (emergency database refreshes) could impact even the best frames.  Adding a little Flash to the mix (5%-10%) would have solved that problem before it started.

The benefits just waterfall: more Tier-2 uptake, less Tier-1 maintenance, and everybody is happier with lower costs.  But notice that at no point in this discussion is FAST doing any of the decision making as to what constitutes a “Tier”.  People have to do that.  FAST can take 2 different disks, combine them with your rules and determine where data should live.  FAST cannot determine which disks you should buy or what the rules ought to be.  This is critical: FAST-Cache and FAST-VP take two different approaches to the same function: they are an ironclad insurance policy.  Here’s an example in real life…

Our catalog for one service provider is a good model.  They had four tiers based on the underlying technology: Flash (5%-10%), FC15K (25%), FC10K (25%) and SATA (40%).  The clients all wanted Flash performance at SATA cost.  To meet the need, most data sat on the FC drives, waiting for demand that came infrequently, just to avoid an SLA violation.  Our advice was to pare the FC drives back to one option, move the other 25% of that FC data over to SATA, and then beef up the Flash.  With the right use of policies, Flash handles the all-important first-write push for phenomenal response time, then migrates the data back to FC or SATA based on demand, which always falls off.  Flash, then, is their performance insurance policy for demand spikes and is no longer “Tier-1”.  Tier-1 is now a policy: FC15K drives for static content with Flash as the instant-hot-water buffer when things get dicey.  Tier-2 is FC15K drives without Flash support, while Tier-3 is SATA+Flash and Tier-4 is SATA all by itself.
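
That reworked catalog, where a tier is a policy bundle rather than a drive type, can be sketched like this (the structure and names are illustrative assumptions, not the provider’s actual configuration):

```python
# Toy model of tiers-as-policies: each tier pairs static media with an
# optional Flash burst buffer. Names and structure are assumptions.

POLICY_TIERS = {
    "tier1": {"static": "fc15k", "burst_buffer": "flash"},
    "tier2": {"static": "fc15k", "burst_buffer": None},
    "tier3": {"static": "sata",  "burst_buffer": "flash"},
    "tier4": {"static": "sata",  "burst_buffer": None},
}

def media_for(tier, demand_spike=False):
    """Where I/O lands: the burst buffer during a spike, else static media."""
    policy = POLICY_TIERS[tier]
    if demand_spike and policy["burst_buffer"]:
        return policy["burst_buffer"]
    return policy["static"]
```

The point of the model is visible in the edge cases: a Tier-1 spike lands on Flash, while the same spike on Tier-2 stays on FC15K because that bundle bought no insurance.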

In a Nutshell

FAST is an insurance policy, not a hands-off tiering solution.  People still need to evaluate the workloads to design the smallest number of storage service offerings that will meet the greatest need with the maximum standardization.  FAST makes it possible for an enterprise to go from 6 Tiers down to 3, or 2, while offering more services, performance, availability and scalability.  FAST makes our services catalogs that much more powerful because we can automate SLAs and performance guarantees.  It gives them teeth.  It is a fantastic technology, but it is not a substitute for the catalog; it is the enabler.

And when ViPR is GA, I believe automation of the storage service catalog will be complete.


About Peter

Peter is a Geocacher, competitive cribbage player, surfer, amateur magician, golfer and star watcher (the astronomical kind). In his day job for Datalink, Peter is a Senior Manager with their Cloud Service Management Practice helping customers build, manage and improve their legacy IT and Private Cloud infrastructures through Automation, Orchestration and clean living. We're not so sure on the clean living.
