Maximizing the output of your limited time and resources for a workload analysis.
We do a lot of work building service catalogs, looking for infrastructure cost optimization opportunities, and assessing workloads for suitability to a cloud service provider, general virtualization, or re-platforming. When performing a study like this, whether on your own or through an outside agency (like a consultant), it’s usually better to look at a subset of all potential application workloads rather than the entire estate. A balanced sample improves the odds of meaningful findings and lets you extrapolate them, so you know whether it even makes sense to continue. Out of a typical environment of 200-300 workloads, we normally study between 30 and 50. That means choosing wisely to avoid delays (and other time-sucks) and to ensure successful data gathering with high confidence in the analysis.
Tribal or System of Record Knowledge
At least 90% of the workloads should come with significant tribal knowledge or “system of record” information. Since our data gathering is highly people-centric (read: we talk to a lot of folks), preference goes to workloads with a significant base of knowledge in the owner and user communities, and where a subject matter expert can be readily identified for data collection interviews or workshops. This helps minimize the number of interviews and other data requests made of participants. Alternatively, if the systems of record (CMDB, CRM, ticketing systems, performance/monitoring tools, etc.) are reasonably complete and a subject matter expert can rapidly parse those records, workloads covered by those systems make good candidates.
Eliminate the No-Brainers
Some workloads are well known by the community to be end-of-life, due for replacement, permanently attached to dedicated hardware, or limited by other technical constraints that cannot (or will not, due to executive decision) be overcome. These should be eliminated from consideration in the study. Alternates should include the replacement workloads, new workloads designed to run on more commodity hardware, and workloads for which a re-platforming decision has yet to be made. Significant intelligence can be derived from examining those alternates.
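If your inventory lives in a spreadsheet export or CMDB dump, the elimination pass above is easy to automate. Here is a minimal sketch; the field names (`status`, `hardware_locked`) and the sample records are hypothetical, not from any real inventory schema:

```python
# Hypothetical inventory records; real ones would come from a CMDB export.
workloads = [
    {"name": "legacy-erp", "status": "end-of-life", "hardware_locked": False},
    {"name": "erp-replacement", "status": "planned-replacement", "hardware_locked": False},
    {"name": "tape-backup", "status": "active", "hardware_locked": True},
    {"name": "order-api", "status": "active", "hardware_locked": False},
]

def is_candidate(workload):
    """Drop the no-brainers: end-of-life workloads and those
    permanently attached to dedicated hardware."""
    return (workload["status"] != "end-of-life"
            and not workload["hardware_locked"])

candidates = [w for w in workloads if is_candidate(w)]
```

Note that `erp-replacement` survives the filter: replacement workloads and those with pending re-platforming decisions stay in the pool, as discussed above.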
Make the Sample Representative
The temptation to choose only the largest, most painful-to-manage, highest-demand workloads for analysis is difficult to resist. It seems axiomatic that most of the ROI is to be found in this group. Do not succumb. Select the applications that, taken together, best represent the totality of your enterprise operation. Some of these will be large, some small. Some will have larger or smaller resource demands. Some may scale rapidly or have revenue impact; others may not. The workloads should, as far as possible, represent the business. A selection that includes all types of workloads is best, given the constraints already mentioned.
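One simple way to operationalize this is a proportional stratified sample: group the inventory by whatever categories matter to your business, then draw from each group in proportion to its size. The sketch below assumes hypothetical categories and counts purely for illustration:

```python
import random

# Illustrative inventory: 250 workloads across three made-up categories.
inventory = (
    [{"name": f"web-{i}", "category": "web"} for i in range(120)]
    + [{"name": f"db-{i}", "category": "database"} for i in range(60)]
    + [{"name": f"batch-{i}", "category": "batch"} for i in range(70)]
)

def stratified_sample(workloads, sample_size, seed=42):
    """Draw roughly sample_size workloads, proportionally per category."""
    rng = random.Random(seed)  # fixed seed so the selection is repeatable
    by_category = {}
    for w in workloads:
        by_category.setdefault(w["category"], []).append(w)
    total = len(workloads)
    sample = []
    for group in by_category.values():
        # At least one per category, so small strata are still represented.
        k = max(1, round(sample_size * len(group) / total))
        sample.extend(rng.sample(group, min(k, len(group))))
    return sample

picked = stratified_sample(inventory, 40)  # 40 of 250, per the 30-50 rule of thumb
```

With these numbers the draw works out to 19 web, 10 database, and 11 batch workloads, so every category is represented in rough proportion to its share of the estate. In practice you would stratify on whatever dimensions matter most to your business (size, criticality, revenue impact), not just a single category field.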
When in Doubt: Ask the Helpdesk
If all else fails, ask the people who support these applications day to day to recommend their top candidates for analysis. They will invariably come back with the workloads generating the most requests for resource provisioning and modification, the most performance complaints, and the most development activity. That’s not likely to be your final list, but such front-line knowledge can seed the effort nicely.
I’m sure one or two of my senior people will weigh in eventually and harp on me for missing their favorite “low-hanging fruit” flag. Leave it in the comments and let the jabs commence.