Just-in-time (JIT) production practices, which view inventory not as an asset but as a cost, have accelerated the delivery and reduced the cost of products in a wide range of industries. But perhaps the biggest benefit to the companies that adopted them was the exposure of widespread manufacturing inefficiencies that had been holding them back. Without the cushion of a large inventory, every little mechanical or personnel hiccup in the assembly line had an immediate effect on output.
Virtualization technology is playing a similar role for IT, and nowhere is this more visible than in storage. Server virtualization has been incredibly successful in reducing the processor “inventory” needed to provide agile response to business demands for more and better application performance. Average processor utilization often zooms from the 10% range to 60-70% in successful implementations. But this success exposed serious storage capacity management inefficiencies.
As Jon Toigo of the Data Management Institute points out, between 33 and 70 cents of every IT dollar expended goes to storage, and the TCO of storage is estimated to be as much as 5 to 8 times the cost of acquisition on an annualized basis. Yet by the same analysis, on average only 30% of that expenditure is actually used for working data. This isn’t due to carelessness on the part of IT managers. They are doing the same sort of thing manufacturers did before JIT: using large storage inventories to compensate for inefficiencies in storage capacity management that make it impossible to provision storage as fast as they can provision virtual servers.
This is a major factor driving the adoption of storage virtualization, which can abstract storage resources into a single virtual pool to make capacity management far more efficient. (It can do the same for performance management and data protection management, as well—I’ll look at them in future posts.) I say “can” because, given the diverse storage infrastructures that are the reality for most organizations, full exploitation of the benefits of storage virtualization requires the use of a storage hypervisor. This is a portable software program, running on interchangeable servers, that virtualizes all your disk storage resources—SAN, server direct-attached storage (DAS) and even those disks orphaned by the transition to server virtualization—not just the disks controlled by the firmware within a proprietary SAN or disk storage system.
With a storage hypervisor such as DataCore’s SANsymphony-V, the storage capacity management inefficiencies exposed by server virtualization are truly a thing of the past. Rather than laboriously matching up individual storage sources with applications (and likely over-provisioning them just to be sure of having enough), you can draw on a single virtual pool of storage for just the right amount. Thin provisioning lets you allocate an amount of storage to an application or end user that is far larger than the actual physical storage behind it, then commit real capacity only as needed based on actual usage patterns. Auto-tiering largely automates the task of matching the right storage resource to each application based on its performance needs.
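To make the thin-provisioning idea concrete, here is a minimal sketch of the mechanism, not DataCore’s implementation: a volume advertises a large logical size to the application, while physical capacity is drawn from a shared pool only when an extent is first written. All class and method names are illustrative.

```python
class StoragePool:
    """A shared pool of physical capacity (illustrative, in whole GB)."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocated_gb = 0

    def allocate(self, gb):
        # Over-subscription is fine until actual writes exhaust the pool;
        # that is the point where an admin adds real capacity.
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.allocated_gb += gb

class ThinVolume:
    """A thin volume: large logical size, lazily backed by the pool."""
    def __init__(self, pool, logical_gb):
        self.pool = pool
        self.logical_gb = logical_gb   # what the application sees
        self.backed = set()            # 1 GB extents with real storage

    def write(self, extent):
        if extent >= self.logical_gb:
            raise IndexError("write beyond logical size")
        if extent not in self.backed:
            self.pool.allocate(1)      # back the extent on first write
            self.backed.add(extent)

pool = StoragePool(physical_gb=100)
vol = ThinVolume(pool, logical_gb=500)  # 5x over-subscribed
for extent in range(30):                # application writes 30 GB
    vol.write(extent)
print(pool.allocated_gb)  # 30 -- a 500 GB volume consumes only 30 GB
```

The application sees 500 GB from day one, but the pool has only committed the 30 GB actually written; the remaining physical capacity stays available to every other volume drawing on the same pool.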
The result is true just-in-time storage: capacity when, where, and how it’s needed. And, because the capacity management capabilities reside in the storage hypervisor, not the underlying devices, they’re available for existing and future storage purchases, regardless of vendor. You can choose the storage brands and models needed to match your specific price/performance requirements, and when new features and capabilities are added to the hypervisor, they’re available to every storage device.
Next time I’ll look at how a storage hypervisor boosts storage performance management.