by Arthur Cole, IT Business Edge
Arthur Cole spoke with Augie Gonzalez, director of product marketing for DataCore.
The concept of storage virtualization has been drawing a lot of heat lately. Can you, in fact, virtualize a resource that can only accommodate a finite amount of data at any given time? Server virtualization proved so successful because most enterprises were sitting on a lot of untapped capacity. A storage cell is either occupied or it’s not. End of story. Yet many firms continue to tout storage virtualization, with DataCore even going so far as to describe itself as a “storage hypervisor company.” The company’s Augie Gonzalez explains the meaning behind the phrase.
“Many of the bottlenecks encountered in virtual environments today can be directly attributed to the mechanical characteristics of spinning disk drives. We’ve found two effective ways to address the performance and resulting cost problems that they create.” – Augie Gonzalez
Cole: Storage has always been the slow-poke in the modern data environment, a problem that has only worsened now that everything is going virtual. What are some of the best ways to improve storage performance in virtual environments short of a wholesale rebuild?
Gonzalez: Many of the bottlenecks encountered in virtual environments today can be directly attributed to the mechanical characteristics of spinning disk drives. We’ve found two effective ways to address the performance and resulting cost problems that they create. Both are made possible by having our storage hypervisor intercept disk requests in an infrastructure-wide software layer slotted between the server hypervisors and the disk subsystems. From that unique vantage point we have visibility to all read-and-write requests generated by the cluster of virtual machines across the pool of disks. We can then adaptively cache the requests in very fast electronic memories to yield a many-fold performance boost.
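The adaptive caching Gonzalez describes — intercepting disk requests in a software layer and serving hot blocks from fast memory — can be illustrated with a minimal LRU read cache. This is a toy sketch of the general technique, not DataCore's implementation; the `BlockCache` class and `backend_read` callback are hypothetical names invented for the example.

```python
from collections import OrderedDict

class BlockCache:
    """Toy LRU read cache sitting between callers and a slow disk backend."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # block_id -> data, ordered by recency
        self.hits = 0
        self.misses = 0

    def read(self, block_id, backend_read):
        if block_id in self.cache:
            # Fast path: serve from memory and mark the block recently used.
            self.hits += 1
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Slow path: go to the spinning disk, then cache the result.
        self.misses += 1
        data = backend_read(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            # Evict the least recently used block.
            self.cache.popitem(last=False)
        return data
```

Repeated reads of the same block hit memory instead of the disk, which is where the "many-fold performance boost" of caching comes from in any such layer.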
The second turbo-charging benefit comes from sensing the most demanding disk block requests and automatically directing them to the fastest tier in the pool according to priorities established by the customer. The tiers range from high-speed Solid State Disk (SSD) flash cards all the way down to inexpensive internal Serial Advanced Technology Attachment (SATA) drives. The pool incorporates the diverse set of storage resources available to the data center, rather than what a manufacturer may have chosen to package in a disk array enclosure.
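The auto-tiering idea — sensing the most demanding block requests and steering them to the fastest tier — can likewise be sketched with a simple access-frequency counter. Again, this is an illustrative toy under assumed names (`AutoTierer`, the tier labels, the slot count), not SANsymphony-V's actual policy engine, which also weighs customer-defined priorities.

```python
from collections import Counter

class AutoTierer:
    """Toy tiering policy: place the most-accessed blocks on the fast tier."""

    def __init__(self, fast_tier_slots):
        self.access_counts = Counter()      # block_id -> access count
        self.fast_tier_slots = fast_tier_slots  # capacity of the SSD tier

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def placement(self):
        """Return a block -> tier map: hottest blocks to 'ssd', rest to 'sata'."""
        hot = {b for b, _ in self.access_counts.most_common(self.fast_tier_slots)}
        return {b: ("ssd" if b in hot else "sata") for b in self.access_counts}
```

A real system would migrate data in the background as the placement map changes; the essential mechanism is the same ranking of blocks by observed demand.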
Cole: Some people have questioned your use of the term “storage hypervisor” to describe the new SANsymphony-V platform. What’s your take?
Gonzalez: Controversy generates awareness, so we encourage the IT community to weigh in. Such healthy dialogues help expand the lexicon of technologists to encompass inventions while drawing parallels from familiar concepts.
Visualize a stack of rich, hardware-independent services spread across the IT environment. You’ll immediately spot three hypervisor layers. Most IT pros are well-versed in the server hypervisor floating above server-class machines. Those with a virtual desktop infrastructure (VDI) slant will clearly pick out the desktop hypervisor atop a collection of thin clients and thick PCs. The DataCore storage hypervisor soars above the diverse disk infrastructure to supplement device-level capabilities with extended provisioning, automated storage tiering, replication and performance acceleration services. It even offers a seamless ramp to the cloud for low-cost storage of less critical data.
Cole: Is it possible, then, for storage architectures to gain the same degree of fluidity and dynamism that virtualization brings to servers and networking?
Gonzalez: It’s not just possible; that’s exactly what SANsymphony-V customers experience day in and day out. Just as importantly, they realize the collective value of their storage resources in ways they could not when those resources were used in isolation. The results can be concretely measured in terms of higher availability, faster performance and maximum disk utilization.
The storage hypervisor also removes the device-specific compatibility constraints that once tied buyers’ hands. The newfound interchangeability of hardware gives customers the clout to negotiate the best value from among competing suppliers at every capacity expansion, hardware refresh and maintenance renewal event. Pretty compelling, don’t you think?