#1 Virtualization Prediction: Enterprises wake up and realize they forgot to virtualize the storage tier – which means IT infrastructure virtualization will finally begin to deliver on its promised ROI.

2011 Predictions

It’s a new year and that means it’s time to make some predictions. Over
the next few days, we’ll make our “Big Predictions” for 2011. But unlike
all those prognosticators who live secure in the knowledge that nobody
will remember to actually check on whether or not their predictions come
true, we’re making a resolution to revisit these predictions one year
from now and discuss how accurate (or not) we were. And now, Big
Prediction #1:

Enterprises wake up and realize they forgot to virtualize the storage tier – which means IT infrastructure virtualization finally begins delivering its promised ROI.

Virtualization is not about servers or desktops. Virtualization is about creating agile, cost-effective, and enduring IT infrastructures that can evolve to support the enterprise over time.

To date, virtualization has largely failed to live up to this promise. Many server and desktop virtualization projects have stalled or failed outright due to cost overruns or unanticipated infrastructure issues. Why? The answer is in the names themselves: server virtualization is about more than virtualized servers, and desktop virtualization is about more than virtualized desktops. Both are really about virtual IT infrastructure – and if you don’t virtualize the entire infrastructure, you will not achieve the ROI and business benefits you anticipated. Focus only on desktops and servers, and you leave out a rather substantial tier of your infrastructure: storage.

Enterprises have learned some hard lessons trying to support virtualized server and desktop infrastructures either through forklift upgrades of their storage infrastructure or through vendor-specific virtualization strategies that could more accurately be categorized as “virtualized vendor lock-in.” Both approaches create massive cost overruns for new hardware, major application availability issues caused by poor performance in the storage tier, and – perhaps most importantly – highly complex, expensive, and sometimes unworkable business-continuity architectures.

Interestingly, the recession and its impact on IT budgets have only magnified this problem. Cost overruns are simply not an option, which means the margin for error in any IT project has never been thinner and the cost of failure has never been higher. This environment has moved the “Big Storage Problem” in virtualization projects to the front burner, and that is why 2011 will see large-scale adoption of storage virtualization strategies that more closely resemble server virtualization: deploying a software layer that turns existing devices and infrastructure into a shared pool of storage. This approach eliminates the “Big Storage Problem” by making much more efficient use of storage capacity, extending the useful life of existing storage hardware, reducing the opportunity for storage incidents that impact application performance and availability, and simplifying disaster recovery and infrastructure management.
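To make the idea of a pooling layer concrete, here is a minimal, purely illustrative sketch in Python. The class and device names are hypothetical (this is not DataCore’s API); it only models the core concept: heterogeneous devices presented as one shared pool, with virtual volumes placed wherever capacity exists.

```python
class Device:
    """A physical storage device contributed to the pool (hypothetical model)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """Software layer that presents existing devices as one shared pool."""
    def __init__(self, devices):
        self.devices = devices

    def total_free_gb(self):
        # Capacity stranded on individual arrays becomes usable pool-wide.
        return sum(d.free_gb() for d in self.devices)

    def allocate(self, size_gb):
        # Place a virtual volume on the device with the most headroom;
        # the consumer never needs to know which array it landed on.
        device = max(self.devices, key=lambda d: d.free_gb())
        if device.free_gb() < size_gb:
            raise RuntimeError("pool exhausted")
        device.used_gb += size_gb
        return device.name


# Existing hardware keeps working alongside new gear - no forklift upgrade.
pool = StoragePool([Device("legacy-array", 500), Device("new-san", 2000)])
vol_location = pool.allocate(300)  # lands on whichever device has the most free space
```

The design point this sketch captures is the same one that matters for ROI: allocation decisions move out of individual vendor arrays and into a software layer, so capacity anywhere in the pool can satisfy demand anywhere in the infrastructure.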

Oh, and we left one out: this approach to storage virtualization also eliminates forklift upgrades and virtual vendor lock-in, and we predict that will make everyone happy.

Stay tuned for prediction #2 – it’s coming soon!
