The Virtual Viewpoint

“We chose DataCore because we wanted a solution that would allow MEF to modernize and virtualize our storage and IT infrastructure without being locked-in to specific hardware vendors or technologies. This gives us flexibility, scalability and freedom of choice. SANsymphony-V allows us to use the most appropriate and innovative offerings on the market and makes it easy, if needed, to grow or adapt our environment to meet future requirements. DataCore not only reduces our storage-related costs by consolidating management; it enables us to buy less expensive hardware and to protect our existing investments. Moreover, the software-defined layer in our infrastructure gives us the flexibility to optimize whatever we use and puts us back in control, able to shop storage for the best value so we can cost-effectively deal with growth.”

The DataCore software-defined storage platform has been implemented to centralize storage management and improve the utilization across a wide range of storage hardware systems and devices from different vendors including EMC and HP. DataCore consolidates and simplifies the provisioning of storage resources, significantly accelerates performance and adds high availability and a new level of flexibility to the existing mix as well as to future storage additions.

The Ministry of Economy and Finance, also known by the acronym MEF, is one of the most important and influential ministries within the Italian Government. It is the executive body responsible for economic, financial and budget policy. The organization manages the planning of public investments, coordinates public expenditures and verifies its trends, revenue policies and the overall tax system. The MEF operates the State’s public land and heritage, land register and customs; it plans, coordinates and verifies operations to foster economic, local and sectorial development, and is responsible for setting out cohesive policies, processes and the requirements pertaining to the public budget.

As part of the project to optimize and consolidate IT data centers, MEF selected DataCore’s software-defined storage solution with the primary objective of preserving its existing and very diverse set of storage investments made over many years - comprising a range of systems from EMC VMax and EMC Centera to HP EVAs. In addition, it was critical for MEF to streamline and centralize management in order to gain productivity and to provision highly available storage capacity quickly - when and where needed, within minutes.

After evaluating a number of industry solutions, MEF decided that DataCore storage virtualization was the best fit for its varied and demanding requirements. MEF’s decision for DataCore’s software-defined storage approach was based on a number of factors, one being that SANsymphony-V transforms storage into an enterprise-wide resource that can be pooled and used more efficiently than hardware-driven SAN approaches, where each storage system creates a separate, inefficient island of storage. The DataCore software also optimizes overall utilization and makes storage provisioning dynamic and automatic - a rapid process versus a time-consuming and complicated task that in the past often took days, if not weeks, to accomplish.

SpeedyCrew technology partners, an authorized and trained DataCore software solution provider with a highly skilled team and a long history of IT field experience, designed and implemented the project. SpeedyCrew deployed DataCore SANsymphony-V on four standard x86 server platforms, providing redundancy and data protection together with centralized management of over 200TB of storage across multiple EMC VMax, EMC Centera and HP EVA systems. MEF will also benefit from the addition of high-end advanced storage features including thin provisioning, metro-wide mirroring, high-speed adaptive caching, replication and auto-tiering, all of which can be applied to their existing and future storage investments.

For high availability and business continuity, all relevant data is synchronously mirrored between the DataCore nodes, across buildings and departments. If one server goes down, the remaining nodes automatically take over its workloads (auto-failover) until the failed node is back online and resynchronized (auto-failback). To accelerate the performance of the underlying hardware, DataCore leverages the Random Access Memory (RAM) in each node; this fast, in-memory caching accelerates the workloads of MEF’s business-critical applications. DataCore also makes it easy for MEF to scale out and grow dynamically in the future, since the Ministry can add storage hardware of its choice of vendor, model or technology, including flash SSD resources, as they are required.
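The failover behavior described above boils down to a simple invariant: a write completes only once every online mirror node holds it, and a recovering node catches up on whatever it missed before rejoining. The following Python sketch is purely illustrative of that logic - it is not DataCore's implementation, and the `Node` and `MirroredPool` classes are invented for this example:

```python
class Node:
    """One storage node in a synchronous mirror pair (or group)."""
    def __init__(self, name):
        self.name = name
        self.online = True
        self.blocks = {}  # block_id -> data


class MirroredPool:
    """Toy model of synchronous mirroring with auto-failover/failback."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.pending_resync = {}  # node name -> set of stale block ids

    def write(self, block_id, data):
        # Synchronous mirroring: the write "completes" only after every
        # online node has a copy; offline nodes are queued for resync.
        online = [n for n in self.nodes if n.online]
        if not online:
            raise IOError("no storage nodes available")
        for node in online:
            node.blocks[block_id] = data
        for node in self.nodes:
            if not node.online:
                self.pending_resync.setdefault(node.name, set()).add(block_id)

    def read(self, block_id):
        # Auto-failover: reads are served by any surviving node.
        for node in self.nodes:
            if node.online and block_id in node.blocks:
                return node.blocks[block_id]
        raise KeyError(block_id)

    def fail(self, node):
        node.online = False

    def recover(self, node):
        # Auto-failback: copy the blocks written while the node was down,
        # then return it to service.
        for block_id in self.pending_resync.pop(node.name, set()):
            node.blocks[block_id] = self.read(block_id)
        node.online = True
```

In the real product this logic operates at the block level across redundant paths, transparently to the applications; the sketch only shows why applications keep running through a node failure and why the returning node is safe to use again.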


Software-Defined Storage Architecture Enables Research Institute to Significantly Improve Application Performance and Maximize Existing Storage Investments

“DataCore enables all the different storage devices that comprise our architecture to communicate and work with each other – even though they come from a wide mixture of vendors – thereby allowing the Institute to gain efficiencies and reduce our costs,” said Scott LeComte, senior IT manager at Arizona State University’s Biodesign Institute. “Just as important is the fact that DataCore’s software is portable and can reside in different locations, meaning we avoid a single point of failure by deploying two DataCore-powered nodes that operate synchronously, campus-wide, and can automatically take over for each other in the event of a failure.”

The Biodesign Institute dedicates its research to addressing today’s critical global challenges in healthcare, sustainability and security. The Institute comprises 14 different research centers, accounts for almost $60 million in annual research expenditures and has more than 500 personnel who rely on the DataCore-powered storage infrastructure. Because the Institute is so research-intensive, researchers are required to retain large amounts of experimental data for long periods of time. The reason is that researchers are frequently called upon to “prove” how they achieved a particular discovery - either by a federal agency, like the Food and Drug Administration (FDA), or by a third-party company seeking to buy the research outright.

“When we first got DataCore, it was amazing how easily it fit into the environment and just worked. We are very pleased with just how seamless and non-disruptive the solution has been, and its flexibility proves itself time and time again,” added LeComte.

Research at the Biodesign Institute

By using DataCore, the Institute’s IT team can now easily pool all storage capacity and provide centralized oversight, provisioning and management across its entire storage architecture. The Biodesign Institute was able to expand its IT environment while cutting costs at the same time, as the total cost of supporting storage decreased from $2,500 per TB to $1,000 per TB. SANsymphony-V currently manages more than 300 TB of total mirrored capacity.

The Biodesign Institute at Arizona State University (ASU), the second largest metropolitan university in the United States, has successfully adopted a software-defined storage architecture by implementing DataCore’s SANsymphony-V storage virtualization platform. With the help of DataCore, the Biodesign Institute has increased application performance by up to five times, maximized the capacity of existing storage investments, improved uptime and significantly reduced costs by retiring old storage controllers that had become high-maintenance and unaffordable.

“IT departments, especially at state-run institutions such as Arizona State, are constantly being tasked to find new ways to reduce cost, while ensuring that IT operations run smoothly,” said Steve Houck, COO at DataCore. “By creating a software-defined storage architecture based on SANsymphony-V, the Biodesign Institute at Arizona State was not only able to significantly reduce costs, but increased application performance by up to five times as well.”

A complete case study discussing the DataCore deployment at the Biodesign Institute at Arizona State University is available here:

As detailed in a recent article on Wikibon, “Data centers are at the center of modern software technology, serving a critical role in the expanding capabilities for enterprises.” Data centers have enabled the enterprise to do much more with much less, both in terms of physical space and the time required to create and maintain mission-critical information.

But the technology surrounding the data center is positioned to evolve even more dramatically, in terms of conception, configuration, and utilization. More importantly, the technologies surrounding the data center will both have an impact and be impacted over time. Towards the end of last year, Gartner identified eight areas to consider when developing a data center strategy that balances cost, risk and agility. We wanted to take that analysis further, reaching out to thought leaders across the enterprise technology space, for their perspectives on the future of data center technology.

DataCore CEO: The Traditional Storage Model is Broken

George Teixeira, President, DataCore Software

The traditional storage model is broken. Buying behavior has shifted and disrupted how businesses purchase storage: they can no longer afford to rip out the old and throw more costly new hardware at their problems. Instead, they are seeking smart, automated software that runs on lower-cost hardware, optimizes their existing investments and provides the agility to add new technologies non-disruptively. Bottom line: the path to a software-defined data center, where users gain freedom of hardware choice and control of their resources, is inevitable, and that means storage must also be software-defined.

Mr. Teixeira creates and executes the overall strategic direction and vision for DataCore Software. Mr. Teixeira co-founded the company and has served as CEO and President of DataCore Software since 1998.

Full story: What’s the future of the data center? The big list of thought leadership perspectives

El Reg: “DataCore had a terrific year in 2013, breaking the 10,000 customer site number. It's a solidly successful supplier with a technology architecture that's enabled it and its customers to take advantage of industry standard server advances and storage architecture changes gracefully. 2014 should be no different with DataCore being stronger still by the end of it.”

Chris Mellor from The Register, a.k.a. El Reg, had a conversation with DataCore president, CEO and co-founder George Teixeira about what’s likely to happen with DataCore in 2014. Software-defined storage is a trend his company is well-positioned to take advantage of, and he reckons DataCore could soar this year.

El Reg: How would you describe 2013 for DataCore?

George Teixeira: 2013 was the ‘tip of the iceberg’, in terms of the increasing complexity and the forces in play disrupting the old storage model, which has opened the market up. As a result, DataCore is positioned to have a breakout year in 2014 … Our momentum surged forward as we surpassed 25,000 license deployments at more than 10,000 customer sites [last year].

What’s more, the EMC ViPR announcement showcased the degree of industry disruption. It conceded that commoditisation and the movement to software-defined storage are inevitable. It was the exclamation point that the traditional storage model is broken.

El Reg: What are the major trends affecting DataCore and its customers in 2014?

George Teixeira: Storage got complicated as flash technologies emerged for performance, while SANs continued to optimise utilisation and management - two completely contradictory trends. Add cloud storage to the mix and all together, it has forced a redefinition of the scope and flexibility required by storage architectures. Moreover, the complexity and need to reconcile these contradictions put automation and management at the forefront, thus software.

A new refresh cycle is underway … the virtualisation revolution has made software the essential ingredient to raise productivity, increase utilisation, and extend the life of … current [IT] investments.

Last year set the tone. Sales were up in flash, commodity storage, and virtualization software. In contrast, look what happened to the expensive, higher margin system sales last year – they were down industry-wide. Businesses realized they no longer can afford inflexible and tactical hardware-defined models. Instead, they are increasingly applying their budget dollars to intelligent software that leverages existing investments and lower cost hardware – and less of it!

Server-side and flash technology for better application performance has taken off. The concept is simple. Keep the disks close to the applications, on the same server and add flash for even greater performance. Don’t go out over the wire to access storage for fear that network latency will slow down I/O response. Meanwhile, the storage networking and SAN supporters contend that server-side storage wastes resources and that flash is only needed for five percent of the real-world workloads, so there is no need to pay such premium prices. They argue it is better to centralize all your assets to get full utilization, consolidate expensive common services like back-ups and increase business productivity by making it easier to manage and readily shareable.

The major rub is obvious: local disk and flash resources improve performance, but by unhooking those resources from the centralised storage infrastructure they defeat the management and productivity gains that come from centralisation. Software that can span both worlds appears to be the only way to reconcile this growing contradiction and close the gap. Hence the need for an all-inclusive software-defined storage architecture.

A software-defined storage architecture must manage, optimise and span all storage, whether located server-side or over storage networks. Both approaches make sense and need to be part of a modern software-defined architecture.

Why force users to choose? Our latest SANsymphony-V release allows both worlds to live in harmony, since it can run on the server-side, on the SAN, or both. Automation in software and auto-tiering across both worlds is just the beginning. Future architectures must take read and write paths, locality, cache and path optimizations and a hundred other factors into account - factors that generally undermine the possibility of all-in-one solutions. A true ‘enterprise-wide’ software-defined storage architecture must work across multiple vendor offerings and across a varied mix of RAM devices, flash technologies, spinning disks and even cloud storage.

El Reg: How will this drive DataCore's product (and service?) roadmap this year?

George Teixeira: The future is an increasingly complex but hidden environment, one that will neither allow nor require much, if any, human intervention. This evolution is natural: it is exactly the sophistication and transparency one already expects from today’s popular virtualization offerings and major applications. Why should storage be any different?

Unlike others who are talking roadmaps and promises, DataCore is already out front and has set the standard for software-defined storage: our software is in production at thousands of real-world customer sites today. DataCore offers the most comprehensive and universal set of features, and these features work across all the popular storage vendor brands, models of storage arrays and flash devices, automating and optimizing the use of these devices no matter where they reside - within servers or in storage networks.

DataCore will continue to focus on evolving its server-side capabilities and enhancing the performance and productive use of in-memory technologies like DRAM, flash and new-wave caching technologies across storage environments. [We'll take] automation to the next level.

DataCore already scales out to support up to 16-node grid architectures, and we expect to quadruple that number this year.

[We] will continue to abstract complexity away from the users and reduce mundane tasks through automation and self-adaptive technologies to increase productivity. ... For larger scale environments and private cloud deployments, there will be a number of enhancements in the areas of reporting, monitoring and storage domain capabilities to simplify management and optimise ‘enterprise-wide’ resource utilisation.

VSAN "opens the door without walking through it."

El Reg: How does DataCore view VMware's VSAN? Is this a storage resource it can use?

George Teixeira: Simply put, it opens the door without walking through it. It introduces virtual pooling capabilities for server-side storage that meet lower-end requirements while delivering promises of things to come for VMware-only environments. It sets the stage for DataCore to fulfil customers’ need to seamlessly build out a production-class, enterprise-wide software-defined storage architecture.

It opens up many opportunities for DataCore for users who want to upscale. VSAN builds on DataCore concepts but is limited and just coming out of beta, whereas DataCore has a 9th generation platform in the marketplace.

Beyond VSAN, DataCore spans a composite world of storage running locally in servers, in storage networks, or in the cloud. Moreover, DataCore supports both physical and virtual storage for VMware, Microsoft Hyper-V, Citrix, Oracle, Linux, Apple, NetWare, Unix and other diverse environments found in the real world.

Will you be cheeky enough to take a bite at EMC?

El Reg: How would you compare and contrast DataCore's technology with EMC's ScaleIO?

George Teixeira: We see ourselves as a much more comprehensive solution. We are a complete storage architecture. I think the better question is how will EMC differentiate its ScaleIO offerings from VMware VSAN solutions?

Both are trying to address the server-side storage issues using flash technology, but again why have different software and separate islands or silos of management, one for the VMware world, one for the Microsoft Hyper-V world, one for the physical storage world, one for the SAN?

This looks like a divide and complicate approach when consolidation, automation and common management across all these worlds are the real answers for productivity. That is the DataCore approach. Instead, this silo approach of many offerings appears to be driven by the commercial requirements of EMC versus the real needs of the marketplace.

El Reg: Does DataCore envisage using an appliance model to deliver its software?

George Teixeira: Do we envisage an appliance delivery model? Yes, in fact, it already exists. We work with Fujitsu, Dell and a number of system builders to package integrated software/hardware bundles. We are rolling out the new Fujitsu DataCore SVA storage virtualization appliance platform within Europe this quarter. We also have the DataCore appliance builder program in place targeting the system builder community, and have a number of partners, including Synnex Hyve Solutions, which provides virtual storage appliances using Dell, HP or Supermicro appliances powered by DataCore and Fusion-io.

El Reg: How will DataCore take advantage of flash in servers used as storage memory? I'm thinking of the SanDisk/SMART ULLtraDIMM used in IBM's X6 server and Micron's coming NVDIMM technology.

George Teixeira: We are already positioned to do so. New hybrid NVDIMM type technology is appealing. It sits closer to the CPU and promises flash speeds without some of the negatives. But it is just one of the many innovations yet to come.

Technology will continue to blur the line between memory and disk technologies, especially in the race for faster application response times by getting closer to the CPU and central RAM. Unlike others who will have to do a lot of hard coding, and suffer the delays and reversals of trying to keep up with each new innovation, DataCore learned from experience and designed its software early on to rapidly absorb new technology and make it work. Bring on these types of innovations; we are ready to make the most of them.

El Reg: Does DataCore believe object storage technology could be a worthwhile addition to its feature set?

George Teixeira: Is object storage worthwhile? Yes, we are investing R&D and building out our OpenStack capabilities as part of our object storage strategy; other interfaces and standards are also underway. It is part of our software-defined storage platform, but it makes no sense to bet simply on the talk in the marketplace and on these emerging strategies exclusively.

The vast bulk of the marketplace is file and block storage capabilities and that is where we are primarily focused this year. Storage technology advances and evolves, but not necessarily in the way we direct it to, therefore object storage is one of the possibilities for the future along with much more relevant, near term advances that customers can benefit from today.



Let’s face it. Large B2B tech companies don’t come up with groundbreaking innovations. They are too big and are too busy with action items, corporate politics and intense pressure from the street to develop new innovations and bring those ideas to market in a reasonable timeframe. They also have a big reason not to monkey with the status quo, billions of reasons actually. Faster, cheaper, better typically means less revenue.

Turns out Wall Street isn’t a big fan of cool innovations that decrease revenue, which shouldn’t come as a surprise. Perched high on the list of career limiting moves in Silicon Valley is driving a plan to develop innovative products and services that will decrease revenue.

The truth is, most great ideas from the major hardware and software vendors leave with their creators, stop on Sand Hill road for funding and end up in a little office in the Valley. Big ideas can mean big money. Visionary engineers aren’t turning their moment of genius over to a big political machine to get kicked around meeting rooms for three years only to resurface neutered and too late to matter anyway.

Big tech companies, especially hardware vendors, will continue to fight to the death to keep their antiquated products moving off the shelves. Sluggish economics made 2013 a great year to buy dated technology, with 70, 80, even 90 percent discounts on six-figure hardware deals. What’s happening here? While these products are selling for next to nothing, vendors are now charging 20 percent of the list price for support and updates.

This is a plan that certainly works for Wall Street, and it is hard for customers to resist. What would you do if you bought a new car two years ago for $50,000 and your dealer called you at the end of the year saying they’d give you a new car for $5,000? Anyone would take that deal!

VMware had this same problem getting their groundbreaking technology off the ground. Customers could install their technology and use 90 percent less server hardware. While this was great for the customers, HP, Dell and IBM certainly weren’t happy with VMware. The same thing could be said for resellers, as they certainly weren’t interested in selling 90 percent less hardware.

The resellers are just as wary about messing with their nest egg. However large or small, the few VARs that actually survive their first two years in business did so for a reason — they realized that running a tech resale business is about making money. The rash of leads, rebates and SPIFs coming from their big vendors are too lucrative for the resellers with influence to take a chance on anything new.

The hardest part of getting new innovations to market is cutting through the big tech companies’ marketing machines and their incredibly talented (and well-paid) sales people to get prospective customers to consider new technologies. There really isn’t a villain to blame, but there is a hero.

Ultimately, the buyers of technology are the heroes of our industry. They are our saving grace. Despite pressure from their upper management, the people in the trenches that do this because they love it are the ones that keep us moving in the right direction.

To those visionary technologists, don’t believe all the marketing hype from big tech. Keep innovating. By doing so, you will help shape the future of the industry.

Paul Murphy is VP of Worldwide Marketing at DataCore.

The Virtual Viewpoint

A Storage Virtualization Blog

Here you will find the latest information and insights on storage virtualization and the virtual world in general. Please feel free to join the discussion by clicking on the "Comments" link below each post; we would love to hear your viewpoint.

© 2014 DataCore Software. All Rights Reserved.