
Best-in-Class vs. All-in-One Software-Defined Storage

Don Adams

Convergence has always been attractive. Remember the shoe phone? Convergence is intriguing, and often sounds like the logical thing to do.

There are times when convergence makes sense: As I sit in my kitchen writing this, I can think of my refrigerator with a built-in ice machine and water dispenser. Yet next to it there is a toaster oven, a microwave, and a gas oven – why haven’t these converged?

Convergence is a tricky thing; consider the smartphone. Steve Jobs launched the iPhone as a phone, an iPod, and a browser, yet most of us still carry a laptop (so much for the ‘post-PC era’), and I have a Kindle next to me, even though there is a Kindle app on both my phone and my laptop.

In IT we have examples where convergence has been useful, as in the case of Hyper-Converged Infrastructure sold by companies like Nutanix, which brought simplicity and modularity. As with all technologies, there are benefits and trade-offs with each approach, and HCI is not an exception.

Most IT departments are on a quest for simplicity: reducing the number of vendors and systems they need to manage. Storage has been a laggard when it comes to innovation: it is still very hardware- and vendor-centric and has been slow to adopt a software-defined approach. Can convergence help simplify storage?

Attempt at Unifying Software-Defined Storage

A single system that does block, file, and object has been attempted before. The problem is that block, file, and object are fundamentally different concepts requiring different architectures, so a unified product must always compromise and optimize for one over the others. Even with the newer Dell PowerStore™, you must choose between a system that enables file and one that is optimized for block.

The path most vendors have taken is to add a file interface to block systems or an object interface to file systems. These approaches are useful when the workload has no specific requirement for file or object storage, and the interface serves mainly as a way for the application to talk to the storage system.

For example, an object system might meet the requirements of a medical-records workload, but the imaging application was designed to write using the NFS file protocol, so it cannot talk to an object system directly. An object system with an NFS-to-object translator can be a good solution. However, we must not mistake this for a general-purpose file system.

Revisiting Block, File and Object

The key challenge is that these storage concepts are different beyond the interface: block (iSCSI, FC), file (SMB/NFS), and object (S3) are fundamentally different in their architecture, design choices, ideal use cases, and the way they manage data.

Block is ideal for Tier-1 applications that are local, require high performance, have frequent changes, and do not need data context. File is ideal for interfacing with users and applications with a simple interface, storing data in a hierarchical way, with strong access control. Object is ideal for global, hyperscale, metadata-rich storage of discrete objects or files that are not expected to change.
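To make the contrast concrete, here is a minimal, purely illustrative sketch (toy in-memory stand-ins, not any vendor's API) of how the same payload is addressed under each of the three models: raw offsets for block, a hierarchical path for file, and a flat key with rich metadata for object.

```python
# Illustrative sketch: the same payload written through each access model.
import io

data = b"patient-scan-0001"

# Block: address raw byte ranges on a device; no names, no data context.
device = io.BytesIO(bytearray(1024))      # stands in for a small LUN
device.seek(512)                          # logical block address x block size
device.write(data)

# File: a named path in a hierarchy (here a toy dict of path -> bytes).
filesystem = {}
filesystem["/scans/2021/scan-0001.dat"] = data

# Object: a flat key plus queryable metadata, key -> (bytes, metadata).
object_store = {}
object_store["scan-0001"] = (data, {"modality": "MRI", "year": "2021"})

# The block device knows only offsets; the object store can answer
# questions like "which objects are MRI scans?" from metadata alone.
mri_keys = [k for k, (_, meta) in object_store.items()
            if meta.get("modality") == "MRI"]
```

The point of the sketch is that the addressing scheme, not just the wire protocol, differs: an application that expects offsets, paths, or keys cannot simply be pointed at one of the other two.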

Understanding Block, File, and Object Storage


How does this make a unified software-defined storage system difficult? A few examples are:

  • The way these systems approach performance is fundamentally different:
    • Block is optimized for IOPS to handle the many small, random reads and writes typical of databases.
    • File is optimized for latency to serve concurrent file requests.
    • Object is optimized for sequential throughput for streaming or delivering large files.
  • Objects usually do not change over time; when changes occur, the object is often replaced with a new version.
  • Versioning, file locks, and access control can be problematic when objects and files are one and the same.
  • File protocols were not designed for handling advanced metadata.
  • Objects handle access control in a way that is fundamentally different than file systems – they often use tokens and other forms of authentication while file systems are local and connected to an identity service like AD.
  • When some object store systems add a file gateway as a front-end interface, it can become a single point of failure that adds risk to the uptime of the entire solution.
  • File systems are not efficient or economical at storing very large files or a very large number of files (e.g., billions of files).
  • From a security and performance perspective, you probably don’t want your mission-critical systems on the same storage system, or even on the same network, as the one serving images for your website.
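The versioning point above is worth illustrating. A toy sketch of object-store semantics (class and method names here are invented for illustration, not any product's API): a write never mutates data in place; each PUT appends a new immutable version, which is very different from a file system editing bytes inside an existing file.

```python
# Toy versioned object store: PUT never edits in place; it appends a
# new immutable version of the object under the same key.
class VersionedBucket:
    def __init__(self):
        self._versions = {}   # key -> list of payloads, oldest first

    def put(self, key, payload):
        """Store a new version; returns its version id."""
        self._versions.setdefault(key, []).append(payload)
        return len(self._versions[key]) - 1

    def get(self, key, version=None):
        """Latest version by default; older versions stay retrievable."""
        history = self._versions[key]
        return history[-1] if version is None else history[version]

bucket = VersionedBucket()
v0 = bucket.put("report.pdf", b"draft")
v1 = bucket.put("report.pdf", b"final")   # replaces, never edits, v0
```

Because every version is immutable, there is no in-place write to lock, which is one reason file-style locking and object versioning fit together so awkwardly when "file" and "object" are forced to be the same thing.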

A Unified, Practical, Software-Defined Approach

A couple of years ago, we started talking about DataCore One as a vision for a unified platform approach to storage and data management. It was a new concept. Some interpreted it as meaning we were going to build a single product that supported block, file, and object, but it was about much more than that. Our vision is about what we see as the future of storage and data management, a future we are trying to make real, and one that looks like this:

  • Software-defined and hardware-agnostic
  • Adaptable to different deployment models
  • One approach across block, file, and object
  • Automation simplifies management
  • Data is protected, resilient, and always available
  • Data is dynamically optimized for the right balance of performance and cost
  • A single pane of glass provides visibility and predictive analytics

Note that the vision talks about one approach across block, file, and object – not one product. The unifying approach is software-based and hardware-independent, and it shares concepts like tiering, performance, automation, flexibility, uptime, and data protection.

For IT departments and partners who share our vision for the future, our approach means you can work with one company driving this vision into reality: a single vendor to modernize and optimize storage, a single pricing model, a single support contact, and a unified partnership based on trust.

Julia Palmer, Research Vice President at Gartner, said recently: “…although more products support multiple types of storage, Gartner observed that large enterprises still tend to seek out ‘best of breed’ in each category rather than unified or universal storage.”

A Unified, Best-Of-Breed, Software-Defined Storage Approach

Today, just a few months after we acquired Caringo, we are very proud to announce DataCore Swarm: a proven, high-performance, best-of-breed hyperscale object storage system. Swarm offers many unique capabilities, from multi-tenancy to a content portal for users to self-healing and self-optimizing technology.

Swarm joins SANsymphony, as well as our strategic partnership with MayaData, the company behind OpenEBS, the leading container storage platform.

Swarm is available now under the same simple licensing model as our other products and is also available in our xSP program for CSPs, MSPs, and other providers offering storage-as-a-service.

We are excited to announce DataCore Swarm, and I encourage you to read the press release and visit the Swarm product page. Swarm takes us one step closer to making the vision of unified storage a reality.

