So far, we have identified a fair number of pros and cons of the 3 most common SDS deployment models. Now we will elaborate a bit more on the general benefits you gain by adopting a software-defined architecture in your infrastructure.
Always-on data access is a top priority for every organization whose core revenue stream depends on mission-critical applications being available around the clock. Software-defined storage meets this requirement by offering high availability (HA) within the same data center or across 2 separate data centers connected via dark fiber.
This type of N+1 architecture requires a minimum of 2 nodes. If one node experiences a failure, the second node continues to serve production I/O without causing any downtime to core business applications. This kind of protection works like an insurance policy that pays off whenever a SAN array fails.
Having 2 active/active software-defined nodes can effectively eliminate hardware-related downtime and provides a fully automated failover infrastructure for your hypervisors, databases, and applications.
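To make the failover concept concrete, here is a minimal Python sketch of the kind of heartbeat monitoring an SDS control plane performs behind the scenes. The node names and the 5-second timeout are illustrative assumptions, not any vendor's actual implementation.

```python
import time

# Illustrative timeout: seconds without a heartbeat before declaring a node failed.
HEARTBEAT_TIMEOUT = 5

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.last_heartbeat = time.monotonic()

    def beat(self):
        """Called whenever the peer node checks in."""
        self.last_heartbeat = time.monotonic()

def check_failover(primary, secondary):
    """If the primary misses its heartbeat window, the secondary keeps
    serving production I/O, so hosts never see an outage."""
    if time.monotonic() - primary.last_heartbeat > HEARTBEAT_TIMEOUT:
        primary.healthy = False
        print(f"{primary.name} failed; {secondary.name} now serves all I/O")
        return secondary
    return primary

node_a, node_b = Node("sds-node-a"), Node("sds-node-b")
active = check_failover(node_a, node_b)  # still node_a while heartbeats arrive
```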
As your application data and I/O performance demands continue to grow, the ability to scale on the fly will be crucial to staying ahead of your competitors. This can only be achieved with an adaptive infrastructure in place.
SDS enables you to scale out or scale up as your business demands shift. If you need more capacity, you can add new storage arrays to the existing virtual pool and expand your usable capacity immediately.
For more performance, you can add memory, CPU, or target ports to your SDS nodes and see an immediate performance boost. If your servers' hardware specs are maxed out, you can add a 3rd or 4th node, attach some storage, and gain both performance and capacity simultaneously.
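As a rough illustration, the toy model below contrasts scaling up (growing an existing node or attaching another array to it) with scaling out (adding a node). Both paths grow the same virtual pool; the class names and capacity figures are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SDSNode:
    cpu_cores: int
    ram_gb: int
    arrays_tb: list = field(default_factory=list)  # capacities of arrays attached to this node

@dataclass
class Cluster:
    nodes: list = field(default_factory=list)

    def capacity_tb(self):
        # The virtual pool aggregates every array behind every node.
        return sum(sum(node.arrays_tb) for node in self.nodes)

# Scale up: grow an existing node by attaching another array to its pool.
cluster = Cluster(nodes=[SDSNode(cpu_cores=16, ram_gb=256, arrays_tb=[40, 40])])
cluster.nodes[0].arrays_tb.append(80)

# Scale out: add a 3rd or 4th node to gain performance and capacity at once.
cluster.nodes.append(SDSNode(cpu_cores=32, ram_gb=512, arrays_tb=[100]))
print(cluster.capacity_tb())  # 260 TB usable in one pool
```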
Host OS and application I/O requirements change faster than most IT departments can keep up with, largely because of the inflexibility of their storage solutions. Whether it's higher IOPS requirements or lower latency thresholds, those demands must eventually be met.
Businesses want to maximize the ROI on their infrastructure investments and are always looking for solutions that can act as the Swiss Army knife of technology. Software-defined storage provides that level of leverage.
With SDS as your primary layer of intelligence, you don't have to keep buying the same storage arrays from the same vendor. You have the flexibility to acquire arrays from different vendors and to mix SSD, SAS, and SATA disks for hot and cold data.
If you need more capacity for hot data, you can invest in SSDs from HP, Dell EMC, IBM, or NetApp. If you need additional capacity for cold data, you can deploy SAS/SATA arrays from Cisco, Super Micro, or Lenovo. You are in control.
Acquisitions and mergers have created nightmares for storage admins tasked with merging data hosted on multi-vendor SANs. Integrating HPE storage, IBM arrays, and Dell EMC storage with one another is practically impossible on its own because the SANs are not natively compatible.
This is similar to placing two people who speak different languages in a room together. They simply won't be able to communicate due to the language barrier. Likewise, multi-vendor storage compatibility and interoperability is not a realistic goal unless you add a translator.
SDS is that translator, a bridge that can unify heterogeneous storage arrays and place them in a consolidated virtual pool. What once seemed like an impossible task is now possible with SDS storage pooling technology.
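In code terms, the translator is essentially an adapter layer. The sketch below conveys the general idea using two invented vendor APIs; every class and method name here is made up for illustration and does not correspond to any real product SDK.

```python
# Each vendor speaks its own "language": a different native API.
class NetAppArray:
    def netapp_read(self, lun, block):
        ...  # placeholder for the vendor's native read call

class EMCArray:
    def emc_fetch(self, device, offset):
        ...  # placeholder for a differently shaped native call

# The SDS layer is the translator: thin adapters map every native
# dialect onto one common interface.
class NetAppAdapter:
    def __init__(self, array):
        self.array = array

    def read(self, volume, block):
        return self.array.netapp_read(volume, block)

class EMCAdapter:
    def __init__(self, array):
        self.array = array

    def read(self, volume, block):
        return self.array.emc_fetch(volume, block)

# Heterogeneous arrays now sit side by side in one consolidated pool,
# and callers only ever speak the common read() interface.
pool = [NetAppAdapter(NetAppArray()), EMCAdapter(EMCArray())]
```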
Show me one IT executive who does not care about costs, and I will show you a thousand who make their final buying decision solely on costs. Everyone wants to save on CapEx and OpEx, which makes software-defined options very attractive in both the short and the long term.
You have learned about the many benefits this software technology offers, and cost savings is an area where it truly excels. How much you can save depends entirely on your unique situation.
If you want to repurpose existing storage and avoid a hardware rip-and-replace, you will probably see the biggest savings, as you only need to invest in the SDS nodes that will virtualize your storage.
If your situation calls for a storage refresh, you can opt to buy a few x86 servers, fill them with SSD and SAS drives, and add the storage virtualization software. This design gives you a super-fast SDS appliance built on commodity disks instead of a traditional SAN.
If you have slow storage and need to boost performance, you can buy an x86 server and add a couple of SSDs. The extra CPU and RAM will increase performance, and auto-tiering will shuffle your data between the SSDs and your slower storage. That's an easy way to meet performance needs without breaking the bank on an all-flash purchase.
The evolution of storage solutions has given IT admins a wide selection of attractive options. There are newer players such as SolidFire, Nimble, and Tegile, and established players like EMC, HP, NetApp, and IBM. Incidentally, all 3 of the newer players I mentioned have since been acquired by the bigger ones.
As great as the feature sets of all those storage solutions may be, they present one common challenge: you cannot manage them from a single pane of glass. You literally have to open several management consoles to carry out your storage administration tasks.
The answer to this problem, once again, is software-defined storage. Take an old NetApp box and connect it to your SDS node. Then connect a Nimble array, add a VNX, throw in a 3PAR, and place them all in an SDS pool.
Now you can manage all 4 storage arrays from a single console. You can create snapshots, take full clones, or build a DR copy of your data. You can execute every enterprise-level function supported by your SDS node, even if the managed arrays are not licensed for those features. Problem solved.
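What that single pane of glass boils down to is one uniform operation applied across every pooled array, with the feature running in the SDS layer rather than on each backend. The sketch below is purely hypothetical; the array names mirror the example above and the snapshot logic is a stand-in.

```python
class PooledArray:
    """Once pooled, any array exposes the SDS node's feature set:
    snapshots are taken in the SDS layer, not by the backend array."""
    def __init__(self, vendor, volumes):
        self.vendor = vendor
        self.volumes = volumes
        self.snapshots = []

    def snapshot(self, volume):
        self.snapshots.append((volume, "point-in-time copy"))

pool = [PooledArray("NetApp", ["vol1"]), PooledArray("Nimble", ["vol2"]),
        PooledArray("VNX", ["vol3"]), PooledArray("3PAR", ["vol4"])]

# One console, one loop: snapshot every volume on all 4 arrays,
# regardless of what each backend is licensed for.
for array in pool:
    for volume in array.volumes:
        array.snapshot(volume)
```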
Many IT administrators are turning to all-flash arrays (AFAs) to gain more performance. Although this is a viable option, not everyone can afford it. Even if you can get started with an AFA right now, it is not an approach most companies can sustain for long.
SDS solutions can exceed AFA’s performance by leveraging 3 core features:
- I/O parallelism with multi-core CPUs
- Read/write data caching with RAM
- Hot/cold data auto-tiering
I/O parallelism enables multiple cores to dynamically participate in processing I/O requests from hosts. If workloads begin to peak, more cores are called on to handle the heavy lifting. Most AFA solutions do not offer this kind of dynamic parallel I/O processing.
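The snippet below illustrates the principle with a thread pool sized to the available cores. A real SDS engine implements this deep in its data path, so treat this as a conceptual sketch only.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def handle_io(request):
    # Placeholder for servicing one host I/O request.
    return f"completed {request}"

requests = [f"io-{i}" for i in range(1000)]

# Size the worker pool to the cores available; under peak load every
# core participates in servicing host I/O in parallel.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as workers:
    results = list(workers.map(handle_io, requests))
```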
Read/write caching is a key benefit found in traditional RAID controllers and SAN controllers, yet those controllers support a very limited amount of cache memory, typically ranging from 4GB to 32GB. SDS nodes, on the other hand, can support up to 8TB of RAM dedicated to data caching.
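Conceptually, that RAM behaves like a very large least-recently-used cache sitting in front of the disks. Here is a minimal read-side sketch; a production cache would also coalesce and acknowledge writes, and the capacity figure below is an arbitrary assumption.

```python
from collections import OrderedDict

class RAMReadCache:
    """Tiny LRU read cache: recently read blocks are answered from RAM
    instead of going back to the disks."""
    def __init__(self, max_blocks):
        self.max_blocks = max_blocks
        self.blocks = OrderedDict()

    def read(self, block_id, backend_read):
        if block_id in self.blocks:             # cache hit: served from RAM
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = backend_read(block_id)           # cache miss: fetch from disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.max_blocks:  # evict the least recently used block
            self.blocks.popitem(last=False)
        return data

cache = RAMReadCache(max_blocks=1_000_000)  # scale with the node's installed RAM
data = cache.read(42, backend_read=lambda b: f"block-{b}-from-disk")
```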
SDS auto-tiering provides massive benefits from both a performance and a cost perspective. Studies have shown that the data that actually changes on an average day, your hot data, amounts to less than 5% of total used capacity. This means that you really don't need flash for all of your data.
The alternative to an AFA is a hybrid array, which is less expensive and employs auto-tiering to move hot and cold data up and down the disk tiers. However, a hybrid array can only auto-tier within its own enclosure, not across external arrays.
Software-defined storage shines in this area, as it enables you to make smart decisions when buying flash storage. Most SDS users invest in flash for 5% to 10% of their tier 1 capacity, with the remaining 90% to 95% served by a mixture of SAS and SATA arrays.
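The promotion and demotion logic behind auto-tiering can be sketched in a few lines. The access threshold and block counts below are invented for illustration; the point is that with SDS the same logic spans external arrays rather than a single enclosure.

```python
from collections import Counter

HOT_THRESHOLD = 10  # illustrative: reads per day before a block counts as hot

access_counts = Counter()                            # per-block access frequency
flash_tier, capacity_tier = set(), set(range(100))   # block IDs living on each tier

def record_read(block_id):
    access_counts[block_id] += 1

def rebalance():
    """Promote hot blocks to flash and demote blocks that cooled off.
    With SDS this can run across pooled external arrays."""
    for block_id in list(capacity_tier):
        if access_counts[block_id] >= HOT_THRESHOLD:
            capacity_tier.discard(block_id)
            flash_tier.add(block_id)
    for block_id in list(flash_tier):
        if access_counts[block_id] < HOT_THRESHOLD:
            flash_tier.discard(block_id)
            capacity_tier.add(block_id)

for _ in range(12):
    record_read(7)   # block 7 gets hit repeatedly
rebalance()          # block 7 is now promoted to the flash tier
```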