Gerardo Dada

Looking for the Answers to Performance Problems…in the Wrong Place

A few years ago, a leading insurance company was not meeting performance SLAs for some of its key applications. The VP of infrastructure requested millions in funding to update the organization’s existing systems with all-flash arrays (AFAs). After spending months installing new hardware and migrating data, the team saw only small performance improvements: flash did not effectively solve the problem.

The reason is simple: storage media read/write throughput is only one component of a system—and more often than not, it is not a key contributor to performance.

I have always enjoyed music; one could say I am an audiophile. When I was making my first significant investment in a good audio system (a Carver amplifier with B&W 801 series studio monitors), I learned that a system’s performance is only as good as its worst component. In other words, if you have a killer amp, great speakers, gold cables, and a fantastic recording, but you are listening to a cheap cassette tape… it’s just not going to sound good.

The same principle applies to applications. Databases, for example, are the heart of most applications. Wait-time analysis shows that optimizing queries and adding indexes is important, but so is ensuring that there is no server resource contention and that the storage media is not a bottleneck. In a large percentage of cases, wait-time analysis has shown I/O to be the key bottleneck.
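To make the idea concrete, here is a minimal sketch of wait-time analysis in Python. The event names and timings below are invented for illustration (loosely modeled on the wait classes a profiler such as MySQL's performance_schema reports); in practice, a profiling tool supplies this data.

```python
from collections import defaultdict

# Hypothetical wait-event samples: (event_name, seconds_waited).
# Names and numbers are illustrative only, not real measurements.
samples = [
    ("io/file/innodb/data_file", 4.2),
    ("io/file/innodb/log_file", 1.1),
    ("lock/table/metadata", 0.4),
    ("cpu/query/execution", 1.3),
    ("io/file/innodb/data_file", 2.0),
]

def wait_profile(samples):
    """Aggregate wait time by top-level event class, as percentage shares."""
    totals = defaultdict(float)
    for event, seconds in samples:
        totals[event.split("/")[0]] += seconds  # group by class: io, lock, cpu
    grand = sum(totals.values())
    return {cls: round(100 * t / grand, 1) for cls, t in totals.items()}

profile = wait_profile(samples)
# In this made-up sample, the "io" class dominates the waits, pointing at
# storage rather than CPU or locking as the place to invest.
```

The point of the exercise: you rank where the time actually goes before deciding where to spend money.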

Of course, every situation is unique, and the performance profile is determined by many factors. It may also change over time depending on load and other elements. The best practice is to use monitoring tools to understand the performance profile of each element of your system and what actions will really impact performance.

Investing in Flash Storage as the Only Solution is a Failed Approach

This approach is only throwing money at the problem. Using the previous example, it is similar to upgrading already good speakers to even better ones when the problem is that the audio source (a cassette tape) has a terrible signal-to-noise ratio and simply cannot produce good audio quality, no matter how much money you invest in speakers.

Research recently published by DataCore shows that 17% of IT departments had invested in AFAs but found they failed to deliver on the performance promise. I am sure many of the remaining 83% saw a performance improvement, but I wonder how many actually solved the performance issue—and how many saw the ROI expected.

To be clear, I am not advocating against flash. AFAs, NVMe, and other technologies are wonderful and a key enabler of higher performance in IT systems. My argument is simply to avoid looking at flash as the panacea that will solve all problems. Consider other performance improvement alternatives and understand their performance benefits.

Solving the I/O Bottleneck

We know that I/O bottlenecks are some of the most pervasive database and storage issues. CPUs have grown more powerful and storage systems and network connectivity have become faster, yet I/O has remained mostly a serialized path between the server and storage.
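A toy model makes the cost of serialization visible. The sketch below simulates each block read as a fixed 20 ms stall (a stand-in for device latency; no real I/O happens) and compares issuing one request at a time against keeping many requests in flight with a thread pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.02          # simulated device round trip, 20 ms
BLOCKS = list(range(8))

def read_block(block_id):
    time.sleep(LATENCY)  # stand-in for waiting on the storage device
    return block_id

def serial_read(blocks):
    """One outstanding request at a time: total time ~ n * latency."""
    return [read_block(b) for b in blocks]

def parallel_read(blocks, workers=8):
    """Many requests in flight at once: total time ~ 1 * latency."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_block, blocks))

t0 = time.perf_counter(); serial_read(BLOCKS);   serial = time.perf_counter() - t0
t0 = time.perf_counter(); parallel_read(BLOCKS); parallel = time.perf_counter() - t0
# serial grows linearly with the number of blocks; parallel stays close to
# a single round trip, because the waits overlap instead of queueing.
```

This is only an analogy at the application level, but it captures the intuition behind parallelizing the I/O path: the same device latency hurts far less when requests overlap.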

My friend Robert Mandeville, a database performance guru, told me: “the primary constraint in an investigation into MySQL performance pointed to database I/O as the biggest cause of user experience impact. After some quick back-of-the-napkin stats, I checked and I/O looked to be 75% of the waits for one MySQL instance.”

This is where Parallel I/O technology can make a huge difference. The technology was created by a group of incredibly smart engineers who were solving I/O problems in hyper-efficient real-time systems for enterprise storage more than a decade ago. Parallel I/O technology maximizes utilization of multi-core processors to dramatically cut latency. In addition, it uses AI-assisted caching and other techniques developed over the course of 20 years to maximize throughput.

It seems that every vendor in the storage market claims high performance, which is a relative term. This is why there are benchmarks. Parallel I/O set the industry-standard SPC-1 benchmark record with a response time 363% faster than the next-fastest system, an all-flash array. IT departments report a 5X increase in performance, which sounds completely incredible until you see it for yourself.

I/O optimization is complementary to faster storage systems. It can speed up an older array, extending its life, and it can deliver the maximum IOPS possible from an NVMe array, even though NVMe itself employs some parallelization. You may want to read this IDC report on using Parallel I/O to accelerate hyperconverged infrastructure to learn more.

A Little Flash can go a Long Way with DBLAT

Let’s say you have identified storage media as the bottleneck for your systems, and investing in AFAs is the right solution for your performance problems. Your storage vendor will, of course, recommend that you replace all of your storage with their shiny new AFAs. However, there are a number of problems with this approach:

  1. It can be very expensive.
  2. You may need to migrate all of your data to the new storage arrays.
  3. You fail to leverage any value left in your current arrays.
  4. In a few years, when another ‘disruptive’ technology comes around, probably from a new vendor, are you going to replace everything again?
  5. You are betting your future on one vendor. They like locking you in for as many years as you will allow them to.

Another approach could be to buy ‘a little flash’ – one high-performance array. Now you have to decide which applications are going to get to enjoy the faster technology and which ones will linger on lower performance systems. In most cases, the decision is made based on who complains the most, which does not seem to be the best approach.

There is a better way.

Dynamic Block-Level Auto-Tiering (DBLAT) is a super-intelligent system that uses flash in the most efficient way. The result: a small investment in flash can yield dramatic performance improvements for all applications.

DBLAT is a capability of DataCore software-defined storage (SDS), which provides an intelligent software layer across all storage systems. It abstracts multiple vendors, types, and configurations of storage and creates virtual storage pools. Once SDS is in place, you won’t have to do another migration, ever, as the software moves data around as needed.

(OK, I admit I just made up the “DBLAT” acronym as it makes the tech sound cooler, but SDS is a real acronym.)

Auto-tiering moves data to the storage system that delivers the best performance, based on the observed performance profile for each application. Auto-tiering with SDS allows the user to do that across different vendors. So an all-flash array can be added, and the system can automatically move the apps that require the most performance to that array.

Dynamic auto-tiering means this optimization is happening continuously, using machine learning to observe the requests and performance needs, adjusting as needed. Now, block-level auto-tiering is where things get really interesting.

Imagine you add a little NVMe storage, direct attached, to the same server running the SDS platform. SDS then identifies not which pools, apps, or LUNs require auto-tiering, but which blocks of data, and moves just those blocks to the NVMe storage.

Let’s say you have a product database with millions of products, but only 5% of those products are very popular. The system will move only the blocks where these products are stored to the NVMe storage so they can be accessed incredibly fast, leaving the product catalog items that are not accessed frequently in the slower storage array. This frees up the NVMe storage to make other blocks available to other applications.
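As a rough sketch of the policy (not DataCore's actual implementation, which runs continuously and in the background), the toy class below tracks per-block access counts and promotes only the hottest blocks to a small fast tier. All names and numbers are invented for illustration.

```python
from collections import Counter

class AutoTier:
    """Toy block-level auto-tiering: keep the hottest blocks on the fast tier.

    `fast_capacity` is how many blocks the (small) flash tier can hold.
    A real SDS layer would also demote cooling blocks and move data in
    the background; this sketch only shows the promotion decision.
    """
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.hits = Counter()

    def access(self, block_id):
        self.hits[block_id] += 1  # observe every read against a block

    def rebalance(self):
        """Return the set of blocks that should live on the fast tier."""
        return {b for b, _ in self.hits.most_common(self.fast_capacity)}

tier = AutoTier(fast_capacity=5)       # flash holds only 5 of 100 blocks
for block in range(100):
    tier.access(block)                 # every block touched once
for block in (3, 7, 42, 56, 91):       # the "popular products" get hammered
    for _ in range(50):
        tier.access(block)

fast_tier = tier.rebalance()
# Only the hot 5% of blocks lands on flash; the cold catalog stays on
# the slower, cheaper array.
```

The economics follow directly: a flash tier sized for the hot 5% of blocks serves the vast majority of requests, without buying flash for the cold 95%.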

Now imagine this is combined with dynamic intelligent caching, Parallel I/O-optimized request scheduling and other patented technology. The best part is this happens automatically, across all of your storage systems, new and old, and is included in every license of SDS at no extra cost, with a perpetual license.

Auto-tiering is a technology that can improve performance in almost every IT system that uses different types of storage systems. More importantly, it can help achieve dramatic performance improvements with a relatively small investment.

Learn more about auto-tiering here.
