
Top Trends You Need to Know: Hyperconverged, SDS, Flash, and Cloud Storage

Webcast Transcript

Danielle: Good morning and good afternoon, all. My name is Danielle Brown. I’m a marketing manager for DataCore Software and I will be today’s moderator. We’d like to welcome you to today’s webinar, “Top Trends You Need to Know: Hyperconverged, SDS, Flash and Cloud Storage.” Before we begin today’s presentation, I’d like to go over just a few housekeeping points with everyone. This presentation is being recorded and will be available on demand.

You will be receiving an email from BrightTALK with an on demand link after the webinar. Also, feel free to type in any questions during the presentation and we’ll address those at the conclusion of this webinar. And lastly, we’ve also provided a PDF version of this presentation along with a few other links under the section called Attachments, for your convenience. So with that, I’d like to kick off the presentation by introducing one of our very special presenters today, our CEO, president and co-founder, George Teixeira. George?

George: Thank you very much for the introduction. I’m also here with Sushant Rao, who’s our senior director of product marketing. And I’d just like to say I’m always excited to talk about DataCore, but it’s an interesting perspective having been in this business a long time and really being part of the pioneers who actually invented storage virtualization back in the 1990s, and watching that evolve and grow into what we call Software-Defined Storage and also into hyperconvergence and all of those kinds of capabilities.

So we’ve got a lot of experience. And one of the highlights each year is that we do put together our annual software survey around this. And I think it’s been really interesting to see how the world has changed and what kind of trends there are, and this year we’ve gotten something like 426 IT professionals doing the survey; we want to make sure we’re sharing that information with you and, you know, kind of giving you our thoughts and analysis of what we’re seeing. Sushant?

Sushant: Exactly right, George. So I think – let’s go straight into some of the results. So let’s start off with kind of the first one, which is what is driving the need for software-defined storage? As you can see on the screen, right, the four primary ones relate to simplified management, which was said by 55 percent; future-proofing infrastructure, 53 percent; then avoiding hardware lock-in, 52 percent; and then extend the life of existing storage assets, which was 47 percent.

So George, what strikes you about these drivers?

George: Well, you know, it’s interesting, because when we started the company, we actually talked about software driven storage and the idea that software is the key to really where things are going. And it’s frankly taken a long time, but if you look at the beginning of that approach, we really looked at how you could pool and simplify management from day one. So it’s still intriguing to see that the simplification and the ability to work with different storage is still key.

And by the way, I have to point out that when we ask this in the survey, it’s not just different vendors, it’s different makes and models. I mean a lot of times people have storage from, let’s just say, one vendor like HP, and what you find is, because of all the acquisitions, they may have 10 different types and models and brands within there, and none of them speak to each other.

So that’s still the obvious first point. And I think the others are just sort of obvious. I mean if you’re going to do a software approach, what you’re really doing is staging yourself for the future. But you also want to make sure that existing investments you have are still able to play in that storage world. So to me these are the logical places, and if I go back even over the last six years that we’ve been doing these surveys, these tend to still be the top drivers.

Sushant: Yeah, and the other thing that kind of struck me is, you know, if you really think about what’s driving every one of these, it’s a need to lower cost, right? So simplified management is an opex one, right, reducing the amount of time spent. But the other things are all related to capex, right? Future-proofing so you can add in new things without wasting what you have; avoiding hardware lock-in, right, giving you choice so you can pick and choose the one that best fits your existing environment; extending the life of existing storage assets: all relating to being able to get more out of the investments that you’ve already made in your infrastructure.

George: Yeah, and I think besides the cost, I mean anybody who’s managing storage knows we’re living in a world of change. And words like future-proofing and avoiding hardware lock-in and extending the life are all about dealing with change. So the more you are in a software-defined world, the more flexible you are, and you can deal with whatever the change is that’s coming, whether it’s to migrate or to upgrade into what’s coming in the future.

Especially with new technologies that are always coming on board.

Sushant: And it’s interesting. Because sometimes we get these requests from, you know, analysts saying, well, give us your predictions for five or 10 years down the line, and the thing that strikes me is, it’s so hard to predict even three years out, right? And so these capabilities that you’re talking about, where the intelligence in the software allows you to, you know, kind of mix and match what is maybe the technology du jour now, but may not be in three to five years.

George: Exactly.

Sushant: So let’s move forward and, you know, look at some of the other things that we saw around software-defined storage. So this question is really related to what it is that people want to get from their infrastructure, right? So there’s cost efficiency, said by 60 percent; disaster recovery capabilities, 65; storage expansion without disruption, close to three-fourths; and then business continuity, which really means, you know, highly available systems, was said by more than four out of five respondents.

What’s the trend that you see, George?

George: I think, to be honest, the one that kind of still surprises me is how strong the business continuity and disaster recovery type capabilities are there. I think what’s happened is, while everybody believes the architectures are more and more reliable, the pain of suffering any kind of downtime or loss is just now so critical, it’s basically, you know, a life critical situation for most companies.

So I guess it doesn’t surprise me that that’s, you know, so key. I will tell you that, you know, if I go back years ago, people assumed that software wasn’t the real way to get to those kinds of levels of reliability. And what they were assuming was, you would just do, you know, redundant hardware and the hardware was going to be the way to solve it. And what’s interesting is, with basically all the software features, it’s really a software game.

So now the idea of doing sort of, you know, the ability to mirror in software and do all of the features in software I think is the real key for that. So I guess the continuity is a big one. You talked already about the cost efficiency. I think that’s endemic across all of the approaches here. So you know, I guess like I said, the business continuity, from my standpoint, I’m really glad to see it’s still the primary driver for a lot of people because I think it’s one of those where people sort of assume it but when you have the problem you really know you need it.

And I think with a lot of the failures out there, and even downtime from cloud providers and stuff like that, you know, companies are just realizing they cannot afford any downtime today.

Sushant: Yeah, it’s really interesting because, you know, I always give the example of, you know, how much can you get done when you don’t have access to communication systems like email, right? Now think of mission critical, business critical systems; if they go down, what’s the impact to the organization, to the company? And I think that’s what’s showing up in this.

So let’s move on to the next slide and talk a little bit about performance, as well as how it relates to flash. So we asked a question about virtualization, right, and what the surprises were after virtualizing mission critical applications. So what you’ll see is about 31 percent said they needed shared storage to make clusters highly available; 29 percent found application response time was slower than before virtualization. And lastly, 29 percent as well said determining storage requirements became more difficult.

So the one that, George, I wanted to get your thoughts on was the application response time. You know, given that companies have been going through virtualization for 10 years, I’m surprised that people are still, I guess, surprised by the slowdown after they virtualize, especially their mission critical systems.

George: Well, yeah, I mean I think what’s happened is people just assume that the virtual machines are exact copies of the physical machines. And you know, to be honest, when you’re still running on a physical machine, you can get more performance. And a lot of that is because of the resources and all that; especially when you virtualize, you’re tending to share a lot of resources.

The truth is, that’s getting better and better. And I think new technologies like parallel IO and stuff like that, and the way you can cache and use memory, is key. But a lot of technologies still haven’t made it out there. So I think on the application performance, one of the things I’ve seen there, the primary issue is typically when you have transaction oriented workloads, especially with a lot of write traffic, things that are doing a lot of fundamental IO. And in those kinds of cases, just throwing more computation at it isn’t going to solve the issue.

And what happens is, you’re running into a lot of IO contention, and therefore, you know, again, trying to share all of that, if you would, on the same server, where you’re basically contending for the resources to do IO, becomes an issue. So here is an area where I think, you know, smart storage that can actually get more out of that hardware and can spread the workload to get better application performance becomes key.

Sushant: Well, and you’re hitting on something that I think is really important to note, which is that when most companies have done root cause analysis, they’ve found that it’s not necessarily the server, it’s not the application itself; more often than not, it’s the IO and storage that’s the weak link, so to speak. Right? So that’s where the bottleneck is. And it slows down the entire application.

George: Yeah, and I think the other thing is, you know, when companies started to virtualize, a lot of them moved things like virtual desktops, and maybe less demanding applications, over first. The ones that have been the most reluctant to virtualize are things like databases, and especially those that are much more transaction and IO heavy, and you know, there the benefit has been less compelling to go to the virtual model. And there are other reasons for it, things like supporting fiber channel, which often goes along with being tied to the database.

In a lot of these cases they don’t do as well when you go onto the virtual platforms. So there are other reasons here that I think have made it a little bit slower to go that way, even though it is a trend where people are going to go more and more to virtualizing those mission critical applications.

Sushant: Yeah, I mean I think the flexibility that virtualization allows them is something that they want. But when I talk to customers, they really talk about, you know, how do I get the benefits of virtualization from a flexibility and agility perspective, but how do I not pay the performance penalty of virtualizing, right? So this is something that companies do have to watch out for when they are virtualizing their mission critical or their key applications.

So let’s go on to the next slide, which talks about which applications have the most severe performance issues related to storage. So you can see, on a scale from 1 to 5, where 1 has the most issues and 5 has the least, it’s, as you say, George, databases at number one. Interestingly enough, VDI and Enterprise Applications, ERP, CRM and J2EE, are almost tied for second.

Then come Web and Mail Servers and then File and Print. I mean I was a little surprised that VDI is so high. But maybe I shouldn’t be, because, you know, it’s someone sitting at a desktop and hitting a key and then expecting a response, right?

George: Yup. No, I mean I think this kind of speaks to what we just discussed, and you know, if I look at the virtualization pyramid, it’s kind of following that. I mean the original folks did a lot of test and development; they moved their file and print, Web and mail servers, and then the VDI; and now they’re going to the harder stuff, and those harder applications, whether enterprise or database, consume a lot more CPU horsepower, more memory, and therefore they’re the harder ones to bring over to the virtual world.

Sushant: The other thing that becomes interesting here is the difference between scale up and scale out. Right? Some applications are more suited to scaling out, so VDI, if you need to get more performance, you can scale out, right, by adding more boxes, by adding more nodes, and be able to do it. But that’s not true for databases and enterprise apps. I mean for those, you know, there’s only so much scale out you can do. You know, if you’re trying to do analysis or lots of transactions, at some point you have to really look at scaling up, right?

George: Yeah, and I think these, you know, these applications, the ones you just mentioned, those are the ones where getting more performance is key. And you know, it’s interesting because if you look at going to virtual, it’s a stepping stone to going to the cloud. I mean it’s kind of like let’s go from the physical to a virtual. The virtual then allows me to become software defined. As I become software defined, it is the bridge into the cloud.

So people understand they have to do this. The problem is, if I’m not going to get the response times that I expect out of database, and I’m going to take a step back, that’s not acceptable. So yes, I’m throwing more performance at it by CPUs and stuff, but we still have a big challenge, especially in the IO and the storage world.

Sushant: Yup. And the other thing that you mention about cloud services, right, a step toward cloud, is even when you look at the cloud providers, they have different sizes of, you know, essentially virtual machines, right? From kind of small to large. Because they do understand that some applications just need a larger box, right? It’s a virtual box but it’s still a box, vs. other things where you can get by going with smaller boxes, but essentially scale out and just keep on adding more of those instances.

So that’s where something like VDI fits. But again, right, for something like a database, you really have to think about how do I scale up. So moving on. We asked the question about what applications have the most issues. Then we looked at what people prefer, you know, in terms of their approaches to solving these performance issues. So again, one is most preferred, five is least, and these are the top five that came out.

So you know, kind of first was all-flash arrays. Then came software acceleration on host machines. Then switching to an in-memory database. Then was moving to a private or hybrid cloud infrastructure. And then lastly, at least in the top five, was rewrite the application. George, what strikes you about the responses?

George: Actually, it’s not surprising to me. I mean all-flash arrays, when they came out, the hype was everything will go faster. And it seemed like the easy solution. I mean people, if they can get a piece of hardware and it makes a difference, are going to do it. Now obviously there have been cost issues, and there are a bit of issues around the write performance and stuff. But I think all-flash arrays, just as a general technology, are making a difference.

It’s making things, you know, much faster in a lot of ways. The one other thing to realize, though, is that DRAM and the, you know, sort of memory approaches are still, you know, six times faster than all-flash arrays. So there’s still room to go. The other options we asked about, whether it was software acceleration, switching to in-memory, if you actually look at each of those, there are a lot of issues with them. I mean software acceleration on the host machine, to be honest, a lot of those required some specialized work and therefore were disruptive.

Switching to an in-memory database, again: one, it’s often very costly; secondly, it can be disruptive. And you know, moving to the private cloud, I think, goes back to the issue that you mentioned; if you can scale out and get the performance, that’s not so bad. But a lot of times you just can’t scale up and get more performance. And the other side is, on the public cloud it can get pretty expensive. So that’s another issue. And rewriting the application, honestly, is one of the last things people really want to do. So they’ll only do it if it’s a really critical situation. I know we even asked other questions like do you want to move to a supercomputer and things like that. But I think in general, what I got out of it is that flash is the obvious, quick way to try to get some more.

But you know, you really have to look at the cost and whether it really applies. There are a lot of applications where it may not make a difference. So as we look back at the previous slides that we just discussed, it’s really the ERP, the Windows virtual desktop and the database workloads; those are the places where it’s really going to make a difference.

Sushant: The other thing that struck me is, I think people have taken multiple steps. I think they’ve tried the all-flash and then, because that hasn’t really fixed the issue for certain applications, they have looked at some of the more esoteric or, you know, kind of more difficult steps, such as saying, oh, well, going to all-flash didn’t help; maybe I need to then look at the next step, which is the in-memory database, or rewriting the application. Right?

If you have performance issues with a critical application, you know, if there was kind of a magic bullet that solves all these issues, you would do it. And I think, you know, so generally people say, oh, let me try the all-flash, right? And if that doesn’t work, then they try some of these other things. Because really, you know, switching to in-memory, the people we’ve talked to who are considering that, it’s a long process. It’s not something you can do over a weekend vs., you know, adding an all-flash array; you know, you can get that deployed in your infrastructure potentially over a weekend.

Some of these other things will take much longer, and I think it’s because the all-flash isn’t quite the magic bullet that it’s made out to be. So let’s move on to the next one. So we asked about flash storage adoption, right? And you can see the scale. Right? The number of people who are not using flash has gotten smaller and smaller every year. Right now it’s, you know, less than 15 percent. But the other interesting thing is that if you look at the ones that are using flash for more than half of their storage, it’s, you know, say 10 percent roughly.

So the majority of people seem to be in the middle. You know, George, what does that tell you in terms of those comments that, oh, everyone needs to move to an all-flash data center?

George: Well, I mean I think it’s an economic issue also. And it’s kind of interesting. I mean I’ve watched all these trends, and when you listen to the hype, everybody goes, well, flash is obvious, everybody moves to flash. And they assume, you know, everybody will move so quickly. But it takes a period of time for people to move, even when the direction is there. Now part of this is the flash cost; a lot of it is still expensive. The other side is people have massive investments in existing hard drives. And you know, it’s a little bit crazy to assume that people are not going to try to use that. And this is one of the big drivers of software defined storage, the ability to take the existing and get more out of it. And you can do that with caching and other techniques which again allow more of the IO to be in memory, so you approach the flash type speeds.
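To make the caching idea concrete, here is a minimal sketch of keeping recently read blocks in RAM in front of a slower disk pool so that repeat reads are served at near-memory speed. The RamReadCache class and the read_block interface are hypothetical, purely illustrative of the technique, not how DataCore or any particular product implements it.

```python
from collections import OrderedDict

class RamReadCache:
    """Minimal LRU read cache held in RAM in front of a slower block device.

    `backing_device` is any object exposing read_block(block_id) -> bytes;
    it stands in for a pool of spinning disks and is purely illustrative.
    """

    def __init__(self, backing_device, capacity_blocks=1024):
        self.backing = backing_device
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, ordered by recency

    def read_block(self, block_id):
        if block_id in self.cache:
            # Cache hit: served from RAM, no disk IO at all.
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: go to the slow device, then keep the block hot in RAM.
        data = self.backing.read_block(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data
```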

Now, that said, flash I think will continue to progress and will become more and more used by the systems, and it’s very obvious if you look at where we started a couple of years ago. You know, a lot of flash companies and flash vendors erupted. I mean three years ago I think there were like 150 new flash guys. Everything you read was about a new flash company. Today, pretty much everybody that was selling storage is selling flash. And where it’s going obviously is going to get into more, you know, PCIe cards and NVMe and all that.

So there’s a lot of direction here. But I see it as goodness. It’s just going to, you know, help make it more performant, but I will also remind the world that just because you’ve got more performance, it’s amazing how people find ways to use it.

Sushant: Yeah, definitely. The other thing that strikes me about this is, as you were talking, one of the interesting things is that really flash is for data in motion, as some people call it, right? Hot data, warm data. But when you have data at rest, cold data, you know, it doesn’t make sense to put it on flash. And I think this is one of the concerns. You talked about economics earlier.

So you know, if most of your data is at rest, does it really make sense to put it on this really high performance, you know, flash? You know, even excluding all the issues with write performance, isn’t there an economic argument to be made that, you know, flash is going to have a place but it’s not necessarily going to, you know, take over the data center, so to say?

George: No, that’s a great point, Sushant. You know, I started out in a period where it used to be called hierarchical storage management. And that’s going to date me back to the IBM days and all that stuff.

Sushant: Oh, come on, George, you’re too young for that. How do you even know about that?

George: But in reality, the same problem has existed all the way through, and that is, as new technologies and storage come up, there’s always a speed and time gap between one layer and the other. Today if you look at it, if I want to go out to the cloud and look at things like Glacier storage and all that, you know, there’s a speed issue. When I get to existing disk drives, I have the high speed, you know, 15,000 RPM drives and I’ve got the slower drives.

Even in the flash space, I’ve got multiple versions of speed. I’ve got PCIe cards, etc., etc. And new versions are coming out that are going to, you know, meld the DRAM with the existing flash. So what you need is really smart software defined storage that has auto-tiering type capabilities. And I think that’s one of the issues. The customer itself really is thinking about the economics and getting their job done. In other words, what’s the service level agreement in terms of performance?

The job of our software, or the job of software defined storage, is really to make that invisible to the consumer. The consumer basically just says, get me the lowest cost data and get it to me in the fastest way to meet my objectives for the applications. That’s really the key thing. So you know, while flash storage adoption is important, if you really look across the enterprise, it’s really a question of what kind of storage assets you have. And if you have a massive amount of existing storage and you’ve got some flash, you know, can you move the data to the flash when you need it, and can you move the data to the cloud if you just want to, for instance, archive it?

Those are the kind of movements that are happening and you want software that’s smart enough to do it.
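As a rough illustration of the auto-tiering idea George describes, the sketch below tracks how often each block of data is touched and migrates it to the tier whose speed and cost match that access pattern. The tier names, thresholds, and the Block/retier structures are invented for illustration; a real product would also weigh service level agreements, capacity headroom per tier, and the cost of the migration itself.

```python
from dataclasses import dataclass

# Illustrative tiers, fastest and most expensive first.
TIERS = ["flash", "disk_15k", "disk_slow", "cloud_archive"]

@dataclass
class Block:
    block_id: int
    tier: str = "disk_slow"
    access_count: int = 0   # accesses observed in the current measurement window

def retier(blocks, hot_threshold=100, warm_threshold=10):
    """Move each block to the tier implied by its recent access frequency."""
    for b in blocks:
        if b.access_count >= hot_threshold:
            b.tier = "flash"            # hot data earns the fastest tier
        elif b.access_count >= warm_threshold:
            b.tier = "disk_15k"
        elif b.access_count > 0:
            b.tier = "disk_slow"
        else:
            b.tier = "cloud_archive"    # untouched data can be archived to cloud
        b.access_count = 0              # reset the counter for the next window
    return blocks
```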

Sushant: Definitely. All right, so let’s move on to the next slide. So we talked about performance, we talked about flash. Let’s switch gears a little bit and talk about hyperconverged. So we asked a question, right, first of all, about the definition of hyperconverged. And we gave people different choices because, you know, vendors provide different options on the market.

So 41 percent said tightly integrated with the hypervisor but hardware agnostic; 27 percent said an integrated appliance, hardware and software locked together; 17 percent said hardware and software that you put together and update independently; 12 percent said a hardware/software bundle that can be updated independently; and 3 percent said other.

So George, you know, kind of what did you see when you – or what did you think when you saw these definitions and the kind of numbers?

George: I guess I was surprised and yet not surprised. It’s interesting, because with the first definitions of doing things more converged, let me put it that way, the obvious one was, could I take the hypervisors and also do storage on the same platform? And it seems like that definition is the one that, I’m surprised, came out here as the top definition now. You know, when Nutanix and others in that category really took off, if I go back two, three years ago, it’s interesting, they got to a point where people seemed to define it as, oh, it’s a Nutanix, it’s an appliance. And the definition quickly switched to becoming a simple appliance that you would just turnkey and it would do the work.

I think as the customers have gotten more sophisticated, they’re back to, look, I’ve got a much larger problem too, and you know, where again I’ve got investments with hypervisors and storage. Where I can integrate those two together, I will. And you know, the reality is, we’re going to have all of these. I mean I think the integrated appliance and a lot of the applications like ROBO, Remote Offices –

Sushant: That makes sense.

George: You know, makes a lot of sense. But I think, you know, where we’ve tried to force fit, for instance, hyperconverged into data centers, especially where they’ve got, for instance, databases with a mix of, you know, fiber channel networks and they have to connect to old Unix boxes and things like that, or Solaris and IBM AIX, you know, there’s a big mix out there.

So I think what’s interesting is hyperconverged started out with sort of this hypervisor connectivity issue. I think that’s still going. The other counter-force was, no, it’s really an appliance. The appliance makes it much less flexible, but also provides a lot more simplicity and automation. Those two are sort of fighting each other right now. The good news is I think the consumers are winning both ways. Because software defined storage used to be more complex.

It continues to get more automated, more simplified, and allows you to do a lot more with it. And this line between sort of software defined storage and hyperconverged is starting to blur; that’s really what’s happening. And now it’s really a question of how you want to deploy.

Sushant: And I think that comment you made at the end is really right, right? You have to understand the use case that you’re trying to address, and then say, what makes the most sense? You know, for example, you talked about the integrated appliance, but if you look at the fourth choice, which is a hardware/software bundle, that also speaks to the same thing, which is, hey, I need something somewhat simple, I just need to get it deployed, right?

Where does that make sense? Something like ROBO, right? Where you have a remote office. Their needs aren’t too great. Or maybe it’s something like VDI, where it’s cookie cutter, right? You just need, in one sense, a reference architecture that you can, as you scale out the number of users, just cookie cutter out the number of boxes you need. So that seems to kind of make sense. But I think what we talked about earlier, being able to integrate with the hypervisor, really speaks to the ability to simplify the management of your infrastructure, right? When you are looking at someone who is saying, hey, I’ve got larger management issues, right, you don’t want another silo, which is what, in some cases, a hyperconverged offering can bring, right? That’s problematic.

George: Yup. I mean, you know, the other thing, if you look at all this, is that whether it’s software defined storage or hyperconverged, both sort of build on the same general theme, and that is you’re trying to work on commodity servers, or off the shelf, you know, x86 type servers. And what you’re trying to do is create an approach that allows you to create almost sort of a Google Cloud on premise.

And I think both approaches make sense. I mean clearly the software defined storage provides more flexibility. The appliance type approach provides a much more turnkey, less sophisticated approach for some users. So the use cases you mention are really what you need to consider when you do this. And I get sort of tired when I talk to different people where I ask them, you know, why are you trying to buy this, and they’ll go, well, I want to hyperconverge, and then they describe something that, when you look at it, the requirements are really for a data center; it’s got fiber channel, they’re trying to do a database.

And you go, I’m not sure what that is.

Sushant: Yeah, yeah.

George: So there is a mix here.

Sushant: I mean, you know, it used to be we talked about how flash was the, you know, kind of the savior of the data center. And in some ways hyperconverged is almost that, oh, you got a problem, let’s go hyperconverged, right? Without really thinking through what is the use case and, you know, what is the kind of right solution. It may be hyperconverged, it may not, but I think it’s really important to start off with saying, you know, what are you trying to do, what’s your goal, what’s your use case, and then figure out what’s the right solution.

George: And I think, just very importantly, they’re both great technologies. Software defined storage and hyperconverged are both leading us to an approach which really is built on a software model.

Sushant: Yup.

George: And the fact that you’re using hardware as, quote, an appliance, even those kinds of things are breaking down. I mean if you even look at Nutanix, they’re also trying to move to a software only approach. So you know, the appliance to me is a deployment model that kind of makes it simpler to get started, but once you get started and you start learning about the challenges you’ve got, the software flexibility is what’s key. So getting to software defined gives us the way to really future proof ourselves.

Sushant: Definitely. So we then asked in the survey where companies were with their deployment of hyperconverged. And you can see the not considering group: nearly a third of respondents are in that. Another third said they’re strongly considering but they haven’t deployed yet. So that’s two-thirds that haven’t deployed. Then 20 percent had a few nodes; 7 percent had a few major deployments; 6 percent have standardized on it. So George, I thought these were really interesting numbers, you know, given how much buzz hyperconverged kind of generates in the market.

George: Well, it’s interesting because, you know, if you look at the overall storage market, it’s huge. And you know, we always hear all the hype and buzz that’s in the news, and obviously things like IPOs and stuff like that. So those that are growing quickly and are making a lot of noise get a lot of attention. But when you look at it, I mean we’re still talking about a storage market that’s 20, 30 billion dollars in size; hyperconverged is still relatively small.

So this doesn’t really surprise me. What I do know is that, you know, you’re going to see still a lot of growth in this area. I think hyperconverged is going to be one of the fastest growing. If I look at Gartner, even their own analysis says that by 2019, 30 percent of all storage will either be software defined or hyperconverged. So there’s a lot of space and growth here. I do find it, you know, interesting that if you look at some of the hyperconverged companies and the way they talk about it, you would assume that 90 percent of the market is already there. It’s not.

Sushant: Yes. And I think, you know, if you’re kind of bothered by seeing the hype, this is a reality check which says that two-thirds of people have yet to deploy hyperconverged. And I think that’s important to keep in mind, because you may be hearing from, you know, your peers or even, you know, sales reps knocking on your door and saying hyperconverged is the panacea for all your data center issues. And I think what this should tell you is, maybe not.

Maybe you should take a step back. I mean you should, you know, obviously look at it; as George mentioned, right, the key is software, right. But first you have to understand what it is that you’re trying to do. If you’re trying to do databases, if you need high performance, if you’ve got a fiber channel infrastructure, is hyperconverged the right way? Possibly. But you know, you really have to find the right solution for that.

So let’s move on. And we asked the question of, you know, what are the top reasons that people are deploying or evaluating hyperconverged? So 48 percent said simplified management; 39 percent said easy to scale out; and just a little over a third said reduce hardware costs. So George, you know, this is somewhat similar to what we saw with the, you know, business driver for software defined storage earlier, right?

George: Yeah, I mean I think on these, the simplified management one, no doubt about it. If you have fewer options, fewer capabilities, you can make it an appliance, you can make it more turnkey. And especially in companies where you have a less sophisticated technology staff and you just want your office workers, for instance, to run something, it makes a huge amount of sense. That also covers, you know, the easy to scale out one; that one I’m a little – I see the numbers and I agree that’s the marketing of it.

I will tell you, after going out and talking to, you know, hundreds of customers and seeing what they’re deploying, what’s interesting is, typically with a lot of these systems, when they start out with a hyperconverged deployment, in order to have enough power and to do the high availability, often they start with something like four nodes.

Sushant: Yup.

George: And when I go check in on those customers a year later, what I find is, they’re up to a dozen nodes.

Sushant: Yup.

George: So yes, it is easy to scale out. But the other side is, as these things go from a few nodes, when you start getting a lot of them, all of a sudden they are a lot more complex to manage. And it actually goes counter to the third point that’s being shown there. What I’ve actually heard is a lot of customers saying, after I’ve had this in place, my costs are actually not as competitive, because I could have done the same thing with, you know, more commodity hardware and software defined storage.

So I think it’s a bit of a tradeoff. The reduced hardware cost I think is sort of what people want to believe, and I think it’s probably true in some of the initial deployments. As it goes over time, I’m not so clear on that one.

Sushant: Yup. One of the things that we’ve seen is that, you know, if you take two, you know, two vendors, one who can outperform the other with a similar sized box, over time you’ll have fewer boxes with the vendor that can provide more efficiency and performance in a single box, right? Versus someone who is less efficient.

I think one of the things is that sometimes the whole oh, it’s easy to scale out, masks the fact that the individual boxes, you know, regardless of their capabilities in terms of RAM and CPU, are inefficient. And you know, they get around it by saying, oh, let’s just keep on adding more boxes. Oh, you’re stuck with, you know, four boxes. Here, let’s add another four and that’ll solve your performance issues, right? Versus saying, okay, well, here’s where I am, here’s where I need to be in a year. Can’t I, you know, start with two and then go to four, right? It’s much better to be able to essentially scale up your boxes to handle the load than to just keep on adding boxes.

So let’s move on to the next question. Right. What are the use cases related to hyperconverged? So remote sites/ROBO was about 15 percent; VDI, 27; enterprise applications, 28; data center consolidation, 28; databases, a little more than a third. But you know, keep in mind that the largest group of respondents, nearly 40 percent, were not currently deploying.

George, what did you find interesting about it?

George: Well, I mean the one counter argument that I saw was the database one. Because it’s pretty obvious that what I’ve heard most in the field is, I’m still reluctant to put my database on hyperconverged because I don’t believe it’s got the performance and the capabilities right now. On the other hand, you have to really look inside what those databases are. And what I found from talking with some of the survey folks and all is that there’s sort of two categories of databases even there.

There’s the databases that are much more transaction oriented. And sort of your lifeline to your business, your ERP and all that. And then there’s databases which are basically just a bunch of data storage –

Sushant: Kind of your batched job.

George: Batched jobs and stuff like that. So I think what’s happening here is, where they can move the databases over, they’re trying. But I guess that would be the one that stood out for me. I still, when I’ve talked to most users, it really has been ROBO, VDI, and less business critical stuff. Or at least there’s a belief that if you’re really going to try and drive a lot of IO or transactions, that that’s less of a capability to put on hyperconverged today.

Sushant: Yeah, the other thing I will say is, you know, because I was trying to look at this and, you know, given what we know about hyperconverged, that it’s not really a high performance platform, one of the things that struck me like you, right, was why databases. And as I went around and started talking, what I realized is, you know, if someone wants to create a database cluster, right, get rid of the noisy neighbor problem, how do they do that?

Well, the simplest way to get a cluster up is hyperconverged. You can get your four nodes, you don’t have to buy separate storage. And so if you have, you know, a few databases like you said that are doing batch jobs, and you want to create a cluster just to manage those, hyperconverged can be fine for that type of use case, right? Again, it’s not about high performance, but it allows you to get rid of noisy neighbors, right? It’s shared infrastructure, but only for a set of databases. So you know, that was kind of one of the things I learned: oh, that makes sense. Again, right, you have to understand the use case, you have to make sure that it matches, you know, kind of the solution that you’re looking for.

Moving on, we also asked the question, you know, if someone is not looking at hyperconverged, because obviously, you know, a significant segment of the respondents was not deploying hyperconverged, we asked why. So the number one by far was lack of flexibility, with 44 percent. Too expensive, 24; vendor lock-in, 22; doesn’t integrate with the current infrastructure, 19; and the last one, which was kind of interesting, is unable to scale compute and storage independently, at 17 percent.

George, you know, there’s some really interesting, you know, in a sense gotchas for anyone that’s looking at hyperconverged, right?

George: Yeah, and I think, again, this goes back to, you know, the initial belief that hyperconverged was going to be a cure-all for everything. And then after you deploy, you start realizing things like, well, if I buy from, you know, a particular vendor, I’m pretty much locked into that vendor. The cost, like I said, sounds good, but because you’re buying storage at the same time as your compute, you know, you’re buying in pretty big modules. So like I said, instead of four nodes, all of a sudden you wind up with 12 or 20 nodes. And you know, your costs and complexity get pretty high.

The other thing is, you know, I think honestly that whole flexibility issue really comes to bear, and it’s the flexibility of how I can work with the existing stuff and investments that I have. I mean you’re bringing these in, and where it’s a new, fresh deployment, I understand it much more. But if you really already have existing stuff, you’re now dealing with another silo of management that’s trying to deal with this as yet a completely separate environment from the rest of what you’re running.

Sushant: Yup. So if you look at number one, which is lack of flexibility; vendor lock-in, number three; doesn’t integrate with the current infrastructure, which is number four; and the last one, unable to scale; they’re all related to similar things, which is flexibility and choice. Right? When people are deciding not to go with hyperconverged, it really speaks to, you know, kind of what we said earlier: an integrated appliance makes sense in certain cases, but there are also cons, right?

The cons are lack of flexibility, you may be locked into that vendor, right? You can’t integrate with the current infrastructure. You can’t scale the compute and storage independently, which eventually becomes really expensive. Right? So I think it’s something that when people are looking at hyperconverged, they really have to think through, you know, kind of what happens. The other thing is, you know, this last one, unable to scale compute and storage; when I’ve talked to people at trade shows who’ve, you know, had hyperconverged for a year or two, that one really bugs them.

Because one of the things that they may not have thought about initially, but becomes very apparent over time, is, you know, when you go hyperconverged, especially with these integrated appliances, you’re essentially co-mingling your data and compute growth rates. And guess what? For most people, data is growing at a much faster rate than their compute needs. And so, right, one of the things that ends up happening is that the way you scale that capacity in hyperconverged is, you end up buying more boxes.

So you know, as you said earlier, George, you know, you may have started with four but a year later you’re at eight, 12, you’re like oh my God, right? Is this what my life has become?
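To put rough numbers on the growth-rate mismatch Sushant describes (the figures here are invented purely for illustration): if each appliance node ships with a fixed ratio of storage to compute, the faster-growing resource dictates how many nodes you buy, and the other resource sits idle.

```python
import math

def nodes_needed(capacity_tb, compute_vms, node_tb=20, node_vms=100):
    """Appliance nodes required when storage and compute can only scale together."""
    return max(math.ceil(capacity_tb / node_tb),
               math.ceil(compute_vms / node_vms))

# Hypothetical starting point: four nodes' worth of both resources.
capacity_tb, compute_vms = 80, 400

# Assume data grows ~40% a year while compute needs grow only ~10%.
for year in range(1, 4):
    capacity_tb *= 1.4
    compute_vms *= 1.1
    print(f"Year {year}: {nodes_needed(capacity_tb, compute_vms)} nodes "
          f"({capacity_tb:.0f} TB, {compute_vms:.0f} VMs needed)")

# Storage growth alone pushes the cluster from 4 nodes to about 11 in three
# years, even though the compute in those extra nodes is largely unused.
```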

George: Yup.

Sushant: All right, so let’s move on to the next category, which is cloud storage. So here we asked, you know, the general question of what are the top disappointments in storage. So 31 percent said cloud storage failed to reduce costs; 29 percent talked about managing object storage being difficult; and 16 percent talked about flash failing to accelerate. So this is a good segue to talk about cloud. And it’s kind of interesting that, you know, everyone talks about cloud as reducing costs, and a significant number of people say no, it didn’t really reduce my storage costs.

George: Yeah, that doesn’t surprise me. I mean if you’ve used the cloud, just for raw capacity it tends to be very low cost, but you know, it’s the kind of thing where once you put the storage in there, it’s very hard to get out. And the challenge is what you can move up and what you can move down. The problem is maintaining it in the cloud, especially when you’re trying to run with virtual machines and all that.

All of a sudden you get those bills back, it does get pretty expensive. And what’s interesting is, people aren’t doing the economics to really check, you know, what’s the cost of being in the cloud vs. what you could do yourself. And a lot of them are not looking at the way that software defined storage, for instance, could do more effective utilization of your existing storage. So you’re not having to pay as much for what’s in the cloud.

You know, what I don’t want to do is come across, though, as being negative on the trend to the cloud, because it’s obvious. This is a megatrend that’s hitting our industry. People want to put more and more into the cloud. The trick is to be selective. This is where I think things like auto-tiering come in, where we can really look at, you know, what needs the performance, what’s the stuff that you want to back up, what you want to archive. Because it’s very obvious that those are the kinds of use cases that are really going there.

Sushant: Yeah. And I think what you said, you hit upon, makes sense, right, which is what’s the use case, right? Again, with almost any technology, right, it’s not a panacea, there’s no silver bullet, it is definitely what’s the use case.

George: Before we jump off, too, I just want to say, you know, the object storage one is interesting, because even as a company we invested in doing OpenStack and putting a lot of effort around it, and what I’m saying is, object seemed to be something that got a lot of noise about three years ago. It seems like people are still using their filers, and I think a lot of what was called object, today, is object in the cloud.

So I’m not sure; on this one I think the disappointment is that today, if you talk about object, people are going, well, if it’s suitable for object, why don’t I just put that in the cloud, and they’re treating it almost like archive. And as far as flash, we sort of already covered it. Again, I think it’s not a cure-all. Obviously, if you have a lot of read-intensive work, it’s very good. If it’s write-intensive and transaction oriented, flash isn’t the cure-all.

Sushant: Sure, definitely. So let’s talk a little bit more about cloud. So we asked about the top use cases for public cloud. So at 33 percent was DR in the cloud; tied with that was back up to cloud, restore on premise; and then long term archive was 35 percent. The not currently deploying/using, just as a reminder, was about 40 percent. So I mean, George, this really does speak to being able to store your data, almost like the “in case of emergency, break glass” kind of use case.

George: Yeah, I mean I think this is across the board. If they can do it for backup, archive, that’s the number one use case in this. You know, I’ve talked to folks at Amazon as well as Azure, and they all say the same thing. This is the number one. Obviously when you start getting into the production, transaction oriented, real time stuff, it’s hard, because you’re really dealing with all the interactions to get to the cloud.

So this is why, you know, the public cloud is not the cure-all. It is and will continue to be a mix, and it’s going to be private clouds and public clouds working together. And I think hybrid is really sort of the long term answer.

Sushant: Yup, definitely. You know, I’ve talked to customers who have a secondary site but don’t necessarily have it staffed, or who even have difficulties just creating a suitable secondary site, you know, the equivalent of a DR site. So the public cloud does make sense as a place to keep your data, just in case something happens and you need to get access to it.

We also asked a question about the reasons, you know, looking at that 40 percent, that they weren’t deploying or using public cloud storage. So you know, this was significant enough that we just listed everything. So security was about 57 percent; sensitive data, 56 percent; regulatory requirements, 41 percent; control, 33 percent; performance, 32; and cost, 32. George, this really does speak to some of the issues we’ve talked about earlier. Right? Related to what’s the right use case.

George: Yeah, and to be honest, I think in these it’s interesting because I think the security is getting better, the ability to have it in different geographies, there’s a lot more control, for – for instance, in Europe, one of the big issues was, well, I need to make sure it stays within my country.

Sushant: Yup, my border.

George: But you know, Azure, AWS, are both doing those kinds of capabilities. I think this is an area where we’re going to see a lot of change going forward. And I think it’s going to get better and better. I think the fundamentals are still cost, performance, and control. You know, control’s an interesting one. It’s also fear. You know, people don’t want to lose control. They put it out in the cloud and what if something happens?

And you know, the world of moving from one cloud to another and the ability to move quickly back and forth into the cloud and out of the cloud is still a big issue. It’s sort of like in the backup days; everybody goes, it’s easy to back up; the problem is how to restore. Well, I think the cloud’s suffering from the same thing. It’s easy to put data in there. But like the roach motel, once it’s in, it never comes out. So –

Sushant: Well, I think also, on the control you talked about, there was that incident some time back where AWS went down because of some networking issue on their side. Right? So if it happened, you know, within a company, you have a team that you can check in with and say, you know, what’s the ETA, what’s the problem, right, when will this get fixed? But if it’s Amazon, you’re at their mercy, right? You’re assuming they’re doing everything they can, but there’s a real lack of control in terms of troubleshooting and, you know, getting updates on when this will get resolved. Right?

I think that becomes part of the issue in terms of control, especially if you have a critical application that can never go down. Right? You’re somewhat limited by that. So let’s – we’ve got a couple more slides before we wrap up. So we talked about the top applications for public and hybrid cloud storage. So VDI, databases, analytics, ERP. And again, right, none. Right? Again, George, I think it speaks to what’s the right use case.

George: Yup. I think we’ve kind of covered this one well enough. And again, people do want to bring more in their – part of the question here is because it says public and hybrid cloud. When we dug into this, what was interesting was most of the database and sort of the more hardcore enterprise applications, they’re thinking of it in terms of a hybrid cloud, where, you know, what you can do in the local area you do, but use the cloud, the public cloud for the backup and archive.

Sushant: Sure. And then next, we talked a little bit about storage spending, and so here we asked, you know, this first one is, what are people spending their storage budgets on, right? So you can see that the primary ones are software defined storage and flash, over 70 percent each; a little bit more in the middle are private cloud, hyperconverged, object storage, converged. A little bit less, you know, much less, is OpenStack; public cloud is somewhat in the middle.

I think, George, the things that we talked about earlier kind of speaks to this, right?

George: No, I agree. I mean it is interesting for me to see how software defined and hyperconverged are really going up there. And you know, being a company that’s been at the forefront of software defined for so long, that one obviously I always love when it’s at the top.

Sushant: Definitely.

George: But you know, flash will continue to be strong. And I think stuff like OpenStack and all that is still trying to find its home.

Sushant: Exactly, right. I think what this really says is, you know, there are some things that you can do, in terms of software defined storage and flash, to kind of modernize your storage infrastructure. We won’t cover this too much, but it’s there for your own reference; it goes into that yes column of where people are spending in a little bit more detail, right: 10 percent or less, or have they allocated 15 to 25, or 25 percent or more, right?

So again, it somewhat matches the general kind of trend that you’ve seen of how much they’re spending. But it just gives you a little bit more detail going forward. So with that, you know, as we’re kind of wrapping up, I do see a couple of questions. I think we have time for one or two of these. So there’s a question about can you share the survey details? How many responses, end users vs. vendors, size of companies responding?

So a lot of that information is in the report. We didn’t want to take your time kind of going through that. You can see that, you know, the report is available, it’s a free report, right. This is really meant to educate people on kind of what their peers in other companies are doing.

So you know, go ahead and go to that link and you’ll be able to get a copy of that report. One thing to keep in mind: these are all end users or end customers. There are no vendors or VARs in this. It’s actual people who are using and having to make purchase decisions on this kind of stuff.

George: And the overall number was 426 IT professionals that took this. And they were across both Europe and the Americas, and small, mid and large companies. So it was a pretty big sample. And of course not everybody answered every question. But you know, I think if you go through the report you’ll see the details.

Sushant: Yup. If you, you know, if you’re interested and you would like to discuss your storage infrastructure needs or issues with a DataCore Solution Architect, click that link. With a live demo you can, you know, spend some time talking about what it is that you’re trying to accomplish. You know, again, as George and I talked about, I think the key is the use case, right? What is it that you’re trying to do? And you know, DataCore has software defined storage. We’ve got hyperconverged. So it really depends on what your need is, what are you trying to do?

And if you’d like to try it out, we’ve got both software defined storage and a hyperconverged virtual SAN available. So with that, let me bring Danielle back on to finish up the housekeeping. I want to thank you for your time.

George: Yes, thank you.

Sushant: Danielle?

Danielle: Yes. Thank you, Sushant and George. Lastly, everyone, we’re always looking for ways to improve, so don’t forget to rate this webinar before you go. And don’t hesitate to reach out as Sushant said, to schedule a 20 minute live demo with one of our solution architects. Because we’re always looking forward to engaging with you. And with that we’re going to go ahead and wrap up. So have a great day, everyone, and thank you for attending.
