I’d like to kick off and introduce our amazing presenters for today. First, we have Scott Sinclair, Senior Analyst at Enterprise Strategy Group, and Alfons Michels, Senior Product Marketing Manager at DataCore Software. Alfons, would you like to kick it off for us?
Alfons Michels: Yeah, thank you very much, and a very warm welcome from my side as well. I would just like to give you a brief overview of what you can expect in the next 45 minutes, something like a mini-agenda. We will start with some valuable insights on the rise of the digital era, and we’re going to discuss some drivers of complexity and a lot of further interesting findings. This is followed by how simply software-defined storage can be implemented in your infrastructure, with some examples of what can be achieved, and especially how, so that you can modernize your IT and gain great flexibility. But it’s not only us providing information to you; it’s also you providing information to us and the others by anonymously answering the two polls we have during the webinar. After this brief overview, I would like to hand over to Scott to start with the first real topics.
Scott Sinclair: [Laughter] Well, thank you very much, Alfons. As Alfons mentioned, my name is Scott Sinclair, Senior Analyst with Enterprise Strategy Group. I’m excited to be here today to talk to everyone. For those of you listening who may not be familiar with Enterprise Strategy Group, or ESG, we’re an IT analyst firm with a heavy focus on research and strategy. We work with many of the top IT technology vendors, and some of you out there may actually be familiar with some of our validation work, as we have an engineering team that validates many of the leading technology solutions out there. Today, however, I’m going to be sharing some of our research data and really talking about IT and technology and the importance of IT flexibility. Let’s kick it off by focusing on maybe the major theme in IT right now, and that’s digital transformation.
As you can see on the slide, this is from our research, so this is of high-level IT decision-makers. 86%, so nearly everyone, agrees with this statement: if we do not embrace digital transformation, we’ll be a less competitive and/or less effective organization. I think it’s important to take a moment to really define what digital transformation is. Everybody’s using this term nowadays. I see it all over the place, and when I talk to CIOs and IT executives, they sit down and say, “Scott, what is digital transformation? How is this different than just buying new stuff, because IT has been buying new stuff forever? Or how is this different than just saying IT is important to business? Because frankly, IT has been incredibly important to business since its inception.” Well, digital transformation in its purest sense is about the fact that businesses are now recognizing that technology is not only essential to business, but that investing in technology and IT can actually increase revenue, create new opportunities, and enable a better and more competitive business.
So this idea of digital transformation is not just the fact that technology is important, because it is; it’s also the fact that companies are recognizing that the way they’ve been organized, the way they operate, and the technology they’ve been using are not going to be enough. The traditional approaches they’ve been taking are not going to be enough to carry them into this new wave, this new digital era of business, if you will.
Expanding on that a little further, on this next slide, we look at what companies tell us when we research their objectives for digital transformation, and we see this: the end goal is a better business. Businesses are increasingly turning towards technology to generate new and different opportunities, and you see some of these here.
One of the top ones is becoming more operationally efficient. We want to be leaner; we want to leverage data to do things in a new way and become a leaner and meaner operation. That is followed by wanting to provide a better and more differentiated customer experience. We want to leverage data and understand our customers so well that we do things that are different than anybody else.
The next two are actually very close. So 38% are saying we want to develop new data-centric products. This is not just new products. This is new products where the data itself becomes a product and how do we make money off that. 37%, very close, now this is just new products. We want to leverage the data that we have in our environment to better understand how do we invest in R&D and develop into new things that better serve our customers’ needs. And finally nearly 1 in every 3 organizations we talked to are looking towards digital transformation to develop entirely new business models.
So right here they’re saying, “Look, we have acknowledged that the data we have collected and the services that IT is providing can be used to do things in new and different ways and generate more money.” At the end of the day, that’s what all of this is about. All of these impact the bottom line, the profitability of companies. But really, if you think about it, four of the five are about increasing the top line. So IT is not just something that makes the business efficient and keeps the lights on. Today, given data and the way that we’re able to use data in this digital era, it’s about creating new revenue and driving the business.
Now all this sounds wonderful, but it has a cost, and that cost, I’m assuming, many of you in the audience are very familiar with: IT complexity. When we ask IT organizations and IT leaders to compare what they’re doing today to what IT was two years ago, two-thirds, 66%, say IT is more complex than it was just 2 years ago. Now, the key point here is not the fact that IT is complex. IT has been a complicated industry for a while. There are very smart people in IT, and that’s one of the reasons very smart people go into IT. What we look at, though, is really the timeframe. Two years should not be that long of a timeframe. If you think about it, the typical warranty on many infrastructure products is 3-4 years. So basically, by the time you buy something and have just started getting use out of it, your technology environment has already increased in complexity. I am seeing a question coming in asking what organizations we talk to. We talk to both midrange and enterprise across a wide swath of industries, so pretty much everyone.
I think this is actually a good time to do our first poll. I’d love to get some answers from the audience on what some of the top drivers of complexity are within your own environment.
So, on IT complexity in our environments: everyone in the audience can vote while we look at the data coming in. From our data, these are the top 6 drivers of IT complexity. We gave IT decision-makers a laundry list of things to choose from, and really what we’re seeing is 2 major themes emerge. The top 3 are all about this ever-increasing volume of things we have to manage: more end-point devices, more data, a higher number of more diverse applications. All of these are things that IT organizations have dealt with during normal company growth phases, but what’s really different now is that the increase has actually accelerated because companies have figured out,
“Hey, if we use data, we can make money off of it. Therefore, we’re hiring people that generate more data. We’re leveraging applications that need to use data in new and different ways, and all this is accelerating the things that leverage data or create data, as well as the need to store data.”
The second level of complexity, or the second theme, we’re seeing is the need to integrate new technologies. So now we have new technologies we have to learn and integrate, such as artificial intelligence, advanced analytics, and even blockchain for some industries. We’re seeing digital transformation, which I just talked about: nearly 1 in 4 organizations are saying my business now wants me to transform while I’m doing everything else and keeping the lights on, and this has added more complexity. Something else, which we’re actually seeing votes for in the poll as well, is the need to use both on- and off-premises infrastructure simultaneously. I think all of these things are making IT organizations’ lives more complicated.
It looks like, and I’m just going to take a quick look at the poll right now, many in the audience are focused more on the new-technology side. How do we integrate and migrate data across generations? How do we rearchitect for new technologies such as hyperconverged and NVMe? And how do we manage public cloud and hybrid-cloud environments, and integrate those two things? I think all of those align with our data as well. At the end of the day, the other thing about new technology is that it is not a one-time event. We’re continuing to see innovation, and we’re going to continue to see this idea of innovation churn, where there continue to be new technologies and IT organizations have to keep pace, because as companies find out that data generates revenue and business opportunity, the accelerated rate at which businesses demand IT services is just going to continue.
Now, Alfons, I have a question for you. I’ve talked about what I’m seeing in our research. What are some things that you’re seeing with your customers? What are some of the top complexity elements you see your customers dealing with?
Alfons Michels: If I can summarize it in one very simple sentence, I would say it’s the naturally grown infrastructure. It’s very simple, and this is also clearly shown by the poll we just ran: managing data migration across technologies is by far the number one, with 43%, and this is exactly the case for most of our customers. They would like to never touch a running system. If there is something new, or more performance is needed for a specific application, then they’d rather build a point solution for that particular application than make a general change to the infrastructure, because the risks and effort associated with a general change are much too high. To avoid that, it ends up being a point solution.
This drives, at the end of the day, the diversity and, as you already explained, the complexity in the data centers, and this is, I would say, what we observe most with our customers.
Scott Sinclair: That makes tremendous sense. It’s funny: as an analyst, I talk to many of the companies and innovators out there, and there’s such a focus on new technology. The folks in the audience never forget this, and I know you don’t forget this over at DataCore, but some organizations tend to forget that IT organizations have established technology. Even though this new widget looks great or has X benefit, you have to get from where you are today to whatever that new thing is, and that causes disruption. That causes challenge. So while IT organizations are dealing with all this new complexity, and the lives of many of the people in our audience are more difficult, I might have more bad news, I guess, which is on my next slide here. The guardrails are changing.
So in the study I showed you, the data was from IT decision-makers, IT executives, and IT architects. On the next 2 slides, I’m going to show you some data from a study we did of line-of-business executives. These are executives who are IT’s customers. We asked them, how would you rate your company’s IT organization? And 4 times as many executives identified their IT as a business inhibitor rather than a competitive differentiator, which honestly is not where IT needs to be, especially when you think about everything I just showed around digital transformation. The business is looking to IT to lead them and help drive new revenue, and yet the line-of-business executives say IT is holding us back. To me, a huge chunk of that comes from this complexity.
But we went back to the line-of-business executives and said, “Okay, why don’t you tell us? What are the things you see from your IT organization that are holding your company back?” We have that on our next slide here, and this is just among those who called IT a business inhibitor. Granted, the top challenge was pervasive across pretty much everybody, one of the highest regardless of how we cut the sample. But specifically among the executives who identified their IT organization as a business inhibitor, the first thing that stands out is the idea that IT is not moving fast enough to keep pace with the business: 43% of these executives say it takes too long to stand up new IT services.
The next, which was tied for first, is: we can’t get the data we need to do the things we need to do to drive the business. This beat out some of the other, non-data options, so the second and third ones here are really third and fourth, I guess: too many passwords, overall network connectivity, other IT aspects. What’s important here is that the first two are focused on data and speed. To me, this is almost the unfair reality of modern IT. IT for years has been focused on job number one: ensuring the IT services that are so critical for the business remain available and resilient. These things never go down; this is the way IT’s been judged, and this is the way IT’s been organized and the way it thinks about delivering services.
Now, with the rise of the digital era, the businesses in many cases are changing the guardrails. They’re changing the rules: not only does IT need to stay online, but now we need things quickly. We need stuff now, because we’re trying to use data in new ways to drive the business. To summarize: businesses are asking IT to do more because they see a close link between IT and business opportunities. All of this is making the life of IT more difficult. You have more things to manage; the things you manage are increasing at an accelerated rate; and there are new technologies you have to figure out, learn, and integrate. All the while, the metrics by which IT is being judged by its customers are changing, which doesn’t seem fair.
All of this is about the speed of IT, and really the speed of data and how we do data storage. Ideally, you’d love to say, let’s go look at technology, let’s go do something new, but technology churn adds complexity. We saw this in the poll we just did. It’s not enough to say, “Oh well, go use the cloud, or go use X.” IT needs better agility, and a technology that we’re seeing in our research that’s helping deliver this, especially around data storage and data storage infrastructure, is software-defined.
Now, when we asked software-defined storage users to identify the benefits they’ve realized, these are not things we hope will happen. These are real benefits that organizations have seen. One thing that really jumps out of these top six, and there was a laundry list, I think we gave people 15 or 20 different options, is a focus on ease and simplicity, but also on agility. That is particularly interesting because when software-defined storage first started showing up, and people started talking about it, the focus was always on CapEx.
How do I reduce the cost of the hardware I’m buying? We see that as well: nearly one-third, 29%, say they saw a reduction in capital expenditures. But look at the other benefits: operational expenditures went down; we’re able to do more with the same people; we’re able to deploy storage and IT services faster; we’ve simplified our storage management, which ties back to OpEx. And down there again, nearly one in three have been able to achieve greater agility in matching hardware to shifting requirements. To me, this speaks directly to the top challenge from the audience vote, which was managing data migrations across generations. This is what makes software-defined incredibly interesting: its ability to deliver not only simplicity, because many technologies can claim simplicity, but simplicity that is maintained while you’re dealing with this churn of technology, while you’re integrating new technology and new hardware as new things come down.
The benefits are not tied to a specific silo; they are delivered as companies cross silos or shift the underlying infrastructure, and I think that’s one of the key things about software-defined that really accelerates IT and can help simplify IT in response to all of the other forces pressing on it: the changing guardrails, the greater expectations in terms of things to manage, as well as the integration of new technologies. So these are the data points I’ve seen from my research and ESG’s research into software-defined storage users. I’d love to ask Alfons: at DataCore, you have your own customer set. You talk to your own customers. What benefits are you seeing from your customers that maybe I’m not capturing on my slide here?
Alfons Michels: I think it already describes it very well, and as you mentioned, we ran a survey of our prospects and customers at the end of last year, and there is indeed one point which is not on this list that achieved a very high ranking in ours, and that is improving performance for dedicated applications. In our survey, we asked customers and those who are evaluating software-defined storage and hyperconverged infrastructures: what is the number-one reason you are evaluating or currently deploying the following storage technologies? The responses were separated, as said, into software-defined storage and hyperconverged infrastructures, and most fit with your findings. But at number two in both categories was speeding up performance for specific applications, which in both cases was named the number-one reason by a little more than 40%.
This was also a very interesting finding: people use these technologies to accelerate their applications. We will see a little later how this works, but yeah, I think this is the only one I would recommend you include in your next survey on these technologies.
Scott Sinclair: Yeah, absolutely, Alfons. When I did this study previously, I didn’t actually include improved performance as an option, and I’ll admit to this; I try to admit when I make mistakes. I didn’t immediately understand the connection between what software-defined can do for performance, because I was equating performance with hardware. If I move from a hard drive to flash, for example, that I can understand. But there are things, and I know you have them in future slides, so I don’t want to spoil it, that DataCore is doing that definitely give those performance advantages. That’s really interesting, and it further increases the benefits of software-defined.
I think now is probably a really good time to transition to you, Alfons, and your topics. To me, the focus here, given what we’re seeing with accelerating IT in the wake of this technology churn, as well as what we saw from the audience votes, is understanding how DataCore specifically addresses this challenge of always needing to keep up with the latest and greatest technology while dealing with everything else.
Alfons Michels: Perfect. I’ll take this as my kickoff, Scott.
Scott Sinclair: Yes, go ahead.
Alfons Michels: I would like to start by building the bridge to what SDS means in practical terms and how it works with IT modernization and flexibility, so, how this fits together. I would like to kick off with how simple it is to get to software-defined storage. This is a very common example; I would say it’s a typical simple infrastructure. You have a lot of virtualized servers and one physical server. You have a pair of storage systems underneath taking care of the central storage services, meaning copying the data and providing a certain level of availability, possibly also disaster recovery. They are central storage systems providing their storage to the applications. And migrating to software-defined storage here is very simple.
It’s just a matter of inserting the software-defined storage layer, either as a virtual machine or as a properly sized server, depending on the kind of model you’re looking for. Thanks to the capability of so-called pass-through disks, where a disk from the underlying storage system is presented 1:1 through the DataCore server to the applications, this causes only a brief downtime, because it’s just a piece of software. Put it in, obviously with appropriate hardware to support it, and it takes over the entire set of storage services, meaning the availability or disaster-recovery setup you had. The performance improvements apply directly after this short downtime, once the applications get their new access points. Of course, in the background you can then typically also migrate to a new, more up-to-date storage system, and this is exactly what we are going to talk about in the next minutes.
Now, I would say, is the time for our second poll before we continue. The question is, “What was the cause for data migrations in your data center?”, and I kindly ask you to respond. We have a choice of five answering options. The first one is a regular storage refresh: the storage system was old and outdated, and we needed a new one; typically, if you don’t stay with the same vendor, this means migrating your data. The second is the integration of new technologies, for example if you would like to move from spinning disk to flash; this is often a point where you change vendors, and if that is not possible within your existing environment, you then have a data migration. Another reason could be a technology switch on the connectivity side, for example moving from Fibre Channel to Ethernet via iSCSI or Fibre Channel over Ethernet. Or it may really be the case that you haven’t had any data migration so far, in which case you’re on the lucky side, because that is very rare. At the end of the day, of course, there are a lot of people who have had that experience.
How this works with software-defined storage, we’re going to see on the next couple of slides. Let’s start with IT modernization and NVMe, meaning flash devices attached over NVMe as an additional performance layer. What would be the motivation for this? Very simple: improved performance for some, and some also use it to expand capacity. If you intend to do this, there are 4 reasonable approaches. The first is just adding flash to the application server. This is an option, and it has the advantage that the storage is very close to the application, so from a pure performance perspective you really get the best out of it. But unfortunately it requires a full migration, so all the data needs to be migrated, which means downtime for the application, and at the end of the day, if you have more than one application requiring that performance, it also creates a lot of silos.
These silos have to be managed separately. Making your storage all-flash is also an option, but typically this is very cost-intensive; it also requires migration, and you need to make sure the connection to these flash devices is fast enough. For example, we had a customer who started to exchange the spinning disks in a storage system for flash, but unfortunately the controller of that storage system was not capable of delivering more throughput, so they had very fast disks but, at the end of the day, still slow applications. This also applies to the next approach, expanding the existing storage, where you have an uncertain result and a lifetime dependency on that supplier. The advantage is that, since most models from a given vendor are compatible with each other, the pain of migration is comparatively low in this case.
All of this can be avoided if you leverage SDS. How this happens, I will show you on the next slide, but with SDS you can migrate seamlessly while adhering to all SLAs. SLAs in these terms means not only high availability, although everybody thinks about high availability when somebody talks about service-level agreements; it’s also about the responsiveness of the applications. And if you add it as an additional performance layer, then because automatic storage tiering across different devices is included, all applications benefit from this additional performance layer. You should take into account, and I need to state that this differs from customer to customer, that on average for most customers, just 10-15% of the data generates more than 90% of the entire I/O. As said, it differs from customer to customer and needs to be double-checked up front, but it means that with just 10-15% flash you can dramatically improve the performance of your entire application landscape without the high investment.
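To make the 10-15%/90% point concrete, here is a minimal, hypothetical sketch of the automatic storage tiering idea: track per-block I/O counts and keep the hottest fraction of blocks on a small flash tier. The class and the skewed workload below are illustrative assumptions, not DataCore’s actual implementation.

```python
# Hypothetical sketch: heat-based tiering where the hottest ~15% of
# blocks are promoted to flash and serve most of the I/O.
from collections import Counter

class TieringSimulator:
    def __init__(self, total_blocks, flash_fraction=0.15):
        self.heat = Counter()                 # I/O count per block
        self.flash_capacity = int(total_blocks * flash_fraction)
        self.flash_tier = set()               # blocks currently on flash

    def record_io(self, block):
        self.heat[block] += 1

    def rebalance(self):
        # Promote the hottest blocks to flash; the rest stay on disk.
        hottest = [b for b, _ in self.heat.most_common(self.flash_capacity)]
        self.flash_tier = set(hottest)

    def flash_hit_ratio(self):
        served = sum(self.heat[b] for b in self.flash_tier)
        total = sum(self.heat.values())
        return served / total if total else 0.0

# A skewed workload like the one described: 10% of blocks get ~90% of I/O.
sim = TieringSimulator(total_blocks=1000)
for block in range(100):          # the hot 10%
    for _ in range(90):
        sim.record_io(block)
for block in range(100, 1000):    # the cold 90%
    sim.record_io(block)
sim.rebalance()
print(f"I/O served from flash: {sim.flash_hit_ratio():.0%}")
```

With this workload, over 90% of the I/O ends up served from a flash tier covering only 15% of the blocks, which is the economics the passage above describes.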
Of course, you can run automatic storage tiering across different hardware levels, and obviously this provides the best TCO. Just looking at the poll, and I’ve stopped the voting now: it hasn’t changed dramatically. The people on the lucky side, who haven’t had any data migration, are now at 7%, and all the others, 93% in total, have had data migrations. The number-one reasons are, nearly equally, a regular storage refresh and integrating the newest technologies, which we just discussed, and some have other reasons. But while we’re at NVMe, Scott, I do have a question for you. How do you see the adoption of NVMe today in general?
Scott Sinclair: It’s a great question. Adoption of NVMe-based SSDs has been growing steadily. On the external storage side, as NVMe rolls out, numerous vendors have jumped on the NVMe bandwagon and started releasing solutions. I expect it to roll out relatively quickly. There are a number of leading vendors that do not want to charge premiums for NVMe over traditional SAS- or SATA-based flash storage, so I think those competitive forces will keep prices relatively contained. And we see a driving force across IT organizations for more performance, and I think those two things coupled will drive adoption probably even faster than what we saw with all-flash in general.
Alfons Michels: Here, as said and as promised, is the way it would work. You may remember the picture from the simple integration of software-defined storage, and what we’re going to see here is how this can be done ideally; I took servers as the example. Ideally, you already equipped the servers with appropriate NVMe drives when you introduced the new layer into your infrastructure, so at the end of the day it’s just a question of adding the flash drives to the pool. This can be done during operation without any disruption. If you haven’t equipped the servers with NVMe flash already, you can still do this with hot-plug NVMe U.2 drives, which, as the name says, can be inserted during operation. They are then added to the central storage pool, and the software-defined storage layer turns this into a new storage tier, the fastest storage tier, of course.
The most-used data is then automatically copied to that tier in the background, so that reads and writes are served from the fastest flash tier, and all of this is done during operation. You can even expand it up to the maximum NVMe capacity supported by the hardware. This is a very simple and straightforward way to increase the power of your entire storage, and for most customers this benefits all their applications, depending on how much of the data generates how much of the I/O. Continuing with these examples, also in terms of IT modernization, and I saw that this was also the number-one reason for data migrations, is replacing an existing storage system. I think I don’t have to go through all the motivation details.
Of course, the motivations include reducing operational expenses, especially the maintenance cost for old systems, expanding capacity, improving performance, the equipment no longer being supported so you can’t get spare parts any longer, or consolidation. Here too, you have a lot of options to move forward. The most obvious approach is to stay with the current supplier, possibly even with the same series, which means minor migration effort. But typically this is cost-intensive, because the supplier knows about the advantage they hold with you. You can change the supplier for an attractive offer, and this may be pretty cost-effective, but it requires a full migration. Or you say, okay, I do everything in server virtualization; there are great tools like Storage vMotion or storage live migration, so I have built-in tools. But at the end of the day, what you do is shift a hardware dependency to a software dependency.
When leveraging software-defined storage, again, this is possible while maintaining all SLAs, in terms of availability and in terms of application performance, so, data access. You have an unlimited choice regarding the hardware you use in the background, what runs on top (operating system, virtualization layer), and also the connectivity. Because the services provided, like the auto-tiering we saw previously, are uniform across the entire storage underneath, and because you can mix freely, this provides an outstanding TCO. What this looks like is pretty simple. We take the same situation as before, but now without the NVMe flash. What you have to do is just get your new storage. It does not have to be in any way identical to the old storage; it can be a JBOD or a storage system, depending on your requirements.
What happens is that the new storage is connected to the DataCore servers, while the entire set of storage services remains the same, so the applications do not even notice what is underneath. When there is enough free power in the DataCore servers, the data is copied in the background, regardless of source and destination, and as said, without any impact on the high availability and performance of those systems. What happens then? Once the data is synchronized and copied, you can simply decommission the old systems. They are replaced automatically by the new systems, as said, without a minute of downtime. You can mirror synchronously even between totally different systems, and you can switch from high-end, intelligent storage systems even to unintelligent storage systems or just a bunch of disks or flash (JBOD), so everything is free and open in that area. Leaving the modernization part and moving to IT flexibility, here I have an example of going from classical storage to hyperconverged.
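The background-copy-then-decommission flow described above can be sketched in a few lines. This is a hypothetical illustration of the general technique (background copy with dirty-block tracking and a final cutover), not DataCore’s actual code; the class and method names are my own.

```python
# Hypothetical sketch: the storage layer keeps serving writes on the old
# array while copying blocks to the new one in the background. Blocks
# written after being copied are marked dirty and re-copied before cutover.

class MigratingVolume:
    def __init__(self, old_array):
        self.old = dict(old_array)   # block -> data on the old system
        self.new = {}                # block -> data on the new system
        self.dirty = set(self.old)   # blocks still to copy (all, initially)

    def write(self, block, data):
        # Applications keep writing during the migration.
        self.old[block] = data
        self.dirty.add(block)        # must be (re-)copied to the new array

    def copy_some(self, n):
        # Background copy runs when the storage servers have spare power.
        for block in list(self.dirty)[:n]:
            self.new[block] = self.old[block]
            self.dirty.discard(block)

    def cutover_ready(self):
        return not self.dirty        # old and new are fully in sync

vol = MigratingVolume({0: "a", 1: "b", 2: "c"})
vol.copy_some(2)
vol.write(0, "a2")                   # an app write during migration re-dirties block 0
while not vol.cutover_ready():
    vol.copy_some(1)
assert vol.new == vol.old            # now safe to decommission the old system
```

The key property, mirrored in the assertion, is that applications never stop writing, yet the two copies converge, so the old system can be decommissioned without downtime.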
The motivation we discussed already: it’s the drive for change. A new infrastructure eliminates the separate network layer, possibly still Fibre Channel, moving everything to Ethernet and iSCSI; it reduces operational expenses because management can then be done by one team, so you don’t need a separate team; and the apps are closer to the data. As for the most obvious approach, you don’t have too many choices here, because if you talk to the well-known vendors, in 99% of cases it’s a rip-and-replace of the entire infrastructure. And since hyperconverged typically assumes that everything is virtualized on the same layer, they rely on built-in migration tools: Storage vMotion and storage live migration. But a rip-and-replace is typically pretty cost-intensive.
During the migration the SLAs are affected, possibly not availability, but at least performance. Please keep in mind that Storage vMotion requires a lot of capacity, since it is server-based mirroring and the server is running the applications at the same time, so there is an impact. To give you an example, there was a large data center move here in Germany. It was just 10 kilometers, and they had dark fibre. The migration was theoretically calculated at 2 months. In the end, it took nearly 2 years, because the performance interruptions caused by Storage vMotion were so severe that they could only migrate on weekends, when the workload on these systems is lower. Just to give you an example.
Of course, you also have a single server virtualization layer, which again is a strong dependency. When leveraging SDS, this is a little different, and you also achieve hyperconverged plus. What hyperconverged plus is, I will elaborate on another slide, but regarding hyperconverged, Scott, I have a question for you. How do you see the adoption of hyperconvergence so far?
Scott Sinclair: So we definitely see a ton of interest in hyperconverged. It continues to grow. I will say, though, one of the big things that we're seeing is a strong replacement of traditional infrastructure. I think when HCI environments first showed up, a number of writers and analysts asked: is this net-new, greenfield? Is it one of those small one-off things? In one of our recent research studies, over 60% of organizations said that HCI is replacing traditional infrastructure in some way. So it is increasing, there is interest in it, and it's impacting traditional environments.
Alfons Michels: Okay, thank you very much for the insights. Yeah, taking a deeper look at this, how does it work? Here I took the same example as previously, going fully hyperconverged. First of all, you see at the top there is still a server which is not virtualized. Of course, if you would like to go fully hyperconverged, this application needs to be virtualized first. That is the first step. Then, in this example, you typically need to either take this server or add a new one, depending on the server's capacity: capacity in terms of how many disks can be inserted, what CPU type it has so it can run storage and the applications on top, and also in terms of connectivity, so an appropriately sized server. It just needs to be equipped with enough storage space, again according to your needs. Then this server joins the group. What happens here is that a third, synchronous copy of the data is made.
Once this is done, the server can already take its first workloads: it is virtualized, and it runs workloads and the storage underneath. What you have achieved now are three copies of the data, served to the upper layer. If we go further, you just take one server out; the SLA still holds, because the starting point was 2 copies and you are still running 2 copies. Now you have time to replace that server, or to re-equip it, meaning extend it with appropriate storage and appropriate connectivity, and let it join the group again. So you again have 3 copies, synchronized in the background. This does not influence the performance of the entire procedure at all, and once it has finished, you can put the next virtual machine on it. The procedure is exactly the same for the third server: just decommission the old storage system or use it for something different.
The storage is provided by the 2 already virtualized, hyperconverged servers, and the third one is equipped with appropriate storage too. After this is done and completed, it is reinserted into the group of DataCore servers, and then the only thing that needs to be done is to move all the virtual machines via live migration. Since no storage live migration is involved, this is pretty fast and can be done over a weekend when there is not too much workload. All your virtual machines are then running in a fully hyperconverged environment, and you have achieved this without any downtime. You can reuse the old servers or simply decommission them; it's up to you and what you would like to achieve. Now you have a fully hyperconverged environment with all of its benefits.
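The rolling conversion above has one invariant worth making explicit: at no point may the number of synchronized data copies drop below the SLA minimum while a server is out being re-equipped. A minimal sketch of that loop, with entirely hypothetical names:

```python
# Sketch of a rolling server-by-server conversion to hyperconverged.
# Invariant: while one server is detached for re-equipping, the remaining
# servers must still hold at least `min_copies` synchronized data copies.
# Names and structure are illustrative, not a real DataCore procedure.

def rolling_convert(servers, min_copies=2):
    """Convert servers one at a time, each holding one copy of the data.
    Returns the ordered conversion log; raises if detaching a server
    would leave fewer than min_copies copies online."""
    log = []
    for server in servers:
        remaining = len(servers) - 1       # copies online while this one is out
        if remaining < min_copies:
            raise RuntimeError("would violate SLA: too few copies online")
        log.append(f"detach {server}")     # take the server out of the group
        log.append(f"re-equip {server}")   # add local disks and connectivity
        log.append(f"rejoin {server}")     # background resync of its copy
    return log

# Three nodes: any single node can be out while 2 copies stay online.
log = rolling_convert(["node-a", "node-b", "node-c"])
```

With only 2 servers, the same function refuses to proceed, which matches the talk's point that the third synchronous copy is what buys you the freedom to pull a node without touching the SLA.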
But if you have done this with DataCore, you have not only achieved this flexibility from classical to hyperconverged, you have also achieved what we call hyperconverged plus, or hybrid-converged. This version of hyperconverged can serve storage, as it did during the migration phase, to external servers as well, via connections such as Fibre Channel. If you have an old legacy UNIX system somewhere and you would like to provide storage from the hyperconverged environment to that UNIX system, this is still possible without any problems. Or if you would like to reuse the storage that was decommissioned, that is also not a problem: it can be used as a resource for one or two of those servers, and you can build it up as you like.
That is real IT flexibility across the entire environment. Moving on, we are already at the summary and next steps, and as we also have some questions, I will go through this quickly. Regarding the summary and next steps: what you achieve with DataCore SDS is one SDS service for any use case. That means a hyperconverged cluster, which may be useful for dedicated applications, to separate them for licensing purposes; or your secondary storage, where you have a lot of cheap capacity; or your mainstream IT services, which require the full data services set, with auto-tiering, disaster recovery, and so on and so forth; and also migrating to the cloud, meaning you can, for example, leverage the cloud as a further storage tier.
For example, for your mainstream IT services you have automated storage tiering across several storage layers, and to expand your capacity you can simply add the standard clouds, via their interfaces, as a storage tier. You can even use the cloud as a disaster recovery destination. That means you have DataCore instances, and there is one available in the Azure Cloud, already preinstalled; it is called DataCore Cloud Replication. The disaster recovery site can also be in the cloud, so in case of disaster, you can start up in the cloud. And of course your remote and branch offices, where hyperconverged is obviously the preferred environment, can all be managed identically under this one centralized management, and you can move and migrate data between them as you like and as your business needs it.
If we go further, you see here the entire software-defined storage control plane from DataCore. It starts at the top with the consumers and the access methods. Besides Fibre Channel, iSCSI, NFS, and SMB, NVMe over Fabrics is also on the list and should come during this year. Then there is a lot of storage which can be leveraged underneath, that is, the supported storage protocols for attaching storage: NVMe, Fibre Channel, iSCSI, SAS/SATA, or the cloud gateways as described. All of this is managed by dynamic and central management, which offers real-time and historical charting, provides alerting, serves as a central orchestration platform, and delivers detailed analytics with a full REST API.
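To make the "full REST API" point concrete, here is what driving such a central management plane programmatically typically looks like. The endpoint path, JSON fields, and bearer-token auth below are all assumptions for illustration, not DataCore's documented API; the sketch only builds the request object, it does not send it.

```python
# Building (not sending) a REST call against a central SDS management API.
# The route "/virtualdisks", the JSON fields, and the bearer-token header
# are hypothetical; consult the product's REST API reference for reality.
import json
import urllib.request

def build_create_volume_request(base_url, pool, size_gb, token):
    """Assemble a POST request that would ask the management plane to
    provision a virtual disk from a given storage pool."""
    payload = json.dumps({"pool": pool, "sizeGB": size_gb}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/virtualdisks",          # hypothetical route
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # auth scheme is an assumption
        },
    )

req = build_create_volume_request(
    "https://sds.example.local/api/v1", "disk-pool-1", 500, "SECRET-TOKEN"
)
```

In real automation you would pass `req` to `urllib.request.urlopen` (or use a client library); the value of a uniform REST layer is exactly that the same call works whether the capacity underneath is a JBOD, an intelligent array, or a cloud tier.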
We integrate into many other management tools, so that, for example, we are invisible to the users of other infrastructures; take Kubernetes or Docker for container technologies as examples. Then we have a PowerShell interface and various plug-ins for other tools, for example support for the VASA provider for VMware, and obviously our own console. As consumers, I didn't mention this earlier, we have physical servers, virtual hosts, and containers alike, and it doesn't matter which, as I explained earlier. What we expanded on here is data migration. We haven't touched on many other interesting things, like performance acceleration with caching, or continuous data protection, which allows you to go back to any point in time in your data within the past 14 days. Very interesting, for example, when an attack occurs or an unwanted change is made. There are also very interesting high-end storage data services provided by this software-defined storage layer.
We already talked about the different deployment models and the seamless, non-disruptive migration between them. The licenses also reflect these models, so you don't have to buy new licenses or anything like that. This is already covered, as all licensing is per terabyte of storage, with a granularity of 1 terabyte. It doesn't matter whether you run a classical storage environment, a serverized storage environment, which is more or less a dedicated SAN environment, a hyperconverged environment, or even a mix of both. We already have the first customers using such a mix in production, because there are good reasons to do so.
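The per-terabyte model with 1 TB granularity implies a simple worked calculation: round each environment's capacity up to whole terabytes, then sum across deployments, regardless of whether they are SAN, serverized, or hyperconverged. A minimal sketch (the rounding rule is inferred from the stated granularity; actual license terms belong to the vendor's price list):

```python
# Per-terabyte licensing with 1 TB granularity: each environment's
# capacity is rounded UP to the next whole terabyte, then summed.
# Illustrative arithmetic only, not an official pricing calculator.
import math

def licensed_terabytes(capacities_tb):
    """Round each environment's capacity up to a whole TB and total them."""
    return sum(math.ceil(c) for c in capacities_tb)

# A mixed estate: classical SAN with 10.5 TB plus a hyperconverged
# cluster with 7.2 TB -> 11 + 8 = 19 licensed terabytes.
total = licensed_terabytes([10.5, 7.2])
print(total)  # 19
```

The point of the model is visible in the code: the deployment type never appears as a parameter, only capacity does.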
All right, this is the last slide and the so-called call to action. If you would like to know a little more and get into the technical details, meaning that your concrete project is discussed and explained, we kindly ask you to schedule a meeting with one of our solution architects; the link is in the slides. Together you can find out how you can achieve IT modernization and flexibility without the pain of migrations, specifically for your infrastructure and your project. What I hope to have demonstrated is the tremendous flexibility you achieve by using software-defined storage, and the simplicity of management when you have one management layer for your entire storage assets, regardless of how they are deployed. Of course, there is also high investment protection, because the integration of new technologies is simple; you saw it with NVMe.
NVMe is not new, it has been around for approximately four years, and we do not know what comes next, but for software it is very simple to connect to such technologies, integrate them, and make them usable for the entire infrastructure, regardless of whether you are working with JBODs or intelligent storage systems underneath. That also provides high investment protection and low total cost of ownership. I haven't stressed this so far, but we also have very interesting tools to increase performance, which, for example, still make us the price-performance leader in our category of the SPC-1 version 1 benchmark. The benchmarks were run in 2016, but nobody has beaten them so far, so this is also a very interesting point we can elaborate on, and as our customer survey shows, it is very interesting for many of our customers.
So in short, DataCore's software-defined storage enables you to disrupt without disruption, which brings me to the end of the presentation portion. I would now like to go to the question and answer section.
Okay, I'll take the first question and read it aloud. "How does DataCore deal with scaling and compete with cloud services?" In terms of scalability, I would say that software-defined storage is scalable in the same manner as the cloud, so there are very few limits. At the end of the day, the cloud can be integrated as part of it, as said: you can use it as a separate tier, or, for example, as a disaster recovery location in the cloud, and of course you can copy data between DataCore instances, and between cloud and on-site installations. What is also possible, and is being done experimentally by one of our distribution partners, is using DataCore Cloud Replication between 2 different cloud providers. They use the high availability of the application and DataCore's automatic advanced site recovery mechanisms to protect a customer's data across 2 cloud installations; one is a local offering in Europe and the other is the Azure Cloud, connected with DataCore services. Hopefully this answers the question.
The next question we have is, "In your example of migrating to HCI with DataCore, what is the hypervisor?" It's a good question. Regarding the hypervisor, you have a few choices: it can be VMware ESXi, Hyper-V, KVM, or Citrix XenServer. All of them are supported, and theoretically even a mixed environment would be possible, although this does not make much sense for the applications. So you could take a 2-node hyperconverged environment with Hyper-V on one side and VMware on the other. This is possible in practice, but it is not recommended, because then the applications are not highly available; they cannot fail over to the other server in that environment. But the data can be copied between those 2 instances. Hopefully this answers the question too.
Do we have further questions? Okay. As we do not have any further questions, thank you very much for your attention.