On Demand Webcast
58 min

Hyperconvergence 101 – Getting Started with Hyperconvergence

Webcast Transcript

David: Hello, and welcome to today’s event, Hyperconverged Infrastructure 101, sponsored by DataCore and brought to you by ActualTech Media. My name is David Davis, and I’ll be the moderator for today’s event. I want to thank everyone for taking time out of your busy day to join us. We’ve got a great event lined up for you. For those of you who are, you know, maybe relatively new or even totally new to hyperconverged infrastructure, I think this is going to be a really great event. We have an expert presenter lined up today, Mr. Steven Hunt from DataCore. He’s been in the industry a long time. I’ve actually known Steve a long time, and I’ve seen his presentation. He’s got a great presentation and demo, all about hyperconverged infrastructure.

So, again, thanks for being on the event today. Before we get started, there are a few things that you need to know about today’s hyperconverged infrastructure event. First off, we have a grand prize to give away to one lucky attendee on the live event today. That is a $300 Amazon gift card, and we’ll be announcing the prize winner for that gift card at the end of the event. If you have questions about the terms and conditions of that drawing, you’ll see a link in your console there where it says handouts. You’ve got a handouts tab, and there’s a link in there to the ActualTech Media prize terms and conditions. Moving on, questions. You know, this is a 101 event; so, we want this to be a very educational event. We encourage questions.

Steve and I will be answering just about every question that comes in. We’ll be doing our best to get to every single question. So, feel free to use that questions-and-answers tab liberally, you know. All the questions that you have about HCI, now is the time to get your questions answered. Also, social, we want this to be a social event. The hashtag for this is #HCI101, and today’s event is sponsored by DataCore and produced by ActualTech Media. So, you know, feel free to promote the event, mention the event on social. There is a Twitter icon at the bottom of the screen, and you can monitor the hashtag there and also tweet about the event there as well.

Also, you need to know about some special resources we’ve prepared for this event. Those are in the handouts tab on the left-hand side of your screen right there. In fact, I can actually spotlight that handouts tab for you and make it move a little bit; so, make sure you check that out. We’ve got some great resources from DataCore all about hyperconverged infrastructure. I should point out that this interface you’re looking at works kind of like a Windows operating system, where you can drag, resize, or move around the windows that are on the screen there, the slides window and the questions-and-answers/handouts window. So, I encourage you to do that. If you want to maximize the slide window, you can do that just by clicking the, you know, maximize button at the top of the slide window. That’s something I encourage you to do because Steve has got some great architectural diagrams and demos and things like that to show us.

Moving along, today’s presenters. As I mentioned, Mr. Steve Hunt is our main presenter today. He’s the director of product management at DataCore Software, and I am your moderator. My name is David Davis. I’m a partner at ActualTech Media, a 10-time VMware vExpert, a VMware Certified Professional, and a CCIE, and I’ve been in the industry a long time, speaking, blogging, and writing. And I can tell you we’ve got a great event lined up. I’m going to be asking Steve a lot of questions as he goes through his presentation and demo. We also took a lot of questions from our audience on the landing page for this event. I’ll be asking some of those questions, and then, of course, your live questions as they come in.

The topics for today’s event, the general kind of table of contents here, if you will, for the event. We’re going to talk about first just what is HCI. What is hyperconverged infrastructure? How can it help you? What are the benefits and, you know, use cases for HCI, and then via the live demo we’ll be showing you how you would administer a hyperconverged infrastructure. And then of course, we’re going to have an extensive Q&A session at the end of the event. So, with that it’s now time to turn it over to today’s presenter, Mr. Steve Hunt of DataCore. Steve, are you with us?

Steven: I am. Can you hear me okay?

David: I can, yeah. Thanks for being on. Take it away.

Steven: Excellent. Thank you for having me. All right. So, yeah, as David mentioned, he and I have known each other for quite some time, going way back to the days when I did systems consulting and he managed an entire environment for a company out of the Dallas area. So, it’s great to do this presentation with him. So, for those of you that are joining, some of you may know what hyperconvergence is. Some of you may have some familiarity. Some may be still trying to figure out what exactly it is; so, hopefully, I can answer some of those questions and then show you how DataCore can help you make that a reality. Really, what we try to focus on is an enterprise-ready hyperconverged solution to ensure that you can get the benefits of a hyperconverged environment and be able to scale out and deploy your workloads effectively.

So, what is hyperconverged? It’s a great question for those of you that still want some insight on it. At its core, it’s the ability to combine the storage, the application, and the network, everything that you need to deliver applications to your business, in a single box, really. And then as you want to scale out, or as you want to provide high availability, obviously, you need the ability to connect additional nodes; but really, the core tenet of it is being able to consolidate all of that into a single box and then allow for a linear, predictable scale-out model. As you start to scale out again, as I mentioned, high availability comes into play.

Steven: Speaking of David mentioning that this is a live session, a question that was just asked is how you would compare the solution to Nutanix. It’s a really great question. One of the things about DataCore is that we’re kind of born out of what you might call storage virtualization. The company’s actually been around for a while. It’s had some different flavors and capabilities. One of the deployment models that we help enable is the hyperconverged model; but it’s more of a, you know, bring-your-own-hardware, bring-your-own-hypervisor model. And this allows you to then put those pieces together as you see fit. So, what you would do, and I’ll show this here in a little bit, is you would take, say, an x86 server from your vendor of choice. You would layer on your hypervisor, like vSphere or Hyper-V, which we support today, and then you would install our control plane, if you will, into that box. And then present that local storage from that server into the hypervisor for consumption.

The other thing that people are starting to see now is, hey, there’s this cloud storage capability out there. How do I connect that in? So, we actually help provide this as well. There are some other ways of doing this; but really, it’s about being able to, you know, create a connection back into that cloud storage platform and then use that as a way to, say, do backups of your dataset into the cloud for disaster recovery. You know, that kind of sets up what hyperconverged is. There are some other things that I want to highlight real quick that DataCore helps do that are a little bit different, you know, but help complete this hyperconverged model for enterprise capability. And that is around what we call Hyperconverged Plus or Beyond Hyperconverged.

So, what does this really mean, right? One of the things that we do with our solution is allow the connectivity of shared storage on the backend to then connect into this hyperconverged model, and then you can use that as an additional storage pool for usage across the entire cluster. Why would we want to do this? Well, the reality is a lot of us have investments in existing shared storage architecture which, you know, we still need to be able to leverage; and it would be great if we could plug that into the hyperconverged model. So, we actually allow that capability. You can, again, take that shared storage, present that back up to the hypervisors that are in your hyperconverged nodes, and then utilize that storage so you can scale your storage out independently of the HCI or hyperconverged node. The other thing —

David:  Now, Steve, you mentioned –

Steven: Yep, go ahead.

David: — you mentioned scaling capacity and compute independently. Why is that important?

Steven: So, a lot of people, as they start to put workloads into an HCI model, realize that they don’t always have a complete, 100-percent, equal or parallel linear scalability need from the compute to the storage side. And maybe that’s not the case so much if you have a single workload and that’s the only workload that you’re deploying; but the reality is, as a lot of people start to bring a hyperconverged solution into their environment, they start to add additional workloads, and that lets them see some differences in terms of what that scalability is going to be. Sometimes, you may have to scale more on the CPU and memory side from a compute resource standpoint; but sometimes, you just start running out of capacity.

So, the ability to independently scale your capacity component is something that we see a lot of people really struggle with in most HCI solutions; and that’s something, again, born out of, you know, where we come from. And our technology is really about abstracting the storage layer and ensuring that storage can be presented in any scenario. We allow you to either, A, leverage your existing shared storage platform or, B, just add an additional x86 server full of local disks into this overall configuration.

David: So, you might have an application with a massive dataset that needs a ton of storage or you might have an application that just does a lot of, you know, analysis and really doesn’t need storage.

Steven: Exactly, exactly. And people, the more they start deploying the HCI model, the more they realize they can put more and more workloads on there, and then that’s where you start to really run into the situation of, okay, do I need to start being able to scale my storage and my compute independently. And you’ll find that that’s actually a very common scenario.

David: Yeah, I bet, and then one more question before I let you get back to it here is on the previous slide and really just like this slide, you’ve got three different nodes in the hyperconverged infrastructure. Is that the minimum requirement to run with DataCore and hyperconvergence?

Steven: That’s a great question. So, you really only need one node just to have a hyperconverged system available; but if you want high availability, you need to have at least two nodes. And that’s something that differentiates us from a lot of our competitors: oftentimes, you’re going to see a three-node requirement for high availability. The reality is we only need two nodes to ensure high availability.

David: Very nice. That gives you a lot of flexibility especially for, you know, SMBs or, you know, Edge locations. So, that’s very interesting. Okay. Thanks, Steve.

Steven: Absolutely, and that’s one of the reasons why we really wanted to enable that model: because, in reality, if you’re a smaller organization, or you have remote sites that really don’t have a significant need for a massive amount of storage capacity or compute capacity, then, you know, you should be able to just consume exactly what you need. But the reality is, if you want high availability, there’s at least a two-node minimum; so, we wanted to adhere to that concept and allow that high availability to happen with as minimal a requirement as possible. So, we’re able to do so with two nodes. So, then there’s another model that we like to talk about. We’ve talked about the hyperconverged.

We talked about the Hyperconverged Plus, where I can attach additional storage to that; and again, we aggregate that across the entire storage plane. So, it’s not that you’re just managing independent storage units. You really have, you know, just multiple sets of disks that you can pool and leverage across, and I’ll talk more about what we can do with that in a minute; but then we have this other concept of hybrid-converged. So, this is again something that we don’t see a whole lot of people, you know, bring to the table; but we hear a lot about it. Remember, I was mentioning the fact that, you know, people need to potentially scale storage separately; but the reality is they may also potentially need to scale compute separately, or just have other application workloads on bare-metal servers that they want to provide, you know, connectivity to, and we can enable that.

So, we become an iSCSI target in its most simplistic form, and then any other additional hypervisors out there that aren’t hyperconverged boxes, or just bare-metal Windows or Linux servers out there that you want to connect and have access to the shared pool, you can do so. So, we enable that capability as well; and again, it’s just all about providing a single control plane for the storage level and ensuring that all these application workloads have access to it. So, again, it supports bare metal or hypervisor, and it ensures that you can have the HCI model where you have a convergence of the compute and the storage. You can add additional storage, whether it be a shared storage device like a SAN, or you can, you know, just create an x86 server with a whole bunch of disks in it, load our software on that, and turn that into a pure storage node. And then again, if you want to connect additional hosts, whether it be additional hypervisors or additional bare metal, you can do so and then again have access to this aggregate storage pool.
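The hybrid-converged idea above can be sketched in a few lines. This is a hedged illustration with assumed names (the pool, device, and host labels are hypothetical, not DataCore's API): one pool aggregates capacity from HCI-node local disks, an existing SAN, and a storage-only x86 server, and any kind of host can be presented a virtual disk from it.

```python
# Illustrative sketch (assumed names, not DataCore's API): a single
# aggregate storage pool built from heterogeneous sources, presented
# to any host type -- HCI node, external hypervisor, or bare metal.

class StoragePool:
    def __init__(self):
        self.devices = []   # list of (source, capacity_gb)

    def add(self, source, capacity_gb):
        self.devices.append((source, capacity_gb))

    @property
    def capacity_gb(self):
        return sum(c for _, c in self.devices)

    def present(self, host, size_gb):
        # Every host sees the same aggregated pool, e.g. as an
        # iSCSI target in the simplest form.
        if size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        return {"host": host, "size_gb": size_gb, "protocol": "iSCSI"}

pool = StoragePool()
pool.add("hci-node-1 local disks", 2000)
pool.add("hci-node-2 local disks", 2000)
pool.add("legacy SAN", 8000)
pool.add("storage-only x86 node", 4000)

vdisk = pool.present("bare-metal-linux-01", 500)
print(pool.capacity_gb, vdisk["protocol"])   # 16000 iSCSI
```

The design point being modeled: capacity scales by adding devices of any type to the pool, independently of adding compute nodes.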

David: So, Steve, say I had an aging storage area network or NAS, something like that out there, that I’m running these server virtual machines on, and maybe some physical hosts as well attached to that SAN. Could this hyperconverged infrastructure, you know, as you’re describing here with DataCore, actually replace or displace that existing SAN that might be, you know, coming up for maintenance?

Steven: Absolutely, and that’s one of the things that a lot of our customers find value in: allowing the connectivity of that legacy hardware that they otherwise might not get usage out of as they transition to that hyperconverged model, or that’s just independent from the rest of what they’re doing with hyperconverged. That’s a lot of really good, useful storage, and we want to bring it into the fold to make sure that you can maximize as much of the hardware investment as you’ve made. And so that’s why we really focus on enabling what we call this hybrid-converged model, which ensures that you have access to not only that HCI pool that you’ve created, but also the flexibility of leveraging any other storage investment that you’ve made historically or want to make in the future.

David: Very nice. Very nice. Another question that came in here is about, you know – I think you mentioned that this runs on commodity hardware. In some cases, can people use existing hardware to create this, or is there some sort of hardware compatibility list for DataCore?

Steven: That’s a great question. So, we do maintain a compatibility list; but the reality is it’s pretty much tightly coupled to whatever operating system you’re running and its hardware compatibility list. So, as long as you can present the storage to our platform, which runs on a Windows server, we can use it. Some people think that’s crazy; but the reality is, you know, again, we are a software component that just sits on top of the Windows server. So, whatever that disk is, or that storage array on the back end, as long as there’s compatibility to connect to it, then we should be able to work with it.

David: Very nice.

Steven: So, you know, I mentioned that, you know, we kind of started as this storage virtualization solution and started enabling all these other use cases; and the reality is we bring all of this kind of enterprise storage service capability to all of these different use cases. So, you know, we have built-in capabilities for synchronous mirroring. If you need a secondary site for disaster recovery, we have built-in asynchronous replication. We have, you know, some built-in snapshot and backup capability; and one of the things that’s really beneficial, especially with your, you know, your hypervisor of choice, is we have some native integration capabilities there.

We also have this functionality that we call continuous data protection. The way that we work, we’re literally just facilitating all the IO that comes through. We can literally, if you will, snapshot down to the IO level. So, we’re constantly watching the IO that comes in; and you can take basically a snapshot, if you will, of each individual IO that happens, and then you can restore back to a particular point in time. This becomes very common. One of the most common use cases I see is where, you know, someone had a data corruption, you know, an hour ago, and they need to restore to, you know, previous to the corruption. Or when malware infiltrated their data storage, they can roll back to the particular point in time when they knew that the data was where it needed to be.
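The continuous-data-protection behavior described above can be sketched as a journal of timestamped writes. This is a minimal assumed model, not DataCore's implementation: because every IO is recorded, the volume's state can be reconstructed at any instant, such as just before a corruption event.

```python
# Illustrative sketch of continuous data protection (assumed model):
# every write is journaled with a timestamp, so the volume can be
# rolled back to any point in time by replaying the journal.

class CDPVolume:
    def __init__(self):
        self.journal = []   # list of (timestamp, lba, data)

    def write(self, ts, lba, data):
        self.journal.append((ts, lba, data))

    def state_at(self, point_in_time):
        # Replay every IO up to (and including) the chosen instant.
        state = {}
        for ts, lba, data in self.journal:
            if ts <= point_in_time:
                state[lba] = data
        return state

vol = CDPVolume()
vol.write(100, 0, "good data")
vol.write(200, 0, "corrupted!")    # e.g. malware strikes at t=200

print(vol.state_at(150))   # {0: 'good data'} -- state before the corruption
```

A real implementation would bound the journal's retention window, but the rollback-to-any-IO idea is the same.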

So, we enable that capability natively within our platform, and then we have some other functionalities. You know, we have built-in caching that allows everything to happen as fast as it possibly can, because the fastest media that exists is memory. So, all of our communication and IO interaction starts with our cache in memory, and then we write off to disk based upon what the IO profile is. So, if it’s being accessed a lot, if it’s hot data, we’re going to write that off to the fastest media that we have available behind us, and then we’ll tier that off onto slower media as the requests for that data, for those bits, become less and less frequent. So, we’re always serving the most frequently requested data from the highest-performing storage possible, whether it be from memory or flash drives or SSDs, or all the way out to, you know, the cloud as well, as an end tier or a cloud tier, if you will.
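The caching and auto-tiering behavior just described can be sketched as a simple frequency-based placement policy. This is an assumed illustration (the tier names and thresholds are hypothetical, not DataCore's actual algorithm): hot blocks stay on the fastest media, and cooler blocks cascade down toward the cloud tier.

```python
# Illustrative sketch of frequency-based auto-tiering (assumed policy):
# the more often a block is accessed, the faster the tier it lands on.

from collections import Counter

TIERS = ["ram-cache", "ssd", "hdd", "cloud"]   # fastest to slowest

class AutoTiering:
    def __init__(self, hot_threshold=3):
        self.access = Counter()
        self.hot_threshold = hot_threshold

    def touch(self, lba):
        # Record one access to this block.
        self.access[lba] += 1

    def tier_for(self, lba):
        hits = self.access[lba]
        if hits >= self.hot_threshold * 2:
            return "ram-cache"    # hottest data served from memory
        if hits >= self.hot_threshold:
            return "ssd"
        if hits > 0:
            return "hdd"
        return "cloud"            # cold data cascades to the cloud tier

t = AutoTiering()
for _ in range(6):
    t.touch(1)      # block 1 is hot
t.touch(2)          # block 2 is lukewarm
print(t.tier_for(1), t.tier_for(2), t.tier_for(3))   # ram-cache hdd cloud
```

Real tiering engines demote and promote continuously in the background; the sketch only captures the placement decision itself.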

David: So, Steve, a question came in here about VMware site recovery manager. Does SANsymphony support that?

Steven: We do. So, we have a storage replication adapter that helps facilitate integration with that. Being able to replicate your data on the back end isn’t that great if you don’t have the ability to ensure that the workload can transition with the data as well. So, we do have an ability to integrate. We have a storage replication adapter that we’ve written that will work with Site Recovery Manager.

David: And then another question came in about the cloud integration. Like, what sort of clouds do you support? And, I mean, is it like S3, or what sort of technologies are in use there?

Steven: Great question. So, today we have the capability where you can literally deploy an instance of our software in AWS, and then you can use that as another DataCore storage server. You can take whatever storage you’ve connected to it, likely a bunch of EBS volumes on the backend, and leverage that as another one of the tiers in your storage pool. So, that’s the capability we have today. We’re working on some additional functionality where we can expand our capabilities and leverage different components of the cloud storage that’s available; but today, it’s being able to deploy an instance of our software from the marketplace and then use that as another tier within your storage pool.

David: Okay. Very nice. Very nice.

Steven: Instead of talking about it, we can actually take a look at how, you know, you might be able to enable a hyperconverged scenario using DataCore software. Does that work for you, David?

David: Let’s do it. In fact, Rick just sent in a question. You know, basically, how exactly does this work? Do I create LUNs? How does ESXi connect to the storage? So, let’s see it in action.

Steven: Cool. Great. So, I’ll show a particular use case here, and then I’ll highlight some of the other capabilities that we have as well; and I think it will help answer some of that question. The short answer is, you know, as long as you can present raw disks to us that we can format and interact with, then we can leverage them. So, regardless of whether it’s local disk or shared storage, as long as you’re presenting it where we can take it, format it, and then present it back out, that’s how we do it. In this scenario, I basically have a couple of vSphere servers where I’ve installed ESXi 6.5; and they just have some disks in them, and I want to turn those into hyperconverged nodes. So, how do we go about doing that?

Well, the first thing that you want to do is download and run our vCenter installation manager. So, all I’ve got to do is connect into a vCenter server. Typically, I may just deploy a vCenter appliance on the box itself. So, let me log in here. And one thing you’ll notice is I did this a little bit Ronco style because, you know, depending on how fast or slow your box may be, it could take a while to go through and build the virtual machines and get everything set up. So, what I figured I’d do is show you the process of running the installation manager, and then, just like the Ronco commercials, I’ll bring the meal out of the oven ready to go; and we’ll do some more interaction with it. Otherwise, we probably would have had a much longer presentation watching the deployment of virtual machines.

So, as soon as this decides to connect to – there we go. All right. So, we’ve connected to the vCenter environment. Now, what you’ll notice is, hey, you know, it’s saying it already found an existing deployment. What do I want to do? I could update, or I could add a node. If this hasn’t been deployed at all, if it doesn’t find any HVSAN servers in the background, then it will just do a net-new deployment. So, I’m just going to click add a node, and you can kind of see what this looks like. First, we’ll define the virtual machine settings for the DataCore virtual machine. So, we’ll go through, we’ll define what that’s going to be, if I can type correctly, and then we’ll register that VM with vCenter, or register that vCenter server in our software.

And you can change this, whether you need, you know, more CPU, RAM, or local disk size, and we provide sizing guidelines for you to be able to utilize. You’ll define your networking. Basically, we need a target network so that we can allow connectivity to the virtual disks that we’re going to present. And then, for the ability to have all of that local storage available across all the nodes in the cluster, we need a mirror network. We define a network by default for you, but you can change that should you want to do so. And then, again, I mentioned that our software runs on top of the Windows platform; so, you’ll provide a Windows ISO and a Windows license key to then do the operating system installation. And then we’ll install our software on top of that.

And then you just select which ESXi servers that you actually want to deploy the virtual machine to, and again, this is basically our storage controller virtual machine that will need to be deployed to every single ESXi server that you want to have as a part of your HCI deployment. So, I can’t select these because they’ve already been done; and once you’ve selected them, you would click deploy, then you will watch the progress of the build happening. So, we’re going to build the VM storage controller. We’re going to grab all of the local disks from that machine. We’re going to present that up to the storage controller, then we’re going to connect and integrate that storage controller with vCenter.

We’re going to add all those hosts into that; and once that’s done, you can then log into one of your HVSAN servers, and you will see something like this. So, this is our management console; and, you know, you notice here I have my DataCore servers, which shows me some of the connectivity that I’ve got, different aspects of it, and then I see the vSphere environment within this pane here. Now, real quick, I’m going to flip back over to the vSphere client, and I want to show you guys what I’m talking about. So, you know, here I have a couple of ESXi nodes, and they have a bunch of local disks on them. So, I’ve got two flash drives in here.

They’re 500-GB flash drives, and then I have five HDD drives. One of those is actually just being utilized as the local datastore for the host, where I actually installed vSphere, and then the other four were available for HVSAN to consume. And what you’ll notice is we have the HVSAN servers that we’ve created, and those are running on each particular host. Now, the one thing I will call out here is that typically you don’t have to have these put into a cluster. I did put them into a cluster for these purposes because, after I got done building my environment, I realized I had two different types of processors in the servers.

So, to be able to do a vMotion from one to the other, because of the processor incompatibility, I had to have them as part of a cluster to use the Enhanced vMotion Compatibility feature. If you have like hardware that will allow vMotion without them being in a cluster, you could do that. So, again, I have HVSAN running on each host; and of course, we have to wait for the vSphere web client to – there we go.

David: And Steve, a question came in here. Is there a minimum requirement for SSD on each of the hosts?

Steven: There is not. So, again, you can use whatever local media you want, whether it be SSD or, you know, regular SAS hard drives. The reality is, it’s about how much performance you want to get out of it. The faster the media, the faster the storage devices you put in it, the faster the performance we’re going to help facilitate. And again, part of that is we’re creating a cache out of memory for the HVSAN VM; so, everything initially goes through that cache, and then everything is written back to disk. So, if you’ve got a ton of IO activity and you’ve filled up your cache, then the next point it’s going to hit is actually interacting with the disks to be able to pull that data. So, the faster the media, the faster the overall performance; but the reality is there is no requirement for it; and as long as you have a large amount of RAM, you are facilitating a large amount of data via your memory cache.

David: And then another question here. They’re asking is the SSD, is it used for caching only or storage or both?

Steven: It is used for storage. So, the cache is actually being facilitated out of memory, and the SSD is purely just being used as fast media. When we start writing the IOs off to disk out of memory, we just pick the next fastest tier. And so, if you have some SSD drives in there, that’s going to be the next fastest tier, and we’re going to write to it. Now, you can set storage profiles. So, you can say, hey, this is normal, so I can just spread it amongst everything evenly; or you could say this is critical, and I need to have it allocated to the fastest tier at all times. So, therefore, we won’t actually cascade the data down to the slower media, because we need to keep as much of it as we can within the fastest tier.
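The storage-profile behavior just described can be sketched in a few lines. This is an assumed illustration (the profile names and heat threshold are hypothetical, not DataCore's exact semantics): a "critical" profile pins data to the fastest tier, while a "normal" profile lets cool data cascade down.

```python
# Illustrative sketch of storage profiles (assumed semantics): a
# per-virtual-disk profile decides whether data may be demoted to
# slower tiers ("normal") or stays pinned to the fastest ("critical").

def place(block_heat, profile, tiers=("ssd", "hdd")):
    if profile == "critical":
        return tiers[0]            # never demoted off the fastest tier
    # "normal": demote cool blocks to the slower tier
    return tiers[0] if block_heat >= 5 else tiers[1]

print(place(block_heat=1, profile="critical"))   # ssd -- pinned
print(place(block_heat=1, profile="normal"))     # hdd -- demoted
print(place(block_heat=9, profile="normal"))     # ssd -- hot enough
```

The trade-off the profile expresses: pinning critical data costs fast-tier capacity that could otherwise hold the hottest blocks overall.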

So, the reality is the requirement for your drives is really about how much of the storage you need to leverage and what performance you need out of it. If you need everything to be as fast as possible, then you’re probably going to have a large amount of memory, and you’re going to create an HCI box out of all-flash storage. But if you don’t need to do so, the reality is you can probably still get a significant amount of benefit. And we actually have some testing that we did and some performance data that we show where, you know, we just used a couple of SSD drives to help facilitate some of the hottest data. But the reality is we drove a ton of IO-intensive performance with a minimal amount of latency from a very small disk set, the kind of performance most people would expect to come out of an all-flash array.

David:  And that’s because you’re leveraging RAM for the caching. Is that right?

Steven: Correct. Because we put everything in RAM first, and then it’s also about how we do our auto-tiering. The fact that we’re putting all of the IO into the fastest media possible means the more commonly requested bits are then written to the fastest media. And then we write off the less-requested bits to the rest of the media that’s available. And we’re just constantly cycling that data through those different tiers of media.

David: Okay. Another question here. They’re asking about the network. Do you require 10-Gb networking at a minimum, or what’s the requirement there?

Steven: We prefer 10 Gb at a minimum, especially when you’re talking about a hyperconverged node, because that’s going to allow any of your synchronous mirroring to happen and keep everything up to speed as quickly as possible. But the reality is, again, everything’s being facilitated out of that memory cache first; then we’re just writing it and committing it to the disk, and we’re facilitating the communication and ensuring that that mirror is being written off to the other side. So, if you have a ton of IO happening, you may want to increase that; but the reality is, for most environments, a 10-Gb network is going to be sufficient. We do have support for 40 Gb as well.

David: Okay, nice. Jeff is asking can I have more than two copies of my data. Could I have, you know, four copies of my data if I installed DataCore across four vSphere hosts?

Steven: So, today, we do a three-way copy, and we are looking toward what we call an n-way copy, basically being able to have a copy across as many nodes as we want. The max that we do today is a three-way copy. With the data resiliency that we provide, we typically don’t need to go beyond that. We have run into some scenarios with some customers where they – I don’t know if it’s a requirement for them from a compliance standpoint, but we have run into the request frequently enough that we are investigating doing a copy across more than just three. But we’ve found that the reality is a three-way copy satisfies most people from a compliance standpoint.
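The copy-count constraint can be sketched as a replica-placement check. This is purely an assumed illustration (the node names and the selection rule are hypothetical): each copy goes to a distinct node, and the count is capped at three today.

```python
# Illustrative sketch of replica placement (assumed rule): one copy per
# distinct node, with the copy count capped at three for now.

def place_replicas(nodes, copies):
    if copies > 3:
        raise ValueError("three-way copy is the current maximum")
    if copies > len(nodes):
        raise ValueError("not enough nodes for requested copies")
    return nodes[:copies]   # pick distinct nodes for each copy

cluster = ["node-1", "node-2", "node-3", "node-4"]
print(place_replicas(cluster, 3))   # ['node-1', 'node-2', 'node-3']
```

An n-way scheme would simply lift the cap, at the cost of more mirror traffic per write.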

David: And then the last question, and I’ll let you get back to your demo here; I think this ties into what you’re about to show. The question here is how are the physical disks tied to the virtualization hypervisor? How is the VM eventually stored on these physical disks that you’ve created?

Steven: Yep. So, let’s dive right into that, and we’ll kind of explain it. Now, I’m going to do this all from the DataCore management console; but the reality is we also have a vSphere client plug-in. So, you could load that and do everything, almost everything that you need to do, directly from the vSphere console. But I did want to show you guys the DataCore management console so you can get familiar with the concepts as they exist in here, and then that immediately translates once you load the vSphere plug-in. So, you know, as I mentioned earlier, we have our HVSAN servers, our VM controllers if you will, loaded. Those are identified in our DataCore server group.

We’ve got our ESXi nodes here listed. So, the one thing, you know, I showed earlier is that there were physical disks there on the node itself. So, if you go into the DataCore server and then look under disk pools, we’ve actually created an initial disk pool with all of the disks that we found. So, if I click on the disk pool, I can look at my physical disks; and I can see there are my two SSD drives, and then there are the other four hard drives. Those were the raw physical disks that were available when our software was installed.
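[Editor’s note: a minimal sketch of the disk-pool concept just described: raw physical disks are grouped into a pool, and virtual disks are carved from the pool’s aggregate capacity. The class, disk labels, and sizes are illustrative assumptions, not DataCore’s API.]

```python
# Sketch of a disk pool: raw disks are grouped together, and virtual disks
# are carved from the aggregate capacity. Names/sizes are illustrative.

class DiskPool:
    def __init__(self, name):
        self.name = name
        self.disks = []          # entries of (label, capacity_gib, is_ssd)
        self.allocated_gib = 0

    def add_disk(self, label, capacity_gib, is_ssd=False):
        self.disks.append((label, capacity_gib, is_ssd))

    @property
    def capacity_gib(self):
        return sum(cap for _label, cap, _ssd in self.disks)

    def carve_virtual_disk(self, size_gib):
        if self.allocated_gib + size_gib > self.capacity_gib:
            raise ValueError("pool exhausted")
        self.allocated_gib += size_gib
        return f"vdisk-{size_gib}GiB-from-{self.name}"

# Mirror the demo layout: two SSDs plus four hard drives in one pool
pool = DiskPool("disk pool 1")
pool.add_disk("ssd0", 400, is_ssd=True)
pool.add_disk("ssd1", 400, is_ssd=True)
for i in range(4):
    pool.add_disk(f"hdd{i}", 1000)
print(pool.capacity_gib)  # → 4800
```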

So, we grabbed that and allocated that disk pool. The reality is you could create additional pools. You could separate out these disks. You could power the server down and add more disks to it; let’s say, you know, your shelf was only half-full. So, we have the flexibility of ensuring that as you put more media in here, or as you want to break this out into different pools for different particular reasons, you have the ability to do so. But this is where we’ve taken all of those physical disks that were on that particular host, and we’ve created a disk pool with it. So, from here, the next step – go ahead, David.

David: Question here from Eric. He’s asking: is there a way to encrypt data with SANsymphony?

Steven: So, we do not have native encryption available today. It is something that we’re working with some partners to provide, and we’re also looking at opportunities for disk encryption; but what you can do is enable disk encryption at the physical storage controller level, so there is a capability to do so. And we’re actually looking at the ability to do software-level encryption natively within our software itself. All right. So, you know, we’ve created our pools. The one thing that you need to do once you’ve gotten this all set up is create some virtual disks and present those to the hosts themselves. So, I’ve actually created one virtual disk. I called it mirrored datastore 1; and basically, what I did was I created that disk.

And I said I want to mirror this across both of my DataCore servers. So, I’m mirroring the data that’s being written from one server to the other, making sure that it’s readily available. So, I’ll show you how easy it is to create a virtual disk; we’ll just create another datastore and present that. So, we have this getting-started view, and all of these things we’ve actually gone through automatically for you; once you’ve run the installation manager, you just come in here and create virtual disks. We’re going to call this mirrored datastore 2. All right? And so, I mentioned this is mirrored. You could do a single DataCore server, so it would be non-highly available. You could do dual and provide some fault tolerance at the server level, or you can do mirrored, where literally any host is writing across all of it.

So, the most highly available, most fault-tolerant solution is doing the mirrored. So, I’m just going to give this 100 GB, and I’m going to leave everything else as default. If I wanted to change additional settings here, I could, but I’ll just leave the rest as default. So, the next thing you do once you’ve created your virtual disk definition is define your server sources. I’ve got disk pool 1 and disk pool 2. Disk pool 1 is coming from HVSAN SPCTest 3, and then disk pool 2 is coming from Test 2. So, I’m basically just saying, hey, I want to use these two pools as a part of my mirror. You know, I was mentioning some of the advanced capabilities we have, like continuous data protection.

You know, here it’s identifying that we’re doing a redundant path for mirroring. I could change my storage profile; in fact, I’m going to do that. This is just going to be critical. That means we’re going to write everything to the fastest media available; and anything that’s set at a lower priority, whether it be high, normal, or low, is going to go off to, you know, the slower media that’s there and available. So, I’m going to click finish here. So, we’ve created our virtual –
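[Editor’s note: the storage-profile behavior Steven describes, critical writes landing on the fastest media while lower priorities fall through to slower tiers, can be sketched as a simple priority-to-tier mapping. The tier names and the mapping below are assumptions for illustration, not DataCore’s implementation.]

```python
# Storage profiles as a priority-to-tier mapping: "critical" writes land on
# the fastest tier; lower priorities fall through to slower media.
# Tier names and the mapping are illustrative assumptions.

TIERS = ["ssd", "hdd"]                                   # fastest first
PROFILE_TIER = {"critical": 0, "high": 0, "normal": 1, "low": 1}

def place_write(profile):
    """Pick a storage tier for a write based on its storage profile."""
    return TIERS[PROFILE_TIER[profile]]

print(place_write("critical"))  # → ssd
print(place_write("low"))       # → hdd
```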

David: And Steve, a question here from Robert. He’s asking about, you know, if we leverage some of the advanced features, does that carry with it an additional price tag? Are there different editions, or how is the licensing done?

Steven: Great question. So, we have three different editions: standard, enterprise, and what we call large scale, which is really for secondary storage, if you will. Standard basically doesn’t have Fibre Channel capability. Enterprise has Fibre Channel capability and some additional performance IO characteristics. What we’re showing right now literally can be done with a standard license. All of the features that you’re looking at, like the continuous data protection and the replication, all of those things are included in the license. The only thing that you’re paying for is the capacity that you want to present to your environment. So, you know, in the case that I had – let’s see, the two terabytes plus another – I think I had just over 3-1/2 terabytes; that’s what I would need the license for to be able to present and utilize all of that capacity.

David: So, if I –

Steven: And I get all of these other features.

David: Yeah. So, if I add a second site and I do replication to that, I’m not paying for a new license for the second site. I’m just paying for the amount of data that I’m additionally adding by replicating it. Is that how it works?

Steven: Yep. Exactly. So, let’s say that this cluster that I’ve created is a three-node cluster, and it’s got 100 terabytes in it. I would just literally license 100 terabytes of capacity, and then I would have that across, you know, the three nodes. Let’s say I wanted to break that up into four nodes and spread that 100 terabytes across the four nodes; I don’t have to buy any additional capacity. I’ve just got to move my drives around and then install another instance of the server. So, you don’t have to license per node that you’re connecting with or presenting storage with. It’s purely just the capacity. Now, let’s say I wanted to bring on that second site for disaster recovery; I don’t need to replicate everything across, just a certain set of pools.

And if I’m only going to replicate, you know, 10 terabytes of data over there, and that server only has 10 terabytes, you would just license an additional 10 terabytes of capacity. And you could spread that across a single node, two nodes, three nodes, however many servers you need to. You’re purely licensing the capacity at the other site.
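[Editor’s note: the capacity-only licensing model described above reduces to simple arithmetic: sum the terabytes presented per site, ignoring node count. The figures below mirror the 100 TB primary / 10 TB DR example from the discussion; the function is an editorial sketch, not a DataCore pricing tool.]

```python
# Capacity-based licensing as described: you license terabytes presented,
# not nodes. Figures mirror the 100 TB primary / 10 TB DR site example.

def licensed_tb(sites):
    """sites: list of (node_count, capacity_tb); only capacity is billed."""
    return sum(capacity_tb for _node_count, capacity_tb in sites)

print(licensed_tb([(3, 100), (1, 10)]))  # → 110

# Re-spreading the same 100 TB across four nodes costs nothing extra:
assert licensed_tb([(4, 100)]) == licensed_tb([(3, 100)])
```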

David: Very nice. Very nice.

Steven: So, I’ve got my virtual disk created. Now, we need to make sure that it’s available as a datastore for my servers to use. So, I’m going to come in here and serve this to the host. I actually want to serve this to both of my ESXi hosts. Now, I could serve this directly to virtual machines that are running, but in this scenario I’m going to serve it to the hosts just for simplicity’s sake. Which virtual disk am I going to serve? The one that I just created, mirrored datastore 2. Make sure that we’ve got our redundant path enabled, and then I’m going to create a VMFS datastore off of that discovered disk.

So, basically, what I’m telling vSphere to do is say, hey, as soon as you find this, format this in VMFS – tongue twister – and then I should have a datastore available. So, I’m going to click finish – yep – and now what we should see on the other side is some datastores coming up. Or actually, we can probably go monitor this. Bear with me real quick.

David: Question from Rick, while you’re doing that, came in here. Can I create host groups? For example, production, non-production, test dev, things like that?

Steven: Yes. Yes, you can. Absolutely. Let me look at the events. Sometimes, it takes longer for the vSphere web client to actually update.

David: And I like, by the way, that you’re using the new HTML5 vSphere web client. It always causes me to take a second look to make sure, wait, is that really the vSphere client because it’s not blue, but it is.

Steven: Yes, that is true. You know, I love the fact that I can get this from a browser; but there are times when I get overly frustrated with it, and I do miss the old school full client, you know. Maybe that’s just because I’m a little bit old school. I’m an old-school Windows guy.

David: Another question came in. Ken is asking for resiliency. Is it best practice to deploy the hyperconverged SAN across multiple hosts?

Steven: Yes.

David: So, minimum two –

Steven: Absolutely correct.

David: But perhaps –

Steven: Correct. Yeah, it’s a minimum of two. So, the reality is, the more hosts you have, the greater the resiliency; and if you have redundant paths, you’re adding more memory cache to the mix, so you’ll then get the aggregate performance of those caches as well. All right. So, there’s mirrored datastore 2. It’s actually been created. It’s available for all of my servers, and you can see here the connectivity; it’s been presented to both .39 and .40, my two hosts. So, the other thing that I want to show you is, again, this is actually, you know, shared storage. We’ve taken disks from both of those vSphere hosts, and we have presented that and made it available, you know, as an aggregate solution across.
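[Editor’s note: the scaling point just made, that each added node contributes its own RAM cache to the aggregate, is straightforward multiplication. The per-node cache size and cached-IOPS figures below are hypothetical, not DataCore specifications.]

```python
# Aggregate cache scaling across nodes: each node contributes its own RAM
# cache, so cache capacity and cacheable IOPS both grow with node count.
# Per-node figures are hypothetical, not DataCore specifications.

def aggregate(nodes, cache_gib_per_node, cached_iops_per_node):
    return nodes * cache_gib_per_node, nodes * cached_iops_per_node

for n in (2, 3, 4):
    cache, iops = aggregate(n, 16, 100_000)
    print(f"{n} nodes: {cache} GiB cache, {iops:,} cached IOPS")
```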

So, what I’m going to do: I’ve got this Windows Server 2016 VM running, and you can essentially see, you know, we’re doing some activity, CPU and disk activity. I’m just going to go ahead and migrate this over. Let’s leave that guy right there, and we’re going to start the migration process. And we’re going to go – actually, let me show you. This should be on .40. Nope. So, we’re on .39. So, we’re going to migrate this to the other host, to .40.

David: So, are we using vSphere Storage vMotion here to migrate a virtual machine’s disk file from one datastore to another, or – what are we doing exactly?

Steven: We’re not doing Storage vMotion, just vMotion. We’re basically saying the execution and the run time need to happen on the other server. All of the storage for this server is sitting on a virtual disk that we presented via DataCore, which allows, you know, either of the hosts to have access to it. So, remember, we’re mirroring the data across; we’re creating that redundancy and fault tolerance. So, we essentially can transition that virtual machine without having to move the files, because they’re actually available to both hosts.

David: Very nice.

Steven: So, we’ll go ahead and we’ll say finish, and then you can see the virtual machine is still running in the background; and here in a moment, we should see the host change. Any moment, once it gets completed. There it is, and now we’re running on .40. Everything’s still running. Everything’s still up. We had no issues. We literally just transferred a virtual machine from one host to another, and the reality is all of that is running on storage local to the vSphere servers themselves.

David: Very nice. Cool demo. I love to see live demos in action. Even better when they work the first time. Cool stuff.

Steven: I prayed to the demo gods before we got started, and they smiled upon me.

David: Nice. Another question that came in here. They’re asking about deduplication, compression, and thin provisioning. I mean, these are features that a lot of times you use on a SAN to save storage capacity. Are those available, and if so, how does that work with the licensing model?

Steven: So, it’s all included. Now, we don’t have built-in deduplication today, but we do have compression capabilities. We are looking at expanding those, and we’re going to continue to improve the platform itself; but everything that we do today is all included in the existing licensing. So, the question was about dedupe, compression, and there was a third one. What was it? My mind just went blank.

David: And thin provisioning.

Steven: Thin provisioning?

David: Yes, thin provisioning.

Steven: So, we natively support thin provisioning today. We can do thick and thin provisioning, and again, all of those are, you know, included capabilities, not extra licensed features.
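[Editor’s note: a minimal model of the thick-versus-thin distinction just mentioned: thick reserves the full virtual disk size from the pool at creation time, while thin consumes pool space only as data is written. This is an illustrative sketch, not DataCore’s implementation.]

```python
# Thick vs. thin provisioning in miniature: thick reserves the full virtual
# disk size from the pool up front; thin draws from the pool only as data
# is written. Illustrative model only.

class Pool:
    def __init__(self, capacity_gib):
        self.capacity_gib = capacity_gib
        self.used_gib = 0

class VirtualDisk:
    def __init__(self, pool, size_gib, thin=True):
        self.pool, self.size_gib, self.thin = pool, size_gib, thin
        self.written_gib = 0
        if not thin:
            pool.used_gib += size_gib    # thick: reserve everything now

    def write(self, gib):
        self.written_gib += gib
        if self.thin:
            self.pool.used_gib += gib    # thin: allocate as data lands

pool = Pool(1000)
thick = VirtualDisk(pool, 100, thin=False)   # pool usage jumps to 100 now
thin = VirtualDisk(pool, 100, thin=True)     # pool unchanged until writes
thin.write(10)
print(pool.used_gib)  # → 110
```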

David: Very nice, very nice. It looks like we’ve got seven minutes left here in the event. I’m going to pull up this slide that has the additional resources that are available for everyone out there. You can learn more about DataCore’s virtual SAN software at that link there on their website. You can request a live demo or download a free trial; and in fact, if you go to the handouts tab on your interface, you can click to view the hyperconverged virtual SAN web page directly from there, as well as a number of other resources that Steve’s selected for the event today. So, we’ve got a few minutes left for questions. If you’re out there and you have more questions, now is the time to ask.

One of the questions that came in here, Steve, they’re asking about, let’s see – what about integrated backup and recovery? Is that something that’s built into DataCore, or do you partner with existing data protection solutions? How does that work?

Steven: So, we do have some partnerships with some data protection vendors out there. The reality is there are some aspects, like GDPR compliance, that people may want to take advantage of; but the reality is we do have a native capability. You know, I was talking about that asynchronous replication. We have the ability to ensure that the data is available; and with some of the integration components we have, like the Storage Replication Adapter, you can make sure that your data’s backed up, it is available, and can be, you know, transitioned from site to site where necessary. Or, if you need to take a snapshot of your existing data and restore it, we have a lot of those capabilities built in.

David: Can you put a NAS [filer] in front of the DataCore vSAN hyperconverged cluster? Is that possible?

Steven: Yeah. So, on the DataCore server, we actually have some recommendations on how to do this; but you could just leverage the Microsoft native NFS services and directly, you know, present that on – say, in this scenario – the HVSAN server, the VM that we’ve created. You can literally just enable the NFS services on that and present file shares out via that. Or, if you wanted to, you know, pick your favorite NAS software, install that, present the storage from our solution to that server, and then leverage the NFS shares that you created through that solution.

David: Another good question here. Do the nodes in the cluster have to be identical?

Steven: They don’t have to be identical. So, you can have different memory, different CPU, different disk types. There are some limitations from a hypervisor standpoint, obviously; I highlighted them maybe very quickly as we went through. I did put these into a cluster because they’re not identical; and actually, I’ll show you here. We should be able to see this in the web client, if I remember correctly. I have different CPU types in each of these servers. So, this is a Sandy Bridge Intel CPU, and then I believe this one is an Ivy Bridge. So, because of that, to be able to vMotion, I had to create a cluster; but we don’t have a requirement for the nodes to be the same or identical.

David: And then another question here is about like getting started. Is there an evaluation version or kind of a version that could be used for proof of concept?

Steven: Absolutely. So, we have a 30-day free trial. When you go to our website and click download, you can download a 30-day free trial, and then you can work with us. If you have to extend your proof of concept, we’re, you know, happy and willing to work with you; but we do have that free trial that’s ready to download and available.

David: And when it comes to migration, if someone is interested in moving from an existing SAN to DataCore, how does that work? Are there services to help them? Are there tools to help them?

Steven: It’s a great question. So, the reality is it’s simply just connecting your existing shared storage to, you know, whatever you have deployed from a DataCore server standpoint. So, you can literally deploy DataCore software on an x86 server, you know, install Windows, install DataCore, and then connect that SAN to it; and now, DataCore is your storage control plane. Or you can connect it to, you know, let’s say you deployed it like I did here, and you want to connect that shared storage to it. All you simply have to do is present that shared storage to the DataCore server, and then we will take that and allow you to – say you wanted to do what we call an evacuation, where you move all of your data from one pool to the other – then you simply just say, hey, I want to do so.

And I want it to go from where my shared storage server is to, you know, maybe my x86 hyperconverged box, and I want all of that data to transition over there. We allow that because you’ve just connected that shared storage into us, and then we are the control plane that allows the transition of the data across those different storage pools.

David: Well, I think that’s all the time we have for live questions and answers. We do have an Amazon $300 dollar gift card to give away. The winner of that gift card is Bryant Godfrey from Idaho. Congratulations to Bryant. We’ll reach out to you to deliver your gift card. Great presentation, Steve. Thanks for being on the event today.

Steven: Thanks for having me. I appreciate it very much.

David: Thank you everyone for joining us today. If you have questions about the DataCore hyperconvergence solution, you can email info@DataCore.com or visit DataCore.com. Also, check out the resources there in your console. Thanks again, everyone, and have a great day.