Learn how DataCore SANsymphony lets you achieve true high availability by using any and all Fibre Channel, iSCSI, or DAS storage to create a single storage pool, from which you can create any number of synchronously mirrored virtual disks.
I want to thank you for joining us today for this webinar, Fact or Fiction – Using the Same Storage Vendor to Achieve High Availability, presented by DataCore. Today’s presenter is Steve Hunsaker, director of solution architecture at DataCore.
Steve Hunsaker: Thank you, Whitney, and we appreciate all that you have done for us. We’ve been looking forward to this and are excited to talk with you for the next 45 minutes about this Fact or Fiction, and I’ll just let the cat right out of the bag right now – of course, it’s fiction. [Laughter] I can remember distinctly the time I was in my data center leaning up against a wall, looking at all my infrastructure, all my assets, and I started thinking, well, I’ve got redundant switches and redundant routers, and yet my data isn’t redundant, and that’s not very good. So, it is fiction, and I hope, in the short amount of time we have, to explain why that’s the case, what DataCore does, how we pull that off, and some of our features and capabilities. I obviously have a PowerPoint presentation, which I have to do. [Laughter] But I do have a whiteboard behind me, and a camera that will change to it, so we can kind of whiteboard this out together. Please let me know if you have any questions; I will answer them in real time, and also please understand you can always email me at Steve.Hunsaker@datacore.com
All right. Let me just get started and jump right in. With this fiction that I don’t need to have like-for-like, I thought it would be best to start with our deployment models, and talk about everything that we are capable of doing and the flexibility that we have as a product, and what it is that we do and who we are, and that will come about as we go through this.
You probably all have storage arrays, and they may be Fibre Channel or iSCSI, or they may be NFS. SANsymphony is a block storage product, so we really focus on Fibre Channel and iSCSI. Starting on the left, we have a traditional storage model where you typically have a storage array inside of your data center, and most of the time – I would say 99 percent of the time – storage arrays are going to demand that you purchase the exact same make and model to get your data into another location. And that’s why the title of this webinar is Fact or Fiction, because what we’re proposing is that you don’t have to do that. Now, that might sound a little old-fashioned, and it is, because we’ve been doing it for 21 years. But as we go through these deployment models, I think you’ll see how we’ve evolved, how the industry has evolved, and how that’s really only magnified our ability to offer some pretty unique features and capabilities to the industry.
Then we have Converged, where I can take local storage, build an enterprise-class SAN out of it, and offer Fibre Channel or iSCSI out. Then I have hyper-converged, which is this buzzword lately – you’re probably all sick of hearing it, but it’s really history repeating itself, with applications returning to local storage. And then hybrid-converged, which is something we propose as a new thought: not only can the applications return, with storage maintained on the same host as all the other guests, but you might still have other physical servers, in the form of Linux or Oracle or Unix, or maybe a Hyper-V cluster, or another VMware cluster, that we can also present disks to. I will talk more about those later. But those are some deployment models to think about as we roll through this.
One of the things I think – and this is a famous saying of mine – is that really, DataCore is to storage as VMware is to servers. I can remember distinctly being anxious, nervous, hesitant even, to go from a physical server to a virtual server, and that might sound so 15 years ago, but honestly, it’s taken a little bit longer for the industry, and for yourselves, to recognize that storage can also be virtualized. What I thought I’d do is point out some of the advantages that server virtualization brought to my table, and some of the reasons we would never go back. We all love the fact that we can spin up a server virtually that has mobility, redundancy, and high availability – all of these advantages come to the table and are real lifesavers, and we wouldn’t have it any other way.
And I see the same thing happening with storage. I really wouldn’t want it any other way, and those exact same reasons are the reasons I would virtualize my storage. Now it’s virtual disks instead of virtual machines that you have the ability to move around: virtual disks can be replaced and moved while the storage underneath is being replaced, upgraded, or added to. The mindset there is that we always have to increase capacity – data is always growing, there will always be better, faster storage, and the industry will always push performance. We tend to use marketing terms like avoiding vendor lock-in, but it really does allow us to be more flexible: to take on new storage underneath while providing you the five-nines, the six-nines, of storage availability while your storage is under maintenance or being replaced, because of what we’re bringing to the table.
So, this is it in a nutshell, and this is really probably the last slide we’ll spend a lot of time on, but I want to explain the framework of this slide, starting from the bottom and working my way up. Then I’ll whiteboard the very same thing, so you can see through pictures, diagrams if you will, how this plays out. DataCore SANsymphony is the name of our product. We are a software-only play. We do have an appliance, but really, we are a software company, and the name of the software is SANsymphony. SANsymphony brings the orchestration of different storage arrays to the table, whether they be NVMe – and if I’m a good presenter here, I’ll get my little marker out – whether they be NVMe or Fibre Channel, iSCSI, SAS or SATA, or the cloud. Let’s make sure we take those slowly.
I can take NVMe inside of a server, a Fibre Channel LUN from another array – or any number of Fibre Channel LUNs – an iSCSI LUN, a SAS or SATA type of disk inside the box, or a JBOD that’s connected, as well as a cloud gateway device, and I can leverage all of those storage protocols and bring them all together in a single, aggregated pool. And that’s an interesting concept in and of itself. If, for example, I have five terabytes of NVMe, a five-terabyte Fibre Channel LUN, a five-terabyte iSCSI LUN, and a five-terabyte SAS JBOD shelf, I’m saying that I can create a single 20-terabyte pool from those four contributing members. And then, inside of that pool, I can start leveraging features and functionality as we go up this slide. I can auto-tier the blocks of storage inside of that pool, promoting and demoting little storage allocation units, as we call them, up and down the tier stack as they’re read. We use an algorithm based on read frequency, and we’re promoting and demoting up and down that tier stack.
And so, that’s why, traditionally, you would never, ever dream of combining an NVMe type of disk with a 5400 RPM SATA drive, but with us, you can, because of that real-time auto-tiering. We’re doing it in real time; it’s not on a schedule or anything like that. But here’s the point, and let me stay focused on the fiction answer to the title of our discussion today. Regardless of what you have, I can synchronously mirror that pool, utilizing something we call a virtual disk. So, I carve out a virtual disk on top of that pool, and now – I’m not a big fan of calling out vendor names, but if I have a Fibre Channel LUN or LUNs, or likewise iSCSI LUNs, and I’ve aggregated them all together, even with some internal disk – that represents a pool that I can now mirror to another pool that is made up of perhaps entirely different storage. That’s the point, and the answer, and the reason why the title we’re calling out is absolutely fiction.
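The real-time auto-tiering described above – promoting and demoting storage allocation units by read frequency – can be sketched roughly like this. This is a toy model only: the class name, thresholds, and single-step promotion are my illustrative assumptions, not DataCore’s actual algorithm.

```python
# Toy model of read-frequency auto-tiering across a mixed pool.
# Tier 0 is fastest (e.g., NVMe); higher numbers are slower (e.g., SATA).
# Illustrative sketch only, not DataCore's actual implementation.

class StorageAllocationUnit:
    """A small, fixed-size chunk of a pool that can move between tiers."""
    def __init__(self, sau_id, tier):
        self.sau_id = sau_id
        self.tier = tier
        self.reads = 0          # read count within the current sampling window

def rebalance(saus, num_tiers, hot_threshold=100, cold_threshold=10):
    """Promote frequently read SAUs toward tier 0; demote idle ones."""
    for sau in saus:
        if sau.reads >= hot_threshold and sau.tier > 0:
            sau.tier -= 1                       # promote one tier up
        elif sau.reads <= cold_threshold and sau.tier < num_tiers - 1:
            sau.tier += 1                       # demote one tier down
        sau.reads = 0                           # start a new sampling window
    return saus

# A hot SAU sitting on SATA (tier 2) and a cold SAU on NVMe (tier 0):
hot = StorageAllocationUnit("sau-1", tier=2); hot.reads = 500
cold = StorageAllocationUnit("sau-2", tier=0); cold.reads = 2
rebalance([hot, cold], num_tiers=3)
print(hot.tier, cold.tier)   # hot promoted to 1, cold demoted to 1
```

The key point is that the decision happens continuously on read statistics, not on a schedule, which is what makes mixing NVMe and 5400 RPM SATA in one pool workable.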
So, lots of features and functionality; I don’t want to spend a lot of time there, but just know that we have some pretty cool features. Then, as we go up and talk about our access methods, I can take these virtual disks and present them out as Fibre Channel, iSCSI, NFS, or SMB, and that way, all of our external hosts are able to use the storage that we’re presenting. So, let me clear my drawings and go back a couple slides, and review the deployment models again, to make sure we understand where we’re going. If I could start on the far right and pay note to the hybrid again: what we’re saying, now that perhaps you understand a little bit more about what we’re doing, is that not only can DataCore reside as a virtual appliance on, say, a VMware hypervisor – bringing in all the Fibre Channel LUNs and iSCSI LUNs from existing storage in your environment, utilizing all of the local disks and the JBOD attached to that server, pooling it all together, and then providing storage to the ESX cluster I’m standing on and the virtual guests residing on that same box – but I am also capable, at the same time, of presenting a Fibre Channel, iSCSI, NFS, or SMB disk out to external hosts. So there’s your hybrid approach.
Now, let me jump to the whiteboard – and make sure I’ve changed the camera appropriately here – and start drawing some things out for you, explaining how we pull off this synchronous mirror. You should be able to see the drawing here on my whiteboard. So, here we go. I have an array, and it’s Fibre Channel, and keep in mind, that could be a whole bunch of LUNs. I have another array, and it might be iSCSI, and I might have some local NVMe, and I might have some JBOD attached, as the different storage that I can have in my purview. I have a host, and we’ll just call it DataCore for now. I don’t want to get too in-depth here, but that could be an ESX host with us as a virtual machine, or we could just be a node unto ourselves in a more traditional aspect. But the point is, I can take each of these, pool them together, and, like I said, mirror over to another side.
So, let’s build these two nodes out. Now I have DataCore 1 with all of its storage, and DataCore 2 with all of its storage. It matters not – they don’t need to match – remember the fiction. Then I can use mirror paths, and hopefully they’re redundant, and these mirror paths are either Fibre Channel or iSCSI. I’m drawing two, because certainly they can be redundant; certainly, we want you to have that. There can be more; they can go through a switch, a fibre switch or an Ethernet switch, or they can be direct-connected from node to node. So, what are these nodes? These nodes are any x86 server. Really what we’re saying is, you bring a Lenovo, Supermicro, Dell, HP – on and on – to the table, your server choice, and install DataCore on that node, and we now have two storage controllers that are aggregating all the storage underneath and capable of mirroring.
Now, those are the paths. We now create what’s called the virtual disk, and I’m going to show this in our GUI here in one second. We now have a virtual disk that is known by both of these nodes. In other words, we’ve now split the two storage controllers. So, this mirror path might span a certain distance, and we do want latency of less than five milliseconds on those mirror paths. But the point is, now my data is split, and I have true HA – active-active, from site B to site A, or whatever. Yes, we do have customers that have these both in the same rack, in the same cabinet, in the same room, but we also have customers where they’re in different rooms, or different buildings on the campus, or different cities. We’ve split the data, and we have true HA. A synchronous mirror is really, if you can think of it this way, RAID 1. It’s a true mirror: whatever happens to this disk is written to both sides. Remember that I said this disk can be served out as Fibre Channel, iSCSI, et cetera.
Well, if I have an ESX cluster that I’ve served that to, it will show up in ESX as a datastore. And that datastore, when written to, will use one of the three MPIO path selection policies – round-robin, most recently used, or fixed path – and it will decide the path to take. But ESX absolutely sees all of the paths that I’ve given to it for that datastore. So, in a healthy environment, when all the paths are present and everything’s active-active, the write might come down, let’s just say, the path on the left, to DataCore 1. Well, here’s where we start getting into the nitty-gritty and get a little technical on how we maintain the synchronous mirror. That write is received in RAM. Now that’s a big deal, because some of our competing products out there use disk, or some form of disk, to cache their writes and their reads. We use RAM to cache writes and reads. That write gets sent over one of the mirror paths and is received on the other side in RAM, and then we send a T10 ANSI-standard SCSI acknowledgement back in the reverse process, over the mirror path, back to the ESX VM that issued the write.
So, we’re absolutely mirroring the write to both sides in that active-active manner. In the case of a disaster striking – regardless of what happens to any component on the left; let me take out my red pen – regardless of the disaster, and whatever component therein fails, ESX sees those paths as dead. Or maybe the paths are fine and the storage underneath is the problem; the point is, you have another active side, with that synchronous mirror, or RAID 1, true availability on the other side. So yes, you have to have storage here and storage here. If you need 100 terabytes of capacity, you have to have 100 terabytes here and 100 terabytes here. That’s how we pull off this fictional aspect of not requiring the same storage vendor to be on both sides.
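The write path just described – land the write in RAM, forward it over a mirror path, wait for the partner’s SCSI acknowledgement, and only then acknowledge the host – can be sketched like this. The class and method names are invented for illustration; this is not DataCore’s API, just the ordering of the steps.

```python
# Sketch of a synchronous (RAID 1-style) mirrored write between two nodes.
# Both copies must be in RAM cache before the host sees an acknowledgement.
# Names are illustrative assumptions, not DataCore's actual interfaces.

class MirrorNode:
    def __init__(self, name):
        self.name = name
        self.ram_cache = {}      # write cache held in RAM, not on disk
        self.partner = None      # the other node on the mirror path

    def receive_host_write(self, block_id, data):
        """A write arrives from the host over a front-end path."""
        self.ram_cache[block_id] = data                          # 1. land it in RAM
        ack = self.partner.receive_mirror_write(block_id, data)  # 2. send over a mirror path
        assert ack == "SCSI_ACK"                                 # 3. partner confirmed its copy
        return "SCSI_ACK"                                        # 4. only now ack the host

    def receive_mirror_write(self, block_id, data):
        """A write arrives from the partner over the mirror path."""
        self.ram_cache[block_id] = data
        return "SCSI_ACK"        # T10-style SCSI acknowledgement back

node_a, node_b = MirrorNode("DataCore1"), MirrorNode("DataCore2")
node_a.partner, node_b.partner = node_b, node_a

node_a.receive_host_write("lba-42", b"payload")
print("lba-42" in node_b.ram_cache)   # both sides now hold the write
```

Because the host’s acknowledgement is held until step 3 completes, either side can fail at any instant and the surviving side still has every acknowledged write – which is why the sub-five-millisecond mirror-path latency matters.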
Let me switch my camera and now show you our GUI. All right. I’m going to flip over to our GUI, which I have remote-desktopped into, and in this screen, you’re seeing my three DataCore servers. I have a server group entitled Demo Lab, and it has three DataCore engines, or servers, inside of it. We support up to 64 inside of a single organizational unit, or server group. This is SAN 1, and I’ve taken the liberty of just manufacturing my own demo environment here, so please understand that. But in it, I have physical disks of any nature, and that’s the point I’m trying to prove: I may have a Fibre Channel LUN, I may have an iSCSI LUN, or I might have internal disks present inside of the server. These physical disks can be used to create pools, and that’s the next building block. So, I have these physical disks, and I’m showing you the pools they’re already assigned to, and then I have my pools here, where I have Disk Pool 1 and I have CDP, and these pools are made up of those physical disks.
Now, I’ll get a little bit advanced with you here – I like how flexible and how powerful we are, and I want to demonstrate that – I actually have a disk pool member that is a mirrored pair of a Nimble LUN and a 3PAR LUN. So, we actually have the ability to mirror LUNs, and that mirrored pair of physical disks represents a disk in and of itself inside of the disk pool, which is just incredibly resilient and fantastic. I have a disk pool, and then I have virtual disks, where in this case I have one called ESXi that I’ve created, and as you can see from this screen, it has a number of paths that are active and connected, and it’s showing you who the initiators and the targets are. Which raises the point that DataCore SANsymphony – that server I drew behind me, the server we’ve been talking about – is concurrently an initiator and a target. So, I initiate to back-end systems and bring forth their storage to the table, and then I’m a target to the initiators in the industry, like ESX, Hyper-V, et cetera, and can give them storage.
And that is all created with the notion of server ports. I have actually tried to build a little bit of a best practice here in my own lab, where I have two front-end paths and two mirror paths. And as I’ve told you, I’m cheating, because I don’t have all that storage as back-end in my home. [Laughter] But you would have two back-end paths if you had back-end storage.
So, to be fully candid and to show you how this works, these are all iSCSI, so I’m presenting over 229.1, 91.1, and let me show you where I’m reading from, so you can see – this is the IP address and the IQN that we’re talking about. I’ll go ahead and erase that and switch my pointer back to the arrow, and now we have two front-end paths and two mirrors, and I’m doing that all through iSCSI. If I showed you the physical adapter side, you can see that I actually have – and I like to do this, as I’m a little OCD – Microsoft still hasn’t figured out to put the IP address as one of the columns here, and if anybody knows how to do that, let me know. But I just name them with their IP address and their name, so that I can keep track of them all. Indeed, I have six NICs that are all IP-based, and I’ve named them after their roles and their functionality. So, that’s showing you the plumbing of how all of this takes place, and how we’re able to pull that off.
So, these mirror ports are only taking those DataCore-specific writes over that pipe to the other DataCore node or nodes, to provide that RAID 1, that synchronous mirror approach, so that if any one of these were to fail, the data would still be available. On a virtual disk, I then have a lot of the features and functionality from that chart we showed you; really, the bulk of my feature set is right here with the virtual disk. I can create other virtual disks. I can split and un-serve – because they’re a mirrored pair, that allows me to split, or divorce, them and make them two single entities. I can replace the storage and move the storage. I can set up a storage profile – in the industry, we like to use words like pin: I want to pin this disk to SSD, or maybe the opposite – I know this disk is never really going to be accessed, so I want to pin it to the lower, archive tiers.
I can turn on CDP, which is a fantastic feature that gives me a journal and the ability to roll back to a precise second in time, within a period that depends on some other elements in your infrastructure – I can’t just tell you how many days it will be for you, but let’s say it’s 14 days: you have the ability to revert back to a precise second in time, just like a SQL log recovery. Same exact type of premise, where we have a journal keeping track of all the writes. I can utilize snapshots and take a Polaroid-picture snapshot of that storage. And I can replicate it while it’s being mirrored – I can asynchronously replicate it to someplace at a distance. Maybe Azure or Google or AWS, or Mister Frost’s garage, I don’t know. But the point is, we can replicate it, while it’s being mirrored, to a third location and provide you with a bit of DR.
I can group these and serve them out to different hosts, and I’ll show you that now. So, this ESX virtual disk has some information, and I’ll spotlight that here for a second. You’ll see that it’s up to date – it’s green; let me take out my highlighter here – I have a green up-to-date on SAN 1, a green up-to-date on SAN 2, and I actually have an up-to-date on SAN 3, and you’re seeing that the host access is disabled, because when we do a three-way mirror – which is for all of you mission-critical guys that need your seven or eight nines of availability – no kidding, we can do a three-way RAID across those different storage pools and really make your data highly available. But we don’t do active-active on all three; we do active-active-passive. So, at the point where any one of these were to fail, we become active on that third node. Let me erase my drawings and move on to the next point I wanted to show you.
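That active-active-passive behavior – all three copies kept up to date, but only two serving host I/O until one of the active sides fails – could be modeled along these lines. This is a simplified sketch of the failover decision as described above; the function and the two-active rule are my assumptions for illustration.

```python
# Simplified model of a three-way mirror: active-active-passive.
# All three copies stay synchronously up to date, but only two have
# host access enabled at a time; the third is promoted on failure.
# Illustrative sketch only.

def host_access(copies):
    """Given copy health states (in priority order), pick who serves host I/O."""
    healthy = [name for name, up in copies.items() if up]
    return healthy[:2]          # at most two active copies at once

copies = {"SAN1": True, "SAN2": True, "SAN3": True}
print(host_access(copies))      # SAN1 and SAN2 are active; SAN3 stays passive

copies["SAN1"] = False          # an active side fails...
print(host_access(copies))      # ...and the passive copy, SAN3, goes active
```

Since the passive copy was already synchronously written, promoting it is just a matter of enabling host access – no data movement is needed at failover time.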
So, here in my settings tab, I want to pay particular attention to this, because this is my SCSI device ID, my NAA, and you’ll see that AF29 are my last four. Let me log into my ESX box and show you how that comes across, so you can see that I’m not just making this stuff up. If I can remember my password. Oh no. And you’ll see that I have some storage – while we’re here, remember when I told you that we have iSCSI and all those paths? Let me get my marker out – here are all of the front-end ports. The reason we have six is because I’ve got two front-end ports coming in from each of three DataCore hosts, so again, with that three-way mirror, ESX sees them all plumbed. It’s capable of using four of those six – or really, all six; DataCore just doesn’t negotiate the host access on the passive side – but there are all six paths, so I’ve built complete redundancy, and that’s how iSCSI is set up. And then, on the storage itself, you’ll see that the NAA number ends in AF29, just the same as what I showed you on the DataCore console – AF29.
So, that’s the way we take physical disks. Just as a brief review: physical disks – any Fibre Channel, iSCSI, direct-attached, or internal disks – become physical disks in our world. I am then capable of taking any combination of those physical disks and creating pools; once I have a pool, I can create virtual disks on top of it, with an incredible amount of features inside of each virtual disk. I’m plumbed via server ports, via Fibre Channel or iSCSI, with my mirrors, my front-end, and my back-end; my configuration is all redundant and solid, and I’m mirroring that over to other nodes within that server group.
Let me go back to my PowerPoint and just really quickly recap: we are claiming that you shouldn’t have to be locked in to a vendor, and you should feel free, in this day and age of 2019, to utilize different storage. One of the great points to think about – we had a customer who was using all the same storage, and that storage provider issued a firmware upgrade; the customer did the upgrade and brought down all their arrays, because the upgrade failed. So if you’re in the business of making your data mission-critical and highly available, it’s actually wise to have different storage in your environment, so that when that kind of scenario happens, you’re not bringing it all down.
So, it is fiction. And we have these deployment models – there really isn’t a deployment model that we can’t do. I can take local disk and external arrays and serve it up and down, left to right, side to side – it doesn’t matter. We are storage virtualization at its finest, and we certainly have learned – we would never go back to having physical servers, and I honestly will never go back to having a physical storage array. We have advantages that we’re aware of for both sides; we have a complete enterprise-class stack of features; we’ve explained the storage protocols from the bottom all the way up to the consumers at the top; and there are no special agents or anything like that that need to be installed on the bare-metal hosts or hypervisors, because we’re providing an industry-standard SCSI block disk.
And maybe lastly, the top of this slide is showing a typical picture of an array, where I have two controllers, and both of the controllers are in the same sheet-metal chassis. And certainly, as this red area here explains, I can’t separate those controllers. I wouldn’t be in my right mind if I tried to break those apart. But yet, those storage controllers call themselves HA. I don’t know that I would call it that. I’d call it fault-tolerant, because certainly you have two controllers. But those two controllers are using the same shared backplane of disks, so your single point of failure is at that disk level – and yes, it could be RAIDed, but it’s still all in one place. So, if you want the complete opposite of that, to absolutely split those controllers at a distance and keep them at the very least in different rooms, DataCore is your answer.
And certainly, it’s pretty thought-provoking and flexible to think about: as new storage comes out, as I need more capacity, I can plug things into a server, I can hang off a JBOD, I can go get another array. We’re not controlling you or telling you what storage to purchase; understand that we’re giving you the software that allows you to make those business decisions and save yourself quite a lot of money by purchasing storage that does not have software on it. Because let’s face it, most of the storage array vendors in our business don’t even make the storage – they’re software companies just like us, that have married their software to the hardware it’s built on, and they may only provide a few storage services. So look at the flexibility there, and certainly, when you have HA in two different places, you don’t have a single point of failure; you have that RAID 1, where if I lose one side, all is not lost, because I still have the data on the other side.
We’re drawing to a close here. I probably am a little early, but I want to thank you for joining us, and I’d like to ask Whitney if – I didn’t see any questions pop up. Whitney, did you see any questions arise from our audience?
Whitney: We did have some questions about contacting you and follow-up and DataCore information, so I handled those in the question box. But specific questions related to technical things that I cannot answer? No.
Hunsaker: Okay. Well please, those of you who are still on, and those of you who will be watching this later, please feel free to reach out to me. My email address is Steve.Hunsaker@datacore.com. I’m always happy to engage and discuss and come up with different creative ways to think about storage, and certainly how you’re taking care of it in your environment, learning from each other.