On Demand Webcast
58 min

RSD Improves Veeam Backup Performance by 3X and Makes ERP System Lightning Fast

Webcast Transcript

Sander Puerto: Hello, good morning and good afternoon everyone, and thanks for joining us today. My name is Sander Puerto. I am a senior product marketing manager here at DataCore, and I’ll be your moderator for today. We would like to welcome you to this webcast, where you will learn how RSD was able to improve their backup performance by 3x and also make their ERP system lightning fast.

We’re going to talk quickly about some housekeeping points before we begin our presentation. Let’s review the agenda. Well, we’re going to do the introductions. We’re going to talk about the problem. We’re going to talk about the solution, the outcome, and the results that RSD was able to accomplish. Then towards the end we’ll talk a little bit about what software-defined storage does and why it should be something you at least evaluate or be thinking about, and we’ll see what our upcoming events are. And also we’ll have an open Q&A if you have any additional questions at the end. Also, be aware that we are giving away a $200 Amazon gift card. We’ll be picking a winner at the end of this webcast, so I highly recommend that you stay throughout the webcast so you can find out if you are the winner.

On the panel today, we have Alan Kerr, who is the senior solutions architect at StablePath; and Jim Barnes, director of IT at RSD; and obviously myself. One other thing I wanted to mention here is, as we are going through all of this, be aware that there is a download section here, or an attachment section, where you can download the case study for RSD and can also download the webinar slides that we’re going to go through.

As we go through the slides, feel free to start asking your questions. Either Jim or Alan will be more than happy to answer your questions as they pertain to their specific success story. Also, once the webcast is finished, you will be receiving an email from BrightTALK that will have an on-demand link that will allow you to watch the replay, if you choose to do so. Now let’s go ahead and quickly jump into who RSD is. For that, I would love Jim—Jim, are you there?

Jim Barnes: I am here.

Sander: Jim, if you can tell us a little bit about RSD?

Jim: Yes, my name is Jim. I’ve been with RSD for about 17 years. We’re a refrigeration supply and HVAC wholesaler with 79 locations across the 10 western states, including Alaska. This company has been around for 112 years; it started off as a piano manufacturing company and in the 30s moved into refrigeration. Yeah, we’re expanding, growing. We just recently moved into Colorado a couple of years ago. With the technologies that we have, we have been able to expand into different markets and stuff like that. We’re a family-owned company. Again, I think the owner now is the fourth generation. I’m glad you guys are out there to hear our story.

Sander: Absolutely, thank you. Excellent. That is quite a change, going from being involved with pianos and now with HVAC. So, that is very impressive. Excellent. So now let’s turn it to Alan Kerr to tell us more about StablePath. Alan?

Alan Kerr: Yeah, so we’re an engineering-focused firm. We specialize in high-performance, resilient infrastructure. The founder of the company is a former IT director. He started it around the year 2000. We’ve been doing high performance and resilient infrastructure for corporations of all sizes, but typically mid-sized enterprise is where we spend most of our time. We’re headquartered in California, but we do work nationwide for anyone who basically needs it. That includes enterprises, government, charities, and so forth.

Sander: Excellent. So your main office is based out of California. Do you have any other satellite offices?

Alan: No, basically because engineers are always mobile. Even though the office is in California, the work is always on customer sites. That could be anywhere in the country. So we don’t have particular hubs anywhere.

Sander: Excellent. Thanks for that, Alan. Now that we know exactly who our panelists are today, we’ll quickly jump into the story of how all of this came about, how they were able to find the solution that was the best fit for their specific situation, and what they were able to gain the minute they deployed this solution into their environment.

To begin, the problem as I understand it was threefold, right? There was an issue with not enough performance for the upcoming project. You also needed some level of flexibility to be able to make this happen. At the same time, you had your ERP system that needed redundancy. Jim, if you could tell us more about this, since you were involved with finding the solution to these problems. Tell us a little more about what happened at this time.

Jim: Yeah, about 8 or 9 years ago, we really wanted to get into the virtual world. With the way our storage was configured, we had a Compellent. We had an EMC, and we were using EqualLogic storage at the time. The Compellent was what we were using for our enterprise storage solution for our ERP system and Exchange. That storage, as limited as it was, did allow us to run our ERP system and Exchange, but to expand out and start running a virtual environment, we really needed to expand the amount of storage we had.

What we found was we ran a lot of our basic storage on the EqualLogic, which is really slow. One of the other issues we ran into was that with the Compellent you had dual controllers but only a single storage backend. So if that storage went down, or—we even had situations where one of the controllers failed and it locked up the other one. So it essentially took our tier-1 storage offline at times.

So we started looking at a solution that was going to give us the speed that we needed, the volume of storage that we needed for our ESX environment, and also the redundancy that we wanted so we could actually have a failure on one storage appliance and it wouldn’t affect production at all. So Alan came out and presented what DataCore had. It took us a little while to see and understand what it was all about, but once we started moving over to it, the transition was very simple. It gave us a ton of flexibility. So we were able to have two separate storage appliances that are in a mirrored configuration. We could actually lose one DataCore appliance, and the system would still stay up and running.

The other thing it’s allowed us to do is move into the ESX world and store the VMs on the storage. The ESX server would actually connect via fiber to both of the appliances, to both of the storage arrays, and to the ESX environment it still looked like only one appliance. So it didn’t matter if we shut one appliance off. The other one would automatically, without a glitch, stay up and running and keep our environment up and running. So it gave us a lot of flexibility, for instance, to do updates to the DataCore servers.

It also gave us the ability to put whatever storage we want on the backend and serve it up—since it’s just an application-based appliance, you can actually assign any storage to it, and it will handle it all the same. With the caching and all that running through the DataCore server, it also sped up slower storage. We were running spinning disk at the time, and it would run a cache and allow it to seem much faster than it really was.

Alan: I was going to add something to that. I worked with Jim’s predecessor on the initial DataCore implementation, and they were coming off an EMC—they’d had EMC for some while. Let’s just say it wasn’t a happy experience as a general rule. Every time there was a problem, they had support issues. They had availability issues. There were numerous problems with that.

At the time, when introducing the first DataCore nodes, their storage was very siloed. You had some stuff on EMC, some stuff on Compellent, some stuff on EqualLogic. Not only are you managing 3 different data sets, but the ability to move data between those different storage platforms was nonexistent. They would basically shut the thing down and copy everything over. So that in and of itself caused downtime when a decision was made, for example, to take stuff off EMC and put it on the Compellent. So that was the other reason for—or that was the other problem they were trying to solve: how do we get out of a silo model where I’ve just got storage and interfaces all over the place?

Sander: Yeah, excellent. We’ll give our audience a little more meat on that story. Also, the project at that moment—you guys were trying to move away from physical servers and basically make something more condensed and bring that down to hypervisors, right? So all those 3 different silos of storage were not the most optimal options for what you were trying to do. Is that correct?

Jim: Yes. The amount of storage you need to run a virtual environment is a lot more than would have been cost-effective on something like a Compellent. With DataCore, you can use less expensive storage while staying redundant and giving yourself the amount of storage you need to be able to run a virtual environment. When you’re running 100 servers, that takes up a lot of storage. With the Compellent, the cost involved would’ve been more than the company really would’ve wanted to pay.

Sander: At that time, did you guys already have Veeam backups in place?

Jim: No, that was way after. That was after we virtualized our environment.

Sander: Let’s move on to the solution piece, where I know Alan was involved in those discussions. So tell us more about how you were seeing this, Alan, when they came to you and said this is what we’re trying to do and this is what we’re trying to accomplish with X amount of budget. What was your thought process at that time?

Alan: It was probably the reverse. I saw them struggling and pointed out it doesn’t have to be this way. Over time—I mean, initially there was—to be fair, there was skepticism because EMC is a big name. Compellent is a known quantity. It’s a name. DataCore wasn’t a name like those at the time. It was really a proof-is-in-the-pudding approach, so we started small. It was like 8 terabytes mirrored, I think.

It was a small initial build that we did. It was basically so successful that we migrated the production ERP system over to it, but we hadn’t told the boss. It was running on it, and he eventually came in and said, okay, right, everything looks good. We can move over to the ERP system. And we pointed out—that was actually Jim and I that did the move—oh, it’s already been running on that for the last day or two. He just said good and left.

So it was really a matter of them looking at what they went through on a day-to-day basis, speaking to some of their administrators, and understanding what problems they were going through with availability, support, and performance, and then constructing, we’ll say, a bite-sized package to start off. Then from there it’s just grown into—it’s just how RSD does storage from now on.

Sander: As far as hardware, what did you choose to go with at that time?

Alan: The initial install was Hewlett-Packard servers with MSA arrays on the backend. Everything was—the MSA arrays didn’t have any controllers. Basically they were JBOD-type storage. That was in the initial run. That ran for what, seven years? So the initial one ran for probably 7 years. We didn’t have any outages in that time. Then in the last probably year-and-a-half maybe—

Jim: Yeah, about a year ago.

Alan: Yeah, probably about a year ago we then moved on to Lenovo for the servers and Violin storage on the backend, which is an all-flash environment.

Sander: So you’re saying the HP lasted that long?

Alan: It’s actually still in production.

Sander: Wow, talk about reliable.

Jim: Yeah, we moved the production hardware we were using over to—so, let’s go back a little bit, because we talked about what we have here in our production environment, but we also have 2 DataCore servers in Peoria, Arizona, and we replicate all of our data from here to Peoria. And it actually does it in real time.

As an email is being written or an invoice is being written, that data is being written immediately over to our disaster recovery site. We actually have two over there that are mirroring each other. So we have two of them here in production mirroring each other, and we have two in Peoria that are mirroring each other. And we just use that hardware that we were using in production and moved it over there, and it’s working great.

Alan: So, the initial install had 2 nodes on the production side, which were the HP nodes that we just referenced. On the remote side was old Dell equipment that was taken out of service on the production side and moved over there. So we found that if we’re doing upgrades or anything like that on the remote side, then obviously the disaster recovery would stop during those scenarios. So the decision was made, when we were replacing the production-side nodes, to take that hardware that obviously was still serving everything right up until the point we swapped it, and put that on the disaster recovery side. So now as we do updates on either side, we’re not actually taking down the environment. The environment stays up 24/7, regardless of whether we’re doing maintenance or anything, pretty much.

Jim: The other thing that I think has made my life as the administrator easier is—you know, just like any software, just like any type of server, the servers have to be updated. These do run on Windows, so we’re running Windows Server 2016 to run DataCore. We have to do Windows updates, as we all know. We have to do DataCore updates. We can do these updates, and we actually do them during the day. We will take one of the servers offline, stop the DataCore service, and the server environment has no clue. The ESX environment has no clue. Our ERP system—nothing even realizes that we take one of these nodes offline when we do it. We’ll do it during the day, do our Windows updates, do our DataCore updates, and bring them back on. It synchronizes back up, and you’re off and running like it never happened. This really limits the amount of time your staff has to be there after hours working longer hours, too. So that’s another real benefit for my staff, that we don’t have to work as many late nights.

Sander: That’s a good thing, right? A huge plus.

Alan: I mean, we’re coming up on 9 years of production storage uptime.

Jim: Yeah, in the time that we’ve had the storage arrays, both nodes have never been down at the same time. So storage is always being served.

Sander: Great. So you have 2 copies right now on production, and you also have 2 copies in DR. Is that right? So you have 4 copies of your data?

Jim: Correct.

Sander: In addition to backups?

Jim: Yeah, and then we’re also using Veeam to back up to disk and then going from disk to tape.

Alan: Then we have continuous data protection.

Jim: And then we have continuous data protection, which runs on DataCore. What continuous data protection does is it allows you to do, for lack of a better term, rollbacks. It gives you a period of time—it depends on how much storage you throw at it. It’ll allow you to have a day to 5 days, or whatever you throw at it, where you can actually take a timeline, roll it back, and create a snapshot copy of that volume from that point in time and restore from that. For instance, if ransomware hits and stuff gets encrypted, you can roll it back to before that, and your data is then available to you.

Alan: Then you don’t wind up like Baltimore.
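To make the rollback idea above concrete, here is a toy sketch of a CDP-style journal. It is illustrative only; the class and method names are hypothetical, and this is not how DataCore implements CDP internally. The point is simply that a journal of recent writes, sized by how much storage you allot to it, lets you reconstruct a volume as it looked at an earlier moment.

```python
# Toy illustration of the continuous data protection (CDP) idea described above.
# Illustrative only: names and structure are hypothetical, not DataCore's internals.
from datetime import datetime, timedelta

class CdpJournal:
    """Keeps a time-ordered log of overwritten data so a volume can be rolled back."""

    def __init__(self, retention: timedelta):
        self.retention = retention   # how far back you can roll depends on journal space
        self.entries = []            # list of (timestamp, block_id, previous_data)

    def record_write(self, when: datetime, block_id: int, previous_data: bytes) -> None:
        """Log the data a write is about to overwrite, then trim entries past retention."""
        self.entries.append((when, block_id, previous_data))
        cutoff = when - self.retention
        self.entries = [e for e in self.entries if e[0] >= cutoff]

    def rollback_image(self, volume: dict, point_in_time: datetime) -> dict:
        """Build a point-in-time copy by undoing, newest first, writes made after point_in_time."""
        image = dict(volume)
        for when, block_id, previous_data in reversed(self.entries):
            if when > point_in_time:
                image[block_id] = previous_data
        return image
```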

Sander: That’s excellent. I have a question here from the audience. The question is, “How far away is your DR site from your production?”

Jim: It’s about 350-400 miles. That’s in Peoria, Arizona to Lake Forest.

Sander: What type of connectivity do you have between the 2 sites?

Jim: Right now we’re running a 50-meg MPLS circuit, and we’re using Riverbed Steelheads for dedupe.

Alan: And we’re looking at changing that, too.

Jim: Yeah, I’m actually getting ready to look at a different process of doing that. We’re looking at doing a layer-2 connection between the sites, jumping that up to 200 Mbps, and then putting a larger Riverbed Steelhead in there to give us more throughput.

Alan: What that’ll enable us to do is have VMs run on either side at any time at our choice as we go forward.

Jim: Yeah, we could actually have production stuff running at our DR site, still. So it gives us that true—

Sander: So now it’s a multisite, high availability environment.

Alan: Yeah, that’s what it’s moving towards, the idea being that it doesn’t matter which side a VM happens to be running on.

Sander: Excellent. Another thing: with the bandwidth that you have and the data changes that happen on a daily basis, how is that replication working out?

Jim: Yeah, like I said it actually happens in real time. There’s actually in DataCore a little meter that shows, okay, it’s sending this much data to you, and this is how much lag time you have. There’s never more than 2 or 3 seconds of lag, usually. The only time that you’re going to have that is when you’re starting off a new replication, or for instance when you do take a node down to do updates to it, you’ll notice you’ll be able to watch how fast it’s re-replicating and how much data it is. So that’s when you’ll notice it, but on a normal basis you can look there, and there’s never any lag.

Alan: We had this actual discussion this morning, when we were looking into doing the layer-2 connection and trying to determine how much bandwidth we’re using. In the last 4 weeks, the amount that the storage sent across the 50-meg line, or attempted to send, was just a shade over 20 terabytes. With the dedupe appliances and a few modifications on the DataCore side to make it more amenable to dedupe, we saved 18.5 terabytes or thereabouts on the line. So we only sent 1.4 terabytes in the last four weeks. Because of the dedupe ratio that we’re getting and the efficiency with which DataCore lets it dedupe, that’s what’s giving us that real-time feel—that’s why there’s never any lag on any of the volumes.
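A quick back-of-the-envelope check of those figures, using the approximate totals quoted above (the exact numbers are illustrative only):

```python
# Back-of-the-envelope check of the WAN dedupe figures quoted above (totals are approximate).
attempted_tb = 20.0   # data the replication layer tried to send over 4 weeks
sent_tb = 1.4         # data that actually crossed the 50-meg line after dedupe

saved_tb = attempted_tb - sent_tb        # roughly 18.6 TB kept off the wire
reduction = 1 - sent_tb / attempted_tb   # about a 93% reduction
seconds = 28 * 24 * 3600                 # four weeks
avg_mbps = sent_tb * 8e6 / seconds       # 1 TB = 8e6 megabits

print(f"saved ~{saved_tb:.1f} TB ({reduction:.0%} reduction), "
      f"average ~{avg_mbps:.1f} Mb/s on the wire")
```

Even allowing for bursts, the deduped average works out to only a few megabits per second, which is why a 50-meg circuit can keep replication feeling real-time.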

Sander: I have another question here for you guys related to this same topic. We see a lot of redundancy here with the high availability on-site, the DR solution, the CDP, and the Veeam backups. So maybe, Jim, you can tell us why it’s so crucial for your business, for your organization, to have multiple copies of your data. How is that beneficial to your business model?

Jim: What we do know is there’s a lot of different ways that your data can fail on you. There’s not just one answer for backup and restore. You have a lot of different ways. Like I said, if you end up with ransomware, you’re going to want to use CDP to roll back past it. If you have a situation where something gets corrupted, or if the data is not time-sensitive, you might want to use the Veeam backup to restore.

The other thing is if you have an actual disaster, the data that you have here at your corporate office is going to be—we have to assume it’s going to be gone. And some things you may need to roll all the way back to tape. If you’re looking at archiving—say you have to deal with financials or something like that where you need to get something from 7 years ago that you’re not storing any other way—you have to go back to tape. So there’s what I call the 3-tier approach: you want to have the original copy, you want to have it on disk, and you want to have it on tape. With DataCore, it gives you that fourth option of having CDP, the rollback. Now, that’s something that’s going to be used in a short-term recovery, because you’re not going to get a month’s worth of history. You’re going to get days or maybe a week.

Every recovery situation is definitely different, and you have to be ready for any of those situations that come up and know how you’re going to recover that data most efficiently. Restoring from tape is going to take you forever, whereas you can use the Veeam backup, or the CDP, or a snapshot—you can also do snapshots with it. So before you start a project or do an upgrade, you can take a snapshot. If something doesn’t go well, you just roll back from that snapshot, and you’re back to the way you were. Every scenario has its own challenges, and having these different ways of backing up and restoring is definitely a must.

Alan: I actually had to address this question when we first talked about DataCore, mirroring, adding CDP, and adding some of these other functions. The reality is if I mirror them, I’m taking up the same amount of space again. If I’m doing CDP, I’m now taking up some additional percentage of space. So the question before this went in initially—and this was once again prior to Jim taking over—was why do I need all these copies of my data? My response at the time was it’s the one thing the insurance company can’t give you back. They can give you more software and give you more hardware. They can give you another building. They can give you servers. They can give you the money to go get the contract for a new communications line. But the one thing they can’t give you is your data. You either have it, or you don’t.

The policy from then on is everything in this environment is mirrored. We have at least another physical copy of it in the environment. Then on top of that the decision is, is this CDP’d? Is this going to tape? Is this backed up? How important is this stuff? But the minimum is it’s mirrored.

Jim: Yeah, I can actually remember a story. This happened probably 5 or 6 years ago. We had one of our Exchange volumes—we actually run like 5 different databases for our Exchange, and just one of those databases got corrupted. We were able, after we determined what happened, to do a rollback on just that individual database volume, because we had them separated in DataCore so that each database has its own volume. We were able to roll back, take a snapshot of that, and connect the database back up to Exchange even before we actually had it back as a full copy, and were able to bring that database back up online while it was actually writing back to the full copy.

Once you do a snapshot, obviously there’s a process that it has to go through to become an actual live database and not just a snapshot. We were able to attach the snapshot to the Exchange server even before it actually did that and then let the process run in the background. From the point we figured out that it was corrupted, we were able to get the database back within about 5 minutes.

Alan: Also, we determined that we didn’t lose any data in doing that. From the time we’d made the decision that we’re going to pull this back—we’re going to roll back and blow away the corrupt version—that I think from start to finish was maybe 5 minutes.

Sander: So what you’re saying is you didn’t have to use your Veeam backup; rather, you used DataCore’s CDP?

Jim: Yeah, for that we used DataCore CDP, because we knew that if we were to roll back with another product, we would’ve lost data at that point, because we would’ve been rolling back to the night before. All that email that came in that morning before the corruption would’ve been lost.

Alan: Yeah, Veeam wasn’t granular enough.

Jim: Yeah, so this is going back to the same question of why have all these different backups? Well, every one has its own unique advantages, and the CDP was able to get us back. We were able to find out the time when the corruption started, roll back to just before that, and do this without losing any of that email that came in that morning, before the corruption hit.

Sander: Excellent. I love that. That’s really impressive, because most data center—or better yet, most IT admins would’ve immediately had to go to the backup from that morning, and like you said they would’ve lost several hours of data. So that’s great to hear. Another thing here quickly—you had mentioned a story about an electrician that made a mistake doing some rewiring. Can you tell us a little more about how that happened and whether DataCore kept you up or not?

Jim: Yeah, in that situation we had an electrician in there doing some wiring. We call him Sparky now. He actually went through and was adding some outlets, and he popped about half the breakers in our data center. It took down a pretty large part of it. Because of the way we have the DataCores separated in the data center, obviously on different breakers and in different parts of the data center, I believe one of them did go down, but the other one stayed up. Again, going back to what I said earlier, we’ve never had a situation where both DataCore servers were down at one time. So we were able to get the power back to it and bring it back up. Once we powered it back on, it resynchronized everything from the DataCore that stayed up, and we never lost any data or had any lasting effects from it.

Sander: That’s great. Now if you were to have been down for 5 or 6 hours, what would that have meant for your business? How would that have affected your business?

Jim: Well yeah, I think if the branches can’t write orders and can’t help the customers, there’s always a monetary cost involved—and not just monetary. You know, it’s a trust with your customers, too. They want to know when they come in there that they’re going to be able to get what they need. So there’s a trust involved in being up and running and being able to help your customers.

Sander: Excellent. I appreciate all the input there. I have one more question from the audience regarding the migration process. They asked how difficult was the migration from the old storage to DataCore?

Jim: The process that you go through when you’re migrating data from any storage that you have into the DataCore—you essentially serve the storage up to the DataCore, and then there’s a process of putting it in what’s called passthrough mode. That allows you to move that data into DataCore; you’re actually running the data through the DataCore, but it’s still on the existing storage. Then you’d mirror it over to another server, and then you’d actually fail it over from the old storage to the mirrored storage, kind of like we do when we shut down one server and bring up its mirror. Then you’d mirror it back to create the second copy of it. The systems never went down during the migration. We never had to shut anything off to do this migration. So it actually happened all while we were up and running.

Alan: It was all during the day.

Jim: It’s all during the day.

Sander: Wow, that is really impressive. It’s amazing the stories that we hear all the time from our customers. They’re nothing short of impressive. And hearing you mention all of this is just a reminder of the actual benefits—more than technology, but the benefits that we’re able to provide to our end users not only to make their business run better but make it easier for you to do your jobs. So thanks for all that feedback.

We’re going to move to the next slide. Quickly, if Alan can—or both of you can—give us a little more on each of these points. You now have a single platform that’s serving all your data services, whereas before you had multiple silos. And you’re getting that performance with sites 400 miles apart. Tell us a little more about that performance for your ERP. How has that improved?

Alan: One of the things that changed over time was that the ERP system used to be on its own dedicated hardware, and the storage was on DataCore. Previous to that it was on the EMC and the Compellent that we talked about. The database that sits underneath the ERP system is a Progress database, and one of the things we wanted to ensure is that if we migrate this thing to a virtual machine, it’s going to function appropriately.

The decision at some point was for RSD to engage Progress directly to come in and basically run a test against it. The results—and I’ve actually had this with another customer that used Progress in the same environment—in both cases the response was, how did you get these numbers? Because we don’t see numbers like this. What are you doing in order to make it this fast? So that was pretty much a green light to go ahead and take the system off dedicated physical, bare-metal hardware and virtualize it to where it sits today.

Sander: Alan, that feedback was from who? The software vendor?

Alan: Yeah, the software database vendor, which was Progress. They were curious as to how we were even achieving the numbers, because when they go and look at these things elsewhere, they haven’t seen numbers like this before. I’ve actually had that comment with another customer doing the same test directly with Progress, as well. That wasn’t us involved in it. That was them running whatever test they typically run, doing whatever load testing.

Sander: Do you happen to remember the numbers, maybe?

Alan: No, we’re talking years ago now.

Sander: That was quite a while ago. Excellent. Jim, we’ve talked about Veeam a little bit. Can we elaborate a little more on that? I understand that you did not have Veeam, but at some point you acquired Veeam and were doing your backups to what type of media?

Jim: Yeah, when we first got onto Veeam, we were using inexpensive storage and just running the backups to that. During this upgrade process, when we upgraded our production hardware, we had an extra server and some storage left over. We decided, let’s try running the Veeam backups to a DataCore platform instead of the JBOD-type storage, the Synology. The guy that does the Veeam backups for me knew the numbers pretty well and looked at how long it was taking to do the backups to the Synology storage. Once we started running it to the DataCore with all of its—

Alan: The big thing was the caching ability to ingest the data much faster. I think it was 3x when we looked at it.

Jim: Yeah, so we did that. He said it was about 3 times faster running the backups to that server compared to running it to the Synology.

Alan: And he had all the numbers. He did present it in some kind of graph to us, because he was working on it and ran different types of Veeam backups, and it was from different machines. He did sort of a big cross-section of different ways they would do production backups, and it was significantly faster. So the decision ultimately was just leave it on that separate DataCore node.

The hardware came from when we shipped we’ll say the old production equipment to the DR site. We didn’t ship all of the arrays, but we didn’t need that much space in the disaster recovery site. So we sort of reclaimed that and put that behind this node as essentially a backup target. Then the licensing—we just expanded the licensing a bit and included that as a node.

The licensing now is kind of easy, because it’s purely just how much space you have. That’s it. That’s the one discussion. So when we hit a high number—you know, we hit a high watermark that’s been decided internally that we need more space, then we just place an order for more licensing.

Sander: From a hardware perspective, do you need to go out and purchase another enclosure? How does that work?

Alan: On the production side, everything on the Violin side is deduped. So we still need to acquire DataCore licensing as we add more, but on the backend, because it’s deduped, the growth is extremely slow. So we don’t anticipate anything for a long time, on that side anyway. So if we do add anything, it’ll just be DataCore, the software-licensing aspect. With the backup side, I don’t think we’re going to need any more in there, but if we did, we would just acquire another simple JBOD-type array and plug it into the back. That’s probably how we’d go about it.

Jim: Yeah, if you’re running out of storage, and it’s traditional spinning-disk storage, you have to add hardware, obviously. The one thing with the DataCore is you can add—you can be running—we’re running D2700s. You can add a different storage array, and the DataCore doesn’t care what it is. You just present it to it, and it manages it. So it doesn’t matter if you’re running HP storage or EqualLogic storage. You can put a Compellent behind it. You can put an EMC behind it. You can put whatever you want behind it. It’s going to manage it all the same. With the caching that it has, even if it’s slow storage, it’s going to speed it up.

Alan: There is one thing, though, when you compare the current production environment to the previous production environment. The one important thing is that we’re not really getting away with anything, right? The new production environment with all the SSDs runs significantly faster than the old environment. We had one of the administrators here make the comment that everything is snappy, and this is someone who doesn’t give out that kind of praise lightly.

The DataCore looks good when you put it on good stuff, and if you put it on great stuff, it looks great. The current environment that they have now—like I said, we don’t anticipate expanding that for some time, simply because we’ve just got plenty of performance—enough space. If we were going to expand, it’d just be a space thing. It’s certainly not a performance consideration. We could load this up much, much more than what it’s doing today.

Sander: So what you’re saying is, depending on the use case, you want to invest in the proper hardware to meet those demands, right?

Alan: Exactly. For example, if you look at the backup portion of what we’re talking about, that’s obviously much lower-end hardware than the other stuff, but because of the DataCore software, it makes it look pretty good, especially when it’s essentially purpose-built and the RAID and everything is built in such a way that it’s just for backups. So that works great in that environment and is cost-effective.

The production stuff—we put some arguably better stuff in there. That means we essentially are never going to have a performance conversation as to why something is slow. That was pretty much what we wanted to get out of this: the only conversation we’re having is, do we have enough space? And that’s the way it was built.

Sander: Excellent. As far as your connectivity all these 9 years, what have you been using?

Jim: On DataCore it’s always been Fibre Channel. Yeah, we’re running on 8 gig.

Sander: From DataCore to your ESX host, what type of connectivity do you have?

Jim: That’s fiber, also.

Sander: Excellent. So yeah, you definitely have a good backbone there that gives you all the throughput that you need.

Alan: Well, part of the reason that we went with fiber is the idea that we’d never have to have a discussion about why is something up or down, because Fibre Channel just runs.

Sander: At least for me personally, that’d be my preference always, but we see that iSCSI works as well, right? Again, it all depends on what type of—

Alan: In the DataCore environment, we have presented out iSCSI volumes for certain things, like for example administrators’ desktops, where we pull back files and stuff like that. So we do do that every now and again, but from a production perspective, if you’re just talking about production connectivity, in this environment it’s all Fibre Channel. We use iSCSI when, hey, this thing just needs a little bit of extra space temporarily. Also, historically it was for some administrative-type tasks that we were doing to give administrators space.

Sander: I have a question here, and maybe Alan you can answer this one. What OS platform does the DataCore run in? Is it Linux, Windows, or proprietary?

Alan: It runs on Windows, and the reason for that is it’s the only commercially available real-time kernel. Linux doesn’t have a real-time kernel in it, so we don’t use that today. That may change in the future when one becomes available. That, and the fact that if you have a Windows server, there’s not a piece of storage on the planet that it won’t work with. So you’ve got ultimate flexibility on whatever you want to put behind it. We now have that as at least an option that we can consider when looking at expanding or changing any hardware aspect of it.

Sander: Thank you for that answer. Quickly we’ll move on to the next slide and talk about the results—just a quick summary here. You guys were able to gain the flexibility to keep using some of your existing storage on the DR site, accomplish everything that you wanted at your production site, have all the redundancy that you needed, and make your investment in storage arrays more cost efficient. Overall it seems like we were able to meet all the needs that were there. Is there anything you want to add to this, Alan?

Alan: No, I think at some point we were meeting needs that weren’t initially even thought to be achievable. For example, recovering data with zero data loss. Before we even turned on CDP, the expectation was that if you have corruption, there will be loss, because you have to go back to some other point in time, whether it be a snapshot, recovering from tape, or some other mechanism from the previous day’s backup. A lot of these things have now become requirements that weren’t initially there.

Sander: Great. There was one question about Veeam here. Jim, maybe you can touch on this. I know the answer, but I’ll let you answer this. We have 8 more minutes, so if you could take a few seconds to answer this—the question is, “What did you base your decision on to go with Veeam instead of other solutions?”

Jim: We actually used to use Backup Exec as well, so we had a mixed environment. But once we went to a 99% virtual environment, it was the flexibility of Veeam and some of the functionality that it has. For instance, we use Veeam to also replicate virtual machines to Peoria. So as we replicate data with DataCore over to Peoria, we also use Veeam to replicate the whole VM to Peoria, which allows us to have it in a virtual environment on an ESX server, ready to be powered on, just sitting there in a powered-off state. So Veeam gives us the ability to have those servers there. Now, it’s not real-time like DataCore is, but for the servers where the data isn’t changing a lot—print servers and that kind of stuff—we’ll use that.

The server is just sitting there, ready to be powered on. So that’s one of the functions that we really like with it. And it does a really good job of dedupe, also. So you can use a lot less storage on the backups, too. But yeah, it’s been a great product for us. We find that it was a lot better than using Backup Exec, which is what we used to use.

Sander: Great. Thanks for that answer, Jim. It seems it’s a great one-two combo, where Veeam is handling the VMs and DataCore is actually handling the massive terabytes of data. So they’re complementary to each other.

Jim: For sure, for sure. Yeah.

Sander: All right, excellent. Quickly I’m going to jump on really—you know, just elaborating a little more on what software-defined storage is. Practically, what we are doing is creating a layer of intelligence that allows you to enable all these data services that your traditional hardware probably does not have available natively. So now not only do you have all these data services, but they’re applicable to any physical storage that you have underneath DataCore. Now you can manage your whole storage from a single pane of glass. Not only do you get the management and services, but you also get that consolidation of all your arrays.

As you heard in Jim’s story, we create multiple redundant copies—whether it’s 2 copies or 3 copies, whether you want to have DR on a physical site or in the cloud—and we enable all of those options. We give you the flexibility, and these are proven capabilities. Jim talked about CDP and a few others. So it’s definitely something that I would encourage everyone here to evaluate.

We’ve been in the business for 20 years now with over 10,000 global customers, and our customers always stick with us—a 95% renewal rate. If you want to come to sunny Florida, we’re based out of Fort Lauderdale. We’d definitely love to talk to anyone that can benefit from what we do. We love to help. That’s what we do, and that’s why we’ve won the storage product of the year award 5 times. If you need more information, you can send us an email at info@datacore.com.

Actually, let me quickly go over this. We believe that storage has the ability to evolve, right? So software-defined storage enables that evolution to take place and gives you the flexibility to grow your environment from different places, different deployment models, and different architectures, and you’re still using the same software, which is DataCore software.

Again, you have the ability to download these presentations, so you can review these slides and see if this is something that can help you right now with your current situation. Here are some of the quotes from Jim. I want to say thank you to both of you—to Jim and to Alan—for being here today. I appreciate all of your input.

All right, now let’s go ahead and pick the winner. We still have 3 more minutes, and we are going to choose a winner. Just give me a few seconds. Let’s see who’s the lucky winner of the $200 Amazon gift card. That is going to Railene Schubert. You will be receiving an email with all of the information to redeem that gift card. So congratulations to you.

Again, if you guys are loving these stories, I encourage you to continue to join all these webinars. Our next one will be May 26 at 2:00 PM. We will have our partner from Tri Delta. We’re going to talk about YMCA of Long Island, and we’ll hear about their story. More than likely it’s going to be quite different from Jim’s story, but you need to be here if you want to see exactly what DataCore is doing for these organizations.

Once again, this is Sander with DataCore. Remember, you can download both the presentation and the case study. Just go to the attachment section, and you’ll be able to do that. Thanks for your time. Until next time, have a good day.
