On Demand Webcast
52 min

Combating the Threat of Ransomware, Disasters and Hardware Failures

Steve Firmes

Senior Solutions Architect, Alliances

Veeam

Brian Bashaw

Americas Field CTO

DataCore

As your data volume grows, so do the potential security threats it faces. Your business is more vulnerable than ever to ransomware, natural disasters, and hardware failures. In this 60-minute educational webinar, experts from Veeam and DataCore will discuss best practices for data protection, mitigating risk, and implementing solutions for data restoration and recovery.

Attendees will learn:

  • How threats to your data have evolved over time and where they are headed
  • Proven countermeasures ranging from detection to restoration and recovery
  • Incorporating on-premises object storage in your data protection strategy
  • The role of immutability, disaster recovery, and replication in mitigating the risk of these threats

Webcast Transcript

AJ: Hello everyone and welcome to today’s webinar on Combating the Threat of Ransomware, Disasters and Hardware Failures. We have some great speakers today. I’ll let them introduce themselves, but before that I just want to get some housekeeping items out of the way. There is a questions dialog if you have any questions throughout the presentation. Go ahead and ask them. We will try to answer them during the webinar. If not, we will get to them at the end, and if we don’t get to your question for some reason we will follow up via email. And with that, let’s go ahead and get started and have the presenters introduce themselves. So why don’t we start with you, Steve.

Steve Firmes: Well, good morning, good day, good afternoon everyone. I’m Steve Firmes. I am a senior solutions architect in our product management group, and I work with our alliances. DataCore is one of the most trusted alliance partners that I work with.

AJ: And Brian.

Brian Bashaw: All right. Thanks, AJ, Steve. Good morning everyone else. Good afternoon, whatever the case might be for you. I am Brian Bashaw. I am the Americas Field CTO. I work under the sales organization, going out and working with all of our customers and partners to try to introduce solutions that delight and please our customers.

AJ: All right. So let’s go ahead and get started. Brian, why don’t you kick things off.

Brian Bashaw: Yeah, will do. Thanks. When I think about the concept of data protection and the complexity of that entire conversation, the word of the day is obviously ransomware, and everybody wants to pivot directly to that. It’s important and we will as well, but I think it’s also important to think about the bigger picture and what that means for the care and feeding of the data that a lot of the people on the call today are tasked with maintaining. And it starts with data growth. I know we all like to throw around fun statistics around that.

I’ll spare you that today, but when we think about terabytes becoming petabytes, that’s a singular, endpoint kind of conversation in my mind. When I look at the bigger picture and think about the overall landscape of data being created in the world today, I often go back to a report that IDC publishes, for example, that breaks it down year by year with estimates of growth. The 2020 numbers — I haven’t seen the 2021 numbers, but the 2020 numbers for data created and/or replicated are in excess of six exabytes in one year.

So at that scale, obviously no single customer has datasets like that, but that is the problem statement: this volume and velocity of data coming at people from various applications, new applications all the time, different access methods all the time. There are just lots of different things you have to think about in the face of this amazing growth. And then apply that to the distributed workforce. Of course the pandemic amplified some of that, but a distributed workforce doesn’t mean you’re only influenced by a global event. I have done this presentation from the back of my motorcycle before, just dialing in and listening and asking someone to move the slides.

I’m not doing that today, but it’s possible, and it means you have a truly distributed workforce that expects whatever application they’re using, whether it’s Teams on a cellphone or GoToMeeting on a laptop, to have its data available in the same way and with the same reliability they’ve come to expect, regardless. That ties over to the last element, which I like to think of as an ever-swinging pendulum: whether the data lives on premises, in public cloud, in private cloud, or whatever that means to you, it really boils down to workload placement. Where does the workload that is valuable to my business need to live? Then you have to make sure the data is accessible in that location, and that location may change from day to day as well. So there are lots of things happening, and at the center of all of it is your data. Lots of stuff to think about in a proactive way.

And that carries over to some of the ways the data is vulnerable, so AJ, if you’ll hit the next slide for me please. With vulnerabilities, you certainly always start thinking about ransomware, malware, etcetera coming into the environment. But I don’t think it’s enough to just talk about that as one method of vulnerability. I think it’s important to also consider the different avenues that a vulnerability can go down. For example, unstructured datasets are by and large easier to access because, by their nature, they are shared in most cases, and that opens the unstructured side up to be a bit more of a target. Then you start thinking about backup containers. If I’m a bad actor who wants to exploit data that maybe people aren’t going to notice right away was impacted, then backup datasets or archived datasets start to become a more attractive target. So there are lots of things you’ve got to think about along that attack angle.

Now, a lot of those things can be protected against and prevented, but then you have the human element, the mistake that happens. I know one of our customers on the DataCore side left their previous vendor for exactly this reason: the vendor had tools in place to make sure data was being cared for in responsible ways, but it required manual setup and manual intervention, and the admin made a mistake. It wasn’t nefarious, it wasn’t an attack, it was just an oops, and it cost them dearly. So you’ve got to make sure you’re planning for that and making sure your people are well educated. When you think about all the different things coming at you, maintaining that level of education and fluency in the data protection you’re asked to execute is challenging.

And then, moving right along, we talk about hardware failures and availability. Again, I won’t harp too much on the hardware supply chain thing. It’s a thing; we see it all the time. But a lot of times what you fall into, especially in a data protection world, is an expectation that like-for-like hardware has to be acquired: I have to have the same drive types, the same drive capacities, or the same controller types, for example. There’s a lot of rigidity in some of those architectures.

So that can be challenging. And then failures. A lot of times, especially folks who live in my world, the erasure-coded object store world, like to talk about how quickly we can recover from failures and how we can endure multiple failures, and it’s easy to fall into the trap of talking only about how we can recover. What we don’t spend a lot of time talking about, and what I would encourage customers and partners to talk about, is what that experience is like when things are broken. I think it’s very important to make sure your environment performs and behaves just as well in the face of a failure as it does when everything is healthy. Hard drives fail; that’s what they’re born to do, outside of storing data. It’s a thing.

So you’ve got to make sure you’re prepared for it. All of that adds up to unpredictable costs and potentially loss of control. Unpredictable cost is, again, how are we going to get access to the things we need to grow the data, to recover from an attack, to train our people the right way. And then loss of control: maybe that means people are starting up their own little shadow IT operations. It might mean people spin up other locations for data that you didn’t plan on, so you’ve got to figure out how you’re going to protect that stuff as well. There are all kinds of paths that lead to data vulnerabilities in the face of all this growth. And all of it ultimately boils down to a very simple question: how are you going to protect that data, and how are you going to make sure it is always accessible? Accessible after a ransomware attack, accessible after a drive failure, accessible because some talking head wants to do a presentation from the back of his motorcycle?

So there are a lot of different ways to think about how that data needs to be presented. For our part on the DataCore side, we have an object storage solution we’d like to talk to you about called Swarm. Swarm is our scale-out solution for large-scale data protection, protecting against all of the different elements you see here, including ransomware protection, where we work in tight partnership with our friends over at Veeam to make sure those two products couple tightly together and provide end-to-end protection against any sort of bad actors.

Providing continuous healing. We can dive into that in another session at some point, but it’s not enough to just write data somewhere, fire and forget, and be done. When we rolled the system out, we made sure we employed continuous data protection and continuous health and healing in the system. What we mean by that is being able to look at the system and make sure the data you wrote is still the data you would read, and that this extends beyond what the application might do on its own. The system by itself can help do those things along the way.

When we think about secure access, there are a number of ways to access data, and a number of ways to make sure we’re handing out the right keys, providing encryption in flight, doing encryption at rest, and distributing the data to secure locations. There are a number of elements that come into that, and we want to make sure those are all paramount to the care and feeding of the dataset as well. Disaster recovery means a lot of different things to different people. If we unmuted everybody on here and asked about your disaster recovery plans, we would get a number of different answers. Some of those include another copy, maybe on another Swarm system. It might mean another object store system from some other vendor. It might mean a public cloud provider.

What we want to make sure we do, as a responsible participant in that, is ensure that not only can we receive the data and do all the things we already talked about, but also get out of the way. If your disaster recovery plan means you’re going to spin up services somewhere else, then by all means let us help you do that. Let us help you get the data where it needs to go to spin up those services. Again, that’s the workload placement I mentioned earlier. Let us help you do that and be a responsible participant in your DR plan.

High availability, what do you say about it, right? You want to make sure, like I said, that the system behaves as well when it’s broken as it did when it was healthy. We think ours does. And all of that comes together as a very viable and very valuable alternative to deploying a [unintelligible 00:10:25] solution or even a cloud-based solution for data protection. I like to think of it as part of a complete breakfast. So with that, I’m going to step back, shut up, and hand it over to Steve.

Steve Firmes: Oh thanks, Brian, now that I’ve found that mute button. Good job. As Brian explained all the complexities of today’s data environments, as well as how to not only protect the data but recover it, what we at Veeam have done is build our products on four tenets. Right off the bat, it’s simple. Right. Look again at that complexity of environments, all the different attack vectors and threats to the data. Being able to protect the data, recover it, and manage it needs to be simple. Right. Veeam is the simple solution to help do that, reduce all your costs, all that good stuff. You don’t need an army of people to manage Veeam. Straightforward. Second, flexible. Again, we’ve looked at that diagram with all the different types of data, different locations of the data, different storage the data is on. On prem, in the public cloud, hybrid cloud, all that stuff. Veeam can protect all of those, so pretty much whatever you have in your environment, we can protect it.

Next, reliability. Right. Having all this simplicity and flexibility isn’t really worth much if you can’t restore the data. Right. I’m always amazed: people are always concerned about their backup success rates, all that good stuff. Which is fine. It’s important. But it’s the restore, right. I always joke that not a lot of people get fired because they can’t do a backup, but if you can’t do a restore, you’ve got problems. Veeam is not only extremely reliable, but we have tools in place that can actually verify your backups, and we have our data labs where you can restore workloads and do some testing to make sure everything is validated. Almost like on-demand DR testing. And all of that combines into an extremely, extremely powerful solution. As Brian mentioned, with multiple exabytes of data being created out there, Veeam can protect anywhere from the smallest environment all the way up to the largest: multiple petabytes, hundreds of petabytes. All covered.

And on that, one of the threats Brian talked about was ransomware. Several years ago it was rare. Right. Now I challenge you to do a Google search, and I’m sure there’s probably a handful of attacks from the last week or so that have been documented. Not only has the number of these ransomware attacks gone through the roof, it’s the cost. Right. We’re seeing tens of millions of dollars being ransomed, and people paying it, or paying as much of it as they can. These ransom amounts are not going down, right. And what we’re seeing, too, is that these ransomware perpetrators are not necessarily the most honorable folks.

So what we see a lot of times is they’ll hit you, you pay the ransom, and then all of a sudden they say, oh, we need a second payment. Sometimes they will have a secondary infection hidden somewhere in the environment, so that once you pay for attack number one, down the line you get attack number two. As Brian mentioned, one of the newer trends is that they’re now targeting the backup infrastructure. By and large, they’ve learned over the years that, yeah, we can wipe out your data, but if you have a good backup copy you can recover really quickly. So now they’re targeting those backups, and that’s where, working with DataCore and Swarm, we can thwart that attack on the backup data. We can protect it and make everything immutable.

Again, as Brian mentioned, same thing: how prevalent these attacks are, and the different ways in, both the malicious actors and just the oopsies. There are so many ways for it to happen. We talked about patches, right. I think I have some application on my phone that pings me almost every day: hey, there’s a new version of iOS out there for your phone, your phone is vulnerable, get this patch. Same thing for computers. Alert fatigue. That’s a really good one, because in one of the larger attacks in recent years up here in the northeast, the organization’s reporting and alerting system actually worked. It caught the intrusion and sent out an alert. The problem was it truly was that needle in a haystack. Right. Or the boy who cried wolf. They had so many alerts and so many warnings of different things that they didn’t get to the real intrusion in time. Certainly password policies, and humans, right. They’re just clicking on those emails. A lot of organizations have gone to the extent of sending fake phishing emails to their own employees, and based on your reaction, right, whether you report it as phishing, ignore it and delete it, or make the mistake of clicking on it, it’s a way for an organization to measure how successful they are at getting the message across that you need to be careful, right. The security is really only as good as those who are protecting it.

Brian Bashaw: Yeah, I think it’s back over to me in the booth. So when we think about all those different places where things can go wrong with ransomware, it’s not only the potential opening for that attack, it’s the OK-what-now elements. We’ve been breached, and then how do we make sure it doesn’t happen again? All of those things come at significant cost, and there are a number of different ways to look at it. It’s interesting to look at things like the IBM report on the cost of a breach.

Right out of the gate, they say 23 percent of that cost could potentially be avoided just by addressing human error. So there’s a training cost associated with this. Sometimes that human error isn’t the data admin or the infrastructure admin. Sometimes the breach is caused by someone making a decision about where data should be placed who maybe didn’t know the ramifications of it, or didn’t know the legal requirements for that data, or the SLA associated with it. So another thing I like to point out when I’m talking with people about their data protection plan in general is: don’t make these decisions in a vacuum.

Make sure you’re reaching out to people in various business units in your organization to ensure that you’re meeting their SLAs, and that your decision on a cost-effective data container doesn’t turn around and result in penalties for missing deadlines, for example. So there are a lot of traits that come into this. Rewind that a tiny bit and you start thinking about the cost of just notifying. Like you say, what’s the needle in the haystack that says we’ve got an issue here? The cost of finding that alone starts to get in excess of $4.5 million for a single incident, and that doesn’t necessarily include the cost of the ransom itself.

The other interesting part — I mean, Steve touched on the idea that the bad guys might not necessarily play nice. They might have a secondary time bomb waiting to go off in your environment. They’re going to hit you up for a second payment. But it’s also possible they’re just not the best developers. You may pay the ransom, you may get the key to unlock your data, and you might find out there’s a bug in that key, in that tool. Now what are you going to do? You can’t call the ransomware helpline. You’re stuck. So there’s this extra cost of how you really get this data back, and then you bring that over to the statistics report.

Sorry, my southeast Texas education has trouble saying “statistics,” but in 2021 alone that report echoed a similar cost associated with a breach. It’s a consistent number we’re hearing. But then if you take away all the other noise, all the stuff we’ve just talked about, and just think about the impact to your business, the actual loss of revenue to a business and its ongoing money-making engine comes to over a million and a half dollars per incident. That is specifically why I think you should talk to people at the business level, not just the technical level. And that can totally be avoided with the right solution in place. So with that said, next slide. I think it goes back over to you, Steve.

Steve Firmes: Absolutely. Thanks, Brian. So as we talk about finding a needle in a haystack, or trying to find those hidden landmines that may have been left behind, Veeam has several features and functions within our product that will allow you to do that. First off is identify. What that means is it’s hard to protect your data if you don’t know what it is or where it is, right. So this is where we can go in, put tags on the data, and record where it is located, providing a holistic view of your data. That way, when you go to craft any sort of protection strategy, you have this global, holistic view of your environment so you can map it out appropriately.

So we talk about the [unintelligible 00:21:09] protector, right. Absolutely protecting your data without question. Right. We have our 3-2-1-1-0 rule, which is basically our blueprint for how to protect the data; we’ll go into that a little further on. And immutable storage: the idea that I do my backup, here are the backup files or backup objects, and they’re immutable. They can’t be changed. You can’t delete them, you can’t change them, you can’t move them, you can’t do anything. All you can do is use them for restore. So this thwarts any sort of attack from the ransomware folks, or even the malicious insider who tries to delete them, or the accident. Right. If someone accidentally went to delete a folder that happened to have your backup files in it, that couldn’t happen now. Then there is being proactive versus reactive, right, and that’s the detecting. Veeam has a lot of reports and alerts that can notify you.

So let’s say you had a machine and all of a sudden its CPU was spiking. One cause of that could be that someone is attacking it and going through and encrypting all the data. Through our orchestration, or some things you can do yourself, you can take that box offline or certainly send out an alert. We can also monitor the amount of data being backed up and report on deltas. Again, one of the tricks the ransomware folks like to do is take the data from one location and move it to another. Right. Veeam will catch that, because all of a sudden we’ll see, all right, here is a machine that has a 110 percent or 80 percent, pick your number, increase in the amount of data being stored.

On the other hand, you can have a machine that now all of a sudden stores 80 or 90 percent less than it used to. That could indicate, again, that the data was moved. So there are some great ways to catch that, as the sketch below illustrates. I also mentioned our SureBackup capability, which is where, once you do a backup, you can load it into a data lab, an isolated offline environment where you can bring up a workload and do some testing. You can also run some ransomware detection through it to see if the data is safe. It’s a great way to proactively sample some of your backups to see if there are any infections in them.
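To make the delta idea concrete, here is a minimal sketch of that kind of change-rate alerting. The 50 percent threshold and the sample job history are illustrative assumptions, not Veeam ONE’s actual logic:

```python
# Minimal sketch of delta-based anomaly detection, in the spirit of the
# change-rate alerts described above. Threshold and data are illustrative.

def flag_suspicious_deltas(history_gb, threshold_pct=50.0):
    """Return (job, pct_change) pairs whose latest backup size swings more
    than threshold_pct from the previous run, in either direction."""
    alerts = []
    for job, sizes in history_gb.items():
        if len(sizes) < 2 or sizes[-2] == 0:
            continue  # need two runs and a nonzero baseline to compare
        pct = (sizes[-1] - sizes[-2]) / sizes[-2] * 100.0
        if abs(pct) >= threshold_pct:
            alerts.append((job, pct))
    return alerts

# Example: file server grew 110% (possible mass encryption); app server
# shrank 80% (possible data moved or deleted); SQL box drifts normally.
history = {
    "fileserver-01": [400.0, 840.0],   # GB per nightly backup
    "appserver-02":  [200.0, 40.0],
    "sql-03":        [120.0, 123.0],
}
for job, pct in flag_suspicious_deltas(history):
    print(f"ALERT: {job} changed {pct:+.0f}% since last backup")
```

In practice such a check would feed off the backup server’s job statistics rather than a hard-coded dictionary, and the threshold would be tuned per workload.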

Once anything is detected, there’s our response. Right. That’s the orchestration and automation: being able to go through and automatically fix things, or enact some sort of reaction based on an alarm going off. In the identify phase, I talked about building that holistic view of the environment. We have our dynamic documentation, right: the ability to build out your DR plan so that as your environment grows and things change, the documentation automatically gets updated. And certainly last but not least is the ability to do the recovery. Right. You can back up as efficiently and securely as you want, but if you can’t restore, it’s not much use to you. So Veeam has tons of instant recovery capabilities. Right. You can instantly recover machines, disks, file systems, you name it. Databases, we can do it.

Then there’s Secure Restore. That’s for when, let’s say, I did have a ransomware attack and I’m doing my restores. With some of these ransomware attacks, they’ll place ticking time bombs in the environment, and it could be months until they actually go off. So if you were to detect a ransomware attack today and say, oh, I’ll go back to my last backup, well, that last backup may be infected. What you can do is, before you do a restoration, have the data checked for any infections, so you don’t restore those ticking time bombs back into your environment. Again, all of that is done through our Secure Restore capability.

I mentioned earlier our 3-2-1 rule, with the extra 1 and 0 tied on at the end. This is a nice blueprint: if you enact this rule, it’s a great way to ensure your data is protected. Right. We start off with three copies of your data. That is your production data, your initial backup, and at least a second backup copy, right? You can have 4-2-1 or 5-2-1, but the bare minimum would be 3-2-1. The important thing is to have two of those copies on different media. The reason for that is, if you were to place your primary copy and your secondary copy on the same type of storage, then if an attacker were able to access one device, maybe they could access the second, and both copies would be corrupt. But if you had one on traditional disk using [unintelligible 00:26:26] and your secondary copy on Swarm, the data would be in two different formats behind two different protocols. The same attack with the same tools wouldn’t necessarily work on both; the attackers would need a much more complicated, much more robust attack.

Again, as Brian alluded to, maybe they’re not the greatest coders, or maybe they’re just lazy and they go after one copy. You’ll have that second one. The one offsite copy: absolutely, having data in that secondary location means that if the first site gets wiped out, you have the second site available. The last 1 and 0 are really important, right. The 1 is having one copy offline, [unintelligible 00:27:07] immutable. We talk about how it’s great to have as many copies as you want, but if they can be deleted or effectively removed, they do you no good. So having at least one version of the data in some sort of immutable, air-gapped space is great. Now, if you can, do it on both: using SANsymphony, having a hard [unintelligible 00:27:32] on that, so that copy is locked and immutable, and then using object lock on Swarm for the second. Now you don’t have to worry about, well, if one copy is gone I have to use the second one. Both copies are fully protected, armored up, and can’t be attacked.

And again, last but not least, the 0: no errors. Right. We don’t want any errors when we do the recovery. I talked about several of the methods we have to validate that the backup jobs were successful and that the data in the repository is still valid; we can do all that recovery validation, and through Secure Restore [unintelligible 00:28:16]. A checklist-style sketch of the whole rule follows.
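As a quick illustration of the rule as a checklist, here is a minimal sketch that audits a hypothetical backup plan against 3-2-1-1-0. The plan structure is invented for the example and is not a Veeam configuration format:

```python
# Illustrative 3-2-1-1-0 checklist. The plan layout below is a made-up
# example, not how Veeam actually models repositories.

def check_3_2_1_1_0(copies, verified_restores_ok):
    """copies: list of dicts like {"media": "disk", "offsite": bool,
    "immutable_or_airgapped": bool}. Returns a list of unmet requirements."""
    gaps = []
    if len(copies) < 3:
        gaps.append("fewer than 3 copies of the data")
    if len({c["media"] for c in copies}) < 2:
        gaps.append("copies are not on 2 different media types")
    if not any(c["offsite"] for c in copies):
        gaps.append("no copy is offsite")
    if not any(c["immutable_or_airgapped"] for c in copies):
        gaps.append("no copy is immutable or air-gapped (the extra 1)")
    if not verified_restores_ok:
        gaps.append("restores not verified error-free (the 0)")
    return gaps

plan = [
    {"media": "disk",   "offsite": False, "immutable_or_airgapped": False},  # production
    {"media": "disk",   "offsite": False, "immutable_or_airgapped": False},  # primary backup
    {"media": "object", "offsite": True,  "immutable_or_airgapped": True},   # Swarm, object-locked
]
print(check_3_2_1_1_0(plan, verified_restores_ok=True) or "3-2-1-1-0 satisfied")
```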

Now, we’ve talked about all these features and functionalities; I just want to break them down real quick. Within Veeam, these are the components that deliver the features we talked about. The detection, that’s our Veeam ONE. Right. That’s all of those CPU deltas and storage deltas and all that stuff I talked about to help you detect. The backup strategy, again, that’s the whole identify step: build that holistic view and protect the data using our 3-2-1 rule and immutable backups, so that’s all the strategy you need to put into it. VBR, Veeam Backup & Replication, is the heart and soul. Obviously that’s the one that does the backups, right, and it does the restores. It also does things like our Data Integration API, which allows us to take a dataset, a backup, and mount it so that you can run something against it to verify it or stage it. Great, great tool that we have.

I talked about SureBackup. Storage integrations: using our Veeam Ready program to validate products, again like SANsymphony and Swarm, to make sure they perform up to our standards and are compatible. And last but not least, again, all those restores: we’ve got Secure Restore and the ability to use our data labs. So combined, the Veeam portfolio will protect you from start to finish and allow you to have a ransomware protection strategy so that you can not only recover your data but protect it, identify threats, and get ahead of an attack.

Brian Bashaw: All right. Excellent, Steve. Thanks for the lead-up there and a couple of other nuggets you dropped in for some of the product mentions that we’ll talk about now. So Steve mentioned Swarm, of course. We’ve talked a lot about that so far, but there is also another product from DataCore, and that’s SANsymphony. Again, I’ll kind of joke that all of this comes together as part of a nice complete breakfast. SANsymphony, for those who are unaware, is our SAN virtualization platform, which we can apply across an existing SAN infrastructure and aggregate for [unintelligible 00:30:23] capacity, efficiency, etcetera. A lot of the customers we have in that space are running some sort of hypervisor environment across it. It might just be acting as a native datastore; it might be an HCI-type environment.

There are a number of different ways that gets deployed in a customer environment, and a lot of those customers are protecting that data using Veeam. A lot of them will also put SANsymphony on the other side of that, on the backup and replication side, connecting it through Veeam. Let SANsymphony be that performance tier, the initial [unintelligible 00:30:56] hosting the immediate content; that is a Veeam Ready solution as well. And then from there, tie that over to the capacity tier that is Swarm as part of that overall lifecycle. And that capacity tier with Swarm is not only object-ready but object-immutability-ready as well. So know that you have that waiting for you.

Then there’s also Veeam Backup for Office 365, which has the unique ability to reach directly past that performance tier and right into the capacity engine that is Swarm, and take direct advantage of it without having to impact or influence the performance tier. So those are a couple of nice ways to get your hands on some of this Veeam-DataCore tandem goodness. But if you wouldn’t mind going to the next slide, I want to talk about some of the traits we employ that are nice connect-the-dots elements, if you will. We’ve used this word immutability, and for anybody familiar with object storage in general, there’s a phrase that’s been floating around — I’ve been working in object storage for a few years now — and a common claim is that objects are immutable by their nature. But when we say that, what we mean is that in object store land, when you make a change to a single object, you in fact rewrite that object in its entirety, which results in a new version, similar to the previous versions of a file you might be familiar with from a traditional file system. So if you don’t have versioning enabled on your objects, you might be conflating that behavior with this word immutability. When we talk about object immutability in this sense, we mean an extra level beyond just maintaining one current version where every write produces a new version.

When we turn on immutability here, it means there is no opportunity to create a new version, and therefore no opportunity to overwrite the data or delete it. And again, not to dive too far into the weeds, but when you delete an object, that delete is itself recorded as a version of the object until you intentionally go back and delete that deleted version. So in that sense, when we talk about immutability, there is an extra step we go through to ensure the data cannot be modified or deleted, deliberately or otherwise, until a certain time period expires. That can be applied through a few different elements, and that’s the level at which we integrate with Veeam.
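To make that distinction concrete, here is a minimal sketch against the S3-compatible API that object stores such as Swarm expose. The endpoint, credentials, bucket, and key are illustrative placeholders, not a real deployment:

```python
# Sketch of versioning vs. immutability at the S3 API level.
# Endpoint, credentials, and names are placeholders.
from datetime import datetime, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://swarm.example.internal",  # hypothetical on-prem URI
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# On a versioned bucket, DELETE does not destroy data; it stacks a
# "delete marker" version on top, exactly as described above.
s3.delete_object(Bucket="backups", Key="job1/restorepoint.vbk")

# Object lock is the stronger "immutability": no version of the object can
# be overwritten or deleted until the retention date passes.
s3.put_object_retention(
    Bucket="backups",
    Key="job1/restorepoint.vbk",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc),
    },
)
```

Versioning protects you from overwrites; the retention lock is what removes the ability to purge versions at all, which is the property backup data needs.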

Then we talk about activity logging and hashing. It’s important to remember that object storage is, by nature, well designed for large-scale environments. A lot of our customers have many, many petabytes of data stored in a Swarm environment. A lot of them are CSP-type customers, service provider customers. In that world, they have to have very detailed logging of what’s being accessed, and by whom, to ensure they can apply proper chargeback, for example. But all of that turns right back into an ability to tell where an intrusion came from. Who accessed what? What did they access? What did they do with it? The fact that we have all of these things built in for chargeback made it very easy to evolve them into providing that extra level of security. Steve mentioned the 3-2-1-1-0; I think that last 1 and 0 are still new to a lot of people, and the zero bit about no errors is all about making sure that what you read back for restore is what you wrote, that you’re getting what you expected. The activity logging and hashing we provide is part of that zero element as well.

Encryption in flight and at rest: I covered this earlier, but just to reiterate, we’re all about making sure that not only are we protecting the bytes we write, but that nobody can get a peek at them in flight. So send data to us over a TLS tunnel, do encryption at rest, manage those keys; make sure that data is encrypted. I was listening to somebody the other day, watched one of the sessions he was doing yesterday as a matter of fact, and one of the statements he made, and I’m going to borrow it from him, was: hey, if you’re going to go through all the trouble of turning on object immutability in your backup stream, please make sure you also do encryption. Don’t do one without the other; it seems kind of silly. It’s just built into our system. It’s a very easy thing to employ.

I mentioned automated replication earlier, but I didn’t mention it in the context of that 3-2-1-1-0, and I want to go back to that last 1: making sure you have an air-gapped copy. I know it’s a hard thing to do in this ever-connected, omnipresent world we live in, but we do provide the ability to do logical or even physical air gapping, by sending data completely offsite and being able to turn those systems off and then rehydrate the data later if we need to. There’s lots we could talk about there, too, but know that for that air-gapped portion, that last 1 in the 3-2-1-1-0 approach, Swarm provides some amazingly cool features to help you get to that destination.

Another thing we like to point out is that we don’t actually use a file system under the covers for storing the erasure-coded or replicated segments of an object in Swarm. The reason that’s important is that a lot of the exploits being used are targeted at that level. If there’s a known exploit against, say, [unintelligible 00:36:31] files, for example, and I leave that surface exposed, shame on me. So for us, we take that element away entirely. It leads to some interesting data protection conversations like the one we’re having now. It also leads to some storage efficiencies: we don’t have the overhead of putting erasure-coded blocks onto a file system. We just treat the native block device we’re storing the data on as exactly that. We don’t need to add that extra overhead. So it helps with storage efficiency as well as security.

And when we talk about zero administration here, it ties into this idea of flexibility and simplicity in how the system grows. A lot of you haven’t had your first on-prem object storage system yet. We hope Swarm might be your first object store, but regardless of what it is, you’re going to have to decide what that initial presence looks like, and then, as its purpose grows and evolves over time, make decisions about how you’re going to grow it, what that costs, data protection, all the stuff that goes into that. For us, we want to make that super easy. Zero administration for storage expansion: when we talk about that, it means we bring in new hardware and put it on the same network segment as the rest of the storage nodes that are protecting the data. The new nodes advertise themselves to a control plane, saying, hi, I’d like to help protect your data, can I play? That just makes the cluster bigger, and now all of a sudden we have a larger pool. It’s very analogous to adding more tapes into your tape library and inventorying them, only now we’re able to do that with storage in any combination you’d like. It’s super nice to be able to grow that way. Can you go to the next slide, please?

Yes, sorry. I talked a little bit about what data immutability means for us on the [unintelligible 00:38:17] side. Steve, do you want to take a second to walk through what that means through the infrastructure?

Steve Firmes: Sure. As you mentioned, immutability could mean many things, but basically it means you can’t alter the data. Within Veeam, we use what’s called compliance mode, which means the data can’t be overwritten. All right. So you can’t log in as a superuser and do something. Also, once you turn immutability on, you can’t turn it off as long as there’s any data being protected. Again, this helps with mistakes, and with the malicious insider: if someone got into Veeam and said, well, all right, the data is immutable, let me turn immutability off, they can’t, because there’s data in those buckets. It’s immutable. You can’t touch it. If you went out and tried to delete the backup, you’d get that nice little warning or error saying you can’t delete this backup because it’s protected. So what this shows is that not only does this protect you from ransomware attacks and all that stuff, it also helps with the accidental deletions, the oopsies I like to call them, because if someone went to delete backup job one and mistakenly deleted backup job two, and backup job two was immutable, that error is what they would get. So again, it’s not necessarily that your Veeam admin is going to go out and try to wreck your life, but this does prevent both malicious and accidental deletion of data.
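As a small illustration of that behavior at the storage layer, here is a hedged sketch of the refusal an S3-compatible object store returns for a locked version; the endpoint, credentials, and names are placeholders, and Veeam surfaces its own friendlier warning on top of this:

```python
# Illustrative only: the refused deletion at the S3 layer when
# compliance-mode object lock is in force. All names are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://swarm.example.internal",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
try:
    s3.delete_object(Bucket="backups", Key="job2/restorepoint.vbk",
                     VersionId="examplev3rs10n")
except ClientError as err:
    # In compliance mode, not even an administrator can remove the version
    # until its retention date passes.
    print("Delete refused:", err.response["Error"]["Code"])
```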

Brian Bashaw: Fantastic. I’m going to try to close out now with some of the features you get with Swarm. Like I said before on that zero-administration point about scaling: quite literally, we can install Swarm anywhere. I have it running with Veeam on my laptop. We can do it on bare metal. However you want to experience Swarm goodness, we can make that happen. The other cool thing is that the decision is not one-and-done. As your system grows and you want to evolve that infrastructure, it’s very easy to take whatever server you can get your hands on, pick your flavor, and whatever storage media you’d like to put behind it. It’s a bit of a choose-your-own-adventure, if you’d like. We’re here to help you; we don’t want that choose-your-own-adventure thing to feel like it comes with an immense amount of complexity. We make it very simple for people to evolve that infrastructure. We’ve even had instances where customers have taken hardware from previous bad decisions and used it as the endpoint for Swarm clusters. So it’s very easy to get into that infrastructure.

When we talk about costs being lower and more predictable, I think it’s about bringing in that element of control. Say you call up your favorite server vendor and ask them for an expansion on something, and they have longer lead times than you need; it may come down to the highest bidder, or you may just be beholden to the supply-and-demand rule. But you can call up another server vendor you’re just as friendly with, and that gives you extra control. Because we give you that flexibility, you are a little more in control of your own destiny in terms of what that scaling cost looks like.

And then I think it’s important to also think about what you want that recovery experience to be. We talked a little bit about this earlier, and in this way a lot of people have fallen into this trap of — I shouldn’t even call it a trap, but they’ve fallen into this mode of backing data up to a cloud, or doing their archive to a public cloud. I’m not going to tell you that’s wrong. If that’s part of your business continuance and disaster recovery plan, and you’ve thought it all out, then fantastic, you made the right decision. But if you haven’t thought it out, I encourage you to do so at some point and think about what that recovery experience looks like. What’s the cost of time to first byte? Say I’ve got to go recover it now, for whatever reason: what’s that cost? What’s the cost of holding it back? [Unintelligible 00:42:14] come out of that closet; they’re trying to act as that first element. Those kinds of things are important to think about. Again, it goes back to not just your own IT or infrastructure technology budget, but what it means to the greater business at large.

Then there’s making the system easier to manage and more accessible than tape. On the self-healing element of our systems, just to dive into that for a second: it’s a direct extension of things like that zero rule Steve mentioned. Not only do we have the logging, which is an important first part of it, but there’s a little friend that lives in our system. We call it the health processor. The health processor’s job in life requires no management; there’s no button to flip, no knobs and levers. It just is. Its job is to go through the system in its idle time: where there’s free time in the system, go do something. It’s that hey, if you’re standing around with nothing to do, grab a mop kind of mentality. The health processor walks the system in idle times, or any time it can get cycles, and compares the checksum we wrote when we first received the data against what it is now. Did we suffer some sort of bit rot? Maybe we have a bad sector on a drive that is just waiting for a read to tickle it and manifest an error. That’s this thing’s purpose. So from Steve talking about that last element, making sure there are no errors: our ability to extend that across the long tail and in perpetuity ties right back into ensuring you’ve got the appropriate protection for that 3-2-1-1-0.
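As an illustration of the general scrubbing technique described here, and not DataCore’s actual health processor code, a minimal sketch might look like this:

```python
# Minimal sketch of a background scrub: compare the checksum recorded at
# write time against a fresh read. Illustrates the general technique only.
import hashlib
from pathlib import Path

def scrub(store_dir, manifest):
    """manifest maps relative path -> sha256 hex recorded at ingest.
    Returns paths whose current contents no longer match (bit rot,
    bad sectors, tampering)."""
    damaged = []
    for rel_path, expected in manifest.items():
        data = (Path(store_dir) / rel_path).read_bytes()
        if hashlib.sha256(data).hexdigest() != expected:
            damaged.append(rel_path)  # would trigger repair from a replica
    return damaged
```

A real system would run this continuously at low priority and repair damaged segments from replicas or erasure-code parity rather than just reporting them.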

Next slide. I think it’s going back over to you, Steve.

Steve Firmes: Absolutely. For all the stuff we’ve been talking about, these links I’ve provided are great, great resources that will help you learn more about and implement all the features and functionality we’ve spoken about. Just great links to look at, and if you have any questions, certainly reach out to us.

Brian Bashaw: And likewise, we want to throw up some helpful resources as well from the DataCore side so links are here. No need for me to read them to you but if you’d like to learn a little bit more, hit those up and drop me a note in an email, give me a call, hit me up on LinkedIn, whatever your favorite method of communication is. We’ll do our best to get you pointed in the right direction. With that, Steve, thank you. AJ, how are we doing on time? Questions, comments? Career advice? What do we need to know?

AJ: Yeah, we have a few more minutes for questions, so if people want to submit questions, go ahead. I saw some more general questions about our partners out there getting a copy of the slides, and absolutely, we can follow up with that. So where do people get started? Let’s start with Veeam. Let’s say someone is interested in Veeam: where should they go to get started, beyond the resources you showed? Let’s say they’re interested in purchasing. Where should they go, and who should they contact?

Steve Firmes: Sure. You can go out to Veeam.com and download a demo of our software. Once you do that, you can send in your email and someone will get back to you. We have a little automated assistant that will pop up where you can request some information, and based on where you’re located, the appropriate Veeam sales folks will be more than happy to reach out to you.

AJ: And likewise, Brian. What should someone do if they’re interested in Swarm, and what’s the timeframe? Let’s say you already have Veeam set up. How long will it take to set up Swarm and do the integration piece?

Brian Bashaw: Yeah, absolutely. I think it’s important to call out, just for any of our partners on the line, that we are 100 percent channel-partner only. We don’t know how to transact business directly with customers; it’s not in our DNA. So I would say, if you’re a customer on the line and you’re looking to get your hands on some Swarm goodness, reach out to your favorite channel partner that you already work with for integrating solutions in your environment. If you don’t have one of those, or don’t know of one off the top of your head and you’re looking for a little assistance, then similar to Steve, right, come to our website. There’s a little guy that will pop up and ask if he can help you out, and you can go through that method. There’s also a section on the website that offers you the ability to get in touch with us, or a section that will point you to a channel partner that can help you get started as well. So there are a handful of ways to get introduced to the right teams and get engaged a little bit deeper.

In terms of timing to get the system set up, regardless of whether it’s a test you’re executing or a completely new production environment, from the moment the Swarm system is ready to serve data, creating a bucket is a very simple five-minute task, if that. Then it’s simply opening up the Veeam configuration menu, pointing it at the URI for the bucket you created on prem, providing the access key and secret key, and off you go. There’s not a whole lot of setup required, but if you want to learn a little more about that, we do have a deployment design document that can help you get a bit more information on the setup process.
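As a rough sketch of that bucket step, assuming an S3-compatible endpoint with placeholder credentials (note that object lock generally must be enabled when the bucket is created if you want immutable backups in it):

```python
# Sketch of the five-minute bucket step via the S3 API.
# Endpoint and credentials are placeholders, not a real deployment.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://swarm.example.internal",  # your on-prem URI
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
s3.create_bucket(Bucket="veeam-repo", ObjectLockEnabledForBucket=True)
# In Veeam Backup & Replication you would then add an S3-compatible
# repository pointing at this endpoint and bucket with the same key pair.
```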

AJ: And it looks like we’ve answered most of the questions; they were more general in nature. So as I always like to do with webinars, I like to end with words of wisdom. I’m going to throw you both a little curveball. You mentioned that there are lots of different ways out there for data to be exploited, a lot of different things you have to protect against. Sometimes it feels like you have to boil the ocean. So let’s give the viewers out there some words of wisdom on where they should start. It can feel very, very overwhelming, so from both of your perspectives, where should someone start from a data protection perspective? We’ll start with you, Steve, and end with Brian, and then we’ll conclude our webinar after that. What are your words of wisdom for someone who is just getting into the whole data protection analysis or project mode?

Steve Firmes: Sure. First thing, know your data. You can’t protect what you don’t know is out there. Prioritize the data. Right. Obviously you’d want to do the mission-critical stuff first. Then some sort of secondary offsite copy, for sure. So again, with Veeam, start off with the 3-2-1 rule. You can always add the other 1 and 0 later, but certainly use the 3-2-1 rule as your guideline and blueprint to get to a successful strategy.

AJ: Brian, same question to you. Words of wisdom, words of encouragement, words of consolation.

Brian Bashaw: Wow, those are three different ways to attack that one. When Steve was talking, I was trying to think about it. My first — I won’t say it’s words of wisdom, and you really threw me a curveball here, AJ. Shame on you. But one of the first things I worked on when I first started getting into data protection was a customer who was impacted by Hurricane Katrina. I’m in the Houston office, and this customer had an office in Houston and one in New Orleans. Their New Orleans office was in a very tall building right next door to the Superdome. And in getting started with their data protection project, we didn’t do something that I suggested earlier, and that was engaging the business units. Steve touched on that a little bit there when he was talking about identifying the top-tier datasets. I’d say that’s where you start.

And the reason I bring up the Katrina thing is because we started with this idea of, we’ll just replicate it all, it’ll be great. And then all of a sudden the storm was coming, and then we lost power and networks. All of that stuff was able to be taken care of; we had a great plan. We had thought everything out in terms of how to get data from one place to another and what recovery was going to look like. What we had not done was set that priority list: what does it mean to the business if we lose this piece of information? And the result of that lack of planning was that as soon as we were able, I literally got in my car and drove from Houston to New Orleans, had an escort to get into the building, and luckily we used some of this [unintelligible 00:50:50] that we had from our [unintelligible 00:50:54] to grab the actual physical drives out of the data center, bring them back to their DR facility, and get their data back. That’s a happy ending to the story, but I’d say: plan. Make sure you know what it means for your business to protect or lose each dataset.

AJ: Yeah, great words of wisdom. Know your data and prioritize that data. Great. So thank you so much to both of you for presenting. It was an excellent webinar, and we’re getting a lot of good feedback online, so thank you for that. And thank you to our viewers and listeners out there. Again, this will be posted as a recording, so you’ll be able to share it with others in your organization if you feel they should watch it. You can go to Veeam.com for more information on Veeam and to DataCore.com for more information on DataCore. And with that, I will conclude the presentation. Thank you so much, Brian and Steve.

Steve Firmes: Glad to help.

Brian Bashaw: Thanks, Steve, nice hanging out with you guys.

AJ: And that will conclude our webinar. Thank you. Bye everyone.
