On Demand Webcast
58 min

Data Resilience and Recovery with Object Storage

John Bell

Senior Technical Customer Engineer

DataCore

Adrian J. Herrera

Product Marketing Principal, Object Storage

DataCore

Human error, infrastructure failure, malware, ransomware attacks, and natural disasters can all lead to data deletion or corruption. The repercussions of not properly protecting data include lost opportunities, stiff fines and penalties, major business disruptions, or even, in extreme cases, loss of life.

Therefore, data backup, recovery, and protection solutions are routinely the largest budgetary expenses for data-driven organizations. But as data sets grow from hundreds of terabytes to petabytes, and data is created and consumed in distributed locations, how can you ensure proper data protection?

In today’s distributed world, it’s extremely important to understand why object storage provides the best defense against a wide range of issues and how to identify the right object storage solution for your organization. In this webinar, you will learn:

  • The best object storage software deployment approaches and how they help with malware and ransomware attacks
  • The difference between data replication and erasure coding on object storage and how they differ from RAID on NAS
  • The difference between proactive versus reactive data recovery techniques and why these matter at Petabyte scale
  • WORM, Immutability and Object Locking options
  • Manageability with millions or billions of files
  • How object storage fits into the 3-2-1 and 3-2-1-1 backup rules with an emphasis on helping viewers to understand the differences between object storage, cloud storage, and tape

Join John Bell, Senior Technical Customer Engineer for DataCore, and Adrian J. Herrera, Object Storage Product Marketing Principal for DataCore, as they cover these topics and how object storage fits into modern data protection processes.

Webcast Transcript

Adrian Herrera: Hello, everyone.  And welcome to our webinar on Data Resilience and Recovery with Object Storage.  My name is Adrian Herrera.  I will be the moderator for today.  The expert with me is John Bell, senior technical customer engineer at DataCore.

Hey, John, why don’t you introduce yourself to everyone?  I think they’ve heard me if they’ve been following the series, but you are new to the series, although you are an expert in this space.  So I think giving a little bit of your background is appropriate here.

John Bell: Sure.  Thanks, A.J.  I’ve been working on this one product for the better part of 10 years, basically in object storage for that whole period.  I’ve had the opportunity to watch it evolve from the basic core, the fundamental piece of being an object store, into other areas such as data analytics with [unintelligible, 00:01:07] and protocol personalities supporting APIs such as S3 and so on.  I’ve really had a great chance to help build this over time and I’m looking forward to presenting to you today.

Adrian Herrera: And John has been involved in a lot of different deployments.  He’s seen a lot of different scenarios.  He’s a wealth of knowledge, which is why it’s great to have him walk us through this content, Resilience and Recovery with Object Storage.

And specifically in this webinar what you’re going to learn is the resilience and recovery features and approaches of object storage in general, some that are unique to Swarm, and why you need to think differently at petabyte scale.  I think this conversation about backing up your primary data sets and backing up your secondary and tertiary data sets is becoming very relevant today just because of the scale of datasets, and we’ll discuss that.

We’re going to talk about how these approaches differ from RAID on traditional NAS.  We’ll go over the difference in overhead and the difference in the value that you get from choosing object storage over RAID and NAS.  We’ll talk about how object storage fits into the 3-2-1 backup rule.  And then we’ll talk about how object storage is part of a complete ransomware and malware protection strategy.

And of course at any time if you have any questions you can go ahead and ask them.  Just go ahead and type them up.  We’ll try to get to them as we present.  If for some reason we don’t get to them we’ll go ahead and follow up via email, or you can always email us directly at info@datacore.com.  We’ll put up this email address at the end of the webinar.

But we’d like to have an informal discussion, so please do feel free to ask questions.  We’d like to answer them as we present.

Anything after this, John, before we start?

John Bell: No.  Let’s go ahead and dive into it, A.J.

Adrian Herrera: All right.  So I think let’s – we’re setting the foundation, we’re setting the framework or the building blocks of protection on object storage.  So why don’t you walk everyone through the options.

John Bell: Certainly.  So there are a couple of ways to approach protecting your data within an object storage solution.  The two primary ones that you see are, of course, making copies of the data within the object store via replication, or you can split it out into data and parity sets using erasure coding.  And depending on the use case that you have and the type of data that you’re going to be instantiating into the cluster you may choose one approach over the other.  That said, you’re not necessarily locked into choosing one or the other.  Our solution is designed to accommodate both protection approaches when storing your data within the system.  You can have a combination of replicated or erasure-coded objects residing in the storage.

And of course how these things scale with 9s of availability, and their overhead, is going to be based on which one you choose.  One will have more efficiency than the other, one might be designed more for optimized access.  And we’ll go into that in more detail as we move through the presentation, but when scaling for 9s of availability here, also think in terms of 9s of durability for your data as well.  That’s what we’re trying to address here.

Adrian Herrera: Yeah.  I think it’s important to note the connection between object storage and cloud storage – that cloud storage is really enabled by object storage, and a lot of the different SLAs and differences in pricing that you see from the cloud storage service providers come from them actually deploying, or I should say employing, different variations of replication and erasure coding, John, and different types of media on the backend, but it’s usually called object storage?

John Bell: Most certainly.  And the key to that is bringing into your organization the scale-up/scale-out approach for handling your storage.  And that is the key feature that object storage tends to address.

Adrian Herrera: And I’m calling it out also just from a cost perspective, at petabyte scale, that with these two you’re really able to optimize for cost and for performance.  And why don’t you walk everyone through this?

John Bell: So there’s a matrix here where you want to make some decisions as to what type of protection scheme you want to use for the data that you’re going to be placing into the object store.  And we have a fairly simple slide diagram here that describes that decision-making process.  For example, at the upper left-hand portion of the diagram we have small clusters that are going to be holding small objects.  And typically that’s going to fall into the scheme of just using a very simple replication scheme to protect the data within the cluster.

As you get into petabyte or even exabyte scale and you start storing a mix of large objects, especially where you’re storing a very large number of large objects, for efficiency’s sake while providing the same availability and durability you want to move into using an erasure coding protection scheme for that data.  So basically you’re just walking from one side of the diagram to the other depending on the use case that you have and the nature of the data that you’re going to be placing into the storage.
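To make that decision matrix a little more concrete, here is a toy rule-of-thumb sketch in Python.  The thresholds are made-up placeholders for illustration only, not DataCore sizing guidance.

```python
# Toy version of the decision matrix described above: small clusters holding
# small objects lean toward replication; large clusters holding many large
# objects lean toward erasure coding. Thresholds are illustrative placeholders.

def suggest_scheme(cluster_tb: float, typical_object_mb: float) -> str:
    if cluster_tb < 100 and typical_object_mb < 1:
        return "replication (e.g. reps=2 or reps=3)"
    if cluster_tb >= 1000 or typical_object_mb >= 100:
        return "erasure coding (e.g. EC 5:2 or EC 10:2)"
    return "mix: replicate small/hot objects, erasure-code large ones"

print(suggest_scheme(50, 0.5))     # small cluster, small objects -> replication
print(suggest_scheme(5000, 500))   # petabyte scale, large objects -> erasure coding
```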

Adrian Herrera: And this is just a guideline, right, I mean, would you ever use replication for large objects?

John Bell: You most certainly can.  You may have situations where you don’t want to use erasure coding – for example, let’s say you have data that’s read very frequently.  Sometimes in those cases it’s better to use replication for that kind of data because that keeps you from having to do things like assemble a logical object by pulling together the different segments to present the logical stream to the client.  If it’s accessed frequently you may see that in certain cases, and this is the whole magic, A.J., behind elastic content protection within our solution: you can have different protection schemes running in the cluster at the same time.  And you can even move the data between different protection schemes depending on how it’s being used at any given time.

And this is the type of flexibility at petabyte scale that we try to provide in the solution.

Adrian Herrera: Yeah.  Absolutely.  I guess that’s just to say what you just said another way.  You can set policies that move data between replication and erasure coding throughout the life of the data, in addition to storing replicated and erasure-coded data on the same infrastructure.

John Bell: That’s the key.  And the other piece to that is that the system will track those policies and how you want to move data back and forth, automatically, and take care of that for you.

Adrian Herrera: Yeah.  So the point for everyone out there listening is you don’t need to just select one and then that’s it.  You can shift between replication and erasure coding, at least in Swarm you can.  You can shift from one to the other and optimize depending on the value of the data at that time.

John Bell: Right.

Adrian Herrera: And there’s the value of the data, which kind of references optimizing for cost.  Let’s talk about overhead versus RAID.  So you have replication, you have erasure coding.  In traditional NAS you can create copies, right?  In traditional NAS you can also use RAID.  And RAID is very similar to erasure coding.  Erasure coding is just more of a file-based or object-based kind of RAID using parity segments.  Why don’t you walk everyone through the overhead, what that actually means when you apply it to your volumes.

John Bell: Sure.  I mean, what we have here is what amounts to a reasonably simplified representation of the differences between the various approaches.  On the file system side of the world with traditional RAID, such as RAID 5, you have your overhead for the file system itself, the RAID overhead and whatnot, and after the file system has been formatted, of course, you have 10 percent [unintelligible, 00:09:39] and so on that are associated with that.  And ultimately you have a pool of usable storage that’s at a given amount.

Moving to the right we have the replication approach using two replicas on JBOD.  There’s no RAID overhead involved at all.  And, in fact, in our solution there’s no file system overhead involved at all either.  The volumes are used purely for storing the data.  There’s a little bit of overhead for some journaling that needs to take place, but otherwise what you see is what you get with replication.  You’re going to have two replicas, you’re going to have 50 percent overhead – or, I’m sorry, 100 percent overhead is what it amounts to.

But in the case of erasure coding you can start tuning down that overhead dramatically.  So what you have here on the far right is a 10:2 erasure coding scheme where you have 10 data segments and two parity segments laid out on just a bunch of disks.  And as you can see there the overhead for that is much lower.  The overhead constitutes basically the parity and then, of course, perhaps a very small portion of journaling that takes place for the volume.  And as a result your usable storage for laying out data in that fashion in the object store becomes much larger, and you have much better efficiency to work with there as a result.

What we’re assuming here, of course, is that we’re working with the exact same pool of hardware and we’re just adjusting the efficiencies based on the various approaches that could be used with that hardware.
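As a rough illustration of that comparison, the back-of-the-envelope calculation below contrasts usable capacity under RAID 5, two-way replication, and 10:2 erasure coding for the same pool of raw disk.  The 100 TB pool size and the 10 percent file-system loss are assumed round numbers, not figures taken from the slide.

```python
# Back-of-the-envelope usable-capacity comparison for the same raw pool.
RAW_TB = 100.0  # assumed raw disk pool across the cluster

def raid5_usable(raw_tb, disks_per_group=5, fs_overhead=0.10):
    """RAID 5 loses one disk per group to parity, plus file-system overhead."""
    after_parity = raw_tb * (disks_per_group - 1) / disks_per_group
    return after_parity * (1 - fs_overhead)

def replication_usable(raw_tb, replicas=2):
    """N replicas of every object: usable space is raw divided by replica count."""
    return raw_tb / replicas

def erasure_usable(raw_tb, k=10, p=2):
    """k data + p parity segments: usable space is raw * k / (k + p)."""
    return raw_tb * k / (k + p)

print(f"RAID 5 + file system : {raid5_usable(RAW_TB):5.1f} TB usable")
print(f"2x replication       : {replication_usable(RAW_TB):5.1f} TB usable")
print(f"EC 10:2              : {erasure_usable(RAW_TB):5.1f} TB usable")
```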

Adrian Herrera: Yeah.  Absolutely.  But when you apply scale, when you start to extrapolate out in the future, you know, three years, five years down the road you can really see the benefits of that savings.  It starts to compound over time.

John Bell: Not only savings, but there are implications here from a technical side with things like bitrot.  I mean, who wants to run RAID 5 forever?  In fact, RAID 5 is one of those technologies that, you know, as you scale up, like you say, A.J., just becomes unviable as a solution.  You can’t do it that way when you have multiple petabytes or even exabyte scale.  You can’t trust your data to be well protected.  There are so many failures potentially happening within that pool of equipment that you can’t really trust it to keep your data available and durable, whereas in an object store you don’t have those issues.  It’s self-healing and it heals quickly.  We’ll talk about that later as we go through the presentation.

Adrian Herrera: Yeah.  I think that’s a good segue to the next section here, replication – doing a deep dive and just showing the viewers how replication actually behaves and actually works.  So let’s start with this.  You can walk us through this example.

John Bell: Yeah, most certainly.  So this is a simple scenario where each of the objects stored in the cluster has two replicas associated with it.  We have three server chassis, and you have nine objects stored in the system, so as a result you basically have what we call 18 streams that are stored in the cluster, and they’re spread across these three chassis.  And I believe then we go through a failure.  Yup.  So in the case of having one of these chassis completely fail, before the recovery you basically have six objects, or six streams, per chassis.

And after recovery you need to have nine of those per chassis.  And what’s going to happen is that the system is going to pick up the fact that this chassis is no longer available.  The copies that were associated with that chassis need to be trued back up in the remainder of the cluster.  And it’s going to go through the process of creating the necessary replicas so that they’re trued back up to their protection scheme – their protection policy.

The neat thing about this is that all of the process is transparent to any client requests that are coming in.  The data is fully available and accessible as this recovery process takes place.
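Here is a minimal sketch of the stream arithmetic in this example: nine objects at reps=2 across three chassis, then one chassis fails and the survivors re-create the lost replicas between themselves.  It is purely illustrative of the counts being described, not of Swarm’s internals.

```python
# Replica math for the example: 9 objects, 2 replicas, 3 chassis.
objects, replicas, chassis = 9, 2, 3

streams_total = objects * replicas             # 18 streams in the cluster
per_chassis_before = streams_total // chassis  # 6 streams per chassis

lost_streams = per_chassis_before              # the failed chassis held 6 streams
surviving_chassis = chassis - 1
per_chassis_after = streams_total // surviving_chassis  # 9 streams each after recovery

print(f"before failure : {per_chassis_before} streams per chassis ({streams_total} total)")
print(f"streams lost   : {lost_streams}")
print(f"after recovery : {per_chassis_after} streams per chassis")
```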

Adrian Herrera: So theoretically, I mean, you’ve seen a lot of deployments out there.  I mean, how often does this happen on an average-size cluster, and how often does this happen on a large cluster?  And maybe you can just let the viewers know what an average-size cluster is, what a large-size cluster is, and how often this process occurs?

John Bell: Sure.  Just speaking from field experience, what we’ve considered to be an average-size cluster could be in the range of, say, hundreds of terabytes, for example.  That would be a typical cluster.  A large cluster, of course, is going to be into petabytes or multiple petabytes – single, double, or even triple digits of petabytes, easily.  And what we see in those situations is that in the case of the, quote, unquote, normal-sized clusters you typically don’t have a large number of simultaneous volume failures happening, but as clusters get larger the chances for an individual drive to fail go up within the cluster, and in very large clusters the chances for an individual chassis to go down can go up as well.  That’s just the way the availability calculations work.

And so it’s very important that you have the ability to recover from those situations quickly as you approach very large scale.  And that’s what our system is designed to do.  As you get into these situations where you have petabytes across, say, tens or even hundreds of chassis with hundreds or even thousands of volumes, for example, having two to three volumes go down at the same time is something that’s quite feasible, and you want to design your protection schemes around the fact that that can happen.  If you do that properly then it’s not going to be an issue as far as relying on commodity equipment, let’s say.

Hopefully that covers it, A.J.

Adrian Herrera: Yeah, that covers it.  I just wanted to get the concept of scale and the scales we’re talking about out so the viewers know exactly what we’re talking about.  We’re not talking about tens of terabytes.  We’re usually talking hundreds of terabytes to multiple petabytes.

John Bell: Yeah, most certainly.  And the solution is designed to scale up and down.  You can deploy instances that are very small, but we’re concerned about extremely large scale and being able to tackle the challenges that are represented there.

Adrian Herrera: Yeah.  And that kind of drives home this point, right, that when you’re only dealing with a small amount of hardware, a small amount of infrastructure, sometimes you can’t put the right protection method into place.  I mean, you can virtualize these and have these on one box, but it’s just driving home that point, correct?

John Bell: Right.  You’re basically seeing a scenario here where you have a small deployment.  Perhaps it’s a promotion chain of some kind.  You have development, but then you have test.  You move up the scale with UAT and then you [unintelligible, 00:16:59] for people that like to do things that way, but the key here is what happens in this type of situation where you just have two chassis and you’re using the same criteria, where reps is equal to two for the objects that are stored in the cluster.  So let’s kick off the failure mode here and pop through it, so one of the chassis goes down.

The essential thing to bear in mind here is that there’s no place to recover the content to, to true it back up to its policy or protection scheme, when this happens.  So you’re basically running in what amounts to a degraded mode.  Now, as long as nothing happens to the first chassis you’re OK.  The clients can come in.  They can read data and they can even write data, potentially, given certain settings that deviate from what we recommend for production.  But you definitely can read the data; there’s just nowhere to recover to, so you have to wait for that second chassis to be repaired or to come back online before things can be trued back up in the cluster again.

Adrian Herrera: Yeah.  You’re correct.  Well, we’re going to walk through an erasure coding example.  I think this is probably the protection method that most people don’t really understand, so let’s start with a description and definition of what erasure coding actually is.

John Bell: Yeah.  Certainly.  So when you’re using erasure coding to protect your content, the object storage is automatically going to take what you wrote into the system and it’s going to create and distribute segments evenly across all the nodes that reside within the storage cluster – or, if you have sub-clusters configured, where you have groups of chassis that are better handled in a given fashion – let’s say you want to have protection per rack or protection per power distribution – you would use the sub-cluster approach, and that’s what we’re referencing there.  But ultimately it’s going to take those segments and it’s going to spread them across those divisions that have been defined for the cluster.

And the thing to bear in mind here is that, and we’ll do a deeper dive into what K and P mean, but if you have P parity segments, then as long as you have P or fewer segments on any one of the nodes, the loss of that node can be tolerated.

Adrian Herrera: And let’s do a deeper dive into K and P.

John Bell: Right.  Exactly.  So we’re talking about K and P.  K, of course, is data.  P is parity.  P is going to be equivalent to the number of simultaneous volume losses that you can have without data loss, and it must be greater than one.  Now if you’re using sub-clusters – or in larger clusters, when you approach very large scales, you may decide to split things up – then it’s the number of simultaneous nodes you can tolerate failing, or the number of simultaneous sub-clusters you can afford to have down.  But ultimately what we’re referencing here is the number of divisions in the cluster you can have unavailable and still access your data.

What you have control over is the space distribution, and ultimately your efficiency is defined by the equation that we show you there, which is (K + P) divided by K.  That is your data footprint multiplier for the erasure coding scheme that you have chosen.

It is worth noting that larger K and P numbers are going to require more resources in the form of CPU, memory, and ultimately, like we just said, the number of nodes that are needed to make sure that you can have that data available when requests come in to read it.

And we have a simple table over here to the right, A.J., where we walk through the comparisons between typical protection schemes that are encountered in the field.  You have your usual reps equals 2, reps equals 3, and then typically what people use for erasure coding is something along the lines of – the ones we see the most, A.J., are EC 5:2 or EC 6:3.  And the comparisons there for the footprint multiplier, as we said previously: overhead for reps equals 2 is 100 percent, overhead for reps equals 3 is 200 percent.  The raw storage that’s needed to store a terabyte of data becomes two terabytes for reps equals 2, three terabytes for reps equals 3, et cetera.
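As a quick check of those footprint numbers, the snippet below applies the (K + P) / K multiplier, modeling replication as one data copy plus extra replica copies.  This is just the arithmetic from the slide, nothing Swarm-specific.

```python
# Footprint multiplier (K + P) / K for the schemes in the comparison table.
def footprint_multiplier(k, p):
    return (k + p) / k

schemes = {
    "reps=2": (1, 1),   # one data copy plus one extra replica
    "reps=3": (1, 2),   # one data copy plus two extra replicas
    "EC 5:2": (5, 2),
    "EC 6:3": (6, 3),
}

for name, (k, p) in schemes.items():
    m = footprint_multiplier(k, p)
    print(f"{name:7s} multiplier {m:.1f}x, overhead {100 * (m - 1):.0f}%, "
          f"raw storage per TB stored: {m:.1f} TB")
```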

And then, of course, in the last column we have the simultaneous volume loss tolerance for those protection schemes.  When you dive down into erasure coding you see the magic start to occur.  So let’s talk about EC 5:2 because that’s a common one.  We talk about that all the time.  And as we go into this scenario you get your 40 percent overhead and so on.  And let’s talk about what happens when these fail.

So in our example here we have an EC 5:2 scenario.  You have three chassis of servers.  Each of them has three volumes, and the coding scheme used for the data is 5:2 – 5 data, 2 parity.  This creates seven segments total that are distributed across the cluster as evenly as possible.  In this case, as evenly as possible means two segments on one, two segments on the other, and three on the remainder.  And two of those are parity segments, which means you can tolerate the loss of any two of those segments and still read the data in the system.

So in this case you can lose any two volumes or you can lose an entire chassis.  You can lose chassis 1 or 2, but you can’t lose chassis 3 because, again, it has three segments as part of the remainder of the distribution.  And the other key takeaway here is that if one of these chassis were to fail, recovery will fail because there are not enough remaining resources in the cluster to re-instantiate the segments that are needed to true everything up for the objects that have been written in that fashion.

[Pause]

John Bell: And the key thing, A.J., of course, is you can still read the data.  It’s just going to be in degraded mode, right?  You can’t afford the loss of another chassis and you certainly can’t afford the loss of chassis 3 in this distribution.
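A quick sanity check of this 5:2 example can be written out in a few lines: with segments placed [2, 2, 3] across three chassis of three volumes each, a chassis can only be lost if it holds at most P segments, and a full rebuild needs at least K + P surviving volumes to land segments on.  This is illustrative reasoning about the slide, not DataCore’s actual placement algorithm.

```python
# EC 5:2 example: 7 segments across 3 chassis, 3 volumes per chassis,
# at most one segment per volume.
K, P = 5, 2
volumes_per_chassis = 3
placement = [2, 2, 3]   # segments held by chassis 1, 2, 3 as in the slide

for i, segs in enumerate(placement, start=1):
    tolerated = segs <= P  # losing the chassis loses 'segs' segments at once
    print(f"chassis {i}: holds {segs} segments -> loss "
          f"{'tolerated' if tolerated else 'NOT tolerated'}")

remaining_volumes = (len(placement) - 1) * volumes_per_chassis
print(f"volumes left after losing one chassis: {remaining_volumes}, "
      f"segments to rebuild: {K + P} -> recovery "
      f"{'possible' if remaining_volumes >= K + P else 'fails'}")
```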

Adrian Herrera: And we have the same scenario, but with 3:2.

John Bell: Yeah.  And that was tuned to accommodate the resources that we have available in the cluster and the associated failure modes that we may be concerned with.  So in this case we have 3 data, 2 parity.  That’s five segments that are distributed, quote, unquote, evenly – as best as possible – across the cluster, with two on one server, two on the other, and one on the remainder.  So the loss column here is: you can still access this object if you lose any two volumes, or you can lose a chassis.  And furthermore, recovery will succeed because there are enough resources remaining in the cluster to bring back the total of 3 data and 2 parity, to have all of that recreated within the cluster while one of these chassis is down.

Adrian Herrera: And as far as erasure coding sizing, how do you work with customers to determine the right erasure coding scheme for them?

John Bell: Yeah, there are a lot of key inputs that we try to look for, if we can get them, A.J., when trying to design the best erasure coding protection scheme or policy for their deployment.  First and foremost, of course, as I mentioned earlier, we try to take a look at the file size.  Is it in that range where erasure coding makes sense for their use case?  If the answer to that is yes, then we move on to what kind of failure modes are you willing to tolerate, what kind of availability and durability do you need to have for the data that resides in the system.  And that’s going to give us some insight as to what kind of protection scheme they want to use.  And of course we want to know what kind of efficiency they’re targeting.

And we have to take all those inputs and we basically have to iterate to what amounts to the most optimized approach for what they’re trying to do, for what they plan to deploy.  But those are the inputs we look for typically.

Adrian Herrera: And I mean, this does look rather complex, but this is all policy based, correct?  You kind of pick an erasure coding scheme or replication scheme and the life of the data, and then you just set the policy and the system takes care of it, correct?  It’s not manual –

John Bell: That’s right.

Adrian Herrera: – continuous manual management of protection policies?

John Bell: That’s right.  And policy isn’t just locked in at the global cluster level.  You can have these policies set in a different fashion down at the domain level, for example.  And depending on the nature of the data in the domain, different entities or applications or even business units that may be using it may have different schemes associated with the data that they’re putting in the system, depending on their requirements.  And again, that’s a flexibility that we bring to the table, and it’s certainly that automated flexibility that we bring to the table to assist with scaling the solution to a very large pool of data.

Adrian Herrera: And we do have a question, John.  It says, “If I’m using EC 5:2 with seven chassis do I still have a single point of failure?”

John Bell: A single point of failure?  No.  And, in fact, you should be able to recover if one chassis goes down.

Adrian Herrera: All right.  Well, that answers that question.  And so we covered how erasure coding works, how replication works.  Now let’s talk about proactive issue detection and automated recovery.  So now we’ve talked about how you can actually protect the data, the methods that you can choose.  Now, so what happens?  Why is object storage so good at protecting petabyte-scale environments?

John Bell: Well, a well-designed object storage solution is going to have what we outline here.  It’s going to have a health processing component built into its DNA, basically, because like we say here, at petabyte scale you are going to run into some kind of issue, with the chance of something failing within the overall cluster at various hardware component levels.  And you have to be constantly checking for that, to be able to react to those failures immediately and make sure that the data is protected and so on.  But even then, in the background, you want to constantly be walking through the system.  You’re looking for things like bitrot, right, A.J., you know, items of that nature, making sure that your data has not degraded over time.  And that’s what this process is doing here.

And between looking for bitrot and making sure that you recover and true everything up to be fully protected per its policy, that’s the nature of the solution that you have to have when you scale to this level of storage.

Adrian Herrera: And let’s cover what’s happening when a drive fails.  I think we talked about the data replication, but can you walk everyone through this slide?

John Bell: Yeah.  So what happens when a drive fails?  In Swarm, the loss of a drive where data may be left underprotected, as in the cases we described previously, is considered what is, quote, unquote, an emergency.  So what you want to see happen is that best-in-breed object storage is going to make every effort to immediately kick off what’s necessary to restore that data to full protection – via full replication of all the objects that are in the system, or making sure that all the EC components, the data and parity segments, are fully recreated within the system – so that it goes back to its full protection.

Ultimately, Swarm’s goal is to minimize the window of time that data is unprotected, because typically what we see happen in the field, A.J., is that when it rains it pours.  You may have a situation where you have a certain number of servers or drives that were made in the same batch.  So one server with one set of drives starts to encounter issues and starts to degrade, and you may have a situation where not long after that another system in the cluster starts suffering from that, too.  So you want that window to be really tiny so that you don’t have so many simultaneous failures piling up on you at once that things become unrecoverable.

Adrian Herrera: Absolutely.  And then how about the mechanics of recovery versus NAS?  We talked about NAS and the overhead, but what exactly is the difference when an issue is encountered?

John Bell: The main difference here is that in the case of object storage, especially a well-architected solution, all of the nodes in the cluster are going to participate in this recovery process.  Like I mentioned previously, we’re going to see that notification that, oh, this node has failed or this chassis is no longer available in the cluster.  And they’re all going to pile in and do whatever it takes to true up everything that was associated with that hardware component in the cluster.  And as a result the recovery performance is going to increase as cluster size increases.  And we’ve got another slide that goes through scenarios like that.  Whereas with RAID on NAS or SAN you may have specialized components, such as SSDs for journaling, or you just need to have SSDs, period, so that the rebuild of the array can take place as quickly as possible before yet another failure happens in that array.

With a lot of these solutions you may find yourself bottlenecked through a single controller, for example.  You may find yourself in a situation where that single controller fails for whatever reason and takes the whole thing down.  You don’t have that problem in an object storage solution, because when you scale up and scale out with servers, all the CPUs associated with all of the servers, all the memory associated with all the servers, all the network interface cards associated with the servers, all the disk controller cards associated with them – all of those are participating in recovery at the same time.

Adrian Herrera: Yeah.  And that’s what this table shows, correct, as you scale the effect of the participation.  You want to walk everyone through this?

John Bell: Sure.  The example we have here is we have 1,000 objects that are stored in the cluster.  The protection scheme is our usual reps equals 2.  And we walk through clusters of various sizes.  In the column there we have 3, 10, and 100 nodes in a cluster.  And we show the distribution of the objects in that cluster based on the node count.  So in the case of three nodes your distribution is roughly 667 streams per node – 2,000 divided by three – and as a result, if any one of those nodes fails, each of the remaining nodes is going to have to recover roughly 333 of those, and there’s only one other node that can receive those recovered objects.

So as you can imagine that can overload things pretty quickly in a small cluster like that as the object count goes up.  As you move further through to 10 and ultimately the 100-node scenario, you notice that the distribution per node goes way down.  And as a result, what needs to be recovered if a component fails within the cluster becomes much smaller, and the recovery is much faster because you have 98 other nodes in the cluster that can participate in that recovery process and that can take on the data that was associated with that failed component.
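The recovery-parallelism math behind that table can be sketched in a few lines: 1,000 objects at reps=2 is 2,000 streams, and as the node count grows each node holds fewer of them, so a single node failure leaves less to recover and more peers to share the work.  Again, this is just the arithmetic being described, not a simulation of Swarm itself.

```python
# Rough recovery-parallelism sketch: 1,000 objects, reps=2, varying node counts.
objects, reps = 1_000, 2
streams = objects * reps

for nodes in (3, 10, 100):
    per_node = streams / nodes        # streams held by the node that fails
    helpers = nodes - 1               # surviving nodes that can participate
    per_helper = per_node / helpers   # rough share of recovery work per survivor
    print(f"{nodes:3d} nodes: ~{per_node:6.1f} streams per node, "
          f"~{per_helper:6.1f} streams to recover per surviving node")
```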

Adrian Herrera: So I want the viewers to take a mental snapshot here, and then if they can go back in memory – maybe I can help them out – we go back and take a look at the erasure coding efficiency for 10:2 and start to combine those two tables.  You’re seeing that as the cluster grows, not only can you use erasure coding, which is a lot more efficient from an overhead perspective, but now with the number of nodes in the system you’re also getting the added value of being able to recover faster.  So I think by combining those two you see, as the cluster grows, why object storage is such a great method for protecting petabyte-scale environments.

Would you add anything else to that, John?

John Bell: Yeah.  And that all goes back to the fact that you have distribution working in your favor regardless of replication or erasure coding.  And certainly so with erasure coding, because there are more segments associated with an object stored in the system, and you want that recovery to be fast, you want everything to participate quickly to rebuild those segments, so it’s definitely in your favor to take advantage of the distribution characteristics of a larger cluster.

Adrian Herrera: Yeah.  Absolutely.  So we talked about the cluster, but now let’s talk about replicating data: data to the cloud, data to another site for disaster recovery purposes.  So how easy is that to do with object storage, and why is it easy to do with object storage?

John Bell: It’s actually fairly easy to do with object storage, and it’s typically done through what we call a feed mechanism, A.J.  And there are two types of feeds that we can deal with for this requirement.  There’s the replication feed that allows you to replicate data from one object storage cluster to another.  We also have the S3 feed mechanism that allows us to copy or replicate that data up to, for example, Amazon S3, right?  And you can have it available in the cloud as a result, for disaster recovery purposes.  And what we’re showing here is just the UI console for what it would look like if you were looking at this in our UI – you know, what the statistics for, for example, a feed would look like.

Adrian Herrera: Yeah.  And here you can do it to a different Swarm site, a different site, or you can also do it to S3 the service, correct, or any S3-enabled device?

John Bell: Yes.  Basically any S3 target.  In fact, one of the ways we’ve tested this, A.J., is we tested it against another Swarm cluster that was providing an S3 endpoint.  And we were able to use our own S3 endpoint to do it this way as well.  So it’s kind of neat to watch it work.  But, yeah, any viable S3 endpoint – we have the capability to use this method as well.
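For readers who want to picture what "any viable S3 endpoint" means in practice, here is a generic sketch using boto3 pointed at a custom endpoint URL.  It is not the Swarm feed mechanism itself, and the endpoint, bucket, credentials, and object names are placeholders.

```python
# Generic illustration: any S3-compatible endpoint can be addressed the same way.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-target.com",  # hypothetical S3-compatible target
    aws_access_key_id="ACCESS_KEY",                # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Push a local copy of an object to a disaster-recovery bucket on that target.
s3.upload_file("backup/object-0001.bin", "dr-bucket", "object-0001.bin")
```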

Adrian Herrera: Yeah.  And some that we see out there in the field, and we point them out: it’s not just Amazon S3 or any one of the flavors of Glacier or the services that they have.  There are also services like Wasabi or FUJIFILM Object Archive, which is a tape archive, so there are a lot of different options from a replication feed perspective.  But how complex can you get with those replication topologies, John?  I mean, this is just an example here.

John Bell: Right.  The replication topologies we support, for lack of a better way to describe it, are an M-to-N mapping.  We can do it in that fashion.  And you see examples of that here in these diagrams.  So for example, on the left-hand side we have three clusters, A, B, C.  They are all replicating data to an on-prem object store at the mothership, as we like to call it, or maybe the central office.  Cluster A also has the added requirement that I want to push that out to a cloud service provider S3 endpoint such as Wasabi or AWS, or perhaps I want to push it out to a tape endpoint that provides an S3 interface, such as FUJIFILM.  They want to protect it further in that fashion.

And you have the capability of setting it up in that way and having them all run simultaneously once those feeds are defined.  It’ll replicate that data automatically to the targets that you designated in the feed definition.

We also have the capability to do an initial replication, let’s say, to another cluster and then perhaps create a temporary, quote, unquote, air gap – shut down access to that cluster, or even just pause that cluster feed, or shut down the access in some other way.  And at that point you have a logical, or perhaps even a physical, air gap for the data that was replicated from that feed to that cluster at a given point in time.  And that comes in really handy if you want to protect against things like ransomware attacks and so on, where you can make absolutely sure that there’s no way for anybody to come in and reach those objects, because usually that’s what happens, right – you have bad actors holding your data hostage – and this prevents that from happening and allows you to recover from that one cluster that’s been gapped away from everybody else.

Adrian Herrera: Yeah, and that’s the important part.  I think we’re seeing a lot of momentum growing as far as the need to protect very large-scale datasets from ransomware attacks and just malicious attacks in general.  And there are a number of other features that are inherent to object storage – some unique to Swarm, some available in most object storage solutions – but why don’t you walk everyone through each of these and how they can protect data at scale.

John Bell: Sure.  Walking left to right: from an authorization perspective we have the ability to provide authorization, authentication, accounting and auditing, right, quadruple A, typically what you see as an industry standard in the field.  So you can set things up in such a way that only certain users or groups are allowed to perform certain operations against a given set of objects that reside in the cluster.

Moving further through that, of course, we support the capability to have encryption at rest for the volumes themselves.  Let’s say you have a volume that goes offline, but you want to make absolutely sure that when you RMA that volume, or whatever, nobody can see the data that’s on it.  We do support full volume encryption at rest for the drives that reside within the Swarm solution, if you choose to use that.  And as long as they don’t have access to the keys they will not be able to read the data.  That’s also important.

And then, of course, there’s support for things like being able to send things out over TLS with encryption as well – so encryption in transit.  We provide the capability, of course, for immutability, or WORM, as the case may be, or perhaps even, you know, being able to lock the object if necessary.  That’s where immutability comes into play.  So you can ensure that once you’ve written something, however many times you read it, you have assurances that that data has not been tampered with.

There’s hashing, for example, for authentication, for passwords.  Let’s say you need to hash a password for administrative access to the system.  We don’t necessarily do that in clear text.  You can hash those passwords as well so that they’re not in the clear.

And then finally we have versioning.  We have the capability to create multiple versions of objects in the cluster as they may change over time.  You may change the metadata that’s associated with them while keeping the body intact, or you may even change the entire object itself – you may change the body of it as well – and we provide the capability to have different versions of that object reside within the system, referenced with an API call, so that you can specify which version you want to work with.
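As a hedged illustration of that versioning idea, the snippet below uses standard S3-style calls to enable versioning on a bucket, list an object's versions, and fetch one specific version by ID.  The endpoint, bucket, and key are hypothetical, and this shows generic S3 usage rather than Swarm's exact API.

```python
# Generic S3-style versioning: enable it, list versions, read an older version.
import boto3

s3 = boto3.client("s3", endpoint_url="https://swarm.example.com")  # hypothetical endpoint

s3.put_bucket_versioning(
    Bucket="projects",
    VersioningConfiguration={"Status": "Enabled"},
)

# List the stored versions of one object and fetch the oldest one by VersionId.
versions = s3.list_object_versions(Bucket="projects", Prefix="report.docx")
oldest_id = versions["Versions"][-1]["VersionId"]
obj = s3.get_object(Bucket="projects", Key="report.docx", VersionId=oldest_id)
print(len(obj["Body"].read()), "bytes")
```

The point, as described above, is simply that each version remains addressable through the API so you can reference the specific version you want to work with.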

So this whole chain of features that we have here, A.J., along with the ability to have a temporary or permanent air gap upon replication to a given site, gives you a very ideal way to protect your petabyte-scale data source from ransomware attacks or other bad actors.

Adrian Herrera: Yeah.  And of course you always have the standard value props of object storage, where you can continuously upgrade the hardware beneath the data.  You can continue to make sure you’re using the most optimal and efficient hardware – your hard drives, your chassis, the most power efficient.  So we just focused on some of the resilience and recovery features, but when you pair these with the standard value proposition and the features, then object storage really becomes a very powerful solution.

Now, let’s talk about bringing it all together.  So from an object storage resilience and recovery perspective, just covering what we presented: we demonstrated how object storage enables customizable copies of data across various chassis, offsite, and also offline or air-gapped, with all of the added data protection features.  And we also showed how you can do that at petabyte scale and why object storage is one of the unique ways to do this at petabyte scale.  We showed how all of the nodes participate in recovery.  And we also showed how, as a cluster grows, the efficiency that you get by using erasure coding as a protection method really kicks in, because you can protect content and only really need to assign 20 to 30 percent overhead to get very, very good resilience from your object storage solution.

So do you have anything to add here, John, before we map that to the traditional 3-2-1-1 backup rule?

John Bell: I would add another thing that gives you a certain measure of resilience and flexibility, to touch on what you talked about with being able to basically upgrade the hardware out from underneath the solution and not have clients notice it at any point in time.  A well-designed object store is going to be designed around the concept of volume portability.  And one of the Achilles heels of the traditional file system and RAID approach, or SAN or NAS approach, is that it’s not really possible to take a drive out of one array and stick it into another array, right.

You might have a situation where there’s nothing wrong with your hard drives, but a motherboard dies, right, and you need to be able to bring that up quickly.  In that situation you may not even need to rebuild, because all you have to do is pull the volumes out of that one server and plug them into another server.  Bring that server back online.  The cluster recognizes it joining the cluster.  At that point the data that’s on those volumes is available to any client request that comes to the cluster.

Adrian Herrera: Yeah, that’s a good point and we didn’t even cover that, but thank you for bringing that up.  And I think in today’s world, where you have remote hands and maybe certain employees can’t get into the data center, you still need to go ahead and swap out chassis.

John Bell: You can have a lot of mistakes there.

Adrian Herrera: So you have to have a durable solution where you can pull a drive and then put it back in – you know, you have an oops moment.  A solution like Swarm handles that gracefully and just keeps on going.  So just to repeat what John said: you can rip one drive out of one chassis and put it in another, and because of the global namespace of Swarm that data will still be available.  If it needs to rebalance, the system takes care of it all.  I think that’s a pretty important point to bring up.  Thanks, John.

So how does this all map to the 3-2-1-1 backup rule?  We hear backup vendors talk about this a lot: having three copies of data on two different types of media, one offsite online copy and one offline copy.  And I think we’ve demonstrated all of these.  You can do all of these with object storage.  So I guess the point to get across here is there is absolutely a place for backup applications, and there’s absolutely a place for RAID and for NAS, but for your primary, your secondary, and your tertiary datasets, when you’re talking about petabytes of data, when you’re talking about billions of files, do those traditional backup methods and protection methods work at that scale for you and, more importantly, for your budget?

If you’re struggling with that then you should really take a good hard look at object storage.

And anything to add to this, John?

John Bell: Yeah.  Certainly.  When we say that you can cost-effectively protect petabytes of data, the key cost savings here are in that second item, A.J., from my personal philosophical perspective, where they call out that you need to have two different types of media.  With a well-designed object storage solution you can collapse that requirement because you have geographically dispersed clusters that you’re replicating between.  It really doesn’t matter whether or not the media is different, right?  You’re still providing that full set of protection, and you can even have clusters replicated onsite if you choose to do it in that fashion, where you have a cluster on one side of the building and a cluster on the other side of the building and they’re doing cross-cluster replication with each other.

It really doesn’t matter that they’re using different media.  It doesn’t really buy you anything to do it that way.  In fact, you’re probably better off taking advantage of the economies of scale by using the same type of media, the same server types, in the two different rooms, right, just as a simple example.  So there are a lot of opportunities here to save cost when attempting to protect your data with this traditional 3-2-1-1 backup rule approach that I don’t think people realize are available to them when they’re leveraging object storage within their environment.

Adrian Herrera: Yeah.  Absolutely.  That’s a great point.  And it’s just a rule.  And I think, John, you brought up a great point: what are you really trying to get at?  What’s your goal?  What’s your objective, and does the media really matter?  Put it on two different hard drives, and if you’re worried about your drives failing from the same batch, go ahead and use two different vendors and two different chassis.

John Bell: You could approach it more from a performance perspective as opposed to a protection one.  You could have a primary cluster that has very fast storage.  Let’s say it has enterprise-grade drives that are running at 10,000 RPM, just as a simple example.  And then you could have a DR cluster that’s more of a deep-archive type of approach, right, because it’s not getting hit with client traffic, and you could use slower drives or maybe larger drives or a more dense solution with your chassis count over there.  And that could accommodate that requirement as well.

But the key here is you have the flexibility to make that decision given the set of requirements that you have to work with for any deployment that you’re envisioning.

Adrian Herrera: Absolutely.  So let’s talk about closing and next steps.  This is probably the single most important piece of advice that we can give, right, John?

John Bell: Oh, most definitely.  Don’t put all your eggs in one basket.  And we walked through the various methods that can be used with object storage to avoid painting yourself into that corner.

Adrian Herrera: Do you have any sort of real-world experiences you can share here, or any – I mean, I always like to ask for words of wisdom.  I mean, don’t put all your eggs in one basket is definitely words of wisdom, but any other experiences or words of wisdom to share before we move to the closing slide?

John Bell: Well, there have definitely been situations where our customers have been able to take advantage of the fact that they set up clusters in such a way that they’re doing cross-site replication, so that no matter which way the client traffic comes in, especially if they’re writes, they’re ensured that that data is replicated over from the original point where it was instantiated – the backup of that is moved over to the other cluster.  We have a lot of customers in the field that use our solution that way, and that’s actually for that reason.  They’re not only balancing the client traffic, but also making sure that when data is written, no matter which cluster it’s coming into, it’s protected on the other side.

Adrian Herrera: Yeah.  Definitely.  Great words of wisdom.  And as far as next steps, if you go to datacore.com, up at the top level you’ll see a Resources section.  If you click on that and you click on Whitepapers, we have a great white paper that goes into detail on what John and I just spoke about.  It’s called “Data Protection with Swarm Object Storage”.  It was written by one of our elite engineers, Don Baker.  It’s really good.  It goes over the different data protection schemes that are used in Swarm Object Storage, including an overview of elastic content protection.

As always, if you have any questions you can email them to us at info@datacore.com.  If you have any specific questions for John or me, just go ahead and state that in the text of the email and they will get directed to us.

With that we’d like to open it up for questions, if you have any.  We did answer the ones that I saw throughout the presentation.  We did have a request for this actual presentation.  You can reach out to your DataCore representative and we’ll be able to work that out.  So just reach out to your DataCore rep and we will follow up with the presentation directly to you.

And, again, we talked about words of wisdom, John.  Any closing statements?  I mean, I think we’re in this world now.  It’s just very unique where people can’t get into the data center.  I mean, we’re seeing these massive malicious attacks.

As far as what people need to think about from a recovery and resilience perspective where would you recommend they all start?  I mean, this is a very, very complex problem.  I mean, where do you see organizations start from an evaluation perspective?  Like, how do you get your hands around this?

John Bell: Well, you know, certainly I’m not about to get too far into the weeds, but ultimately this starts out as a business requirement, where a risk assessment is done on the nature of the data that you’re managing, and you build a structure for managing that data that provides the appropriate protections and the ability to control access, so you can quickly recover from anything unwanted or unwarranted.  Right.  And when it comes to doing that, the takeaway I hope that everybody has from all of this is that object storage, and Swarm Object Storage in particular, is an incredibly strong component in that overall approach to protecting your data.  It’s built to be very resilient in the face of failure.  It’s built to protect against things like bitrot over time.  It has the necessary access controls in place to make sure that only authorized parties are allowed to do certain operations on the data that’s stored in the system.

You have the ability to set up replication, and even replication policies depending on the domains that you want to send, as opposed to the entire cluster – what gets replicated to where.

And ultimately you’re presented with a really strong tool in your toolkit for protecting the organization’s data when you leverage an object storage solution like ours.

Adrian Herrera: Well said.  Well said.  I don’t have anything to add to that.  And it looks like we don’t have any additional questions.  So I would like to thank everyone for their time.  Thank you for sticking with us.  I think most of the attendees stayed for the end of the presentation.

John, as always, it was a great presentation.  Thank you for sharing your experiences with us.

So wait a minute.  We had one question coming in.  Let me make sure.  “File protocols.  Does it support?”

I’ll just ask you.  What file protocols does Swarm support?

John Bell: Well, of course, when you’re interacting with object storage, typically you’re interacting in a RESTful API fashion, so it’s going to be HTTP 1.1.  If you have any kind of HTTP library, you can make calls with the appropriate verbs to manipulate the data in the system.  So to break that down, you’re either going to be working with our native protocol, which we call SCSP, or Simple Content Storage Protocol, or you can use the S3 protocol, the Amazon protocol, to work with data within our system.  And there are also things that you can layer on top of that, such as SwarmFS, for example, which provides what amounts to an NFS protocol gateway for legacy systems that may not have the capability, for whatever reason, to integrate with an object store directly.  They can integrate through an NFS-like interface.

Those are typically the interfaces that are used when putting data in the system.  And of course the magic to all of that, as A.J. touched on earlier, is our ability to provide a unified namespace for all of it.  So let’s say you have a client drop something in with our native SCSP integration, and then you want to turn right around and make sure that that data can be accessed by teams that are using S3 clients, for example.  You can be assured that when they come in and make those S3 requests, that data will be there and it can be seen and manipulated through S3 calls.  And we provide that unified namespace that allows that to happen.

Adrian Herrera: All right.  And that actually is all the questions.  I don’t think we have any additional questions.  So, all right.  Thank you again to our viewers out there for joining us this morning or this afternoon or this evening – wherever you are in the world.  And once again, thanks, John.  Great information, high value.  And as always, if you have any questions just email us at info@datacore.com and we will make sure that it gets routed to the right party and get a response to you as quickly as possible.  So once again, thanks, John.

John Bell: Oh, my pleasure.

Adrian Herrera: And this concludes our webinar.  Thanks, everyone.

