On-Demand Webcast | 30 min

A Practical and Effective Approach to Leveraging the Cloud for Inactive Data

Steven Hunt

Sr. Director, Product Management

DataCore Software

Webcast Transcript

David: I’m excited to introduce Mr. Steven Hunt, senior director of product management at DataCore. Steven, are you there?

Steven Hunt: I am. Can you hear me OK?

David: I can, yeah. Thanks for being on, Steve. Take it away.

Steven Hunt: Thanks, David, and thanks everyone for joining. As David said, my name’s Steven Hunt and I’m the senior director of product management at DataCore. I want to talk about a couple of different use cases related to the cloud and ultimately how DataCore helps address them. We’re really going to focus on two topics from a use case standpoint: leveraging the cloud for storing inactive or archival data, and how you can potentially leverage the cloud for disaster recovery purposes. So let’s dive in here. From an agenda standpoint, first I want to highlight why anyone would want to move data to the cloud, and what the reasons are for that.

There are probably a number of different reasons, and I’ll highlight a few of those. Then I want to talk about a couple of our products and ultimately how we help address some of those potential use cases. One of those is a newly launched product, DataCore vFilO, and the other is one of our longstanding flagship products, DataCore SANsymphony. And then, where do we go from here; what’s next now that you’ve learned this information? And then obviously a Q&A there at the end. I think we’ll have a poll question as well somewhere amongst the slides. So let’s dive into it: why would anyone move data to the cloud? I think that’s a common question that a lot of IT administrators ask themselves.

And it really depends upon your organization and ultimately what you are trying to accomplish. One of the most common reasons people need to move data to the cloud is to solve the exponential data growth problem. The reality is data is growing so fast that the need to store it sometimes outpaces the ability of IT operations teams to scale their storage solutions quickly. So one effective approach is moving that data into a public cloud storage option, and that’s one of the things I’ll show you today how we can help facilitate.

And a lot of this is often associated with moving inactive or archival data to the cloud for long-term, cost-effective storage, while maintaining all of our on-premises storage for highly active, hot data that needs the most performance. The next reason people move to the cloud is to facilitate multisite backup of data. In today’s world, being able to recall that data and have it available whenever necessary in the event of a disaster, whether natural or unnatural, is a problem most IT organizations are faced with. They need to be able to ensure that that data is available. And one of the most sound approaches is storing copies of that data in multiple sites, more specifically leveraging the public cloud as one of those sites.

And the last one I want to highlight: oftentimes you have an IT leader who has come back from a conference, has talked to their colleagues and peers, and ultimately has enacted a couple of initiatives, and one of those is, we need to have a cloud strategy, a hybrid cloud strategy, a public cloud strategy, some type of cloud strategy. So oftentimes you have to move data to the cloud simply to deliver on your IT leader’s strategic initiatives. You know, I’ve talked to a number of people who, when we asked why they are ultimately moving data to the cloud, said it’s because their CIO told them they have to.

vFilO Distributed File and Object Storage Virtualization Software

So if you’re faced with any of these particular reasons, and again, there are a number of reasons why you would move data to the cloud, these are some very common use cases. And I want to highlight how a couple of our products can help address them. The first is DataCore vFilO. This is a new solution that we launched recently, and it is really focused on taking the software-defined storage approach we have taken with SANsymphony and driving that into file and object storage solutions. So at a quick glance, DataCore vFilO can really help facilitate the assimilation of existing file storage and NAS solutions that you have in your environment.

You can deploy vFilO as a standalone NAS, and it’s a great capability to provide a new scale-out approach to your file storage needs, doing so with a software-defined storage component. But just like we have done with SANsymphony, and I’ll speak to that a little more in a moment, the ability to leverage your existing investments is very, very important to most IT organizations; you have to maximize your dollar usage. And so as a result, we wanted to make sure we had a solution that allows you not only to deploy standalone instances of vFilO for net-new file storage capabilities, but also to leverage the existing investments you’ve made in file storage and NAS. I’ll show you a little more about how we achieve that.

vFilO also enables organizations to create a single place for interaction by applications and users. Every IT organization out there has a number of file shares that they manage, and oftentimes they’ve been deployed in such a siloed manner that they’ve become difficult to manage and difficult for users and applications to navigate. So vFilO really helps aggregate all of that under a single global, scalable, searchable catalogue, and you can then point your users and your applications to that, and not have to worry about where this share exists, where that share exists, or what the pathway to a particular share is. The other thing that vFilO does really, really well is it takes all of the requests that are happening from a data read and data write perspective and balances them across any of the available storage.

Again, whether it be a vFilO native storage node or an existing storage solution that you’ve assimilated with vFilO. One of the most important things I’m going to talk about today is how vFilO can leverage public cloud capabilities and allow you to take a hybrid cloud approach, expanding your storage from on-prem into the cloud. And all of this is really driven by our ability to move data effectively. The way we move data is based upon policy definitions and business intent, and I’ll show you a little more about that here in just a minute. So I mentioned assimilating existing file storage solutions. It’s really, really simple; once you get vFilO stood up, you simply add a new storage system. vFilO storage nodes are added automatically; it detects those and you’ll see them within the UI.

But if you need to add existing storage solutions, all you have to do is add a storage system in the UI, select the type of file-based storage system you want to connect to, and you’ll see a number of those in the list. Then you enter the information needed to connect to it, and we’ll start that assimilation process. And that assimilation process can really be from a read-only or a read/write perspective, and I’ll speak about that here in just a minute. But this allows you to establish your scale-out file storage solution with vFilO very, very quickly by expanding what you need with vFilO storage nodes and also facilitating connectivity and interaction with your existing file storage solutions.

If your solution isn’t in the list there, you can simply choose the other NFS option, which will allow you to connect to generic NFS shares and read the data that exists there. Once you’ve added your file storage capabilities, and this is for all of your on-premises interaction, the next thing you can do is assign an object or cloud storage. This is going to allow you to move data, after compression and deduplication, into an object store, and for this particular part of the conversation I really want to focus on the ability to move this into public cloud storage. So as you can see in this list, you can add AWS S3 buckets, you can add Azure blobs, we even have Google Cloud Storage in there as well. There’s IBM Cloud Object Storage; there are a number of different object storage options.
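
As a concrete illustration of the kind of connection details involved, here is a minimal pre-flight check, written against the real AWS boto3 SDK, that an administrator might run before registering an S3 bucket as an object storage target. This is not part of vFilO itself, and the bucket name and region are hypothetical; it only verifies that the credentials picked up from the environment can actually reach the bucket you would enter in the UI.

```python
# Illustrative pre-flight check, not vFilO code: verify that the S3
# connection details an admin would enter actually work before
# registering the bucket as an object storage target.
import boto3
from botocore.exceptions import ClientError

def check_s3_target(bucket: str, region: str) -> bool:
    """Return True if the bucket is reachable with the current credentials."""
    s3 = boto3.client("s3", region_name=region)
    try:
        s3.head_bucket(Bucket=bucket)  # cheap existence and permission check
        return True
    except ClientError as err:
        print(f"Cannot reach {bucket}: {err.response['Error']['Code']}")
        return False

if __name__ == "__main__":
    check_s3_target("archive-tier-bucket", "us-east-1")  # hypothetical names
```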

You’ll see some on-premises object storage options as well. So if you want to build an on-premises object storage solution, to move your data from high-capacity, high-performance file storage into a lower-performing archival object storage solution on-prem, you can do that. But also note that for moving data to the cloud, you can simply select the object storage solution from that particular vendor, input your connection information, and we will add it as a storage system that you can start leveraging. So I mentioned assimilation, and ultimately how do we start interacting with the file storage data you have today? When you add vFilO into your environment and connect it to your existing solutions, we start with a Metadata Assimilation.

And this really explains to vFilO what data is there. We don’t actually start assimilating the files themselves; we start with a Metadata Assimilation so we can catalogue and understand what data is available, where it lives, and what we need to do with it. From that point you can start delivering a movement policy and allow that data either to remain on that third-party file storage solution or to move to a vFilO storage node. Again, if it’s on an existing solution, it can be read-only access, where vFilO simply understands where the data is and helps applications and users find it underneath that new global shared catalogue. Or you can define a policy that ultimately allows that data to move to a vFilO storage node if you want to migrate off of that legacy third-party NAS solution.
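
To make the idea concrete, here is a minimal sketch of what a metadata-only pass over an existing share could look like: it records each file’s path, size, and modification time without reading or moving any file contents. This is purely illustrative Python, not vFilO’s implementation, and the mount point is hypothetical.

```python
# Conceptual sketch only: catalogue what data exists on a share
# (path, size, modification time) without touching file contents.
from pathlib import Path

def assimilate_metadata(share_root: str) -> dict:
    catalogue = {}
    for path in Path(share_root).rglob("*"):
        if path.is_file():
            st = path.stat()
            catalogue[str(path)] = {
                "size_bytes": st.st_size,
                "modified": st.st_mtime,  # later drives age-based objectives
            }
    return catalogue

catalogue = assimilate_metadata("/mnt/legacy_nas")  # hypothetical mount point
print(f"Catalogued {len(catalogue)} files without moving any data")
```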

And the one thing, again, I want to highlight very, very quickly is that this is all policy driven. So how do you facilitate that policy mechanism and ultimately allow that movement of data to the cloud? We say you simply state your objective, and there’s a reason for that: we call our policies objectives in vFilO. So once you’ve got your storage configured, your file storage in place, and your object storage connection defined, you simply state your objectives. And what you see here is a very clear example, where we’ve stated that if any of our live files with an extension of MP4 is less than six months old, then we want to exclude it from moving to any of our object volumes.

But if it’s older than six months, we want to place it on one of the specific Azure blobs that we’ve connected and made available. So once you’ve defined this objective and you click go, vFilO starts to evaluate all of the data that exists in your environment and begins to place it in the appropriate location. In this case, everything with an MP4 file extension that is older than six months is going to start moving to that Azure blob, and you’re going to see the data move. Now, it’s still accessible by the end users and the applications; they still have that same front-end communication that existed through their NFS share. But what they now have is the ability to access that data if they need to retrieve it: they simply double-click on the MP4 file, vFilO then determines where it lives, makes it accessible, and pulls it in.
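
Expressed as a tiny placement rule, the objective described above might look something like the following sketch. The six-month threshold and the MP4 extension come from the example; the function and the tier names are hypothetical illustrations, not vFilO’s actual objective syntax.

```python
# Sketch of the objective above: MP4s younger than six months are excluded
# from object volumes; older MP4s land on the Azure blob target.
import time

SIX_MONTHS = 182 * 24 * 3600  # roughly six months, in seconds

def place_file(name: str, mtime: float) -> str:
    age = time.time() - mtime
    if name.lower().endswith(".mp4"):
        if age < SIX_MONTHS:
            return "on-prem"            # excluded from object volumes
        return "azure-blob-archive"     # hypothetical blob target name
    return "on-prem"                    # other files keep their default tier

# A file last modified 200 days ago lands on the Azure blob target.
print(place_file("training.mp4", time.time() - 200 * 24 * 3600))
```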

Now, the interaction with that file changes its inactive state, and the file can start moving, based upon whatever objectives have been defined, to wherever is appropriate. That may mean it moves back on-prem into the local data center and is put back on a higher-performance file storage solution. So one of the things I want to highlight is, again, why would we move data to the cloud? Well, one of those reasons is that we have very expensive storage on-prem. It’s usually high performing and fairly costly, and as a result, we want to take advantage of the cloud and move some of the inactive or archival data off of our expensive on-premises storage and onto low-cost object-based storage in the cloud.

And all you have to do is define the policy that allows that data to move, and now, all of a sudden, all of your inactive data is deduped, compressed, and moved either to a lower-cost vFilO storage node or into the cloud for ongoing storage. The next aspect is how you can leverage vFilO to solve the other use case we’re talking about, which is disaster recovery. You can define your policies such that you replicate the data and make backup copies of it in the cloud. So through your policy-based definition, you’re taking your files and saying, hey, I need to do something with those, whether it be move them to lower-cost storage on premises, make a copy in the cloud, or move the entire thing to the cloud. You have complete control over where the data lives and exists with vFilO through that policy-based definition.
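
The difference between the archive case (relocate the data) and the DR case (keep the on-prem primary and add a cloud copy) can be sketched like this. The staging directory stands in for whatever mechanism actually ships data to the object store; all names here are hypothetical, and this is not vFilO code.

```python
# Hedged sketch of move-versus-copy in a policy: DR keeps the primary and
# stages a compressed second copy; archive relocates the data entirely.
import gzip
import shutil
from pathlib import Path

def copy_for_dr(src: Path, staging: Path) -> Path:
    """DR case: compress a copy bound for object storage; original stays put."""
    staging.mkdir(parents=True, exist_ok=True)
    dest = staging / (src.name + ".gz")
    with src.open("rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)  # on-prem primary is untouched
    return dest

def move_to_archive(src: Path, staging: Path) -> Path:
    """Archive case: the compressed copy replaces the on-prem original."""
    dest = copy_for_dr(src, staging)
    src.unlink()  # data now lives only in the cloud-bound tier
    return dest
```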

So now, if you have a disaster recovery event, you have a copy of the data that lives in the cloud, and you can easily start bringing it back on premises and restore that particular data. The other thing I want to highlight is that everything I’ve talked about today can be done all the way down to the individual file level. You can do it for an entire volume, for an entire type of file set, or for an individual file itself, and vFilO helps facilitate that. And there are a number of different features and data services you can leverage, like snapshots and replication; all of those capabilities go all the way down to a file-granular level. So we’ve talked about vFilO and its capabilities.

SANsymphony Block-Based Storage Virtualization Software

The next thing I want to highlight is SANsymphony, and again, this solution has been the flagship product of DataCore for a very long time. For those who are not familiar with it, it is a block-based SDS platform with a comprehensive set of features and data services. At a quick glance, just as I was talking about the ability to assimilate existing file and object storage solutions with vFilO, SANsymphony has long held the ability to consume existing storage area network and direct-attached storage solutions and manage those through SANsymphony. The other capability SANsymphony provides, one that has earned it great fans, has been our ability to increase the performance of existing storage solutions.

Some of the proprietary capabilities we have around caching and parallel I/O allow the performance of your block-based storage solution to be increased by leveraging SANsymphony. We also have a highly available design that allows data not only to be accelerated but also to become highly available through our mirroring and replication capabilities. So for those business-critical workloads, SANsymphony can help facilitate the replication of that data to ensure that no matter what happens to the underlying infrastructure, it’s always available. And that aspect is how we ultimately help facilitate the ability to store data offsite from a replication standpoint and make it available in the event of a disaster recovery scenario.
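
As a toy illustration of the mirroring idea, consider a write that is acknowledged only after both copies have landed, so either target can serve the data if the other fails. The two directories stand in for two storage sites; this is an assumption-laden sketch, not SANsymphony’s mechanism.

```python
# Toy model of mirrored writes, not SANsymphony's implementation: the
# acknowledgement is withheld until both copies are safely written.
from pathlib import Path

def mirrored_write(name: str, data: bytes, site_a: Path, site_b: Path) -> None:
    for site in (site_a, site_b):
        site.mkdir(parents=True, exist_ok=True)
        (site / name).write_bytes(data)  # both writes must succeed
    # only now would the host receive its write acknowledgement

mirrored_write("block-0042.bin", b"payload",
               Path("/tmp/site_a"), Path("/tmp/site_b"))  # hypothetical sites
```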

We also have the ability to tier across different disk pools, and this really provides a cost-effective sizing mechanism for your block-based storage. And then, as I was mentioning, replicating across data centers and to the cloud helps facilitate that disaster recovery scenario. So let’s talk about how that can be facilitated. If you look at the diagram I have here, SANsymphony provides the access and the connectivity to the upstream component, which is any type of host: a physical host, a virtual host, a hypervisor. It facilitates that via iSCSI and Fibre Channel connections down into either local storage, existing direct-attached storage, or existing SAN storage, and we ultimately aggregate that into a set of storage pools from which we tier that information from one location to another.

Through a cloud access gateway, you can actually connect and move that data through the tiering mechanism all the way down: as data gets older and older, those blocks can be sent into block-based storage in the public cloud. And again, this helps ensure that as the data ages it sits in the cloud, and as it gets accessed, those blocks can be brought back on premises and put back in the high-performance tier. The other thing we can help facilitate in terms of replication is that you can actually go to the Azure Marketplace; this is one area where we’ve made our software available for cloud consumption, and we’re evaluating capabilities for customers to do that across different clouds.
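
To illustrate age-based block tiering in miniature, the sketch below demotes blocks that have not been touched for a while one step at a time toward the cloud, and promotes a block back to the fast tier when it is read. The tier names and the 30-day threshold are hypothetical, not SANsymphony settings.

```python
# Miniature model of age-based block tiering with hypothetical tiers:
# cold blocks drift toward the cloud; a read pulls a block back to flash.
import time
from dataclasses import dataclass, field

TIERS = ["flash", "disk", "cloud"]  # fastest to cheapest
DEMOTE_AFTER = 30 * 24 * 3600       # seconds without access (hypothetical)

@dataclass
class Block:
    last_access: float = field(default_factory=time.time)
    tier: int = 0                   # index into TIERS

def demote_cold_blocks(blocks: list, now: float) -> None:
    for b in blocks:
        if now - b.last_access > DEMOTE_AFTER and b.tier < len(TIERS) - 1:
            b.tier += 1             # one step closer to the cloud tier

def read_block(b: Block) -> None:
    b.last_access = time.time()
    b.tier = 0                      # access pulls the block back on-prem
```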

But we started with Azure, and we have the ability to enable deployment of DataCore natively through the Azure Marketplace, connect that to your on-premises SANsymphony deployment, and allow the replication of your data from one site to the other. And ultimately it’s really, really simple: you have two instances of DataCore, one running on-premises and one running in Azure. You have your application connecting on the top end, and SANsymphony is helping facilitate the movement of that data, either through a tiering mechanism or through a replication definition where you replicate data over to the cloud, ensuring that when you have a disaster recovery event, when a failover happens, you can be sure that your SQL Server instances, in this case, are up and running and that the data is available.

And once you need to fail back, you can initiate the failback through SANsymphony and allow all that data to go back on-prem when you’re ready. So these are a couple of different ways DataCore can help facilitate moving data from on-premises to the cloud for expansion purposes, for archival purposes, and for backup and recovery purposes. We have those capabilities built into our products today and can get you down that path faster. So where do you go from here? Really, really simple: if you want to learn more about what I’ve talked about from a cloud capability standpoint, or if you want to know more about these products beyond these specific cloud use cases, you can go to DataCore.com/products/vfilo and learn more about DataCore vFilO.

And you can also go to DataCore.com/products/sds and learn more about DataCore SANsymphony. And like I said, there’s a whole host of additional features, capabilities, and data services that I didn’t talk about today, and we highlight those on our product pages where you can learn more. So with that, I think I’m done; I guess we can get to any questions.

Q&A:

David: Absolutely, yeah, great presentation, Steven. We do have some questions for you; while we do that, I’m just going to bring up this first poll question for a minute. I want to call the audience’s attention to that in the slides window there; it says, do you have any active or upcoming storage projects in the next six months? So let’s see, first question, Steven: Neil is asking, what’s the most obvious difference between DataCore and, say, and he has the competitor listed there, I won’t name competitors. But what do you think makes DataCore unique above the competition?

Steven Hunt: That’s a great question. I’ll speak to each of the products we’ve mentioned here, but from an overall standpoint, DataCore takes a software-defined storage approach to these different scenarios. We’re a software-only component that drives the consumption of existing solutions, facilitates storage through our own native nodes, and provides the overall management of data, whether it be block, file, or object. Taking a software-defined storage approach means we essentially become the overarching management layer for storage. We allow IT organizations to aggregate the management of that storage and present a single source for consumption of that storage for their applications, their workloads, their users, and their hosts. That’s one of the things that makes DataCore very, very unique: our ability to do this across block, file, and object storage.

Specifically around vFilO as a product itself, one of its unique capabilities is really the combination of a file storage solution with a data management solution. So not only do you get the basic capabilities of file storage and NAS, you also have advanced data management. Through our ability to consume metadata information and give the IT organization full policy-based control over that data, we provide something very unique that allows the effective management of both the expansion of data in an organization and the day-to-day operations needed for a file and object storage solution.

And then the same with SANsymphony: the ability for organizations to have an overall management component for their block-based storage solutions, aggregate it all in, facilitate the management of it in a pooled and tiered fashion, and then increase its performance and capability for their application hosts, hypervisors, and virtual machines. It provides that same software-defined storage approach to block-based storage.

David: Got it, got it, OK. And excuse me, I’m going to bring up the next poll question for the audience, and we’ll do some more Q&A. So another question here that came in; maybe you can just quickly summarize for those who got a little confused between the two different products. Can you just quickly tell us, what’s the difference between vFilO and SANsymphony?

Steven Hunt: Great question. Yeah, so it may not have been readily understood there, but in a nutshell, vFilO is our file and object software-defined storage solution. It allows for the consumption and management of file and object storage and the presentation of those capabilities through the standard file and object interfaces, NFS and SMB, and, coming in this calendar year, 2020, an S3-compliant front-end interface. So that’s vFilO. SANsymphony is our block-based software-defined storage solution, which allows the consumption of local, direct-attached, or network-based block storage through iSCSI and Fibre Channel, the aggregation and management of that, and the presentation of it to hosts through iSCSI and Fibre Channel connections.

David: Got it, got it, OK. And I want to call everyone’s attention to the poll question on the screen there, if you haven’t answered it. Let’s do one more question, and that is: if somebody wants to get started with vFilO or SANsymphony, what do you recommend? Is there a POC process they can go through?

Steven Hunt: Yeah, that’s a great question. If you go to DataCore.com/products, either vFilO or SDS, you’ll see a try-it-now component on our website that allows people to quickly download and interact with the software themselves. Software-defined storage solutions are easy to adopt, but there are some complexities, so I’d highly encourage anyone who’s interested to contact us and let us know they’re interested in trying out one or both of these products. And we can partner them with one of our channel partners and one of our field sales resources to go down either a basic evaluation process or a full-scale proof of concept process.

David: Got it, got it, OK. Well, I know we could chat about this all day; I really enjoyed the presentation, Steven, and I appreciate you being on the event today. Thank you.

Steven Hunt: Thanks for having us, and thanks everyone for taking the time to listen. I hope you were able to get something meaningful out of it.

David: Very good, yeah, very good presentation. And I want to point out, you can access the DataCore website directly in the handouts tab by clicking on the DataCore link, which will take you to DataCore.com, where you can sign up to do a POC or download more information yourself. All right, well, thank you so much, Steven, and thank you to DataCore for supporting today’s event.
