On-Demand Webcast 49 min

DataCore News: New SDS Product for File-Based Workloads

Augie Gonzalez

Director of Product Marketing

DataCore

Webcast Transcript

Welcome to our webcast today. I’m Augie Gonzalez, director of technical product marketing for DataCore Software, calling in today from a gorgeous morning in beautiful Fort Lauderdale, Florida.

Today’s presentation will be on our brand-new distributed file and object storage virtualization product called vFilO. There is an attachment included that you can download with the presentation. If you have any inquiries, please use the Ask a Question panel to enter them. The webinar will be recorded for later playback, so you can share it with your colleagues.

And with that, we’ll get started. And appreciate you joining us.

Today, what we’re going to do is first get you well acquainted with our new capabilities in software-defined storage for unstructured data. We’ll go over some of the top use cases and the benefits you can derive from them, as well as the business value that would justify your investment in such a technology. I’ll also give you an idea about pricing and licensing, and we’ll close out with some additional resources so you can explore more.

Introducing vFilO

DataCore has been in business now for over 20 years. We have been working on block storage virtualization for much of that time with our SANsymphony products, so familiarity with that capability may be one of the reasons you joined us. This year we launched into an adjacent opportunity: the file and object storage virtualization you see on the right here. That allows us, and our customers, to address not only the high-performance database and hypervisor operating system requirements, but all the things you may be relying on a set of siloed network-attached storage and file servers to address today, and also multimedia requirements that call for very large object storage.

This entire portfolio allows us to address many of your requirements from a foundation of existing assets you already have in place. The vFilO software, as you can see here, uses machine-learning optimizations based on intents and objectives that you define to decide where to place data, whether it needs to be put on high-performance gear or on low-cost, large-capacity storage. I’ll walk you through all of those choices and how you provide hints to the system about where to place it.

One of the interesting parts of this whole capability is that you can extend some of the base capabilities you have on-prem out to the cloud, so that you can tap into external storage.

To be really straightforward in terms of use cases, the first use case for vFilO is to consolidate, or aggregate, the namespace from several filers and network-attached storage resources, like you see in the drawing here. This makes it really easy for you, and your users, to access files, share them, and back them up, so that’s an important first step in this direction.

The second part that vFilO brings to the party is the ability to leverage object storage or cloud storage as a lower-cost extension to what you may already have on the floor today, using that inexpensive capacity for archiving infrequently used files and for providing additional safeguards, in terms of replication, for certain critical files. So it’s a level of additional redundancy you may be missing today but are eager to get your hands on.

And the third element of the offering is the ability to scale out and expand while automatically load balancing across the file shares, so that you can, in fact, maintain control and adequate capacity for unstructured data growth. We’ll see more about that in just a few minutes.

Again, if you have any questions, just go ahead and pop them into the Ask a Question panel.

vFilO Use Cases

So what problems are we solving? And there are several really big ones that I’ll point out as we go through this webcast.

The first one is more from the user standpoint, and that’s the issue of trying to find files that are widely scattered across your universe of storage systems. In each of these scenarios, I’ll point out the before picture as it may exist for you today, which may have been the reason you came to the webcast, the problem it’s likely manifesting as, and how we resolve that problem once vFilO is put into the domain.

So the file-finding problem is really this: if you have multiple file servers and NAS appliances, as you see in this next picture, you probably, at one point, had a really clear hierarchy and a good organizational reason to segregate file shares and the respective storage systems they sit on, whether that’s NetApp or Isilon or Windows file servers or Linux file servers, each taking a slice of your file-serving needs. So you carved out these nice niches, and that worked out pretty well at the outset.

The problem is twofold. One is that as capacity demands and workloads grow, you tend to spill over. In our case, for example, in the left-hand corner here, the headquarters marketing NAS box soon runs out of space. Rather than having to expand it, we see that perhaps the support NAS has a little more room on it. So usually what IT will do is say, “Okay, look, I’m going to have to move some of you over to that.” And the moment I do that, I break the original organizational hierarchy I had. Now I have some of the marketing people sitting partially on the headquarters marketing server, while others are sitting on the support box. And this gets all the more contaminated over time in terms of who is sitting where, just based on the organization’s needs. There’s no sense wasting good capacity and having to expand NAS1 if I’ve got some other place to put it.

So it makes rational sense from an asset-optimization standpoint; however, it really breaks the view those users had. For example, if you had files you counted on, you knew how to navigate to them; now you’ve got to figure out which of these two buckets you might be in. And if you shared those files with somebody else, those path names may have been severed, and you have broken links.

And that’s one part of it. The other part is access to these resources: when you’re trying to find a file, if you don’t know deliberately and explicitly which share it’s sitting in, you basically have to walk through all of them individually to determine where it is. I’ll show you a little more of those complications.

In place of that frustration you, or your users, may already be experiencing, vFilO can turn all of this into a global namespace. By global namespace, we mean that we encapsulate, or assimilate, all of these different file shares so that you no longer have to be aware of the physical storage devices they sit on. You still have the same folder directories, and all of that hierarchy remains in place, but now you can access it as if it were one large share.

In this case, it’s one large /corp share. From there, you can get to any location and do complete searches. You can do keyword searches in natural language to look for something based on the metadata associated with the files. So it makes exploring and searching for files super easy. You’ll also see shortly how it makes this environment super easy for IT to maintain.

vFilO Process

To give you a little more of the hands-on view: vFilO is a layered product, as I showed you at the outset, so it basically sits outside of and above your existing NAS file shares. The first step in introducing vFilO into the environment is to assimilate them. We tell vFilO what resources are available and what pre-existing conditions it can take inventory of, to find out what files exist and what the namespace looks like in each of them. From there, it can create the global namespace. That global namespace then associates all of the individual branches underneath one big forest, or one big tree, and presents a singular view of the entire universe.

So here I can identify, for example, EMC Isilon or NetApp appliances, or just plain old NFS hosts that are out there. All of those will then be brought into the system. Within minutes, the system discovers what is available and creates that global catalog. At that point, we are already able to create the first part of the global namespace. In this picture, you see the normal environment, where you would have been dealing individually with each of these NAS boxes by mounting them explicitly, shown by the yellow arrow. After we’ve done our metadata assimilation, we have the live global namespace on the right, which you can now address as well. You can mount that in its place.
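
To make the assimilation idea concrete, here is a minimal sketch, not vFilO’s actual code: it walks a few hypothetical mounted shares and builds one catalog keyed by a share-independent path, which is the essence of presenting many shares as a single namespace. The mount points and the /corp prefix are assumptions for illustration.

```python
import os

def build_global_catalog(share_mounts):
    """Map a share-independent path to (share, size, mtime) for every file."""
    catalog = {}
    for share_name, mount_point in share_mounts.items():
        for dirpath, _dirs, filenames in os.walk(mount_point):
            for name in filenames:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, mount_point)
                info = os.stat(full)
                # The namespace path hides which physical box the file sits on.
                catalog["/corp/" + share_name + "/" + rel] = (
                    share_name, info.st_size, info.st_mtime)
    return catalog

if __name__ == "__main__":
    shares = {"marketing": "/mnt/nas1", "support": "/mnt/nas2"}  # hypothetical
    print(len(build_global_catalog(shares)), "files cataloged")
```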

The second step is to start actually assimilating the data proper and to set up some policies. But I’ll defer that conversation until a couple of slides from now.

Now, the second part of the problem that I’m sure all of you are running into is trying to determine what’s really important versus what’s not, and how to tailor the type of services you associate with a file based on that relative importance. More than likely, if you’re like 80 percent of the people I know, every file is treated similarly. No special distinction is made, and that’s because it’s largely impossible for IT to determine what’s hot and what’s not. From their perspective, that’s an impossible task to ask of them.

And so what happens is that the active data, the working set, is a small fraction of your overall capacity consumption, whereas all of this inactive, or secondary, data accumulates and chews up a lot of the storage capacity. With vFilO, because we understand the global namespace, we also understand the utilization characteristics. When was the last time something was accessed? When was it last actually touched and read? Which files are people hammering, and which seem to be pretty dormant?

So vFilO will segregate those classes of data on an individual file basis, and automatically dedupe, compress, and move the inactive data to object storage that you designate. This graph provides a clear illustration of that. Today, for example, you may have some NetApp FAS boxes here where the red part of the volumes is the active, important part, covering maybe 20 or 30 percent of the overall space, actively used by transactions and users. A large chunk of that premium storage capacity, however, holds inactive files that have just been sitting there accumulating.
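
A minimal sketch of the classification idea only, not DataCore’s implementation: split a set of files into active and inactive by time since last access, using the filesystem’s access timestamp. The six-month threshold here is an assumption matching the example used later in the talk.

```python
import os
import time

SIX_MONTHS = 182 * 24 * 3600  # assumed inactivity threshold, in seconds

def classify(paths, threshold=SIX_MONTHS):
    """Split paths into (active, inactive) by time since last access."""
    now = time.time()
    active, inactive = [], []
    for path in paths:
        idle = now - os.stat(path).st_atime  # seconds since last read
        (inactive if idle > threshold else active).append(path)
    return active, inactive
```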

Understanding those characteristics, vFilO can migrate, in the background, all of the inactive data off of the premium storage and put it instead on your low-cost capacity. We can do that in a couple of different ways. One is to designate an on-premises, lower-cost target: hey, I’ve got some other file servers here for things that don’t merit premium storage, so let me move some of this inactive data there.

In other organizations, that target may better be an elastic cloud, Amazon or Google or Azure, where I’ve got storage at a much lower price per terabyte or per gigabyte. When instructed to do so under these policies, vFilO will automatically dedupe and compress that information, which also reduces the transmission off to the cloud.

And it recognizes what’s in the cloud. So if it starts to see other files crop up later that are duplicates, it says: I already have that in the cloud; I don’t need to resend it. Those particular segments of those files are already covered, and I keep track of all of that.
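
Here is a hedged sketch of that segment-level dedupe idea, under assumptions of my own (a fixed 4 MiB segment size and a simple hash index); vFilO’s actual mechanism is internal to the product. Segments whose content hash is already known are simply skipped rather than resent.

```python
import hashlib

SEGMENT = 4 * 1024 * 1024  # assumed 4 MiB segment size

def segments_to_send(path, cloud_index):
    """Yield (offset, data) only for segments whose hash is not yet stored."""
    with open(path, "rb") as f:
        offset = 0
        while chunk := f.read(SEGMENT):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in cloud_index:   # already in the cloud? skip it
                cloud_index.add(digest)
                yield offset, chunk
            offset += len(chunk)

cloud = set()
list(segments_to_send(__file__, cloud))  # first pass "uploads" everything
print(len(list(segments_to_send(__file__, cloud))), "segments resent")  # 0
```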

You can see two immediate benefits from this. One is the space savings: not only are we freeing capacity on the NetApp box in this case, or the Isilon NAS, we are also reducing the processing cycles on it. The work that machine has to go through to find additional space has been reduced because it now has all of this room, so you may, in fact, get better response from that NAS appliance, and you may defer a future expense you originally thought you were forced into because you needed the performance for your active new workloads.

We define what additional destinations are available through a similarly intuitive GUI. Here I can say, okay, I have a number of choices of where I can put this. Let’s say I’m an Azure shop; then I will designate Azure Blob as the object storage for my environment. If you’re an AWS fan, you would designate that instead. And there can be multiples, so you don’t have to be tied to any one particular public or on-prem object storage supplier. You can choose among them, and they can change over time.

This is an important and differentiating aspect of the software-defined storage solutions from DataCore: they don’t tie you to any particular supplier, manufacturer, or public cloud. You can play the field as necessary to meet your performance goals, your resiliency goals, and your overall cost objectives.

Now, the next part is to give the system some direction: the conditions under which we treat something as inactive. For one organization, inactive might mean: I’m going to look at all my video clips, any MP4 files, for example, that have not been accessed in the past six months. There’s really no reason to keep those on my NAS appliance; let’s push them off to the cheaper storage. And that’s how we say it here, just like I said it: if the file is of type MP4 and it has been inactive for over six months, put it in the Azure Blob.

It’s that simple, and the software will do this automatically, not just now when I ask it to; that policy remains in place. So as new video clips pile on and reach that six-month criterion without being active, they will automatically be moved to the Azure Blob or whatever designated destination we’ve identified. It’s that straightforward. We don’t have to keep an eye on this all the time. We can define, very explicitly and definitively based on file characteristics and other metadata keywords, how to institute this policy, this business intent.
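
To show how small such a rule really is, here is a sketch of that exact spoken policy in code form; the field names and tier labels are hypothetical, not vFilO’s configuration syntax.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    name: str
    days_inactive: int

def placement(meta: FileMeta) -> str:
    # Mirrors the spoken rule: type MP4, inactive over six months -> archive.
    if meta.name.lower().endswith(".mp4") and meta.days_inactive > 182:
        return "azure-blob"   # the designated low-cost destination
    return "primary-nas"      # otherwise stay on premium storage

print(placement(FileMeta("training.mp4", 200)))  # -> azure-blob
```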

Sharing files, in terms of collaboration, is another problem. I know we all have hardships like this: you send somebody a link to an explicit path name, and then a few weeks later, without realizing it, you create another version and move your activity to a different location. Meanwhile, that other person still thinks they’re working on the most current one, or they went back to the link you sent them last, which means they’re working off an old version.

The other part is that you might have sent them, say, an Excel file with absolute links in it. Guess what? Those links are now broken because the file you’re now working on is in a completely different place. With vFilO, you can break out of that dependency on explicit pathnames. You can just use keywords and tags and point somebody: look, anything on this date, this is the most current, and these are the keywords and tags used to get to that file, instead of being tied to the original links.

In this regard, vFilO also has geographic coverage. In our current offering, we can do a global namespace across clusters up to 40 nodes wide within the same campus, or whatever is accessible within the same cluster. In the first half of 2020, in the first quarter of next year, we will expand this to geo-spanning. So we can have multiple sites that share a view of the global namespace, with access to those files not only over NFS for Linux and UNIX hosts and SMB/CIFS for the Windows environment, but also from S3, the Amazon S3 object API.

What you can see from this picture is that we are making the global namespace visible across sites, so you’re not duplicating the namespace and creating different instances of it. They’re all looking at the same catalog, and they know exactly where everything is. At the point that you actually need something distant from what’s collocated with you, the software will fetch the copy it needs to get over to you.

Now, on the bottom of this picture, you’ll also see the variety of storage types you can put underneath. You could use high-performance block SANs and direct-attached storage for files deemed to need them. Again, those are the policies we were stating: if something is accessed frequently, or we decide a file is critical based on those characteristics, the software will naturally gravitate those files to the left, to the hot, fast block storage. Some of the middle-of-the-road material will remain on the existing NAS file servers. And as things age or fall into that other criterion, they get pushed off to object and cloud storage. That’s the way the system works.

Achieving Scalability

In terms of scalability, these data services are basically data movers and portals providing the multiprotocol support. In certain situations, more data services can be spun up, so you can scale them temporarily, for example if a lot of migration is occurring initially, and then decommission those data services afterward. You’ll find that in our pricing, the number of nodes does not matter; we price strictly on capacity managed.

And as you see on the left, we segregate the metadata services from the data proper, and that affords us a number of capabilities unique to the vFilO product.

Another problem you’re likely running into is how to manage these diverse workloads and the allocation of capacity. More than likely, somebody in the business is having to manually migrate users and folders to different filers when you run out of space on one of them. And it’s really hard for anybody to predict where best to place them and how those loads will be affected, because you can get an idea of capacity consumption, but as an IT administrator you really don’t know how heavy the I/O to it will be.

These periodic data migrations are also super painful and disruptive, because usually you have to take shares down for several hours as you move the data over, and you’ve got to call planned maintenance. And as we’ve already discussed, anytime you do that, you’re breaking the folder organization for the shared pathnames you had set up before.

In contrast, vFilO provides a way to auto-load-balance, and it takes all of those resources into account. It knows what kind of load it’s getting and the characteristics of the storage underneath it. With that, it knows how much capacity each one is taking up and which ones have free space, and it can move files to the right ones without ever affecting the namespace. The navigation and hierarchy you are accustomed to do not change even though a file has moved to a different location. This is a fundamental break from how people approach this, a break in a positive sense: a breakthrough in data management.

With this, you see a little better perspective on what’s happening on the backend. The system is basically saying: I see I have this collection of storage that I’ll treat as a pool, and I will auto-load-balance across that pool without concern for the silo boundaries that once constrained what I could move manually. It’s saying: I’ve got these two buckets over here at only 40 percent capacity used, and the other two are at 70 percent, so let’s start shifting some of the new file requests over there, but not upset the hierarchy.
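
The placement decision just described can be reduced to a tiny capacity-aware chooser; this is an illustrative sketch under my own assumptions (vFilO’s real balancer also weighs performance and policy, which is not modeled here).

```python
def pick_backend(backends):
    """Choose the share with the lowest capacity utilization."""
    return min(backends, key=lambda b: b["used_tb"] / b["total_tb"])

pool = [
    {"name": "nas1", "used_tb": 70, "total_tb": 100},  # 70 percent full
    {"name": "nas2", "used_tb": 40, "total_tb": 100},  # 40 percent: chosen
]
print(pick_backend(pool)["name"])  # new files flow to nas2
```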

From a performance and response-time standpoint, vFilO also uses a parallel file system underneath. It takes what you would consider serial requests coming from applications, hosts, or users, determines where best to put the data, and does several of these operations concurrently. It’s part of the technology DataCore has always been known for: parallelism.

Archiving

So we move into another section here: archiving. Archiving tends to be a problem for both sides of the house, users and IT operations alike. The normal practice, as I understand most people go through it, is that at some point we realize we’re running out of room, so there’s going to be a manual purge. That’s when the email goes out: “Go clean up your files. We’re going to do a backup. Anything older than this, please, we need to move off.” A plea goes out from IT operations to the community to do that.

The thing is, it’s hard for us users to decide when and what should be archived. We deem all of these things really important, so we may not be as willing to put things out to pasture as IT would like us to be. That creates a problem because we’re not going to move as much as we need to. The second part is that when something does get backed up and moved to archive, those files are deleted from our view. We no longer see them; the archive is a different namespace altogether.

And if I need that back for some reason, because they archived something I need from last year, say the video clips I was using for training, my god, how can I get those back? Now I’ve got to send out a search party, in effect, to figure out where they were put, whether I can get them, or maybe something close to them, and put that back in the active namespace. That could take days. And if there’s urgency and everybody is pretty busy, it may not come in time.

With vFilO, archiving is an active element occurring all the time. It’s auto-archiving based on these policies, but the file names remain visible to you. So when you look at your folders, in your directory, you still see entries that say: hey, that video clip for training is still there. As far as you can see, you can get to it if you need to.

Now, if a file has been archived because it reached the aging threshold the policy called for, what you will experience is that it takes a little longer when you actually access that clip, because it’s being rehydrated from the object storage where it was archived and put back into active storage space. That’s the difference. But it’s self-service: you don’t have to ask anybody to do it, and you never realized it was archived. All you experienced, transparently, was that the file was a little further away than it would normally have been had it not been archived.
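
A toy sketch of that transparent recall, with hypothetical names and in-memory stand-ins for the object store and active tier: the namespace entry stays visible, and the first read of an archived file pulls the bytes back before returning them.

```python
archive = {"/corp/training/clip.mp4": b"...video bytes..."}  # object store
cache = {}                                                   # active storage

def read_file(entry):
    """Return file data, rehydrating from archive on first access."""
    if entry["state"] == "archived":
        # Slower first read: recall the bytes from object storage.
        cache[entry["path"]] = archive[entry["path"]]
        entry["state"] = "active"
    return cache[entry["path"]]

entry = {"path": "/corp/training/clip.mp4", "state": "archived"}
print(len(read_file(entry)), "bytes read; file is now", entry["state"])
```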

The theme you continue to see through here is that visibility into the entire address space makes all of this possible. And the control these background policies give you is fundamental to the way we operate.

So there’s the notion here of autonomic data placement. “Autonomic” is a fancy word for automatic, but it also implies some understanding and intelligence beyond just acting on your behalf; it’s actually using machine-learning techniques to identify when to take action. The same screenshot I showed you earlier can be used not only to specify aging, which is the parameter we used earlier; we can also be very explicit about resiliency.

For these particular files, which are critical to the business, I want five-nines durability. That’s the only thing I need to tell it. The system will then interrogate the resources available to it and their durability characteristics and determine what it has to do to meet your objective. At that point, it’ll either say: got it, you’ve got enough redundancy in the system that I can achieve what you asked for; or it can say: sorry, I cannot align to those objectives, there are not enough places for me to put the replicas to meet the durability, and perhaps availability, objectives you have set up for me. So it flags that and says you need to take corrective action and provision additional storage for it to pull that off.
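The essence of that objective check can be sketched very simply, assuming durability is achieved by spreading replicas across independent backends; the real calculation behind five-nines is richer than a count, so treat this only as the shape of the feasibility test.

```python
def can_meet_objective(required_replicas, independent_backends):
    """True if there are enough separate places to hold the replicas."""
    return len(independent_backends) >= required_replicas

backends = ["nas1", "nas2", "cloud"]
if can_meet_objective(3, backends):
    print("got it: enough redundancy to meet the objective")
else:
    print("cannot align: provision additional storage")
```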

The other part of this is performance. The software is also metering, through telemetry, how systems are responding. When you first introduce a new storage device, NAS, or file server underneath, the first thing vFilO does is go out and test it. It drives some I/Os to it and gets an idea of how responsive or unresponsive that system will be. It’s a calibration of the performance and the latency we can expect from that system.

It also tracks that over time. Every time it writes, it maintains a historical chart of everything it has written, and from that it understands the device’s relative performance compared to other elements in the resource pool. So now when somebody says, “These particular files for the end-of-quarter finance calculations need to be on the fastest storage,” it knows where best to put them.
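
A minimal sketch of that telemetry idea, with assumed names and a simple rolling window: record observed write latencies per device and rank devices by their recent average, which is what lets the system override a stale assumption about which box is fastest.

```python
from collections import defaultdict, deque

WINDOW = 100  # assumed sample window per device
history = defaultdict(lambda: deque(maxlen=WINDOW))

def record(device, latency_ms):
    history[device].append(latency_ms)   # telemetry from each write

def fastest(devices):
    """Rank devices by their recently observed average latency."""
    return min(devices, key=lambda d: sum(history[d]) / len(history[d]))

record("new-nas", 12.0)   # the newest box, currently overloaded
record("old-nas", 3.5)    # the old box, actually responding faster
print(fastest(["new-nas", "old-nas"]))  # -> old-nas
```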

Now, even though you might have told it explicitly that the last machine you bought was the fastest, vFilO might determine it isn’t, either because it’s overloaded or because for whatever reason it isn’t quite delivering. So it may make a more intelligent decision than you might have otherwise made.

Cost can be a determining factor in placement. Say these files are essentially residuals from another project: I want them in the cheapest location, go put them wherever that is. And based on cost estimates you give it, plus what it explores and learns as a result of its own activity, it will put them in the right place.

For those of you in Europe, for example, location may be a particularly important one. With GDPR requirements, we have regulations that force us to keep data within certain boundaries, within national borders or within the European sector. We can do that explicitly here as well: we can say these files shall never leave this area, bounding the potential places they can be put.

That’s what we mean by specific or fuzzy, by the way. In some cases we’re telling it exactly: go put this directly into Azure Blob or some other place. Or we can be more loose, or fuzzy, about it and just give a hint: generally, I need it to be in this area, and I don’t want it outside. All of those components, the policies and business intent, along with the telemetry and machine learning, work on your behalf, so nobody has to explicitly watch over this and keep tabs on it, which is an impossible job.

Backups

Now, let’s talk about backups, for safety measures and extra safeguards. Usually, those are done a little separately for each of these filers. The software product you may be using may address one, and you may need a different backup product for another. Sometimes the NAS suppliers have a preferred tool they use and define special destinations, but there’s usually enough diversity that you end up with different procedures and different tools depending on the manufacturer of the filer.

With vFilO, we’re able to consolidate those tools. One way to look at it is that there’s a universal NAS out there with a global namespace, so I can use a single tool to do that backup. In fact, vFilO natively can do much of that work for you. It can perform replicas: you can say that for these particular files, I need two or three replicas, and I need one of those to be in the cloud, for example, and it will do that on a schedule you designate.

Now, that replication may be one of the things you do right from the outset. The first time you introduce vFilO into a new environment and we’ve done the initial metadata assimilation, at that point we can already take advantage of this backup copy to the cloud. We can say: okay, now I see the entire global namespace. I don’t need to move any data underneath vFilO; I just have the catalog vFilO has created for me. Now I can point the backup product at it, or use the native vFilO capability to send replicas of certain files I’ve identified to the cloud.

I will point out one other thing: for those of you just testing the waters with these capabilities, we can tell vFilO not to ingest or assimilate the entire namespace of everything out there. I may want to pick only certain components. Let’s use PDF files as an example: let’s do a test run with vFilO looking only at the PDF files in the NAS controller, perform these actions on those, and exclude everything else.

Once the comfort factor is there, okay, I see how it behaves and that it’s working for me, then I can expand the field of view to the rest of the files you deem appropriate. You may even exclude some altogether for whatever reason; you might say these are cordoned off. You certainly have the prerogative to do so.

One other area related to backups is recovering files that are deleted unintentionally. We’ve all done this, right? You hit the delete button thinking you were done with it, but it was the wrong one, or you shouldn’t have done that because you’re not quite where you thought you were. So immediately the request goes out to IT: “Can you restore it? Can you possibly, please? I need this right away.” This is very stressful, and it often yields unsuccessful attempts.

So basically, somebody says, “Well, look, I can’t get what you last did, but it looks like you have a copy that’s about three days old. I can get that to you; that’s the best I can do.” vFilO brings an important undelete function, and that undelete function is self-service. It works very much like a recycle bin: when you delete something, a policy we’ve already set for that folder says we’re going to keep that information in the recycle bin for a week, just in case you didn’t mean to delete it.

And so if you had that spurious senior moment where you pressed the button and didn’t mean to, you can go into a special snapshots folder, and there you’ll see the files that have been deleted, each with a timestamp telling you when it was deleted. Pick one up, restore it, and you’ve got your file back, without ever invoking outside help, and you know exactly what you got. So it’s spontaneous, self-service recovery of unintentionally deleted files: a very important way for us as users to remedy situations that happen more often than we would like.
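
The recycle-bin mechanics reduce to something like the sketch below, assuming the one-week retention mentioned above; the structures and names are hypothetical stand-ins, not vFilO internals.

```python
import time

RETENTION = 7 * 24 * 3600   # assumed one-week retention policy
files = {"/corp/q3.xlsx": b"spreadsheet data"}
recycle_bin = {}

def delete(path):
    recycle_bin[path] = (files.pop(path), time.time())  # kept, with timestamp

def undelete(path):
    data, deleted_at = recycle_bin.pop(path)
    if time.time() - deleted_at <= RETENTION:  # still within the week
        files[path] = data

delete("/corp/q3.xlsx")
undelete("/corp/q3.xlsx")
print("/corp/q3.xlsx" in files)  # True: restored without calling IT
```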

It’s important to recognize that what vFilO does, it does at the file level. What I mean is that there are narrow point products providing things like snapshots and replication as specific data services, and they tend to operate at the volume level. That’s one of the problems: you have to replicate the entire volume, and if you want to recover something, the entire volume is sent back over. It consumes the bandwidth, the effort, and the capacity, only to recover a small part of it, the particular file you needed out of that big volume. That’s very wasteful.

From an efficiency standpoint, vFilO lets you dictate these policies at the individual file level, that granular, and these data services travel with the file. Regardless of where the file has been located, the data services, the undelete recovery, the replication, the snapshots, all remain associated with it. It’s not a case of: well, that was something I was doing on Isilon NAS No. 2, and because I moved the file somewhere else, I no longer have that function. That’s not true here.

In fact, the opposite is the case: regardless of where a file has been transported as a result of the data mobility the product provides natively, these data services and additional functionality are attached and travel with the file, wherever it happens to reside.

Metadata

Now, I’ll touch on one more area, which is a little more nuanced. It may not be the reason you buy and invest in vFilO, but it may be something you get hooked on: the metadata enhancements and some of the techniques you can use with the metadata engine in vFilO.

One of the things we’re doing is creating the catalog and segregating it from the files proper. That’s what gives us the ability to make decisions on a file without worrying about its location. But it also allows you to enhance and enrich the metadata associated with a file. For example, you can run a collection of photos through a product like the AWS Rekognition service, a software package that detects things inside images and generates tags from them. In the case I’m showing you, I may be looking through this entire collection, and for any photos that have giraffes and zebras in them, I tag those files with those keywords. So the tags become associated with the files.

Those tags become part of the namespace, part of the global metadata I can search as a user to isolate a collection of files that meet the criteria. Anything with giraffes and zebras, I want to place in the cloud; I don’t want those locally here. And by the way, I’m going to pull some video shots associated with those and do a collage with them, and that’s how I’m going to use that location.
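
To make the search side tangible, here is a sketch of keyword search over enriched metadata using a simple inverted index from tag to file paths; the tags would come from an external recognizer as described above, and all names here are hypothetical.

```python
from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of file paths

def tag(path, *tags):
    for t in tags:
        tag_index[t].add(path)

def search(*tags):
    """Return files carrying every requested tag."""
    sets = [tag_index[t] for t in tags]
    return set.intersection(*sets) if sets else set()

tag("/photos/safari1.jpg", "giraffe", "zebra")   # tags from the recognizer
tag("/photos/zoo2.jpg", "zebra")
print(search("giraffe", "zebra"))  # only safari1.jpg matches both
```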

So these are pretty cool things that, as an industry, I think we have yet to learn how to take full advantage of. It’s a good opportunity to exploit those functions, and it comes naturally with vFilO.

Value

Now, to translate some of this into financial and business value, we have to look at it from a slightly different vantage point. The first thing I want you to walk away with is that vFilO brings you unprecedented visibility and control over your unstructured data, from the perspective of both users and IT operations. We believe that will measurably improve your productivity and the ease with which you can collaborate across sites, because now you have a complete view of this namespace, not the fractured view you had before.

The fact that I can define, control, and take ownership of these policies and high-level business objectives, and institute them in the product simply, gives me that control element. And clearly, from this additional metadata, I will gain new business insights I hadn’t considered before. So that’s more forward-looking.

From a flexibility standpoint, you have complete choice of what you put behind vFilO. If you have one storage manufacturer today, you can change that out. You can transparently swap equipment, decommission equipment as it comes off lease or simply ages past its useful financial life, and insert something in its place. Just tell vFilO: here’s something coming out of service, and here’s something new. In the background, it will shift the files accordingly to take advantage of the new space. If you simply remove something from the system, it figures out where it has room to put the data and moves it there.

This gives you the unique ability to expand and modernize non-disruptively, both on the storage side and on the server technologies the vFilO software runs on. Put it alongside existing equipment and determine, on your own schedule, the best time to pull something out or inject new equipment. It also gives you the flexibility to let the system determine where best to put the information, and it uses the same methodology whether you’re dealing with objects or files. The multiprotocol support gives you flexibility too: your users don’t have to say, “I’ve got to get on a Windows box because this file was only shared over SMB.” No, you can get to it from NFS, even though it was originally created on a Windows Server, a Windows host, or a Mac.

And that’s part of the true implication of a cleanly delineated software-defined storage solution: the hardware is segregated out of the picture, and it matters not. That’s something you will enjoy for a long time.

From an efficiency and simplicity standpoint, I think we’ve talked quite a bit about this, but at the end of the day, a lot of the time we spend shuffling files around, backing them up, and trying to avoid or manage these migrations goes away. The extra effort required to feed and care for file servers and NAS is taken over autonomically by the system.

We’re also unifying the management of those different types of devices and the profiles for them, and optimizing for the best results, making sure the active material is put on premium storage and the inactive material goes to cheaper storage. Yet all of it, as you saw from the GUI screenshots, is super easy to build into rules. And if you’re more in expert mode, we also have a CLI. Your call, or bounce between the two until you get comfortable.

Now that you know a lot about this, you might be curious how it’s priced. There are just two elements to it: how much active data and how much inactive data. Usually, maybe 30 percent of the data is active and 70 percent is inactive; that’s a good starting point. We can actually deduce this when we first do the initial metadata assimilation, because we can look at the catalog of metadata and see that many of these files fall into the bucket of not having been touched in more than six months. Even before vFilO has done anything else with the data, the data profiler can assess how many files are candidates. And you can change the dial on the data profiler: let’s see how much data would be considered inactive if we looked at nine months, or how much would fall into that bucket if we shortened it to three months unused.
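
That profiler “dial” is simple arithmetic over the catalog; here is a sketch with made-up idle ages, just to show how the inactive fraction shifts as the threshold moves.

```python
def inactive_fraction(idle_days, threshold_days):
    """Fraction of files idle longer than the chosen threshold."""
    return sum(1 for d in idle_days if d > threshold_days) / len(idle_days)

ages = [10, 45, 200, 400, 720, 30, 95, 365]   # hypothetical idle ages (days)
for months in (3, 6, 9):
    share = inactive_fraction(ages, months * 30)
    print(f"{months}-month threshold: {share:.0%} would be inactive")
```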

Pricing

So the pricing is based on terabytes in each of those categories, and it’s based on actual capacity consumed. It doesn’t matter how much storage you’ve attached to vFilO; what counts is how much physical storage is actually consumed. We have one- and three-year subscriptions, and they all include support and software update services. The minimum order is 10 terabytes. We also provide volume discounts; I’ll show you those in a second.

As I noted earlier, you can scale these data service nodes, these DSX nodes, at no extra charge, so you can stand a bunch of them up during the migration and early data assimilation period, and once that work is done, take them out. There’s no need to keep them around with nothing to do.

This is the way the progressive discounts work, very much the way we structured our other block storage virtualization software. As you subscribe to more capacity, your price per terabyte decreases. The rough relationship: the price of active data starts in the $200 range, at $207, and drops to about half of that, $105, for the equivalent in the 100-to-199-terabyte tier. The one- or three-year plans are what you see here, built into the system. The license and pricing bring the comprehensive feature set: the things germane to active data come with it. We’re not nickel-and-diming anybody over which particular features to turn on. Licenses that deal with object storage add a few more things you can do there.
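
To illustrate the tiered arithmetic, here is a sketch using only the two figures quoted above ($207/TB for active data, about $105/TB in the 100-to-199 TB tier); actual price books, tier boundaries, and term discounts are DataCore’s and are not reproduced here.

```python
def active_price_per_tb(tb_active: float) -> float:
    # Only the two quoted tiers are modeled; larger tiers were not quoted.
    return 207.0 if tb_active < 100 else 105.0

for tb in (10, 150):
    print(f"{tb} TB active: ${tb * active_price_per_tb(tb):,.2f} subscription")
```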

Download a Free Trial

And to get a better understanding of that feature set, I encourage you to try it. Take a 30-day trial and contact the DataCore authorized partners in your area; they can point out the best place to apply it, the areas with the most diversity, the most confusion, and the most practical scenario in which to institute this distributed file and object storage virtualization. That’s where it will have the kind of exceptional business value we pointed out here.

And you can certainly scan our resources at datacore.com.

So with that, I just want to close out and point out that vFilO gives you this unprecedented visibility and control. If you have questions about that, or about simplicity, efficiency, or the freedom of choice that flexibility grants you, share them with us during the webcast now, or give us a call later. We’re here to help you on your way through the morass you might be working through.

And I would like to ask you one other thing: if you could please rate this webcast. There’s a “rate this” panel right underneath; just take a second to tell us what you thought of it. And if it was informative, all the better. Once this completes and is recorded, please share it with your colleagues. Spread the word. It’s a brand-new capability, and we think other people will be very interested in it.

And with that, I don’t see any additional questions from the audience, so we’ll close it out here. Thank you for being with us. We look forward to another time together. Bye-bye.
