Webcast Transcript
Marisa Cabral: Welcome, everyone. We will be beginning our presentation in 30 seconds.
Good morning and good afternoon, everyone. I would like to welcome you to our webinar on vFilO distributed file and object storage virtualization. It will be presented by Steven Hunt, our senior director of product management at DataCore.
A few housekeeping points: we will have a question-and-answer box within BrightTALK, which you’re more than welcome to put your questions in, and we will answer them at the end of the presentation. Also, at the conclusion of the presentation, we will be sending out an on-demand version of the webinar, so you will be able to go back and look at that.
And with that, I will pass it over to Steve.
Steve Hunt: Thanks, Marisa. Thanks, everyone, for joining. Appreciate you taking the time to hear more about vFilO, our new file and object storage solution. So just some things that we’ll kind of go through today. We’re going to talk about this product. It was recently launched back in November, but we’ve been working on this for quite a long time, and I think it’s going to bring a lot of value that you’ll see here.
So we’ll go through a few top use cases. We’ll talk about key benefits of the product itself. We’ll highlight some business value because I know one of the things that a lot of us have to do in the IT world is justify the expense of a new solution, so we definitely want to highlight some business value aspects, beyond just, you know, problems that we solve and key benefits. And you know, once we’ve set up all the value, we’ll highlight some of the pricing and packaging and licensing on how you can consume vFilO. And then we’ll kind of wrap it up with, you know, just explaining some of the differentiation of the product itself.
And then as Marisa said, there’ll be some questions that we’ll – we’ll do some Q&A at the end. And I think we’ll also have some poll questions for you guys that we’ll just look to get some feedback from.
So with that, we’ll get started.
DataCore ONE
So really I want to highlight, you know, DataCore as a company itself and kind of where we’re going and why we’ve released vFilO. So DataCore has long been known as a block storage virtualization solution. You know, we’ve been around for many years, and we really pioneered, you know, the concept of storage virtualization that ultimately has become what everyone knows and loves today. And I say “love” in – tongue in cheek as software-defined storage.
You know, with the capabilities of our existing product, SANsymphony, providing highly available block-based storage, we recognized that we really wanted to drive towards, if you will, a more comprehensive portfolio of software-defined storage capabilities. And so we set out to say, okay, what is our strategy, what is our vision to be able to do that. And we came up with what we call DataCore ONE™. And vFilO fits into this ultimately because if you see the diagram on the screen, at DataCore, we really want to create this unification of storage across different facets.
So from the applications or workloads that ultimately need access to storage, whether it be bare metal or physical servers, whether it be virtualized hosts, or I should say virtualization hosts or virtualized instances, VMs, whether it be container host or containerized applications, all of these things need access to storage. And historically, DataCore has provided that through a block-based storage solution called SANsymphony. With vFilO, we’re really adding to the mix a comprehensive file-and-object, you know, capability to our storage portfolio.
Underneath that, it’s not just enough to provide storage itself in its basic form. We also need to make sure that there’s a comprehensive set of data services that are available, whether it be high availability, whether it be dynamic provisioning, whether it be data protection in the form of snapshotting, or you know, what we call continuous data protection in our SANsymphony product, you know, migration capabilities, replication capabilities. These are all really, really important to provide a robust storage solution, and ensure that not only you’re providing access to data for users and workloads, but ultimately that you can meet business objectives such as SLAs and capabilities that keep the business going. It’s really, really important to have a rich set of data services to do that.
These data services should be enabled across, you know, primary storage capabilities, all the way to secondary storage. And I want to clarify, you know, what – when we say primary storage and secondary storage, for primary storage, we’re really talking about really high performance, really – ultimately, really costly storage capabilities. And secondary storage, as you go more towards that right spectrum, you’re really talking about, you know, lower performance needs, and ultimately, really lower cost for storing data, making sure that it’s always available, but doesn’t always have to be high performance.
So the workloads that are within IT organizations, when I’m – which I’m sure most of you can attest to that are on this webinar, it spans the gamut, right? There are high-performance needs, and there are highly available, low-performance needs across the organization. And with DataCore One, we want to make sure that we can provide the expanse of those capabilities across those different – those primary and secondary storage needs.
And really, all of this is driven by an infrastructure component underneath, whether it be local drive storage in x86 servers, or storage area networks. Those of you that are familiar with DataCore’s SANsymphony know our ability to aggregate external, existing storage through Fibre Channel and iSCSI connections. For those of you that aren’t familiar with SANsymphony, that’s really the crux of what our software-defined storage solutions provide: the ability to aggregate heterogeneous storage, store on your own local storage, and manage it all under one management plane. And with vFilO, we’re really extending that into the concept of aggregating existing file storage solutions – NFS, SMB, filers, NAS storage – and also cloud and object-based storage underneath.
Those are all infrastructure components that you can ultimately aggregate as a heterogeneous mixture across the management and data services plane. And that’s DataCore One. And to control all of that, we want to have a common control plane, whether it be REST API access, whether it be management UI access, whether it be, you know, data collection and analysis through our cloud-based analytics solution, DataCore Insight Services. We want to make sure that there is a comprehensive control plane to manage all of that.
So that, you know, long story short, if you will, is the DataCore One vision: the ability to unify all of this across one portfolio, one vendor, and allow the overarching management and unification of storage. That is really what we see that we are well-positioned to do. And vFilO is the next step in that.
So, you know, with that, most of you are here to hear about vFilO, not about DataCore’s strategy and vision, and so let’s dive right into vFilO.
Introducing vFilO
For those, again, that are familiar with SANsymphony and the aggregation of block-based storage, you – this will be very familiar to you. But vFilO is really focused on delivering that aggregation, right. Whether you’re using local-based storage on x86 servers and deploying what we call, you know, DSX stores, which is really just our ability to deploy a file storage node and handle – store all of the files on our infrastructure. Or if you want to connect to existing file servers, or NAS solutions, and allow vFilO to manage the aggregation of that data, the movement of that data, and the access of that data, as well as if you want to provide a lower-cost storage for maybe archival or backup purposes, you can connect object-based storage and aggregate that underneath vFilO.
This also enables our hybrid cloud storage play through our S3 connector on the backend and allowing you to connect to, you know, a cloud-based storage solution through a public cloud vendor, like Azure, like AWS, and be able to facilitate the movement of that data from local on-prem high-performance file-based storage, all the way to low-cost, highly available cloud-based object storage underneath. vFilO is providing the capabilities of that and the management of that.
And just like what I mentioned in the DataCore One vision, facilitating the data services that are going to enable these capabilities – and I’ll speak a little bit more about those here in a moment. But you know, looking at this diagram, you can see that aggregation at the bottom of, you know, local storage, existing file-based storage, existing object-based storage, and then presenting that up through vFilO either through file-based storage client access, like NFS and SMB protocols, or object-based interfaces for, you know, S3-compliant object interface.
Now, I will highlight here this recent release of vFilO has the NFS and SMB client interfaces in it. In an upcoming release in the first half of calendar year 2020 is when we’re targeting to have the frontend S3 object interface available. So today, that frontend client interface is file based. The object-based frontend interface will come in calendar year 2020. But that underlying ability to connect to object-based storage on the backend, and either write data to low-cost object storage on-prem or write data to a public cloud object-based storage is available in vFilO today.
And just a reminder, if you have any questions as we’re going through this, feel free to submit them. And if I – you know, if they’re pertinent to the topic at hand, I’ll try to get them answered while we’re going through the webinar. But I’ll probably hold most questions toward the end. So feel free to go ahead and add those questions, and I’ll make sure that we get those answered.
So, you know, we’ve talked about vFilO at a high level, and you know I kind of highlighted some of the things that make up vFilO. But more importantly, what problems is vFilO really setting out to solve? I’m sure many of you can attest to this. I know personally I can; working in the IT industry for over a decade, I have been an IT administrator in large organizations with file server shares that have run the gamut. They are completely disaggregated. Different people are managing different instances of it. Files exist everywhere. Files have been duplicated everywhere. And ultimately, it’s just very, very difficult to manage.
Even in smaller organizations where you don’t have as many siloed implementations of file storage solutions, you still probably have potentially a lot of shares that have been created through different people within the company just needing different things, and the IT organization just providing that as quickly as possible to be reactive. It ultimately creates a disaggregation of all of these – ultimately, all of these, you know, shares and the files, and where they live.
What’s really, really difficult is, I bet you could challenge someone and pay them, you know, $1,000.00 if they can find two very specific files that they’re looking for without it taking a lot of time. If they were able to do that, that means you’ve got a really good indexing system and file search capabilities that you’ve implemented. And it probably cost you a lot of money, so $1,000.00 isn’t going to really be that big of a drop in the bucket.
But the reality is this is really difficult for a lot of different organizations because of how fast unstructured data is growing. The ability to get organization over this data has become difficult over time because the growth has happened so quickly. And we’ve provided, you know, tactical solutions, spinning up new file shares, spinning up new filers, and ultimately creating this disaggregation of management.
So we wanted to solve this problem. We wanted to provide a solution that can aggregate all of it, that can ultimately deliver a unification of management in a couple of different facets, not just from providing a single share, but also providing unification of data services, and providing a unification of data management. And I’ll get into more of the latter two in a moment. But really the – it starts with creating an aggregate global namespace, being able to unify all of these shares under one location for users to access.
Why is this important? One main reason. It’s very difficult to manage client connectivity and movement, whether it’s an application needing access to a file share, or whether it is a user needing access to a file share, ultimately, when you have to provide a migration or you have to transition from one file solution to the other, there’s a lot of work involved in providing the shares and making them available. Typically, you have to go in, and you change the client side. And depending upon how big your organization is, how many applications, how many users are affected by this change, that’s a large effort.
But if you have a single global namespace, you are able to then make migrations on the backend, make changes to your underlying infrastructure without any impact to the end-user, and ultimately, without the overall – of the – I guess, if you will, over-expense of project management and effort needed to make that change and that migration happen. And I think most of you can agree that, you know, we go through refresh cycles of hardware and solutions on a three to five-year basis on average, and so that means when we’re coming up to that, we’re doing a lot of planning. And ultimately, a lot of that planning is how do we not disrupt the end-users.
With the global namespace, you can reduce the overall disruption, and it makes the transition of your organization for, you know, migrating from one file server infrastructure to the next, or ultimately adding more as your company grows or as your data grows, ultimately, allowing that still one single frontend to be accessible for the clients and the workloads. That’s where we see vFilO is providing a ton of value here by solving this problem and ensuring that you have a single global namespace, that it’s going to allow constant user access, while you manage the underlying infrastructure underneath. And provide that multiple level of data services and data management.
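The global namespace idea described above can be sketched as a thin mapping layer: clients always use the same logical path, while a mapping table decides which backend share actually holds the data, so a backend migration never touches the client side. This is a simplified illustrative model, not DataCore's implementation; all names here are hypothetical.

```python
# Illustrative sketch of a global namespace: clients see one logical tree,
# while a mount table maps logical prefixes to backend shares.
class GlobalNamespace:
    def __init__(self):
        # logical prefix -> (backend name, export path)
        self.mounts = {}

    def add_backend(self, logical_prefix, backend, export):
        self.mounts[logical_prefix] = (backend, export)

    def resolve(self, logical_path):
        """Return (backend, physical_path) for a client-visible path."""
        # longest-prefix match so more specific mounts win
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if logical_path.startswith(prefix):
                backend, export = self.mounts[prefix]
                return backend, export + logical_path[len(prefix):]
        raise FileNotFoundError(logical_path)

    def migrate(self, logical_prefix, new_backend, new_export):
        # Backend changes; the logical path clients use stays the same.
        self.mounts[logical_prefix] = (new_backend, new_export)

ns = GlobalNamespace()
ns.add_backend("/corp/finance", "filer-a", "/exports/fin")
print(ns.resolve("/corp/finance/q3.xlsx"))  # ('filer-a', '/exports/fin/q3.xlsx')
ns.migrate("/corp/finance", "filer-b", "/vol2/fin")
print(ns.resolve("/corp/finance/q3.xlsx"))  # ('filer-b', '/vol2/fin/q3.xlsx')
```

The point of the sketch is the last two lines: after `migrate`, the client-side path is unchanged, which is exactly why a refresh cycle behind a global namespace needs no client reconfiguration.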
vFilO Use Cases
So let’s talk about some of the use cases that we’re really addressing with vFilO. We talked about the problem of the disaggregation. You know, that use case of having a consolidated namespace and ensuring that files are all easily accessible, easily found, easily shared, easily backed up, right. Making sure that that data is available, constantly available the same way all the time, that’s the first use case that we’re really solving.
The second use case, I don’t know how many of you have experienced this, but I know I’ve talked to many customers where, you know, the CIO has come and said, “We’re going to have a hybrid cloud initiative.” And many people have asked, “Well, what is that hybrid – why do we have to have a hybrid cloud initiative? What does it mean by hybrid cloud?” And they’re like, “I’m not sure, but I just know that we need to have a hybrid cloud initiative.” That’s fantastic. And there’s a lot of people that are a little bit more mature down the path of, okay, what is that hybrid cloud initiative, and really what are we trying to do.
And a lot of people have identified that moving data to a lower-cost cloud-based storage solution is oftentimes, if you will, the low-hanging fruit. So vFilO enables the movement and the management of that data from your, you know, higher-cost, high-performant, on-prem, file-based storage, to a cloud-based, lower-cost, ultimately lower-performant but highly available object-based storage, ensuring that the movement of that can happen, and it’s facilitated. vFilO is going to enable your organization to take on that hybrid cloud initiative that’s been put forth, and really drive it to success.
It’s – one of the key points, and I’ll highlight this later, is making sure that you reduce cost when you’re doing this because ultimately there’s no point in moving something from on-prem to a cloud-based resource, unless you’re gaining some type of benefit, whether it be efficiency or cost. We want to make sure that cost is a part of that, so before we move any of that data to the cloud-based object stores, we are going through a compression and dedupe mechanism to ensure that you’re only writing the minimal amount of data that’s necessary, and ensuring that you’re getting those critical files up there, without expanding the overall cost. There’s no point in trying to move data to the cloud for cost reduction if you’re moving 15 instances of the same files over and over again. You want to make sure that you’re only storing it for, you know, ongoing purposes, you know, like archive. You only need to store one instance of that file, as opposed to 15 instances, so you can recall that later.
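The "15 instances of the same file" point can be made concrete with content-based deduplication: identical files hash to the same digest, so only one compressed copy ever needs to leave the premises. This is an illustrative sketch of the general technique, not vFilO's actual dedupe pipeline; class and method names are hypothetical.

```python
# Sketch of content-addressed dedup plus compression before a cloud upload.
import hashlib
import zlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # digest -> compressed bytes (what gets uploaded)
        self.catalog = {}  # file path -> digest (cheap metadata only)

    def put(self, path, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.chunks:       # store only previously unseen content
            self.chunks[digest] = zlib.compress(data)
        self.catalog[path] = digest

    def get(self, path):
        return zlib.decompress(self.chunks[self.catalog[path]])

store = DedupStore()
report = b"quarterly report contents " * 100
for i in range(15):                         # 15 copies across 15 shares...
    store.put(f"/share{i}/report.doc", report)
print(len(store.chunks))                    # ...but only 1 stored object
```

Fifteen logical files resolve to a single stored object, which is the cost argument in the paragraph above: you pay cloud storage for one instance, not fifteen.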
The other thing that I kind of highlighted earlier, but I’ll speak to a little bit more here, is scaling with unstructured growth. So unstructured data growth is a massive thing. There’s a ton of reports out there. Some of you may be experiencing it at faster paces than others, but we typically see that the data growth is at a minimum 3 percent to 5 percent year over year. The reality is, for a lot of you, it may be much higher. But that creates its own unique challenge in such that how do you keep up with the ability, how do you have a solution that grows at the pace of your unstructured data growth.
vFilO can do that. So we have a significant scale-out capability that allows for you to add as many instances of vFilO as you need to, to ultimately facilitate petabytes, if not almost exabytes of data. And still maintaining management of that data and access of that data in a unified way.
vFilO Features
So this slide really kind of shows some of the features of vFilO. I won’t touch on every single one of them because we’ve kind of addressed them in certain facets as we’ve gone through a few of the slides already. But you know, this really gives you a sense of how robust vFilO is as a solution today, in terms of what can ultimately connect to it, what access methods are available, what data services are available, the way that ultimately you can control vFilO through either, you know, CLI management, UI management, being able to interface with different storages underneath.
Now, the one thing I will highlight is there are a couple of notations on here. So as I was mentioning, the S3 frontend client interface will be available in a future release coming in calendar year 2020, while NFS and SMB are available in the version that has been released today.
Another thing I will note is that compression and deduplication are functions applied when we’re moving data from on-prem file storage onto third-party object storage, allowing that data to be reduced, which reduces the overall cost and footprint required for moving data to object storage. Because again, the use case there is you want to move data to lower-cost storage for archival and backup purposes. And as such, you want to reduce your overall footprint to help reduce your overall cost.
So we’ll look to add dedupe and compression capabilities in future releases for the file-based storage, but today it is a function of moving data from your file-based storage to the object-based storage solution.
So really vFilO is focused on assimilating existing filers, you know, bringing heterogeneous management to what your existing solutions are today, and allowing you to continue to scale that out by adding additional vFilO storage nodes. And just facilitating that overall, you know, capability of managing that data, not having to undertake a heavy migration task right off the bat. Just giving you control and management over your existing heterogeneous solutions. And we’re going to see all that data. We’re going to bring in all of the metadata associated with it, and allow it then to be effectively managed moving forward.
Again, it provides that one global, scalable catalog of files, really, when you think about it. It doesn’t matter where the data sits. The user doesn’t have to care, doesn’t have to know. You, as the administrator, have full control over that, allowing the capabilities of vFilO to really shine through. And ultimately it just gives users access to the files; that’s what they care about. And you’ll have very granular control over that. And I’ll speak to that here, again, also in just a moment.
You know, making sure that anything that’s coming across, whether it’s data access, it’s writing net new data, you want to make sure that it’s going to the right place so you have that control and making sure that ultimately the client of the workload is just sending data reads, sending data writes to vFilO. And ultimately, vFilO is handling where that needs to go. Allowing for, you know, elastic cloud storage as an extension of the on-prem capacity.
And ultimately, allowing data management. It’s the most important part of what, I think, most organizations need today. And vFilO is going to facilitate the movement of that data, the access of that data, really through what most IT organizations are looking for in terms of, you know, making sure that data is available at the performance level it needs to be available. Or making sure that it maybe doesn’t need to be performance-bound, but it needs to be availability-bound. It just has to be highly available. Or making sure that, you know, we are going to control the cost, and ensure that old data is only stored on lower-cost solutions. So with vFilO, you really have full control over that.
So let’s talk about, you know, some of the features and the capabilities I mentioned. And what I really want to show here is, you know, some aspects of vFilO and how it works, right. So we talked about the management of existing, you know, file share, file storage solutions, NAS storage. So in vFilO, you simply just go add a storage system. And this allows you to connect to any existing solution. So if you’re adding native vFilO stores, those show up automatically in the UI, and you can start creating file shares and writing data to that, and have a vFilO-only file storage solution. Or you can connect to those existing file storage solutions that you have, whether it be EMC, whether it be NetApp, or just any generic underlying file store or file share solution. You can connect that to vFilO, and then vFilO will see all of that data.
Now, you can do this in a read-only mode, and what we kind of call sidecar. That’s ultimately where you’re ingesting the metadata and understanding where all this data exists, and providing access to that data. Moving forward, you can do this in a read-write mode, such that vFilO is now in control, and those underlying existing file storage solutions really are just an underlying – another infrastructure component that you can use to store the data on.
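The "sidecar" read-only mode described above boils down to walking an existing share and ingesting each file's metadata into a catalog without ever touching the data itself. Here is a minimal sketch of that assimilation step, assuming a plain filesystem share; it is illustrative only and not vFilO's actual ingest code.

```python
# Sketch of read-only ("sidecar") metadata assimilation: record size,
# modification time, and ownership for every file, but never read contents.
import os
import tempfile

def assimilate(root, catalog):
    """Ingest file metadata from an existing share into the catalog."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            catalog[full] = {
                "size": st.st_size,      # metadata only...
                "mtime": st.st_mtime,    # ...no data blocks are moved
                "owner": st.st_uid,
            }
    return catalog

# Usage: point it at any directory standing in for an existing filer share.
share = tempfile.mkdtemp()
with open(os.path.join(share, "a.txt"), "w") as f:
    f.write("hello")
catalog = assimilate(share, {})
print(len(catalog))  # 1
```

Once such a catalog exists, read-write mode is "just" letting the aggregation layer, rather than the original filer, decide where each catalogued file physically lives.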
I think I clicked one too many. There it is. Okay.
So for object storage, right, what I showed a moment ago was around assimilating existing file storage. For connecting third-party object storage for movement of data, for archival and backup purposes, it’s the same process. You add an object-storage system. And this allows you to then see, you know, all of these different storage systems for data placement. Now, whether it is an on-prem existing object storage solution, or you’re going to implement one for use, for data archival purposes, or again, you want to enable a hybrid cloud storage capability, as you can see here, you’re just simply selecting the interface that you want.
Now, whether it is an existing interface that we’ve written through either Azure or AWS or a solution like Cloudian, or if you have something else. A perfect example is if you spin up your own object store instance on-prem, you can use the generic S3 connector and allow for interaction with it. The list that you see here in this screenshot is really where we’ve tested and we’ve qualified the interaction with those particular solutions. And in some instances, we’ve created more integration through API communication with those.
And so as such, you can then see the – a little bit more data about those file store system – or those existing storage solutions, whether it be file storage or object storage, and understand what’s there. So it’s really, really easy to add existing storage solutions to vFilO for management and usage purposes.
So once you’ve got all of these connected, then you’re in, if you will, this ability to work with the data. And so one of those scenarios is like a live migration if you will. So what you see here is an application or a system that is interacting with an existing file storage solution. Read/write requests are coming through. Now, when you implement vFilO into this, you then have the ability to move the data where you want it. So we automatically start the metadata assimilation, and this really tells the client all it needs to know about the file. The file is just something it wants to access, but how it knows where the file is, what to do with the file, that’s really facilitated through the metadata information that’s available.
So as soon as vFilO has knowledge of those existing file storage solutions, we start the metadata collection process. Now, what you can do through the policy-based definitions that exist in vFilO is allow the data to then be moved between either vFilO file stores or the existing third-party file storage solution.
So really what you can do is you can have an aging file storage solution. You can put in high-performance, you know, vFilO storage instances, and then allow that data to be moved over or at least maybe the most performant required data, to be moved over to a vFilO instance. And this is seamless to the end-user because you’ve setup that file share upfront. And ultimately, the user doesn’t need to know that you’re making any changes to the infrastructure underneath.
The other thing that, you know, that this addition and management of existing storage solutions does is allow you to have better control over where the storage is, and ultimately, what are the costs that you’re dealing with, right. So if you take a look at this slide, what we’re really highlighting here is your existing storage solutions. You have this all-in-one concept, right, where, you know, critical data, data that requires high performance is all being facilitated, and that’s perfectly fine. But you also have probably a larger amount of data that’s not readily accessed, doesn’t have a high-performance requirement, and you’re probably overpaying to store that data.
With vFilO, you can essentially move that data to lower-cost infrastructure components, if you will, to ensure that all of that highly active, highly important, high-performance data is still being facilitated through that, you know, large, robust existing third-party file storage solution. And then you can build out some lower-cost, lower-performance vFilO instances, or connect to the cloud and move, you know, maybe even less-accessed data or lower-performance data onto the cloud. And again, you have full control. It’s not a situation where you can have it here, or you can have it there. You can really have it anywhere, and it’s all through that policy-based control that allows you to move that data from one backend infrastructure component to another.
So I mentioned how we do this, and I’ve said policies a few times. And what I want to highlight here really is, you know, what do I mean by policies. When you’re in the vFilO UI, or you’re in the command line interface, you’re going to see what we call objectives. And this is really our policy-based approach. You’re stating the objectives that you want to achieve. So you can create a very – what we call a basis objective, where you’re just kind of defining the simple performance requirements, or the availability requirements that you have for the data. And then vFilO will understand that, and understand the characteristics of the underlying infrastructure that it’s connected to and interfacing with, and know where to put that data.
So you can say something very simple as all of my MP4 files that have been inactive for six months, you know, or newer, we’re going to exclude from all object storage. And then anything that’s older than six months, we’re actually going to place that in Azure. So we’re going to move all of this data from our on-premise, high-cost, high-performance storage solution, and we’re going to move all of that data onto an Azure Blob, and allow that information to be stored up there. So we freed up the on-prem storage, where we can write more highly active high-performant data to that storage. And we’ve facilitated the growth or the transition of that inactive data without having to add a whole bunch of infrastructure underneath. And we’ve allowed that transition to happen in a very short period of time.
This is simply, you know, done by creating an Azure account, connecting vFilO to that Azure account, and setting your policy objective, right. And all of these policy objectives are defined in such a way that, you know, you’re looking at the metadata from an attribute. You could say all of the data that’s owed – you know, has an ownership. You know, that you have metadata that’s defined the ownership of that particular – of those particular files, you can specify that as an objective, and move data based upon that particular objective. So you have very, very granular control of ultimately the data that lives there. And it’s all through these policies that are defined.
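A policy objective like the MP4 example above is really just a predicate over file metadata that yields a target tier. The sketch below models that: inactive media older than six months goes to a cloud object tier, everything else stays on primary storage. It is an illustration of the objective idea, not vFilO's objective syntax; the tier names are hypothetical.

```python
# Sketch of a metadata-driven placement objective, per the MP4 example:
# inactive .mp4 files older than six months -> cloud object tier.
from datetime import datetime, timedelta

SIX_MONTHS = timedelta(days=182)

def placement(meta, now):
    """Decide a target tier from file metadata alone."""
    age = now - meta["last_access"]
    if meta["name"].endswith(".mp4") and age > SIX_MONTHS:
        return "azure-blob"   # cold media: cheap cloud object storage
    return "primary-nas"      # active data stays on fast on-prem storage

now = datetime(2020, 1, 1)
cold = {"name": "demo.mp4", "last_access": datetime(2019, 3, 1)}
warm = {"name": "demo.mp4", "last_access": datetime(2019, 12, 1)}
print(placement(cold, now))  # azure-blob
print(placement(warm, now))  # primary-nas
```

Any metadata attribute (owner, extension, age, size) can drive the predicate the same way, which is where the "very granular control" comes from: the policy engine re-evaluates these objectives and moves files between backends accordingly.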
So we’ve kind of talked about how vFilO – you know, how vFilO facilitates data movement, how it interacts with third-party existing file storage solutions and object storage solutions. And I’ve mentioned a couple of times about vFilO being scalable. So I want to speak to that a little bit more about what do I mean by “highly available and scalable solution.”
vFilO Scalability
When you look at how a vFilO implementation can be architected, it grows with you. It truly grows with your overall data growth and capacity needs. In a single cluster we support up to 40 vFilO nodes, and those are responsible for all of the data services, all of the connectivity to underlying solutions, whether it be, you know, local storage, vFilO file stores, or existing third-party file store and NAS solutions or object storage, even cloud storage. Those are handled by what we call the vFilO DSX stores, or data services nodes. You can have up to 40 of those nodes in a single cluster.
And then that is, you know, complemented, if you will, with a pair of highly available metadata nodes. And so what’s important about this is the metadata is something that you want to have control over, and you need to have, you know, highly available. But you ultimately probably don’t need to scale out a lot of data services to facilitate that. This is just purely facilitating the informational aspect that gives you that granular control. It’s when you need to scale out the things like doing snapshotting, things like storing data in a vFilO node. Things like facilitating connectivity to different object and object cloud-based storage solutions. That’s where you’ll have to grow your data services nodes. And we facilitate that up to a 40-node cluster.
One thing I’ll also highlight is that all of this helps facilitate the global namespace. What we’re going to do in an upcoming release is facilitate what we call a universal global namespace, or really a multisite global namespace. It’s difficult to bend the laws of physics and make sure that data is available across geographic locations, but I think we’re really, really close. We’ve got some really smart engineers who have come up with a way, which you’ll see soon in an upcoming release, to allow that global namespace to span multiple sites.
So this is really great for multisite collaboration: users who transition between locations or campuses get consistent access to the same data. They don't have to change which file share they reference, worry about whether the data will be available, or worry about whether the data will be performant when they get there. vFilO handles the data movement underneath and the client access, within a single site today, and across multiple sites in upcoming releases.
The other thing I want to highlight is how we facilitate that constant access, and how we ensure performance in a scale-out solution. There are a couple of components that really help with that. One is our implementation of pNFS, which handles the communication from the client side. The client sends a request to see data, and vFilO sees that and says, okay, great, what data is it that you're looking for? We have all the metadata; we know where everything exists. You're really just accessing references to the metadata, and now we tell you where to go.
The great thing about this is that it allows the user to gain immediate access to that data. And as the data is being accessed, you can have policies in place that move it closer to where that user is, so it becomes faster, more available, and more performant over time. Or you can say, no, that data doesn't require higher performance; leave it where it is.
Or if a user moves from one location to another, we want to make sure that the data moves with that user. As you scale out these instances and scale out the client usage, vFilO really is the brains of where all this data lives, who's trying to access it, and where that data needs to be placed on the underlying infrastructure, so that it's always available and always at the right performance and availability level for the workload or the end user.
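The access pattern described above, where clients consult a metadata service that redirects them to whichever data node currently holds the file, can be sketched roughly like this. Everything here (class names, node names, paths) is a made-up illustration of the pNFS-style flow, not vFilO's implementation:

```python
# Illustrative sketch of the pNFS-style access pattern: the client asks a
# metadata service where a file lives, then reads directly from the data node
# it is pointed at. All names here are hypothetical.
class MetadataService:
    def __init__(self):
        # file path -> data services node currently holding the file
        self.layout = {"/projects/design.cad": "dsx-node-07"}

    def lookup(self, path: str) -> str:
        """Return the data node a client should contact for this path."""
        return self.layout[path]

    def migrate(self, path: str, new_node: str) -> None:
        """The policy engine moves the data closer to the user; from the
        client's point of view, only the metadata pointer changes."""
        self.layout[path] = new_node

mds = MetadataService()
print(mds.lookup("/projects/design.cad"))           # client is sent to dsx-node-07
mds.migrate("/projects/design.cad", "dsx-node-02")  # user moved campuses
print(mds.lookup("/projects/design.cad"))           # same path, new location
```

The key property is that the client-visible path never changes; the layout mapping behind it does.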
Data Placement
So let me talk a little more about the data placement itself and how we facilitate it. I mentioned the policy-based mechanism that helps drive this, but the reality is it's all based upon our ability to consume the metadata, understand what's going on in the environment, and allow granular control of the data through policy-based management.
As we come to understand the underlying infrastructure, the performance characteristics, and the access patterns of the data, and a policy is defined across the different characteristics of that telemetry and metadata, then vFilO does its work. It says, great, I have these rules that I need to adhere to, and I have all of this information about the environment and what people are doing with the data. Now I can start providing access, moving data, and controlling the placement of that data according to what is ultimately best for the organization and the business.
We call this autonomic data placement. It gives you near-automatic control over the data, ensuring that the data is going to be where it needs to be, whether that's high-performant, highly available, offsite, close to the users, or kept in a single location that doesn't cross geographic boundaries. That's really what vFilO is great at: letting you dictate what should happen with the data, and then letting vFilO do its work and make that movement happen.
File Level Granularity
Another thing I want to highlight about vFilO, which is very different from a lot of solutions out there, is our file-level granularity. Everything you do, from policy definition, to snapshots, to data movement, to recovering or undeleting data, to replication, is done at a file-granular level. This is important because it doesn't force you to work file by file: whether you want to define a policy or take an action on a single file or on an entire volume, that's at your control, and we still carry out the operation at the file level.
A perfect example would be if you need to take a snapshot of an entire volume, but restore a particular file for a user. You can do that: take a snapshot at the volume level, then go down to the file level and restore that particular file, making the information immediately available again at a file-granular level. Same thing with policies. You can define that you want to move a particular type of file. If you remember the screenshot of the objectives we were creating, we were doing it for all MP4 files, saying that if a file is older than six months, move it to object storage in the cloud. We did that at a file-granular level.
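The snapshot-a-volume, restore-a-file pattern can be sketched as below. This is a toy model of the concept under stated assumptions (in-memory volume, whole-volume snapshots), not vFilO's snapshot API:

```python
# Minimal sketch of volume-level snapshots with file-level restore.
# The Volume class and its methods are hypothetical illustrations.
import copy

class Volume:
    def __init__(self, files: dict):
        self.files = files            # path -> contents
        self.snapshots = []

    def snapshot(self):
        """Snapshot the whole volume at once."""
        self.snapshots.append(copy.deepcopy(self.files))

    def restore_file(self, path: str, snap_index: int = -1):
        """Restore a single file from a volume snapshot, leaving the
        rest of the volume untouched."""
        self.files[path] = self.snapshots[snap_index][path]

vol = Volume({"/home/ann/report.docx": "v1", "/home/bob/deck.pptx": "v1"})
vol.snapshot()
vol.files["/home/ann/report.docx"] = "v2-corrupted"   # Ann's file goes bad
vol.restore_file("/home/ann/report.docx")             # only her file rolls back
```

Note that Bob's file is never touched by the restore; the snapshot was taken at volume scope, but the recovery acts at file scope.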
And that creates a lot of power for you as an administrator, especially in situations like replicating data to the cloud. You can ensure that you move only the data that's required, only the information you need to move, to lower-cost, lower-performant storage for archival or backup purposes.
Whether you need to move that data or back it up and keep a copy in the cloud, vFilO gives you the control to place it, at a file-granular level, wherever you need it: on a high-performant existing NAS solution, on a vFilO data store, as two on-prem copies for quick recovery, as an archived copy, or moved entirely into the cloud. vFilO facilitates all of that through the policy-based interaction and through its understanding of the metadata that exists at a file-granular level.
Metadata
I've mentioned metadata multiple times throughout the presentation. What I really want to highlight is that access to the data and control of the data is all about the metadata. When was it accessed? When was it written? When was it updated? Who owns the data? Who should have access to it? All of this information enables that granular control. First and foremost, vFilO consumes that metadata to allow for control and management of the data files themselves.
So whether it's the bare-minimum metadata available on a standard file, or you've enhanced that metadata by adding particular keywords, descriptors, or tags, vFilO understands all of it, and you can write objectives against it to understand what the data is and give you control over that particular data.
A perfect example of how to deal with that kind of data: think about AWS Rekognition. It understands what kinds of animals appear in photos and allows each photo to be classified: is this a giraffe photo or a dog photo? This is a perfect example of the type of control you can get over data. vFilO has a very similar approach to how we interact with files based upon the metadata that's available.
So say some tag or metadata has been added to your files: you've got a whole bunch of giraffe photos, and you need to move them to the cloud for a collection that a particular project will work with. You can do that. You have the ability to understand the metadata about the files that exist and control that data with vFilO.
So how does metadata assimilation work? It really comes down to understanding the data that's there. We write the metadata into our catalog, keep a copy of that metadata on our instances, and interact with that data such that we understand what's going on there.
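The assimilation idea, walking an existing share and cataloging only the metadata while the file data stays put, can be sketched as follows. The field names and helper functions are assumptions for illustration; this is not how vFilO's assimilation is implemented:

```python
# Hedged sketch of metadata assimilation: walk an existing share, copy only
# the metadata into a catalog, and leave the file data in place.
import os
import time
import tempfile

def assimilate(share_root: str) -> list:
    """Build a metadata catalog for every file under share_root."""
    catalog = []
    for dirpath, _dirs, names in os.walk(share_root):
        for name in names:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            catalog.append({
                "path": full,
                "size": st.st_size,
                "mtime": st.st_mtime,   # last written
                "atime": st.st_atime,   # last accessed
                "owner": st.st_uid,
            })
    return catalog

def not_accessed_since(catalog: list, days: int) -> list:
    """Query the catalog, not the share: files untouched for `days` days."""
    cutoff = time.time() - days * 86400
    return [entry["path"] for entry in catalog if entry["atime"] < cutoff]

# Demo against a throwaway directory with one freshly created file.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "report.docx"), "w").close()
catalog = assimilate(demo)
print(len(catalog))                     # one file cataloged
print(not_accessed_since(catalog, 1))   # nothing is a day old yet
```

Once the catalog exists, queries like "which files haven't been accessed in three years" run against the metadata alone, which is also how the reporting question later in the Q&A would be answered.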
You can also use analytics with third-party tools: they can connect to vFilO and see what metadata is there. So for third-party solutions for file auditing and things of that nature, we have all of that metadata in one location within vFilO, allowing you not only to manage the data, but also to interact with the metadata for capabilities like auditing.
Data Control
So we think vFilO does a lot for an organization. We've talked about its capabilities and some of the use cases. What's really important, and what I think most people understand today, is that they need control over their data. They need to understand where data exists and how to control it. That's huge for most organizations because, as I mentioned at the very beginning, data is growing fast. It is constantly being written and constantly being interacted with. Organizations need visibility into what that data is and the ability to control it, and vFilO really helps you deliver those capabilities.
Whether it's a migration project, or being able to find and back up data, vFilO provides the ability to carry out those operational tasks in a much more efficient and faster manner, again through its ability to understand where the data exists and how to control it.
And ultimately, you're going to be able to reduce costs with vFilO. You'll be able to move data to the right infrastructure and storage components to optimize overall cost: spend money on high-performant storage only for the capacity that actually requires it, and move the rest to lower-cost, lower-performant storage for archival purposes.
And then scale out that capacity as you need to. You can simply add more vFilO nodes: buy a lower-cost, high-performant x86 server, add it to your vFilO environment as a vFilO store, and scale out very quickly and very cost-effectively for your data growth needs.
So between visibility, control, improved operations management, and an overall reduction of costs through that control and efficiency, vFilO really provides the business value that's going to help you justify adopting this type of solution and move forward with vFilO.
vFilO Pricing
So, with all that value in mind, let's talk about the pricing, and what this great product I just talked about costs. Our licensing model and pricing work in two different ways. We have what we call active data, which is the information stored in file storage. And then we have inactive data, which is the data that's been moved to object storage, either on-prem object-based storage or in the cloud.
So you have the ability to granularly manage active data and inactive data, and only pay for the capacity that you need. Pricing is on a per-terabyte basis, and I'll show you the cost per tier in a minute. As you have more terabytes, you'll get into a lower cost per terabyte.
There are one- and three-year subscriptions, and they include support. There's a 10-terabyte minimum, but with one-terabyte granularity after that: 10 terabytes upfront, and then you can consume on a per-terabyte basis. Like I said, there are volume discounts. And it's only on a per-capacity basis, and by that I mean storage capacity. It doesn't matter how many data services nodes you need; you can have two or you can have 40. All you're paying for is the capacity that's actually stored.
So, a quick look at the discounts. If you have 10 up to 50 terabytes of active data, at one year, that's $345.00 per terabyte. If you get into the petabyte scale, you can see very rapidly how the cost per terabyte drops. And you can see the difference between the price of active data on a high-performant, on-prem file storage solution versus inactive data that you move to a low-cost object store. We want to make it cost-effective for you to archive your data and move it to lower-performant, more cost-effective object-based storage, so we've aligned the pricing of vFilO for inactive or archival data accordingly.
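As a back-of-the-envelope illustration of the tiered per-terabyte model: the only rate quoted in the talk is $345/TB for one year on 10 to 50 TB of active data, so the higher-volume rate below is a made-up placeholder, and the flat-rate-per-tier assumption is mine, not DataCore's published price list:

```python
# Sketch of tiered per-terabyte pricing. Only the $345/TB tier (10-50 TB,
# one year, active data) comes from the talk; the volume-discount rate is
# a placeholder assumption for illustration.
ACTIVE_TIERS = [
    (50, 345.0),              # up to 50 TB: $345/TB/yr (quoted in the talk)
    (float("inf"), 250.0),    # beyond 50 TB: hypothetical discounted rate
]

def yearly_cost(terabytes: float, tiers=ACTIVE_TIERS) -> float:
    """Price the commitment at the rate of the tier it falls into
    (assumption: flat rate per tier, not marginal pricing)."""
    if terabytes < 10:
        terabytes = 10.0      # 10 TB minimum, then 1 TB increments
    for limit, rate in tiers:
        if terabytes <= limit:
            return terabytes * rate
    raise ValueError("no matching tier")

print(yearly_cost(40))   # 40 TB of active data at the quoted tier
```

The same shape of calculation would apply to inactive data, just with the lower object-storage rates the talk alludes to.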
So, just a quick recap of the distinctive advantages of vFilO. Immediate understanding of your existing file systems through assimilation: no data is moved, and vFilO now understands everything, allowing high-performant, unified access through our NFS and SMB frontend. You bring all of that data under vFilO management without having to replicate it from one location to another. And through snapshot, recovery, and undelete functions, vFilO ensures data is always protected, ultimately enabling a zero-RPO capability.
Quite possibly the most important thing, from my perspective, is blending the concept of file storage with data management into one solution. That policy-driven data movement and mobility mechanism is probably the most important advantage vFilO gives you in a file and object-based storage solution. Again, it gives you file-level granularity for snapshot, clone, replication, and recovery purposes, and it ensures you have complete understanding of, and interaction with, extensible metadata for ongoing management. vFilO brings distinctive advantages that you're not going to find in any one solution in the market today.
Q&A
So I think I’ve talked enough about everything. I have a couple of quick poll questions for everyone. So, real quick, poll question No. 1, do you have an upcoming file and object storage project need within the next 12 months? So I think you can go and vote really quickly on that. It should be up and available, and we’ll give just a few minutes. If you wouldn’t mind giving us a quick answer on the poll question. Again, do you have an upcoming file and object storage project within the next 12 months?
And we’ll look and see. All right. I’m seeing some responses come in here. Again, you can look at the poll section of BrightTALK. It should be right there, available for you to click on. It’s a simple yes or no. No complexity there in the project. We’ll give everyone a couple more minutes here, or a few more minutes just to come in here.
So far, it looks like three quarters of you don't have anything going on in the next 12 months, and maybe a quarter of you do. Any more responses? Again, the poll question is: do you have an upcoming file and object storage project within the next 12 months? As soon as we get through the poll questions, I've got some questions from the audience that I'll get to. But I want to make sure we get some responses to these poll questions, so let us know if you have one of these coming up in the next 12 months.
All right, let's get to the next question. Our next poll question is: are you interested in scheduling a live demo of DataCore vFilO? We've talked a lot about the capabilities, and I've shown you some screenshots of the UI and how you have control over the metadata. Are you interested in a live demo where we can schedule time with you to walk through the capabilities you saw here in more depth? Go to the poll questions and let us know, yes or no, if you're interested in scheduling that.
All right. We’ve got a couple of responses coming in. Again, this will allow us to know. We’ll be able to reach back out to you in a quick timeframe, and you know, get you a live interaction with vFilO. You know, again, showing you in real time some of the capabilities of vFilO. Or if you just have some questions, and ultimately you want to see how some of the things that I mentioned, how do those things work, we can show you in real time during that live demo.
All right, we've got some answers coming in there; you can continue voting on those. And real quick: if you want to take a quick product tour, capture that URL. I don't think you can click on it in BrightTALK, so take it down real quick. You can go see how the product works, and ultimately we can reach back out to you and schedule a demo where you can interact with it.
Also, if you go to datacore.com, you can see information about vFilO. You can download a 30-day evaluation version of vFilO and test it out for yourself, if you want to play with the capabilities we've mentioned today. And ultimately, if you saw everything and it all sounds great and you're ready to buy, contact your DataCore partner, ask them about vFilO, and let them know you want to learn more and you're ready to see what this solution has to offer.
So let’s see here. Let’s take a look at some of the questions we’ve got here. So I’ll read them aloud.
“Can you run a what-if scenario? What will happen if I put a newly created policy in place and make it active, without really doing anything?”
So I think I understand the question; let me know if I got it wrong and re-ask it. Can you run a what-if scenario? Yes, you can definitely use what we call the data profiler. When you log in to the DataCore UI, the vFilO profiler will show you the shares you've connected to vFilO and help you understand whether, if you moved data from one tier to another based upon certain characteristics, you could save some money. So the data profiler really allows you to do that what-if scenario.
Next question: “Are there reporting functions available without using third-party tools? For example, how many large files have not been accessed in the last three years?”
So yes, you can get some visibility into that. When you're looking in the file-object explorer, or in the data profiler, you can see characteristics about the capacity, the number of files, and things of that nature that fit within those criteria. But if you truly want full file-auditing capabilities today, you're going to want to use an existing third-party utility or software.
If that’s something that you’re interested in having native to vFilO, definitely let us know. We’re always looking for how we can provide product enhancements and make the product even more functional for you. So definitely let us know.
Do we have any more questions? I think we're about out of time. We may have run over a little bit, and my apologies for that, but I wanted to get through all of this information, tell you about vFilO, and let you know what it's capable of today.
So with that, thanks, everyone, for joining. I’ll turn it back over to Marisa.
Marisa: Thank you, Steve. This was very informative. And thank you, everyone, for joining our webinar today. I will be sending out a follow-up email with the link Steve mentioned in the next steps, so you can check out the environment. You can also let us know within the voting poll if you're still interested in a live demo. Thank you for joining, and have a great day.