You’re the storage administrator for a medium-sized company, it’s 2020, and you’re back in the office. You feel rested, relaxed, and optimistic. You had a good breakfast, you have your cup of coffee ready to go, and you take a look at your list of tasks for the day.
To start, you have a collection of trouble tickets.
“Can’t locate file.” “File share timed out.” “File share out of space.”
And a collection of alerts from some of your mixed collection of storage platforms.
“Multiple disk failure, rebuild in process, system performance degraded.” “Remote file system sync and share has failed.” “File store at 90% utilization.”
Some bigger tasks. “Need to move 4 TB of files to S3 archive.” “Need to migrate 11 TB of files to a new file server.” “Need to decommission existing filer.”
And a couple of important but never urgent tasks – except that they’re rapidly becoming urgent. “Evaluate object storage platform for TCO benefits.” “Need to send report to Director asking for additional file storage budget.”
Well, there went the optimism. It’s been replaced by a kind of painful resignation. It feels like a game of Whack-a-Mole…but you don’t know which mole to hit first.
The thing is, even though this situation seems inevitable, it isn’t. Sure, you have nine network attached storage devices of various ages and manufacturers across a few locations, a few departmental file servers, and an Amazon S3 archive, which is a lot. That infrastructure means you’re coping with complexity that, to be honest, will only grow as your organization expands, your platforms age, and your requirements change.
Just what you need, more complexity.
You’ve looked at alternatives, of course. You’d love an easy way to escape proprietary file stores. You know that software defined storage (SDS) offers the promise of full-fledged NAS functionality on commodity hardware, driving down TCO, but you haven’t found an SDS platform that’s the right fit for all your requirements, that offers the TCO advantages you want…and, if you’re being honest with yourself, that gets all these problems off your desk.
Listen, we’ve been there too. File storage has, in a world of explosive data growth, become a hassle for almost every organization. That’s why we created vFilO.
What’s vFilO? It’s a software defined distributed file storage solution that’s built to work with what you have, add the functionality you need, and give you a streamlined migration path that takes you away from the high costs of legacy platforms toward optimized TCO.
The key is that vFilO virtualizes third-party file and object stores, whether on-premises, remote, or in the cloud. It identifies all the relevant devices on your network and, using artificial intelligence to optimize performance and efficiency, gives you a single front end for all your file and object requirements.
Let’s take a look at your list and see what vFilO can do.
“Can’t locate file.” vFilO gives you a single searchable global namespace for all the files and objects in your environment. With that, your users can find what they need, avoid keeping local copies, and reduce versioning conflicts.
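To make the idea concrete, here’s a toy Python sketch of what a global namespace index does. The filer names and paths are hypothetical, and vFilO builds and maintains its catalog automatically rather than through a script like this.

```python
def build_global_index(filers):
    """Merge per-filer listings into one searchable catalog.

    `filers` maps a filer name to a list of file paths on that device.
    This toy index only illustrates the idea behind a global namespace;
    it is not how vFilO is implemented.
    """
    index = {}
    for filer, paths in filers.items():
        for path in paths:
            name = path.rsplit("/", 1)[-1]  # make files searchable by name
            index.setdefault(name, []).append((filer, path))
    return index


def find(index, name):
    """Locate every copy of a file, across all filers, in one lookup."""
    return index.get(name, [])
```

With one catalog, a search for a file name surfaces every copy on every device at once, which is also what makes it possible to spot duplicates and versioning conflicts.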
“File share timed out.” vFilO optimizes data placement across drive types and devices. If you have relatively hot files, vFilO will autonomously move them to flash capacity so timeouts become a thing of the past. Cold files can move to low-cost, slower disk or even to the cloud.
“File share out of space.” vFilO also optimizes data placement. It sees every file and storage device holistically, and balances capacity utilization to give you a better way to avoid running out of space. Inactive data is de-duplicated, compressed, and moved to cheaper storage.
“Multiple disk failure, rebuild in process, system performance degraded.” vFilO adds data safeguards, including snapshots and replication at any level of granularity.
“Remote file system sync and share has failed.” vFilO offers remote collaboration capabilities instead of relying on file sync and share.
“File store at 90% utilization.” Instead of being blindsided by these alerts, vFilO gives you total proactive visibility across all your file and object platforms. You’ll know that utilization is creeping up well before you receive a NAS alert.
“Need to move 4 TB of files to S3 archive.” vFilO does this automatically. It watches file utilization and, based on your policies, moves files to archive. If the files are needed, vFilO can pull them back from archive and restore them upon access. They never leave the global namespace.
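For a sense of the policy logic involved, here is a minimal Python sketch of an age-based archive sweep. The 180-day threshold is an assumption, and the sketch only selects candidates rather than moving anything; vFilO handles the selection, the move to archive, and the recall on access for you.

```python
import time
from pathlib import Path

ARCHIVE_AFTER_DAYS = 180  # hypothetical policy threshold


def select_archive_candidates(root, now=None):
    """Return files whose last access falls outside the policy window.

    This is the kind of manual sweep an admin might script today; a
    layer like vFilO applies an equivalent policy continuously and
    keeps archived files visible in the global namespace.
    """
    now = time.time() if now is None else now
    cutoff = now - ARCHIVE_AFTER_DAYS * 86400
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.stat().st_atime < cutoff
    )
```

In practice the real work starts after selection: copying to the archive tier, verifying, leaving a recall stub, and deleting the original, which is exactly the busywork an automated tiering layer removes.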
“Need to migrate 11 TB of files to a new file server.” vFilO automates this. Time-consuming manual migrations become a thing of the past.
“Need to decommission existing filer.” With vFilO, decommissioning existing hardware becomes easy because all your devices, whether proprietary NAS or commodity x86 storage servers, are part of the pool. Files on the aging gear are relocated to other filers in the background, with no planned downtime. No one has to know.
“Evaluate object storage platform for TCO benefits.” vFilO incorporates on-premises and cloud-hosted object storage as part of the global pool. It also supports file and object protocol access to the same set of data (NFS, SMB, S3).
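As a rough illustration of what “one set of data, multiple protocols” means, here is a toy Python model in which the same bytes answer to both a POSIX-style file path and an S3-style object key. The class and its path-to-key mapping are purely hypothetical, not vFilO’s actual implementation.

```python
class UnifiedNamespace:
    """Toy model of file/object dual access to a single dataset.

    Purely illustrative: a layer like vFilO maintains this mapping
    internally so the same data is reachable over NFS, SMB, and S3.
    """

    def __init__(self):
        self._store = {}  # object key -> bytes

    @staticmethod
    def _key(posix_path):
        # An NFS/SMB path maps to an S3-style key by stripping the
        # leading slash: "/finance/q3.csv" becomes "finance/q3.csv".
        return posix_path.lstrip("/")

    def write_file(self, posix_path, data):
        """File-protocol write (think NFS or SMB)."""
        self._store[self._key(posix_path)] = data

    def read_object(self, key):
        """Object-protocol read (think S3 GET) of the same bytes."""
        return self._store[key]
```

The point of the sketch: a write that arrives over a file protocol is immediately readable as an object, with no copy or sync step in between.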
“Need to ask Director for additional file storage budget.” This requirement might actually disappear from your list if vFilO is automatically, intelligently optimizing performance, capacity utilization, and resilience across your entire organization. But if you still need more capacity, you’re now in a position to use inexpensive commodity disk instead of expensive platforms that come with costly licensing and expansion.
We could say so much more about vFilO, but we’ll just leave you with a renewed sense of optimism. It’s a new year, and you have a path away from the complexity and confusion of your existing environment. Why not try it out? Request a demo of vFilO distributed file and object storage virtualization software today!