
DataCore vFilO: Global File System for Distributed Sites and Hybrid Clouds


Combines File Shares Distributed Across Campuses and Hybrid Clouds for Easy Access, Optimal Capacity Use, Scalable Throughput, and High Availability

The rapidly growing number and diversity of storage systems and locations make file sharing, collaboration, and data placement increasingly challenging. Plus, the recurring restructuring of file shares every time space runs low upsets users and disrupts applications.

DataCore vFilO solves these problems by creating a scale-out global file system across distributed sites spanning on-premises and cloud-based file shares. Previously isolated folders spread across different systems roll up under a single global namespace for convenient access from any location via NFS and SMB protocols. The vFilO software pools resources from discrete filers at each site, making optimal use of their capacity and horsepower.

vFilO continually load balances, safeguards, and migrates files between active primary filers and secondary S3 object storage based on site-specific policies. The software-defined architecture offers the unique flexibility to incorporate existing NFS NAS and file servers into virtual pools. The filers can be nondisruptively expanded and replaced over time with new devices of your choosing based on cost, performance, and other preferences.

vFilO Benefits

  • Increased visibility and control of scattered data
  • Complete hardware-independence and flexibility
  • High operational efficiency
  • Simplified file access, sharing, and collaboration
  • Non-disruptive data migration between NAS/filers and cloud/object storage
  • High data availability
  • Lower storage costs

Pool Capacity From On-Premises NAS Devices and File Servers

Rather than incurring long, stressful data migrations normally associated with technology transitions, vFilO layers on top of existing NAS devices and file servers in the same campus, aggregating their resources under a single global file share. The combined capacity and horsepower of the virtual storage pool can then be used to balance the load, enhance data availability, and tier data placement.

Pool storage capacity across diverse NAS devices and filers

The initial step, called “assimilation,” gathers the metadata describing folder hierarchies, ownership, permissions, and file locations from each filer to create a global catalog. That catalog is kept separate from the actual file contents, effectively decoupling how data is organized from where it is stored. vFilO then exports the original shares as subfolders under a global mountpoint (for example, /Global/Engineering). Clients momentarily disconnect from the filers and remount the globally accessible shares from the vFilO portal.

Applications and users continue accessing their data from the same familiar folder paths. Now, vFilO is free to nondisruptively replicate and relocate files in the background as conditions and policies dictate. Normal business operations continue undisturbed even when adding new hardware or decommissioning legacy equipment. For utmost data protection and uninterrupted access, fully redundant configurations with multiple replicas of critical files may be configured.
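To make the assimilation idea concrete, here is a minimal Python sketch of building a metadata catalog that maps files on a filer into a global namespace. The layout, field names, and `/Global/Engineering` prefix are illustrative assumptions; vFilO's actual catalog format is proprietary.

```python
import os
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    # The catalog records where a file lives, kept separate from its
    # contents -- decoupling organization from physical placement.
    global_path: str   # path under the global mountpoint (hypothetical)
    source_filer: str  # filer that physically stores the file
    source_path: str   # original path on that filer
    size: int          # file size in bytes
    owner: int         # numeric owner uid

def assimilate(filer_name: str, share_root: str, global_prefix: str):
    """Walk one exported share and emit catalog entries that expose its
    files as a subfolder of the global namespace (conceptual sketch)."""
    entries = []
    for dirpath, _dirs, files in os.walk(share_root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, share_root)
            st = os.stat(full)
            entries.append(CatalogEntry(
                global_path=os.path.join(global_prefix, rel),
                source_filer=filer_name,
                source_path=full,
                size=st.st_size,
                owner=st.st_uid,
            ))
    return entries
```

Because only metadata is collected, the original file contents never move during assimilation, which is why clients can remount the global share after just a momentary disconnect.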

Automatic Data Placement

vFilO automates the near-impossible task of deciding where files should reside to satisfy business intentions despite constantly changing conditions. The system administrator simply sets a few high-level objectives that guide how files meeting specific (or broad) criteria should be treated. Parameters including file type, age, access frequency, ownership, and origin can drive data placement. These parameters help vFilO choose between fast and cheap storage, on-premises and the cloud, or the number of copies necessary to meet resiliency, performance, and governance goals. Its AI/ML algorithms regularly sweep the metadata and assess the fluctuating state of the hardware in order to align files with those goals. Even when files are archived from active primary tiers to secondary S3-compatible cloud/object storage, the directory structure remains intact, and no IT intervention is needed to retrieve them.

vFilO uses AI/ML to automate file placement based on administrator-defined business objectives
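The objective-driven placement described above can be sketched as an ordered rule table: each rule pairs a file-matching predicate with a target tier and replica count. The tier names, thresholds, and rule order below are invented for illustration and are not vFilO's policy syntax.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str               # global path of the file
    days_since_access: int  # age of last access
    size_bytes: int         # current size

# Hypothetical high-level objectives: the first matching rule decides
# the storage tier and how many copies to keep.
POLICIES = [
    (lambda f: f.path.endswith((".log", ".bak")), "s3-archive", 1),
    (lambda f: f.days_since_access > 90,          "s3-archive", 1),
    (lambda f: f.days_since_access <= 7,          "fast-nas",   2),
]
DEFAULT = ("standard-nas", 1)

def place(f: FileMeta):
    """Return (tier, replicas) for a file per the first matching objective."""
    for predicate, tier, replicas in POLICIES:
        if predicate(f):
            return tier, replicas
    return DEFAULT
```

A periodic sweep over the catalog would call `place()` for every entry and queue background moves for files whose current location no longer matches the result, which mirrors how policy-driven tiering keeps placement aligned with goals as conditions change.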


The vFilO global file system extends beyond a single campus. Once-isolated shares housed in different facilities are combined under a single namespace, making file sharing and collaboration extremely easy. Regularly accessed files may be stored locally for the fastest response, whereas infrequently used ones appear local but are retrieved behind the scenes from remote locations. This approach significantly reduces the capacity required. Files transferred between sites are also deduplicated and compressed in a cloud or object storage intermediary to reduce data transmission and space consumed. Only metadata is synchronized regularly to ensure that all users have the latest view of the global catalog.

Access files in multiple locations from a single namespace for effective inter-site collaboration
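The "appears local, fetched remotely" behavior can be sketched as metadata-first publication with lazy content retrieval. The class and method names below are hypothetical; they illustrate the concept, not vFilO's implementation.

```python
class GlobalNamespace:
    """Conceptual sketch: a file becomes visible at every site as soon as
    its metadata is synchronized, but its content is pulled from the
    remote site only on first read, then served from local storage."""

    def __init__(self, remote_fetch):
        self.remote_fetch = remote_fetch  # callable(path) -> bytes
        self.local_cache = {}             # content held on the local filer
        self.catalog = set()              # paths visible everywhere (metadata only)

    def publish(self, path: str):
        # Metadata sync: cheap, frequent, keeps every user's view current.
        self.catalog.add(path)

    def read(self, path: str) -> bytes:
        if path not in self.catalog:
            raise FileNotFoundError(path)
        if path not in self.local_cache:
            # Behind-the-scenes remote retrieval on first access.
            self.local_cache[path] = self.remote_fetch(path)
        return self.local_cache[path]
```

Because only metadata travels on every sync while content moves once on demand, each site needs far less local capacity than a full mirror of every share.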

Key Features / Data Services

Several hands-free, file-granular services provided by vFilO dynamically govern data mobility, durability, and availability. They include synchronous mirroring between active filers on the same cluster, asynchronous replication between sites and to cloud/object storage, automatic data migration and rapid snapshots/clones. Archives placed on cloud/object storage are globally de-duplicated and compressed, yet remain accessible, as do recently deleted files.
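A content-addressed store illustrates how global deduplication and compression keep archived copies small while leaving them accessible. This is a conceptual Python sketch, not vFilO's on-disk or S3 object format.

```python
import hashlib
import zlib

class DedupStore:
    """Store each unique payload once (keyed by content hash), compressed;
    any number of archived files can reference the same payload."""

    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> compressed bytes
        self.files = {}   # archived path -> digest it references

    def put(self, path: str, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.chunks:
            # New content: compress and store it exactly once.
            self.chunks[digest] = zlib.compress(data)
        self.files[path] = digest  # duplicates just add a reference

    def get(self, path: str) -> bytes:
        # Archived files remain directly readable: look up, decompress.
        return zlib.decompress(self.chunks[self.files[path]])
```

Identical files archived from different shares consume the space of one compressed copy, yet each path can still be read back without administrator involvement.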


  • End Users
  • Application & Web Services
  • Devices

Access Methods

  • NFS
  • SMB

Operation & Insights

  • Extensible Metadata
  • Data Migration
  • Historical / Real-Time Charts
  • Health & Performance Graphs
  • Alerts
  • Provisioning

Data Services

  • Multi-Site Global Namespace
  • Active Archive
  • Parallel NFS
  • Asynchronous Replication
  • Pooling, Assimilation of NAS/File Servers
  • Automated Data Placement
  • Self-Service Undelete
  • Deduplication / Compression*
  • Snapshots/Clones
  • Encryption*
  • Synchronous Mirroring
  • Load Balancing

Command & Control

  • Access Controls
  • CLI
  • Console
  • File Granularity
  • Plug-Ins

Supported Storage

  • File
  • Object
  • Block
  • Cloud




  • Organize widely dispersed files into a single namespace for easy access by applications and users
  • Simplify file sharing and significantly improve user productivity and collaboration across sites
  • Fulfill data governance obligations through policies for data placement and protection


  • Incorporate new technology into your current infrastructure without costly forklift upgrades
  • Expand and modernize by taking advantage of a location- and device-independent approach
  • Place data in the optimal location across sites and hardware architectures


  • Fully utilize available storage assets by pooling resources and balancing the load across them
  • Eliminate the recovery time once spent on manual file shuffling and backups
  • Avoid disruptive, time-consuming data migrations


vFilO clusters at each site comprise three key components. The clusters may be joined to an Active Directory domain for comprehensive access controls.

Existing NAS Devices and File Servers

Provide the original segregated file shares. They are mounted and accessed by vFilO over NFS.

Metadata Service Nodes (Anvil servers)

Control the cluster, its administrative interface, metadata operations, and assimilation. Anvil nodes are deployed as a pair for redundancy.

Data Service Nodes (DSX servers)

Responsible for all data services (synchronous mirroring, replication, data placement, snapshots/clones, etc.). They provide redundant portals for data access and also serve as on-premises file storage. The number of data service nodes can be scaled up or down based on business needs.


  • Scales from 50 TB to multiple petabytes with billions of files in a single namespace
  • Scales up and out to 40 data service nodes per site
  • Up to eight sites can participate in the global file system