Users and applications often keep duplicate copies of the same data, wasting storage capacity. DataCore uses deduplication techniques to reduce the space consumed by these copies.
In the background, the Deduplication service analyzes blocks of data, looking for repetition. It replaces multiple copies of data with references to a single, compressed copy, reducing the amount of capacity needed. This post-processing technique monitors how busy the system is so that it avoids contending with peak hours of operation.
When an application or user requests the deduplicated data, the software transparently uses the reference to retrieve a copy of the data.
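The store-once, reference-on-read behavior described above can be sketched in a few lines. This is a simplified illustration, not DataCore's implementation: the block size, hashing scheme, and in-memory structures here are assumptions chosen for clarity.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative block size, not DataCore's actual granularity

class DedupStore:
    """Toy block store: each unique block is compressed and kept once;
    a file is just an ordered list of references (block hashes)."""

    def __init__(self):
        self.blocks = {}  # hash -> single compressed copy of the block
        self.files = {}   # file name -> ordered list of block hashes

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            key = hashlib.sha256(block).hexdigest()
            # Duplicate blocks collapse into a reference to one compressed copy.
            if key not in self.blocks:
                self.blocks[key] = zlib.compress(block)
            refs.append(key)
        self.files[name] = refs

    def read(self, name):
        # Reads transparently follow the references and decompress each block.
        return b"".join(zlib.decompress(self.blocks[k]) for k in self.files[name])

store = DedupStore()
payload = b"A" * BLOCK_SIZE * 3       # three identical blocks
store.write("copy1.bin", payload)
store.write("copy2.bin", payload)     # a second copy of the same data
assert store.read("copy2.bin") == payload
print(len(store.blocks))              # only one unique block is stored
```

Both files resolve to the same single stored block, so the space cost of the duplicate copy is just the list of references.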
The files most likely to benefit from deduplication and compression contain repetitive data blocks with relatively static content that is accessed infrequently, especially once the files have aged a few days. These include general file shares used for group content publication and software development. Poor candidates are files that change often and are constantly accessed by users or applications, such as latency-sensitive databases and boot volumes. Refer to the SANsymphony™ documentation for further considerations.
All hosts served by SANsymphony can benefit from this space reduction simply by keeping candidate files on virtualized storage pools designated for deduplication. Windows, Linux, and Unix servers, both virtual and physical, are supported. Generally, lower-cost, high-capacity disks work best for deduplication pools, but other tiers of storage can be used.
Deduplication may be managed remotely for all nodes in a DataCore server group. It leverages and extends proven data deduplication and compression features in the underlying Windows Server 2012 R2 operating system.
Feature Highlights Summary
- Reduce capacity necessary to store data
- Increase efficiency of storage
- Choose virtual disks to be deduplicated and compressed
- Set schedule for post-process deduplication
- Display space savings