Panzura CloudFS™ automatically deduplicates and compresses all data before it is moved to the cloud. The result is a dramatically reduced storage footprint from the outset.
Panzura CloudFS deduplicates data across every site in your network before it’s consolidated in the cloud. Each unique block is stored exactly once, so the file system preserves a single copy of any given block of data. Here’s how it works:
- Data is deduplicated in-line as files are created or changed. If a file block already exists, CloudFS writes only a metadata reference instead of writing the data again.
- Deduplication data is stored with the metadata, so every location holds a full copy of the deduplication tables. Even when a file block isn’t cached locally, the deduplication engine can still reference it, making the entire system more efficient.
- Deduplication data is updated instantly from local write activity and from metadata received from other locations. As a result, every location in your network always has an up-to-the-moment view of every file.
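The mechanism above can be sketched as a content-addressed block store. This is an illustrative model, not Panzura’s actual implementation: files are split into fixed-size blocks (the 4 KiB size is an assumption), each block is identified by its SHA-256 hash, a block is written only the first time its hash appears, and a file’s metadata is simply the ordered list of block hashes.

```python
# Illustrative sketch of inline, block-level deduplication.
import hashlib

BLOCK_SIZE = 4096  # assumed block size, for illustration only

class DedupStore:
    def __init__(self):
        self.blocks = {}    # hash -> block bytes (the backing store)
        self.metadata = {}  # filename -> ordered list of block hashes

    def write_file(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            if h not in self.blocks:   # unique block: store it once
                self.blocks[h] = block
            hashes.append(h)           # duplicate: metadata reference only
        self.metadata[name] = hashes

    def read_file(self, name):
        return b"".join(self.blocks[h] for h in self.metadata[name])

store = DedupStore()
payload = b"A" * 8192                  # two identical 4 KiB blocks
store.write_file("a.bin", payload)
store.write_file("b.bin", payload)     # second file adds no new blocks
assert store.read_file("b.bin") == payload
assert len(store.blocks) == 1          # only one unique block is stored
```

Writing the same data twice costs only metadata, which is why duplicated data across many sites collapses to a single stored copy.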
Scales to petabytes
Panzura’s patented technology scales to petabytes while minimizing the lookups required to confirm that data is unique, keeping deduplication efficient even at very large volumes.
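One common way to minimize lookups for data that has never been seen before is a probabilistic membership filter, such as a Bloom filter, which can say “definitely not present” without consulting the full deduplication table. This is a generic sketch of that idea, not a description of Panzura’s patented method.

```python
# Illustrative Bloom filter: a compact bit array that answers
# "possibly present" or "definitely absent" for a block hash,
# so the full deduplication table is consulted only for likely duplicates.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive several bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(item + i.to_bytes(2, "big")).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add(b"block-1")
assert bf.might_contain(b"block-1")  # added items always report present
# Never-added blocks almost always report absent, so most unique data
# skips the expensive table lookup entirely.
```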
After deduplication, Panzura CloudFS breaks each file into blocks as it is created and compresses each block in-line, in memory, using a lossless compression algorithm. Because compression exploits redundancy within the data itself, it delivers a further reduction in file size on top of deduplication.
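The per-block, in-memory compression step can be sketched as follows. The block size and the codec are assumptions for illustration; zlib simply stands in as a representative lossless algorithm, and the specific algorithm CloudFS uses is not claimed here.

```python
# Sketch of per-block, in-memory lossless compression.
import zlib

BLOCK_SIZE = 4096

def compress_blocks(data):
    """Split data into fixed-size blocks and compress each one in memory."""
    return [zlib.compress(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

def decompress_blocks(blocks):
    """Lossless: decompressing and joining the blocks restores the data."""
    return b"".join(zlib.decompress(b) for b in blocks)

data = b"redundant text " * 1000              # highly redundant payload
blocks = compress_blocks(data)
assert decompress_blocks(blocks) == data      # nothing is lost
assert sum(len(b) for b in blocks) < len(data)  # redundancy shrinks it
```

Highly redundant data (logs, build artifacts, office documents) compresses well; already-compressed media compresses little, which is why compression complements rather than replaces deduplication.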
Caching Hot Data Locally
In addition to caching global metadata locally in flash, Panzura uses several additional techniques to ensure a fast, seamless file access experience:

- Caches hot data at each location based on read and write frequency.
- Provides policy-based caching based on file types, folders, and other criteria.
Built-In WAN Optimization
The global deduplication and compression in CloudFS make it possible to reduce or eliminate expensive WAN optimization appliances, along with costly MPLS and other private networks. This is because:

- Only unique file data is sent to the cloud when it’s created.
- Only active data is cached locally.
- Global file locking keeps file operations local, so application data doesn’t need to cross the WAN each time a user opens, saves, or closes a file.
- Because file operations are local, latency is no longer a problem.
Why Global Deduplication and Compression Matter
Leading Video Game Developer
- 1.5 PB of build files across 30 offices
- Was spending millions of dollars on enterprise NAS systems, mirroring, and backups
- Consolidated down to 45 TB of storage (a 97% reduction)
- Cloud economics enabled them to pay $4,000 a month for all tiers of storage