Global File Distribution
Quickly Distribute Data Globally without Duplication
Today’s enterprises require real-time collaboration across global sites. Data created in one location must be available for consumption or modification at sites around the world, quickly enough to sustain high performance.
Challenges with Global Data Distribution Across Geographies
Traditional Replication Methods
If any degree of local performance is required, you must replicate the data to every location where it is used, even if it is only read there. Traditional methods include utilities such as rsync, FTP, DFS/R, and Robocopy, which are optimized to detect the differences between data at different locations and synchronize them. If there are changes at both locations, the administrator must choose which location wins. This can lead to multiple versions of the same data, or even to the loss of data or changes from one location to the next. Additionally, you must invest in as much capacity at each site accessing the data as at the origin, which can be expensive.
Traditional Remote Access
If local performance is less important, you might centralize your data at one site and access it from multiple remote sites, directly over a wide area network (WAN). Traditionally, an enterprise WAN is built using leased lines or multi-protocol label switching (MPLS) links, which can be quite expensive. MPLS links provide dedicated, secure connections and may even offer lower latencies than public Internet links. Alternatively, an enterprise may interconnect remote offices using site-to-site VPNs over public Internet links, which offers lower cost and higher bandwidth, but typically at higher latency. Latency is the key challenge with these methods and ultimately what hinders access.
The Network is the Bottleneck
Regardless of which method you choose, replication or remote access, the network is the bottleneck. You need the bandwidth required to replicate or access the data from the source, multiplied by the total number of target sites. In both cases latency is the key challenge: local area network (LAN) protocols such as SMB and NFS were not designed to work efficiently over high-latency connections. Technologies such as WAN optimization may improve performance, but they do not solve the underlying problem.
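Both effects can be put in rough, illustrative numbers. The figures and the 64 KiB-per-round-trip model below are simplifying assumptions, not measurements of any particular protocol or product:

```python
def replication_bandwidth_mbps(change_rate_gb_per_day, sites):
    """Aggregate WAN bandwidth (Mbps) needed to push one day's worth of
    changed data from the source to every target site within that day."""
    megabits_per_day = change_rate_gb_per_day * 8 * 1000  # GB -> megabits
    return megabits_per_day * sites / 86_400              # spread over 24 h

def chatty_protocol_throughput_mbps(rtt_ms, bytes_per_round_trip=64 * 1024):
    """Upper bound (Mbps) for a protocol that waits one network round trip
    per 64 KiB request, regardless of how much raw bandwidth the link has."""
    round_trips_per_second = 1000 / rtt_ms
    return round_trips_per_second * bytes_per_round_trip * 8 / 1e6
```

Under these assumptions, 100 GB of daily change fanned out to 10 sites needs roughly 93 Mbps of sustained WAN bandwidth, and a request-per-round-trip protocol falls from about 524 Mbps at 1 ms LAN latency to about 5 Mbps at 100 ms WAN latency, no matter how large the pipe is. This is why latency, not just bandwidth, hinders remote access.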
Seamless Distribution of Large Files via Panzura’s Global File System
Data can be accessed from any location, anytime, without dependencies: the Freedom filer's metadata synchronization services propagate changes between sites every 60 seconds. Furthermore, the whole of that data is consolidated in a single object store, public or private, reducing duplicate data.
Local Caching Services
The Panzura Freedom filer's caching services allow each site to perform as if it held a full local copy, by automatically determining which data is actively being accessed from each site and caching only the data needed there.
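As a rough mental model of keeping only actively accessed data local, here is a generic least-recently-used cache sketch. It is illustrative only: SiteCache, its capacity, and the dict standing in for the object store are invented for this example, and this is not how the Freedom filer is actually implemented.

```python
from collections import OrderedDict

class SiteCache:
    """Toy LRU cache: holds only the most recently accessed blocks locally,
    fetching anything else from the backing object store (a plain dict here)."""

    def __init__(self, capacity, object_store):
        self.capacity = capacity
        self.store = object_store
        self.blocks = OrderedDict()
        self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)             # mark recently used
        else:
            self.misses += 1
            self.blocks[block_id] = self.store[block_id]  # fetch from cloud
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)           # evict coldest block
        return self.blocks[block_id]
```

The point of the model: each site pays the WAN cost only for the data it actually touches, while cold data lives solely in the object store.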
Prefetch and Prewarming
The Panzura Freedom filer further enhances local performance through usage- and request-based prefetch functions. These run automatically, and administrators can tune their behavior through data locality rules.
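Request-based prefetch can be illustrated with a toy helper that, given a read at some byte offset, names the next few aligned blocks to warm into the local cache ahead of demand. The function name, the 1 MiB block size, and the depth of three are all illustrative assumptions, not Panzura parameters.

```python
def plan_prefetch(requested_offset, block_size=1 << 20, depth=3):
    """Return the starting offsets of the next `depth` aligned blocks after
    the block containing `requested_offset`, so a caching layer can fetch
    them speculatively while the application is still reading."""
    next_block_start = (requested_offset // block_size + 1) * block_size
    return [next_block_start + i * block_size for i in range(depth)]
```

A sequential reader then finds each subsequent block already local, hiding WAN latency behind work that was started before the data was requested.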
Access your Data Anywhere — Hassle Free
Panzura’s Freedom filer provides a global namespace to access data from any location without having to explicitly transport or replicate data from site to site.
Consolidation of Data
Consolidating data in the cloud eliminates the islands of storage created by site-to-site replication, significantly reducing costs, simplifying management, and eliminating separate backup processes.
Reduce Network Utilization and Cost
The Freedom filer's global CloudFS and integrated network optimization features allow companies to greatly reduce, or even eliminate, their dependency on expensive MPLS links in favor of less expensive, higher-bandwidth public Internet links.
Eliminate Backup and DR
Freedom filers store 100% of their data in the cloud, which can offer sixteen nines (99.99999999999999%) or greater durability. Automated, space-efficient snapshots provide point-in-time recovery with a near-zero RPO, for fast, simple recovery that eliminates the need for separate replication and backup processes.