Replify Accelerator's Cross-Protocol Data Reduction (XDR) technology is a bi-directional network caching software module that allows both ends of the WAN to 'learn' the data payload as it flows between clients and accelerated servers. Subsequent sends and receives between the clients and servers are transmitted mainly as references to the stored blocks, with the cached content being retrieved before presentation to the client or server application.
Only references to cached blocks are sent for subsequent transmissions of previously seen data, irrespective of whether the client is pushing it back to the server or pulling it down from it.
The Network Cache is a collection of data blocks rather than objects or files. Once a file has been cached, only references to its data blocks are sent across the WAN. If the file is amended, only the changed portion of the file is sent across the WAN, alongside references to the unchanged parts. For larger files where updates are frequent and relatively small, the effect can be dramatic, enabling users to save and update their business-critical documents frequently and painlessly.
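The block-and-reference scheme described above can be illustrated with a short sketch. This is not Replify's implementation (the actual block size and chunking scheme are proprietary, so the fixed 8 KB block and SHA-256 references here are assumptions); it only shows the principle: blocks both ends have already seen travel as references, and only unseen blocks cross the WAN as literals.

```python
import hashlib

BLOCK_SIZE = 8192  # hypothetical block size; the real chunking scheme is proprietary

def split_blocks(data: bytes, size: int = BLOCK_SIZE):
    """Split a payload into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def encode(data: bytes, cache: dict) -> list:
    """Encode a payload as a mix of block references and literal blocks.

    Blocks already in the cache are replaced by their hash reference;
    unseen blocks are sent literally and learned for next time.
    """
    wire = []
    for block in split_blocks(data):
        ref = hashlib.sha256(block).digest()
        if ref in cache:
            wire.append(("ref", ref))        # previously seen: send reference only
        else:
            cache[ref] = block               # learn the new block
            wire.append(("literal", block))  # first sighting: send the bytes
    return wire

def decode(wire: list, cache: dict) -> bytes:
    """Rebuild the original payload from references and literals."""
    out = bytearray()
    for kind, payload in wire:
        if kind == "ref":
            out += cache[payload]
        else:
            cache[hashlib.sha256(payload).digest()] = payload
            out += payload
    return bytes(out)
```

On a second transmission of the same data, every block resolves to a reference, so only a stream of short hashes crosses the link; an amended file sends literals only for the blocks that changed.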
The key benefits are a huge reduction in the number of bytes being sent over the WAN and, often more crucially, a massive reduction in the file transfer time.
Configurable Multi-Tiered Cache
Each Replify Accelerator Client and Virtual Appliance has a multi-tiered cache. The first tier is a finite in-memory cache whose size is limited by the available system RAM. The second tier is an encrypted disk cache, generally much larger, which holds items that have been cycled out of the RAM cache; its size is also configurable and scales with the available disk capacity. An LRU (Least Recently Used) algorithm rotates objects out of both the RAM cache and the disk cache. Objects rotated out of the disk cache are learned again the next time they are seen, while objects in the disk cache that are hit frequently are promoted back into the RAM cache.
Shared XDR Cache on a Remote Virtual Appliance
In addition to deploying a virtual appliance in the data center, it is common to deploy one at each remote site with more than ten users sharing network resources. In this scenario there is a common cache for all remote office workers, synchronized with the virtual appliance in the data center. As users (with Replify Clients installed) request content from accelerated application servers, the requests are routed through the Virtual Appliance in their office and then on to the Virtual Appliance in the data center. This vastly improves response times for all users who are accessing the same content.
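The two-hop lookup path can be sketched as below. The function and parameter names are illustrative (not Replify's API): plain dicts stand in for the branch and data-center appliance caches, and `origin_fetch` stands in for a pull from the application server.

```python
def fetch(ref, branch_cache, dc_cache, origin_fetch):
    """Resolve a block reference via the branch appliance, then the data center.

    branch_cache / dc_cache stand in for the XDR caches on the remote-site
    and data-center virtual appliances; origin_fetch pulls the block from
    the accelerated application server.
    """
    block = branch_cache.get(ref)
    if block is not None:
        return block                  # served at the remote site: no WAN traffic
    block = dc_cache.get(ref)
    if block is None:
        block = origin_fetch(ref)     # first sighting anywhere
        dc_cache[ref] = block         # data-center appliance learns it
    branch_cache[ref] = block         # branch appliance learns it too, so
    return block                      # colleagues at the site get a local hit
```

Because the branch appliance learns every block it relays, the second user at a site who requests the same content is served from the shared local cache, which is what drives the improved response times described above.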