Data De-Duplication, often called Common File Elimination, is one of the more sophisticated ways to reduce the raw data from your network servers across all remote offices to a size that can be transmitted over the WAN. It ensures that the same data is never transmitted offsite more than once, saving bandwidth by sending only new, unique data. It achieves this elimination by generating a checksum of each file as it is backed up and comparing it against the checksums of all previously backed-up files. If the checksum matches a previously backed-up file, the file must be a duplicate, and only a shortcut need be transmitted up the line.
Because the checksum depends only on a file's content, it does not matter if the files are on different servers, at different offices, or even have different filenames: identical content always produces the same checksum.
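The checksum-based elimination described above can be sketched in a few lines. This is a minimal illustration, not the product's actual implementation: SHA-256 stands in for whatever checksum algorithm is used, and the `DedupUploader` class and its method names are hypothetical.

```python
import hashlib


def file_checksum(data: bytes) -> str:
    # SHA-256 stands in for the backup product's checksum algorithm (an assumption).
    return hashlib.sha256(data).hexdigest()


class DedupUploader:
    """Tracks checksums of files already backed up, so duplicate content
    is replaced by a lightweight shortcut instead of being re-sent."""

    def __init__(self) -> None:
        # checksum -> path of the first file stored under that checksum
        self.seen: dict[str, str] = {}

    def backup(self, path: str, data: bytes) -> dict:
        digest = file_checksum(data)
        if digest in self.seen:
            # Duplicate content: transmit only a reference, no payload.
            return {"path": path, "ref": digest, "payload": None}
        # New, unique content: record it and transmit the full payload.
        self.seen[digest] = path
        return {"path": path, "ref": digest, "payload": data}
```

Because only the content is hashed, a second file with the same bytes but a different name or server path yields a matching checksum, so its `payload` is empty and only the shortcut crosses the WAN.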