We are currently working on supporting Windows to Windows. Linux to Linux has lower priority as rsync already provides all functionality, it's just a bit slower on fast connections. On slow connections, rsync and cdc_rsync perform very similarly as the sync speed is dominated by the network.
Am I reading this right that onboarding your game to Stadia as a developer involved essentially rsyncing data directly to a Linux cloud instance?
That's.....
Linux to Linux is also an option if there is demand, but currently it's Windows to Linux only.
    hash = (hash << 1) + random_table[data[n]];
    bool chunk_boundary = (hash & magic_pattern) == 0;
per byte. That's only a few ops and very cache friendly. The random table has only 256 entries, 8 bytes each, so it easily fits into L1.

Say we have a 1 GB file and we detect an extra byte at the head of our local copy. Great, what next? We can't replicate this on the receiving end without recopying the file, which is exactly what happens - rsync recreates the target file from pieces of its old copy plus the differences received from the source. Every byte is copied; it's just that some of them are copied locally.
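Stepping back to the hashing loop: that per-byte update is enough to build a complete chunker. A minimal self-contained sketch, where the table seeding and magic_pattern are illustrative choices, not cdc_rsync's actual parameters:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Minimal content-defined chunker built on the rolling-hash update above.
// random_table seeding and magic_pattern are made up for this example.
std::vector<size_t> ChunkBoundaries(const std::vector<uint8_t>& data,
                                    uint64_t magic_pattern) {
  // 256 random 64-bit entries, 2 KB total - fits easily into L1.
  static uint64_t random_table[256];
  static bool initialized = [] {
    std::srand(42);  // fixed seed keeps the example deterministic
    for (uint64_t& v : random_table)
      v = (static_cast<uint64_t>(std::rand()) << 32) ^
          static_cast<uint64_t>(std::rand());
    return true;
  }();
  (void)initialized;

  std::vector<size_t> boundaries;
  uint64_t hash = 0;
  for (size_t n = 0; n < data.size(); ++n) {
    hash = (hash << 1) + random_table[data[n]];
    if ((hash & magic_pattern) == 0)
      boundaries.push_back(n + 1);  // a chunk ends after byte n
  }
  return boundaries;
}
```

Note that each table entry is fully shifted out of the 64-bit hash after 64 steps, so after an insertion the boundaries realign within roughly 64 bytes - which is why one extra byte at the head doesn't invalidate the chunks for the rest of the file.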
In that light, sync tools that operate on fixed-size blocks have one very big advantage - they allow updating target files in place, limiting per-sync I/O to writes of the modified blocks only. This works exceptionally well for DBs, VMs, VHDs, file system containers, etc. It doesn't work well for archives (tars, zips), compressed images (jpgs, resource packs in games) or huge executables.
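The in-place update is simple enough to sketch. A minimal in-memory version, with made-up names and the simplifying assumption that source and target are the same size (a real tool compares per-block hashes over the network rather than raw bytes side by side):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a fixed-size-block sync: bring `target` in line with `source`,
// writing only the blocks that actually changed. Assumes equal sizes.
size_t SyncInPlace(const std::vector<uint8_t>& source,
                   std::vector<uint8_t>& target, size_t block_size) {
  size_t blocks_written = 0;
  for (size_t off = 0; off < source.size(); off += block_size) {
    const size_t len = std::min(block_size, source.size() - off);
    if (!std::equal(source.begin() + off, source.begin() + off + len,
                    target.begin() + off)) {
      std::copy(source.begin() + off, source.begin() + off + len,
                target.begin() + off);
      ++blocks_written;  // only modified blocks cost a write
    }
  }
  return blocks_written;
}
```

The write pattern is the point: a one-byte change inside one 4 KB block costs one 4 KB write, not a whole-file rewrite - which is exactly what you want for DB and VM images, and exactly what an insertion-heavy format defeats.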
In other words - know your tools and know your data. Then match them appropriately.
Also, most modern compression tools have an "rsyncable" option that makes the archives play more nicely with rsync.