WARNING: This is highly experimental and will probably eat your data. Make sure you have good backups before you test it.
This is a filesystem that gives you a seamless local view of a very large repository of files while only keeping a much smaller local cache. It's meant for situations where your disk is too small to hold the full collection, so you'd rather fetch data from a remote server on demand. The use case it was built for was having a very large collection of media (e.g., a multi-terabyte photo collection) and wanting to be able to seamlessly access it at any time on a laptop that only has a few GBs of space.
syncer is built as a FUSE filesystem, so it presents a regular POSIX interface that any app should be able to use. Files are internally split into blocks and hashed. Those blocks get uploaded to any rsync endpoint you want (usually an SSH server). When local storage exceeds the configured limit, the least recently used blocks get evicted. They get brought back into local storage on demand by fetching them from the remote server again.
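To make the block/eviction idea concrete, here is a minimal sketch (not syncer's actual implementation): files are split into fixed-size blocks keyed by a hash, and a tiny LRU cache evicts the least recently used block once a capacity limit is hit. The `BLOCK_SIZE` value, the use of `DefaultHasher`, and all names here are illustrative assumptions; syncer's real block size, hash function, and eviction logic differ.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::{HashMap, VecDeque};
use std::hash::Hasher;

// Hypothetical block size, not syncer's actual value.
const BLOCK_SIZE: usize = 4096;

// Split a byte buffer into fixed-size blocks, each keyed by a hash of its
// contents. Identical blocks hash to the same key, so they deduplicate.
fn split_into_blocks(data: &[u8]) -> Vec<(u64, Vec<u8>)> {
    data.chunks(BLOCK_SIZE)
        .map(|chunk| {
            let mut h = DefaultHasher::new();
            h.write(chunk);
            (h.finish(), chunk.to_vec())
        })
        .collect()
}

// A toy LRU cache over blocks: when over capacity, evict the least
// recently used one. In syncer the evicted block is not lost -- it can be
// re-fetched from the remote server on demand.
struct BlockCache {
    capacity: usize,
    order: VecDeque<u64>, // front = least recently used
    blocks: HashMap<u64, Vec<u8>>,
}

impl BlockCache {
    fn new(capacity: usize) -> Self {
        BlockCache { capacity, order: VecDeque::new(), blocks: HashMap::new() }
    }

    // Reading a block marks it as most recently used.
    fn get(&mut self, key: u64) -> Option<&Vec<u8>> {
        if self.blocks.contains_key(&key) {
            self.order.retain(|k| *k != key);
            self.order.push_back(key);
        }
        self.blocks.get(&key)
    }

    // Insert a block; returns the key of an evicted block, if any.
    fn put(&mut self, key: u64, block: Vec<u8>) -> Option<u64> {
        self.order.retain(|k| *k != key);
        self.order.push_back(key);
        self.blocks.insert(key, block);
        if self.blocks.len() > self.capacity {
            let evicted = self.order.pop_front().unwrap();
            self.blocks.remove(&evicted);
            return Some(evicted);
        }
        None
    }
}

fn main() {
    // Three distinct blocks, but only room to cache two of them.
    let data: Vec<u8> = (0..3 * BLOCK_SIZE).map(|i| (i / BLOCK_SIZE) as u8).collect();
    let mut cache = BlockCache::new(2);
    for (key, block) in split_into_blocks(&data) {
        if let Some(evicted) = cache.put(key, block) {
            println!("evicted block {:x}", evicted);
        }
    }
    println!("cached blocks: {}", cache.blocks.len());
}
```

The real filesystem layers this behind FUSE, so reads that miss the cache trigger a fetch from the rsync endpoint instead of returning an error.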
The basic program works and syncs to a remote rsync/SSH server. This should be enough for a photo collection, which is mostly a set of fixed files that don't change often. But this is still highly experimental and might eat your data. The basic existing features are:
Still on the TODO list:
To install or upgrade just do:
```sh
$ cargo install -f syncer
```
To start the filesystem do something like:
```sh
$ syncer data someserver:~/blobs/ mnt 1000
```
That will give you a filesystem at mnt that you can use normally. Its data comes from the local data folder and from the server. syncer will try to use at most 1GB locally and fetch from the server when needed.
Bug reports and pull requests welcome at https://github.com/pedrocr/syncer
Meet us at #chimper on irc.mozilla.org if you need to discuss a feature or issue in detail, or just for general chat.