Easy, performant, pragmatic CRDTs (in rough order of priority).

WIP: Do not use.
The key aspect of this crate is the `Mergable` trait. The rest of the crate is data types that implement this trait and, in the future, tools to help you manage the structures and syncing.
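A rough sketch of the interface, inferred from how it is used in the examples below; the method names, signatures, and the `Diff` associated type are assumptions rather than the crate's actual definition:

```rust
// Hypothetical sketch of the Mergable interface, based on the usage below.
pub trait Mergable {
    /// The type of a delta between two states.
    type Diff;

    /// Merge another replica's state into this one.
    fn merge(&mut self, other: Self);

    /// Compute a delta that takes `self` to `target`.
    fn diff(&self, target: &Self) -> Self::Diff;

    /// Apply a previously computed delta.
    fn apply(&mut self, diff: Self::Diff);
}
```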
Basic syncing is very simple. To send your changes to someone else, just send them your structure. They can then call `local.merge(remote)` and they have your changes. They can send you their changes and you merge them in the same way, and now you both have the same structure. You can even do this in parallel!
```rust
let mut alice_state = unimplemented!();
let mut bob_state = unimplemented!();

// Send states to the other party.
let for_bob = serialize(&alice_state);
let for_alice = serialize(&bob_state);

// Update local states with remote.
alice_state.merge(deserialize(for_alice));
bob_state.merge(deserialize(for_bob));

assert_eq!(alice_state, bob_state);
```
For 1:1 syncing you maintain two copies of the data structure. One represents the remote state and one represents the local state. Edits are performed on the local state as desired. Occasionally a delta is generated and added to a sync queue.
```rust
let mut remote_state = unimplemented!();
let mut local_state = remote_state.clone(); // Or whatever you had lying around.
let mut sync_queue = Vec::new();

for _ in 0..10 {
    make_changes(&mut local_state);

    let delta = remote_state.diff(&local_state);
    sync_queue.push(serialize(&delta));
    remote_state.apply(delta.clone());
}
```
Once you have established a network connection, the remote can apply your changes.
```rust
let mut remote_state = unimplemented!();

// The sync queue holds serialized deltas, so decode each one before applying.
for delta in fetch_sync_queue() {
    remote_state.apply(deserialize(&delta));
}
```
At this point both nodes have an identical structure (assuming there have been no concurrent changes on the remote).
A common pattern is to have a central server act as the "source of truth". This can be used to allow offline editing (like Google Docs), or combined with a P2P backup for maximum reliability. This pattern provides maximum efficiency for clients at the cost of some latency (edits have to traverse the server).
For clients this looks exactly like 1:1 sync; a simple server looks roughly as follows.
```rust
let mut state = unimplemented!();
let mut deltas = Vec::new();
let mut clients = Vec::new();

for event in unimplemented!("receive events from clients") {
    match event {
        NewClient{client} => {
            // A brand-new client gets the full state and the current version.
            client.send(New{
                state: serialize(&state),
                version: deltas.len(),
            });
            clients.push(client);
        }
        ResumeClient{client, version} => {
            // A returning client only needs the deltas it hasn't seen yet.
            for (version, delta) in deltas.iter().enumerate().skip(version) {
                client.send(Delta{
                    delta: &delta,
                    version: version + 1,
                })
            }
            clients.push(client);
        }
        Delta{client_id, delta} => {
            // Apply the client's delta, broadcast it, then record it.
            state.apply(deserialize(&delta));

            for client in &mut clients {
                client.send(Delta{
                    delta: &delta,
                    version: deltas.len() + 1,
                })
            }

            deltas.push(delta);
        }
    }
}
```
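A rough sketch of the client side of this exchange, assuming the `New` and `Delta` message shapes from the server sketch above; the version bookkeeping shows how a client could later reconnect with `ResumeClient`, and everything else is illustrative:

```rust
// Hypothetical client loop matching the server sketch above.
let mut remote_state = unimplemented!();
let mut version = 0;

for message in unimplemented!("receive messages from the server") {
    match message {
        New{state, version: v} => {
            // Replace our copy of the server state wholesale.
            remote_state = deserialize(&state);
            version = v;
        }
        Delta{delta, version: v} => {
            // Apply an incremental update and remember how far we've synced,
            // so we can later reconnect with ResumeClient{version}.
            remote_state.apply(deserialize(&delta));
            version = v;
        }
    }
}
```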
This server is a simple implementation; some things could be improved. (For example, sending a fresh `New` state to very old clients instead of replaying a long list of deltas.) In the future I would like to provide a server core in this library to make a quality implementation easy.
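As a very rough idea of what such a server core might look like; the `Server` type, its fields, and the `version` method are assumptions sketched from the loop above, not an existing API:

```rust
// Hypothetical server core wrapping the logic from the loop above.
// All names and signatures are assumptions, not part of the crate today.
struct Server<T: Mergable> {
    state: T,
    deltas: Vec<T::Diff>,
}

impl<T: Mergable> Server<T>
where
    T::Diff: Clone,
{
    /// Apply a delta from a client and record it so it can be replayed
    /// to clients that resume from an older version.
    fn update(&mut self, diff: T::Diff) {
        self.state.apply(diff.clone());
        self.deltas.push(diff);
    }

    /// The current version, i.e. the number of deltas applied so far.
    fn version(&self) -> usize {
        self.deltas.len()
    }
}
```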
This is currently tricky. The best option right now is likely to have all clients act as servers, but care must be taken to avoid holding too much state in the clients.
Another option would be to keep a copy of the last-seen state of each peer (for a period of time) and, upon reconnection, generate a delta based on that. This would be feasible if your data is not too large.
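A minimal sketch of that idea, assuming a hypothetical `peer_states` map from peer id to the last state that peer is known to have; `PeerId`, `State`, and `send_to_peer` are illustrative names only:

```rust
use std::collections::HashMap;

// Hypothetical bookkeeping: the last state each peer is known to have.
let mut peer_states: HashMap<PeerId, State> = HashMap::new();

// On reconnection, send only what the peer is missing.
if let Some(last_seen) = peer_states.get(&peer_id) {
    let delta = last_seen.diff(&local_state);
    send_to_peer(&peer_id, serialize(&delta));
} else {
    // No record of this peer: fall back to sending the full state.
    send_to_peer(&peer_id, serialize(&local_state));
}

// Remember what the peer now has (pruned after a period of time).
peer_states.insert(peer_id.clone(), local_state.clone());
```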
In the future we may add support for common-ancestor discovery which would allow for more efficient initial syncing. (This would be similar to how Git does pushes and pulls.)