A thin wrapper around serialized data that adds identity and version information.
See concepts for more details.
```text
Application 1 (DotV1)                           Application 2 (DotV1 and DotV2)
        |                                                 |
 Encode DotV1  |----------------------------------------> | Decode DotV1 to DotV2
        |                                                 | Modify DotV2
 Decode DotV1  | <----------------------------------------| Encode DotV2 back to DotV1
        |                                                 |
```
```rust,skt-main
// Application 1
let dot = DotV1(1, 2);
let bytes = native_model::encode(&dot).unwrap();

// Application 1 sends bytes to Application 2.

// Application 2
// We are able to decode the bytes directly into the newer type DotV2 (upgrade).
let (mut dot, source_version) = native_model::decode::<DotV2>(bytes).unwrap();
// Modify DotV2, then encode it back to the version used by Application 1.
dot.name = "dot".to_string();
let bytes = native_model::encode_downgrade(dot, source_version).unwrap();

// Application 2 sends bytes to Application 1.

// Application 1
let (dot, _) = native_model::decode::<DotV1>(bytes).unwrap();
```
Full example here.
When to use it?
- Your applications that interact with each other are written in Rust.
- Your applications evolve independently and need to read serialized data coming from each other.
- Your applications store data locally and need to read it later with a newer version of the application.
- Your systems need to be upgraded incrementally: instead of upgrading the entire system at once, individual applications can be upgraded one at a time while still being able to communicate with each other.
When not to use it?
- The applications that interact with each other are not all written in Rust.
- Your applications need to communicate with other systems that you don't control.
- You need a human-readable format. (You can wrap a human-readable format like JSON in a native model, but you have to unwrap it to read the data correctly.)
Early development. Not ready for production.
First, set up your serialization format; any format can be used.
Define the following functions and make sure they are in scope wherever you use native_model:
```rust,ignore
fn native_model_encode_body<T>(model: &T) -> Result<Vec<u8>, E>;

fn native_model_decode_body<T>(data: Vec<u8>) -> Result<T, E>;
```
Here `T` and `E` are types that depend on the serialization format you use; only `E` must implement the `std::error::Error` trait.
Examples:
- bincode with encode/decode
- bincode with serde
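As an illustration, here is a minimal sketch of what these two functions could look like with bincode 1.x and its serde integration. The exact bounds and the error type are assumptions; adapt them to the serialization crate you actually use.

```rust,ignore
use serde::{de::DeserializeOwned, Serialize};

// Sketch only: encode/decode bodies backed by bincode + serde.
// bincode::Error implements std::error::Error, as required.
fn native_model_encode_body<T: Serialize>(model: &T) -> Result<Vec<u8>, bincode::Error> {
    bincode::serialize(model)
}

fn native_model_decode_body<T: DeserializeOwned>(data: Vec<u8>) -> Result<T, bincode::Error> {
    bincode::deserialize(&data)
}
```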
Define your model using the `native_model` macro.
Attributes:
- `id = u32`: The unique identifier of the model.
- `version = u32`: The version of the model.
- `from = type`: Optional, the previous version of the model.
  - `type`: The previous version of the model that you use for the `From` implementation.
- `try_from = (type, error)`: Optional, the previous version of the model with error handling.
  - `type`: The previous version of the model that you use for the `TryFrom` implementation.
  - `error`: The error type that you use for the `TryFrom` implementation.
```rust,skt-define-models
use native_model::native_model;

#[native_model(id = 1, version = 1)]
struct DotV1(u32, u32);

#[native_model(id = 1, version = 2, from = DotV1)]
struct DotV2 { name: String, x: u64, y: u64 }

// Implement the conversion between versions: From<DotV1> for DotV2 and From<DotV2> for DotV1.

#[native_model(id = 1, version = 3, from = DotV2)]
struct DotV3 { name: String, cord: Cord }

struct Cord { x: u64, y: u64 }

// Implement the conversion between versions: From<DotV2> for DotV3 and From<DotV3> for DotV2.
```
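The conversions mentioned in the comments above are ordinary `From` implementations. A minimal sketch for the DotV1/DotV2 pair could look like the following; the field mapping, including the empty default name, is an illustrative assumption.

```rust,ignore
// Sketch of the two conversions needed between DotV1 and DotV2.
impl From<DotV1> for DotV2 {
    fn from(dot: DotV1) -> Self {
        // V1 has no name, so start with an empty one (illustrative choice).
        DotV2 { name: String::new(), x: dot.0 as u64, y: dot.1 as u64 }
    }
}

impl From<DotV2> for DotV1 {
    fn from(dot: DotV2) -> Self {
        // The name is dropped (and coordinates truncated) when going back to V1.
        DotV1(dot.x as u32, dot.y as u32)
    }
}
```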
Full example here.
To understand how native_model works, you need to understand the following concepts.

- Identity (`id`): the unique identifier of the model. It is used to identify the model and to prevent decoding data into the wrong type.
- Version (`version`): the version of the model. It is used to check the compatibility between two versions of a model.

Under the hood, native_model is a thin wrapper around the serialized data. The `id` and the `version` are each encoded as a `little_endian::U32`, which adds 8 bytes at the beginning of the data.
```text
+--------------+-------------------+-----------------------------------+
| ID (4 bytes) | Version (4 bytes) | Data (indeterminate-length bytes) |
+--------------+-------------------+-----------------------------------+
```
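For illustration only, here is a sketch of how the 8-byte header could be inspected by hand, following the layout above. This helper is not part of the native_model API.

```rust,ignore
// Hypothetical helper (not part of native_model): split a wrapped payload
// into (id, version, body) according to the layout described above.
fn split_header(bytes: &[u8]) -> Option<(u32, u32, &[u8])> {
    let id = u32::from_le_bytes(bytes.get(0..4)?.try_into().ok()?);
    let version = u32::from_le_bytes(bytes.get(4..8)?.try_into().ok()?);
    Some((id, version, &bytes[8..]))
}
```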
This crate is in an early stage of development, so performance should improve in the future. The goal is a minimal, constant overhead for all data sizes. It uses the zerocopy crate to avoid unnecessary copies.
Current performance:
- Encode time: the overhead grows linearly with the data size.
- Decode time: the overhead is roughly constant at ~162 ps for all data sizes.
| data size             | encode time           | decode time           |
|:---------------------:|:---------------------:|:---------------------:|
| 1 B                   | 40.093 ns - 40.510 ns | 161.87 ps - 162.02 ps |
| 1 KiB (1024 B)        | 116.45 ns - 116.83 ns | 161.85 ps - 162.08 ps |
| 1 MiB (1048576 B)     | 66.697 µs - 67.634 µs | 161.87 ps - 162.18 ps |
| 10 MiB (10485760 B)   | 1.5670 ms - 1.5843 ms | 162.40 ps - 163.52 ps |
| 100 MiB (104857600 B) | 63.778 ms - 64.132 ms | 162.71 ps - 165.10 ps |
Benchmarks of the native_model overhead are available here.
To estimate how long it takes to encode or decode your data, add this overhead to the time taken by your serialization format.
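If you want a rough measurement for your own data and format, a minimal sketch is shown below. It reuses DotV1 and the bincode-based body function sketched earlier (so it assumes DotV1 derives serde::Serialize), and a single `Instant` measurement is only a coarse estimate, not a proper benchmark.

```rust,ignore
use std::time::Instant;

// Rough comparison of the raw serialization time vs. the wrapped encode time.
fn main() {
    let dot = DotV1(1, 2);

    let start = Instant::now();
    let raw = native_model_encode_body(&dot).unwrap();
    println!("raw encode:     {:?} ({} bytes)", start.elapsed(), raw.len());

    let start = Instant::now();
    let wrapped = native_model::encode(&dot).unwrap();
    println!("wrapped encode: {:?} ({} bytes)", start.elapsed(), wrapped.len());
}
```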