fog-pack


A serialization library for content-addressed, decentralized storage.

The fog-pack serialization format is designed from the ground up to be effective and useful for content-addressed storage systems, and to work well in a decentralized network. Because these are the format's highest priorities, it has had to make some tough design choices that other serialization formats do not.

Key Concepts

Four types form the core of fog-pack's concepts: Documents, Entries, Schemas, and Queries. Together, they are used to build up complex, inter-related data in content-addressed storage systems.

So, what does it look like in use? Let's start with a simple idea: we want to make a streaming series of small text posts. It's some kind of blog, so it should have an author, a blog title, and an optional website link. Posts can be attached to the blog as entries, each of which has a creation timestamp, an optional title, and the post content.

We'll start by declaring the documents and the schema:

```rust
// Our Blog's main document
#[derive(Serialize, Deserialize)]
struct Blog {
    title: String,
    author: String,
    // We prefer to omit the field if it's set to None, which is not serde's default
    #[serde(skip_serializing_if = "Option::is_none")]
    link: Option<String>,
}

// Each post in our blog
#[derive(Serialize, Deserialize)]
struct Post {
    created: Timestamp,
    content: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    title: Option<String>,
}

// Build our schema into a completed schema document.
let schema_doc = SchemaBuilder::new(
        MapValidator::new()
            .req_add("title", StrValidator::new().build())
            .req_add("author", StrValidator::new().build())
            .opt_add("link", StrValidator::new().build())
            .build()
    )
    .entry_add("post", MapValidator::new()
            .req_add("created", TimeValidator::new().query(true).ord(true).build())
            .opt_add("title", StrValidator::new().query(true).regex(true).build())
            .req_add("content", StrValidator::new().build())
            .build(),
        None
    )
    .build()
    .unwrap();

// For actual use, we'll turn the schema document into a Schema
let schema = Schema::from_doc(&schema_doc)?;
```

Now that we have our schema and structs, we can make a new blog and make posts to it. We'll sign everything with a cryptographic key, so people can know we're the ones making these posts. We can even make a query that can be used to search for specific posts!

```rust
// Brand new blog time!
let my_key = fog_crypto::identity::IdentityKey::new_temp(&mut rand::rngs::OsRng);
let my_blog = Blog {
    title: "Rusted Gears: A programming blog".into(),
    author: "ElectricCogs".into(),
    link: Some("https://cognoscan.github.io/".into()),
};
let my_blog = NewDocument::new(my_blog, Some(schema.hash()))?.sign(&my_key)?;
let my_blog = schema.validate_new_doc(my_blog)?;
let blog_hash = my_blog.hash();

// First post!
let new_post = Post {
    created: Timestamp::now().unwrap(),
    title: Some("My first post".into()),
    content: "I'm making my first post using fog-pack!".into(),
};
let new_post = NewEntry::new(new_post, "post", &blog_hash)?.sign(&my_key)?;

// We can find entries using a Query:
let query = NewQuery::new("post", MapValidator::new()
    .req_add("title", StrValidator::new().in_add("My first post").build())
    .build()
);

// To complete serialization of all these structs, we need to pass them through the schema one
// more time:
let (blog_hash, encoded_blog): (Hash, Vec<u8>) = schema.encode_doc(my_blog)?;
let (post_hash, encoded_post): (Hash, Vec<u8>) = schema.encode_new_entry(new_post)?.complete()?;
let encoded_query = schema.encode_query(query)?;

// Decoding is also done via the schema:
let my_blog = schema.decode_doc(encoded_blog)?;
let new_post = schema.decode_entry(encoded_post, "post", &blog_hash)?;
let query = schema.decode_query(encoded_query)?;
```

License

Licensed under either of

- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.