This library provides a simple API for Google Firestore based on the official gRPC API:
- Create or update documents using Rust structures and Serde;
- Support for:
  - Querying/streaming documents/objects;
  - Listing documents/objects (with automatic page scrolling support);
  - Listening for changes from Firestore;
  - Transactions;
  - Aggregated queries;
  - Streaming batch writes with automatic throttling to avoid Firestore time limits;
- Fluent high-level and strongly typed API;
- Fully async, based on the Tokio runtime;
- Macro that helps you use JSON paths as references to your structure fields;
- Implements its own Serde serializer to Firestore protobuf values;
- Support for Firestore timestamps with `#[serde(with)]` and a specialized structure;
- Google client based on the gcloud-sdk library that automatically detects the GKE environment or application default accounts for local development;
Cargo.toml:
```toml
[dependencies]
firestore = "0.33"
```
All examples are available in the examples directory.
To run an example with environment variables:
```
PROJECT_ID=<your-google-project-id> cargo run --example crud
```
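The snippets below use a small `config_env_var` helper to read such variables; a minimal sketch of such a helper could look like this:
```rust
// Minimal sketch of the config_env_var helper used in the snippets below:
// it reads an environment variable and turns a missing value into a readable error.
pub fn config_env_var(name: &str) -> Result<String, String> {
    std::env::var(name).map_err(|e| format!("{}: {}", name, e))
}
```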
To create a new instance of the Firestore client you need to provide at least a GCP project ID. Creating a new client for each request is not recommended; create a client once and reuse it whenever possible. Cloning instances is much cheaper than creating a new one.
The client is created using the `FirestoreDb::new` method:
```rust
use firestore::*;

// Create an instance
let db = FirestoreDb::new(&config_env_var("PROJECT_ID")?).await?;
```
This is the recommended way to create a new instance of the client, since it automatically detects the environment and uses credentials, service accounts, Workload Identity on GCP, etc. Look at the Google authentication section below for more details.
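Since cloning is cheap, a common pattern is to create the client once and clone the handle wherever it is needed; a minimal sketch (the spawned task body is purely illustrative):
```rust
// Minimal sketch: create the client once, then clone the cheap handle into tasks.
let db = FirestoreDb::new(&config_env_var("PROJECT_ID")?).await?;

let db_for_task = db.clone(); // clones share the underlying client state
tokio::spawn(async move {
    // ... use db_for_task for reads/writes inside this task ...
});
```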
In case you need to create a new instance explicitly specifying a key file, you can use:
```rust
FirestoreDb::with_options_service_account_key_file(
    FirestoreDbOptions::new(config_env_var("PROJECT_ID")?.to_string()),
    "/tmp/key.json".into()
).await?
```
or, if you need even more flexibility, you can use a preconfigured token source and scopes with:
```rust
FirestoreDb::with_options_token_source(
    FirestoreDbOptions::new(config_env_var("PROJECT_ID")?.to_string()),
    gcloud_sdk::GCP_DEFAULT_SCOPES.clone(),
    gcloud_sdk::TokenSourceType::File("/tmp/key.json".into())
).await?
```
The library provides two APIs:
- Fluent API: to simplify development and improve the developer experience, the library provides a higher-level API starting with v0.12.x. This is the recommended API for all applications.
- Classic and low-level API: the API that existed before 0.12 is still available and not deprecated, so it is fine to continue using it when needed. Furthermore, the Fluent API is built on top of the same classic API and is, generally speaking, a set of smart and convenient constructors. However, the classic API may introduce incompatible changes, so it is not recommended for long-term use.
```rust
use firestore::*;

const TEST_COLLECTION_NAME: &'static str = "test";

let my_struct = MyTestStructure {
    some_id: "test-1".to_string(),
    some_string: "Test".to_string(),
    one_more_string: "Test2".to_string(),
    some_num: 42,
};

// Create
let object_returned: MyTestStructure = db.fluent()
    .insert()
    .into(TEST_COLLECTION_NAME)
    .document_id(&my_struct.some_id)
    .object(&my_struct)
    .execute()
    .await?;

// Update or Create
// (Firestore supports creating documents with update if you provide the document ID).
let object_updated: MyTestStructure = db.fluent()
    .update()
    .fields(paths!(MyTestStructure::{some_num, one_more_string}))
    .in_col(TEST_COLLECTION_NAME)
    .document_id(&my_struct.some_id)
    .object(&MyTestStructure {
        some_num: my_struct.some_num + 1,
        one_more_string: "updated-value".to_string(),
        ..my_struct.clone()
    })
    .execute()
    .await?;

// Get object by id
let find_it_again: Option<MyTestStructure> = db.fluent()
    .select()
    .by_id_in(TEST_COLLECTION_NAME)
    .obj()
    .one(&my_struct.some_id)
    .await?;

// Delete data
db.fluent()
    .delete()
    .from(TEST_COLLECTION_NAME)
    .document_id(&my_struct.some_id)
    .execute()
    .await?;
```
The library supports a rich querying API with filters, ordering, pagination, etc.
```rust
// Query our data as a stream
let object_stream: BoxStream<FirestoreResult<MyTestStructure>> = db.fluent()
    .select()
    .fields(paths!(MyTestStructure::{some_id, some_num, some_string, one_more_string}))
    .from(TEST_COLLECTION_NAME)
    .filter(|q| {
        q.for_all([
            q.field(path!(MyTestStructure::some_num)).is_not_null(),
            q.field(path!(MyTestStructure::some_string)).eq("Test"),
        ])
    })
    .order_by([(
        path!(MyTestStructure::some_num),
        FirestoreQueryDirection::Descending,
    )])
    .obj()
    .stream_query_with_errors()
    .await?;

let as_vec: Vec<MyTestStructure> = object_stream.try_collect().await?;
println!("{:?}", as_vec);
```
Use:
- `q.for_all` for AND conditions;
- `q.for_any` for OR conditions (Firestore has just recently added support for OR conditions).

You can nest `q.for_all`/`q.for_any`.
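For example, a minimal sketch of nesting them (reusing `MyTestStructure` from above; the concrete field values are just illustrative):
```rust
// Sketch: (some_num is not null) AND (some_string == "Test" OR one_more_string == "Test2")
let filtered: Vec<MyTestStructure> = db.fluent()
    .select()
    .from(TEST_COLLECTION_NAME)
    .filter(|q| {
        q.for_all([
            q.field(path!(MyTestStructure::some_num)).is_not_null(),
            q.for_any([
                q.field(path!(MyTestStructure::some_string)).eq("Test"),
                q.field(path!(MyTestStructure::one_more_string)).eq("Test2"),
            ]),
        ])
    })
    .obj()
    .query()
    .await?;
```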
You can also get documents by ID, either one by one or in batches:
```rust
// Get an object by id
let find_it_again: Option<MyTestStructure> = db.fluent()
    .select()
    .by_id_in(TEST_COLLECTION_NAME)
    .obj()
    .one(&my_struct.some_id)
    .await?;

// Get a batch of objects by their IDs as a stream of (id, optional object) pairs
let object_stream: BoxStream<(String, Option<MyTestStructure>)> = db.fluent()
    .select()
    .by_id_in(TEST_COLLECTION_NAME)
    .obj()
    .batch(vec!["test-0", "test-5"])
    .await?;
```
By default, types such as `DateTime<Utc>` are serialized as strings (the same representation used for JSON).
To change this behaviour and support Firestore timestamps at the database level, there are two options:
- the `#[serde(with)]` attributes:
```rust
struct MyTestStructure {
    #[serde(with = "firestore::serialize_as_timestamp")]
    created_at: DateTime<Utc>,

    #[serde(default)]
    #[serde(with = "firestore::serialize_as_optional_timestamp")]
    updated_at: Option<DateTime<Utc>>,
}
```
- using the type `FirestoreTimestamp`:
```rust
struct MyTestStructure {
    created_at: firestore::FirestoreTimestamp,
    updated_at: Option<firestore::FirestoreTimestamp>,
}
```
This will change it only for Firestore serialization; it still serializes as a string for JSON (so you can reuse the same model for JSON and Firestore).
In your queries you need to use the wrapping type `firestore::FirestoreTimestamp`, for example:
```rust
q.field(path!(MyTestStructure::created_at))
    .less_than_or_equal(firestore::FirestoreTimestamp(Utc::now()))
```
You can work with nested collections by specifying the path/location of a parent for documents:
```rust
// Creating a parent doc
db.fluent()
    .insert()
    .into(TEST_PARENT_COLLECTION_NAME)
    .document_id(&parent_struct.some_id)
    .object(&parent_struct)
    .execute()
    .await?;

// The doc path where we store our children
let parent_path = db.parent_path(TEST_PARENT_COLLECTION_NAME, parent_struct.some_id)?;

// Create a child doc
db.fluent()
    .insert()
    .into(TEST_CHILD_COLLECTION_NAME)
    .document_id(&child_struct.some_id)
    .parent(&parent_path)
    .object(&child_struct)
    .execute()
    .await?;

// Listing children
println!("Listing all children");
let objs_stream: BoxStream<MyChildStructure> = db.fluent()
    .list()
    .from(TEST_CHILD_COLLECTION_NAME)
    .parent(&parent_path)
    .obj()
    .stream_all()
    .await?;
```
Complete example available here.
To manage transactions manually you can use `db.begin_transaction()`, and then the Fluent API to add the operations needed in the transaction.
```rust
let mut transaction = db.begin_transaction().await?;

db.fluent()
    .update()
    .fields(paths!(MyTestStructure::{some_string}))
    .in_col(TEST_COLLECTION_NAME)
    .document_id("test-0")
    .object(&MyTestStructure {
        some_id: format!("test-0"),
        some_string: "UpdatedTest".to_string(),
    })
    .add_to_transaction(&mut transaction)?;

db.fluent()
    .delete()
    .from(TEST_COLLECTION_NAME)
    .document_id("test-5")
    .add_to_transaction(&mut transaction)?;

transaction.commit().await?;
```
You may also execute transactions that automatically retry with exponential backoff using `run_transaction`.
```rust
db.run_transaction(|db, transaction| {
    Box::pin(async move {
        let mut test_structure: MyTestStructure = db
            .fluent()
            .select()
            .by_id_in(TEST_COLLECTION_NAME)
            .obj()
            .one(TEST_DOCUMENT_ID)
            .await?
            .expect("Missing document");

        // Perform some kind of operation that depends on the state of the document
        test_structure.test_string += "a";

        db.fluent()
            .update()
            .fields(paths!(MyTestStructure::{
                test_string
            }))
            .in_col(TEST_COLLECTION_NAME)
            .document_id(TEST_DOCUMENT_ID)
            .object(&test_structure)
            .add_to_transaction(transaction)?;

        Ok(())
    })
})
.await?;
```
See the complete example available here.
Please note that Firestore doesn't support creating documents inside transactions (i.e. generating document IDs automatically), so you need to use `update()` to create documents implicitly, specifying your own IDs, as sketched below.
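A minimal sketch of such an implicit create inside a transaction (the document ID and field values here are just illustrative):
```rust
// Sketch: "create" a document inside a transaction by updating it under an ID you choose.
db.fluent()
    .update()
    .in_col(TEST_COLLECTION_NAME)
    .document_id("my-own-id-1") // client-chosen ID, not generated by Firestore
    .object(&MyTestStructure {
        some_id: "my-own-id-1".to_string(),
        some_string: "Created via update".to_string(),
    })
    .add_to_transaction(&mut transaction)?;
```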
Firestore provides additional generated fields for each document you create:
- `_firestore_id`: the generated document ID (when it is not specified by the client);
- `_firestore_created`: the time at which the document was created;
- `_firestore_updated`: the time at which the document was last changed.

To be able to read them, the library makes them available as system fields for the Serde deserializer under reserved names, so you can specify them in your structures as:
```rust
struct MyTestStructure {
    #[serde(alias = "_firestore_id")]
    id: Option<String>,
    // _firestore_created and _firestore_updated can be mapped the same way
    // using #[serde(alias = "...")] on optional timestamp fields
}
```
Complete example available here.
Sometimes a static structure may restrict you from working with dynamic data, so there is a way to use the Fluent API to work with documents without introducing structures at all.
```rust
let fields: HashMap<&str, FirestoreValue> = [
    ("some_id", my_struct.some_id.clone().into()),
    ("some_string", my_struct.some_string.clone().into()),
    ("one_more_string", my_struct.one_more_string.clone().into()),
    ("some_num", my_struct.some_num.into()),
    ("created_at", my_struct.created_at.into()),
]
.into_iter()
.collect();

let object_returned = db.fluent()
    .insert()
    .into(TEST_COLLECTION_NAME)
    .document_id(&my_struct.some_id)
    .document(FirestoreDb::serialize_map_to_doc("", fields)?)
    .execute()
    .await?;
```
Full example available here.
The library supports server side document transformations in transactions and batch writes:
```rust
// Only transformation
db.fluent()
    .update()
    .in_col(TEST_COLLECTION_NAME)
    .document_id("test-4")
    .transforms(|t| { // Transformations
        t.fields([
            t.field(path!(MyTestStructure::some_num)).increment(10),
            t.field(path!(MyTestStructure::some_array)).append_missing_elements([4, 5]),
            t.field(path!(MyTestStructure::some_array)).remove_all_from_array([3]),
        ])
    })
    .only_transform()
    .add_to_transaction(&mut transaction)?; // or add_to_batch

// Update and transform (in this order and atomically):
db.fluent()
    .update()
    .in_col(TEST_COLLECTION_NAME)
    .document_id("test-5")
    .object(&my_obj) // Updating the object with the fields here
    .transforms(|t| { // Transformations after the update
        t.fields([
            t.field(path!(MyTestStructure::some_num)).increment(10),
        ])
    })
    .add_to_transaction(&mut transaction)?; // or add_to_batch
```
To help with asynchronous event listeners, the library provides a high-level API for listening to events from Firestore on a separate thread.
The listener implementation needs to be provided with storage for the last received token per target, so that it can resume listening for changes from the last handled token and avoid receiving all previous changes again.
The library provides basic implementations for storing the tokens, but you can implement your own, more sophisticated storage if needed:
- `FirestoreTempFilesListenStateStorage`: resume tokens stored as temporary files on the local FS;
- `FirestoreMemListenStateStorage`: in-memory storage backed by a HashMap (with this implementation, if you restart your app you will receive all notifications again).
```rust
let mut listener = db.create_listener(
    FirestoreTempFilesListenStateStorage::new(), // or FirestoreMemListenStateStorage or your own implementation
).await?;

// Adding query listener
db.fluent()
    .select()
    .from(TEST_COLLECTION_NAME)
    .listen()
    .add_target(TEST_TARGET_ID_BY_QUERY, &mut listener)?;

// Adding docs listener by IDs
db.fluent()
    .select()
    .by_id_in(TEST_COLLECTION_NAME)
    .batch_listen([doc_id1, doc_id2])
    .add_target(TEST_TARGET_ID_BY_DOC_IDS, &mut listener)?;

listener
    .start(|event| async move {
        match event {
            FirestoreListenEvent::DocumentChange(ref doc_change) => {
                println!("Doc changed: {:?}", doc_change);

                if let Some(doc) = &doc_change.document {
                    let obj: MyTestStructure =
                        FirestoreDb::deserialize_doc_to::<MyTestStructure>(doc)
                            .expect("Deserialized object");
                    println!("As object: {:?}", obj);
                }
            }
            _ => {
                println!("Received a listen response event to handle: {:?}", event);
            }
        }

        Ok(())
    })
    .await?;

// Wait for some events like Ctrl-C, signals, etc.,
// and then shutdown:
listener.shutdown().await?;
```
See the complete example in the examples directory.
By default, all `Option<>` fields are serialized as absent fields, which is convenient in many cases. However, sometimes you need explicit nulls.
To help with that, there are additional attributes implemented for `serde(with)`:
- For regular fields:
```rust
#[serde(default)]
#[serde(with = "firestore::serialize_as_null")]
test_null: Option<String>,
```
- For Firestore timestamp fields:
```rust
#[serde(default)]
#[serde(with = "firestore::serialize_as_null_timestamp")]
test_null: Option<DateTime<Utc>>,
```
The library supports aggregation functions for queries:
```rust
db.fluent()
    .select()
    .from(TEST_COLLECTION_NAME)
    .aggregate(|a| a.fields([a.field(path!(MyAggTestStructure::counter)).count()]))
    .obj()
    .query()
    .await?;
```
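The aggregation result above is deserialized into a small struct; a minimal sketch of the assumed `MyAggTestStructure`:
```rust
use serde::{Deserialize, Serialize};

// Sketch of the struct the count() aggregation above is deserialized into.
#[derive(Debug, Clone, Deserialize, Serialize)]
struct MyAggTestStructure {
    counter: usize,
}
```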
The library supports write preconditions:
```rust
.precondition(FirestoreWritePrecondition::Exists(true))
```
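For example, a minimal sketch of a delete that only succeeds if the document already exists:
```rust
// Sketch: delete a document only if it currently exists.
db.fluent()
    .delete()
    .from(TEST_COLLECTION_NAME)
    .document_id("test-5")
    .precondition(FirestoreWritePrecondition::Exists(true))
    .execute()
    .await?;
```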
The library looks for credentials in the following places, preferring the first location found:
- A JSON file whose path is specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
- A JSON file in a location known to the gcloud command-line tool, created using `gcloud auth application-default login`.
- On Google Compute Engine, it fetches credentials from the metadata server.
Don't confuse `gcloud auth login` with `gcloud auth application-default login` for local development: the former authorizes only the `gcloud` tool to access the Cloud Platform, while the latter obtains user access credentials via a web flow and puts them in the well-known location for Application Default Credentials (ADC).
This command is useful when you are developing code that would normally use a service account, but you need to run it in a local development environment where it is easier to provide user credentials.
So for local development you need to use `gcloud auth application-default login`.
When you design your Dockerfile, make sure you either install root CA certificates or use base images that already include them. If you don't have certs installed, you will usually observe errors such as:
```
SystemError(FirestoreSystemError { public: FirestoreErrorPublicGenericDetails { code: "GrpcStatus(tonic::transport::Error(Transport, hyper::Error(Connect, Custom { kind: InvalidData, error: InvalidCertificateData(\"invalid peer certificate: UnknownIssuer\") })))" }, message: "GCloud system error: Tonic/gRPC error: transport error" })
```
For Debian-based images, this can usually be fixed by installing this package:
```
RUN apt-get install -y ca-certificates
```
Also, consider using Google Distroless images, since they are secure, already include root CA certs, and are optimised for size.
To work with the Google Firestore emulator you can use the environment variable:
```
export FIRESTORE_EMULATOR_HOST="localhost:8080"
```
or specify it as an option using `FirestoreDb::with_options()`.
There are integration tests in the tests directory that run for every commit against a real Firestore instance allocated for testing purposes. Be aware not to introduce huge document reads/updates, and keep your test collections isolated from other tests.
Apache Software License (ASL)
Abdulla Abdurakhmanov