GOOD-ORMNING

Good-ormning is an ORM, probably? In a nutshell:

  1. Define schemas and queries in build.rs
  2. Good-ormning generates a function to set up/migrate the database
  3. Good-ormning generates functions for each query

Features

Like other Rust ORMs, Good-ormning doesn't abstract away from actual database workflows; instead, it aims to add type checking to normal SQL.

See Comparisons, below, for information on how Good-ormning differs from other Rust ORMs.

Current status

Alpha.

Supported databases

  - SQLite (via rusqlite; used in all examples in this README)

Getting started

First time

  1. You'll need the following runtime dependencies:

    - rusqlite (the SQLite driver used by the generated code in this README)

    And build.rs dependencies:

    - good-ormning itself (see the Cargo.toml sketch after this list)

  2. Create a build.rs and define your initial schema version and queries

  3. Call goodormning::generate() to output the generated code
  4. In your code, after creating a database connection, call migrate
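For reference, the Cargo.toml wiring might look like the following sketch (the package name and version pins are assumptions, not exact requirements; in build.rs the library is referenced as goodormning):

```toml
[dependencies]
# Runtime driver for the generated SQLite code shown in this README.
rusqlite = "*"  # pin a real version

[build-dependencies]
# Runs at build time to generate the migration and query functions.
good-ormning = "*"  # package name assumed; pin a real version
```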

Schema changes

  1. Copy your previous schema version, leaving the old one untouched, and modify the new schema and queries as you wish.
  2. Pass both the old and new schema versions to goodormning::generate(), which will generate the new migration statements (see the sketch after this list).
  3. At runtime, the migrate call will make sure the database is updated to the new schema version.
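For illustration, a two-version build.rs might look like this sketch. It assumes the same builder API as the full example below; the email field and its schema ID are hypothetical:

```rust
fn main() {
    println!("cargo:rerun-if-changed=build.rs");
    let root = std::path::PathBuf::from(std::env::var("CARGO_MANIFEST_DIR").unwrap());

    // v0: the original schema, copied and then left untouched.
    let mut v0 = Version::default();
    let users_v0 = v0.table("zQLEK3CT0", "users");
    users_v0.field(&mut v0, "zLQI9HQUQ", "name", field_str().build());

    // v1: the same table plus a hypothetical new "email" field.
    let mut v1 = Version::default();
    let users_v1 = v1.table("zQLEK3CT0", "users");
    users_v1.field(&mut v1, "zLQI9HQUQ", "name", field_str().build());
    users_v1.field(&mut v1, "z7B1Q2XN4", "email", field_str().build());

    // Passing both versions lets generate() derive the v0 -> v1 migration.
    goodormning::sqlite::generate(&root.join("src/queries.rs"), vec![
        (0usize, v0),
        (1usize, v1),
    ], vec![
        // Queries, written against the latest version (v1).
    ]).unwrap();
}
```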

Example

This build.rs file

```rust
fn main() {
    println!("cargo:rerun-if-changed=build.rs");
    let root = std::path::PathBuf::from(std::env::var("CARGO_MANIFEST_DIR").unwrap());
    let mut latest_version = Version::default();
    let users = latest_version.table("zQLEK3CT0", "users");
    let id = users.rowid_field(&mut latest_version, None);
    let name = users.field(&mut latest_version, "zLQI9HQUQ", "name", field_str().build());
    let points = users.field(&mut latest_version, "zLAPH3H29", "points", field_i64().build());
    goodormning::sqlite::generate(&root.join("tests/sqlite_gen_hello_world.rs"), vec![
        // Versions
        (0usize, latest_version),
    ], vec![
        // Queries
        new_insert(&users, vec![
            (name.clone(), Expr::Param {
                name: "name".into(),
                type_: name.type_.type_.clone(),
            }),
            (points.clone(), Expr::Param {
                name: "points".into(),
                type_: points.type_.type_.clone(),
            }),
        ]).build_query("create_user", QueryResCount::None),
        new_select(&users).where_(Expr::BinOp {
            left: Box::new(Expr::Field(id.clone())),
            op: BinOp::Equals,
            right: Box::new(Expr::Param {
                name: "id".into(),
                type_: id.type_.type_.clone(),
            }),
        }).return_fields(&[&name, &points]).build_query("get_user", QueryResCount::One),
        new_select(&users).return_field(&id).build_query("list_users", QueryResCount::Many),
    ]).unwrap();
}
```

Generates this code

```rust
#[derive(Debug)]
pub struct GoodError(pub String);

impl std::fmt::Display for GoodError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        self.0.fmt(f)
    }
}

impl std::error::Error for GoodError { }

impl From<rusqlite::Error> for GoodError {
    fn from(value: rusqlite::Error) -> Self {
        GoodError(value.to_string())
    }
}

pub fn migrate(db: &mut rusqlite::Connection) -> Result<(), GoodError> {
    db.execute(
        "create table if not exists _goodversion (rid int primary key, version bigint not null, lock int not null);",
        (),
    )?;
    db.execute("insert into _goodversion (rid, version, lock) values (0, -1, 0) on conflict do nothing;", ())?;
    loop {
        let txn = db.transaction()?;
        match (|| {
            let mut stmt =
                txn.prepare("update _goodversion set lock = 1 where rid = 0 and lock = 0 returning version")?;
            let mut rows = stmt.query(())?;
            let version = match rows.next()? {
                Some(r) => {
                    let ver: i64 = r.get(0usize)?;
                    ver
                },
                None => return Ok(false),
            };
            drop(rows);
            stmt.finalize()?;
            if version > 0i64 {
                return Err(GoodError(format!(
                    "The latest known version is {}, but the schema is at unknown version {}",
                    0i64,
                    version
                )));
            }
            if version < 0i64 {
                txn.execute("create table \"users\" ( \"name\" text not null , \"points\" integer not null )", ())?;
            }
            txn.execute("update _goodversion set version = $1, lock = 0", rusqlite::params![0i64])?;
            let out: Result<bool, GoodError> = Ok(true);
            out
        })() {
            Err(e) => {
                match txn.rollback() {
                    Err(e1) => {
                        return Err(GoodError(format!(
                            "{}\n\nRolling back the transaction due to the above also failed: {}",
                            e,
                            e1
                        )));
                    },
                    Ok(_) => {
                        return Err(e);
                    },
                };
            },
            Ok(migrated) => {
                match txn.commit() {
                    Err(e) => {
                        return Err(GoodError(format!("Error committing the migration transaction: {}", e)));
                    },
                    Ok(_) => {
                        if migrated {
                            return Ok(());
                        } else {
                            std::thread::sleep(std::time::Duration::from_millis(5 * 1000));
                        }
                    },
                };
            },
        }
    }
}

pub fn create_user(db: &mut rusqlite::Connection, name: &str, points: i64) -> Result<(), GoodError> {
    db.execute("insert into \"users\" ( \"name\" , \"points\" ) values ( $1 , $2 )", rusqlite::params![name, points])
        .map_err(|e| GoodError(e.to_string()))?;
    Ok(())
}

pub struct DbRes1 {
    pub name: String,
    pub points: i64,
}

pub fn get_user(db: &mut rusqlite::Connection, id: i64) -> Result<DbRes1, GoodError> {
    let mut stmt = db.prepare(
        "select \"users\" . \"name\" , \"users\" . \"points\" from \"users\" where ( \"users\" . \"rowid\" = $1 )",
    )?;
    let mut rows = stmt.query(rusqlite::params![id]).map_err(|e| GoodError(e.to_string()))?;
    let r = rows.next()?.ok_or_else(|| GoodError("Query expected to return one row but returned no rows".into()))?;
    Ok(DbRes1 {
        name: {
            let x: String = r.get(0usize)?;
            x
        },
        points: {
            let x: i64 = r.get(1usize)?;
            x
        },
    })
}

pub fn list_users(db: &mut rusqlite::Connection) -> Result<Vec<i64>, GoodError> {
    let mut out = vec![];
    let mut stmt = db.prepare("select \"users\" . \"rowid\" from \"users\"")?;
    let mut rows = stmt.query(rusqlite::params![]).map_err(|e| GoodError(e.to_string()))?;
    while let Some(r) = rows.next()? {
        out.push({
            let x: i64 = r.get(0usize)?;
            x
        });
    }
    Ok(out)
}
```

And can be used like

```rust
fn main() {
    use sqlite_gen_hello_world as queries;

    let mut db = rusqlite::Connection::open_in_memory().unwrap();
    queries::migrate(&mut db).unwrap();
    queries::create_user(&mut db, "rust human", 0).unwrap();
    for user_id in queries::list_users(&mut db).unwrap() {
        let user = queries::get_user(&mut db, user_id).unwrap();
        println!("User {}: {}", user_id, user.name);
    }
}
```

Output:

    User 1: rust human

Usage details

Schema IDs and IDs

IDs are used in both SQL and Rust, so they must be valid in both (although some munging is applied to IDs in Rust if they clash with keywords). Depending on the database, you can change IDs arbitrarily between schema versions, but swapping IDs in consecutive versions isn't currently supported. If you need a swap, do it over three versions (like v0: A and B, v1: A_ and B, v2: B and A).

Schema IDs are internal IDs used to match fields across versions, in order to identify renames, deletes, etc. Schema IDs must not change once used in a version. I recommend using randomly generated IDs, via a macro (see the sketch below).
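For example, a throwaway generator like the following sketch can mint IDs to paste into build.rs (the rand crate and the ID format are assumptions; any source of unique strings works):

```rust
// Hypothetical helper: prints a fresh schema ID in the same style as the
// zQLEK3CT0-style IDs used in this README. Assumes the `rand` crate.
fn main() {
    use rand::Rng;
    let mut rng = rand::thread_rng();
    let suffix: String = (0..8)
        .map(|_| rng.sample(rand::distributions::Alphanumeric) as char)
        .collect();
    println!("z{}", suffix.to_uppercase());
}
```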

Types and queries

Use the type_* and field_* functions to get expression and field type builders. Use new_insert/new_select/new_update/new_delete to get a query builder for the associated query type.
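As a rough sketch of how these compose, here is a query-builder expression that could be dropped into the queries vec of generate(), mirroring the get_user select in the example above (whether new_delete supports the same where_/build_query methods is an assumption):

```rust
// Hypothetical delete query, built like the select in the example above.
new_delete(&users)
    .where_(Expr::BinOp {
        left: Box::new(Expr::Field(id.clone())),
        op: BinOp::Equals,
        right: Box::new(Expr::Param {
            name: "id".into(),
            type_: id.type_.type_.clone(),
        }),
    })
    .build_query("delete_user", QueryResCount::None)
```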

Custom types

When defining a field in the schema, call .custom("mycrate::MyString", type_str().build()) on the field type builder (or pass it in as Some("mycrate::MyType".to_string()) if creating the type structure directly).

Custom types need to implement functions like this:

```rust
pub struct MyString(pub String);

impl MyString {
    pub fn to_sql(&self) -> &str {
        &self.0
    }

    pub fn from_sql(s: String) -> Result<Self, MyErr> {
        Ok(Self(s))
    }
}
```

Any std::error::Error can be used for the error. The to_sql result and the from_sql argument should correspond to the base type you specified. If you're not sure which type that is, guess; when you compile, you'll get a compiler error telling you which type you need.

Comparisons

Vs Diesel

Good-ormning is functionally most similar to Diesel.

Diesel

Good-ormning

Vs SQLx

SQLx

Good-ormning

Vs SeaORM

SeaORM focuses on runtime checks rather than compile-time checks, so the focus is quite different.

A few words on the future

Obviously, writing an SQL VM isn't great. The ideal solution would be for popular databases to expose their type-checking routines as libraries, so they could be imported into external programs, the way Go publishes reusable AST-parsing and type-checking libraries.

It would be great to provide more flexibility in migrations, but for downtime-free deployments of complex migrations the application code also needs to be adjusted significantly. Common advice appears to be to make smaller, incremental, backward-compatible migrations and to spread larger changes over multiple versions and deploys, which seems a reasonable solution.