Good-ormning is an ORM, probably? In a nutshell: you define your schema and queries in `build.rs`, and Good-ormning generates typed Rust functions for each query plus a `migrate` function that brings the database up to the latest schema version.
Like other Rust ORMs, Good-ormning doesn't abstract away from actual database workflows, but instead aims to enhance type checking with normal SQL.
See Comparisons, below, for information on how Good-ormning differs from other Rust ORMs.
Alpha:

- PostgreSQL (`pg`)
- Sqlite (`sqlite`)
You'll need the following runtime dependencies:

- `good-ormning-runtime`
- `tokio-postgres` for PostgreSQL
- `rusqlite` for Sqlite

And `build.rs` dependencies:

- `good-ormning`
And you must enable one (or more) of the database features:

- `pg`
- `sqlite`

plus maybe `chrono` for `DateTime` support.
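As a rough sketch (crate versions are placeholders, and exactly which crates expose the `sqlite`/`pg`/`chrono` features is worth double-checking against the crate docs), a Sqlite setup's Cargo.toml might look like:

```toml
# Sketch only: version requirements are placeholders.
[dependencies]
good-ormning-runtime = "*"
rusqlite = "*"

[build-dependencies]
# Feature names from the list above; assumed here to be enabled on the code generator.
good-ormning = { version = "*", features = ["sqlite", "chrono"] }
```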
1. Create a `build.rs` and define your initial schema version and queries.
2. Call `goodormning::generate()` to output the generated code.
3. Use the generated `migrate` function and query functions from your own code.
4. When you need to change the schema, define a new version and call `goodormning::generate()` again, which will generate the new migration statements. The `migrate` call will make sure the database is updated to the new schema version.

This `build.rs` file
```rust
fn main() {
    println!("cargo:rerun-if-changed=build.rs");
    // Assumed for this excerpt: the generated file is written relative to the crate
    // root (imports of the goodormning schema/query builder types are omitted).
    let root = std::path::PathBuf::from(std::env::var("CARGO_MANIFEST_DIR").unwrap());
    let mut latest_version = Version::default();
    let users = latest_version.table("zQLEK3CT0", "users");
    let id = users.rowid_field(&mut latest_version, None);
    let name = users.field(&mut latest_version, "zLQI9HQUQ", "name", field_str().build());
    let points = users.field(&mut latest_version, "zLAPH3H29", "points", field_i64().build());
    goodormning::sqlite::generate(&root.join("tests/sqlite_gen_hello_world.rs"), vec![
        // Versions
        (0usize, latest_version)
    ], vec![
        // Queries
        new_insert(&users, vec![(name.clone(), Expr::Param {
            name: "name".into(),
            type_: name.type_.type_.clone(),
        }), (points.clone(), Expr::Param {
            name: "points".into(),
            type_: points.type_.type_.clone(),
        })]).build_query("create_user", QueryResCount::None),
        new_select(&users).where_(Expr::BinOp {
            left: Box::new(Expr::Field(id.clone())),
            op: BinOp::Equals,
            right: Box::new(Expr::Param {
                name: "id".into(),
                type_: id.type_.type_.clone(),
            }),
        }).return_fields(&[&name, &points]).build_query("get_user", QueryResCount::One),
        new_select(&users).return_field(&id).build_query("list_users", QueryResCount::Many)
    ]).unwrap();
}
```
Generates this code
```rust
pub struct GoodError(pub String);
impl std::fmt::Display for GoodError { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { self.0.fmt(f) } }
impl std::error::Error for GoodError { }
impl From<rusqlite::Error> for GoodError {
    fn from(e: rusqlite::Error) -> Self { GoodError(e.to_string()) }
}
pub fn migrate(db: &mut rusqlite::Connection) -> Result<(), GoodError> {
    db.execute(
        "create table if not exists __good_version (rid int primary key, version bigint not null, lock int not null);",
        (),
    )?;
    db.execute("insert into __good_version (rid, version, lock) values (0, -1, 0) on conflict do nothing;", ())?;
    loop {
        let txn = db.transaction()?;
        match (|| {
            let mut stmt =
                txn.prepare("update __good_version set lock = 1 where rid = 0 and lock = 0 returning version")?;
            let mut rows = stmt.query(())?;
            let version = match rows.next()? {
                Some(r) => {
                    let ver: i64 = r.get(0usize)?;
                    ver
                },
                None => return Ok(false),
            };
            drop(rows);
            stmt.finalize()?;
            if version > 0i64 {
                return Err(
                    GoodError(
                        format!(
                            "The latest known version is {}, but the schema is at unknown version {}",
                            0i64,
                            version
                        ),
                    ),
                );
            }
            if version < 0i64 {
                txn.execute("create table \"users\" ( \"name\" text not null , \"points\" integer not null )", ())?;
            }
            txn.execute("update __good_version set version = $1, lock = 0", rusqlite::params![0i64])?;
            let out: Result<bool, GoodError> = Ok(true);
            out
        })() {
            Err(e) => {
                // Dropping the transaction without committing rolls back any partial changes.
                drop(txn);
                return Err(e);
            },
            Ok(migrated) => {
                txn.commit()?;
                if migrated {
                    return Ok(());
                }
                // Another connection holds the migration lock; wait briefly and retry.
                std::thread::sleep(std::time::Duration::from_millis(5));
            },
        }
    }
}
pub fn create_user(db: &mut rusqlite::Connection, name: &str, points: i64) -> Result<(), GoodError> {
    db.execute("insert into \"users\" ( \"name\" , \"points\" ) values ( $1 , $2 )", rusqlite::params![name, points])
        .map_err(|e| GoodError(e.to_string()))?;
    Ok(())
}
pub struct DbRes1 { pub name: String, pub points: i64, }
pub fn get_user(db: &mut rusqlite::Connection, id: i64) -> Result<DbRes1, GoodError> {
    let mut stmt = db.prepare("select \"name\" , \"points\" from \"users\" where ( \"rowid\" = $1 )").map_err(|e| GoodError(e.to_string()))?;
    let mut rows = stmt.query(rusqlite::params![id]).map_err(|e| GoodError(e.to_string()))?;
    let r = rows.next().map_err(|e| GoodError(e.to_string()))?.ok_or_else(|| GoodError("Expected one result, got none".into()))?;
    Ok(DbRes1 {
        name: r.get(0usize).map_err(|e| GoodError(e.to_string()))?,
        points: r.get(1usize).map_err(|e| GoodError(e.to_string()))?,
    })
}
pub fn list_users(db: &mut rusqlite::Connection) -> Result<Vec<i64>, GoodError> {
    let mut out = vec![];
    let mut stmt = db.prepare("select \"rowid\" from \"users\"").map_err(|e| GoodError(e.to_string()))?;
    let mut rows = stmt.query(()).map_err(|e| GoodError(e.to_string()))?;
    while let Some(r) = rows.next().map_err(|e| GoodError(e.to_string()))? {
        out.push(r.get(0usize).map_err(|e| GoodError(e.to_string()))?);
    }
    Ok(out)
}
```
And can be used like
```rust
fn main() {
    use sqlite_gen_hello_world as queries;

    let mut db = rusqlite::Connection::open_in_memory().unwrap();
    queries::migrate(&mut db).unwrap();
    queries::create_user(&mut db, "rust human", 0).unwrap();
    for user_id in queries::list_users(&mut db).unwrap() {
        let user = queries::get_user(&mut db, user_id).unwrap();
        println!("User {}: {}", user_id, user.name);
    }
}
```
This outputs:

```
User 1: rust human
```
Cargo features:

- `pg` - enables generating code for PostgreSQL
- `sqlite` - enables generating code for Sqlite
- `chrono` - enables datetime field/expression types

"Schema IDs" are internal ids used for matching fields across versions, to identify renames, deletes, etc. Schema IDs must not change once used in a version; I recommend using randomly generated IDs, via a macro. Changing a Schema ID is treated like a delete followed by a create.
"IDs" are used both in SQL (for fields) and Rust (in parameters and returned data structures), so must be valid in both (however, some munging is automatically applied to ids in Rust if they clash with keywords). Depending on the database, you can change IDs arbitrarily between schema versions but swapping IDs in consecutive versions isn't currently supported - if you need to do swaps do it over three different versions (like v0
: A
and B
, v1
: A_
and B
, v2
: B
and A
).
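For example, a rename is expressed by keeping the field's Schema ID and changing only its ID in a later version. This is a sketch reusing the made-up schema IDs from the `build.rs` example above, not output from the library:

```rust
// Version 1: same table/field Schema IDs as version 0, but the "name" field's
// ID becomes "full_name", so the generated migration renames the column instead
// of dropping and recreating it.
let mut v1 = Version::default();
let users_v1 = v1.table("zQLEK3CT0", "users");
users_v1.rowid_field(&mut v1, None);
users_v1.field(&mut v1, "zLQI9HQUQ", "full_name", field_str().build());
users_v1.field(&mut v1, "zLAPH3H29", "points", field_i64().build());

// Pass every historical version so existing databases can migrate forward.
goodormning::sqlite::generate(&root.join("tests/sqlite_gen_hello_world.rs"), vec![
    (0usize, latest_version),
    (1usize, v1),
], vec![
    // ... queries, built against the latest version's fields ...
]).unwrap();
```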
Use the `type_*` and `field_*` functions to get type builders for use in expressions and fields. Use `new_insert`/`new_select`/`new_update`/`new_delete` to get a query builder for the associated query type.

There are also some helper functions for building queries, for the database you're using:

- `field_param`, a shortcut for a parameter matching the type and name of a field
- `set_field`, a shortcut for setting field values in INSERT and UPDATE
- `eq_field`, `gt_field`, `gte_field`, `lt_field`, `lte_field`, shortcuts for expressions comparing a field and a parameter with the same type
- `expr_and`, a shortcut for AND expressions
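For instance, a parameterized delete might be built like the sketch below; this assumes the `new_delete` builder takes the table and accepts the same `where_()`/`build_query()` calls shown for `new_select` in the example above, and `eq_field` would be the shortcut for this kind of field-versus-parameter comparison:

```rust
// Hypothetical "delete_user" query, written out without the helper shortcuts.
new_delete(&users).where_(Expr::BinOp {
    left: Box::new(Expr::Field(id.clone())),
    op: BinOp::Equals,
    right: Box::new(Expr::Param {
        name: "id".into(),
        type_: id.type_.type_.clone(),
    }),
}).build_query("delete_user", QueryResCount::None)
```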
When defining a field in the schema, call `.custom("mycrate::MyString", type_str().build())` on the field type builder (or pass it in as `Some("mycrate::MyString".to_string())` if creating the type structure directly).
The type must have methods to convert to/from the native SQL types. There are traits to guide the implementation:
```rust
pub struct MyString(pub String);

impl pg::GoodOrmningCustomString<MyString> for MyString {
    // (the to-SQL conversion method is omitted in this excerpt)
    fn from_sql(s: String) -> Result<MyString, String> {
        Ok(Self(s))
    }
}
```
Parameters with the same name are deduplicated - if you define a query with multiple parameters of the same name but different types you'll get an error.
Return types with the same contents are similarly deduplicated - two queries that return the same fields will share a single generated result type.
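For example, adding a query that returns the same columns as `get_user` from the example above would reuse that query's generated result struct (`DbRes1` in the generated output shown earlier); a sketch:

```rust
// Returns the same (name, points) columns as get_user, so the generated code
// reuses the existing result struct rather than emitting a duplicate.
new_select(&users).return_fields(&[&name, &points]).build_query("list_user_details", QueryResCount::Many),
```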
Good-ormning is functionally most similar to Diesel; the main difference is that Good-ormning's schema and queries are defined in the `build.rs` file.
SeaORM focuses on runtime checks rather than compile time checks, so the focus is quite different.
Obviously, writing an SQL VM for type checking isn't great. The ideal solution would be for popular databases to expose their type-checking routines as libraries so they could be imported into external programs, like how Go publishes reusable AST-parsing and type-checking libraries.
It would be great to provide more flexibility in migrations, but for downtime-less deployment of complex migrations the application code also needs to be adjusted significantly. Common advice appears to be to make smaller, incremental, backward-compatible migrations and to spread larger changes over multiple versions and deploys, which seems a reasonable solution.