# Corgi
A neural network, and dynamic tensor automatic differentiation implementation for Rust.
## BLAS
- The BLAS feature can be enabled, and requires CBLAS if used.
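A minimal sketch of enabling the feature, assuming it is named `blas` (the feature name here is an assumption; check the crate's `Cargo.toml`, or the documentation for the exact name):

```toml
[dependencies]
# hypothetical feature name; requires a CBLAS implementation on the system
corgi = { version = "0", features = ["blas"] }
```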
## Important Design Notes
- Array values should never be modified from operations; instead, new arrays should be created.
- Arrays are untracked by default, so if gradients are required, `tracked()`, or `start_tracking()` must be used (see the documentation for details, and the sketch after this list).
- Versions 0.x.y of Corgi are considered unstable, so check the releases page on GitHub for new versions.
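A minimal sketch of gradient tracking, assuming `tracked()` consumes the array, and returns a tracked one, while `start_tracking()` mutates it in place (these signatures are assumptions; see the documentation for the exact behaviour):

```rust
let a = arr![1.0, 2.0].tracked();

// assumed to have the same effect as tracked(), but in place
let mut b = arr![3.0, 4.0];
b.start_tracking();

let mut c = &a * &b;
c.backward(None);
// a.gradient(), and b.gradient() are now populated
```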
## Examples
- For fully-connected examples, remember to call `model.update()`.
- Fully-connected MNIST (convolutional neural networks are in progress).
- Fully-connected neural network (full version):
```rust
let initializer = initializer::make_he();
let sigmoid = activation::make_sigmoid();
let mse = cost::make_mse();
let gd = GradientDescent::new(learning_rate);

let l1 = Dense::new(input_size, hidden_size, initializer.clone(), Some(sigmoid));
let l2 = Dense::new(hidden_size, output_size, initializer.clone(), None);
let mut model = Model::new(vec![Box::new(l1), Box::new(l2)], Box::new(gd), mse);

for _ in 0..8 {
    let mut input = vec![0.0; input_size * batch_size];
    let mut target = vec![0.0; output_size * batch_size];

    // set inputs, and targets

    let input = Arrays::new((vec![batch_size, input_size], input));
    let target = Arrays::new((vec![batch_size, output_size], target));

    let _result = model.forward(input.clone());
    let loss = model.backward(target.clone());
    // update the parameters, and clear the gradients
    model.update();

    println!("loss: {}", loss);
}
```
- Dynamic computational graph:
```rust
let a = arr![5.0].tracked();
let b = arr![2.0].tracked();
let mut c = arr![0.0].tracked();
for _ in 0..10 {
    c = &c + &(&a * &b);
    if c[0] > 50.0 {
        c = &c * &a;
    }
}
assert_eq!(c, arr![195300.0]);
c.backward(None);
assert_eq!(c.gradient(), arr![1.0]);
assert_eq!(b.gradient(), arr![97650.0]);
assert_eq!(a.gradient(), arr![232420.0]);
```
- Custom operation (still needs some work):
```rust
// note proper implementations should handle tracked, and untracked cases
let op: array::ForwardOp = Arc::new(|x: &[&Array]| {
    Arrays::new((x[0].dimensions(), x[0].values().iter().zip(x[1].values()).map(|(x, y)| x * y).collect::<Vec<Float>>()))
});

let op_clone = Arc::clone(&op);
let backward_op: array::BackwardOp = Arc::new(move |c: &mut Vec<Array>, x: &Array| {
    vec![Some(Array::op(&vec![&c[1], x], Arc::clone(&op_clone), None)),
        Some(Array::op(&vec![&c[0], x], Arc::clone(&op_clone), None))]
});

let a = arr![1.0, 2.0, 3.0].tracked();
let b = arr![3.0, 2.0, 1.0].tracked();

let mut product = Array::op(&vec![&a, &b], op, Some(backward_op));
assert_eq!(product, arr![3.0, 4.0, 3.0]);

product.backward(None);
assert_eq!(b.gradient(), arr![1.0, 2.0, 3.0]);
assert_eq!(a.gradient(), arr![3.0, 2.0, 1.0]);
```
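Note the `Arc` wrappers around the forward, and backward closures: as mentioned in the design notes below, the backward pass depends on threading, so operations must be shareable across threads.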
## Design
- Originally worked around the ergonomics of the `arr!` macro (which, however, currently still needs more work).
- Dynamic-as-possible computational graph.
- Did not want to have to manage any 'graph' structures when using Corgi (the Arrays should represent the graph alone).
- The graph became more and more dependent on threading for the backward pass, and on the use of `Arc`, and `Mutex`.
- Graphs do not store consumers (at the moment); they store consumer counts instead (see the sketch after this list).
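A minimal, hypothetical sketch of the consumer-count idea (an illustration, not Corgi's actual internals): each node records how many operations consumed its output during the forward pass, and during the backward pass a node's gradient is only propagated further once every consumer has contributed to it.

```rust
use std::collections::VecDeque;

// hypothetical node; in Corgi, the Array itself carries this state
struct Node {
    parents: Vec<usize>, // indices of the nodes this node consumed
    consumers: usize,    // consumer count, recorded during the forward pass
    gradient: f64,       // accumulated gradient (a scalar, for simplicity)
}

// propagates gradients without storing consumer lists: a node is queued only
// once all of its consumers have accumulated their contributions into it
fn backward(nodes: &mut [Node], output: usize) {
    let mut pending: Vec<usize> = nodes.iter().map(|n| n.consumers).collect();
    nodes[output].gradient = 1.0;

    let mut queue = VecDeque::new();
    queue.push_back(output);
    while let Some(i) = queue.pop_front() {
        let g = nodes[i].gradient;
        let parents = nodes[i].parents.clone();
        for p in parents {
            // identity operations here, so the gradient passes through unchanged
            nodes[p].gradient += g;
            pending[p] -= 1;
            if pending[p] == 0 {
                queue.push_back(p);
            }
        }
    }
}

fn main() {
    // d = a + a: a is consumed twice, so its gradient is only final, and only
    // propagated further, after both contributions arrive
    let mut nodes = vec![
        Node { parents: vec![], consumers: 2, gradient: 0.0 },     // a
        Node { parents: vec![0, 0], consumers: 0, gradient: 0.0 }, // d
    ];
    backward(&mut nodes, 1);
    assert_eq!(nodes[0].gradient, 2.0);
}
```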
## Tracked Arrays
- Tracked arrays are arrays which require gradients to be computed, and stored.
- For more information, see the documentation for `tracked()`, and `untracked()` in `array.rs`.
## Backward Pass
- An informal UML sequence diagram (it's not entirely up to spec, but should give an overview of the process):

## Name
- The original name was going to be 'cog-(something)', since Rust's logo is a cog, and since cognition (get it?).
But as it turns out, many AI libraries are named 'cog-(something)', and permutations of 'cog' sounded awkward (such as 'cogi', for 'cog-intelligence'),
so the name Corgi was chosen.
## Acknowledgements
## Licence