Provides differentiable operations and tensors.
* **Lazy, zero-copy, and side-effect-free tensors.** An `autograd::Tensor<T>` basically holds no value of its own (except for persistent tensors). It builds graphs that can be executed eagerly at any time; that is, it naturally supports both *run-by-define* and *define-by-run* in the context of neural networks.
* **Reverse-mode automatic differentiation.** There are many built-in operations that support *higher-order* derivatives, and you can easily define your own differentiable ops with ndarrays.
* **Pure Rust.** The graph execution engine is implemented in pure Rust, so it can be compiled to WebAssembly.
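For example, building a tensor expression allocates no data and performs no computation; numbers only appear once `eval` is called. A minimal sketch of that behavior (the shapes and values here are illustrative, not taken from the original docs):

```rust
extern crate autograd as ag;

// Nothing is computed here; `t` is just a node in a graph.
let t: ag::Tensor<f32> = ag::zeros(&[2, 3]) + ag::ones(&[2, 3]);

// Evaluation is explicit and can happen at any time, any number of times.
println!("{:?}", t.eval(&[])); // => Some(a 2x3 array filled with 1.0)
```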
```toml
[dependencies]
autograd = { version = "0.9.3", features = ["mkl"] }
```
The `mkl` feature is recommended to speed up gemm operations.
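If Intel MKL is not available on the target (for example when compiling to WebAssembly), the crate also works without the feature flag; a minimal variant of the dependency line:

```toml
[dependencies]
autograd = "0.9.3"
```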
Here we are computing partial derivatives of `z = 2x^2 + 3y + 1`.
```rust
extern crate autograd as ag;

let ref x = ag::placeholder(&[]);
let ref y = ag::placeholder(&[]);
let ref z = 2.*x*x + 3.*y + 1.;

// dz/dy
let gy = &ag::grad(&[z], &[y])[0];
println!("{:?}", gy.eval(&[]));   // => Some(3.)

// dz/dx (requires filling the placeholder `x`)
let gx = &ag::grad(&[z], &[x])[0];
println!("{:?}", gx.eval(&[(x, &ag::ndarray::arr0(2.).into_dyn())]));  // => Some(8.)

// ddz/dx (differentiates `z` again)
let ggx = &ag::grad(&[gx], &[x])[0];
println!("{:?}", ggx.eval(&[]));  // => Some(4.)
```
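Gradients with respect to several variables can also be taken in a single `ag::grad` call; a small sketch continuing the example above (the local names are made up for illustration):

```rust
// One call returns one gradient tensor per variable in `&[x, y]`.
let grads = ag::grad(&[z], &[x, y]);
println!("{:?}", grads[1].eval(&[])); // dz/dy => Some(3.)
println!("{:?}", grads[0].eval(&[(x, &ag::ndarray::arr0(2.).into_dyn())])); // dz/dx => Some(8.)
```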
Another example: softmax regression for MNIST digits classification with Adam.
```rust
// This achieves 0.918 test accuracy after 3 epochs, 0.11 sec/epoch on 2.7GHz Intel Core i5.
let ref w = ag::variable(ag::ndarray_ext::glorot_uniform::<f32>(&[28 * 28, 10]));
// ... (bias, loss, gradients and the Adam `update_ops` are defined here)

// -- dataset --
let ((x_train, y_train), (x_test, y_test)) = dataset::load();

// -- training loop --
for epoch in 0..max_epoch {
    // ...
    ag::eval(update_ops, &[(x, &x_batch), (y, &y_batch)]);
}
```
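The snippet leaves `dataset::load`, `max_epoch`, `x_batch`, and `y_batch` to user code. A rough sketch of what the mini-batching part could look like, assuming the training set is loaded as a 2-D `ndarray` matrix of flattened images plus a 1-D vector of labels (the names and batch size are illustrative, not part of the crate):

```rust
#[macro_use(s)]
extern crate ndarray;
extern crate autograd as ag;

let batch_size = 200;
let num_batches = x_train.shape()[0] / batch_size;

for epoch in 0..max_epoch {
    for i in 0..num_batches {
        let rows = i * batch_size..(i + 1) * batch_size;
        // Slice one mini-batch out of the training set and feed the placeholders.
        let x_batch = x_train.slice(s![rows.clone(), ..]).to_owned().into_dyn();
        let y_batch = y_train.slice(s![rows]).to_owned().into_dyn();
        ag::eval(update_ops, &[(x, &x_batch), (y, &y_batch)]);
    }
}
```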
Many well-known ops are pre-defined in `ag::ops`, but you can also implement custom ops by hand.
```rust
extern crate ndarray;
extern crate autograd as ag;

type NdArray<T> = ndarray::Array<T, ndarray::IxDyn>;

// Implements the `Op` trait for `Sigmoid`.
struct Sigmoid;

impl<T: ag::Float> ag::op::Op<T> for Sigmoid {
    fn name(&self) -> &str {
        "Sigmoid"
    }

    // Core function to run this op.
    // Any errors in this function must be reported by *panic*.
    fn compute<'v>(
        &self,
        ctx: ag::runtime::OpComputeContext<'v, T>,
    ) -> ag::op::ComputeResults<'v, T> {
        let xs = ctx.grab_inputs();
        let x = &xs[0];
        // Using `ndarray::Array::mapv` for element-wise computation.
        let half = T::from(0.5).unwrap();
        let y = x.mapv(|a| ((a * half).tanh() * half) + half);
        // In some cases, you can return `ag::ArrRepr::View` for input arrays
        // to reduce unnecessary copies.
        vec![Ok(ag::ArrRepr::Owned(y))]
    }

    fn grad(&self, gy: &ag::Tensor<T>, xs: &[&ag::Tensor<T>], y: &ag::Tensor<T>)
        -> Vec<Option<ag::Tensor<T>>>
    {
        // Symbolic gradient of `x`: gy * sigmoid'(x) = gy * (y - y^2)
        let gx = gy * (y - ag::square(y));
        vec![Some(gx)]
    }
}

// Symbolic `sigmoid` function for end users.
fn sigmoid<T: ag::Float>(x: &ag::Tensor<T>) -> ag::Tensor<T> {
    ag::Tensor::builder()
        .set_inputs(vec![x])
        .set_shape(x.shape())
        .build(Sigmoid)
}
```
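The user-defined op can then be used and differentiated like the built-in ones. A small usage sketch mirroring the scalar placeholder example earlier on this page (the scalar input is only for illustration):

```rust
let ref x = ag::placeholder(&[]);
let ref y = sigmoid(x);
let gx = &ag::grad(&[y], &[x])[0];

// sigmoid(0) = 0.5 and sigmoid'(0) = 0.25
println!("{:?}", y.eval(&[(x, &ag::ndarray::arr0(0f32).into_dyn())]));  // => Some(0.5)
println!("{:?}", gx.eval(&[(x, &ag::ndarray::arr0(0f32).into_dyn())])); // => Some(0.25)
```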
You can register hooks on `ag::Tensor` objects.
```rust
extern crate autograd as ag;

// `.p()` is a shorthand for `.with(ag::Hook::Print)`.
let a: ag::Tensor<f32> = ag::zeros(&[4, 2]).p();
let b: ag::Tensor<f32> = ag::ones(&[2, 3]);
let c = ag::matmul(a, b);

c.eval(&[]);
// Zeros:
// [[0.0, 0.0],
// [0.0, 0.0],
// [0.0, 0.0],
// [0.0, 0.0]] shape=[4, 2], strides=[2, 1], layout=C (0x1)
```
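The long-hand form mentioned in the comment behaves the same way; a minimal sketch (the variable name is made up):

```rust
// Attaching the hook explicitly instead of using the `.p()` shorthand.
let d: ag::Tensor<f32> = ag::zeros(&[4, 2]).with(ag::Hook::Print);
d.eval(&[]); // prints the same zeros as above
```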
For more, see the documentation or the examples.