Provides differentiable operations and tensors.
* **Lazy, side-effect-free tensors.** `autograd::Tensor<T>` itself basically doesn't hold its value. It builds immutable graphs that can be executed eagerly at any time, which means it naturally supports both run-by-define and define-by-run styles in the context of neural networks (see the sketch after this list).
* **Reverse-mode automatic differentiation.** Many built-in operations support higher-order derivatives, and you can easily define your own differentiable ops with ndarrays.
* **Pure Rust.** The graph execution engine is implemented in pure Rust, so it's compilable to WebAssembly.
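As a quick illustration of the laziness, here is a minimal sketch (not one of the crate's own examples; it only reuses the `placeholder`/`eval` API shown below): building the graph computes nothing, and values appear only when `eval` is called with the placeholders fed.

```rust
extern crate autograd as ag;

// Nothing is computed here: `a`, `b` and `c` are just nodes in an immutable graph.
let ref a = ag::placeholder(&[]);
let ref b = ag::placeholder(&[]);
let ref c = 3.*a + 2.*b;

// Computation happens only now, when the placeholders are fed.
let a_val = ag::ndarray::arr0(2.).into_dyn();
let b_val = ag::ndarray::arr0(3.).into_dyn();
println!("{:?}", c.eval(&[(a, &a_val), (b, &b_val)]));  // => Some(12.)
```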
```toml
[dependencies]
autograd = "0.9.0"
```

The `mkl` feature is enabled by default to speed up gemm operations.
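If you want to build without MKL (for example when targeting WebAssembly, where MKL is not available), the standard Cargo way to opt out of a default feature is `default-features = false`:

```toml
[dependencies]
# Disable default features to drop the `mkl` dependency.
autograd = { version = "0.9.0", default-features = false }
```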
Here we are computing partial derivatives of `z = 2x^2 + 3y + 1`; from the formula we expect `dz/dx = 4x`, `dz/dy = 3`, and `ddz/dx = 4`.
```rust
extern crate autograd as ag;
let ref x = ag::placeholder(&[]);
let ref y = ag::placeholder(&[]);
let ref z = 2.*x*x + 3.*y + 1.;

// dz/dy
let gy = &ag::grad(&[z], &[y])[0];
println!("{:?}", gy.eval(&[]));  // => Some(3.)

// dz/dx (requires the placeholder `x` to be filled)
let gx = &ag::grad(&[z], &[x])[0];
println!("{:?}", gx.eval(&[(x, &ag::ndarray::arr0(2.).into_dyn())]));  // => Some(8.)

// ddz/dx (differentiates `z` again)
let ggx = &ag::grad(&[gx], &[x])[0];
println!("{:?}", ggx.eval(&[]));  // => Some(4.)
```
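As a side note (this is not part of the original example), `ag::grad` also accepts several variables at once; the sketch below reuses `z`, `x`, and `y` from above and assumes the returned gradients follow the order of the second argument.

```rust
// Take dz/dx and dz/dy in a single call.
let grads = ag::grad(&[z], &[x, y]);
let ref gx = grads[0];  // gradient w.r.t. x
let ref gy = grads[1];  // gradient w.r.t. y
println!("{:?}", gy.eval(&[]));  // => Some(3.)
```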
Another example (abridged): softmax regression for MNIST digit classification with the Adam optimizer.
```rust
// This achieves 0.918 test accuracy after 3 epochs,
// 0.14 sec/epoch on 2.7GHz Intel Core i5.
let ref w = ag::variable(ag::ndarray_ext::glorot_uniform::<f32>(&[28 * 28, 10]));
// ... (bias, loss, gradients and Adam update ops omitted here)

// -- dataset --
let ((x_train, y_train), (x_test, y_test)) = dataset::load();

// -- training loop --
for epoch in 0..max_epoch {
    ...
    ag::eval(update_ops, &[(x, &x_batch), (y, &y_batch)]);
}
```

For more, see the documentation or the examples.