This crate provides implementations of common stochastic gradient optimization algorithms. They are designed to be lightweight, flexible and easy to use.
Currently implemented:

- Adam
- SGD
- AdaGrad
The crate does not provide automatic differentiation; the gradient must be supplied by the user.
```rust
use stochastic_optimizers::{Adam, Optimizer};

// minimise the function (x - 4)^2
let start = -3.0;
let mut optimizer = Adam::new(start, 0.1);

for _ in 0..10000 {
    let current_parameters = optimizer.parameters();

    // d/dx (x - 4)^2 = 2x - 8
    let gradient = 2.0 * current_parameters - 8.0;

    optimizer.step(&gradient);
}

assert_eq!(optimizer.into_parameters(), 4.0);
```
The parameters are owned by the optimizer; a reference can be obtained via [`parameters()`](crate::Optimizer::parameters()). After optimization, they can be recovered via [`into_parameters()`](crate::Optimizer::into_parameters()).
All types which implement the [`Parameters`](crate::Parameters) trait can be optimized. Implementations for the standard types `f32`, `f64`, `Vec<T : Parameters>` and `[T : Parameters; N]` are provided.
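For example, a `Vec<f64>` can be optimized just like the scalar case. This sketch assumes the gradient is passed as the same container type as the parameters:

```rust
use stochastic_optimizers::{Adam, Optimizer};

// minimise f(x) = sum_i (x_i - 4)^2 over a Vec<f64>
let start = vec![-3.0, 0.0, 7.5];
let mut optimizer = Adam::new(start, 0.1);

for _ in 0..10000 {
    // gradient of each component: 2 * x_i - 8
    let gradient: Vec<f64> = optimizer
        .parameters()
        .iter()
        .map(|x| 2.0 * x - 8.0)
        .collect();

    optimizer.step(&gradient);
}

// every component should have converged to 4
for x in optimizer.into_parameters() {
    assert!((x - 4.0).abs() < 1e-6);
}
```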
It is relatively easy to implement it for custom types, see [`Parameters`](crate::Parameters).
The unit tests require libtorch via the `tch` crate. See the `tch` repository on GitHub for installation details.
Licensed under either of

- Apache License, Version 2.0
- MIT license

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.