Deep Learning library using [custos] and [custos-math].
external (C) dependencies: OpenCL, CUDA, nvrtc, cublas, a BLAS lib (OpenBLAS, Intel MKL, ...)
There are two features available that are enabled by default:

- `cuda` ... CUDA, nvrtc and cublas must be installed
- `opencl` ... OpenCL is needed

If you deactivate them (set `default-features = false` and provide no additional features), only the CPU device can be used. For all feature configurations, a BLAS library needs to be installed on the system.
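For example, a CPU-only configuration (no CUDA or OpenCL toolchains required) might look like this in `Cargo.toml`:

```toml
# CPU-only build: disables the default cuda/opencl features.
# A BLAS library must still be installed on the system.
[dependencies]
gradients = { version = "0.3.1", default-features = false }
```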
```toml
[dependencies]
gradients = "0.3.1"
```
(if this example does not compile, consider looking here)
Use a struct that implements the `NeuralNetwork` trait (it is implemented via the `network` attribute) to define which layers you want to use:
```rust
use gradients::purpur::{CSVLoader, CSVReturn, Converter};
use gradients::OneHotMat;
use gradients::{
    correct_classes,
    nn::{cce, cce_grad},
    network, range, Adam, CLDevice, Linear, Matrix, ReLU, Softmax,
};

#[network]
pub struct Network {
    lin1: Linear<784, 128>,
    relu1: ReLU,
    lin2: Linear<128, 10>,
    relu2: ReLU,
    lin3: Linear<10, 10>,
    softmax: Softmax,
}
```

Load [data] and create an instance of `Network`:
You can download the MNIST dataset here.
```rust
// use cpu (no features enabled):
let device = gradients::CPU::new().select();
// use cuda device (cuda feature enabled):
let device = gradients::CudaDevice::new(0).unwrap().select();
// use opencl device (opencl feature enabled):
let device = CLDevice::new(0)?;

let mut net = Network::with_device(&device);

let loader = CSVLoader::new(true);
let loaded_data: CSVReturn<f32> = loader.load("PATH/TO/DATASET/mnist_train.csv")?;

let i = Matrix::from((
    &device,
    (loaded_data.sample_count, loaded_data.features),
    &loaded_data.x,
));
let i = i / 255.;

let y = Matrix::from((&device, (loaded_data.sample_count, 1), &loaded_data.y));
let y = y.onehot();
```
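To illustrate what the one-hot encoding step produces, here is a minimal plain-Rust sketch (an illustration only, not the library's `onehot()` implementation): each label index maps to a row that is all zeros except for a single `1.0` at the label's position.

```rust
// Sketch of one-hot encoding (illustration only, not gradients'
// actual `onehot()`): returns a flattened labels.len() x classes
// matrix in row-major order.
fn onehot(labels: &[usize], classes: usize) -> Vec<f32> {
    let mut out = vec![0.0; labels.len() * classes];
    for (row, &label) in labels.iter().enumerate() {
        // set the column matching the class label to 1.0
        out[row * classes + label] = 1.0;
    }
    out
}

fn main() {
    // labels 2 and 0 with 3 classes -> two one-hot rows
    let encoded = onehot(&[2, 0], 3);
    assert_eq!(encoded, vec![0.0, 0.0, 1.0, 1.0, 0.0, 0.0]);
    println!("{encoded:?}");
}
```

For MNIST, the ten digit labels become ten-column rows, which matches the ten outputs of the final `Linear<10, 10>` and `Softmax` layers.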
Training loop:
```rust
let mut opt = Adam::new(0.01);

for epoch in range(200) {
    let preds = net.forward(&i);
    let correct_training = correct_classes(&loaded_data.y.as_usize(), &preds) as f32;

    let loss = cce(&device, &preds, &y);
    println!(
        "epoch: {epoch}, loss: {loss}, training_acc: {acc}",
        acc = correct_training / loaded_data.sample_count() as f32
    );

    let grad = cce_grad(&device, &preds, &y);
    net.backward(&grad);
    opt.step(&device, net.params());
}
```
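For intuition about the loss reported each epoch, here is a minimal plain-Rust sketch of categorical cross-entropy over one-hot targets (an illustration under simplified assumptions, not the `cce` implementation from gradients): the loss is the mean negative log-probability the model assigns to each sample's true class.

```rust
// Mean categorical cross-entropy of row-major prediction
// probabilities against one-hot targets. Illustration only;
// not gradients' `cce` implementation.
fn cross_entropy(preds: &[f32], targets: &[f32], classes: usize) -> f32 {
    let samples = preds.len() / classes;
    let mut sum = 0.0;
    for row in 0..samples {
        for col in 0..classes {
            let idx = row * classes + col;
            // only the target class (one-hot = 1.0) contributes;
            // clamp avoids ln(0) for zero-probability predictions
            sum -= targets[idx] * preds[idx].max(1e-7).ln();
        }
    }
    sum / samples as f32
}

fn main() {
    // one sample, 3 classes, correct class predicted with p = 1.0 -> loss 0
    let perfect = cross_entropy(&[1.0, 0.0, 0.0], &[1.0, 0.0, 0.0], 3);
    assert!(perfect.abs() < 1e-5);

    // a less confident (p = 0.5) correct prediction has a higher loss
    let uncertain = cross_entropy(&[0.5, 0.25, 0.25], &[1.0, 0.0, 0.0], 3);
    assert!(uncertain > perfect);
    println!("perfect: {perfect}, uncertain: {uncertain}");
}
```

The gradient of this loss with respect to the softmax inputs is what `cce_grad` feeds into `net.backward(&grad)` in the loop above.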