Neural Networks in Rust [Moved]

This project has been replaced by the jiro-nn crate.

GPU-capable neural networks implemented from scratch in Rust, plus utilities for data manipulation.

It is not a production-ready framework by any means.

Feel free to give feedback.

Usage

Add this to your project's Cargo.toml file:

```toml
[dependencies]
neural_networks_rust = "*"
```

Preprocessing + CNNs example

MNIST (hand-written digits recognition) workflow example:

```rust
// Step 1: Enrich the features of your data (eg. the "columns") with metadata using a Dataset Specification
// The specification is necessary for guiding further steps (preprocessing, training...)

// Extract features from a spreadsheet to start building a dataset specification
// You could also start blank and add the columns and metadata manually
let mut dataset_spec = Dataset::from_file("dataset/train.csv");
// Now we can add metadata to our features
dataset_spec
    // Flag useless features for removal
    .remove_features(&["size"])
    // Tell the framework which column is an ID (so it can be ignored in training, used in joins, and so on)
    .tag_feature("id", IsId)
    // Tell the framework which column is the feature to predict
    // You could very well declare multiple features as Predicted
    .tag_feature("label", Predicted)
    // Since it is a classification problem, indicate the label needs One-Hot encoding during preprocessing
    .tag_feature("label", OneHotEncode)
    // You may also want to normalize everything except the ID & label during preprocessing
    .tag_all(Normalized.except(&["id", "label"]));

// Step 2: Preprocess the data

// Create a pipeline with all the necessary steps
let mut pipeline = Pipeline::basic_single_pass();
// Run it on the data
let (dataset_spec, data) = pipeline
    .load_data_and_spec("dataset/train.csv", dataset_spec)
    .run();

// Step 3: Specify and build your model

// A model is tied to a dataset specification
let model = ModelBuilder::new(dataset_spec)
    // Some configuration is also tied to the model
    // All the configuration calls are optional, defaults are picked otherwise
    .batch_size(128)
    .loss(Losses::BCE)
    .epochs(20)
    // Then you can start building the neural network
    .neural_network()
        // Specify all your layers
        // A convolution network is considered a layer of a neural network in this framework
        .conv_network(1)
            // Now the convolution layers
            .full_dense(32, 5)
                // You can set the activation function for any layer and many other parameters
                // Otherwise defaults are picked
                .relu()
                .adam()
                .dropout(0.4)
            .end()
            .avg_pooling(2)
            .full_dense(64, 5)
                .relu()
                .adam()
                .dropout(0.5)
            .end()
            .avg_pooling(2)
        .end()
        // Now we go back to configuring the top-level neural network
        .full_dense(128)
            .relu()
            .adam()
        .end()
        .full_dense(10)
            .softmax()
            .adam()
        .end()
    .end()
    .build();

println!(
    "Model parameters count: {}",
    model.to_network().get_params().count()
);

// Step 4: Train the model

// Monitor the progress of the training on a nice TUI (with other options coming soon)
TM::start_monitoring();
// Use a SplitTraining to split the data into a training and validation set (k-fold also available)
let mut training = SplitTraining::new(0.8);
let (preds_and_ids, model_eval) = training.run(&model, &data);
TM::stop_monitoring();

// Step 5: Save the resulting predictions, weights and model evaluation

// Save the model evaluation per epoch
model_eval.to_json_file("mnist_eval.json");

// Save the weights
let model_params = training.take_model();
model_params.to_json_file("mnist_weights.json");

// Save the predictions alongside the original data
let preds_and_ids = pipeline.revert(&preds_and_ids);
pipeline
    .revert(&data)
    .inner_join(&preds_and_ids, "id", "id", Some("pred"))
    .to_csv_file("mnist_values_and_preds.csv");
```

You can then plot the results using a third-party crate like gnuplot (recommended), plotly (also recommended) or even plotters.

For more in-depth examples, with more configurable workflows spanning many scripts, check out the examples folder.

Features

Since it is a framework, it is quite opinionated and has many features. The main ones are:

  - NNs (Dense Layers, Full Layers...) and CNNs (Dense Layers, Direct Layers, Mean Pooling...), everything batched
  - SGD, Adam, Momentum, Glorot initialization
  - Many activations (Softmax, Tanh, ReLU...)
  - Learning Rate Scheduling
  - K-Folds and Split training
  - Cacheable and revertable Pipelines (normalization, feature extraction, outliers filtering, values mapping, one-hot encoding, log scaling...)
  - Loss functions (Binary Cross Entropy, Mean Squared Errors)
  - Model specification as code
  - Preprocessing specification as code
  - Performance metrics (R²...)
  - Tasks monitoring (progress, logging)
  - Multi-backend (CPU, GPU, see Backends)
  - Multi-precision (see Precision)

Scope and goals

Main goals:

Side/future goals:

Non-goals:

Backends

Switch backends via Cargo features. The default CPU backend works out of the box; for faster compute on the CPU or the GPU, disable default features and enable the arrayfire feature (see Installing Arrayfire below).

Precision

You can enable precision up to f64 with the f64 feature.
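For example, a Cargo.toml sketch enabling f64 precision (the f64 feature name comes from the sentence above; how it combines with other features is an assumption, adjust to your setup):

```toml
[dependencies]
neural_networks_rust = { version = "*", features = ["f64"] }
```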

Precision below f32 is not supported (yet).

Installing Arrayfire

To use the arrayfire feature, you first need to install Arrayfire. It enables fast compute on the CPU or the GPU using Arrayfire's C++/CUDA/OpenCL backends (OpenCL is tried first if installed, then CUDA, then C++). Make sure every step of the installation completes with no warnings, as it can fail in quite subtle ways.

Once Arrayfire is installed:

  1. Set the AF_PATH environment variable to your Arrayfire installation directory (for example: /opt/Arrayfire).
  2. Add the path to the lib files to your environment variables.
  3. Run sudo ldconfig if on Linux.
  4. Run cargo clean.
  5. Disable default features and activate the arrayfire feature.
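On Linux, steps 1 and 2 might look like the following shell sketch (the installation path and the lib64 subdirectory are assumptions; adjust them to your actual Arrayfire installation, and persist the exports in your shell profile):

```shell
# Point builds at the Arrayfire installation directory (example path)
export AF_PATH=/opt/Arrayfire
# Make Arrayfire's shared libraries visible to the loader
# (some installs use lib instead of lib64)
export LD_LIBRARY_PATH=$AF_PATH/lib64:$LD_LIBRARY_PATH
```

Then run sudo ldconfig and cargo clean as in steps 3 and 4.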

```toml
[dependencies]
neural_networks_rust = { version = "*", default-features = false, features = ["arrayfire"] }
```

If you want to use the CUDA capabilities of Arrayfire on Linux (tested on Windows 11 WSL2 with Ubuntu and an RTX 3060), check out this guide.