This library aims to be a complete deep learning framework with extreme flexibility, written in Rust. The goal is to satisfy researchers as well as practitioners, making it easier to experiment with, train, and deploy your models.
Disclaimer: Burn is currently in active development, and there will be breaking changes. While any resulting issues are likely to be easy to fix, there are no guarantees at this stage.
Sections

- Training with full support for metric, logging and checkpointing 📈

The best way to get started with `burn` is to clone the repo and play with the examples.
It may also be a good idea to take a look at the main components of `burn` to get a quick overview of the fundamental building blocks.
Understanding the key components and philosophy of `burn` can greatly help when beginning to work with the framework.
Nearly everything in `burn` is based on the `Backend` trait, which enables you to run tensor operations using different implementations without having to modify your code. While a backend may not necessarily have autodiff capabilities, the `ADBackend` trait specifies when autodiff is required. This trait abstracts not only operations but also tensor, device, and element types, giving each backend the flexibility it needs.
It's worth noting that the trait assumes eager mode, since `burn` fully supports dynamic graphs. However, we may create another API to assist with integrating graph-based backends, without requiring any changes to the user's code.
At the core of `burn` lies the `Tensor` struct, which encompasses multiple types of tensors, including `Float`, `Int`, and `Bool`. The element types of these tensors are specified by the backend and are usually designated as a generic argument (e.g., `NdArrayBackend<f32>`). Although the same struct is used for all tensors, the available methods differ depending on the tensor kind. You can specify the desired tensor kind by setting the third generic argument, which defaults to `Float`. The first generic argument specifies the backend, while the second specifies the number of dimensions.
```rust
use burn::tensor::backend::Backend;
use burn::tensor::{Int, Tensor};

fn function<B: Backend>(tensor_float: Tensor<B, 2>) {
    let _tensor_bool = tensor_float.clone().equal_elem(2.0); // Tensor<B, 2, Bool>
    let _tensor_int = tensor_float.argmax(1); // Tensor<B, 2, Int>
}
```
As demonstrated in the previous example, nearly all operations require owned tensors as parameters, which means that calling `clone` explicitly is necessary when reusing the same tensor multiple times. However, there's no need to worry: the tensor's data won't be copied, it will simply be flagged as read-only when multiple tensors use the same allocated memory. This enables backends to reuse tensor data when possible, similar to a copy-on-write pattern, while remaining completely transparent to the user.
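As a small illustration of this ownership style, here is a hedged sketch (the helper name is hypothetical; `mul` and `add` consume their operands):

```rust
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

// Hypothetical helper: `x` is reused three times, so all but the last use clone it.
fn x_squared_plus_x<B: Backend>(x: Tensor<B, 2>) -> Tensor<B, 2> {
    let squared = x.clone().mul(x.clone()); // both operands are consumed
    squared.add(x) // the final use can take ownership, no clone needed
}
```

Since the underlying data is shared, the clones above are cheap regardless of tensor size.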
The `Backend` trait is highly flexible, enabling backpropagation to be implemented using a simple backend decorator, which makes any backend differentiable.
```rust
use burn::tensor::backend::{ADBackend, Backend};
use burn::tensor::{Distribution, Tensor};
use burn_autodiff::ADBackendDecorator;
use burn_ndarray::NdArrayBackend;

fn linear<B: Backend>(x: Tensor<B, 2>, weight: Tensor<B, 2>, bias: Tensor<B, 2>) -> Tensor<B, 2> {
    x.matmul(weight) + bias
}

fn main() {
    type Backend = NdArrayBackend<f32>;

    let weight = Tensor::random([3, 3], Distribution::Standard);
    let bias = Tensor::zeros([1, 3]);
    let x = Tensor::random([3, 3], Distribution::Standard);

    // With the base backend, autodiff is unavailable.
    let y = linear::<Backend>(x.clone(), weight.clone(), bias.clone());
    // y.backward(); // Method backward doesn't exist

    // The decorator turns the same backend into a differentiable one.
    let y = linear::<ADBackendDecorator<Backend>>(
        Tensor::from_inner(x),
        Tensor::from_inner(weight).require_grad(),
        Tensor::from_inner(bias).require_grad(),
    );
    let grads = y.backward(); // Method exists
}
```
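The returned gradients can then be read back per tensor. Below is a minimal sketch, assuming the `grad` lookup method exposed by tensors on an autodiff backend:

```rust
use burn::tensor::{Distribution, Tensor};
use burn_autodiff::ADBackendDecorator;
use burn_ndarray::NdArrayBackend;

fn main() {
    type AD = ADBackendDecorator<NdArrayBackend<f32>>;

    // `require_grad` marks the tensor as tracked by autodiff.
    let weight = Tensor::<AD, 2>::random([3, 3], Distribution::Standard).require_grad();
    let x = Tensor::<AD, 2>::random([3, 3], Distribution::Standard);

    let y = x.matmul(weight.clone());
    let grads = y.backward();

    // The gradient comes back as a tensor on the inner (non-autodiff) backend,
    // and is `None` if the tensor wasn't tracked.
    let _weight_grad = weight.grad(&grads);
}
```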
The `Module` derive allows you to create your own neural network modules, similar to PyTorch. Note that the derive generates all the necessary methods to make your type essentially a parameter container; it makes no assumptions about how the forward function is declared.
```rust
use burn::nn;
use burn::module::{Module, Param};
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: Param<nn::Linear<B>>,
    linear_outer: Param<nn::Linear<B>>,
    // Fields not wrapped in `Param` are not trainable.
    dropout: nn::Dropout,
    gelu: nn::GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);

        self.linear_outer.forward(x)
    }
}
```
Note that only the fields wrapped inside `Param` are updated during training; the other fields should implement the `Clone` trait.
The `Config` derive lets you define serializable and deserializable configurations or hyper-parameters for your modules or any other components.
```rust
use burn::config::Config;

#[derive(Config)]
pub struct PositionWiseFeedForwardConfig {
    pub d_model: usize,
    pub d_ff: usize,
    #[config(default = 0.1)]
    pub dropout: f64,
}
```
The derive also adds useful methods to your config, similar to a builder pattern.
```rust
fn main() {
    let config = PositionWiseFeedForwardConfig::new(512, 2048);
    println!("{}", config.d_model); // 512
    println!("{}", config.d_ff); // 2048
    println!("{}", config.dropout); // 0.1

    let config = config.with_dropout(0.2);
    println!("{}", config.dropout); // 0.2
}
```
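Because configs are (de)serializable, they can also be persisted and restored. Here is a hedged sketch, assuming the `save` and `load` helpers of the `Config` trait and reusing the config from the previous example (the file path is illustrative):

```rust
use burn::config::Config;

fn main() -> std::io::Result<()> {
    let config = PositionWiseFeedForwardConfig::new(512, 2048);

    // Persist hyper-parameters alongside training artifacts.
    config.save("/tmp/position_wise_ff.json")?;

    // Restore them later, e.g., before loading a model checkpoint.
    let _config = PositionWiseFeedForwardConfig::load("/tmp/position_wise_ff.json")
        .expect("config file should exist and deserialize");

    Ok(())
}
```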
The `Learner` is the main `struct` that lets you train a neural network with support for logging, metric, checkpointing, and more. In order to create a learner, you must use the `LearnerBuilder`.
```rust
use burn::train::LearnerBuilder;
use burn::train::metric::{AccuracyMetric, LossMetric};

fn main() {
    let dataloader_train = ...;
    let dataloader_valid = ...;

    let model = ...;
    let optim = ...;

    let learner = LearnerBuilder::new("/tmp/artifact_dir")
        .metric_train_plot(AccuracyMetric::new())
        .metric_valid_plot(AccuracyMetric::new())
        .metric_train(LossMetric::new())
        .metric_valid(LossMetric::new())
        .with_file_checkpointer::<f32>(2)
        .num_epochs(10)
        .build(model, optim);

    let _model_trained = learner.fit(dataloader_train, dataloader_valid);
}
```
See this example for a real usage.
Burn supports `no_std` with `alloc` for inference mode with the NDArray backend. Simply disable the default features of the `burn` and `burn-ndarray` crates (the minimum required to run in inference mode).
See the burn-no-std-tests example as a reference implementation.
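As a rough sketch of what a `no_std` inference crate root can look like (assuming default features are disabled in `Cargo.toml`; the entry point and its single op are placeholders, not a real model):

```rust
#![no_std]

// `alloc` is required because tensors allocate on the heap.
extern crate alloc;

use burn::tensor::Tensor;
use burn_ndarray::NdArrayBackend;

// Hypothetical inference entry point: a forward pass with no dependency on std.
pub fn infer(input: Tensor<NdArrayBackend<f32>, 2>) -> Tensor<NdArrayBackend<f32>, 2> {
    input.exp() // placeholder standing in for a real model's forward pass
}
```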
Additionally, the `burn-core` and `burn-tensor` crates support `no_std` with `alloc` if you need to include them directly as dependencies (the `burn` crate re-exports `burn-core` and `burn-tensor`).
Note that under `no_std` mode, a random seed is generated at build time if the seed is not initialized by the `Backend::seed` method. Additionally, `spin::mutex::Mutex` is used in place of `std::sync::Mutex` under `no_std` mode.
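To make runs reproducible in either mode, the seed can be set explicitly; a minimal sketch:

```rust
use burn::tensor::backend::Backend;

// Seed the backend's RNG before any random tensor initialization.
fn seed_backend<B: Backend>() {
    B::seed(42);
}
```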
Burn is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.