collenchyma-NN

Join the chat at https://gitter.im/autumnai/collenchyma

collenchyma-NN provides Neural Network related algorithms for Collenchyma, so you can use NN operations on servers, desktops or mobiles with OpenCL, CUDA and common host CPU support.

If you would like to write your own backend-agnostic, high-performance library, you can

* take this library as an example to basically copy and paste,
* glance over the docs for a broader overview,
* and notify us about your library - we are happy to feature your Collenchyma library on the Collenchyma README.

collenchyma-NN was started at Autumn to support the Machine Intelligence Framework Leaf with backend-agnostic, state-of-the-art performance.

For more information, see the Documentation.

Provided Operations

This plugin provides the following operations to the Collenchyma backend. Every operation includes forward and backward passes. A `-` means not yet implemented. More information can be found in the Documentation.

| Operation           | CUDA     | OpenCL | Native |
|---                  |---       |---     |---     |
| Sigmoid             | cuDNN v3 | -      | Rust   |
| ReLU                | cuDNN v3 | -      | Rust   |
| Tanh                | cuDNN v3 | -      | Rust   |
|                     |          |        |        |
| Normalization (LRN) | cuDNN v3 | -      | -      |
|                     |          |        |        |
| Convolution         | cuDNN v3 | -      | -      |
|                     |          |        |        |
| Softmax             | cuDNN v3 | -      | Rust   |
|                     |          |        |        |
| Pooling Max         | cuDNN v3 | -      | -      |
| Pooling Avg         | cuDNN v3 | -      | -      |
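To make the pooling rows above concrete, here is a dependency-free, 1-D sketch of what max and average pooling compute over non-overlapping windows. This is only an illustration of the math, not the plugin's (cuDNN-backed) implementation, which also handles strides, padding, and higher dimensions; the function names are hypothetical.

```rust
// Hypothetical 1-D illustration of Pooling Max / Pooling Avg.
// Assumption: the window size evenly divides the input length.
fn pool_max(input: &[f32], window: usize) -> Vec<f32> {
    input.chunks(window)
        // take the largest value in each window
        .map(|w| w.iter().cloned().fold(f32::MIN, f32::max))
        .collect()
}

fn pool_avg(input: &[f32], window: usize) -> Vec<f32> {
    input.chunks(window)
        // take the mean of each window
        .map(|w| w.iter().sum::<f32>() / w.len() as f32)
        .collect()
}

fn main() {
    let x = [1.0, 3.0, 2.0, 8.0, 5.0, 4.0];
    println!("{:?}", pool_max(&x, 2)); // [3.0, 8.0, 5.0]
    println!("{:?}", pool_avg(&x, 2)); // [2.0, 5.0, 4.5]
}
```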

Kudos to ehiggs for implementing the native Rust operations.
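Conceptually, those native activation operations are plain elementwise maps. A dependency-free sketch of ReLU and tanh — an illustration of the math only, not the actual collenchyma-nn code, and with hypothetical function names:

```rust
// ReLU: max(0, v) applied elementwise.
fn relu(x: &[f32]) -> Vec<f32> {
    x.iter().map(|&v| v.max(0.0)).collect()
}

// Tanh: the hyperbolic tangent applied elementwise.
fn tanh(x: &[f32]) -> Vec<f32> {
    x.iter().map(|&v| v.tanh()).collect()
}

fn main() {
    println!("{:?}", relu(&[-1.5, 0.0, 2.0])); // [0.0, 0.0, 2.0]
    println!("{:?}", tanh(&[0.0]));            // [0.0]
}
```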

Getting Started

If you're using Cargo, just add collenchyma-nn to your Cargo.toml:

```toml
[dependencies]
collenchyma = "0.0.7"
collenchyma-nn = "0.2.1"
```

If you're using cargo-edit, you can call:

```sh
$ cargo add collenchyma-nn
```

Usage

Bring the plugin trait and the other important Collenchyma traits/structs into scope, and you will be able to execute the operations provided by this plugin on your Collenchyma backend.

```rust
extern crate collenchyma as co;
extern crate collenchyma_nn as nn;
use co::backend::{Backend, BackendConfig};
use co::framework::IFramework;
use co::frameworks::Cuda;
use co::tensor::SharedTensor;
use nn::*;

fn main() {
    // Initialize a CUDA Backend.
    // Usually you would not hardcode CUDA, but let it pick what is available on the machine.
    let framework = Cuda::new();
    let hardwares = framework.hardwares();
    let backend_config = BackendConfig::new(framework, hardwares);
    let backend = Backend::new(backend_config).unwrap();
    // Initialize two SharedTensors.
    // Usually you would also want to fill them with data.
    let mut x = SharedTensor::<f32>::new(backend.device(), &(1, 1, 3)).unwrap();
    let mut result = SharedTensor::<f32>::new(backend.device(), &(1, 1, 3)).unwrap();
    // Use the operation provided by this plugin.
    backend.sigmoid(&mut x, &mut result);
}
```
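After that sigmoid call, result conceptually holds the logistic function of x, applied elementwise. A dependency-free sketch of that math — an illustration only, not the plugin's actual code path, with a hypothetical function name:

```rust
// sigmoid(v) = 1 / (1 + e^(-v)), applied elementwise.
fn sigmoid(x: &[f32]) -> Vec<f32> {
    x.iter().map(|&v| 1.0 / (1.0 + (-v).exp())).collect()
}

fn main() {
    // sigmoid(0) = 0.5, the midpoint of the logistic curve.
    println!("{:?}", sigmoid(&[0.0])); // [0.5]
}
```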

Contributing

Want to contribute? Awesome! We have instructions to help you get started contributing code or documentation. We also have high-priority issues that we could use your help with.

We have a mostly real-time collaboration culture, which happens here on GitHub and on the Collenchyma Gitter channel. You can also reach out to the maintainers {@MJ, @hobofan}.

License

collenchyma-NN is released under the MIT License.