DDO is a truly generic framework to develop MDD-based combinatorial optimization solvers in Rust. Its goal is to let you describe your optimization problem as a dynamic program along with a relaxation.
When the dynamic program of the problem is considered as a transition system, the relaxation serves the purpose of merging different nodes of the transition system into another node standing for them all. In that setup, the sole condition to ensure the correctness of the optimization algorithm is that the replacement node must be an over-approximation of everything that is feasible from the merged nodes.
Bonus: as a side benefit of using ddo, you will be able to exploit all of your hardware to solve your optimization problem in parallel.
This library is written in stable Rust (1.41 at the time of writing). Therefore, it should be compiled with cargo, the Rust package manager (installed with your Rust toolchain). Thanks to cargo, compiling and using ddo will be a breeze no matter the platform you are working on.
Once you have a Rust toolchain installed, using ddo is as simple as adding it as a dependency to your Cargo.toml file and building your project.
```toml
[dependencies]
ddo = "0.3.0"
```
The following example illustrates what it takes to build a complete solver based on DDO for the binary knapsack problem.
The first step is always to describe the optimization problem you are trying to solve in terms of a dynamic program. Then, you should also provide a relaxation for the problem.
From this description of the problem, ddo will derive an MDD, which it will use to find an optimal solution to your problem. However, for a reasonably sized problem, expanding the complete state space in memory is not tractable. Therefore, you should also provide ddo with a relaxation. In this context, the relaxation means a strategy for merging several nodes of the MDD into a new node that over-approximates all the merged nodes (state + longest path, where the longest path is the value of the objective function you are trying to optimize).
The first thing to do in this example is to describe the binary knapsack problem in terms of a dynamic program. Here, the state of a node is nothing more than an unsigned integer (usize): it represents the remaining capacity of our sack. To do so, you define your own structure and make sure it implements the Problem<usize> trait.
```rust
/// Describe the binary knapsack problem in terms of a dynamic program.
/// Here, the state of a node is nothing more than an unsigned integer (usize).
/// That unsigned integer represents the remaining capacity of our sack.
struct Knapsack {
    capacity: usize,
    profit  : Vec<usize>,
    weight  : Vec<usize>
}
```
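The `Problem<usize>` implementation for this structure is what actually defines the dynamic program: the number of variables, the domain of each decision, the initial state and value, and the transition and transition-cost functions. The sketch below shows one way this could look. The method names, signatures and helper types used here (`nb_vars`, `domain_of`, `initial_state`, `initial_value`, `transition`, `transition_cost`, `Variable`, `VarSet`, `Domain`, `Decision`) are assumptions about the ddo 0.3 API rather than a verbatim copy of it, so check them against the crate documentation.

```rust
// Hedged sketch: the method names, signatures and helper types below are
// assumptions about the ddo 0.3 `Problem` trait, not a verbatim copy of it.
impl Problem<usize> for Knapsack {
    // One decision variable per item of the instance.
    fn nb_vars(&self) -> usize {
        self.profit.len()
    }
    // An item may be taken (value 1) only if it still fits in the sack.
    fn domain_of<'a>(&self, state: &'a usize, var: Variable) -> Domain<'a> {
        if *state >= self.weight[var.id()] {
            vec![0, 1].into()
        } else {
            vec![0].into()
        }
    }
    // Initially, the whole capacity of the sack is available.
    fn initial_state(&self) -> usize {
        self.capacity
    }
    // The value of the empty sack is zero.
    fn initial_value(&self) -> isize {
        0
    }
    // Taking an item consumes its weight from the remaining capacity.
    fn transition(&self, state: &usize, _vars: &VarSet, d: Decision) -> usize {
        state - self.weight[d.variable.id()] * d.value as usize
    }
    // Taking an item earns its profit.
    fn transition_cost(&self, _state: &usize, _vars: &VarSet, d: Decision) -> isize {
        self.profit[d.variable.id()] as isize * d.value as isize
    }
}
```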
The relaxation we will define is probably the simplest you can think of. When a new state must be created to replace those exceeding the maximum width of the MDD, we simply keep the state with the maximum remaining capacity, as it allows at least all the behaviors feasible from the states with lesser capacities.
Optionally, we could also implement a rough upper bound estimator for our problem in the relaxation. However, we won't do it in this minimalistic example since the framework provides you with a default implementation. If you were to override the default implementation, you would need to implement the estimate() method of the Relaxation trait.
```rust
struct KPRelax;
```
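The `Relaxation` implementation itself then captures the merge strategy described above: keep the largest remaining capacity among the merged states. A minimal sketch follows; the method names and signatures (`merge_states`, `relax_edge`) are assumptions about the ddo 0.3 `Relaxation` trait and may differ from the actual API, so check them against the crate documentation. An `estimate()` override would also go in this impl block if you chose to provide one.

```rust
// Hedged sketch: method names and signatures are assumptions about the
// ddo 0.3 `Relaxation` trait, not a verbatim copy of it.
impl Relaxation<usize> for KPRelax {
    // Merging keeps the largest remaining capacity: anything feasible with a
    // smaller capacity is still feasible with a larger one, so the merged
    // state over-approximates all the states it replaces.
    fn merge_states(&self, states: &mut dyn Iterator<Item = &usize>) -> usize {
        *states.max().unwrap()
    }
    // Merging capacities never decreases the value reachable from the merged
    // nodes, so the cost of a redirected edge can be kept unchanged.
    fn relax_edge(&self, _src: &usize, _dst: &usize, _merged: &usize,
                  _decision: Decision, cost: isize) -> isize {
        cost
    }
}
```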
```rust
/// Then instantiate a solver and spawn as many threads as there are hardware
/// threads on the machine to solve the problem.
fn main() {
    // 1. Create an instance of our knapsack problem
    let problem = Knapsack {
        capacity: 50,
        profit  : vec![60, 100, 120],
        weight  : vec![10, 20, 30]
    };
    // 2. Build an MDD for the given problem and relaxation
    let mdd = mdd_builder(&problem, KPRelax).into_deep();
    // 3. Create a parallel solver on the basis of this MDD (this is how
    //    you can specify the MDD implementation you wish to use to develop
    //    the relaxed and restricted MDDs).
    let mut solver = ParallelSolver::new(mdd);
    // 4. Maximize your objective function
    //    The `outcome` object provides the value of the best solution that was
    //    found for the problem (if one was found) along with a flag indicating
    //    whether or not the solution was proven optimal. Hence an unsatisfiable
    //    problem would have `outcome.best_value == None` and `outcome.is_exact`
    //    true. The `is_exact` flag will only be false if you explicitly decide
    //    to stop searching for the optimum with an arbitrary cutoff.
    let outcome = solver.maximize();
    // The best solution (if one exists) is retrieved with:
    let solution = solver.best_solution();
    // You can also retrieve the value of the best solution as shown on the
    // following line. The value it returns will be the same as the value of
    // `outcome.best_value`.
    let value = solver.best_value();
}
```
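As a quick sanity check, you can print the outcome at the end of `main`. For the toy instance above (capacity 50, profits [60, 100, 120], weights [10, 20, 30]), the optimal value is 220, obtained by taking the second and third items (weights 20 + 30 = 50). A possible ending for `main` could be:

```rust
    // Print what the solver found. For the toy instance above, the expected
    // best value is Some(220) and the solution is proven optimal.
    println!("best value     : {:?}", outcome.best_value);
    println!("proved optimal : {}", outcome.is_exact);
```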
The source code of the above (simplistic) example is provided in the examples section of this project. The examples also provide implementations of more advanced solvers, namely:
+ Maximum Independent Set Problem (MISP)
+ Maximum 2 Satisfiability (MAX2SAT)
+ Maximum Cut Problem (MCP)
+ Unbounded Knapsack
These examples are compiled with cargo using the following command: `cargo build --release --all-targets`. Once the compilation completes, you will find the desired binaries at:
+ $project/target/release/examples/knapsack
+ $project/target/release/examples/max2sat
+ $project/target/release/examples/mcp
+ $project/target/release/examples/misp
If you have any question regarding the use of these programs, just run <program> -h and it should display a helpful message explaining how to use it.
The implementations of MISP, MAX2SAT and MCP correspond to the formulations and relaxations proposed by Bergman et al.
If you use DDO, or find it useful for your purposes (research, teaching, business, ...), please cite:
```
@misc{gillard:20:ddo,
    author       = {Xavier Gillard, Pierre Schaus, Vianney Coppé},
    title        = {Ddo, a generic and efficient framework for MDD-based optimization},
    howpublished = {IJCAI-20},
    year         = {2020},
    note         = {Available from \url{https://github.com/xgillard/ddo}},
}
```