# localsearch

Rust library for local search optimization

## Implemented Algorithms

All of the algorithms are parallelized with Rayon.

  1. Hill Climbing.
  2. Tabu Search.
  3. Simulated Annealing.
  4. Epsilon Greedy Search, a variant of Hill Climbing which accepts the trial state with a constant probability even if its score is worse than that of the previous state (see the acceptance-rule sketch after this list).
  5. Relative Annealing, a variant of Simulated Annealing which uses the relative score difference to calculate the transition probability.
  6. Logistic Annealing, a variant of Relative Annealing which uses a logistic function instead of a simple exponential.
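
To make the differences between the variants concrete, below is a minimal sketch of the acceptance rules in plain Rust. It is illustrative only, not the crate's actual implementation: the weight parameter `w` and the exact normalizations are assumptions, so consult the API documentation for the precise formulas used by each optimizer.

```rust
// Illustrative acceptance rules only, NOT the crate's actual implementation.
// `delta` = trial_score - current_score (positive means worse, assuming
// minimization); `r` is a uniform random number drawn from [0, 1).

fn accept_hill_climbing(delta: f64) -> bool {
    delta < 0.0 // only strictly better trial states are accepted
}

fn accept_epsilon_greedy(delta: f64, epsilon: f64, r: f64) -> bool {
    delta < 0.0 || r < epsilon // worse states still pass with probability epsilon
}

fn accept_simulated_annealing(delta: f64, temperature: f64, r: f64) -> bool {
    r < (-delta / temperature).exp() // classic Metropolis criterion
}

// Relative Annealing: the score diff is normalized by the current score;
// `w` is an assumed weight parameter.
fn accept_relative_annealing(current: f64, trial: f64, w: f64, r: f64) -> bool {
    let rel_delta = (trial - current) / current;
    r < (-w * rel_delta).exp()
}

// Logistic Annealing: the same relative diff fed through a logistic function,
// scaled so that rel_delta == 0 gives acceptance probability 1 (assumed form).
fn accept_logistic_annealing(current: f64, trial: f64, w: f64, r: f64) -> bool {
    let rel_delta = (trial - current) / current;
    r < 2.0 / (1.0 + (w * rel_delta).exp())
}

fn main() {
    // A trial state worse by delta = 1.0 at temperature 0.5:
    println!("accept? {}", accept_simulated_annealing(1.0, 0.5, 0.1));
}
```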

## How to use

You need to define your own model type that implements the `OptModel` trait. The actual optimization is handled by each algorithm's optimizer. Here is a simple example that optimizes a quadratic function with the Hill Climbing algorithm.

```rust
use std::error::Error;

use indicatif::{ProgressBar, ProgressDrawTarget, ProgressStyle};
use localsearch::{optim::HillClimbingOptimizer, OptModel, OptProgress};
use ordered_float::NotNan;
use rand::{self, distributions::Uniform, prelude::Distribution};

#[derive(Clone)]
struct QuadraticModel {
    k: usize,           // number of dimensions
    centers: Vec<f64>,  // location of the minimum of the quadratic function
    dist: Uniform<f64>, // distribution used to sample state values
}

impl QuadraticModel {
    fn new(k: usize, centers: Vec<f64>, value_range: (f64, f64)) -> Self {
        let (low, high) = value_range;
        let dist = Uniform::new(low, high);
        Self { k, centers, dist }
    }
}

type StateType = Vec<f64>;
type ScoreType = NotNan<f64>;

impl OptModel for QuadraticModel {
    type StateType = StateType;
    type TransitionType = ();
    type ScoreType = ScoreType;

    fn generate_random_state<R: rand::Rng>(
        &self,
        rng: &mut R,
    ) -> Result<Self::StateType, Box<dyn Error>> {
        let state = self.dist.sample_iter(rng).take(self.k).collect::<Vec<_>>();
        Ok(state)
    }

    fn generate_trial_state<R: rand::Rng>(
        &self,
        current_state: &Self::StateType,
        rng: &mut R,
        _current_score: Option<NotNan<f64>>,
    ) -> (Self::StateType, Self::TransitionType, NotNan<f64>) {
        // Perturb one randomly chosen coordinate of the current state.
        let k = rng.gen_range(0..self.k);
        let v = self.dist.sample(rng);
        let mut new_state = current_state.clone();
        new_state[k] = v;
        let score = self.evaluate_state(&new_state);
        (new_state, (), score)
    }

    fn evaluate_state(&self, state: &Self::StateType) -> NotNan<f64> {
        // Sum of squared distances to the centers; minimized at `centers`.
        let score = (0..self.k)
            .map(|i| (state[i] - self.centers[i]).powf(2.0))
            .sum();
        NotNan::new(score).unwrap()
    }
}

fn create_pbar(n_iter: u64) -> ProgressBar {
    let pb = ProgressBar::new(n_iter);
    pb.set_style(
        ProgressStyle::default_bar()
            .template(
                "{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {pos}/{len} (eta={eta}) {msg} ",
            )
            .unwrap()
            .progress_chars("#>-"),
    );
    pb.set_draw_target(ProgressDrawTarget::stderr_with_hz(10));
    pb
}

fn main() {
    let model = QuadraticModel::new(3, vec![2.0, 0.0, -3.5], (-10.0, 10.0));

    println!("running Hill Climbing optimizer");
    let n_iter = 10000;
    let patience = 1000; // stop if no improvement is seen within this many iterations
    let n_trials = 50;   // number of trial states generated per iteration
    let opt = HillClimbingOptimizer::new(patience, n_trials);
    let pb = create_pbar(n_iter as u64);
    let callback = |op: OptProgress<StateType, ScoreType>| {
        pb.set_message(format!("best score {:e}", op.score.into_inner()));
        pb.set_position(op.iter as u64);
    };

    let res = opt.optimize(&model, None, n_iter, Some(&callback));
    pb.finish();
    dbg!(res);
}
```
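
To compile the example you will need the crates it imports. A minimal sketch of the `[dependencies]` section of `Cargo.toml` follows; the version placeholders are assumptions, so pick the current releases from crates.io.

```toml
[dependencies]
# Versions are placeholders; check crates.io for the current releases.
localsearch = "*"
indicatif = "*"
ordered-float = "*"
rand = "*"
```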

Further details can be found in the API documentation, the examples, and the test code.