
org.cicirello.search.evo.OnePlusOneEvolutionaryAlgorithm Maven / Gradle / Ivy

Chips-n-Salsa is a Java library of customizable, hybridizable, iterative, parallel, stochastic, and self-adaptive local search algorithms. The library includes implementations of several stochastic local search algorithms, including simulated annealing and hill climbers, as well as constructive search algorithms such as stochastic sampling. It now also includes genetic algorithms and evolutionary algorithms more generally, and its support for simulated annealing is especially extensive.

The library includes several classes for representing solutions to a variety of optimization problems. For example, it provides a BitVector class that implements vectors of bits, as well as classes for representing solutions to problems where the goal is an optimal vector of integers or reals. For each built-in representation, the library provides the most common mutation operators for generating random neighbors of candidate solutions, along with common crossover operators for use with evolutionary algorithms. It also provides extensive support for permutation optimization problems, including many different mutation operators for permutations, built on the efficiently implemented Permutation class of the JavaPermutationTools (JPT) library.

Chips-n-Salsa is customizable, making extensive use of Java's generic types so that the library can optimize representations beyond those it provides. It is hybridizable, supporting the integration of multiple forms of local search (e.g., running a hill climber on a solution generated by simulated annealing), hybrid mutation operators (e.g., local search using multiple mutation operators), and the concurrent execution of more than one type of search on the same problem using multiple threads as a form of algorithm portfolio. It is iterative, with support for multistart metaheuristics, including implementations of several restart schedules for varying run lengths across restarts. It also supports parallel execution of multiple instances of the same, or different, stochastic local search algorithms on an instance of a problem to accelerate the search. Finally, the library supports self-adaptive search in a variety of ways, including adaptive annealing schedules for simulated annealing such as the Modified Lam schedule, simpler annealing schedules that self-tune the initial temperature and other parameters, and restart schedules that adapt to run length.
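
For orientation, here is a minimal usage sketch of the (1+1)-EA class whose source follows. It is not part of the library's documentation: the OnePlusOneExample class and its runOnePlusOneEA helper are illustrative names, and the sketch assumes you already have implementations of the library's Initializer, UndoableMutationOperator, and IntegerCostOptimizationProblem interfaces for your solution type.

import org.cicirello.search.SolutionCostPair;
import org.cicirello.search.evo.OnePlusOneEvolutionaryAlgorithm;
import org.cicirello.search.operators.Initializer;
import org.cicirello.search.operators.UndoableMutationOperator;
import org.cicirello.search.problems.IntegerCostOptimizationProblem;
import org.cicirello.util.Copyable;

public class OnePlusOneExample {

  // Hypothetical helper: runs a single (1+1)-EA run of up to maxEvals evaluations,
  // starting from a random solution, and returns the end-of-run solution and its cost.
  static <T extends Copyable<T>> SolutionCostPair<T> runOnePlusOneEA(
      IntegerCostOptimizationProblem<T> problem,
      UndoableMutationOperator<T> mutation,
      Initializer<T> initializer,
      int maxEvals) {
    // The three-argument constructor creates a ProgressTracker for you.
    OnePlusOneEvolutionaryAlgorithm<T> ea =
        new OnePlusOneEvolutionaryAlgorithm<T>(problem, mutation, initializer);
    return ea.optimize(maxEvals);
  }
}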

/*
 * Chips-n-Salsa: A library of parallel self-adaptive local search algorithms.
 * Copyright (C) 2002-2022 Vincent A. Cicirello
 *
 * This file is part of Chips-n-Salsa (https://chips-n-salsa.cicirello.org/).
 *
 * Chips-n-Salsa is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Chips-n-Salsa is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <https://www.gnu.org/licenses/>.
 */

package org.cicirello.search.evo;

import org.cicirello.search.ProgressTracker;
import org.cicirello.search.SingleSolutionMetaheuristic;
import org.cicirello.search.SolutionCostPair;
import org.cicirello.search.operators.Initializer;
import org.cicirello.search.operators.UndoableMutationOperator;
import org.cicirello.search.problems.IntegerCostOptimizationProblem;
import org.cicirello.search.problems.OptimizationProblem;
import org.cicirello.search.problems.Problem;
import org.cicirello.util.Copyable;

/**
 * This class implements a (1+1)-EA. In a (1+1)-EA, the evolutionary algorithm has a population size
 * of 1, in each cycle of the algorithm a single mutant is created from that single population
 * member, forming a population of size 2, and finally the EA keeps the better of the two solutions.
 * This is perhaps the simplest case of an EA. This class supports optimizing arbitrary structures,
 * specified by the generic type parameter.
 *
 * @param  The type of object under optimization.
 * @author Vincent A. Cicirello, https://www.cicirello.org/
 */
public class OnePlusOneEvolutionaryAlgorithm<T extends Copyable<T>>
    implements SingleSolutionMetaheuristic<T> {

  private final IntegerCostOptimizationProblem<T> pOptInt;
  private final OptimizationProblem<T> pOpt;
  private final Initializer<T> initializer;
  private final UndoableMutationOperator<T> mutation;
  private int elapsedEvals;
  private ProgressTracker<T> tracker;
  private final SingleRun<T> sr;

  /**
   * Creates a OnePlusOneEvolutionaryAlgorithm instance for real-valued optimization problems. A
   * {@link ProgressTracker} is created for you.
   *
   * @param problem An instance of an optimization problem to solve.
   * @param mutation A mutation operator supporting the undo operation.
   * @param initializer The source of random initial states.
   * @throws NullPointerException if any of the parameters are null
   */
  public OnePlusOneEvolutionaryAlgorithm(
      OptimizationProblem<T> problem,
      UndoableMutationOperator<T> mutation,
      Initializer<T> initializer) {
    this(problem, mutation, initializer, new ProgressTracker<T>());
  }

  /**
   * Creates a OnePlusOneEvolutionaryAlgorithm instance for integer-valued optimization problems. A
   * {@link ProgressTracker} is created for you.
   *
   * @param problem An instance of an optimization problem to solve.
   * @param mutation A mutation operator supporting the undo operation.
   * @param initializer The source of random initial states.
   * @throws NullPointerException if any of the parameters are null
   */
  public OnePlusOneEvolutionaryAlgorithm(
      IntegerCostOptimizationProblem<T> problem,
      UndoableMutationOperator<T> mutation,
      Initializer<T> initializer) {
    this(problem, mutation, initializer, new ProgressTracker<T>());
  }

  /**
   * Creates a OnePlusOneEvolutionaryAlgorithm instance for real-valued optimization problems.
   *
   * @param problem An instance of an optimization problem to solve.
   * @param mutation A mutation operator supporting the undo operation.
   * @param initializer The source of random initial states.
   * @param tracker A ProgressTracker object, which is used to keep track of the best solution found
   *     during the run, the time when it was found, and other related data.
   * @throws NullPointerException if any of the parameters are null
   */
  public OnePlusOneEvolutionaryAlgorithm(
      OptimizationProblem<T> problem,
      UndoableMutationOperator<T> mutation,
      Initializer<T> initializer,
      ProgressTracker<T> tracker) {
    if (problem == null || mutation == null || initializer == null || tracker == null) {
      throw new NullPointerException();
    }
    this.initializer = initializer;
    this.mutation = mutation;
    this.tracker = tracker;
    pOpt = problem;
    pOptInt = null;
    // default on purpose: elapsedEvals = 0;
    sr = new DoubleCostSingleRun();
  }

  /**
   * Creates a OnePlusOneEvolutionaryAlgorithm instance for integer-valued optimization problems.
   *
   * @param problem An instance of an optimization problem to solve.
   * @param mutation A mutation operator supporting the undo operation.
   * @param initializer The source of random initial states.
   * @param tracker A ProgressTracker object, which is used to keep track of the best solution found
   *     during the run, the time when it was found, and other related data.
   * @throws NullPointerException if any of the parameters are null
   */
  public OnePlusOneEvolutionaryAlgorithm(
      IntegerCostOptimizationProblem<T> problem,
      UndoableMutationOperator<T> mutation,
      Initializer<T> initializer,
      ProgressTracker<T> tracker) {
    if (problem == null || mutation == null || initializer == null || tracker == null) {
      throw new NullPointerException();
    }
    this.initializer = initializer;
    this.mutation = mutation;
    this.tracker = tracker;
    pOptInt = problem;
    pOpt = null;
    // default on purpose: elapsedEvals = 0;
    sr = new IntCostSingleRun();
  }

  /*
   * package-private copy constructor in support of the split method, and so that subclasses can also use it.
   * note: copies references to thread-safe components, and splits potentially non-threadsafe components
   */
  OnePlusOneEvolutionaryAlgorithm(OnePlusOneEvolutionaryAlgorithm<T> other) {
    // these are threadsafe, so just copy references
    pOpt = other.pOpt;
    pOptInt = other.pOptInt;

    // this one must be shared.
    tracker = other.tracker;

    // split these: not threadsafe
    initializer = other.initializer.split();
    mutation = other.mutation.split();

    sr = pOptInt != null ? new IntCostSingleRun() : new DoubleCostSingleRun();
  }

  /**
   * Continues optimizing starting from the previous best found solution contained in the tracker
   * object, rather than from a random one. If no prior run had been performed, then this method
   * starts the run from a randomly generated solution.
   *
   * @param maxEvals The maximum number of evaluations (i.e., iterations) to execute.
   * @return the current solution at the end of this run and its cost, which may or may not be the
   *     best of run solution, and which may or may not be the same as the solution contained in
   *     this instance's {@link ProgressTracker}, which contains the best of all runs. Returns null
   *     if the run did not execute, such as if the ProgressTracker already contains the theoretical
   *     best solution.
   */
  @Override
  public final SolutionCostPair<T> reoptimize(int maxEvals) {
    if (tracker.didFindBest() || tracker.isStopped()) return null;
    T start = tracker.getSolution();
    if (start == null) start = initializer.createCandidateSolution();
    else start = start.copy();
    return sr.optimizeSingleRun(maxEvals, start);
  }

  /**
   * Runs the EA beginning at a random initial solution.
   *
   * @param maxEvals The maximum number of evaluations (i.e., iterations) to execute.
   * @return the current solution at the end of this run and its cost, which may or may not be the
   *     best of run solution, and which may or may not be the same as the solution contained in
   *     this instance's {@link ProgressTracker}, which contains the best of all runs. Returns null
   *     if the run did not execute, such as if the ProgressTracker already contains the theoretical
   *     best solution.
   */
  @Override
  public final SolutionCostPair<T> optimize(int maxEvals) {
    if (tracker.didFindBest() || tracker.isStopped()) return null;
    return sr.optimizeSingleRun(maxEvals, initializer.createCandidateSolution());
  }

  /**
   * Runs the EA beginning at a specified initial solution.
   *
   * @param maxEvals The maximum number of evaluations (i.e., iterations) to execute.
   * @param start The desired starting solution.
   * @return the current solution at the end of this run and its cost, which may or may not be the
   *     best of run solution, and which may or may not be the same as the solution contained in
   *     this instance's {@link ProgressTracker}, which contains the best of all runs. Returns null
   *     if the run did not execute, such as if the ProgressTracker already contains the theoretical
   *     best solution.
   */
  @Override
  public final SolutionCostPair<T> optimize(int maxEvals, T start) {
    if (tracker.didFindBest() || tracker.isStopped()) return null;
    return sr.optimizeSingleRun(maxEvals, start.copy());
  }

  @Override
  public final Problem<T> getProblem() {
    return (pOptInt != null) ? pOptInt : pOpt;
  }

  @Override
  public final ProgressTracker<T> getProgressTracker() {
    return tracker;
  }

  @Override
  public final void setProgressTracker(ProgressTracker<T> tracker) {
    if (tracker != null) this.tracker = tracker;
  }

  @Override
  public OnePlusOneEvolutionaryAlgorithm<T> split() {
    return new OnePlusOneEvolutionaryAlgorithm<T>(this);
  }

  /**
   * Gets the total number of evaluations (iterations) performed by this EA object. This is the
   * total number of such evaluations across all calls to the optimize and reoptimize methods. This
   * may differ from the combined number of maxEvals passed as a parameter to those methods. For
   * example, those methods terminate if they find the theoretical best solution, and also
   * immediately return if a prior call found the theoretical best. In such cases, the total run
   * length may be less than the requested maxEvals.
   *
   * @return the total number of evaluations
   */
  @Override
  public long getTotalRunLength() {
    return elapsedEvals;
  }

  private interface SingleRun<T extends Copyable<T>> {
    SolutionCostPair<T> optimizeSingleRun(int maxEvals, T current);
  }

  private class IntCostSingleRun implements SingleRun<T> {

    @Override
    public final SolutionCostPair<T> optimizeSingleRun(int maxEvals, T current) {
      // compute cost of start
      int currentCost = pOptInt.cost(current);

      // initialize best cost, etc
      int bestCost = tracker.getCost();
      if (currentCost < bestCost) {
        boolean isMinCost = pOptInt.isMinCost(currentCost);
        bestCost = tracker.update(currentCost, current, isMinCost);
        if (tracker.didFindBest()) {
          // found theoretical best so no point in proceeding
          return new SolutionCostPair<T>(current, currentCost, isMinCost);
        }
      }

      // main EA loop
      for (int i = 1; i <= maxEvals; i++) {
        if (tracker.isStopped()) {
          // some other thread signaled to stop
          elapsedEvals += (i - 1);
          return new SolutionCostPair<T>(current, currentCost, pOptInt.isMinCost(currentCost));
        }
        mutation.mutate(current);
        int neighborCost = pOptInt.cost(current);
        if (neighborCost <= currentCost) {
          // accept the mutant: its cost is better than or equal to the current cost
          currentCost = neighborCost;
          if (currentCost < bestCost) {
            boolean isMinCost = pOptInt.isMinCost(currentCost);
            bestCost = tracker.update(currentCost, current, isMinCost);
            if (tracker.didFindBest()) {
              // found theoretical best so no point in proceeding
              elapsedEvals += i;
              return new SolutionCostPair<T>(current, currentCost, isMinCost);
            }
          }
        } else {
          // reject the mutant and revert back to previous state
          mutation.undo(current);
        }
      }
      elapsedEvals += maxEvals;
      return new SolutionCostPair<T>(current, currentCost, pOptInt.isMinCost(currentCost));
    }
  }

  private class DoubleCostSingleRun implements SingleRun<T> {

    @Override
    public final SolutionCostPair<T> optimizeSingleRun(int maxEvals, T current) {
      // compute cost of start
      double currentCost = pOpt.cost(current);

      // initialize best cost, etc
      double bestCost = tracker.getCostDouble();
      if (currentCost < bestCost) {
        boolean isMinCost = pOpt.isMinCost(currentCost);
        bestCost = tracker.update(currentCost, current, isMinCost);
        if (tracker.didFindBest()) {
          // found theoretical best so no point in proceeding
          return new SolutionCostPair<T>(current, currentCost, isMinCost);
        }
      }

      // main EA loop
      for (int i = 1; i <= maxEvals; i++) {
        if (tracker.isStopped()) {
          // some other thread signaled to stop
          elapsedEvals += (i - 1);
          return new SolutionCostPair<T>(current, currentCost, pOpt.isMinCost(currentCost));
        }
        mutation.mutate(current);
        double neighborCost = pOpt.cost(current);
        if (neighborCost <= currentCost) {
          // accepting the mutant
          currentCost = neighborCost;
          if (currentCost < bestCost) {
            boolean isMinCost = pOpt.isMinCost(currentCost);
            bestCost = tracker.update(currentCost, current, isMinCost);
            if (tracker.didFindBest()) {
              // found theoretical best so no point in proceeding
              elapsedEvals += i;
              return new SolutionCostPair<T>(current, currentCost, isMinCost);
            }
          }
        } else {
          // reject the mutant and revert back to previous state
          mutation.undo(current);
        }
      }
      elapsedEvals += maxEvals;
      return new SolutionCostPair<T>(current, currentCost, pOpt.isMinCost(currentCost));
    }
  }
}
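
As described in the reoptimize documentation above, a run can continue from the best solution recorded in the shared ProgressTracker rather than from a random start. The following sketch is not part of the library: StagedRunExample and runInStages are hypothetical names used to illustrate one way to chain an initial optimize call with continuation runs via reoptimize.

import org.cicirello.search.SolutionCostPair;
import org.cicirello.search.evo.OnePlusOneEvolutionaryAlgorithm;
import org.cicirello.util.Copyable;

public class StagedRunExample {

  // Hypothetical helper: one initial run plus several continuation runs.
  // Returns the end-of-run result of the last stage that executed, or null if
  // no stage ran (e.g., the tracker already held the theoretical best solution).
  static <T extends Copyable<T>> SolutionCostPair<T> runInStages(
      OnePlusOneEvolutionaryAlgorithm<T> ea, int evalsPerStage, int numStages) {
    // The first stage starts from a random solution produced by the Initializer.
    SolutionCostPair<T> result = ea.optimize(evalsPerStage);
    // Later stages continue from the best solution held by the ProgressTracker.
    for (int stage = 1; stage < numStages && result != null; stage++) {
      SolutionCostPair<T> next = ea.reoptimize(evalsPerStage);
      if (next == null) break; // theoretical best found, or search stopped
      result = next;
    }
    return result;
  }
}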



