
org.cicirello.search.sa.ModifiedLamOriginal


Chips-n-Salsa is a Java library of customizable, hybridizable, iterative, parallel, stochastic, and self-adaptive local search algorithms. The library includes implementations of several stochastic local search algorithms, including simulated annealing and hill climbers, as well as constructive search algorithms such as stochastic sampling. Chips-n-Salsa now also includes genetic algorithms, as well as evolutionary algorithms more generally.

The library provides especially extensive support for simulated annealing. It includes several classes for representing solutions to a variety of optimization problems. For example, the library includes a BitVector class that implements vectors of bits, as well as classes for representing solutions to problems where we are searching for an optimal vector of integers or reals. For each of the built-in representations, the library provides the most common mutation operators for generating random neighbors of candidate solutions, as well as common crossover operators for use with evolutionary algorithms. Additionally, the library provides extensive support for permutation optimization problems, including implementations of many different mutation operators for permutations, built on the efficiently implemented Permutation class of the JavaPermutationTools (JPT) library.

Chips-n-Salsa is customizable, making extensive use of Java's generic types, which enables using the library to optimize representations beyond those it provides. It is hybridizable, with support for integrating multiple forms of local search (e.g., using a hill climber on a solution generated by simulated annealing), creating hybrid mutation operators (e.g., local search using multiple mutation operators), and running more than one type of search for the same problem concurrently using multiple threads as a form of algorithm portfolio.
Chips-n-Salsa is iterative, with support for multistart metaheuristics, including implementations of several restart schedules for varying the run lengths across the restarts. It also supports parallel execution of multiple instances of the same, or different, stochastic local search algorithms for an instance of a problem to accelerate the search process. The library supports self-adaptive search in a variety of ways: implementations of adaptive annealing schedules for simulated annealing, such as the Modified Lam schedule; implementations of simpler annealing schedules that self-tune the initial temperature and other parameters; and restart schedules that adapt to run length.
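The Modified Lam schedule mentioned above adjusts the temperature so that the observed acceptance rate tracks a three-phase target trajectory: an exponential decay from 1.0 to 0.44 over the first 15% of the run, a constant 0.44 through the middle 50%, and a further decay toward roughly 0.001 by the end. As a standalone sketch (not library code; the formulas mirror those in the `updateSchedule` method of the class listed below), the target rate can be computed as:

```java
// Standalone illustration of the three-phase Modified Lam target
// acceptance rate. Not part of the Chips-n-Salsa library.
public class TargetRateDemo {
  // Target acceptance rate at iteration i of a run of length maxEvals.
  public static double targetRate(int i, int maxEvals) {
    double phase1 = 0.15 * maxEvals; // phase 1 is the first 15% of the run
    double phase2 = 0.65 * maxEvals; // phase 2 ends at 65% of the run
    if (i <= phase1) {
      // Exponential decay from 1.0 down toward 0.44.
      return 0.44 + 0.56 * Math.pow(560, -1.0 * i / phase1);
    } else if (i > phase2) {
      // Exponential decay from 0.44 toward approximately 0.001 at run end.
      return 0.44 * Math.pow(440, -(1.0 * i / maxEvals - 0.65) / 0.35);
    } else {
      // Middle 50% of the run: constant target of 0.44.
      return 0.44;
    }
  }

  public static void main(String[] args) {
    int maxEvals = 10000;
    System.out.printf("start:  %.3f%n", targetRate(0, maxEvals));
    System.out.printf("middle: %.3f%n", targetRate(5000, maxEvals));
    System.out.printf("end:    %.3f%n", targetRate(maxEvals, maxEvals));
  }
}
```

At iteration 0 this yields exactly 1.0, mid-run it is 0.44, and at the final iteration it is 0.44/440 = 0.001.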

There is a newer version: 7.0.1
/*
 * Chips-n-Salsa: A library of parallel self-adaptive local search algorithms.
 * Copyright (C) 2002-2021  Vincent A. Cicirello
 *
 * This file is part of Chips-n-Salsa (https://chips-n-salsa.cicirello.org/).
 *
 * Chips-n-Salsa is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Chips-n-Salsa is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <https://www.gnu.org/licenses/>.
 */

package org.cicirello.search.sa;

import java.util.concurrent.ThreadLocalRandom;

/**
 * This class implements the Modified Lam annealing schedule, which dynamically adjusts simulated
 * annealing's temperature parameter up and down to either decrease or increase the neighbor
 * acceptance rate as necessary to attempt to match a theoretically determined ideal. The Modified
 * Lam annealing schedule is a practical realization of Lam and Delosme's (1988) schedule, refined
 * first by Swartz (1993) and then further by Boyan (1998). For complete details of the Modified Lam
 * schedule, along with its origins and rationale, see the following references:
 *
 * <ul>
 * <li>Lam, J., and Delosme, J. 1988. Performance of a new annealing schedule. In Proc. 25th
 * ACM/IEEE DAC, 306–311.</li>
 * <li>Swartz, W. P. 1993. Automatic Layout of Analog and Digital Mixed Macro/Standard Cell
 * Integrated Circuits. Ph.D. Dissertation, Yale University.</li>
 * <li>Boyan, J. A. 1998. Learning Evaluation Functions for Global Optimization. Ph.D.
 * Dissertation, Carnegie Mellon University, Pittsburgh, PA.</li>
 * </ul>
 *
 * <p>This class, ModifiedLamOriginal, is a direct implementation of the Modified Lam schedule as
 * described in the reference to Boyan above. In most cases, if you want to use the Modified Lam
 * schedule, you should prefer the {@link ModifiedLam} class, which includes a variety of
 * optimizations to speed up the updating of schedule parameters. This ModifiedLamOriginal class is
 * included in the library for investigating the benefit of the optimizations incorporated into the
 * {@link ModifiedLam} class (see that class's documentation for a description of the specific
 * optimizations made).
 *
 * <p>The {@link #accept} methods of this class use the classic, and most common, Boltzmann
 * distribution for determining whether to accept a neighbor.
 *
 * @author Vincent A. Cicirello, https://www.cicirello.org/
 * @version 1.22.2021
 */
public final class ModifiedLamOriginal implements AnnealingSchedule {

  private double t;
  private double acceptRate;
  private double targetRate;

  private double phase1;
  private double phase2;
  private int iterationCount;
  private int lastMaxEvals;

  /**
   * Default constructor. The Modified Lam annealing schedule, unlike other annealing schedules, has
   * no control parameters other than the run length (the maxEvals parameter of the {@link #init}
   * method), so no parameters need be passed to the constructor.
   */
  public ModifiedLamOriginal() {
    lastMaxEvals = -1;
  }

  @Override
  public void init(int maxEvals) {
    t = 0.5;
    acceptRate = 0.5;
    targetRate = 1.0;
    iterationCount = 0;
    if (lastMaxEvals != maxEvals) {
      // These don't change during the run, and only depend
      // on maxEvals. So initialize only if run length
      // has changed.
      phase1 = 0.15 * maxEvals;
      phase2 = 0.65 * maxEvals;
      lastMaxEvals = maxEvals;
    }
  }

  @Override
  public boolean accept(double neighborCost, double currentCost) {
    boolean doAccept =
        neighborCost <= currentCost
            || ThreadLocalRandom.current().nextDouble()
                < Math.exp((currentCost - neighborCost) / t);
    updateSchedule(doAccept);
    return doAccept;
  }

  @Override
  public ModifiedLamOriginal split() {
    return new ModifiedLamOriginal();
  }

  private void updateSchedule(boolean doAccept) {
    if (doAccept) acceptRate = 0.998 * acceptRate + 0.002;
    else acceptRate = 0.998 * acceptRate;
    iterationCount++;

    if (iterationCount <= phase1) {
      targetRate = 0.44 + 0.56 * Math.pow(560, -1.0 * iterationCount / phase1);
    } else if (iterationCount > phase2) {
      targetRate = 0.44 * Math.pow(440, -(1.0 * iterationCount / lastMaxEvals - 0.65) / 0.35);
    } else {
      // Phase 2 (50% of run beginning after phase 1): constant targetRate at 0.44.
      targetRate = 0.44;
    }

    if (acceptRate > targetRate) t *= 0.999;
    else t /= 0.999;
  }

  /*
   * package-private for unit testing
   */
  double getTargetRate() {
    return targetRate;
  }

  /*
   * package-private for unit testing
   */
  double getAcceptRate() {
    return acceptRate;
  }

  /*
   * package-private for unit testing
   */
  double getTemperature() {
    return t;
  }
}
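To see the mechanism in action outside the library, the sketch below runs a minimal simulated annealing loop on a toy problem (minimizing f(x) = x² over the integers), using the same Boltzmann acceptance rule and the same multiplicative temperature correction as the schedule above. The toy problem, the ±1 neighbor move, and the fixed 0.44 target (the schedule's mid-run target, used here in place of the full three-phase trajectory) are illustrative choices, not library code.

```java
// Minimal self-contained sketch of an adaptive-temperature SA run.
// Constants 0.5, 0.998/0.002, 0.44, and 0.999 are borrowed from the
// ModifiedLamOriginal schedule above; everything else is illustrative.
import java.util.Random;

public class TinySA {
  // Minimize f(x) = x*x over integers, starting from x = 100.
  // Returns the best (lowest-cost) x encountered.
  public static int minimize(long seed, int iterations) {
    Random rng = new Random(seed);
    int x = 100;
    int best = x;
    double t = 0.5;          // initial temperature, as in init() above
    double acceptRate = 0.5; // exponentially weighted acceptance estimate
    for (int i = 0; i < iterations; i++) {
      int neighbor = x + (rng.nextBoolean() ? 1 : -1); // random +/-1 move
      double current = (double) x * x;
      double proposed = (double) neighbor * neighbor;
      // Boltzmann acceptance: always accept improvements; accept a
      // worsening move with probability exp(-delta / t).
      boolean accept =
          proposed <= current || rng.nextDouble() < Math.exp((current - proposed) / t);
      if (accept) x = neighbor;
      if (x * x < best * best) best = x;
      // Update the acceptance estimate and nudge the temperature down
      // when accepting too often, up when accepting too rarely.
      acceptRate = 0.998 * acceptRate + (accept ? 0.002 : 0.0);
      if (acceptRate > 0.44) t *= 0.999;
      else t /= 0.999;
    }
    return best;
  }

  public static void main(String[] args) {
    System.out.println("best found: " + minimize(42, 20000));
  }
}
```

With a run long enough relative to the distance from the optimum, the loop reliably reaches x = 0; the temperature feedback keeps the acceptance rate hovering near the target rather than following a fixed cooling curve.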




