
org.cicirello.search.problems.scheduling.ExponentialEarlyTardyHeuristic Maven / Gradle / Ivy


Chips-n-Salsa is a Java library of customizable, hybridizable, iterative, parallel, stochastic, and self-adaptive local search algorithms. The library includes implementations of several stochastic local search algorithms, including simulated annealing and hill climbers, as well as constructive search algorithms such as stochastic sampling. Chips-n-Salsa also includes genetic algorithms and evolutionary algorithms more generally, and it supports simulated annealing particularly extensively.

The library includes several classes for representing solutions to a variety of optimization problems. For example, it provides a BitVector class that implements vectors of bits, as well as classes for representing solutions to problems where we are searching for an optimal vector of integers or reals. For each of the built-in representations, the library provides the most common mutation operators for generating random neighbors of candidate solutions, as well as common crossover operators for use with evolutionary algorithms. Additionally, the library provides extensive support for permutation optimization problems, including implementations of many different mutation operators for permutations, and it utilizes the efficiently implemented Permutation class of the JavaPermutationTools (JPT) library.

Chips-n-Salsa is customizable, making extensive use of Java's generic types, which enables using the library to optimize other types of representations beyond those it provides. It is hybridizable, with support for integrating multiple forms of local search (e.g., running a hill climber on a solution generated by simulated annealing), for creating hybrid mutation operators (e.g., local search using multiple mutation operators), and for running more than one type of search on the same problem concurrently in multiple threads as a form of algorithm portfolio.

Chips-n-Salsa is iterative, with support for multistart metaheuristics, including implementations of several restart schedules for varying the run lengths across restarts. It also supports parallel execution of multiple instances of the same, or different, stochastic local search algorithms on an instance of a problem to accelerate the search process. The library supports self-adaptive search in a variety of ways, such as adaptive annealing schedules for simulated annealing (e.g., the Modified Lam schedule), implementations of the simpler annealing schedules that self-tune the initial temperature and other parameters, and restart schedules that adapt to run length.

/*
 * Chips-n-Salsa: A library of parallel self-adaptive local search algorithms.
 * Copyright (C) 2002-2020  Vincent A. Cicirello
 *
 * This file is part of Chips-n-Salsa (https://chips-n-salsa.cicirello.org/).
 *
 * Chips-n-Salsa is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Chips-n-Salsa is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <https://www.gnu.org/licenses/>.
 */

package org.cicirello.search.problems.scheduling;

import org.cicirello.permutations.Permutation;
import org.cicirello.search.ss.IncrementalEvaluation;
import org.cicirello.search.ss.Partial;

/**
 * This class implements a constructive heuristic, known as EXPET, for scheduling problems involving
 * minimizing the sum of weighted earliness plus weighted tardiness. EXPET is an acronym for
 * Exponential Early Tardy.
 *
 * <p>To define the EXPET heuristic, first let he[j] be the weighted longest processing time
 * heuristic of job j, defined as he[j] = -we[j] / p[j], where we[j] is the earliness weight for job
 * j, and p[j] is the processing time of job j. Next, let ht[j] be the weighted shortest processing
 * time heuristic of job j, defined as ht[j] = wt[j] / p[j], where wt[j] is the tardiness weight of
 * job j. Define the slack s[j] of job j as: s[j] = d[j] - T - p[j], where d[j] is the job's due
 * date and T is the current time. Let k ≥ 1 be a lookahead parameter that can be tuned based on
 * problem instance characteristics, and let p̄ be the average processing time of the remaining
 * unscheduled jobs.
 *
 * <p>Now we can define the EXPET heuristic, h[j] for job j, as follows. Case 1: If s[j] ≤ 0,
 * h[j] = ht[j]. Case 2: If s[j] ≥ k*p̄, h[j] = he[j]. Case 3: If 0 < s[j] ≤
 * k*p̄*ht[j]/(ht[j]-he[j]), then h[j] = ht[j] * exp(s[j](ht[j]-he[j])/(he[j]*k*p̄)).
 * Case 4: If k*p̄*ht[j]/(ht[j]-he[j]) < s[j] < k*p̄, then h[j] = he[j]<sup>-2</sup> *
 * (ht[j] - s[j](ht[j]-he[j])/(k*p̄))<sup>3</sup>. For jobs with negative slack, the EXPET
 * heuristic is equivalent to weighted shortest processing time. For jobs with slack greater than
 * some multiple k of the average processing time, EXPET is equivalent to weighted longest
 * processing time.
 *
 * <p>We make one additional adjustment to the heuristic as it was originally described. Since this
 * library's implementations of stochastic sampling algorithms assume that constructive heuristics
 * always produce positive values, we must adjust the values produced by the EXPET heuristic.
 * Specifically, we actually compute h'[j] = h[j] + shift, where shift = {@link #MIN_H} -
 * min(-we[j] / p[j]). The {@link #MIN_H} is a small non-zero value. In this way, we shift all of
 * the h[j] values by a constant amount such that all h[j] values are positive.
 *
 * @author Vincent A. Cicirello, https://www.cicirello.org/
 * @version 10.31.2020
 */
public final class ExponentialEarlyTardyHeuristic extends SchedulingHeuristic {

  private final double[] wlpt;
  private final double[] wspt;
  private final double k;
  private final double shift;
  private final int totalProcessTime;

  /**
   * Constructs an ExponentialEarlyTardyHeuristic heuristic. Uses a default value of k=1.
   *
   * @param problem The instance of a scheduling problem that is the target of the heuristic.
   */
  public ExponentialEarlyTardyHeuristic(SingleMachineSchedulingProblem problem) {
    this(problem, 1.0);
  }

  /**
   * Constructs an ExponentialEarlyTardyHeuristic heuristic.
   *
   * @param problem The instance of a scheduling problem that is the target of the heuristic.
   * @param k A parameter of the heuristic (see class documentation). Must be at least 1.
   * @throws IllegalArgumentException if k < 1.
   */
  public ExponentialEarlyTardyHeuristic(SingleMachineSchedulingProblem problem, double k) {
    super(problem);
    if (k < 1) throw new IllegalArgumentException("k must be at least 1");
    // Pre-compute WLPT and WSPT values, and cache the results.
    wlpt = new double[data.numberOfJobs()];
    wspt = new double[data.numberOfJobs()];
    double minimum = 0;
    for (int i = 0; i < wlpt.length; i++) {
      wlpt[i] = -data.getEarlyWeight(i) / (double) data.getProcessingTime(i);
      if (wlpt[i] < minimum) minimum = wlpt[i];
      wspt[i] = data.getWeight(i) / (double) data.getProcessingTime(i);
    }
    // Shift heuristic values by the minimum WLPT value to ensure all values are positive.
    shift = MIN_H - minimum;
    this.k = k;
    totalProcessTime = sumOfProcessingTimes();
  }

  @Override
  public double h(Partial<Permutation> p, int element, IncrementalEvaluation<Permutation> incEval) {
    double slack = ((IncrementalAverageProcessingCalculator) incEval).slack(element, p);
    if (slack <= 0) {
      return wspt[element] + shift;
    }
    double kpBar = k * ((IncrementalAverageProcessingCalculator) incEval).averageProcessingTime();
    if (slack >= kpBar) {
      return wlpt[element] + shift;
    }
    double bound1 = kpBar * wspt[element] / (wspt[element] - wlpt[element]);
    if (slack <= bound1) {
      return wspt[element]
              * Math.exp(slack * (wspt[element] - wlpt[element]) / (wlpt[element] * kpBar))
          + shift;
    } else {
      double numTerm = wspt[element] - slack * (wspt[element] - wlpt[element]) / kpBar;
      return numTerm * numTerm * numTerm / (wlpt[element] * wlpt[element]) + shift;
    }
  }

  @Override
  public IncrementalEvaluation<Permutation> createIncrementalEvaluation() {
    return new IncrementalAverageProcessingCalculator(totalProcessTime);
  }
}
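
The piecewise definition in the class comment is easier to follow with concrete numbers. The following standalone sketch is not part of the library; it uses made-up job data (the processing time, weights, due date, lookahead k, and average remaining processing time are all invented for illustration) to evaluate the four EXPET cases and the positivity shift. The value used for MIN_H is an assumption, since only its role as a small non-zero constant is described above.

// Standalone sketch (not part of the library): evaluates the four EXPET cases for one
// hypothetical job, using made-up numbers purely for illustration.
public class ExpetSketch {

  // Piecewise EXPET value for a single job, before the positivity shift.
  // he = -we/p (weighted longest processing time), ht = wt/p (weighted shortest
  // processing time), slack = d - T - p, kpBar = k * average remaining processing time.
  static double expet(double he, double ht, double slack, double kpBar) {
    if (slack <= 0) {
      return ht;                                  // Case 1: no slack -> WSPT
    }
    if (slack >= kpBar) {
      return he;                                  // Case 2: plenty of slack -> WLPT
    }
    double bound = kpBar * ht / (ht - he);
    if (slack <= bound) {
      return ht * Math.exp(slack * (ht - he) / (he * kpBar));   // Case 3
    }
    double term = ht - slack * (ht - he) / kpBar;
    return term * term * term / (he * he);        // Case 4: he^(-2) * (...)^3
  }

  public static void main(String[] args) {
    // Hypothetical job: p = 4, we = 2, wt = 6, due date d = 10, current time T = 0,
    // k = 1, and an assumed average remaining processing time of 5.
    int p = 4;
    double he = -2.0 / p;       // -we/p = -0.5
    double ht = 6.0 / p;        //  wt/p =  1.5
    double slack = 10 - 0 - p;  //  d - T - p = 6
    double kpBar = 1.0 * 5.0;   //  k * pBar = 5

    // slack (6) >= kpBar (5), so Case 2 applies and the raw value equals he = -0.5.
    double h = expet(he, ht, slack, kpBar);

    // The library adds a shift so sampled heuristic values stay positive; here we mimic
    // that with shift = MIN_H - min(he[j]) over this one job, using MIN_H = 0.00001
    // (a small constant assumed only for this illustration).
    double shift = 0.00001 - he;
    System.out.println("raw h = " + h + ", shifted h = " + (h + shift));
  }
}

With these numbers the slack (6) meets k*p̄ (5), so Case 2 applies and the job is ranked by weighted longest processing time; shrinking the due date toward T + p would move the job through Case 4, then Case 3, and finally Case 1.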




