burlap.behavior.singleagent.shaping.potential.PotentialFunction
The Brown-UMBC Reinforcement Learning and Planning (BURLAP) Java code library is for the use and
development of single- and multi-agent planning and learning algorithms, along with the domains that accompany them. The library
uses a highly flexible state/observation representation in which you define states with your own Java classes, enabling
support for domains that are discrete, continuous, relational, or anything else. Planning and learning algorithms range from classic forward-search
planning to value-function-based stochastic planning and learning algorithms.
package burlap.behavior.singleagent.shaping.potential;
import burlap.mdp.core.state.State;
/**
* Defines an interface for reward potential functions, which are used by potential-based reward shaping. Note: potential functions
* should always be defined to return 0 for terminal states.
* @author James MacGlashan
*
*/
public interface PotentialFunction {

    /**
     * Returns the reward potential from the given state.
     * Note: the potential function should always return 0 for terminal states.
     * @param s the input state for which to get the reward potential.
     * @return the reward potential from the given state.
     */
    double potentialValue(State s);

}
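
As an illustration, below is a minimal sketch of a concrete potential function for a simple grid-world task. The "x" and "y" variable keys and the goal coordinates are assumptions about the state representation, not part of BURLAP's API; adapt the accessors to your own state class. The potential is the negated Manhattan distance to a goal cell and is 0 at the goal, in keeping with the terminal-state convention noted above.

import burlap.behavior.singleagent.shaping.potential.PotentialFunction;
import burlap.mdp.core.state.State;

/**
 * Sketch of a potential function for a grid-world domain. The "x"/"y"
 * variable keys and the goal coordinates are assumptions about the state
 * representation, not something defined by BURLAP itself.
 */
public class NegativeManhattanDistancePotential implements PotentialFunction {

    private final int goalX;
    private final int goalY;

    public NegativeManhattanDistancePotential(int goalX, int goalY) {
        this.goalX = goalX;
        this.goalY = goalY;
    }

    @Override
    public double potentialValue(State s) {
        // Hypothetical variable keys; replace with accessors for your own state class.
        int x = (Integer) s.get("x");
        int y = (Integer) s.get("y");

        // Terminal (goal) states must have zero potential.
        if (x == goalX && y == goalY) {
            return 0.;
        }

        // Potential increases toward 0 as the agent approaches the goal.
        return -(Math.abs(goalX - x) + Math.abs(goalY - y));
    }
}

Such a function can then be supplied to a potential-based shaped reward function (see, for example, PotentialShapedRF in this same package), which adds the shaping term gamma * potential(s') - potential(s) to the environment reward without changing the optimal policy.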