
burlap.behavior.functionapproximation.dense.DenseStateFeatures

The Brown-UMBC Reinforcement Learning and Planning (BURLAP) Java code library is for the use and development of single- or multi-agent planning and learning algorithms and the domains that accompany them. The library uses a highly flexible state/observation representation in which you define states with your own Java classes, enabling support for domains that are discrete, continuous, relational, or anything else. Planning and learning algorithms range from classic forward-search planning to value-function-based stochastic planning and learning algorithms.

package burlap.behavior.functionapproximation.dense;

import burlap.mdp.core.state.State;

/**
 * Many function approximation techniques require a fixed-length feature vector to work, and in many cases it is useful to derive
 * abstract features from the state attributes. This interface provides a means to take a BURLAP OO-MDP state and transform it into
 * a feature vector represented as a double array so that these function approximation techniques may be used.
 * @author James MacGlashan
 *
 */
public interface DenseStateFeatures {
	
	/**
	 * Returns a feature vector represented as a double array for a given input state.
	 * @param s the input state to turn into a feature vector.
	 * @return the feature vector represented as a double array.
	 */
	double [] features(State s);

	/**
	 * Returns a copy of this {@link DenseStateFeatures}
	 * @return a copy of this {@link DenseStateFeatures}
	 */
	DenseStateFeatures copy();
	
}
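For illustration, below is a minimal sketch of an implementation that flattens every state variable into a feature vector. The class name AllNumericFeatures is hypothetical, and the sketch assumes the BURLAP 3 State accessors variableKeys() and get(Object); a real implementation would need to handle non-numeric variables explicitly.

package burlap.behavior.functionapproximation.dense;

import burlap.mdp.core.state.State;

import java.util.List;

/**
 * Illustrative sketch (not part of BURLAP): builds a feature vector by reading
 * every state variable and interpreting its value as a number.
 */
public class AllNumericFeatures implements DenseStateFeatures {

	@Override
	public double[] features(State s) {
		// Assumes the BURLAP 3 State accessors variableKeys() and get(Object).
		List<Object> keys = s.variableKeys();
		double[] fv = new double[keys.size()];
		for (int i = 0; i < keys.size(); i++) {
			Object val = s.get(keys.get(i));
			fv[i] = ((Number) val).doubleValue();
		}
		return fv;
	}

	@Override
	public DenseStateFeatures copy() {
		// This implementation is stateless, so a fresh instance suffices.
		return new AllNumericFeatures();
	}
}

A vector produced this way could then be handed to any function approximator or learning algorithm that expects a fixed-length double array of features.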



