
Download JAR files tagged with bfgs, including all dependencies


lbfgs from group com.dbtsai.lbfgs (version 0.1)

A Java port of L-BFGS-variant optimization algorithms.

Group: com.dbtsai.lbfgs Artifact: lbfgs

8 downloads
Artifact lbfgs
Group com.dbtsai.lbfgs
Version 0.1
Last update 25. February 2014
Organization com.dbtsai.lbfgs
URL https://github.com/dbtsai/lbfgs
License Apache License 2.0
Dependencies amount 1
Dependencies scala-library
There may be transitive dependencies.

lbfgs4j from group com.github.vinhkhuc (version 0.2.1)

lbfgs4j is a Java version of liblbfgs, a library implementing the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method.
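For illustration, here is a minimal sketch of minimizing a smooth two-variable function with lbfgs4j, following the Function/LbfgsMinimizer usage shown in the project's README; treat the exact package paths and signatures as assumptions and verify them against the library's sources.

    import com.github.lbfgs4j.LbfgsMinimizer;
    import com.github.lbfgs4j.liblbfgs.Function;
    import java.util.Arrays;

    public class Lbfgs4jSketch {
        public static void main(String[] args) {
            // Objective f(x) = (x0 - 1)^2 + (x1 + 2)^2, minimum at (1, -2).
            Function f = new Function() {
                public int getDimension() { return 2; }
                public double valueAt(double[] x) {
                    return Math.pow(x[0] - 1, 2) + Math.pow(x[1] + 2, 2);
                }
                public double[] gradientAt(double[] x) {
                    return new double[] { 2 * (x[0] - 1), 2 * (x[1] + 2) };
                }
            };
            LbfgsMinimizer minimizer = new LbfgsMinimizer();
            double[] x = minimizer.minimize(f); // should approach (1, -2)
            System.out.println(Arrays.toString(x));
        }
    }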

Group: com.github.vinhkhuc Artifact: lbfgs4j

5 downloads
Artifact lbfgs4j
Group com.github.vinhkhuc
Version 0.2.1
Last update 06. February 2017
Organization not specified
URL http://maven.apache.org
License MIT License
Dependencies amount 0
Dependencies none
There may be transitive dependencies.

LBFGS from group com.github.thssmonkey (version 1.0.4)

Limited-memory BFGS (L-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize f(x) over unconstrained values of the real vector x, where f is a differentiable scalar function.
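The heart of L-BFGS is the "two-loop recursion", which applies the implicit inverse-Hessian approximation to the current gradient using only the m most recent correction pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i. Below is a generic Java sketch of that textbook recursion, not the API of any artifact listed here.

    public class TwoLoopSketch {
        // Returns H_k * g; the search direction is the negation of the result.
        // s and y hold the m most recent correction pairs, oldest first.
        static double[] twoLoopRecursion(double[] g, double[][] s, double[][] y) {
            int m = s.length, n = g.length;
            double[] q = g.clone();
            double[] alpha = new double[m], rho = new double[m];
            for (int i = m - 1; i >= 0; i--) {          // newest to oldest
                rho[i] = 1.0 / dot(y[i], s[i]);
                alpha[i] = rho[i] * dot(s[i], q);
                for (int j = 0; j < n; j++) q[j] -= alpha[i] * y[i][j];
            }
            // Initial scaling H_0 = gamma * I, using the newest pair.
            double gamma = dot(s[m - 1], y[m - 1]) / dot(y[m - 1], y[m - 1]);
            for (int j = 0; j < n; j++) q[j] *= gamma;
            for (int i = 0; i < m; i++) {               // oldest to newest
                double beta = rho[i] * dot(y[i], q);
                for (int j = 0; j < n; j++) q[j] += (alpha[i] - beta) * s[i][j];
            }
            return q;
        }

        static double dot(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
            return sum;
        }
    }

Because only the m pairs are stored, memory use is O(m*n) rather than the O(n^2) a dense BFGS inverse-Hessian approximation would require.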

Group: com.github.thssmonkey Artifact: LBFGS

0 downloads
Artifact LBFGS
Group com.github.thssmonkey
Version 1.0.4
Last update 16. May 2019
Organization not specified
URL https://github.com/thssmonkey/LBFGS
License The Apache Software License, Version 2.0
Dependencies amount 4
Dependencies flink-scala_${scala.binary.version}, flink-streaming-scala_${scala.binary.version}, flink-clients_${scala.binary.version}, flink-ml_${scala.binary.version}
There may be transitive dependencies.

RBFNetwork from group nz.ac.waikato.cms.weka (version 1.0.8)

RBFNetwork implements a normalized Gaussian radial basis function network. It uses the k-means clustering algorithm to provide the basis functions and learns either a logistic regression (discrete class problems) or linear regression (numeric class problems) on top of that. Symmetric multivariate Gaussians are fit to the data from each cluster. If the class is nominal, it uses the given number of clusters per class. RBFRegressor implements radial basis function networks for regression, trained in a fully supervised manner using WEKA's Optimization class by minimizing squared error with the BFGS method. It is possible to use conjugate gradient descent rather than BFGS updates, which is faster for cases with many parameters, and to use normalized basis functions instead of unnormalized ones.
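A hedged sketch of training this classifier through the standard WEKA API follows; buildClassifier and DataSource are standard WEKA, while the package path weka.classifiers.functions.RBFNetwork and the setNumClusters setter name are assumptions to check against the package's javadoc.

    import weka.classifiers.functions.RBFNetwork;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class RbfNetworkSketch {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("iris.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1); // nominal class -> logistic regression on top
            RBFNetwork rbf = new RBFNetwork();
            rbf.setNumClusters(3);   // assumed setter for the k-means clusters per class
            rbf.buildClassifier(data);
            System.out.println(rbf);
        }
    }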

Group: nz.ac.waikato.cms.weka Artifact: RBFNetwork

11 downloads
Artifact RBFNetwork
Group nz.ac.waikato.cms.weka
Version 1.0.8
Last update 16. January 2015
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/RBFNetwork
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev
There may be transitive dependencies.

kernelLogisticRegression from group nz.ac.waikato.cms.weka (version 1.0.0)

This package contains a classifier that can be used to train a two-class kernel logistic regression model with the kernel functions that are available in WEKA. It optimises the negative log-likelihood with a quadratic penalty. Both BFGS and conjugate gradient descent are available as optimisation methods, but the former is normally faster. It is possible to use multiple threads, but the speed-up is generally very marginal when used with BFGS optimisation. With conjugate gradient descent optimisation, greater speed-ups can be achieved when using multiple threads. With the default kernel, the dot product kernel, this method produces results that are close to identical to those obtained using standard logistic regression in WEKA, provided a sufficiently large value for the parameter determining the size of the quadratic penalty is used in both cases.
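A hedged usage sketch, assuming the package exposes a KernelLogisticRegression classifier that accepts any WEKA kernel through a setKernel method; the class and setter names are inferred from this description, while RBFKernel, DataSource, and buildClassifier are standard WEKA API.

    import weka.classifiers.functions.KernelLogisticRegression;
    import weka.classifiers.functions.supportVector.RBFKernel;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class KlrSketch {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("diabetes.arff").getDataSet(); // a two-class problem
            data.setClassIndex(data.numAttributes() - 1);
            KernelLogisticRegression klr = new KernelLogisticRegression();
            klr.setKernel(new RBFKernel()); // default would be the dot product kernel
            klr.buildClassifier(data);
            System.out.println(klr);
        }
    }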

Group: nz.ac.waikato.cms.weka Artifact: kernelLogisticRegression

0 downloads
Artifact kernelLogisticRegression
Group nz.ac.waikato.cms.weka
Version 1.0.0
Last update 26. June 2013
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/kernelLogisticRegression
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev
There may be transitive dependencies.

multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.10)

This package currently contains classes for training multilayer perceptrons with one hidden layer, where the number of hidden units is user specified. MLPClassifier can be used for classification problems and MLPRegressor is the corresponding class for numeric prediction tasks. The former has as many output units as there are classes; the latter has only one output unit. Both minimise a penalised squared error with a quadratic penalty on the (non-bias) weights, i.e., they implement "weight decay", where this penalised error is averaged over all training instances. The size of the penalty can be determined by the user by modifying the "ridge" parameter to control overfitting. The sum of squared weights is multiplied by this parameter before being added to the squared error. Both classes use BFGS optimisation by default to find parameters that correspond to a local minimum of the error function, but optionally conjugate gradient descent is available, which can be faster for problems with many parameters. Logistic functions are used as the activation functions for all units apart from the output unit in MLPRegressor, which employs the identity function. Input attributes are standardised to zero mean and unit variance. MLPRegressor also rescales the target attribute (i.e., "class") using standardisation. All network parameters are initialised with small normally distributed random values.
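A hedged sketch of training the MLPClassifier described above with an explicit weight-decay setting; setNumFunctions and setRidge are assumed to mirror the package's hidden-unit and ridge options, so check the javadoc before relying on them.

    import weka.classifiers.functions.MLPClassifier;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class MlpSketch {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("iris.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);
            MLPClassifier mlp = new MLPClassifier();
            mlp.setNumFunctions(5); // assumed setter: number of hidden units
            mlp.setRidge(0.01);     // assumed setter: weight-decay penalty; larger values shrink weights harder
            mlp.buildClassifier(data);
            System.out.println(mlp);
        }
    }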

Group: nz.ac.waikato.cms.weka Artifact: multiLayerPerceptrons

10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.10
Last update 31. October 2016
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev
There may be transitive dependencies.



Page 1 of 1 (6 items total)

