Download all versions of multiLayerPerceptrons JAR files with all dependencies
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.10)
This package currently contains classes for training multilayer perceptrons with one hidden layer, where the number of hidden units is user specified. MLPClassifier can be used for classification problems; MLPRegressor is the corresponding class for numeric prediction tasks. The former has as many output units as there are classes, the latter only one output unit. Both minimise a penalised squared error with a quadratic penalty on the (non-bias) weights, i.e., they implement "weight decay", where this penalised error is averaged over all training instances. The size of the penalty can be adjusted by the user via the "ridge" parameter to control overfitting. The sum of squared weights is multiplied by this parameter before being added to the squared error. Both classes use BFGS optimisation by default to find parameters that correspond to a local minimum of the error function, but conjugate gradient descent is optionally available, which can be faster for problems with many parameters. Logistic functions are used as the activation functions for all units apart from the output unit in MLPRegressor, which employs the identity function. Input attributes are standardised to zero mean and unit variance. MLPRegressor also rescales the target attribute (i.e., the "class") using standardisation. All network parameters are initialised with small normally distributed random values.
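The objective described above can be sketched in a few lines of NumPy. This is an illustrative sketch of the penalised squared error for a one-hidden-layer regressor, not the Weka implementation; all variable names are invented:

```python
import numpy as np

def penalised_squared_error(X, y, W1, b1, w2, b2, ridge=0.01):
    """Squared error averaged over all training instances, plus
    ridge times the sum of squared non-bias weights ("weight decay")."""
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # logistic hidden units
    y_hat = H @ w2 + b2                       # identity output unit (regression)
    squared_error = np.mean((y_hat - y) ** 2)
    penalty = ridge * (np.sum(W1 ** 2) + np.sum(w2 ** 2))  # bias terms excluded
    return squared_error + penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise inputs: zero mean, unit variance
W1 = 0.1 * rng.normal(size=(3, 4))        # small normally distributed initial weights
b1 = np.zeros(4)
w2 = 0.1 * rng.normal(size=4)
b2 = 0.0
loss = penalised_squared_error(X, y, W1, b1, w2, b2)
```

Since the penalty is non-negative, increasing the ridge parameter can only increase the objective for fixed weights, which is how it discourages large weights and controls overfitting.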
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.10
Last update 31 October 2016
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
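Assuming the artifact is available from a Maven repository, the coordinates listed above would translate into a POM dependency declaration along these lines (a sketch; the weka-dev dependency is resolved transitively):

```xml
<dependency>
  <groupId>nz.ac.waikato.cms.weka</groupId>
  <artifactId>multiLayerPerceptrons</artifactId>
  <version>1.0.10</version>
</dependency>
```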
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.9)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.9
Last update 14 October 2016
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.8)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.8
Last update 10 November 2015
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.7)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.7
Last update 15 July 2014
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.5)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.5
Last update 9 September 2013
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.4)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.4
Last update 7 September 2013
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.3)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.3
Last update 7 September 2013
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.2)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.2
Last update 27 August 2013
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies (1): weka-dev
There may be transitive dependencies.
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.1)
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.1
Last update 25. November 2012
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev
There may be transitive dependencies!
multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.0)
This package currently contains classes for training multilayer perceptrons with one hidden layer, where the number of hidden units is user-specified. MLPClassifier can be used for classification problems and MLPRegressor is the corresponding class for numeric prediction tasks. The former has as many output units as there are classes, the latter only one output unit. Both minimise a penalised squared error with a quadratic penalty on the (non-bias) weights, i.e., they implement "weight decay", where this penalised error is averaged over all training instances. The size of the penalty can be set by the user via the "ridge" parameter to control overfitting: the sum of squared weights is multiplied by this parameter before being added to the squared error. Both classes use BFGS optimisation by default to find parameters that correspond to a local minimum of the error function, but conjugate gradient descent is optionally available, which can be faster for problems with many parameters. Logistic functions are used as the activation functions for all units apart from the output unit in MLPRegressor, which employs the identity function. Input attributes are standardised to zero mean and unit variance. MLPRegressor also rescales the target attribute (i.e., the "class") using standardisation. All network parameters are initialised with small normally distributed random values.
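The standardisation step mentioned in the description (input attributes, and in MLPRegressor also the target, rescaled to zero mean and unit variance) amounts to the following. The function name is illustrative, not part of Weka's API.

```python
import math

def standardise(values):
    """Rescale a list of numbers to zero mean and unit variance,
    as applied to each input attribute (and, in MLPRegressor, the target)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var) if var > 0 else 1.0  # guard against constant attributes
    return [(v - mean) / std for v in values]

raw = [23.0, 35.0, 41.0, 29.0]
print(standardise(raw))
```

Standardising the inputs keeps all attributes on a comparable scale, which makes a single ridge value meaningful across weights and helps the BFGS optimiser converge.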
10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.0
Last update 23. October 2012
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev
There may be transitive dependencies!
Page 1 of 1 (10 items total)
© 2015 - 2024 Weber Informatics LLC