Download JAR files tagged by bias with all dependencies

xianlaocai-quant from group com.xianlaocai.quant (version XLCQ20240604)

A Java implementation of common technical indicators, including MACD, RSI, BOLL, KDJ, CCI, MA, EMA, BIAS, TD, WR, and DMI. All calculations are fully encapsulated, concise, and accurate, and can be applied conveniently to stock market technical analysis, automated/programmatic stock trading, and quantitative trading of cryptocurrencies such as BTC.
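
As a rough illustration of the kind of indicator this library computes, here is a minimal, self-contained Java sketch of the BIAS indicator (the percentage deviation of the closing price from its n-day simple moving average). It does not use xianlaocai-quant's actual API; the class and method names below are hypothetical.

public final class BiasIndicator {
    // BIAS(n) = (close - SMA(n)) / SMA(n) * 100, computed for the latest bar.
    static double bias(double[] closes, int n) {
        double sum = 0.0;
        for (int i = closes.length - n; i < closes.length; i++) {
            sum += closes[i]; // sum of the last n closing prices
        }
        double ma = sum / n; // n-day simple moving average
        return (closes[closes.length - 1] - ma) / ma * 100.0;
    }

    public static void main(String[] args) {
        double[] closes = {10.0, 10.2, 10.1, 10.4, 10.6, 10.5};
        System.out.printf("BIAS(5) = %.4f%n", bias(closes, 5)); // ~1.35
    }
}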

Group: com.xianlaocai.quant Artifact: xianlaocai-quant
No JAR file has been uploaded for this version, so it cannot be downloaded. Please choose another version.
0 downloads
Artifact xianlaocai-quant
Group com.xianlaocai.quant
Version XLCQ20240604
Last update 05. June 2024
Organization not specified
URL https://github.com/RootFive/xianlaocai-quant
License The Apache Software License, Version 2.0
Dependencies amount 0
Dependencies No dependencies
There may be transitive dependencies.

constants-float64-exponent-bias from group org.mvnpm.at.stdlib (version 0.0.8)

The bias of a double-precision floating-point number's exponent.
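For context: an IEEE 754 binary64 value stores its exponent in 11 bits, offset by this bias (1023). This package is an mvnpm repackaging of the JavaScript package @stdlib/constants-float64-exponent-bias, so the following Java sketch is purely illustrative of what the constant means, not of the package's API.

public class ExponentBiasDemo {
    public static void main(String[] args) {
        final int FLOAT64_EXPONENT_BIAS = 1023;          // IEEE 754 binary64 exponent bias
        long bits = Double.doubleToLongBits(6.5);        // 6.5 = 1.625 * 2^2
        int biasedExponent = (int) ((bits >>> 52) & 0x7FF); // extract the 11 exponent bits
        int exponent = biasedExponent - FLOAT64_EXPONENT_BIAS;
        System.out.println(exponent);                    // prints 2
    }
}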

Group: org.mvnpm.at.stdlib Artifact: constants-float64-exponent-bias

0 downloads
Artifact constants-float64-exponent-bias
Group org.mvnpm.at.stdlib
Version 0.0.8
Last update 27. November 2023
Organization The Stdlib Authors
URL https://stdlib.io
License Apache-2.0
Dependencies amount 1
Dependencies utils-library-manifest,
There may be transitive dependencies.

generalized_schlick_bias_gain from group com.laamella (version 1.1)

A Convenient Generalization of Schlick’s Bias and Gain Functions
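For reference, the classic functions being generalized (Schlick, 1994) fit in a few lines of Java. This sketch shows only Schlick's original bias and gain, not the generalized form from the cited paper (which additionally takes a pivot parameter), and does not reflect this library's API.

public final class SchlickCurves {
    // Schlick's bias: identity at a = 0.5; a in (0, 1) bends the curve up or down.
    static double bias(double x, double a) {
        return x / ((1.0 / a - 2.0) * (1.0 - x) + 1.0);
    }

    // Schlick's gain: an S-curve built from two bias halves joined at x = 0.5.
    static double gain(double x, double a) {
        return x < 0.5
                ? bias(2.0 * x, a) / 2.0
                : 1.0 - bias(2.0 - 2.0 * x, a) / 2.0;
    }

    public static void main(String[] args) {
        for (double x = 0.0; x <= 1.0; x += 0.25) {
            System.out.printf("x=%.2f bias=%.3f gain=%.3f%n",
                    x, bias(x, 0.25), gain(x, 0.25));
        }
    }
}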

Group: com.laamella Artifact: generalized_schlick_bias_gain

0 downloads
Artifact generalized_schlick_bias_gain
Group com.laamella
Version 1.1
Last update 27. October 2020
Organization not specified
URL https://github.com/laamella-gad/generalized_schlick_bias_gain
License The Apache Software License, Version 2.0
Dependencies amount 0
Dependencies No dependencies
There may be transitive dependencies.

multiBoostAB from group nz.ac.waikato.cms.weka (version 1.0.2)

Class for boosting a classifier using the MultiBoosting method. MultiBoosting is an extension to the highly successful AdaBoost technique for forming decision committees. MultiBoosting can be viewed as combining AdaBoost with wagging. It is able to harness both AdaBoost's high bias and variance reduction with wagging's superior variance reduction. Using C4.5 as the base learning algorithm, MultiBoosting is demonstrated to produce decision committees with lower error than either AdaBoost or wagging significantly more often than the reverse over a large representative cross-section of UCI data sets. It offers the further advantage over AdaBoost of suiting parallel execution. For more information, see Geoffrey I. Webb (2000). MultiBoosting: A Technique for Combining Boosting and Wagging. Machine Learning, 40(2).
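
A minimal usage sketch with the standard Weka API (the ARFF path is hypothetical; MultiBoostAB lives in weka.classifiers.meta, and J48 is Weka's C4.5 implementation):

import weka.classifiers.meta.MultiBoostAB;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MultiBoostDemo {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("iris.arff");     // hypothetical dataset path
        data.setClassIndex(data.numAttributes() - 1);      // last attribute is the class

        MultiBoostAB booster = new MultiBoostAB();
        booster.setClassifier(new J48());                  // C4.5 base learner, as in the paper
        booster.setNumIterations(10);                      // size of the decision committee
        booster.buildClassifier(data);
        System.out.println(booster);
    }
}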

Group: nz.ac.waikato.cms.weka Artifact: multiBoostAB

0 downloads
Artifact multiBoostAB
Group nz.ac.waikato.cms.weka
Version 1.0.2
Last update 26. April 2012
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiBoostAB
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev,
There may be transitive dependencies.

multiLayerPerceptrons from group nz.ac.waikato.cms.weka (version 1.0.10)

This package currently contains classes for training multilayer perceptrons with one hidden layer, where the number of hidden units is user-specified. MLPClassifier can be used for classification problems and MLPRegressor is the corresponding class for numeric prediction tasks. The former has as many output units as there are classes; the latter has only one output unit. Both minimise a penalised squared error with a quadratic penalty on the (non-bias) weights, i.e., they implement "weight decay", where this penalised error is averaged over all training instances. The size of the penalty can be set by the user via the "ridge" parameter to control overfitting. The sum of squared weights is multiplied by this parameter before being added to the squared error. Both classes use BFGS optimisation by default to find parameters that correspond to a local minimum of the error function, but conjugate gradient descent is optionally available and can be faster for problems with many parameters. Logistic functions are used as the activation functions for all units apart from the output unit in MLPRegressor, which employs the identity function. Input attributes are standardised to zero mean and unit variance. MLPRegressor also rescales the target attribute (i.e., the "class") using standardisation. All network parameters are initialised with small normally distributed random values.
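
A small Java sketch of the penalised ("weight decay") error described above, under the stated definition: the squared error averaged over all training instances, plus the ridge parameter times the sum of squared non-bias weights. Weka's internal scaling (e.g. factors of 1/2) may differ; this only illustrates the structure of the objective.

public final class WeightDecayLoss {
    // penalisedError = mean squared error over all instances
    //                + ridge * (sum of squared non-bias weights)
    static double penalisedError(double[] predictions, double[] targets,
                                 double[] nonBiasWeights, double ridge) {
        double squaredError = 0.0;
        for (int i = 0; i < targets.length; i++) {
            double d = predictions[i] - targets[i];
            squaredError += d * d;
        }
        squaredError /= targets.length;          // averaged over all training instances

        double penalty = 0.0;
        for (double w : nonBiasWeights) {
            penalty += w * w;                    // quadratic penalty on non-bias weights
        }
        return squaredError + ridge * penalty;   // penalty scaled by the "ridge" parameter
    }
}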

Group: nz.ac.waikato.cms.weka Artifact: multiLayerPerceptrons

10 downloads
Artifact multiLayerPerceptrons
Group nz.ac.waikato.cms.weka
Version 1.0.10
Last update 31. October 2016
Organization University of Waikato, Hamilton, NZ
URL http://weka.sourceforge.net/doc.packages/multiLayerPerceptrons
License GNU General Public License 3
Dependencies amount 1
Dependencies weka-dev,
There may be transitive dependencies.



Page 1 of 1 (5 items total)