
com.pulumi.azurenative.machinelearningservices.kotlin.enums.ClassificationModels.kt Maven / Gradle / Ivy

@file:Suppress("NAME_SHADOWING", "DEPRECATION")

package com.pulumi.azurenative.machinelearningservices.kotlin.enums

import com.pulumi.kotlin.ConvertibleToJava
import kotlin.Suppress

/**
 * Enum for all classification models supported by AutoML.
 */
public enum class ClassificationModels(
    public val javaValue: com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels,
) : ConvertibleToJava<com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels> {
    /**
     * Logistic regression is a fundamental classification technique.
     * It belongs to the group of linear classifiers and is somewhat similar to polynomial and linear regression.
 * Logistic regression is fast and relatively uncomplicated, and its results are easy to interpret.
     * Although it's essentially a method for binary classification, it can also be applied to multiclass problems.
     */
    LogisticRegression(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.LogisticRegression),

    /**
     * SGD: Stochastic gradient descent is an optimization algorithm often used in machine learning applications
     * to find the model parameters that correspond to the best fit between predicted and actual outputs.
     */
    SGD(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.SGD),

    /**
     * The multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification).
     * The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.
     */
    MultinomialNaiveBayes(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.MultinomialNaiveBayes),

    /**
     * Naive Bayes classifier for multivariate Bernoulli models.
     */
    BernoulliNaiveBayes(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.BernoulliNaiveBayes),

    /**
     * A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems.
 * After being given sets of labeled training data for each category, an SVM model can categorize new text.
     */
    SVM(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.SVM),

    /**
     * A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems.
 * After being given sets of labeled training data for each category, an SVM model can categorize new text.
 * Linear SVM performs best when the input data is linearly separable, i.e., the classes can be separated by a straight line on a plotted graph.
     */
    LinearSVM(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.LinearSVM),

    /**
 * The k-nearest neighbors (KNN) algorithm uses feature similarity to predict the values of new data points:
 * a new data point is assigned a value based on how closely it matches the points in the training set.
     */
    KNN(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.KNN),

    /**
     * Decision Trees are a non-parametric supervised learning method used for both classification and regression tasks.
     * The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
     */
    DecisionTree(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.DecisionTree),

    /**
     * Random forest is a supervised learning algorithm.
 * The "forest" it builds is an ensemble of decision trees, usually trained with the "bagging" method.
     * The general idea of the bagging method is that a combination of learning models increases the overall result.
     */
    RandomForest(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.RandomForest),

    /**
 * Extremely randomized trees (extra-trees) is an ensemble machine learning algorithm that combines the predictions from many decision trees. It is closely related to the widely used random forest algorithm.
     */
    ExtremeRandomTrees(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.ExtremeRandomTrees),

    /**
 * LightGBM is a gradient boosting framework that uses tree-based learning algorithms.
     */
    LightGBM(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.LightGBM),

    /**
 * The technique of combining weak learners into a strong learner is called boosting. The gradient boosting algorithm works on this principle.
     */
    GradientBoosting(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.GradientBoosting),

    /**
 * XGBoost: Extreme Gradient Boosting algorithm. This algorithm is used for structured data where the target column takes a set of distinct class values.
     */
    XGBoostClassifier(com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels.XGBoostClassifier),
    ;

    override fun toJava(): com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels =
        javaValue

    public companion object {
        public fun toKotlin(javaType: com.pulumi.azurenative.machinelearningservices.enums.ClassificationModels): ClassificationModels =
            values().first { it.javaValue == javaType }
    }
}
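
The enum above follows Pulumi's standard Kotlin-to-Java conversion pattern: each Kotlin constant wraps its Java counterpart, `toJava()` unwraps it, and `toKotlin()` performs the reverse lookup. A minimal standalone sketch of that round trip, using hypothetical simplified enums in place of the generated Pulumi types (all names here are illustrative, not part of the SDK):

```kotlin
// Hypothetical stand-in for the generated Java-side enum.
enum class JavaClassificationModels { LogisticRegression, SVM }

// Simplified version of the ConvertibleToJava interface used by the SDK.
interface ConvertibleToJava<T> {
    fun toJava(): T
}

// Kotlin-side wrapper enum: each constant stores its Java counterpart.
enum class KClassificationModels(
    val javaValue: JavaClassificationModels,
) : ConvertibleToJava<JavaClassificationModels> {
    LogisticRegression(JavaClassificationModels.LogisticRegression),
    SVM(JavaClassificationModels.SVM),
    ;

    // Unwrap: hand back the stored Java value.
    override fun toJava(): JavaClassificationModels = javaValue

    companion object {
        // Reverse lookup: find the Kotlin constant wrapping a given Java value.
        fun toKotlin(javaType: JavaClassificationModels): KClassificationModels =
            values().first { it.javaValue == javaType }
    }
}

fun main() {
    val kotlinSide = KClassificationModels.toKotlin(JavaClassificationModels.SVM)
    check(kotlinSide == KClassificationModels.SVM)
    check(kotlinSide.toJava() == JavaClassificationModels.SVM)
    println("round trip ok")
}
```

The linear scan in `toKotlin` is fine for an enum of this size; a map-based lookup would only matter for much larger enums.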




© 2015 - 2025 Weber Informatics LLC