






SparkStandaloneAutoscalingConfig (Cloud Dataproc API v1-rev20240605-2.0.0)












com.google.api.services.dataproc.model

Class SparkStandaloneAutoscalingConfig

    • Constructor Detail

      • SparkStandaloneAutoscalingConfig

        public SparkStandaloneAutoscalingConfig()
    • Method Detail

      • getGracefulDecommissionTimeout

        public String getGracefulDecommissionTimeout()
        Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for Spark workers to complete Spark decommissioning tasks before forcefully removing them. Only applicable to downscaling operations. Bounds: 0s, 1d.
        Returns:
        value or null for none
      • setGracefulDecommissionTimeout

        public SparkStandaloneAutoscalingConfig setGracefulDecommissionTimeout(String gracefulDecommissionTimeout)
        Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for Spark workers to complete Spark decommissioning tasks before forcefully removing them. Only applicable to downscaling operations. Bounds: 0s, 1d.
        Parameters:
        gracefulDecommissionTimeout - gracefulDecommissionTimeout or null for none
      • getRemoveOnlyIdleWorkers

        public Boolean getRemoveOnlyIdleWorkers()
        Optional. Remove only idle workers when scaling down the cluster.
        Returns:
        value or null for none
      • setRemoveOnlyIdleWorkers

        public SparkStandaloneAutoscalingConfig setRemoveOnlyIdleWorkers(Boolean removeOnlyIdleWorkers)
        Optional. Remove only idle workers when scaling down the cluster.
        Parameters:
        removeOnlyIdleWorkers - removeOnlyIdleWorkers or null for none
      • getScaleDownFactor

        public Double getScaleDownFactor()
        Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
        Returns:
        value or null for none
      • setScaleDownFactor

        public SparkStandaloneAutoscalingConfig setScaleDownFactor(Double scaleDownFactor)
        Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
        Parameters:
        scaleDownFactor - scaleDownFactor or null for none
      • getScaleDownMinWorkerFraction

        public Double getScaleDownMinWorkerFraction()
        Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
        Returns:
        value or null for none
      • setScaleDownMinWorkerFraction

        public SparkStandaloneAutoscalingConfig setScaleDownMinWorkerFraction(Double scaleDownMinWorkerFraction)
        Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
        Parameters:
        scaleDownMinWorkerFraction - scaleDownMinWorkerFraction or null for none
      • getScaleUpFactor

        public Double getScaleUpFactor()
        Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark Job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
        Returns:
        value or null for none
      • setScaleUpFactor

        public SparkStandaloneAutoscalingConfig setScaleUpFactor(Double scaleUpFactor)
        Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark Job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.
        Parameters:
        scaleUpFactor - scaleUpFactor or null for none
      • getScaleUpMinWorkerFraction

        public Double getScaleUpMinWorkerFraction()
        Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
        Returns:
        value or null for none
      • setScaleUpMinWorkerFraction

        public SparkStandaloneAutoscalingConfig setScaleUpMinWorkerFraction(Double scaleUpMinWorkerFraction)
        Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
        Parameters:
        scaleUpMinWorkerFraction - scaleUpMinWorkerFraction or null for none
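The fraction-based thresholds above can be illustrated with plain Java arithmetic. The sketch below does not use the Dataproc client library; `meetsThreshold` is a hypothetical helper that mirrors the documented semantics of scaleUpMinWorkerFraction and scaleDownMinWorkerFraction (the autoscaler acts only when the recommended worker change is at least the fraction times the cluster size), and is an assumption for illustration rather than library code.

```java
public class AutoscalingThresholdDemo {
    // Hypothetical helper mirroring the documented threshold semantics:
    // a recommended change takes effect only when it is at least
    // minWorkerFraction * clusterSize workers.
    static boolean meetsThreshold(int recommendedChange, int clusterSize,
                                  double minWorkerFraction) {
        return recommendedChange >= minWorkerFraction * clusterSize;
    }

    public static void main(String[] args) {
        // Example from the docs: 20-worker cluster, threshold 0.1
        // => at least a 2-worker change is required before scaling.
        System.out.println(meetsThreshold(1, 20, 0.1)); // false: 1 < 2
        System.out.println(meetsThreshold(2, 20, 0.1)); // true:  2 >= 2
        // Threshold 0.0 => any recommended change triggers scaling.
        System.out.println(meetsThreshold(1, 20, 0.0)); // true
    }
}
```

In application code, a config instance would typically be built by chaining the setters documented above (each returns the SparkStandaloneAutoscalingConfig itself), e.g. constructing the object and calling setGracefulDecommissionTimeout, setScaleUpFactor, and setScaleDownFactor in sequence before attaching it to an autoscaling policy.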

Copyright © 2011–2024 Google. All rights reserved.




