SparkStandaloneAutoscalingConfig (Cloud Dataproc API v1-rev20240605-2.0.0)
com.google.api.services.dataproc.model
Class SparkStandaloneAutoscalingConfig
- java.lang.Object
  - java.util.AbstractMap<String,Object>
    - com.google.api.client.util.GenericData
      - com.google.api.client.json.GenericJson
        - com.google.api.services.dataproc.model.SparkStandaloneAutoscalingConfig
public final class SparkStandaloneAutoscalingConfig
extends com.google.api.client.json.GenericJson
Basic autoscaling configurations for Spark Standalone.
This is the Java data model class that specifies how to parse/serialize into the JSON that is
transmitted over HTTP when working with the Cloud Dataproc API. For a detailed explanation see:
https://developers.google.com/api-client-library/java/google-http-java-client/json
Author:
- Google, Inc.
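A minimal usage sketch (not part of the generated reference): the chained setters documented below populate the model with illustrative values, and wiring the config into a policy assumes the companion model classes BasicAutoscalingAlgorithm and AutoscalingPolicy from the same package.

import com.google.api.services.dataproc.model.AutoscalingPolicy;            // assumed companion model class
import com.google.api.services.dataproc.model.BasicAutoscalingAlgorithm;    // assumed companion model class
import com.google.api.services.dataproc.model.SparkStandaloneAutoscalingConfig;

public class SparkStandaloneAutoscalingConfigExample {
  public static void main(String[] args) throws Exception {
    // Every setter below is documented on this page and returns the config, so calls can be chained.
    SparkStandaloneAutoscalingConfig sparkStandaloneConfig = new SparkStandaloneAutoscalingConfig()
        .setGracefulDecommissionTimeout("600s")   // illustrative duration string; see the format note below
        .setRemoveOnlyIdleWorkers(true)
        .setScaleUpFactor(0.5)
        .setScaleDownFactor(0.5)
        .setScaleUpMinWorkerFraction(0.0)
        .setScaleDownMinWorkerFraction(0.0);

    // Assumption: BasicAutoscalingAlgorithm and AutoscalingPolicy expose these setters in the same
    // builder style; check their generated Javadoc before relying on the exact method names.
    AutoscalingPolicy policy = new AutoscalingPolicy()
        .setId("spark-standalone-policy")
        .setBasicAlgorithm(new BasicAutoscalingAlgorithm()
            .setSparkStandaloneConfig(sparkStandaloneConfig));

    System.out.println(policy.toPrettyString()); // toPrettyString() is inherited from GenericJson
  }
}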
Nested Class Summary
- Nested classes/interfaces inherited from class com.google.api.client.util.GenericData
  com.google.api.client.util.GenericData.Flags
- Nested classes/interfaces inherited from class java.util.AbstractMap
  AbstractMap.SimpleEntry<K,V>, AbstractMap.SimpleImmutableEntry<K,V>
Constructor Summary
Constructors
- SparkStandaloneAutoscalingConfig()
Method Summary
Methods
- SparkStandaloneAutoscalingConfig clone()
- String getGracefulDecommissionTimeout()
- Boolean getRemoveOnlyIdleWorkers()
- Double getScaleDownFactor()
- Double getScaleDownMinWorkerFraction()
- Double getScaleUpFactor()
- Double getScaleUpMinWorkerFraction()
- SparkStandaloneAutoscalingConfig set(String fieldName, Object value)
- SparkStandaloneAutoscalingConfig setGracefulDecommissionTimeout(String gracefulDecommissionTimeout)
- SparkStandaloneAutoscalingConfig setRemoveOnlyIdleWorkers(Boolean removeOnlyIdleWorkers)
- SparkStandaloneAutoscalingConfig setScaleDownFactor(Double scaleDownFactor)
- SparkStandaloneAutoscalingConfig setScaleDownMinWorkerFraction(Double scaleDownMinWorkerFraction)
- SparkStandaloneAutoscalingConfig setScaleUpFactor(Double scaleUpFactor)
- SparkStandaloneAutoscalingConfig setScaleUpMinWorkerFraction(Double scaleUpMinWorkerFraction)
- Methods inherited from class com.google.api.client.json.GenericJson
  getFactory, setFactory, toPrettyString, toString
- Methods inherited from class com.google.api.client.util.GenericData
  entrySet, equals, get, getClassInfo, getUnknownKeys, hashCode, put, putAll, remove, setUnknownKeys
- Methods inherited from class java.util.AbstractMap
  clear, containsKey, containsValue, isEmpty, keySet, size, values
- Methods inherited from class java.lang.Object
  finalize, getClass, notify, notifyAll, wait, wait, wait
- Methods inherited from interface java.util.Map
  compute, computeIfAbsent, computeIfPresent, forEach, getOrDefault, merge, putIfAbsent, remove, replace, replace, replaceAll
-
Method Detail
-
getGracefulDecommissionTimeout
public String getGracefulDecommissionTimeout()
Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait
for Spark workers to complete their decommissioning tasks before forcefully removing workers.
Only applicable to downscaling operations. Bounds: 0s, 1d.
Returns:
value or null for none
-
setGracefulDecommissionTimeout
public SparkStandaloneAutoscalingConfig setGracefulDecommissionTimeout(String gracefulDecommissionTimeout)
Required. Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait
for Spark workers to complete their decommissioning tasks before forcefully removing workers.
Only applicable to downscaling operations. Bounds: 0s, 1d.
Parameters:
gracefulDecommissionTimeout - gracefulDecommissionTimeout or null for none
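The timeout travels as a string rather than a numeric type. A short sketch, assuming the usual Google API duration wire format of decimal seconds with an "s" suffix:

import com.google.api.services.dataproc.model.SparkStandaloneAutoscalingConfig;

public class GracefulDecommissionTimeoutSketch {
  public static void main(String[] args) {
    SparkStandaloneAutoscalingConfig config = new SparkStandaloneAutoscalingConfig();

    // Assumption: durations are encoded as decimal seconds with an "s" suffix, so ten minutes is
    // "600s", which sits inside the documented bounds of 0s to 1d (86400s).
    config.setGracefulDecommissionTimeout("600s");

    // The getter returns the raw string that was set, or null when the field is unset.
    System.out.println(config.getGracefulDecommissionTimeout()); // prints 600s
  }
}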
-
getRemoveOnlyIdleWorkers
public Boolean getRemoveOnlyIdleWorkers()
Optional. Remove only idle workers when scaling down the cluster.
Returns:
value or null for none
-
setRemoveOnlyIdleWorkers
public SparkStandaloneAutoscalingConfig setRemoveOnlyIdleWorkers(Boolean removeOnlyIdleWorkers)
Optional. Remove only idle workers when scaling down the cluster.
Parameters:
removeOnlyIdleWorkers - removeOnlyIdleWorkers or null for none
-
getScaleDownFactor
public Double getScaleDownFactor()
Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down
factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job
(more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of
scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
Returns:
value or null for none
-
setScaleDownFactor
public SparkStandaloneAutoscalingConfig setScaleDownFactor(Double scaleDownFactor)
Required. Fraction of required executors to remove from Spark Standalone clusters. A scale-down
factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job
(more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of
scaling down (less aggressive scaling). Bounds: 0.0, 1.0.
Parameters:
scaleDownFactor - scaleDownFactor or null for none
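To make the factor semantics concrete, here is an illustrative sketch of how a scale-down factor moderates a recommended removal; the helper and its rounding are assumptions for illustration, not the Dataproc autoscaler's actual computation:

public class ScaleDownFactorSketch {
  // Hypothetical helper: scale the number of removable executors by the configured factor.
  static int executorsToRemove(int removableExecutors, double scaleDownFactor) {
    // A factor of 1.0 removes everything the recommendation identified (most aggressive);
    // factors near 0 remove proportionally fewer executors (least aggressive).
    return (int) Math.floor(removableExecutors * scaleDownFactor);
  }

  public static void main(String[] args) {
    System.out.println(executorsToRemove(10, 1.0)); // 10
    System.out.println(executorsToRemove(10, 0.5)); // 5
    System.out.println(executorsToRemove(10, 0.1)); // 1
  }
}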
-
getScaleDownMinWorkerFraction
public Double getScaleDownMinWorkerFraction()
Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs.
For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at
least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will
scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Returns:
value or null for none
-
setScaleDownMinWorkerFraction
public SparkStandaloneAutoscalingConfig setScaleDownMinWorkerFraction(Double scaleDownMinWorkerFraction)
Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs.
For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at
least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will
scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Parameters:
scaleDownMinWorkerFraction - scaleDownMinWorkerFraction or null for none
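A worked version of the 20-worker example above, as an illustrative sketch only (the threshold check is a hypothetical helper, not the service's implementation):

public class ScaleDownThresholdSketch {
  // Hypothetical helper: a scale-down only proceeds when the recommended removal reaches the
  // minimum fraction of the current cluster size.
  static boolean shouldScaleDown(int clusterSize, int recommendedRemoval, double scaleDownMinWorkerFraction) {
    return recommendedRemoval >= clusterSize * scaleDownMinWorkerFraction;
  }

  public static void main(String[] args) {
    System.out.println(shouldScaleDown(20, 1, 0.1)); // false: below the 2-worker threshold (20 * 0.1)
    System.out.println(shouldScaleDown(20, 2, 0.1)); // true: meets the 2-worker threshold
    System.out.println(shouldScaleDown(20, 1, 0.0)); // true: a threshold of 0.0 scales on any change
  }
}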
-
getScaleUpFactor
public Double getScaleUpFactor()
Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of
1.0 will result in scaling up so that there are no more required workers for the Spark Job (more
aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling
up (less aggressive scaling). Bounds: 0.0, 1.0.
Returns:
value or null for none
-
setScaleUpFactor
public SparkStandaloneAutoscalingConfig setScaleUpFactor(Double scaleUpFactor)
Required. Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of
1.0 will result in scaling up so that there are no more required workers for the Spark Job (more
aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling
up (less aggressive scaling). Bounds: 0.0, 1.0.
Parameters:
scaleUpFactor - scaleUpFactor or null for none
-
getScaleUpMinWorkerFraction
public Double getScaleUpMinWorkerFraction()
Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs.
For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at
least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will
scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Returns:
value or null for none
-
setScaleUpMinWorkerFraction
public SparkStandaloneAutoscalingConfig setScaleUpMinWorkerFraction(Double scaleUpMinWorkerFraction)
Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs.
For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at
least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will
scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Parameters:
scaleUpMinWorkerFraction - scaleUpMinWorkerFraction or null for none
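For the scale-up side, an illustrative sketch combining scaleUpFactor and scaleUpMinWorkerFraction; the helper is a hypothetical approximation of the documented semantics, not the Dataproc autoscaler's actual algorithm:

public class ScaleUpSketch {
  // Hypothetical helper: moderate the number of missing required workers by the scale-up factor,
  // then only act if the resulting change clears the minimum-fraction threshold.
  static int workersToAdd(int clusterSize, int requiredButMissing,
                          double scaleUpFactor, double scaleUpMinWorkerFraction) {
    int candidate = (int) Math.ceil(requiredButMissing * scaleUpFactor);
    return candidate >= clusterSize * scaleUpMinWorkerFraction ? candidate : 0;
  }

  public static void main(String[] args) {
    System.out.println(workersToAdd(20, 10, 1.0, 0.0)); // 10: most aggressive, no threshold
    System.out.println(workersToAdd(20, 10, 0.3, 0.1)); // 3: moderated, clears the 2-worker gate
    System.out.println(workersToAdd(20, 2, 0.5, 0.1));  // 0: a 1-worker change is below the gate
  }
}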
-
set
public SparkStandaloneAutoscalingConfig set(String fieldName, Object value)
Overrides:
set in class com.google.api.client.json.GenericJson
-
clone
public SparkStandaloneAutoscalingConfig clone()
Overrides:
clone in class com.google.api.client.json.GenericJson
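Because the class extends GenericJson, set(String, Object) also accepts keys beyond the declared fields and clone() produces an independent copy. A short sketch with illustrative values:

import com.google.api.services.dataproc.model.SparkStandaloneAutoscalingConfig;

public class GenericDataSketch {
  public static void main(String[] args) {
    SparkStandaloneAutoscalingConfig config = new SparkStandaloneAutoscalingConfig()
        .setScaleUpFactor(0.5)
        .set("experimentalField", "value"); // unknown keys are kept in the inherited map view

    // clone() returns an independent copy; changing the copy does not affect the original.
    SparkStandaloneAutoscalingConfig copy = config.clone().setScaleUpFactor(1.0);

    System.out.println(config.getScaleUpFactor());        // 0.5
    System.out.println(copy.getScaleUpFactor());          // 1.0
    System.out.println(config.get("experimentalField"));  // value, read back through GenericData.get
  }
}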