SparkStatistics (BigQuery API v2-rev20240616-2.0.0)
com.google.api.services.bigquery.model
Class SparkStatistics
java.lang.Object
    java.util.AbstractMap<String,Object>
        com.google.api.client.util.GenericData
            com.google.api.client.json.GenericJson
                com.google.api.services.bigquery.model.SparkStatistics
public final class SparkStatistics
extends com.google.api.client.json.GenericJson
Statistics for a BigSpark query. Populated as part of JobStatistics2.
This is the Java data model class that specifies how to parse/serialize into the JSON that is
transmitted over HTTP when working with the BigQuery API. For a detailed explanation see:
https://developers.google.com/api-client-library/java/google-http-java-client/json
Author:
    Google, Inc.
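As a usage sketch only (not part of the generated documentation): the statistics are reached through a completed query job, assuming JobStatistics2, which this class is populated as part of, exposes a getSparkStatistics() accessor; the client, project ID, and job ID below are placeholders.

    import java.io.IOException;
    import com.google.api.services.bigquery.Bigquery;
    import com.google.api.services.bigquery.model.Job;
    import com.google.api.services.bigquery.model.SparkStatistics;

    public class SparkStatisticsLookup {
      // Sketch only: `bigquery` is an already-authorized client; the IDs are placeholders.
      static void printSparkStatistics(Bigquery bigquery, String projectId, String jobId)
          throws IOException {
        Job job = bigquery.jobs().get(projectId, jobId).execute();
        // Assumption: JobStatistics2 carries a SparkStatistics when the job ran a
        // Spark procedure; the field is null for ordinary queries.
        SparkStatistics spark = job.getStatistics().getQuery().getSparkStatistics();
        if (spark != null) {
          System.out.println("Spark job ID: " + spark.getSparkJobId());
          System.out.println("Staging bucket: " + spark.getGcsStagingBucket());
        }
      }
    }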
Nested Class Summary

Nested classes/interfaces inherited from class com.google.api.client.util.GenericData
    com.google.api.client.util.GenericData.Flags

Nested classes/interfaces inherited from class java.util.AbstractMap
    AbstractMap.SimpleEntry<K,V>, AbstractMap.SimpleImmutableEntry<K,V>
-
Constructor Summary
Constructors
Constructor and Description
SparkStatistics()
-
Method Summary

Modifier and Type       Method and Description
SparkStatistics         clone()
Map<String,String>      getEndpoints()
                        Output only.
String                  getGcsStagingBucket()
                        Output only.
String                  getKmsKeyName()
                        Output only.
SparkLoggingInfo        getLoggingInfo()
                        Output only.
String                  getSparkJobId()
                        Output only.
String                  getSparkJobLocation()
                        Output only.
SparkStatistics         set(String fieldName, Object value)
SparkStatistics         setEndpoints(Map<String,String> endpoints)
                        Output only.
SparkStatistics         setGcsStagingBucket(String gcsStagingBucket)
                        Output only.
SparkStatistics         setKmsKeyName(String kmsKeyName)
                        Output only.
SparkStatistics         setLoggingInfo(SparkLoggingInfo loggingInfo)
                        Output only.
SparkStatistics         setSparkJobId(String sparkJobId)
                        Output only.
SparkStatistics         setSparkJobLocation(String sparkJobLocation)
                        Output only.
-
Methods inherited from class com.google.api.client.json.GenericJson
    getFactory, setFactory, toPrettyString, toString

Methods inherited from class com.google.api.client.util.GenericData
    entrySet, equals, get, getClassInfo, getUnknownKeys, hashCode, put, putAll, remove, setUnknownKeys

Methods inherited from class java.util.AbstractMap
    clear, containsKey, containsValue, isEmpty, keySet, size, values

Methods inherited from class java.lang.Object
    finalize, getClass, notify, notifyAll, wait, wait, wait

Methods inherited from interface java.util.Map
    compute, computeIfAbsent, computeIfPresent, forEach, getOrDefault, merge, putIfAbsent, remove, replace, replace, replaceAll
-
-
Method Detail
-
getEndpoints
public Map<String,String> getEndpoints()
Output only. Endpoints returned from Dataproc. Key list: - history_server_endpoint: A link to
the Spark job UI.
Returns:
    value or null for none
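For illustration, a small sketch that reads the documented history_server_endpoint key out of this map; the helper name and the assumption that stats is already populated are illustrative, not part of the API.

    import java.util.Map;
    import com.google.api.services.bigquery.model.SparkStatistics;

    class SparkEndpoints {
      // Sketch: returns the Spark history server UI link, or null if it is not reported.
      static String historyServerUi(SparkStatistics stats) {
        Map<String, String> endpoints = stats.getEndpoints();
        return endpoints == null ? null : endpoints.get("history_server_endpoint");
      }
    }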
-
setEndpoints
public SparkStatistics setEndpoints(Map<String,String> endpoints)
Output only. Endpoints returned from Dataproc. Key list: - history_server_endpoint: A link to
the Spark job UI.
Parameters:
    endpoints - endpoints or null for none
-
getGcsStagingBucket
public String getGcsStagingBucket()
Output only. The Google Cloud Storage bucket that is used as the default file system by the
Spark application. This field is only filled when the Spark procedure uses the invoker security
mode. The `gcsStagingBucket` bucket is inferred from the
`@@spark_proc_properties.staging_bucket` system variable (if it is provided). Otherwise,
BigQuery creates a default staging bucket for the job and returns the bucket name in this
field. Example: * `gs://[bucket_name]`
Returns:
    value or null for none
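As a hedged sketch, relying only on the gs://[bucket_name] form shown above: the bare bucket name can be recovered by stripping the scheme prefix; the helper is illustrative, not part of the library.

    import com.google.api.services.bigquery.model.SparkStatistics;

    class StagingBucket {
      // Sketch: derives the bare bucket name from the documented gs://[bucket_name] form.
      static String bucketName(SparkStatistics stats) {
        String uri = stats.getGcsStagingBucket();
        if (uri == null) {
          return null; // only filled for invoker-security-mode Spark procedures
        }
        return uri.startsWith("gs://") ? uri.substring("gs://".length()) : uri;
      }
    }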
-
setGcsStagingBucket
public SparkStatistics setGcsStagingBucket(String gcsStagingBucket)
Output only. The Google Cloud Storage bucket that is used as the default file system by the
Spark application. This field is only filled when the Spark procedure uses the invoker security
mode. The `gcsStagingBucket` bucket is inferred from the
`@@spark_proc_properties.staging_bucket` system variable (if it is provided). Otherwise,
BigQuery creates a default staging bucket for the job and returns the bucket name in this
field. Example: * `gs://[bucket_name]`
Parameters:
    gcsStagingBucket - gcsStagingBucket or null for none
-
getKmsKeyName
public String getKmsKeyName()
Output only. The Cloud KMS encryption key that is used to protect the resources created by the
Spark job. If the Spark procedure uses the invoker security mode, the Cloud KMS encryption key
is either inferred from the provided system variable, `@@spark_proc_properties.kms_key_name`,
or the default key of the BigQuery job's project (if the CMEK organization policy is enforced).
Otherwise, the Cloud KMS key is either inferred from the Spark connection associated with the
procedure (if it is provided), or from the default key of the Spark connection's project if the
CMEK organization policy is enforced. Example: *
`projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]`
Returns:
    value or null for none
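A minimal sketch, assuming only the resource-name layout quoted above, that pulls the KMS project ID out of the key name; the helper and its name are illustrative.

    import com.google.api.services.bigquery.model.SparkStatistics;

    class KmsKeyName {
      // Sketch: extracts [kms_project_id] from
      // projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]
      static String kmsProjectId(SparkStatistics stats) {
        String keyName = stats.getKmsKeyName();
        if (keyName == null) {
          return null;
        }
        String[] parts = keyName.split("/");
        return (parts.length > 1 && "projects".equals(parts[0])) ? parts[1] : null;
      }
    }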
-
setKmsKeyName
public SparkStatistics setKmsKeyName(String kmsKeyName)
Output only. The Cloud KMS encryption key that is used to protect the resources created by the
Spark job. If the Spark procedure uses the invoker security mode, the Cloud KMS encryption key
is either inferred from the provided system variable, `@@spark_proc_properties.kms_key_name`,
or the default key of the BigQuery job's project (if the CMEK organization policy is enforced).
Otherwise, the Cloud KMS key is either inferred from the Spark connection associated with the
procedure (if it is provided), or from the default key of the Spark connection's project if the
CMEK organization policy is enforced. Example: *
`projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]`
Parameters:
    kmsKeyName - kmsKeyName or null for none
-
getLoggingInfo
public SparkLoggingInfo getLoggingInfo()
Output only. Logging info is used to generate a link to Cloud Logging.
Returns:
    value or null for none
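For orientation only: a sketch that assumes SparkLoggingInfo exposes a getResourceType() accessor (with a getProjectId() counterpart) for the REST resource's fields, which is an assumption about that class rather than something stated on this page, and turns it into a basic Cloud Logging filter.

    import com.google.api.services.bigquery.model.SparkLoggingInfo;
    import com.google.api.services.bigquery.model.SparkStatistics;

    class SparkLogging {
      // Sketch: builds a simple Cloud Logging filter from the reported logging info.
      // Assumes a getResourceType() getter exists on SparkLoggingInfo.
      static String loggingFilter(SparkStatistics stats) {
        SparkLoggingInfo info = stats.getLoggingInfo();
        if (info == null || info.getResourceType() == null) {
          return null;
        }
        return "resource.type=\"" + info.getResourceType() + "\"";
      }
    }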
-
setLoggingInfo
public SparkStatistics setLoggingInfo(SparkLoggingInfo loggingInfo)
Output only. Logging info is used to generate a link to Cloud Logging.
Parameters:
    loggingInfo - loggingInfo or null for none
-
getSparkJobId
public String getSparkJobId()
Output only. Spark job ID if a Spark job is created successfully.
Returns:
    value or null for none
-
setSparkJobId
public SparkStatistics setSparkJobId(String sparkJobId)
Output only. Spark job ID if a Spark job is created successfully.
Parameters:
    sparkJobId - sparkJobId or null for none
-
getSparkJobLocation
public String getSparkJobLocation()
Output only. Location where the Spark job is executed. A location is selected by BigQuery for
jobs configured to run in a multi-region.
Returns:
    value or null for none
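Purely as an illustrative sketch: the job ID and the location it ran in are often read together, for example when labelling log output; the helper below is not part of the library.

    import com.google.api.services.bigquery.model.SparkStatistics;

    class SparkJobSummary {
      // Sketch: one-line description combining the Spark job ID and its location.
      static String describe(SparkStatistics stats) {
        String jobId = stats.getSparkJobId();          // null if no Spark job was created
        String location = stats.getSparkJobLocation(); // chosen by BigQuery for multi-region jobs
        return jobId == null
            ? "No Spark job was created for this query."
            : "Spark job " + jobId + " ran in " + location;
      }
    }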
-
setSparkJobLocation
public SparkStatistics setSparkJobLocation(String sparkJobLocation)
Output only. Location where the Spark job is executed. A location is selected by BigQuery for
jobs configured to run in a multi-region.
Parameters:
    sparkJobLocation - sparkJobLocation or null for none
-
set
public SparkStatistics set(String fieldName, Object value)
Overrides:
    set in class com.google.api.client.json.GenericJson
-
clone
public SparkStatistics clone()
Overrides:
    clone in class com.google.api.client.json.GenericJson
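Since every setter, as well as the generic set(String, Object), returns this SparkStatistics, instances can be populated fluently, which is handy in tests; the values below are placeholders, and clone() yields an independent copy.

    import com.google.api.services.bigquery.model.SparkStatistics;

    class SparkStatisticsSample {
      // Sketch: fluent population works because each setter returns the same instance.
      static SparkStatistics sample() {
        SparkStatistics stats = new SparkStatistics()
            .setSparkJobId("spark-job-123")       // placeholder value
            .setSparkJobLocation("us")            // placeholder value
            .set("customField", "customValue");   // generic entry via set(String, Object)
        return stats.clone();                     // independent copy of the model object
      }
    }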
Copyright © 2011–2024 Google. All rights reserved.