SparkStatistics (BigQuery API v2-rev20240616-2.0.0)

com.google.api.services.bigquery.model

Class SparkStatistics

    • Constructor Detail

      • SparkStatistics

        public SparkStatistics()
    • Method Detail

      • getEndpoints

        public Map<String,String> getEndpoints()
        Output only. Endpoints returned from Dataproc. Key list: `history_server_endpoint` - a link to the Spark job UI.
        Returns:
        value or null for none
      • setEndpoints

        public SparkStatistics setEndpoints(Map<String,String> endpoints)
        Output only. Endpoints returned from Dataproc. Key list: `history_server_endpoint` - a link to the Spark job UI.
        Parameters:
        endpoints - endpoints or null for none
      • getGcsStagingBucket

        public String getGcsStagingBucket()
        Output only. The Google Cloud Storage bucket that is used as the default file system by the Spark application. This field is only filled when the Spark procedure uses the invoker security mode. The `gcsStagingBucket` bucket is inferred from the `@@spark_proc_properties.staging_bucket` system variable (if it is provided). Otherwise, BigQuery creates a default staging bucket for the job and returns the bucket name in this field. Example: `gs://[bucket_name]`
        Returns:
        value or null for none
      • setGcsStagingBucket

        public SparkStatistics setGcsStagingBucket(String gcsStagingBucket)
        Output only. The Google Cloud Storage bucket that is used as the default file system by the Spark application. This field is only filled when the Spark procedure uses the invoker security mode. The `gcsStagingBucket` bucket is inferred from the `@@spark_proc_properties.staging_bucket` system variable (if it is provided). Otherwise, BigQuery creates a default staging bucket for the job and returns the bucket name in this field. Example: `gs://[bucket_name]`
        Parameters:
        gcsStagingBucket - gcsStagingBucket or null for none
      • getKmsKeyName

        public String getKmsKeyName()
        Output only. The Cloud KMS encryption key that is used to protect the resources created by the Spark job. If the Spark procedure uses the invoker security mode, the Cloud KMS encryption key is either inferred from the provided system variable, `@@spark_proc_properties.kms_key_name`, or the default key of the BigQuery job's project (if the CMEK organization policy is enforced). Otherwise, the Cloud KMS key is either inferred from the Spark connection associated with the procedure (if it is provided), or from the default key of the Spark connection's project if the CMEK organization policy is enforced. Example: `projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]`
        Returns:
        value or null for none
      • setKmsKeyName

        public SparkStatistics setKmsKeyName(String kmsKeyName)
        Output only. The Cloud KMS encryption key that is used to protect the resources created by the Spark job. If the Spark procedure uses the invoker security mode, the Cloud KMS encryption key is either inferred from the provided system variable, `@@spark_proc_properties.kms_key_name`, or the default key of the BigQuery job's project (if the CMEK organization policy is enforced). Otherwise, the Cloud KMS key is either inferred from the Spark connection associated with the procedure (if it is provided), or from the default key of the Spark connection's project if the CMEK organization policy is enforced. Example: `projects/[kms_project_id]/locations/[region]/keyRings/[key_region]/cryptoKeys/[key]`
        Parameters:
        kmsKeyName - kmsKeyName or null for none
      • getLoggingInfo

        public SparkLoggingInfo getLoggingInfo()
        Output only. Logging info is used to generate a link to Cloud Logging.
        Returns:
        value or null for none
      • setLoggingInfo

        public SparkStatistics setLoggingInfo(SparkLoggingInfo loggingInfo)
        Output only. Logging info is used to generate a link to Cloud Logging.
        Parameters:
        loggingInfo - loggingInfo or null for none
      • getSparkJobId

        public String getSparkJobId()
        Output only. Spark job ID if a Spark job is created successfully.
        Returns:
        value or null for none
      • setSparkJobId

        public SparkStatistics setSparkJobId(String sparkJobId)
        Output only. Spark job ID if a Spark job is created successfully.
        Parameters:
        sparkJobId - sparkJobId or null for none
      • getSparkJobLocation

        public String getSparkJobLocation()
        Output only. Location where the Spark job is executed. A location is selected by BigQuery for jobs configured to run in a multi-region.
        Returns:
        value or null for none
      • setSparkJobLocation

        public SparkStatistics setSparkJobLocation(String sparkJobLocation)
        Output only. Location where the Spark job is executed. A location is selected by BigQuery for jobs configured to run in a multi-region.
        Parameters:
        sparkJobLocation - sparkJobLocation or null for none
      • clone

        public SparkStatistics clone()
        Overrides:
        clone in class com.google.api.client.json.GenericJson
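
Below is a minimal usage sketch, not part of the generated Javadoc. It assumes a SparkStatistics instance has already been obtained from a completed job (for example, the statistics of a CALL of a Spark stored procedure); the class name SparkStatisticsExample, the helper printSparkStatistics, and all literal values are illustrative only.

    import com.google.api.services.bigquery.model.SparkLoggingInfo;
    import com.google.api.services.bigquery.model.SparkStatistics;

    import java.util.Map;

    public class SparkStatisticsExample {

      // Prints the output-only Spark fields of a job. Every getter returns
      // null when the field is absent, so values are checked before use.
      static void printSparkStatistics(SparkStatistics stats) {
        if (stats == null) {
          return; // the job did not run a Spark procedure
        }
        System.out.println("Spark job ID: " + stats.getSparkJobId());
        System.out.println("Spark job location: " + stats.getSparkJobLocation());

        Map<String, String> endpoints = stats.getEndpoints();
        if (endpoints != null) {
          // Documented key: history_server_endpoint (link to the Spark job UI).
          System.out.println("Spark UI: " + endpoints.get("history_server_endpoint"));
        }

        System.out.println("Staging bucket: " + stats.getGcsStagingBucket()); // e.g. gs://[bucket_name]
        System.out.println("KMS key: " + stats.getKmsKeyName());

        SparkLoggingInfo logging = stats.getLoggingInfo();
        if (logging != null) {
          System.out.println("Logging info: " + logging); // model carrying Cloud Logging details
        }
      }
    }

Because every field is marked output only, the setters are mostly useful when building fixtures in tests. As the signatures above show, each setter returns a SparkStatistics, so calls can be chained:

    SparkStatistics fixture = new SparkStatistics()
        .setSparkJobId("sample-spark-job-id")              // illustrative values only
        .setSparkJobLocation("us-central1")
        .setGcsStagingBucket("gs://example-staging-bucket");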

Copyright © 2011–2024 Google. All rights reserved.