ExternalDataConfiguration (BigQuery API v2-rev20240905-2.0.0)
com.google.api.services.bigquery.model
Class ExternalDataConfiguration
java.lang.Object
  java.util.AbstractMap<String,Object>
    com.google.api.client.util.GenericData
      com.google.api.client.json.GenericJson
        com.google.api.services.bigquery.model.ExternalDataConfiguration
public final class ExternalDataConfiguration
extends com.google.api.client.json.GenericJson
Model definition for ExternalDataConfiguration.
This is the Java data model class that specifies how to parse/serialize into the JSON that is
transmitted over HTTP when working with the BigQuery API. For a detailed explanation see:
https://developers.google.com/api-client-library/java/google-http-java-client/json
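As an illustrative sketch only (the project, bucket, dataset, and table names are placeholders), the chained setters documented under Method Detail can be combined to describe an external CSV data source and attached to a Table model object:

import com.google.api.services.bigquery.model.ExternalDataConfiguration;
import com.google.api.services.bigquery.model.Table;
import com.google.api.services.bigquery.model.TableReference;
import java.util.Collections;

public class ExternalTableSketch {
  public static void main(String[] args) {
    // CSV files in a hypothetical Cloud Storage bucket.
    ExternalDataConfiguration config = new ExternalDataConfiguration()
        .setSourceFormat("CSV")
        .setSourceUris(Collections.singletonList("gs://example-bucket/data/*.csv"))
        .setAutodetect(true)      // infer the schema and CSV options
        .setCompression("NONE")
        .setMaxBadRecords(0);

    // Attach the configuration to a Table model object; inserting the table
    // through the Bigquery client is omitted here.
    Table table = new Table()
        .setTableReference(new TableReference()
            .setProjectId("example-project")
            .setDatasetId("example_dataset")
            .setTableId("example_external_table"))
        .setExternalDataConfiguration(config);

    System.out.println(table);   // GenericJson.toString() serializes to JSON
  }
}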
Author:
Google, Inc.
Nested Class Summary
-
Nested classes/interfaces inherited from class com.google.api.client.util.GenericData
com.google.api.client.util.GenericData.Flags
-
Nested classes/interfaces inherited from class java.util.AbstractMap
AbstractMap.SimpleEntry<K,V>, AbstractMap.SimpleImmutableEntry<K,V>
-
Constructor Summary
ExternalDataConfiguration()
-
Method Summary
-
Methods inherited from class com.google.api.client.json.GenericJson
getFactory, setFactory, toPrettyString, toString
-
Methods inherited from class com.google.api.client.util.GenericData
entrySet, equals, get, getClassInfo, getUnknownKeys, hashCode, put, putAll, remove, setUnknownKeys
-
Methods inherited from class java.util.AbstractMap
clear, containsKey, containsValue, isEmpty, keySet, size, values
-
Methods inherited from class java.lang.Object
finalize, getClass, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface java.util.Map
compute, computeIfAbsent, computeIfPresent, forEach, getOrDefault, merge, putIfAbsent, remove, replace, replace, replaceAll
-
-
Method Detail
-
getAutodetect
public Boolean getAutodetect()
Try to detect schema and format options automatically. Any option specified explicitly will be
honored.
Returns:
value or null for none
-
setAutodetect
public ExternalDataConfiguration setAutodetect(Boolean autodetect)
Try to detect schema and format options automatically. Any option specified explicitly will be
honored.
Parameters:
autodetect - autodetect or null for none
-
getAvroOptions
public AvroOptions getAvroOptions()
Optional. Additional properties to set if sourceFormat is set to AVRO.
Returns:
value or null for none
-
setAvroOptions
public ExternalDataConfiguration setAvroOptions(AvroOptions avroOptions)
Optional. Additional properties to set if sourceFormat is set to AVRO.
Parameters:
avroOptions - avroOptions or null for none
-
getBigtableOptions
public BigtableOptions getBigtableOptions()
Optional. Additional options if sourceFormat is set to BIGTABLE.
Returns:
value or null for none
-
setBigtableOptions
public ExternalDataConfiguration setBigtableOptions(BigtableOptions bigtableOptions)
Optional. Additional options if sourceFormat is set to BIGTABLE.
Parameters:
bigtableOptions - bigtableOptions or null for none
-
getCompression
public String getCompression()
Optional. The compression type of the data source. Possible values include GZIP and NONE. The
default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud
Datastore backups, Avro, ORC and Parquet formats. An empty string is an invalid value.
Returns:
value or null for none
-
setCompression
public ExternalDataConfiguration setCompression(String compression)
Optional. The compression type of the data source. Possible values include GZIP and NONE. The
default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud
Datastore backups, Avro, ORC and Parquet formats. An empty string is an invalid value.
Parameters:
compression - compression or null for none
-
getConnectionId
public String getConnectionId()
Optional. The connection specifying the credentials to be used to read external storage, such
as Azure Blob, Cloud Storage, or S3. The connection_id can have the form
`{project_id}.{location_id};{connection_id}` or
`projects/{project_id}/locations/{location_id}/connections/{connection_id}`.
Returns:
value or null for none
-
setConnectionId
public ExternalDataConfiguration setConnectionId(String connectionId)
Optional. The connection specifying the credentials to be used to read external storage, such
as Azure Blob, Cloud Storage, or S3. The connection_id can have the form
`{project_id}.{location_id};{connection_id}` or
`projects/{project_id}/locations/{location_id}/connections/{connection_id}`.
Parameters:
connectionId - connectionId or null for none
-
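As a small sketch (the project, location, and connection names are placeholders), either form quoted in the field description can be passed to setConnectionId:

import com.google.api.services.bigquery.model.ExternalDataConfiguration;

class ConnectionIdSketch {
  // Placeholder names, following the two forms quoted in the field description.
  static ExternalDataConfiguration shortForm() {
    return new ExternalDataConfiguration()
        .setConnectionId("example-project.us;example-connection");
  }

  static ExternalDataConfiguration resourceForm() {
    return new ExternalDataConfiguration()
        .setConnectionId(
            "projects/example-project/locations/us/connections/example-connection");
  }
}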
getCsvOptions
public CsvOptions getCsvOptions()
Optional. Additional properties to set if sourceFormat is set to CSV.
Returns:
value or null for none
-
setCsvOptions
public ExternalDataConfiguration setCsvOptions(CsvOptions csvOptions)
Optional. Additional properties to set if sourceFormat is set to CSV.
Parameters:
csvOptions - csvOptions or null for none
-
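A sketch of pairing csvOptions with a CSV sourceFormat; the delimiter and header count below are arbitrary placeholders, not values from this page:

import com.google.api.services.bigquery.model.CsvOptions;
import com.google.api.services.bigquery.model.ExternalDataConfiguration;

class CsvOptionsSketch {
  static ExternalDataConfiguration csvConfig() {
    // Example CSV parsing options; adjust to match the actual files.
    CsvOptions csv = new CsvOptions()
        .setFieldDelimiter(",")
        .setSkipLeadingRows(1L)        // skip a single header row
        .setAllowJaggedRows(false);
    return new ExternalDataConfiguration()
        .setSourceFormat("CSV")
        .setCsvOptions(csv);
  }
}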
getDecimalTargetTypes
public List<String> getDecimalTargetTypes()
Defines the list of possible SQL data types to which the source decimal values are converted.
This list and the precision and the scale parameters of the decimal field determine the target
type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the
specified list and if it supports the precision and the scale. STRING supports all precision
and scale values. If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value exceeds the
supported range when reading the data, an error will be thrown. Example: Suppose the value of
this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: * (38,9) -> NUMERIC; * (39,9)
-> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot
hold 10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error if value
exceeds the supported range). This field cannot contain duplicate types. The order of the types in
this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC",
"BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC",
"STRING"] for ORC and ["NUMERIC"] for the other file formats.
Returns:
value or null for none
-
setDecimalTargetTypes
public ExternalDataConfiguration setDecimalTargetTypes(List<String> decimalTargetTypes)
Defines the list of possible SQL data types to which the source decimal values are converted.
This list and the precision and the scale parameters of the decimal field determine the target
type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the
specified list and if it supports the precision and the scale. STRING supports all precision
and scale values. If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value exceeds the
supported range when reading the data, an error will be thrown. Example: Suppose the value of
this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: * (38,9) -> NUMERIC; * (39,9)
-> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot
hold 10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error if value
exceeds the supported range). This field cannot contain duplicate types. The order of the types in
this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC",
"BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC",
"STRING"] for ORC and ["NUMERIC"] for the other file formats.
Parameters:
decimalTargetTypes - decimalTargetTypes or null for none
-
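To illustrate the precedence rule described above, a sketch that lists both NUMERIC and BIGNUMERIC as candidate target types (the source format is an arbitrary placeholder):

import com.google.api.services.bigquery.model.ExternalDataConfiguration;
import java.util.Arrays;

class DecimalTargetTypesSketch {
  static ExternalDataConfiguration decimalTargets() {
    // NUMERIC is preferred when the source precision/scale fits; otherwise
    // BIGNUMERIC is used, per the precedence described above.
    return new ExternalDataConfiguration()
        .setSourceFormat("PARQUET")
        .setDecimalTargetTypes(Arrays.asList("NUMERIC", "BIGNUMERIC"));
  }
}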
getFileSetSpecType
public String getFileSetSpecType()
Optional. Specifies how source URIs are interpreted for constructing the file set to load. By
default source URIs are expanded against the underlying storage. Other options include
specifying manifest files. Only applicable to object storage systems.
Returns:
value or null for none
-
setFileSetSpecType
public ExternalDataConfiguration setFileSetSpecType(String fileSetSpecType)
Optional. Specifies how source URIs are interpreted for constructing the file set to load. By
default source URIs are expanded against the underlying storage. Other options include
specifying manifest files. Only applicable to object storage systems.
Parameters:
fileSetSpecType - fileSetSpecType or null for none
-
getGoogleSheetsOptions
public GoogleSheetsOptions getGoogleSheetsOptions()
Optional. Additional options if sourceFormat is set to GOOGLE_SHEETS.
Returns:
value or null for none
-
setGoogleSheetsOptions
public ExternalDataConfiguration setGoogleSheetsOptions(GoogleSheetsOptions googleSheetsOptions)
Optional. Additional options if sourceFormat is set to GOOGLE_SHEETS.
Parameters:
googleSheetsOptions - googleSheetsOptions or null for none
-
getHivePartitioningOptions
public HivePartitioningOptions getHivePartitioningOptions()
Optional. When set, configures hive partitioning support. Not all storage formats support hive
partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as
will providing an invalid specification.
Returns:
value or null for none
-
setHivePartitioningOptions
public ExternalDataConfiguration setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)
Optional. When set, configures hive partitioning support. Not all storage formats support hive
partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as
will providing an invalid specification.
Parameters:
hivePartitioningOptions - hivePartitioningOptions or null for none
-
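A sketch of enabling hive partitioning for files laid out under a common prefix; the mode, prefix, and URIs are assumptions for illustration, not values from this page:

import com.google.api.services.bigquery.model.ExternalDataConfiguration;
import com.google.api.services.bigquery.model.HivePartitioningOptions;
import java.util.Collections;

class HivePartitioningSketch {
  static ExternalDataConfiguration hivePartitioned() {
    // Files such as gs://example-bucket/table/dt=2024-01-01/file.parquet share
    // the prefix below; AUTO infers the partition key types.
    HivePartitioningOptions hive = new HivePartitioningOptions()
        .setMode("AUTO")
        .setSourceUriPrefix("gs://example-bucket/table/");
    return new ExternalDataConfiguration()
        .setSourceFormat("PARQUET")
        .setSourceUris(Collections.singletonList("gs://example-bucket/table/*"))
        .setHivePartitioningOptions(hive);
  }
}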
getIgnoreUnknownValues
public Boolean getIgnoreUnknownValues()
Optional. Indicates if BigQuery should allow extra values that are not represented in the table
schema. If true, the extra values are ignored. If false, records with extra columns are treated
as bad records, and if there are too many bad records, an invalid error is returned in the job
result. The default value is false. The sourceFormat property determines what BigQuery treats
as an extra value:
CSV: Trailing columns
JSON: Named values that don't match any column names
Google Cloud Bigtable: This setting is ignored.
Google Cloud Datastore backups: This setting is ignored.
Avro: This setting is ignored.
ORC: This setting is ignored.
Parquet: This setting is ignored.
Returns:
value or null for none
-
setIgnoreUnknownValues
public ExternalDataConfiguration setIgnoreUnknownValues(Boolean ignoreUnknownValues)
Optional. Indicates if BigQuery should allow extra values that are not represented in the table
schema. If true, the extra values are ignored. If false, records with extra columns are treated
as bad records, and if there are too many bad records, an invalid error is returned in the job
result. The default value is false. The sourceFormat property determines what BigQuery treats
as an extra value:
CSV: Trailing columns
JSON: Named values that don't match any column names
Google Cloud Bigtable: This setting is ignored.
Google Cloud Datastore backups: This setting is ignored.
Avro: This setting is ignored.
ORC: This setting is ignored.
Parquet: This setting is ignored.
Parameters:
ignoreUnknownValues - ignoreUnknownValues or null for none
-
getJsonExtension
public String getJsonExtension()
Optional. Load option to be used together with source_format newline-delimited JSON to indicate
that a variant of JSON is being loaded. To load newline-delimited GeoJSON, specify GEOJSON (and
source_format must be set to NEWLINE_DELIMITED_JSON).
Returns:
value or null for none
-
setJsonExtension
public ExternalDataConfiguration setJsonExtension(String jsonExtension)
Optional. Load option to be used together with source_format newline-delimited JSON to indicate
that a variant of JSON is being loaded. To load newline-delimited GeoJSON, specify GEOJSON (and
source_format must be set to NEWLINE_DELIMITED_JSON).
Parameters:
jsonExtension - jsonExtension or null for none
-
getJsonOptions
public JsonOptions getJsonOptions()
Optional. Additional properties to set if sourceFormat is set to JSON.
Returns:
value or null for none
-
setJsonOptions
public ExternalDataConfiguration setJsonOptions(JsonOptions jsonOptions)
Optional. Additional properties to set if sourceFormat is set to JSON.
Parameters:
jsonOptions - jsonOptions or null for none
-
getMaxBadRecords
public Integer getMaxBadRecords()
Optional. The maximum number of bad records that BigQuery can ignore when reading data. If the
number of bad records exceeds this value, an invalid error is returned in the job result. The
default value is 0, which requires that all records are valid. This setting is ignored for
Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats.
Returns:
value or null for none
-
setMaxBadRecords
public ExternalDataConfiguration setMaxBadRecords(Integer maxBadRecords)
Optional. The maximum number of bad records that BigQuery can ignore when reading data. If the
number of bad records exceeds this value, an invalid error is returned in the job result. The
default value is 0, which requires that all records are valid. This setting is ignored for
Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats.
Parameters:
maxBadRecords - maxBadRecords or null for none
-
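Together with ignoreUnknownValues above, maxBadRecords controls how tolerant a scan is of malformed rows; a sketch with arbitrary placeholder values:

import com.google.api.services.bigquery.model.ExternalDataConfiguration;

class BadRecordToleranceSketch {
  static ExternalDataConfiguration lenientCsv() {
    return new ExternalDataConfiguration()
        .setSourceFormat("CSV")
        .setIgnoreUnknownValues(true)  // drop trailing CSV columns not in the schema
        .setMaxBadRecords(10);         // tolerate up to 10 unparseable records
  }
}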
getMetadataCacheMode
public String getMetadataCacheMode()
Optional. Metadata Cache Mode for the table. Set this to enable caching of metadata from
external data source.
Returns:
value or null for none
-
setMetadataCacheMode
public ExternalDataConfiguration setMetadataCacheMode(String metadataCacheMode)
Optional. Metadata Cache Mode for the table. Set this to enable caching of metadata from
external data source.
Parameters:
metadataCacheMode - metadataCacheMode or null for none
-
getObjectMetadata
public String getObjectMetadata()
Optional. ObjectMetadata is used to create Object Tables. Object Tables contain a listing of
objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format
should be omitted. Currently SIMPLE is the only supported Object Metadata type.
Returns:
value or null for none
-
setObjectMetadata
public ExternalDataConfiguration setObjectMetadata(String objectMetadata)
Optional. ObjectMetadata is used to create Object Tables. Object Tables contain a listing of
objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format
should be omitted. Currently SIMPLE is the only supported Object Metadata type.
Parameters:
objectMetadata - objectMetadata or null for none
-
getParquetOptions
public ParquetOptions getParquetOptions()
Optional. Additional properties to set if sourceFormat is set to PARQUET.
Returns:
value or null for none
-
setParquetOptions
public ExternalDataConfiguration setParquetOptions(ParquetOptions parquetOptions)
Optional. Additional properties to set if sourceFormat is set to PARQUET.
Parameters:
parquetOptions - parquetOptions or null for none
-
getReferenceFileSchemaUri
public String getReferenceFileSchemaUri()
Optional. When creating an external table, the user can provide a reference file with the table
schema. This is enabled for the following formats: AVRO, PARQUET, ORC.
Returns:
value or null for none
-
setReferenceFileSchemaUri
public ExternalDataConfiguration setReferenceFileSchemaUri(String referenceFileSchemaUri)
Optional. When creating an external table, the user can provide a reference file with the table
schema. This is enabled for the following formats: AVRO, PARQUET, ORC.
Parameters:
referenceFileSchemaUri - referenceFileSchemaUri or null for none
-
getSchema
public TableSchema getSchema()
Optional. The schema for the data. Schema is required for CSV and JSON formats if autodetect is
not on. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, Avro, ORC and
Parquet formats.
Returns:
value or null for none
-
setSchema
public ExternalDataConfiguration setSchema(TableSchema schema)
Optional. The schema for the data. Schema is required for CSV and JSON formats if autodetect is
not on. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, Avro, ORC and
Parquet formats.
Parameters:
schema - schema or null for none
-
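When autodetect is off, an explicit schema can be supplied as sketched below; the field names and types are placeholders chosen for illustration:

import com.google.api.services.bigquery.model.ExternalDataConfiguration;
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Arrays;

class ExplicitSchemaSketch {
  static ExternalDataConfiguration withSchema() {
    // A three-column schema for a CSV source.
    TableSchema schema = new TableSchema().setFields(Arrays.asList(
        new TableFieldSchema().setName("id").setType("INTEGER").setMode("REQUIRED"),
        new TableFieldSchema().setName("name").setType("STRING"),
        new TableFieldSchema().setName("created").setType("TIMESTAMP")));
    return new ExternalDataConfiguration()
        .setSourceFormat("CSV")
        .setAutodetect(false)
        .setSchema(schema);
  }
}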
getSourceFormat
public String getSourceFormat()
[Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify
"GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files,
specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". For Apache
Iceberg tables, specify "ICEBERG". For ORC files, specify "ORC". For Parquet files, specify
"PARQUET". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
Returns:
value or null for none
-
setSourceFormat
public ExternalDataConfiguration setSourceFormat(String sourceFormat)
[Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify
"GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files,
specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". For Apache
Iceberg tables, specify "ICEBERG". For ORC files, specify "ORC". For Parquet files, specify
"PARQUET". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
Parameters:
sourceFormat - sourceFormat or null for none
-
getSourceUris
public List<String> getSourceUris()
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud
Storage URIs: Each URI can contain one '*' wildcard character and it must come after the
'bucket' name. Size limits related to load jobs apply to external data sources. For Google
Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one
URI can be specified. Also, the '*' wildcard character is not allowed.
Returns:
value or null for none
-
setSourceUris
public ExternalDataConfiguration setSourceUris(List<String> sourceUris)
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud
Storage URIs: Each URI can contain one '*' wildcard character and it must come after the
'bucket' name. Size limits related to load jobs apply to external data sources. For Google
Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one
URI can be specified. Also, the '*' wildcard character is not allowed.
Parameters:
sourceUris - sourceUris or null for none
-
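A sketch contrasting the URI rules above: a Cloud Storage URI may carry one '*' wildcard after the bucket name, while a Datastore backup takes exactly one URI with no wildcard (all paths are placeholders):

import com.google.api.services.bigquery.model.ExternalDataConfiguration;
import java.util.Arrays;
import java.util.Collections;

class SourceUriSketch {
  // Cloud Storage: each URI may contain one '*' wildcard after the bucket name.
  static ExternalDataConfiguration storageUris() {
    return new ExternalDataConfiguration()
        .setSourceFormat("PARQUET")
        .setSourceUris(Arrays.asList(
            "gs://example-bucket/logs/2024/*.parquet",
            "gs://example-bucket/logs/2023/*.parquet"));
  }

  // Datastore backup: exactly one URI and no wildcard.
  static ExternalDataConfiguration datastoreBackupUri() {
    return new ExternalDataConfiguration()
        .setSourceFormat("DATASTORE_BACKUP")
        .setSourceUris(Collections.singletonList(
            "gs://example-bucket/backup/export_metadata"));
  }
}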
set
public ExternalDataConfiguration set(String fieldName, Object value)
Overrides:
set in class com.google.api.client.json.GenericJson
-
clone
public ExternalDataConfiguration clone()
Overrides:
clone in class com.google.api.client.json.GenericJson
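Since the class extends GenericJson, set stores a value under a JSON field name and clone produces an independent copy; a brief sketch with placeholder values:

import com.google.api.services.bigquery.model.ExternalDataConfiguration;

class GenericDataSketch {
  static void copyAndOverride() {
    ExternalDataConfiguration base = new ExternalDataConfiguration()
        .setSourceFormat("CSV");

    // clone() returns an independent copy; set() assigns a value by JSON field name.
    ExternalDataConfiguration copy = base.clone()
        .set("compression", "GZIP");

    System.out.println(copy);  // serialized as JSON by GenericJson.toString()
  }
}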
Copyright © 2011–2024 Google. All rights reserved.