JobConfigurationLoad (BigQuery API v2-rev20190423-1.28.0)
com.google.api.services.bigquery.model
Class JobConfigurationLoad
- java.lang.Object
  - java.util.AbstractMap<String,Object>
    - com.google.api.client.util.GenericData
      - com.google.api.client.json.GenericJson
        - com.google.api.services.bigquery.model.JobConfigurationLoad
public final class JobConfigurationLoad
extends com.google.api.client.json.GenericJson
Model definition for JobConfigurationLoad.
This is the Java data model class that specifies how to parse/serialize into the JSON that is
transmitted over HTTP when working with the BigQuery API. For a detailed explanation see:
https://developers.google.com/api-client-library/java/google-http-java-client/json
- Author: Google, Inc.
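Example usage (an illustrative sketch, not part of the generated API surface): build a load configuration for a CSV file in Cloud Storage and wrap it in a Job. All project, dataset, table, and bucket names below are placeholders.

    import com.google.api.services.bigquery.model.Job;
    import com.google.api.services.bigquery.model.JobConfiguration;
    import com.google.api.services.bigquery.model.JobConfigurationLoad;
    import com.google.api.services.bigquery.model.TableReference;
    import java.util.Collections;

    JobConfigurationLoad load = new JobConfigurationLoad()
        .setSourceUris(Collections.singletonList("gs://my-bucket/data/*.csv")) // placeholder URI
        .setSourceFormat("CSV")
        .setDestinationTable(new TableReference()
            .setProjectId("my-project")   // placeholder project
            .setDatasetId("my_dataset")   // placeholder dataset
            .setTableId("my_table"))      // placeholder table
        .setSkipLeadingRows(1)            // skip one header row
        .setAutodetect(true);             // infer schema and options from the data

    Job job = new Job().setConfiguration(new JobConfiguration().setLoad(load));
    // The job would then be submitted through an initialized Bigquery client, e.g.:
    // bigquery.jobs().insert("my-project", job).execute();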
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class com.google.api.client.util.GenericData
com.google.api.client.util.GenericData.Flags
-
Nested classes/interfaces inherited from class java.util.AbstractMap
AbstractMap.SimpleEntry<K,V>, AbstractMap.SimpleImmutableEntry<K,V>
-
Constructor Summary
Constructors
Constructor and Description
JobConfigurationLoad()
-
Method Summary
Modifier and Type
Method and Description
JobConfigurationLoad
clone()
Boolean
getAllowJaggedRows()
[Optional] Accept rows that are missing trailing optional columns.
Boolean
getAllowQuotedNewlines()
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file.
Boolean
getAutodetect()
[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON
sources.
Clustering
getClustering()
[Beta] Clustering specification for the destination table.
String
getCreateDisposition()
[Optional] Specifies whether the job is allowed to create new tables.
EncryptionConfiguration
getDestinationEncryptionConfiguration()
Custom encryption configuration (e.g., Cloud KMS keys).
TableReference
getDestinationTable()
[Required] The destination table to load the data into.
DestinationTableProperties
getDestinationTableProperties()
[Beta] [Optional] Properties with which to create the destination table if it is new.
String
getEncoding()
[Optional] The character encoding of the data.
String
getFieldDelimiter()
[Optional] The separator for fields in a CSV file.
String
getHivePartitioningMode()
[Optional, Experimental] If hive partitioning is enabled, which mode to use.
Boolean
getIgnoreUnknownValues()
[Optional] Indicates if BigQuery should allow extra values that are not represented in the
table schema.
Integer
getMaxBadRecords()
[Optional] The maximum number of bad records that BigQuery can ignore when running the job.
String
getNullMarker()
[Optional] Specifies a string that represents a null value in a CSV file.
List<String>
getProjectionFields()
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup.
String
getQuote()
[Optional] The value that is used to quote data sections in a CSV file.
RangePartitioning
getRangePartitioning()
[TrustedTester] Range partitioning specification for this table.
TableSchema
getSchema()
[Optional] The schema for the destination table.
String
getSchemaInline()
[Deprecated] The inline schema.
String
getSchemaInlineFormat()
[Deprecated] The format of the schemaInline property.
List<String>
getSchemaUpdateOptions()
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration.
Integer
getSkipLeadingRows()
[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the
data.
String
getSourceFormat()
[Optional] The format of the data files.
List<String>
getSourceUris()
[Required] The fully-qualified URIs that point to your data in Google Cloud.
TimePartitioning
getTimePartitioning()
Time-based partitioning specification for the destination table.
Boolean
getUseAvroLogicalTypes()
[Optional] If sourceFormat is set to "AVRO", indicates whether to enable interpreting logical
types into their corresponding types (i.e. TIMESTAMP) instead of only using their raw types (i.e. INTEGER).
String
getWriteDisposition()
[Optional] Specifies the action that occurs if the destination table already exists.
JobConfigurationLoad
set(String fieldName, Object value)
JobConfigurationLoad
setAllowJaggedRows(Boolean allowJaggedRows)
[Optional] Accept rows that are missing trailing optional columns.
JobConfigurationLoad
setAllowQuotedNewlines(Boolean allowQuotedNewlines)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file.
JobConfigurationLoad
setAutodetect(Boolean autodetect)
[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON
sources.
JobConfigurationLoad
setClustering(Clustering clustering)
[Beta] Clustering specification for the destination table.
JobConfigurationLoad
setCreateDisposition(String createDisposition)
[Optional] Specifies whether the job is allowed to create new tables.
JobConfigurationLoad
setDestinationEncryptionConfiguration(EncryptionConfiguration destinationEncryptionConfiguration)
Custom encryption configuration (e.g., Cloud KMS keys).
JobConfigurationLoad
setDestinationTable(TableReference destinationTable)
[Required] The destination table to load the data into.
JobConfigurationLoad
setDestinationTableProperties(DestinationTableProperties destinationTableProperties)
[Beta] [Optional] Properties with which to create the destination table if it is new.
JobConfigurationLoad
setEncoding(String encoding)
[Optional] The character encoding of the data.
JobConfigurationLoad
setFieldDelimiter(String fieldDelimiter)
[Optional] The separator for fields in a CSV file.
JobConfigurationLoad
setHivePartitioningMode(String hivePartitioningMode)
[Optional, Experimental] If hive partitioning is enabled, which mode to use.
JobConfigurationLoad
setIgnoreUnknownValues(Boolean ignoreUnknownValues)
[Optional] Indicates if BigQuery should allow extra values that are not represented in the
table schema.
JobConfigurationLoad
setMaxBadRecords(Integer maxBadRecords)
[Optional] The maximum number of bad records that BigQuery can ignore when running the job.
JobConfigurationLoad
setNullMarker(String nullMarker)
[Optional] Specifies a string that represents a null value in a CSV file.
JobConfigurationLoad
setProjectionFields(List<String> projectionFields)
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup.
JobConfigurationLoad
setQuote(String quote)
[Optional] The value that is used to quote data sections in a CSV file.
JobConfigurationLoad
setRangePartitioning(RangePartitioning rangePartitioning)
[TrustedTester] Range partitioning specification for this table.
JobConfigurationLoad
setSchema(TableSchema schema)
[Optional] The schema for the destination table.
JobConfigurationLoad
setSchemaInline(String schemaInline)
[Deprecated] The inline schema.
JobConfigurationLoad
setSchemaInlineFormat(String schemaInlineFormat)
[Deprecated] The format of the schemaInline property.
JobConfigurationLoad
setSchemaUpdateOptions(List<String> schemaUpdateOptions)
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration.
JobConfigurationLoad
setSkipLeadingRows(Integer skipLeadingRows)
[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the
data.
JobConfigurationLoad
setSourceFormat(String sourceFormat)
[Optional] The format of the data files.
JobConfigurationLoad
setSourceUris(List<String> sourceUris)
[Required] The fully-qualified URIs that point to your data in Google Cloud.
JobConfigurationLoad
setTimePartitioning(TimePartitioning timePartitioning)
Time-based partitioning specification for the destination table.
JobConfigurationLoad
setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)
[Optional] If sourceFormat is set to "AVRO", indicates whether to enable interpreting logical
types into their corresponding types (i.e. TIMESTAMP) instead of only using their raw types (i.e. INTEGER).
JobConfigurationLoad
setWriteDisposition(String writeDisposition)
[Optional] Specifies the action that occurs if the destination table already exists.
-
Methods inherited from class com.google.api.client.json.GenericJson
getFactory, setFactory, toPrettyString, toString
-
Methods inherited from class com.google.api.client.util.GenericData
entrySet, get, getClassInfo, getUnknownKeys, put, putAll, remove, setUnknownKeys
-
Methods inherited from class java.util.AbstractMap
clear, containsKey, containsValue, equals, hashCode, isEmpty, keySet, size, values
-
Methods inherited from class java.lang.Object
finalize, getClass, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface java.util.Map
compute, computeIfAbsent, computeIfPresent, forEach, getOrDefault, merge, putIfAbsent, remove, replace, replace, replaceAll
-
-
Method Detail
-
getAllowJaggedRows
public Boolean getAllowJaggedRows()
[Optional] Accept rows that are missing trailing optional columns. The missing values are
treated as nulls. If false, records with missing trailing columns are treated as bad records,
and if there are too many bad records, an invalid error is returned in the job result. The
default value is false. Only applicable to CSV, ignored for other formats.
- Returns: value or null for none
-
setAllowJaggedRows
public JobConfigurationLoad setAllowJaggedRows(Boolean allowJaggedRows)
[Optional] Accept rows that are missing trailing optional columns. The missing values are
treated as nulls. If false, records with missing trailing columns are treated as bad records,
and if there are too many bad records, an invalid error is returned in the job result. The
default value is false. Only applicable to CSV, ignored for other formats.
- Parameters: allowJaggedRows - allowJaggedRows or null for none
-
getAllowQuotedNewlines
public Boolean getAllowQuotedNewlines()
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file. The default value is false.
- Returns: value or null for none
-
setAllowQuotedNewlines
public JobConfigurationLoad setAllowQuotedNewlines(Boolean allowQuotedNewlines)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file. The default value is false.
- Parameters: allowQuotedNewlines - allowQuotedNewlines or null for none
-
getAutodetect
public Boolean getAutodetect()
[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON
sources.
- Returns: value or null for none
-
setAutodetect
public JobConfigurationLoad setAutodetect(Boolean autodetect)
[Optional] Indicates if we should automatically infer the options and schema for CSV and JSON
sources.
- Parameters: autodetect - autodetect or null for none
-
getClustering
public Clustering getClustering()
[Beta] Clustering specification for the destination table. Must be specified together with
time-based partitioning; data in the table will be first partitioned and subsequently clustered.
- Returns: value or null for none
-
setClustering
public JobConfigurationLoad setClustering(Clustering clustering)
[Beta] Clustering specification for the destination table. Must be specified together with
time-based partitioning; data in the table will be first partitioned and subsequently clustered.
- Parameters: clustering - clustering or null for none
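Illustrative sketch of the constraint above (clustering is only valid together with time-based partitioning); the column names are hypothetical, and `load` is a JobConfigurationLoad as in the class-level example (Clustering, TimePartitioning, and java.util.Arrays imported):

    // Partition by day on a date column, then cluster within each partition.
    load.setTimePartitioning(new TimePartitioning().setType("DAY").setField("event_date"))
        .setClustering(new Clustering().setFields(Arrays.asList("customer_id", "country")));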
-
getCreateDisposition
public String getCreateDisposition()
[Optional] Specifies whether the job is allowed to create new tables. The following values are
supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in
the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
- Returns: value or null for none
-
setCreateDisposition
public JobConfigurationLoad setCreateDisposition(String createDisposition)
[Optional] Specifies whether the job is allowed to create new tables. The following values are
supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in
the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
- Parameters: createDisposition - createDisposition or null for none
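A short sketch of the two dispositions, assuming `load` is a JobConfigurationLoad as in the class-level example:

    // Fail with a 'notFound' error unless the destination table already exists:
    load.setCreateDisposition("CREATE_NEVER");
    // Or allow BigQuery to create the table on demand (the default):
    load.setCreateDisposition("CREATE_IF_NEEDED");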
-
getDestinationEncryptionConfiguration
public EncryptionConfiguration getDestinationEncryptionConfiguration()
Custom encryption configuration (e.g., Cloud KMS keys).
- Returns: value or null for none
-
setDestinationEncryptionConfiguration
public JobConfigurationLoad setDestinationEncryptionConfiguration(EncryptionConfiguration destinationEncryptionConfiguration)
Custom encryption configuration (e.g., Cloud KMS keys).
- Parameters: destinationEncryptionConfiguration - destinationEncryptionConfiguration or null for none
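Illustrative sketch; the Cloud KMS key resource name below is a placeholder, and `load` is a JobConfigurationLoad as in the class-level example:

    load.setDestinationEncryptionConfiguration(new EncryptionConfiguration()
        .setKmsKeyName("projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"));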
-
getDestinationTable
public TableReference getDestinationTable()
[Required] The destination table to load the data into.
- Returns: value or null for none
-
setDestinationTable
public JobConfigurationLoad setDestinationTable(TableReference destinationTable)
[Required] The destination table to load the data into.
- Parameters: destinationTable - destinationTable or null for none
-
getDestinationTableProperties
public DestinationTableProperties getDestinationTableProperties()
[Beta] [Optional] Properties with which to create the destination table if it is new.
- Returns: value or null for none
-
setDestinationTableProperties
public JobConfigurationLoad setDestinationTableProperties(DestinationTableProperties destinationTableProperties)
[Beta] [Optional] Properties with which to create the destination table if it is new.
- Parameters: destinationTableProperties - destinationTableProperties or null for none
-
getEncoding
public String getEncoding()
[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split
using the values of the quote and fieldDelimiter properties.
- Returns: value or null for none
-
setEncoding
public JobConfigurationLoad setEncoding(String encoding)
[Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split
using the values of the quote and fieldDelimiter properties.
- Parameters: encoding - encoding or null for none
-
getFieldDelimiter
public String getFieldDelimiter()
[Optional] The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-
byte character. To use a character in the range 128-255, you must encode the character as UTF-8.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the
encoded string to split the data in its raw, binary state. BigQuery also supports the escape
sequence "\t" to specify a tab separator. The default value is a comma (',').
- Returns: value or null for none
-
setFieldDelimiter
public JobConfigurationLoad setFieldDelimiter(String fieldDelimiter)
[Optional] The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-
byte character. To use a character in the range 128-255, you must encode the character as UTF-8.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the
encoded string to split the data in its raw, binary state. BigQuery also supports the escape
sequence "\t" to specify a tab separator. The default value is a comma (',').
- Parameters: fieldDelimiter - fieldDelimiter or null for none
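Illustrative sketch of the CSV parsing options described above, assuming `load` is a JobConfigurationLoad for a CSV source:

    load.setSourceFormat("CSV")
        .setFieldDelimiter("\t")      // tab-separated input
        .setQuote("")                 // this data has no quoted sections
        .setSkipLeadingRows(1)        // one header row
        .setAllowJaggedRows(true);    // missing trailing columns become nulls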
-
getHivePartitioningMode
public String getHivePartitioningMode()
[Optional, Experimental] If hive partitioning is enabled, which mode to use. Two modes are
supported: - AUTO: automatically infer partition key name(s) and type(s). - STRINGS:
automatically infer partition key name(s); all types are strings. Not all storage formats
support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error.
- Returns: value or null for none
-
setHivePartitioningMode
public JobConfigurationLoad setHivePartitioningMode(String hivePartitioningMode)
[Optional, Experimental] If hive partitioning is enabled, which mode to use. Two modes are
supported: - AUTO: automatically infer partition key name(s) and type(s). - STRINGS:
automatically infer partition key name(s); all types are strings. Not all storage formats
support hive partitioning -- requesting hive partitioning on an unsupported format will lead to an error.
- Parameters: hivePartitioningMode - hivePartitioningMode or null for none
-
getIgnoreUnknownValues
public Boolean getIgnoreUnknownValues()
[Optional] Indicates if BigQuery should allow extra values that are not represented in the
table schema. If true, the extra values are ignored. If false, records with extra columns are
treated as bad records, and if there are too many bad records, an invalid error is returned in
the job result. The default value is false. The sourceFormat property determines what BigQuery
treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column
names
- Returns: value or null for none
-
setIgnoreUnknownValues
public JobConfigurationLoad setIgnoreUnknownValues(Boolean ignoreUnknownValues)
[Optional] Indicates if BigQuery should allow extra values that are not represented in the
table schema. If true, the extra values are ignored. If false, records with extra columns are
treated as bad records, and if there are too many bad records, an invalid error is returned in
the job result. The default value is false. The sourceFormat property determines what BigQuery
treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column
names
- Parameters: ignoreUnknownValues - ignoreUnknownValues or null for none
-
getMaxBadRecords
public Integer getMaxBadRecords()
[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If
the number of bad records exceeds this value, an invalid error is returned in the job result.
This is only valid for CSV and JSON. The default value is 0, which requires that all records
are valid.
- Returns: value or null for none
-
setMaxBadRecords
public JobConfigurationLoad setMaxBadRecords(Integer maxBadRecords)
[Optional] The maximum number of bad records that BigQuery can ignore when running the job. If
the number of bad records exceeds this value, an invalid error is returned in the job result.
This is only valid for CSV and JSON. The default value is 0, which requires that all records
are valid.
- Parameters: maxBadRecords - maxBadRecords or null for none
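Illustrative sketch combining the two error-tolerance settings, assuming `load` is a JobConfigurationLoad for a CSV or JSON source:

    load.setMaxBadRecords(10)           // tolerate up to 10 unparseable records
        .setIgnoreUnknownValues(true);  // skip extra columns/values instead of failing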
-
getNullMarker
public String getNullMarker()
[Optional] Specifies a string that represents a null value in a CSV file. For example, if you
specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default
value is the empty string. If you set this property to a custom value, BigQuery throws an error
if an empty string is present for all data types except for STRING and BYTE. For STRING and
BYTE columns, BigQuery interprets the empty string as an empty value.
- Returns: value or null for none
-
setNullMarker
public JobConfigurationLoad setNullMarker(String nullMarker)
[Optional] Specifies a string that represents a null value in a CSV file. For example, if you
specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default
value is the empty string. If you set this property to a custom value, BigQuery throws an error
if an empty string is present for all data types except for STRING and BYTE. For STRING and
BYTE columns, BigQuery interprets the empty string as an empty value.
- Parameters: nullMarker - nullMarker or null for none
-
getProjectionFields
public List<String> getProjectionFields()
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level
properties. If no properties are specified, BigQuery loads all properties. If any named
property isn't found in the Cloud Datastore backup, an invalid error is returned in the job
result.
- Returns: value or null for none
-
setProjectionFields
public JobConfigurationLoad setProjectionFields(List<String> projectionFields)
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level
properties. If no properties are specified, BigQuery loads all properties. If any named
property isn't found in the Cloud Datastore backup, an invalid error is returned in the job
result.
- Parameters: projectionFields - projectionFields or null for none
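Illustrative sketch; the backup URI and entity property names are placeholders, and `load` is a JobConfigurationLoad as in the class-level example:

    load.setSourceFormat("DATASTORE_BACKUP")
        .setSourceUris(Collections.singletonList(
            "gs://my-bucket/backup/my_kind.export_metadata"))       // placeholder backup file
        .setProjectionFields(Arrays.asList("name", "created_at"));  // hypothetical properties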
-
getQuote
public String getQuote()
[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the
string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
data in its raw, binary state. The default value is a double-quote ('"'). If your data does not
contain quoted sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property to true.
- Returns: value or null for none
-
setQuote
public JobConfigurationLoad setQuote(String quote)
[Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the
string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
data in its raw, binary state. The default value is a double-quote ('"'). If your data does not
contain quoted sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property to true.
- Parameters: quote - quote or null for none
-
getRangePartitioning
public RangePartitioning getRangePartitioning()
[TrustedTester] Range partitioning specification for this table. Only one of timePartitioning
and rangePartitioning should be specified.
- Returns: value or null for none
-
setRangePartitioning
public JobConfigurationLoad setRangePartitioning(RangePartitioning rangePartitioning)
[TrustedTester] Range partitioning specification for this table. Only one of timePartitioning
and rangePartitioning should be specified.
- Parameters: rangePartitioning - rangePartitioning or null for none
-
getSchema
public TableSchema getSchema()
[Optional] The schema for the destination table. The schema can be omitted if the destination
table already exists, or if you're loading data from Google Cloud Datastore.
- Returns: value or null for none
-
setSchema
public JobConfigurationLoad setSchema(TableSchema schema)
[Optional] The schema for the destination table. The schema can be omitted if the destination
table already exists, or if you're loading data from Google Cloud Datastore.
- Parameters: schema - schema or null for none
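Illustrative sketch of supplying an explicit schema (column names and types are hypothetical); assumes TableSchema and TableFieldSchema from the same model package:

    load.setSchema(new TableSchema().setFields(Arrays.asList(
        new TableFieldSchema().setName("name").setType("STRING").setMode("REQUIRED"),
        new TableFieldSchema().setName("age").setType("INTEGER").setMode("NULLABLE"))));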
-
getSchemaInline
public String getSchemaInline()
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For
example, "foo:STRING, bar:INTEGER, baz:FLOAT".
- Returns: value or null for none
-
setSchemaInline
public JobConfigurationLoad setSchemaInline(String schemaInline)
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For
example, "foo:STRING, bar:INTEGER, baz:FLOAT".
- Parameters: schemaInline - schemaInline or null for none
-
getSchemaInlineFormat
public String getSchemaInlineFormat()
[Deprecated] The format of the schemaInline property.
- Returns: value or null for none
-
setSchemaInlineFormat
public JobConfigurationLoad setSchemaInlineFormat(String schemaInlineFormat)
[Deprecated] The format of the schemaInline property.
- Parameters: schemaInlineFormat - schemaInlineFormat or null for none
-
getSchemaUpdateOptions
public List<String> getSchemaUpdateOptions()
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration. Schema update options are
supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is
WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition
decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of
the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the
schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to
nullable.
- Returns: value or null for none
-
setSchemaUpdateOptions
public JobConfigurationLoad setSchemaUpdateOptions(List<String> schemaUpdateOptions)
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration. Schema update options are
supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is
WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition
decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of
the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the
schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to
nullable.
- Parameters: schemaUpdateOptions - schemaUpdateOptions or null for none
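Illustrative sketch of an append job that is allowed to widen the destination schema, assuming `load` is a JobConfigurationLoad as in the class-level example:

    load.setWriteDisposition("WRITE_APPEND")
        .setSchemaUpdateOptions(Arrays.asList(
            "ALLOW_FIELD_ADDITION",       // new NULLABLE columns may be added
            "ALLOW_FIELD_RELAXATION"));   // REQUIRED columns may become NULLABLE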
-
getSkipLeadingRows
public Integer getSkipLeadingRows()
[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the
data. The default value is 0. This property is useful if you have header rows in the file that
should be skipped.
- Returns: value or null for none
-
setSkipLeadingRows
public JobConfigurationLoad setSkipLeadingRows(Integer skipLeadingRows)
[Optional] The number of rows at the top of a CSV file that BigQuery will skip when loading the
data. The default value is 0. This property is useful if you have header rows in the file that
should be skipped.
- Parameters: skipLeadingRows - skipLeadingRows or null for none
-
getSourceFormat
public String getSourceFormat()
[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups,
specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For
Avro, specify "AVRO". For Parquet, specify "PARQUET". For ORC, specify "ORC". The default value
is CSV.
- Returns: value or null for none
-
setSourceFormat
public JobConfigurationLoad setSourceFormat(String sourceFormat)
[Optional] The format of the data files. For CSV files, specify "CSV". For datastore backups,
specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For
Avro, specify "AVRO". For Parquet, specify "PARQUET". For ORC, specify "ORC". The default value
is CSV.
- Parameters: sourceFormat - sourceFormat or null for none
-
getSourceUris
public List<String> getSourceUris()
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud
Storage URIs: Each URI can contain one '*' wildcard character and it must come after the
'bucket' name. Size limits related to load jobs apply to external data sources. For Google
Cloud Bigtable URIs: Exactly one URI can be specified and it must be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one
URI can be specified. Also, the '*' wildcard character is not allowed.
- Returns: value or null for none
-
setSourceUris
public JobConfigurationLoad setSourceUris(List<String> sourceUris)
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud
Storage URIs: Each URI can contain one '*' wildcard character and it must come after the
'bucket' name. Size limits related to load jobs apply to external data sources. For Google
Cloud Bigtable URIs: Exactly one URI can be specified and it must be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one
URI can be specified. Also, the '*' wildcard character is not allowed.
- Parameters: sourceUris - sourceUris or null for none
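Illustrative sketch for Cloud Storage sources (the paths are placeholders; each URI may use at most one '*' wildcard, placed after the bucket name):

    load.setSourceUris(Arrays.asList(
        "gs://my-bucket/logs/2019/*.json",
        "gs://my-bucket/logs/archive/part-*"));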
-
getTimePartitioning
public TimePartitioning getTimePartitioning()
Time-based partitioning specification for the destination table. Only one of timePartitioning
and rangePartitioning should be specified.
- Returns: value or null for none
-
setTimePartitioning
public JobConfigurationLoad setTimePartitioning(TimePartitioning timePartitioning)
Time-based partitioning specification for the destination table. Only one of timePartitioning
and rangePartitioning should be specified.
- Parameters: timePartitioning - timePartitioning or null for none
-
getUseAvroLogicalTypes
public Boolean getUseAvroLogicalTypes()
[Optional] If sourceFormat is set to "AVRO", indicates whether to enable interpreting logical
types into their corresponding types (i.e. TIMESTAMP), instead of only using their raw types
(i.e. INTEGER).
- Returns: value or null for none
-
setUseAvroLogicalTypes
public JobConfigurationLoad setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)
[Optional] If sourceFormat is set to "AVRO", indicates whether to enable interpreting logical
types into their corresponding types (i.e. TIMESTAMP), instead of only using their raw types
(i.e. INTEGER).
- Parameters: useAvroLogicalTypes - useAvroLogicalTypes or null for none
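Illustrative sketch for an Avro load that maps logical types (for example, Avro timestamp-micros to TIMESTAMP rather than INTEGER):

    load.setSourceFormat("AVRO")
        .setUseAvroLogicalTypes(true);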
-
getWriteDisposition
public String getWriteDisposition()
[Optional] Specifies the action that occurs if the destination table already exists. The
following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery
overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data
to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error
is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and
only occurs if BigQuery is able to complete the job successfully. Creation, truncation and
append actions occur as one atomic update upon job completion.
- Returns: value or null for none
-
setWriteDisposition
public JobConfigurationLoad setWriteDisposition(String writeDisposition)
[Optional] Specifies the action that occurs if the destination table already exists. The
following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery
overwrites the table data. WRITE_APPEND: If the table already exists, BigQuery appends the data
to the table. WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error
is returned in the job result. The default value is WRITE_APPEND. Each action is atomic and
only occurs if BigQuery is able to complete the job successfully. Creation, truncation and
append actions occur as one atomic update upon job completion.
- Parameters: writeDisposition - writeDisposition or null for none
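Illustrative sketch of an atomic replace-on-success load, assuming `load` is a JobConfigurationLoad as in the class-level example:

    // Atomically replace the table's contents when the job completes successfully:
    load.setWriteDisposition("WRITE_TRUNCATE");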
-
set
public JobConfigurationLoad set(String fieldName, Object value)
- Overrides: set in class com.google.api.client.json.GenericJson
-
clone
public JobConfigurationLoad clone()
- Overrides: clone in class com.google.api.client.json.GenericJson
Copyright © 2011–2019 Google. All rights reserved.