JobConfigurationLoad (BigQuery API v2-rev20240727-2.0.0)
com.google.api.services.bigquery.model
Class JobConfigurationLoad
- java.lang.Object
  - java.util.AbstractMap<String,Object>
    - com.google.api.client.util.GenericData
      - com.google.api.client.json.GenericJson
        - com.google.api.services.bigquery.model.JobConfigurationLoad
public final class JobConfigurationLoad
extends com.google.api.client.json.GenericJson
JobConfigurationLoad contains the configuration properties for loading data into a destination
table.
This is the Java data model class that specifies how to parse/serialize into the JSON that is
transmitted over HTTP when working with the BigQuery API. For a detailed explanation see:
https://developers.google.com/api-client-library/java/google-http-java-client/json
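For orientation, the fragment below is a minimal sketch (not taken from this page) of how this class is typically used: build the load configuration, wrap it in a JobConfiguration and Job, and submit it through a previously constructed and authorized Bigquery client, named bigquery here. Every project, dataset, table and bucket identifier shown is a placeholder.

  // Minimal sketch: append a CSV file in Cloud Storage to an existing table.
  // "bigquery" is an already-built com.google.api.services.bigquery.Bigquery client.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setSourceUris(java.util.Collections.singletonList("gs://example-bucket/data/events.csv"))
      .setSourceFormat("CSV")
      .setSkipLeadingRows(1)                       // skip the header row
      .setDestinationTable(new TableReference()
          .setProjectId("example-project")         // placeholder identifiers
          .setDatasetId("example_dataset")
          .setTableId("events"))
      .setWriteDisposition("WRITE_APPEND");

  Job job = new Job().setConfiguration(new JobConfiguration().setLoad(load));
  Job submitted = bigquery.jobs().insert("example-project", job).execute();  // throws java.io.IOException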
- Author:
- Google, Inc.
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class com.google.api.client.util.GenericData
com.google.api.client.util.GenericData.Flags
-
Nested classes/interfaces inherited from class java.util.AbstractMap
AbstractMap.SimpleEntry<K,V>, AbstractMap.SimpleImmutableEntry<K,V>
-
Constructor Summary
Constructors
Constructor and Description
JobConfigurationLoad()
-
Method Summary
Modifier and Type
Method and Description
JobConfigurationLoad
clone()
Boolean
getAllowJaggedRows()
Optional.
Boolean
getAllowQuotedNewlines()
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file.
Boolean
getAutodetect()
Optional.
Clustering
getClustering()
Clustering specification for the destination table.
String
getColumnNameCharacterMap()
Optional.
List<ConnectionProperty>
getConnectionProperties()
Optional.
Boolean
getCopyFilesOnly()
Optional.
String
getCreateDisposition()
Optional.
Boolean
getCreateSession()
Optional.
List<String>
getDecimalTargetTypes()
Defines the list of possible SQL data types to which the source decimal values are converted.
EncryptionConfiguration
getDestinationEncryptionConfiguration()
Custom encryption configuration (e.g., Cloud KMS keys)
TableReference
getDestinationTable()
[Required] The destination table to load the data into.
DestinationTableProperties
getDestinationTableProperties()
Optional.
String
getEncoding()
Optional.
String
getFieldDelimiter()
Optional.
String
getFileSetSpecType()
Optional.
HivePartitioningOptions
getHivePartitioningOptions()
Optional.
Boolean
getIgnoreUnknownValues()
Optional.
String
getJsonExtension()
Optional.
Integer
getMaxBadRecords()
Optional.
String
getNullMarker()
Optional.
ParquetOptions
getParquetOptions()
Optional.
Boolean
getPreserveAsciiControlCharacters()
Optional.
List<String>
getProjectionFields()
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup.
String
getQuote()
Optional.
RangePartitioning
getRangePartitioning()
Range partitioning specification for the destination table.
String
getReferenceFileSchemaUri()
Optional.
TableSchema
getSchema()
Optional.
String
getSchemaInline()
[Deprecated] The inline schema.
String
getSchemaInlineFormat()
[Deprecated] The format of the schemaInline property.
List<String>
getSchemaUpdateOptions()
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration.
Integer
getSkipLeadingRows()
Optional.
String
getSourceFormat()
Optional.
List<String>
getSourceUris()
[Required] The fully-qualified URIs that point to your data in Google Cloud.
TimePartitioning
getTimePartitioning()
Time-based partitioning specification for the destination table.
Boolean
getUseAvroLogicalTypes()
Optional.
String
getWriteDisposition()
Optional.
JobConfigurationLoad
set(String fieldName,
Object value)
JobConfigurationLoad
setAllowJaggedRows(Boolean allowJaggedRows)
Optional.
JobConfigurationLoad
setAllowQuotedNewlines(Boolean allowQuotedNewlines)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file.
JobConfigurationLoad
setAutodetect(Boolean autodetect)
Optional.
JobConfigurationLoad
setClustering(Clustering clustering)
Clustering specification for the destination table.
JobConfigurationLoad
setColumnNameCharacterMap(String columnNameCharacterMap)
Optional.
JobConfigurationLoad
setConnectionProperties(List<ConnectionProperty> connectionProperties)
Optional.
JobConfigurationLoad
setCopyFilesOnly(Boolean copyFilesOnly)
Optional.
JobConfigurationLoad
setCreateDisposition(String createDisposition)
Optional.
JobConfigurationLoad
setCreateSession(Boolean createSession)
Optional.
JobConfigurationLoad
setDecimalTargetTypes(List<String> decimalTargetTypes)
Defines the list of possible SQL data types to which the source decimal values are converted.
JobConfigurationLoad
setDestinationEncryptionConfiguration(EncryptionConfiguration destinationEncryptionConfiguration)
Custom encryption configuration (e.g., Cloud KMS keys)
JobConfigurationLoad
setDestinationTable(TableReference destinationTable)
[Required] The destination table to load the data into.
JobConfigurationLoad
setDestinationTableProperties(DestinationTableProperties destinationTableProperties)
Optional.
JobConfigurationLoad
setEncoding(String encoding)
Optional.
JobConfigurationLoad
setFieldDelimiter(String fieldDelimiter)
Optional.
JobConfigurationLoad
setFileSetSpecType(String fileSetSpecType)
Optional.
JobConfigurationLoad
setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)
Optional.
JobConfigurationLoad
setIgnoreUnknownValues(Boolean ignoreUnknownValues)
Optional.
JobConfigurationLoad
setJsonExtension(String jsonExtension)
Optional.
JobConfigurationLoad
setMaxBadRecords(Integer maxBadRecords)
Optional.
JobConfigurationLoad
setNullMarker(String nullMarker)
Optional.
JobConfigurationLoad
setParquetOptions(ParquetOptions parquetOptions)
Optional.
JobConfigurationLoad
setPreserveAsciiControlCharacters(Boolean preserveAsciiControlCharacters)
Optional.
JobConfigurationLoad
setProjectionFields(List<String> projectionFields)
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup.
JobConfigurationLoad
setQuote(String quote)
Optional.
JobConfigurationLoad
setRangePartitioning(RangePartitioning rangePartitioning)
Range partitioning specification for the destination table.
JobConfigurationLoad
setReferenceFileSchemaUri(String referenceFileSchemaUri)
Optional.
JobConfigurationLoad
setSchema(TableSchema schema)
Optional.
JobConfigurationLoad
setSchemaInline(String schemaInline)
[Deprecated] The inline schema.
JobConfigurationLoad
setSchemaInlineFormat(String schemaInlineFormat)
[Deprecated] The format of the schemaInline property.
JobConfigurationLoad
setSchemaUpdateOptions(List<String> schemaUpdateOptions)
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration.
JobConfigurationLoad
setSkipLeadingRows(Integer skipLeadingRows)
Optional.
JobConfigurationLoad
setSourceFormat(String sourceFormat)
Optional.
JobConfigurationLoad
setSourceUris(List<String> sourceUris)
[Required] The fully-qualified URIs that point to your data in Google Cloud.
JobConfigurationLoad
setTimePartitioning(TimePartitioning timePartitioning)
Time-based partitioning specification for the destination table.
JobConfigurationLoad
setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)
Optional.
JobConfigurationLoad
setWriteDisposition(String writeDisposition)
Optional.
-
Methods inherited from class com.google.api.client.json.GenericJson
getFactory, setFactory, toPrettyString, toString
-
Methods inherited from class com.google.api.client.util.GenericData
entrySet, equals, get, getClassInfo, getUnknownKeys, hashCode, put, putAll, remove, setUnknownKeys
-
Methods inherited from class java.util.AbstractMap
clear, containsKey, containsValue, isEmpty, keySet, size, values
-
Methods inherited from class java.lang.Object
finalize, getClass, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface java.util.Map
compute, computeIfAbsent, computeIfPresent, forEach, getOrDefault, merge, putIfAbsent, remove, replace, replace, replaceAll
-
-
Method Detail
-
getAllowJaggedRows
public Boolean getAllowJaggedRows()
Optional. Accept rows that are missing trailing optional columns. The missing values are
treated as nulls. If false, records with missing trailing columns are treated as bad records,
and if there are too many bad records, an invalid error is returned in the job result. The
default value is false. Only applicable to CSV, ignored for other formats.
- Returns:
- value or
null
for none
-
setAllowJaggedRows
public JobConfigurationLoad setAllowJaggedRows(Boolean allowJaggedRows)
Optional. Accept rows that are missing trailing optional columns. The missing values are
treated as nulls. If false, records with missing trailing columns are treated as bad records,
and if there are too many bad records, an invalid error is returned in the job result. The
default value is false. Only applicable to CSV, ignored for other formats.
- Parameters:
allowJaggedRows
- allowJaggedRows or null
for none
-
getAllowQuotedNewlines
public Boolean getAllowQuotedNewlines()
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file. The default value is false.
- Returns:
- value or
null
for none
-
setAllowQuotedNewlines
public JobConfigurationLoad setAllowQuotedNewlines(Boolean allowQuotedNewlines)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a
CSV file. The default value is false.
- Parameters:
allowQuotedNewlines
- allowQuotedNewlines or null
for none
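For instance, a configuration that tolerates ragged rows and embedded newlines inside quoted CSV fields might look like the fragment below; both flags apply only to CSV sources, and the values shown are illustrative.

  // Sketch: relax CSV parsing for files with missing trailing columns and
  // quoted fields that span multiple lines.
  JobConfigurationLoad csvLoad = new JobConfigurationLoad()
      .setSourceFormat("CSV")
      .setAllowJaggedRows(true)       // missing trailing columns become NULLs
      .setAllowQuotedNewlines(true);  // quoted sections may contain newlines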
-
getAutodetect
public Boolean getAutodetect()
Optional. Indicates if we should automatically infer the options and schema for CSV and JSON
sources.
- Returns:
- value or
null
for none
-
setAutodetect
public JobConfigurationLoad setAutodetect(Boolean autodetect)
Optional. Indicates if we should automatically infer the options and schema for CSV and JSON
sources.
- Parameters:
autodetect
- autodetect or null
for none
-
getClustering
public Clustering getClustering()
Clustering specification for the destination table.
- Returns:
- value or
null
for none
-
setClustering
public JobConfigurationLoad setClustering(Clustering clustering)
Clustering specification for the destination table.
- Parameters:
clustering
- clustering or null
for none
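A fragment such as the following clusters the destination table using the Clustering model class from this package; the column names are hypothetical.

  // Sketch: cluster the loaded table by customer_id, then event_date.
  Clustering clustering = new Clustering()
      .setFields(java.util.Arrays.asList("customer_id", "event_date"));
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setClustering(clustering);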
-
getColumnNameCharacterMap
public String getColumnNameCharacterMap()
Optional. Character map supported for column names in CSV/Parquet loads. Defaults to STRICT and
can be overridden by Project Config Service. Using this option with unsupported load formats
will result in an error.
- Returns:
- value or
null
for none
-
setColumnNameCharacterMap
public JobConfigurationLoad setColumnNameCharacterMap(String columnNameCharacterMap)
Optional. Character map supported for column names in CSV/Parquet loads. Defaults to STRICT and
can be overridden by Project Config Service. Using this option with unsupported load formats
will result in an error.
- Parameters:
columnNameCharacterMap
- columnNameCharacterMap or null
for none
-
getConnectionProperties
public List<ConnectionProperty> getConnectionProperties()
Optional. Connection properties which can modify the load job behavior. Currently, only the
'session_id' connection property is supported, and is used to resolve _SESSION appearing as the
dataset id.
- Returns:
- value or
null
for none
-
setConnectionProperties
public JobConfigurationLoad setConnectionProperties(List<ConnectionProperty> connectionProperties)
Optional. Connection properties which can modify the load job behavior. Currently, only the
'session_id' connection property is supported, and is used to resolve _SESSION appearing as the
dataset id.
- Parameters:
connectionProperties
- connectionProperties or null
for none
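As a sketch, a session created earlier (for example via createSession) can be reused by passing its identifier through the ConnectionProperty model class; the session id shown is a placeholder for the value returned in SessionInfo.

  // Sketch: reuse an existing session so _SESSION resolves to its temporary dataset.
  ConnectionProperty sessionProperty = new ConnectionProperty()
      .setKey("session_id")
      .setValue("example-session-id");  // placeholder; use the id from SessionInfo
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setConnectionProperties(java.util.Collections.singletonList(sessionProperty));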
-
getCopyFilesOnly
public Boolean getCopyFilesOnly()
Optional. [Experimental] Configures the load job to copy files directly to the destination
BigLake managed table, bypassing file content reading and rewriting. Copying files only is
supported when all the following are true: * `source_uris` are located in the same Cloud
Storage location as the destination table's `storage_uri` location. * `source_format` is
`PARQUET`. * `destination_table` is an existing BigLake managed table. The table's schema does
not have flexible column names. The table's columns do not have type parameters other than
precision and scale. * No options other than the above are specified.
- Returns:
- value or
null
for none
-
setCopyFilesOnly
public JobConfigurationLoad setCopyFilesOnly(Boolean copyFilesOnly)
Optional. [Experimental] Configures the load job to copy files directly to the destination
BigLake managed table, bypassing file content reading and rewriting. Copying files only is
supported when all the following are true: * `source_uris` are located in the same Cloud
Storage location as the destination table's `storage_uri` location. * `source_format` is
`PARQUET`. * `destination_table` is an existing BigLake managed table. The table's schema does
not have flexible column names. The table's columns do not have type parameters other than
precision and scale. * No options other than the above are specified.
- Parameters:
copyFilesOnly
- copyFilesOnly or null
for none
-
getCreateDisposition
public String getCreateDisposition()
Optional. Specifies whether the job is allowed to create new tables. The following values are
supported: * CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. *
CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in
the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
- Returns:
- value or
null
for none
-
setCreateDisposition
public JobConfigurationLoad setCreateDisposition(String createDisposition)
Optional. Specifies whether the job is allowed to create new tables. The following values are
supported: * CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. *
CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in
the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
- Parameters:
createDisposition
- createDisposition or null
for none
-
getCreateSession
public Boolean getCreateSession()
Optional. If this property is true, the job creates a new session using a randomly generated
session_id. To continue using a created session with subsequent queries, pass the existing
session identifier as a `ConnectionProperty` value. The session identifier is returned as part
of the `SessionInfo` message within the query statistics. The new session's location will be
set to `Job.JobReference.location` if it is present, otherwise it's set to the default location
based on existing routing logic.
- Returns:
- value or
null
for none
-
setCreateSession
public JobConfigurationLoad setCreateSession(Boolean createSession)
Optional. If this property is true, the job creates a new session using a randomly generated
session_id. To continue using a created session with subsequent queries, pass the existing
session identifier as a `ConnectionProperty` value. The session identifier is returned as part
of the `SessionInfo` message within the query statistics. The new session's location will be
set to `Job.JobReference.location` if it is present, otherwise it's set to the default location
based on existing routing logic.
- Parameters:
createSession
- createSession or null
for none
-
getDecimalTargetTypes
public List<String> getDecimalTargetTypes()
Defines the list of possible SQL data types to which the source decimal values are converted.
This list and the precision and the scale parameters of the decimal field determine the target
type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the
specified list and if it supports the precision and the scale. STRING supports all precision
and scale values. If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value exceeds the
supported range when reading the data, an error will be thrown. Example: Suppose the value of
this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: * (38,9) -> NUMERIC; * (39,9)
-> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot
hold 10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error if value
exceeds supported range). This field cannot contain duplicate types. The order of the types in
this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC",
"BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC",
"STRING"] for ORC and ["NUMERIC"] for the other file formats.
- Returns:
- value or
null
for none
-
setDecimalTargetTypes
public JobConfigurationLoad setDecimalTargetTypes(List<String> decimalTargetTypes)
Defines the list of possible SQL data types to which the source decimal values are converted.
This list and the precision and the scale parameters of the decimal field determine the target
type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the
specified list and if it supports the precision and the scale. STRING supports all precision
and scale values. If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value exceeds the
supported range when reading the data, an error will be thrown. Example: Suppose the value of
this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: * (38,9) -> NUMERIC; * (39,9)
-> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot
hold 10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error if value
exceeds supported range). This field cannot contain duplicate types. The order of the types in
this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC",
"BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC",
"STRING"] for ORC and ["NUMERIC"] for the other file formats.
- Parameters:
decimalTargetTypes
- decimalTargetTypes or null
for none
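A typical fragment, using only values documented above, lists all three types so that the narrowest type that fits each column's precision and scale is chosen:

  // Sketch: prefer NUMERIC, fall back to BIGNUMERIC, and use STRING as a last resort.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setDecimalTargetTypes(java.util.Arrays.asList("NUMERIC", "BIGNUMERIC", "STRING"));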
-
getDestinationEncryptionConfiguration
public EncryptionConfiguration getDestinationEncryptionConfiguration()
Custom encryption configuration (e.g., Cloud KMS keys)
- Returns:
- value or
null
for none
-
setDestinationEncryptionConfiguration
public JobConfigurationLoad setDestinationEncryptionConfiguration(EncryptionConfiguration destinationEncryptionConfiguration)
Custom encryption configuration (e.g., Cloud KMS keys)
- Parameters:
destinationEncryptionConfiguration
- destinationEncryptionConfiguration or null
for none
-
getDestinationTable
public TableReference getDestinationTable()
[Required] The destination table to load the data into.
- Returns:
- value or
null
for none
-
setDestinationTable
public JobConfigurationLoad setDestinationTable(TableReference destinationTable)
[Required] The destination table to load the data into.
- Parameters:
destinationTable
- destinationTable or null
for none
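A fragment setting both the required destination table and a customer-managed encryption key might look as follows; all identifiers, including the Cloud KMS key resource name, are placeholders.

  // Sketch: load into example-project:example_dataset.orders, encrypted with a CMEK.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setDestinationTable(new TableReference()
          .setProjectId("example-project")
          .setDatasetId("example_dataset")
          .setTableId("orders"))
      .setDestinationEncryptionConfiguration(new EncryptionConfiguration()
          .setKmsKeyName(
              "projects/example-project/locations/us/keyRings/example-ring/cryptoKeys/example-key"));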
-
getDestinationTableProperties
public DestinationTableProperties getDestinationTableProperties()
Optional. [Experimental] Properties with which to create the destination table if it is new.
- Returns:
- value or
null
for none
-
setDestinationTableProperties
public JobConfigurationLoad setDestinationTableProperties(DestinationTableProperties destinationTableProperties)
Optional. [Experimental] Properties with which to create the destination table if it is new.
- Parameters:
destinationTableProperties
- destinationTableProperties or null
for none
-
getEncoding
public String getEncoding()
Optional. The character encoding of the data. The supported values are UTF-8, ISO-8859-1,
UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8. BigQuery decodes the
data after the raw, binary data has been split using the values of the `quote` and
`fieldDelimiter` properties. If you don't specify an encoding, or if you specify a UTF-8
encoding when the CSV file is not UTF-8 encoded, BigQuery attempts to convert the data to
UTF-8. Generally, your data loads successfully, but it may not match byte-for-byte what you
expect. To avoid this, specify the correct encoding by using the `--encoding` flag. If BigQuery
can't convert a character other than the ASCII `0` character, BigQuery converts the character
to the standard Unicode replacement character: �.
- Returns:
- value or
null
for none
-
setEncoding
public JobConfigurationLoad setEncoding(String encoding)
Optional. The character encoding of the data. The supported values are UTF-8, ISO-8859-1,
UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8. BigQuery decodes the
data after the raw, binary data has been split using the values of the `quote` and
`fieldDelimiter` properties. If you don't specify an encoding, or if you specify a UTF-8
encoding when the CSV file is not UTF-8 encoded, BigQuery attempts to convert the data to
UTF-8. Generally, your data loads successfully, but it may not match byte-for-byte what you
expect. To avoid this, specify the correct encoding by using the `--encoding` flag. If BigQuery
can't convert a character other than the ASCII `0` character, BigQuery converts the character
to the standard Unicode replacement character: �.
- Parameters:
encoding
- encoding or null
for none
-
getFieldDelimiter
public String getFieldDelimiter()
Optional. The separator character for fields in a CSV file. The separator is interpreted as a
single byte. For files encoded in ISO-8859-1, any single character can be used as a separator.
For files encoded in UTF-8, characters represented in decimal range 1-127 (U+0001-U+007F) can
be used without any modification. UTF-8 characters encoded with multiple bytes (i.e. U+0080 and
above) will have only the first byte used for separating fields. The remaining bytes will be
treated as a part of the field. BigQuery also supports the escape sequence "\t" (U+0009) to
specify a tab separator. The default value is comma (",", U+002C).
- Returns:
- value or
null
for none
-
setFieldDelimiter
public JobConfigurationLoad setFieldDelimiter(String fieldDelimiter)
Optional. The separator character for fields in a CSV file. The separator is interpreted as a
single byte. For files encoded in ISO-8859-1, any single character can be used as a separator.
For files encoded in UTF-8, characters represented in decimal range 1-127 (U+0001-U+007F) can
be used without any modification. UTF-8 characters encoded with multiple bytes (i.e. U+0080 and
above) will have only the first byte used for separating fields. The remaining bytes will be
treated as a part of the field. BigQuery also supports the escape sequence "\t" (U+0009) to
specify a tab separator. The default value is comma (",", U+002C).
- Parameters:
fieldDelimiter
- fieldDelimiter or null
for none
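For example, a tab-delimited export produced in ISO-8859-1 could be described with the fragment below; the escape sequence "\t" selects the tab separator mentioned above.

  // Sketch: a tab-delimited file encoded in ISO-8859-1.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setSourceFormat("CSV")
      .setEncoding("ISO-8859-1")
      .setFieldDelimiter("\t");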
-
getFileSetSpecType
public String getFileSetSpecType()
Optional. Specifies how source URIs are interpreted for constructing the file set to load. By
default, source URIs are expanded against the underlying storage. You can also specify manifest
files to control how the file set is constructed. This option is only applicable to object
storage systems.
- Returns:
- value or
null
for none
-
setFileSetSpecType
public JobConfigurationLoad setFileSetSpecType(String fileSetSpecType)
Optional. Specifies how source URIs are interpreted for constructing the file set to load. By
default, source URIs are expanded against the underlying storage. You can also specify manifest
files to control how the file set is constructed. This option is only applicable to object
storage systems.
- Parameters:
fileSetSpecType
- fileSetSpecType or null
for none
-
getHivePartitioningOptions
public HivePartitioningOptions getHivePartitioningOptions()
Optional. When set, configures hive partitioning support. Not all storage formats support hive
partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as
will providing an invalid specification.
- Returns:
- value or
null
for none
-
setHivePartitioningOptions
public JobConfigurationLoad setHivePartitioningOptions(HivePartitioningOptions hivePartitioningOptions)
Optional. When set, configures hive partitioning support. Not all storage formats support hive
partitioning -- requesting hive partitioning on an unsupported format will lead to an error, as
will providing an invalid specification.
- Parameters:
hivePartitioningOptions
- hivePartitioningOptions or null
for none
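A sketch of enabling hive partitioning for Parquet data laid out under a common prefix follows; the mode value, URI prefix, and format are illustrative and use the HivePartitioningOptions model class from this package.

  // Sketch: infer partition keys from a Hive-style layout under the given prefix.
  HivePartitioningOptions hiveOptions = new HivePartitioningOptions()
      .setMode("AUTO")
      .setSourceUriPrefix("gs://example-bucket/tables/events/");  // placeholder prefix
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setSourceFormat("PARQUET")
      .setHivePartitioningOptions(hiveOptions);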
-
getIgnoreUnknownValues
public Boolean getIgnoreUnknownValues()
Optional. Indicates if BigQuery should allow extra values that are not represented in the table
schema. If true, the extra values are ignored. If false, records with extra columns are treated
as bad records, and if there are too many bad records, an invalid error is returned in the job
result. The default value is false. The sourceFormat property determines what BigQuery treats
as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names
in the table schema Avro, Parquet, ORC: Fields in the file schema that don't exist in the table
schema.
- Returns:
- value or
null
for none
-
setIgnoreUnknownValues
public JobConfigurationLoad setIgnoreUnknownValues(Boolean ignoreUnknownValues)
Optional. Indicates if BigQuery should allow extra values that are not represented in the table
schema. If true, the extra values are ignored. If false, records with extra columns are treated
as bad records, and if there are too many bad records, an invalid error is returned in the job
result. The default value is false. The sourceFormat property determines what BigQuery treats
as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names
in the table schema Avro, Parquet, ORC: Fields in the file schema that don't exist in the table
schema.
- Parameters:
ignoreUnknownValues
- ignoreUnknownValues or null
for none
-
getJsonExtension
public String getJsonExtension()
Optional. Load option to be used together with source_format newline-delimited JSON to indicate
that a variant of JSON is being loaded. To load newline-delimited GeoJSON, specify GEOJSON (and
source_format must be set to NEWLINE_DELIMITED_JSON).
- Returns:
- value or
null
for none
-
setJsonExtension
public JobConfigurationLoad setJsonExtension(String jsonExtension)
Optional. Load option to be used together with source_format newline-delimited JSON to indicate
that a variant of JSON is being loaded. To load newline-delimited GeoJSON, specify GEOJSON (and
source_format must be set to NEWLINE_DELIMITED_JSON).
- Parameters:
jsonExtension
- jsonExtension or null
for none
-
getMaxBadRecords
public Integer getMaxBadRecords()
Optional. The maximum number of bad records that BigQuery can ignore when running the job. If
the number of bad records exceeds this value, an invalid error is returned in the job result.
The default value is 0, which requires that all records are valid. This is only supported for
CSV and NEWLINE_DELIMITED_JSON file formats.
- Returns:
- value or
null
for none
-
setMaxBadRecords
public JobConfigurationLoad setMaxBadRecords(Integer maxBadRecords)
Optional. The maximum number of bad records that BigQuery can ignore when running the job. If
the number of bad records exceeds this value, an invalid error is returned in the job result.
The default value is 0, which requires that all records are valid. This is only supported for
CSV and NEWLINE_DELIMITED_JSON file formats.
- Parameters:
maxBadRecords
- maxBadRecords or null
for none
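Combined with ignoreUnknownValues, this gives a lenient CSV load; the threshold below is illustrative.

  // Sketch: tolerate up to 100 bad records and ignore columns not in the schema.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setSourceFormat("CSV")
      .setIgnoreUnknownValues(true)
      .setMaxBadRecords(100);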
-
getNullMarker
public String getNullMarker()
Optional. Specifies a string that represents a null value in a CSV file. For example, if you
specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default
value is the empty string. If you set this property to a custom value, BigQuery throws an error
if an empty string is present for all data types except for STRING and BYTE. For STRING and
BYTE columns, BigQuery interprets the empty string as an empty value.
- Returns:
- value or
null
for none
-
setNullMarker
public JobConfigurationLoad setNullMarker(String nullMarker)
Optional. Specifies a string that represents a null value in a CSV file. For example, if you
specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default
value is the empty string. If you set this property to a custom value, BigQuery throws an error
if an empty string is present for all data types except for STRING and BYTE. For STRING and
BYTE columns, BigQuery interprets the empty string as an empty value.
- Parameters:
nullMarker
- nullMarker or null
for none
-
getParquetOptions
public ParquetOptions getParquetOptions()
Optional. Additional properties to set if sourceFormat is set to PARQUET.
- Returns:
- value or
null
for none
-
setParquetOptions
public JobConfigurationLoad setParquetOptions(ParquetOptions parquetOptions)
Optional. Additional properties to set if sourceFormat is set to PARQUET.
- Parameters:
parquetOptions
- parquetOptions or null
for none
-
getPreserveAsciiControlCharacters
public Boolean getPreserveAsciiControlCharacters()
Optional. When sourceFormat is set to "CSV", this indicates whether the embedded ASCII control
characters (the first 32 characters in the ASCII-table, from '\x00' to '\x1F') are preserved.
- Returns:
- value or
null
for none
-
setPreserveAsciiControlCharacters
public JobConfigurationLoad setPreserveAsciiControlCharacters(Boolean preserveAsciiControlCharacters)
Optional. When sourceFormat is set to "CSV", this indicates whether the embedded ASCII control
characters (the first 32 characters in the ASCII-table, from '\x00' to '\x1F') are preserved.
- Parameters:
preserveAsciiControlCharacters
- preserveAsciiControlCharacters or null
for none
-
getProjectionFields
public List<String> getProjectionFields()
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level
properties. If no properties are specified, BigQuery loads all properties. If any named
property isn't found in the Cloud Datastore backup, an invalid error is returned in the job
result.
- Returns:
- value or
null
for none
-
setProjectionFields
public JobConfigurationLoad setProjectionFields(List<String> projectionFields)
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into
BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level
properties. If no properties are specified, BigQuery loads all properties. If any named
property isn't found in the Cloud Datastore backup, an invalid error is returned in the job
result.
- Parameters:
projectionFields
- projectionFields or null
for none
-
getQuote
public String getQuote()
Optional. The value that is used to quote data sections in a CSV file. BigQuery converts the
string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
data in its raw, binary state. The default value is a double-quote ('"'). If your data does not
contain quoted sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property to true. To
include the specific quote character within a quoted value, precede it with an additional
matching quote character. For example, if you want to escape the default character ' " ', use '
"" '. @default "
- Returns:
- value or
null
for none
-
setQuote
public JobConfigurationLoad setQuote(String quote)
Optional. The value that is used to quote data sections in a CSV file. BigQuery converts the
string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
data in its raw, binary state. The default value is a double-quote ('"'). If your data does not
contain quoted sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property to true. To
include the specific quote character within a quoted value, precede it with an additional
matching quote character. For example, if you want to escape the default character ' " ', use '
"" '. @default "
- Parameters:
quote
- quote or null
for none
-
getRangePartitioning
public RangePartitioning getRangePartitioning()
Range partitioning specification for the destination table. Only one of timePartitioning and
rangePartitioning should be specified.
- Returns:
- value or
null
for none
-
setRangePartitioning
public JobConfigurationLoad setRangePartitioning(RangePartitioning rangePartitioning)
Range partitioning specification for the destination table. Only one of timePartitioning and
rangePartitioning should be specified.
- Parameters:
rangePartitioning
- rangePartitioning or null
for none
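A fragment using the RangePartitioning model class and its nested Range might look like the following; the field name and bounds are illustrative, and timePartitioning should be left unset when this is used.

  // Sketch: integer-range partition the destination table on customer_id.
  RangePartitioning rangePartitioning = new RangePartitioning()
      .setField("customer_id")
      .setRange(new RangePartitioning.Range()
          .setStart(0L)
          .setEnd(1000000L)
          .setInterval(10000L));
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setRangePartitioning(rangePartitioning);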
-
getReferenceFileSchemaUri
public String getReferenceFileSchemaUri()
Optional. The user can provide a reference file with the reader schema. This file is only
loaded if it is part of source URIs, but is not loaded otherwise. It is enabled for the
following formats: AVRO, PARQUET, ORC.
- Returns:
- value or
null
for none
-
setReferenceFileSchemaUri
public JobConfigurationLoad setReferenceFileSchemaUri(String referenceFileSchemaUri)
Optional. The user can provide a reference file with the reader schema. This file is only
loaded if it is part of source URIs, but is not loaded otherwise. It is enabled for the
following formats: AVRO, PARQUET, ORC.
- Parameters:
referenceFileSchemaUri
- referenceFileSchemaUri or null
for none
-
getSchema
public TableSchema getSchema()
Optional. The schema for the destination table. The schema can be omitted if the destination
table already exists, or if you're loading data from Google Cloud Datastore.
- Returns:
- value or
null
for none
-
setSchema
public JobConfigurationLoad setSchema(TableSchema schema)
Optional. The schema for the destination table. The schema can be omitted if the destination
table already exists, or if you're loading data from Google Cloud Datastore.
- Parameters:
schema
- schema or null
for none
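An explicit schema can be supplied with the TableSchema and TableFieldSchema model classes, as in the sketch below; the column names, types, and modes are hypothetical.

  // Sketch: a two-column schema supplied with the load job.
  TableSchema schema = new TableSchema().setFields(java.util.Arrays.asList(
      new TableFieldSchema().setName("name").setType("STRING").setMode("REQUIRED"),
      new TableFieldSchema().setName("score").setType("FLOAT").setMode("NULLABLE")));
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setSchema(schema);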
-
getSchemaInline
public String getSchemaInline()
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For
example, "foo:STRING, bar:INTEGER, baz:FLOAT".
- Returns:
- value or
null
for none
-
setSchemaInline
public JobConfigurationLoad setSchemaInline(String schemaInline)
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,Field2:Type2]*". For
example, "foo:STRING, bar:INTEGER, baz:FLOAT".
- Parameters:
schemaInline
- schemaInline or null
for none
-
getSchemaInlineFormat
public String getSchemaInlineFormat()
[Deprecated] The format of the schemaInline property.
- Returns:
- value or
null
for none
-
setSchemaInlineFormat
public JobConfigurationLoad setSchemaInlineFormat(String schemaInlineFormat)
[Deprecated] The format of the schemaInline property.
- Parameters:
schemaInlineFormat
- schemaInlineFormat or null
for none
-
getSchemaUpdateOptions
public List<String> getSchemaUpdateOptions()
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration. Schema update options are
supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is
WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition
decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of
the following values are specified: * ALLOW_FIELD_ADDITION: allow adding a nullable field to
the schema. * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to
nullable.
- Returns:
- value or
null
for none
-
setSchemaUpdateOptions
public JobConfigurationLoad setSchemaUpdateOptions(List<String> schemaUpdateOptions)
Allows the schema of the destination table to be updated as a side effect of the load job if a
schema is autodetected or supplied in the job configuration. Schema update options are
supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is
WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition
decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of
the following values are specified: * ALLOW_FIELD_ADDITION: allow adding a nullable field to
the schema. * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to
nullable.
- Parameters:
schemaUpdateOptions
- schemaUpdateOptions or null
for none
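For example, an appending load that may both add new columns and relax existing REQUIRED columns would be configured as in the fragment below, using only values documented above.

  // Sketch: let an appending load add nullable columns and relax REQUIRED ones.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setWriteDisposition("WRITE_APPEND")
      .setSchemaUpdateOptions(java.util.Arrays.asList(
          "ALLOW_FIELD_ADDITION", "ALLOW_FIELD_RELAXATION"));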
-
getSkipLeadingRows
public Integer getSkipLeadingRows()
Optional. The number of rows at the top of a CSV file that BigQuery will skip when loading the
data. The default value is 0. This property is useful if you have header rows in the file that
should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows
unspecified - Autodetect tries to detect headers in the first row. If they are not detected,
the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows
is 0 - Instructs autodetect that there are no headers and data should be read starting from the
first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in
row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract
column names for the detected schema.
- Returns:
- value or
null
for none
-
setSkipLeadingRows
public JobConfigurationLoad setSkipLeadingRows(Integer skipLeadingRows)
Optional. The number of rows at the top of a CSV file that BigQuery will skip when loading the
data. The default value is 0. This property is useful if you have header rows in the file that
should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows
unspecified - Autodetect tries to detect headers in the first row. If they are not detected,
the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows
is 0 - Instructs autodetect that there are no headers and data should be read starting from the
first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in
row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract
column names for the detected schema.
- Parameters:
skipLeadingRows
- skipLeadingRows or null
for none
-
getSourceFormat
public String getSourceFormat()
Optional. The format of the data files. For CSV files, specify "CSV". For datastore backups,
specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For
Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". The default value
is CSV.
- Returns:
- value or
null
for none
-
setSourceFormat
public JobConfigurationLoad setSourceFormat(String sourceFormat)
Optional. The format of the data files. For CSV files, specify "CSV". For datastore backups,
specify "DATASTORE_BACKUP". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For
Avro, specify "AVRO". For parquet, specify "PARQUET". For orc, specify "ORC". The default value
is CSV.
- Parameters:
sourceFormat
- sourceFormat or null
for none
-
getSourceUris
public List<String> getSourceUris()
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud
Storage URIs: Each URI can contain one '*' wildcard character and it must come after the
'bucket' name. Size limits related to load jobs apply to external data sources. For Google
Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one
URI can be specified. Also, the '*' wildcard character is not allowed.
- Returns:
- value or
null
for none
-
setSourceUris
public JobConfigurationLoad setSourceUris(List<String> sourceUris)
[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud
Storage URIs: Each URI can contain one '*' wildcard character and it must come after the
'bucket' name. Size limits related to load jobs apply to external data sources. For Google
Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid
HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one
URI can be specified. Also, the '*' wildcard character is not allowed.
- Parameters:
sourceUris
- sourceUris or null
for none
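A sketch of a wildcarded Cloud Storage source, combined with the sourceFormat and skipLeadingRows properties described on this page, follows; the bucket and path are placeholders, and the '*' appears after the bucket name as required.

  // Sketch: load every matching CSV shard from a bucket, skipping one header row.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setSourceUris(java.util.Collections.singletonList("gs://example-bucket/exports/part-*.csv"))
      .setSourceFormat("CSV")
      .setSkipLeadingRows(1);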
-
getTimePartitioning
public TimePartitioning getTimePartitioning()
Time-based partitioning specification for the destination table. Only one of timePartitioning
and rangePartitioning should be specified.
- Returns:
- value or
null
for none
-
setTimePartitioning
public JobConfigurationLoad setTimePartitioning(TimePartitioning timePartitioning)
Time-based partitioning specification for the destination table. Only one of timePartitioning
and rangePartitioning should be specified.
- Parameters:
timePartitioning
- timePartitioning or null
for none
-
getUseAvroLogicalTypes
public Boolean getUseAvroLogicalTypes()
Optional. If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the
corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for
example, INTEGER).
- Returns:
- value or
null
for none
-
setUseAvroLogicalTypes
public JobConfigurationLoad setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)
Optional. If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the
corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for
example, INTEGER).
- Parameters:
useAvroLogicalTypes
- useAvroLogicalTypes or null
for none
-
getWriteDisposition
public String getWriteDisposition()
Optional. Specifies the action that occurs if the destination table already exists. The
following values are supported: * WRITE_TRUNCATE: If the table already exists, BigQuery
overwrites the data, removes the constraints and uses the schema from the load job. *
WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. *
WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in
the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if
BigQuery is able to complete the job successfully. Creation, truncation and append actions
occur as one atomic update upon job completion.
- Returns:
- value or
null
for none
-
setWriteDisposition
public JobConfigurationLoad setWriteDisposition(String writeDisposition)
Optional. Specifies the action that occurs if the destination table already exists. The
following values are supported: * WRITE_TRUNCATE: If the table already exists, BigQuery
overwrites the data, removes the constraints and uses the schema from the load job. *
WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. *
WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in
the job result. The default value is WRITE_APPEND. Each action is atomic and only occurs if
BigQuery is able to complete the job successfully. Creation, truncation and append actions
occur as one atomic update upon job completion.
- Parameters:
writeDisposition
- writeDisposition or null
for none
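Paired with createDisposition, this controls table creation and overwrite behavior; a common "replace the table contents" sketch using only values documented on this page is:

  // Sketch: create the table if needed and replace any existing contents atomically.
  JobConfigurationLoad load = new JobConfigurationLoad()
      .setCreateDisposition("CREATE_IF_NEEDED")
      .setWriteDisposition("WRITE_TRUNCATE");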
-
set
public JobConfigurationLoad set(String fieldName,
Object value)
- Overrides:
set
in class com.google.api.client.json.GenericJson
-
clone
public JobConfigurationLoad clone()
- Overrides:
clone
in class com.google.api.client.json.GenericJson
Copyright © 2011–2024 Google. All rights reserved.