// com.pulumi.gcp.bigquery.kotlin.outputs.JobLoad.kt
@file:Suppress("NAME_SHADOWING", "DEPRECATION")
package com.pulumi.gcp.bigquery.kotlin.outputs
import kotlin.Boolean
import kotlin.Int
import kotlin.String
import kotlin.Suppress
import kotlin.collections.List
/**
*
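*
* A minimal construction sketch (illustrative values; the `JobLoadDestinationTable`
* field names shown are assumptions about that nested output type, not verified here):
* ```kotlin
* val load = JobLoad(
*     destinationTable = JobLoadDestinationTable(
*         projectId = "my-project",          // assumed field name
*         datasetId = "my_dataset",          // assumed field name
*         tableId = "my_table",              // assumed field name
*     ),
*     sourceUris = listOf("gs://my-bucket/data/*.csv"),
*     sourceFormat = "CSV",
*     fieldDelimiter = "\t",                 // tab-separated input
*     skipLeadingRows = 1,                   // skip the header row
*     writeDisposition = "WRITE_APPEND",
* )
* ```
*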
* @property allowJaggedRows Accept rows that are missing trailing optional columns. The missing values are treated as nulls.
* If false, records with missing trailing columns are treated as bad records, and if there are too many bad records,
* an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
* @property allowQuotedNewlines Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
* The default value is false.
* @property autodetect Indicates if we should automatically infer the options and schema for CSV and JSON sources.
* @property createDisposition Specifies whether the job is allowed to create new tables. The following values are supported:
* CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
* CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result.
* Creation, truncation and append actions occur as one atomic update upon job completion.
* Default value is `CREATE_IF_NEEDED`.
* Possible values are: `CREATE_IF_NEEDED`, `CREATE_NEVER`.
* @property destinationEncryptionConfiguration Custom encryption configuration (e.g., Cloud KMS keys)
* Structure is documented below.
* @property destinationTable The destination table to load the data into.
* Structure is documented below.
* @property encoding The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
* The default value is UTF-8. BigQuery decodes the data after the raw, binary data
* has been split using the values of the quote and fieldDelimiter properties.
* @property fieldDelimiter The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
* To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts
* the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
* data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator.
* The default value is a comma (',').
* @property ignoreUnknownValues Indicates if BigQuery should allow extra values that are not represented in the table schema.
* If true, the extra values are ignored. If false, records with extra columns are treated as bad records,
* and if there are too many bad records, an invalid error is returned in the job result.
* The default value is false. The sourceFormat property determines what BigQuery treats as an extra value:
* CSV: Trailing columns
* JSON: Named values that don't match any column names
* @property jsonExtension If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
* For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON, set this to GEOJSON
* for newline-delimited GeoJSON.
* @property maxBadRecords The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value,
* an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.
* @property nullMarker Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this
* property to a custom value, BigQuery throws an error if an
* empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as
* an empty value.
* @property parquetOptions Parquet options for load jobs and external tables.
* Structure is documented below.
* @property projectionFields If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
* Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties.
* If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
* @property quote The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding,
* and then uses the first byte of the encoded string to split the data in its raw, binary state.
* The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string.
* If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
* @property schemaUpdateOptions Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or
* supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND;
* when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
* For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified:
* ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
* ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
* @property skipLeadingRows The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
* The default value is 0. This property is useful if you have header rows in the file that should be skipped.
* When autodetect is on, the behavior is the following:
* skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected,
* the row is read as data. Otherwise data is read starting from the second row.
* skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row.
* skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected,
* row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
* @property sourceFormat The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP".
* For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET".
* For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE".
* The default value is CSV.
* @property sourceUris The fully-qualified URIs that point to your data in Google Cloud.
* For Google Cloud Storage URIs: Each URI can contain one '\*' wildcard character
* and it must come after the 'bucket' name. Size limits related to load jobs apply
* to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be
* specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
* For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '\*' wildcard character is not allowed.
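* For example (illustrative paths; the Bigtable URI shape is stated per the BigQuery docs, not verified here):
* ```
* gs://my-bucket/logs/2024-*.csv
* https://googleapis.com/bigtable/projects/my-project/instances/my-instance/tables/my-table
* ```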
* @property timePartitioning Time-based partitioning specification for the destination table.
* Structure is documented below.
* @property writeDisposition Specifies the action that occurs if the destination table already exists. The following values are supported:
* WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the load.
* WRITE_APPEND: If the table already exists, BigQuery appends the data to the table.
* WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result.
* Each action is atomic and only occurs if BigQuery is able to complete the job successfully.
* Creation, truncation and append actions occur as one atomic update upon job completion.
* Default value is `WRITE_EMPTY`.
* Possible values are: `WRITE_TRUNCATE`, `WRITE_APPEND`, `WRITE_EMPTY`.
*/
public data class JobLoad(
public val allowJaggedRows: Boolean? = null,
public val allowQuotedNewlines: Boolean? = null,
public val autodetect: Boolean? = null,
public val createDisposition: String? = null,
public val destinationEncryptionConfiguration: JobLoadDestinationEncryptionConfiguration? = null,
public val destinationTable: JobLoadDestinationTable,
public val encoding: String? = null,
public val fieldDelimiter: String? = null,
public val ignoreUnknownValues: Boolean? = null,
public val jsonExtension: String? = null,
public val maxBadRecords: Int? = null,
public val nullMarker: String? = null,
public val parquetOptions: JobLoadParquetOptions? = null,
public val projectionFields: List<String>? = null,
public val quote: String? = null,
public val schemaUpdateOptions: List<String>? = null,
public val skipLeadingRows: Int? = null,
public val sourceFormat: String? = null,
public val sourceUris: List<String>,
public val timePartitioning: JobLoadTimePartitioning? = null,
public val writeDisposition: String? = null,
) {
public companion object {
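/**
 * Converts the generated Java output type to this Kotlin data class.
 * Optional-valued Java getters become nullable Kotlin properties via
 * `map(...).orElse(null)`; nested output types are converted through
 * their own `toKotlin` companions.
 */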
public fun toKotlin(javaType: com.pulumi.gcp.bigquery.outputs.JobLoad): JobLoad = JobLoad(
allowJaggedRows = javaType.allowJaggedRows().map({ args0 -> args0 }).orElse(null),
allowQuotedNewlines = javaType.allowQuotedNewlines().map({ args0 -> args0 }).orElse(null),
autodetect = javaType.autodetect().map({ args0 -> args0 }).orElse(null),
createDisposition = javaType.createDisposition().map({ args0 -> args0 }).orElse(null),
destinationEncryptionConfiguration = javaType.destinationEncryptionConfiguration().map({ args0 ->
args0.let({ args0 ->
com.pulumi.gcp.bigquery.kotlin.outputs.JobLoadDestinationEncryptionConfiguration.Companion.toKotlin(args0)
})
}).orElse(null),
destinationTable = javaType.destinationTable().let({ args0 ->
com.pulumi.gcp.bigquery.kotlin.outputs.JobLoadDestinationTable.Companion.toKotlin(args0)
}),
encoding = javaType.encoding().map({ args0 -> args0 }).orElse(null),
fieldDelimiter = javaType.fieldDelimiter().map({ args0 -> args0 }).orElse(null),
ignoreUnknownValues = javaType.ignoreUnknownValues().map({ args0 -> args0 }).orElse(null),
jsonExtension = javaType.jsonExtension().map({ args0 -> args0 }).orElse(null),
maxBadRecords = javaType.maxBadRecords().map({ args0 -> args0 }).orElse(null),
nullMarker = javaType.nullMarker().map({ args0 -> args0 }).orElse(null),
parquetOptions = javaType.parquetOptions().map({ args0 ->
args0.let({ args0 ->
com.pulumi.gcp.bigquery.kotlin.outputs.JobLoadParquetOptions.Companion.toKotlin(args0)
})
}).orElse(null),
projectionFields = javaType.projectionFields().map({ args0 -> args0 }),
quote = javaType.quote().map({ args0 -> args0 }).orElse(null),
schemaUpdateOptions = javaType.schemaUpdateOptions().map({ args0 -> args0 }),
skipLeadingRows = javaType.skipLeadingRows().map({ args0 -> args0 }).orElse(null),
sourceFormat = javaType.sourceFormat().map({ args0 -> args0 }).orElse(null),
sourceUris = javaType.sourceUris().map({ args0 -> args0 }),
timePartitioning = javaType.timePartitioning().map({ args0 ->
args0.let({ args0 ->
com.pulumi.gcp.bigquery.kotlin.outputs.JobLoadTimePartitioning.Companion.toKotlin(args0)
})
}).orElse(null),
writeDisposition = javaType.writeDisposition().map({ args0 -> args0 }).orElse(null),
)
}
}
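
// A hypothetical helper (not part of the generated API) showing how the
// nullable CSV-related fields fall back to their documented defaults when
// read from a JobLoad output; only the data class above is assumed.
private fun describeCsvOptions(load: JobLoad): String {
    val delimiter = load.fieldDelimiter ?: ","   // documented default: comma
    val quote = load.quote ?: "\""               // documented default: double quote
    val skipped = load.skipLeadingRows ?: 0      // documented default: 0
    return "delimiter='$delimiter', quote='$quote', skipLeadingRows=$skipped"
}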