com.pulumi.gcp.bigquery.kotlin.inputs.JobLoadArgs.kt

Build cloud applications and infrastructure by combining the safety and reliability of infrastructure as code with the power of the Kotlin programming language.

@file:Suppress("NAME_SHADOWING", "DEPRECATION")

package com.pulumi.gcp.bigquery.kotlin.inputs

import com.pulumi.core.Output
import com.pulumi.core.Output.of
import com.pulumi.gcp.bigquery.inputs.JobLoadArgs.builder
import com.pulumi.kotlin.ConvertibleToJava
import com.pulumi.kotlin.PulumiNullFieldException
import com.pulumi.kotlin.PulumiTagMarker
import com.pulumi.kotlin.applySuspend
import kotlin.Boolean
import kotlin.Int
import kotlin.String
import kotlin.Suppress
import kotlin.Unit
import kotlin.collections.List
import kotlin.jvm.JvmName

/**
 *
 * @property allowJaggedRows Accept rows that are missing trailing optional columns. The missing values are treated as nulls.
 * If false, records with missing trailing columns are treated as bad records, and if there are too many bad records,
 * an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
 * @property allowQuotedNewlines Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
 * The default value is false.
 * @property autodetect Indicates if we should automatically infer the options and schema for CSV and JSON sources.
 * @property createDisposition Specifies whether the job is allowed to create new tables. The following values are supported:
 * CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
 * CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result.
 * Creation, truncation and append actions occur as one atomic update upon job completion.
 * Default value is `CREATE_IF_NEEDED`.
 * Possible values are: `CREATE_IF_NEEDED`, `CREATE_NEVER`.
 * @property destinationEncryptionConfiguration Custom encryption configuration (e.g., Cloud KMS keys)
 * Structure is documented below.
 * @property destinationTable The destination table to load the data into.
 * Structure is documented below.
 * @property encoding The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
 * The default value is UTF-8. BigQuery decodes the data after the raw, binary data
 * has been split using the values of the quote and fieldDelimiter properties.
 * @property fieldDelimiter The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
 * To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts
 * the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
 * data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator.
 * The default value is a comma (',').
 * @property ignoreUnknownValues Indicates if BigQuery should allow extra values that are not represented in the table schema.
 * If true, the extra values are ignored. If false, records with extra columns are treated as bad records,
 * and if there are too many bad records, an invalid error is returned in the job result.
 * The default value is false. The sourceFormat property determines what BigQuery treats as an extra value:
 * CSV: Trailing columns
 * JSON: Named values that don't match any column names
 * @property jsonExtension If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
 * For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON: - for newline-delimited
 * GeoJSON: set to GEOJSON.
 * @property maxBadRecords The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value,
 * an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.
 * @property nullMarker Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this
 * property to a custom value, BigQuery throws an error if an
 * empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as
 * an empty value.
 * @property parquetOptions Parquet Options for load and make external tables.
 * Structure is documented below.
 * @property projectionFields If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
 * Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties.
 * If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
 * @property quote The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding,
 * and then uses the first byte of the encoded string to split the data in its raw, binary state.
 * The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string.
 * If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
 * @property schemaUpdateOptions Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or
 * supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND;
 * when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
 * For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified:
 * ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
 * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
 * @property skipLeadingRows The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
 * The default value is 0. This property is useful if you have header rows in the file that should be skipped.
 * When autodetect is on, the behavior is the following:
 * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected,
 * the row is read as data. Otherwise data is read starting from the second row.
 * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row.
 * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected,
 * row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
 * @property sourceFormat The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP".
 * For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET".
 * For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE".
 * The default value is CSV.
 * @property sourceUris The fully-qualified URIs that point to your data in Google Cloud.
 * For Google Cloud Storage URIs: Each URI can contain one '\*' wildcard character
 * and it must come after the 'bucket' name. Size limits related to load jobs apply
 * to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be
 * specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
 * For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '\*' wildcard character is not allowed.
 * @property timePartitioning Time-based partitioning specification for the destination table.
 * Structure is documented below.
 * @property writeDisposition Specifies the action that occurs if the destination table already exists. The following values are supported:
 * WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result.
 * WRITE_APPEND: If the table already exists, BigQuery appends the data to the table.
 * WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result.
 * Each action is atomic and only occurs if BigQuery is able to complete the job successfully.
 * Creation, truncation and append actions occur as one atomic update upon job completion.
 * Default value is `WRITE_EMPTY`.
 * Possible values are: `WRITE_TRUNCATE`, `WRITE_APPEND`, `WRITE_EMPTY`.
 */
public data class JobLoadArgs(
    public val allowJaggedRows: Output<Boolean>? = null,
    public val allowQuotedNewlines: Output<Boolean>? = null,
    public val autodetect: Output<Boolean>? = null,
    public val createDisposition: Output<String>? = null,
    public val destinationEncryptionConfiguration:
    Output<JobLoadDestinationEncryptionConfigurationArgs>? = null,
    public val destinationTable: Output<JobLoadDestinationTableArgs>,
    public val encoding: Output<String>? = null,
    public val fieldDelimiter: Output<String>? = null,
    public val ignoreUnknownValues: Output<Boolean>? = null,
    public val jsonExtension: Output<String>? = null,
    public val maxBadRecords: Output<Int>? = null,
    public val nullMarker: Output<String>? = null,
    public val parquetOptions: Output<JobLoadParquetOptionsArgs>? = null,
    public val projectionFields: Output<List<String>>? = null,
    public val quote: Output<String>? = null,
    public val schemaUpdateOptions: Output<List<String>>? = null,
    public val skipLeadingRows: Output<Int>? = null,
    public val sourceFormat: Output<String>? = null,
    public val sourceUris: Output<List<String>>,
    public val timePartitioning: Output<JobLoadTimePartitioningArgs>? = null,
    public val writeDisposition: Output<String>? = null,
) : ConvertibleToJava<com.pulumi.gcp.bigquery.inputs.JobLoadArgs> {
    override fun toJava(): com.pulumi.gcp.bigquery.inputs.JobLoadArgs =
        com.pulumi.gcp.bigquery.inputs.JobLoadArgs.builder()
            .allowJaggedRows(allowJaggedRows?.applyValue({ args0 -> args0 }))
            .allowQuotedNewlines(allowQuotedNewlines?.applyValue({ args0 -> args0 }))
            .autodetect(autodetect?.applyValue({ args0 -> args0 }))
            .createDisposition(createDisposition?.applyValue({ args0 -> args0 }))
            .destinationEncryptionConfiguration(
                destinationEncryptionConfiguration?.applyValue({ args0 ->
                    args0.let({ args0 -> args0.toJava() })
                }),
            )
            .destinationTable(destinationTable.applyValue({ args0 -> args0.let({ args0 -> args0.toJava() }) }))
            .encoding(encoding?.applyValue({ args0 -> args0 }))
            .fieldDelimiter(fieldDelimiter?.applyValue({ args0 -> args0 }))
            .ignoreUnknownValues(ignoreUnknownValues?.applyValue({ args0 -> args0 }))
            .jsonExtension(jsonExtension?.applyValue({ args0 -> args0 }))
            .maxBadRecords(maxBadRecords?.applyValue({ args0 -> args0 }))
            .nullMarker(nullMarker?.applyValue({ args0 -> args0 }))
            .parquetOptions(parquetOptions?.applyValue({ args0 -> args0.let({ args0 -> args0.toJava() }) }))
            .projectionFields(projectionFields?.applyValue({ args0 -> args0.map({ args0 -> args0 }) }))
            .quote(quote?.applyValue({ args0 -> args0 }))
            .schemaUpdateOptions(schemaUpdateOptions?.applyValue({ args0 -> args0.map({ args0 -> args0 }) }))
            .skipLeadingRows(skipLeadingRows?.applyValue({ args0 -> args0 }))
            .sourceFormat(sourceFormat?.applyValue({ args0 -> args0 }))
            .sourceUris(sourceUris.applyValue({ args0 -> args0.map({ args0 -> args0 }) }))
            .timePartitioning(timePartitioning?.applyValue({ args0 -> args0.let({ args0 -> args0.toJava() }) }))
            .writeDisposition(writeDisposition?.applyValue({ args0 -> args0 })).build()
}
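
// Illustrative sketch (not part of the generated source): when every value is already
// wrapped in an Output, JobLoadArgs can be constructed directly and converted to its
// Java counterpart via toJava(). The GCS path, table reference and format below are
// placeholders, and the field names of JobLoadDestinationTableArgs are assumed from
// the provider schema rather than defined in this file.
//
// val load = JobLoadArgs(
//     destinationTable = Output.of(
//         JobLoadDestinationTableArgs(
//             tableId = Output.of("projects/my-project/datasets/my_dataset/tables/my_table"),
//         ),
//     ),
//     sourceUris = Output.of(listOf("gs://my-bucket/exports/*.csv")),
//     sourceFormat = Output.of("CSV"),
//     skipLeadingRows = Output.of(1),
//     writeDisposition = Output.of("WRITE_TRUNCATE"),
// )
// val javaLoad: com.pulumi.gcp.bigquery.inputs.JobLoadArgs = load.toJava()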

/**
 * Builder for [JobLoadArgs].
 */
@PulumiTagMarker
public class JobLoadArgsBuilder internal constructor() {
    private var allowJaggedRows: Output<Boolean>? = null

    private var allowQuotedNewlines: Output<Boolean>? = null

    private var autodetect: Output<Boolean>? = null

    private var createDisposition: Output<String>? = null

    private var destinationEncryptionConfiguration:
        Output<JobLoadDestinationEncryptionConfigurationArgs>? = null

    private var destinationTable: Output<JobLoadDestinationTableArgs>? = null

    private var encoding: Output<String>? = null

    private var fieldDelimiter: Output<String>? = null

    private var ignoreUnknownValues: Output<Boolean>? = null

    private var jsonExtension: Output<String>? = null

    private var maxBadRecords: Output<Int>? = null

    private var nullMarker: Output<String>? = null

    private var parquetOptions: Output<JobLoadParquetOptionsArgs>? = null

    private var projectionFields: Output<List<String>>? = null

    private var quote: Output<String>? = null

    private var schemaUpdateOptions: Output<List<String>>? = null

    private var skipLeadingRows: Output<Int>? = null

    private var sourceFormat: Output<String>? = null

    private var sourceUris: Output<List<String>>? = null

    private var timePartitioning: Output<JobLoadTimePartitioningArgs>? = null

    private var writeDisposition: Output<String>? = null

    /**
     * @param value Accept rows that are missing trailing optional columns. The missing values are treated as nulls.
     * If false, records with missing trailing columns are treated as bad records, and if there are too many bad records,
     * an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
     */
    @JvmName("crbcdxnubdihrfyy")
    public suspend fun allowJaggedRows(`value`: Output<Boolean>) {
        this.allowJaggedRows = value
    }

    /**
     * @param value Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
     * The default value is false.
     */
    @JvmName("ykmwiqglyhjurisp")
    public suspend fun allowQuotedNewlines(`value`: Output<Boolean>) {
        this.allowQuotedNewlines = value
    }

    /**
     * @param value Indicates if we should automatically infer the options and schema for CSV and JSON sources.
     */
    @JvmName("tqxiwaghmdilfgrm")
    public suspend fun autodetect(`value`: Output<Boolean>) {
        this.autodetect = value
    }

    /**
     * @param value Specifies whether the job is allowed to create new tables. The following values are supported:
     * CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
     * CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result.
     * Creation, truncation and append actions occur as one atomic update upon job completion.
     * Default value is `CREATE_IF_NEEDED`.
     * Possible values are: `CREATE_IF_NEEDED`, `CREATE_NEVER`.
     */
    @JvmName("pdfccnsyapgumtat")
    public suspend fun createDisposition(`value`: Output<String>) {
        this.createDisposition = value
    }

    /**
     * @param value Custom encryption configuration (e.g., Cloud KMS keys)
     * Structure is documented below.
     */
    @JvmName("vjcwkhhpkcjkisir")
    public suspend fun destinationEncryptionConfiguration(`value`: Output<JobLoadDestinationEncryptionConfigurationArgs>) {
        this.destinationEncryptionConfiguration = value
    }

    /**
     * @param value The destination table to load the data into.
     * Structure is documented below.
     */
    @JvmName("wqpixeodysesjrlp")
    public suspend fun destinationTable(`value`: Output<JobLoadDestinationTableArgs>) {
        this.destinationTable = value
    }

    /**
     * @param value The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
     * The default value is UTF-8. BigQuery decodes the data after the raw, binary data
     * has been split using the values of the quote and fieldDelimiter properties.
     */
    @JvmName("ggnmqfvnihnosipl")
    public suspend fun encoding(`value`: Output<String>) {
        this.encoding = value
    }

    /**
     * @param value The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
     * To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts
     * the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
     * data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator.
     * The default value is a comma (',').
     */
    @JvmName("gjwcernqvoxvrsou")
    public suspend fun fieldDelimiter(`value`: Output<String>) {
        this.fieldDelimiter = value
    }

    /**
     * @param value Indicates if BigQuery should allow extra values that are not represented in the table schema.
     * If true, the extra values are ignored. If false, records with extra columns are treated as bad records,
     * and if there are too many bad records, an invalid error is returned in the job result.
     * The default value is false. The sourceFormat property determines what BigQuery treats as an extra value:
     * CSV: Trailing columns
     * JSON: Named values that don't match any column names
     */
    @JvmName("xjhdfcskmcbypsmu")
    public suspend fun ignoreUnknownValues(`value`: Output<Boolean>) {
        this.ignoreUnknownValues = value
    }

    /**
     * @param value If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
     * For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON: - for newline-delimited
     * GeoJSON: set to GEOJSON.
     */
    @JvmName("ohtcwcflqmjdhhnf")
    public suspend fun jsonExtension(`value`: Output<String>) {
        this.jsonExtension = value
    }

    /**
     * @param value The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value,
     * an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.
     */
    @JvmName("noqegocvoruukhdf")
    public suspend fun maxBadRecords(`value`: Output<Int>) {
        this.maxBadRecords = value
    }

    /**
     * @param value Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this
     * property to a custom value, BigQuery throws an error if an
     * empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as
     * an empty value.
     */
    @JvmName("sotsxbmlqjgihuho")
    public suspend fun nullMarker(`value`: Output<String>) {
        this.nullMarker = value
    }

    /**
     * @param value Parquet Options for load and make external tables.
     * Structure is documented below.
     */
    @JvmName("nvsywqsfsgampsjy")
    public suspend fun parquetOptions(`value`: Output<JobLoadParquetOptionsArgs>) {
        this.parquetOptions = value
    }

    /**
     * @param value If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
     * Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties.
     * If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
     */
    @JvmName("mulpbcwcwknvewlg")
    public suspend fun projectionFields(`value`: Output<List<String>>) {
        this.projectionFields = value
    }

    @JvmName("wrgrohpwmfdhjvge")
    public suspend fun projectionFields(vararg values: Output<String>) {
        this.projectionFields = Output.all(values.asList())
    }

    /**
     * @param values If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
     * Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties.
     * If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
     */
    @JvmName("lnxooxwfleoybsti")
    public suspend fun projectionFields(values: List<Output<String>>) {
        this.projectionFields = Output.all(values)
    }

    /**
     * @param value The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding,
     * and then uses the first byte of the encoded string to split the data in its raw, binary state.
     * The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string.
     * If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
     */
    @JvmName("lmnafsaruqvopxfr")
    public suspend fun quote(`value`: Output<String>) {
        this.quote = value
    }

    /**
     * @param value Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or
     * supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND;
     * when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
     * For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified:
     * ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
     * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
     */
    @JvmName("rhrwkggqjaryoepk")
    public suspend fun schemaUpdateOptions(`value`: Output<List<String>>) {
        this.schemaUpdateOptions = value
    }

    @JvmName("djoxpdsfyhooigtt")
    public suspend fun schemaUpdateOptions(vararg values: Output<String>) {
        this.schemaUpdateOptions = Output.all(values.asList())
    }

    /**
     * @param values Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or
     * supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND;
     * when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
     * For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified:
     * ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
     * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
     */
    @JvmName("dcascsnpqvnakkca")
    public suspend fun schemaUpdateOptions(values: List<Output<String>>) {
        this.schemaUpdateOptions = Output.all(values)
    }

    /**
     * @param value The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
     * The default value is 0. This property is useful if you have header rows in the file that should be skipped.
     * When autodetect is on, the behavior is the following:
     * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected,
     * the row is read as data. Otherwise data is read starting from the second row.
     * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row.
     * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected,
     * row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
     */
    @JvmName("manssiivornvyrnq")
    public suspend fun skipLeadingRows(`value`: Output<Int>) {
        this.skipLeadingRows = value
    }

    /**
     * @param value The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP".
     * For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET".
     * For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE".
     * The default value is CSV.
     */
    @JvmName("blefmblhiluqjeao")
    public suspend fun sourceFormat(`value`: Output<String>) {
        this.sourceFormat = value
    }

    /**
     * @param value The fully-qualified URIs that point to your data in Google Cloud.
     * For Google Cloud Storage URIs: Each URI can contain one '\*' wildcard character
     * and it must come after the 'bucket' name. Size limits related to load jobs apply
     * to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be
     * specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
     * For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '\*' wildcard character is not allowed.
     */
    @JvmName("xqaswogdtknluhsp")
    public suspend fun sourceUris(`value`: Output<List<String>>) {
        this.sourceUris = value
    }

    @JvmName("feelbxlussnxwynm")
    public suspend fun sourceUris(vararg values: Output<String>) {
        this.sourceUris = Output.all(values.asList())
    }

    /**
     * @param values The fully-qualified URIs that point to your data in Google Cloud.
     * For Google Cloud Storage URIs: Each URI can contain one '\*' wildcard character
     * and it must come after the 'bucket' name. Size limits related to load jobs apply
     * to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be
     * specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
     * For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '\*' wildcard character is not allowed.
     */
    @JvmName("monjdmramcuqrrwa")
    public suspend fun sourceUris(values: List<Output<String>>) {
        this.sourceUris = Output.all(values)
    }

    /**
     * @param value Time-based partitioning specification for the destination table.
     * Structure is documented below.
     */
    @JvmName("jsnphriyevogdjjh")
    public suspend fun timePartitioning(`value`: Output<JobLoadTimePartitioningArgs>) {
        this.timePartitioning = value
    }

    /**
     * @param value Specifies the action that occurs if the destination table already exists. The following values are supported:
     * WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result.
     * WRITE_APPEND: If the table already exists, BigQuery appends the data to the table.
     * WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result.
     * Each action is atomic and only occurs if BigQuery is able to complete the job successfully.
     * Creation, truncation and append actions occur as one atomic update upon job completion.
     * Default value is `WRITE_EMPTY`.
     * Possible values are: `WRITE_TRUNCATE`, `WRITE_APPEND`, `WRITE_EMPTY`.
     */
    @JvmName("xhjsoamcoaiopqqu")
    public suspend fun writeDisposition(`value`: Output<String>) {
        this.writeDisposition = value
    }

    /**
     * @param value Accept rows that are missing trailing optional columns. The missing values are treated as nulls.
     * If false, records with missing trailing columns are treated as bad records, and if there are too many bad records,
     * an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
     */
    @JvmName("rokfmqcpwfatxysy")
    public suspend fun allowJaggedRows(`value`: Boolean?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.allowJaggedRows = mapped
    }

    /**
     * @param value Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
     * The default value is false.
     */
    @JvmName("asgmocfnurddjwbr")
    public suspend fun allowQuotedNewlines(`value`: Boolean?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.allowQuotedNewlines = mapped
    }

    /**
     * @param value Indicates if we should automatically infer the options and schema for CSV and JSON sources.
     */
    @JvmName("pynrexpawfybgovx")
    public suspend fun autodetect(`value`: Boolean?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.autodetect = mapped
    }

    /**
     * @param value Specifies whether the job is allowed to create new tables. The following values are supported:
     * CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
     * CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result.
     * Creation, truncation and append actions occur as one atomic update upon job completion.
     * Default value is `CREATE_IF_NEEDED`.
     * Possible values are: `CREATE_IF_NEEDED`, `CREATE_NEVER`.
     */
    @JvmName("qqvjgmdupomuyfmx")
    public suspend fun createDisposition(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.createDisposition = mapped
    }

    /**
     * @param value Custom encryption configuration (e.g., Cloud KMS keys)
     * Structure is documented below.
     */
    @JvmName("lqiegxbtyoqlgmpf")
    public suspend fun destinationEncryptionConfiguration(`value`: JobLoadDestinationEncryptionConfigurationArgs?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.destinationEncryptionConfiguration = mapped
    }

    /**
     * @param argument Custom encryption configuration (e.g., Cloud KMS keys)
     * Structure is documented below.
     */
    @JvmName("ikfxjpgegymutadr")
    public suspend fun destinationEncryptionConfiguration(argument: suspend JobLoadDestinationEncryptionConfigurationArgsBuilder.() -> Unit) {
        val toBeMapped = JobLoadDestinationEncryptionConfigurationArgsBuilder().applySuspend {
            argument()
        }.build()
        val mapped = of(toBeMapped)
        this.destinationEncryptionConfiguration = mapped
    }

    /**
     * @param value The destination table to load the data into.
     * Structure is documented below.
     */
    @JvmName("snmyleeytdxyoyvi")
    public suspend fun destinationTable(`value`: JobLoadDestinationTableArgs) {
        val toBeMapped = value
        val mapped = toBeMapped.let({ args0 -> of(args0) })
        this.destinationTable = mapped
    }

    /**
     * @param argument The destination table to load the data into.
     * Structure is documented below.
     */
    @JvmName("gpgqfrgholdkyufs")
    public suspend fun destinationTable(argument: suspend JobLoadDestinationTableArgsBuilder.() -> Unit) {
        val toBeMapped = JobLoadDestinationTableArgsBuilder().applySuspend { argument() }.build()
        val mapped = of(toBeMapped)
        this.destinationTable = mapped
    }

    /**
     * @param value The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
     * The default value is UTF-8. BigQuery decodes the data after the raw, binary data
     * has been split using the values of the quote and fieldDelimiter properties.
     */
    @JvmName("xcptjtltkiouwnxd")
    public suspend fun encoding(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.encoding = mapped
    }

    /**
     * @param value The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
     * To use a character in the range 128-255, you must encode the character as UTF-8. BigQuery converts
     * the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the
     * data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator.
     * The default value is a comma (',').
     */
    @JvmName("npqbccjdvxsivwgw")
    public suspend fun fieldDelimiter(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.fieldDelimiter = mapped
    }

    /**
     * @param value Indicates if BigQuery should allow extra values that are not represented in the table schema.
     * If true, the extra values are ignored. If false, records with extra columns are treated as bad records,
     * and if there are too many bad records, an invalid error is returned in the job result.
     * The default value is false. The sourceFormat property determines what BigQuery treats as an extra value:
     * CSV: Trailing columns
     * JSON: Named values that don't match any column names
     */
    @JvmName("xkbgetblyrhqfdff")
    public suspend fun ignoreUnknownValues(`value`: Boolean?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.ignoreUnknownValues = mapped
    }

    /**
     * @param value If sourceFormat is set to newline-delimited JSON, indicates whether it should be processed as a JSON variant such as GeoJSON.
     * For a sourceFormat other than JSON, omit this field. If the sourceFormat is newline-delimited JSON: - for newline-delimited
     * GeoJSON: set to GEOJSON.
     */
    @JvmName("bairbrusxcnxhuee")
    public suspend fun jsonExtension(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.jsonExtension = mapped
    }

    /**
     * @param value The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value,
     * an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.
     */
    @JvmName("nrecskatjbfgaiob")
    public suspend fun maxBadRecords(`value`: Int?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.maxBadRecords = mapped
    }

    /**
     * @param value Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this
     * property to a custom value, BigQuery throws an error if an
     * empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as
     * an empty value.
     */
    @JvmName("hrblhhdluhdnaeuf")
    public suspend fun nullMarker(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.nullMarker = mapped
    }

    /**
     * @param value Parquet Options for load and make external tables.
     * Structure is documented below.
     */
    @JvmName("wihqcnksdknsiqtb")
    public suspend fun parquetOptions(`value`: JobLoadParquetOptionsArgs?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.parquetOptions = mapped
    }

    /**
     * @param argument Parquet Options for load and make external tables.
     * Structure is documented below.
     */
    @JvmName("lfwkpxdotavpdimc")
    public suspend fun parquetOptions(argument: suspend JobLoadParquetOptionsArgsBuilder.() -> Unit) {
        val toBeMapped = JobLoadParquetOptionsArgsBuilder().applySuspend { argument() }.build()
        val mapped = of(toBeMapped)
        this.parquetOptions = mapped
    }

    /**
     * @param value If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
     * Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties.
     * If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
     */
    @JvmName("ktssuqycqhrrumem")
    public suspend fun projectionFields(`value`: List<String>?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.projectionFields = mapped
    }

    /**
     * @param values If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
     * Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties.
     * If any named property isn't found in the Cloud Datastore backup, an invalid error is returned in the job result.
     */
    @JvmName("wbfssffmrlgpyuqs")
    public suspend fun projectionFields(vararg values: String) {
        val toBeMapped = values.toList()
        val mapped = toBeMapped.let({ args0 -> of(args0) })
        this.projectionFields = mapped
    }

    /**
     * @param value The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding,
     * and then uses the first byte of the encoded string to split the data in its raw, binary state.
     * The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string.
     * If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
     */
    @JvmName("iapfvjxmkbtmwylc")
    public suspend fun quote(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.quote = mapped
    }

    /**
     * @param value Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or
     * supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND;
     * when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
     * For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified:
     * ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
     * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
     */
    @JvmName("gtykobwipnpmlesj")
    public suspend fun schemaUpdateOptions(`value`: List<String>?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.schemaUpdateOptions = mapped
    }

    /**
     * @param values Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or
     * supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND;
     * when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators.
     * For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified:
     * ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
     * ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
     */
    @JvmName("vnrhgdwsbjoxnjhw")
    public suspend fun schemaUpdateOptions(vararg values: String) {
        val toBeMapped = values.toList()
        val mapped = toBeMapped.let({ args0 -> of(args0) })
        this.schemaUpdateOptions = mapped
    }

    /**
     * @param value The number of rows at the top of a CSV file that BigQuery will skip when loading the data.
     * The default value is 0. This property is useful if you have header rows in the file that should be skipped.
     * When autodetect is on, the behavior is the following:
     * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected,
     * the row is read as data. Otherwise data is read starting from the second row.
     * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row.
     * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected,
     * row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
     */
    @JvmName("vpyrxlvtjpesajml")
    public suspend fun skipLeadingRows(`value`: Int?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.skipLeadingRows = mapped
    }

    /**
     * @param value The format of the data files. For CSV files, specify "CSV". For datastore backups, specify "DATASTORE_BACKUP".
     * For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet, specify "PARQUET".
     * For orc, specify "ORC". [Beta] For Bigtable, specify "BIGTABLE".
     * The default value is CSV.
     */
    @JvmName("smhdyhaminlolgfg")
    public suspend fun sourceFormat(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.sourceFormat = mapped
    }

    /**
     * @param value The fully-qualified URIs that point to your data in Google Cloud.
     * For Google Cloud Storage URIs: Each URI can contain one '\*' wildcard character
     * and it must come after the 'bucket' name. Size limits related to load jobs apply
     * to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be
     * specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
     * For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '\*' wildcard character is not allowed.
     */
    @JvmName("kerjijpvaveayffv")
    public suspend fun sourceUris(`value`: List<String>) {
        val toBeMapped = value
        val mapped = toBeMapped.let({ args0 -> of(args0) })
        this.sourceUris = mapped
    }

    /**
     * @param values The fully-qualified URIs that point to your data in Google Cloud.
     * For Google Cloud Storage URIs: Each URI can contain one '\*' wildcard character
     * and it must come after the 'bucket' name. Size limits related to load jobs apply
     * to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be
     * specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
     * For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the '\*' wildcard character is not allowed.
     */
    @JvmName("irxguejutgiehsxm")
    public suspend fun sourceUris(vararg values: String) {
        val toBeMapped = values.toList()
        val mapped = toBeMapped.let({ args0 -> of(args0) })
        this.sourceUris = mapped
    }

    /**
     * @param value Time-based partitioning specification for the destination table.
     * Structure is documented below.
     */
    @JvmName("ikdirgkdweupujxa")
    public suspend fun timePartitioning(`value`: JobLoadTimePartitioningArgs?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.timePartitioning = mapped
    }

    /**
     * @param argument Time-based partitioning specification for the destination table.
     * Structure is documented below.
     */
    @JvmName("jpswsijwfaphhaam")
    public suspend fun timePartitioning(argument: suspend JobLoadTimePartitioningArgsBuilder.() -> Unit) {
        val toBeMapped = JobLoadTimePartitioningArgsBuilder().applySuspend { argument() }.build()
        val mapped = of(toBeMapped)
        this.timePartitioning = mapped
    }

    /**
     * @param value Specifies the action that occurs if the destination table already exists. The following values are supported:
     * WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result.
     * WRITE_APPEND: If the table already exists, BigQuery appends the data to the table.
     * WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result.
     * Each action is atomic and only occurs if BigQuery is able to complete the job successfully.
     * Creation, truncation and append actions occur as one atomic update upon job completion.
     * Default value is `WRITE_EMPTY`.
     * Possible values are: `WRITE_TRUNCATE`, `WRITE_APPEND`, `WRITE_EMPTY`.
     */
    @JvmName("dmeaucybkplhblje")
    public suspend fun writeDisposition(`value`: String?) {
        val toBeMapped = value
        val mapped = toBeMapped?.let({ args0 -> of(args0) })
        this.writeDisposition = mapped
    }

    internal fun build(): JobLoadArgs = JobLoadArgs(
        allowJaggedRows = allowJaggedRows,
        allowQuotedNewlines = allowQuotedNewlines,
        autodetect = autodetect,
        createDisposition = createDisposition,
        destinationEncryptionConfiguration = destinationEncryptionConfiguration,
        destinationTable = destinationTable ?: throw PulumiNullFieldException("destinationTable"),
        encoding = encoding,
        fieldDelimiter = fieldDelimiter,
        ignoreUnknownValues = ignoreUnknownValues,
        jsonExtension = jsonExtension,
        maxBadRecords = maxBadRecords,
        nullMarker = nullMarker,
        parquetOptions = parquetOptions,
        projectionFields = projectionFields,
        quote = quote,
        schemaUpdateOptions = schemaUpdateOptions,
        skipLeadingRows = skipLeadingRows,
        sourceFormat = sourceFormat,
        sourceUris = sourceUris ?: throw PulumiNullFieldException("sourceUris"),
        timePartitioning = timePartitioning,
        writeDisposition = writeDisposition,
    )
}
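
In practice this builder is not instantiated directly (its constructor is internal); it is reached through the type-safe DSL of the Pulumi GCP Kotlin SDK, typically inside the `load { }` block of a BigQuery Job resource. The sketch below is a minimal, illustrative example of that flow: only the calls inside `load { }` correspond to setters defined in this file, while the surrounding `job(...)` function, the `args { }` and `jobId(...)` calls, and the `tableId(...)` setter of the nested destination-table builder are assumptions about the wider SDK, and all bucket, dataset and table names are placeholders.

import com.pulumi.gcp.bigquery.kotlin.job

suspend fun createCsvLoadJob() {
    // Assumed resource-level DSL entry point; the body of load { } exercises
    // the JobLoadArgsBuilder setters defined above.
    job("csv-load-job") {
        args {
            jobId("csv-load-job")                          // assumed JobArgsBuilder setter
            load {
                sourceUris("gs://my-bucket/exports/*.csv") // placeholder GCS path
                sourceFormat("CSV")
                skipLeadingRows(1)
                fieldDelimiter(",")
                destinationTable {
                    // field name assumed from the provider schema, not defined in this file
                    tableId("projects/my-project/datasets/my_dataset/tables/my_table")
                }
                writeDisposition("WRITE_TRUNCATE")
            }
        }
    }
}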



