$version: "2.0"

metadata suppressions = [
    {
        id: "HttpMethodSemantics"
        namespace: "*"
    }
    {
        id: "HttpResponseCodeSemantics"
        namespace: "*"
    }
    {
        id: "PaginatedTrait"
        namespace: "*"
    }
    {
        id: "HttpHeaderTrait"
        namespace: "*"
    }
    {
        id: "HttpUriConflict"
        namespace: "*"
    }
    {
        id: "Service"
        namespace: "*"
    }
]

namespace com.amazonaws.timestreamwrite

use aws.api#clientDiscoveredEndpoint
use aws.api#clientEndpointDiscovery
use aws.api#service
use aws.auth#sigv4
use aws.protocols#awsJson1_0

/// Amazon Timestream is a fast, scalable, fully managed time-series database service
/// that makes it easy to store and analyze trillions of time-series data points per day. With
/// Timestream, you can easily store and analyze IoT sensor data to derive insights
/// from your IoT applications. You can analyze industrial telemetry to streamline equipment
/// management and maintenance. You can also store and analyze log data and metrics to improve
/// the performance and availability of your applications.
///
/// Timestream is built from the ground up to effectively ingest, process, and
/// store time-series data. It organizes data to optimize query processing. It automatically
/// scales based on the volume of data ingested and on the query volume to ensure you receive
/// optimal performance while inserting and querying data. As your data grows over time,
/// Timestream’s adaptive query processing engine spans across storage tiers to
/// provide fast analysis while reducing costs.
@clientEndpointDiscovery(
    operation: DescribeEndpoints
    error: InvalidEndpointException
)
@service(
    sdkId: "Timestream Write"
    arnNamespace: "timestream"
    cloudFormationName: "TimestreamWrite"
    cloudTrailEventSource: "timestreamwrite.amazonaws.com"
    endpointPrefix: "ingest.timestream"
)
@sigv4(name: "timestream")
@awsJson1_0
@title("Amazon Timestream Write")
service Timestream_20181101 {
    version: "2018-11-01"
    operations: [
        CreateBatchLoadTask
        CreateDatabase
        CreateTable
        DeleteDatabase
        DeleteTable
        DescribeBatchLoadTask
        DescribeDatabase
        DescribeEndpoints
        DescribeTable
        ListBatchLoadTasks
        ListDatabases
        ListTables
        ListTagsForResource
        ResumeBatchLoadTask
        TagResource
        UntagResource
        UpdateDatabase
        UpdateTable
        WriteRecords
    ]
}

/// Creates a new Timestream batch load task. A batch load task processes data from
/// a CSV source in an S3 location and writes to a Timestream table. A mapping from
/// source to target is defined in a batch load task. Errors and events are written to a report
/// at an S3 location. For the report, if the KMS key is not specified, the
/// report will be encrypted with an S3 managed key when SSE_S3 is the option.
/// Otherwise an error is thrown. For more information, see Amazon Web Services managed
/// keys. Service quotas apply. For details, see code sample.
@clientDiscoveredEndpoint(required: true)
operation CreateBatchLoadTask {
    input: CreateBatchLoadTaskRequest
    output: CreateBatchLoadTaskResponse
    errors: [
        AccessDeniedException
        ConflictException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ServiceQuotaExceededException
        ThrottlingException
        ValidationException
    ]
}
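A minimal sketch of invoking this operation through the AWS SDK for Python (boto3); illustrative only and not part of the model. The bucket, database, table, and column names are hypothetical:

```python
import boto3

client = boto3.client("timestream-write")  # endpoint discovery is handled by the SDK

resp = client.create_batch_load_task(
    TargetDatabaseName="sampleDb",          # hypothetical names
    TargetTableName="sampleTable",
    DataSourceConfiguration={
        "DataSourceS3Configuration": {"BucketName": "example-source-bucket"},
        "DataFormat": "CSV",                # currently the only supported format
    },
    ReportConfiguration={
        "ReportS3Configuration": {
            "BucketName": "example-report-bucket",
            "EncryptionOption": "SSE_S3",   # S3 managed key, per the doc comment above
        }
    },
    DataModelConfiguration={
        "DataModel": {
            "TimeColumn": "timestamp",
            "TimeUnit": "MILLISECONDS",
            "DimensionMappings": [
                {"SourceColumn": "host", "DestinationColumn": "host"}
            ],
            "MultiMeasureMappings": {
                "MultiMeasureAttributeMappings": [
                    {"SourceColumn": "cpu", "MeasureValueType": "DOUBLE"}
                ]
            },
        }
    },
)
print(resp["TaskId"])
```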

/// Creates a new Timestream database. If the KMS key is not
/// specified, the database will be encrypted with a Timestream managed KMS key
/// located in your account. For more information, see Amazon Web Services managed keys.
/// Service quotas apply. For details, see code sample.
@clientDiscoveredEndpoint(required: true)
operation CreateDatabase {
    input: CreateDatabaseRequest
    output: CreateDatabaseResponse
    errors: [
        AccessDeniedException
        ConflictException
        InternalServerException
        InvalidEndpointException
        ServiceQuotaExceededException
        ThrottlingException
        ValidationException
    ]
}

/// Adds a new table to an existing database in your account. In an Amazon Web Services
/// account, table names must be at least unique within each Region if they are in the same
/// database. You might have identical table names in the same Region if the tables are in
/// separate databases. While creating the table, you must specify the table name, database
/// name, and the retention properties. Service quotas apply. See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation CreateTable {
    input: CreateTableRequest
    output: CreateTableResponse
    errors: [
        AccessDeniedException
        ConflictException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ServiceQuotaExceededException
        ThrottlingException
        ValidationException
    ]
}

/// Deletes a given Timestream database. This is an irreversible
/// operation. After a database is deleted, the time-series data from its tables cannot be
/// recovered.
///
/// All tables in the database must be deleted first, or a ValidationException error will
/// be thrown.
///
/// Due to the nature of distributed retries, the operation can return either success or
/// a ResourceNotFoundException. Clients should consider them equivalent.
///
/// See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation DeleteDatabase {
    input: DeleteDatabaseRequest
    output: Unit
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Deletes a given Timestream table. This is an irreversible operation. After a
/// Timestream database table is deleted, the time-series data stored in the table
/// cannot be recovered.
///
/// Due to the nature of distributed retries, the operation can return either success or
/// a ResourceNotFoundException. Clients should consider them equivalent.
///
/// See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation DeleteTable {
    input: DeleteTableRequest
    output: Unit
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Returns information about the batch load task, including configurations, mappings,
/// progress, and other details. Service quotas apply. See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation DescribeBatchLoadTask {
    input: DescribeBatchLoadTaskRequest
    output: DescribeBatchLoadTaskResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
    ]
}

/// Returns information about the database, including the database name, time that the
/// database was created, and the total number of tables found within the database. Service
/// quotas apply. See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation DescribeDatabase {
    input: DescribeDatabaseRequest
    output: DescribeDatabaseResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Returns a list of available endpoints to make Timestream API calls against.
/// This API operation is available through both the Write and Query APIs.
///
/// Because the Timestream SDKs are designed to transparently work with the
/// service’s architecture, including the management and mapping of the service endpoints,
/// we don't recommend that you use this API operation unless:
///
/// For detailed information on how and when to use and implement DescribeEndpoints, see
/// The Endpoint Discovery Pattern.
operation DescribeEndpoints {
    input: DescribeEndpointsRequest
    output: DescribeEndpointsResponse
    errors: [
        InternalServerException
        ThrottlingException
        ValidationException
    ]
}
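The SDKs call this operation behind the scenes, so most applications never need to invoke it directly. As an illustrative boto3 sketch (not part of the model), inspecting the discovered endpoints looks like:

```python
import boto3

client = boto3.client("timestream-write")
for ep in client.describe_endpoints()["Endpoints"]:
    # Each entry carries an endpoint address and a cache TTL in minutes.
    print(ep["Address"], ep["CachePeriodInMinutes"])
```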

/// Returns information about the table, including the table name, database name, retention
/// duration of the memory store and the magnetic store. Service quotas apply. See
/// code sample for details.
@clientDiscoveredEndpoint(required: true)
operation DescribeTable {
    input: DescribeTableRequest
    output: DescribeTableResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Provides a list of batch load tasks, along with the name, status, when the task is
/// resumable until, and other details. See code sample for details.
@clientDiscoveredEndpoint(required: true)
@paginated(
    inputToken: "NextToken"
    outputToken: "NextToken"
    pageSize: "MaxResults"
)
operation ListBatchLoadTasks {
    input: ListBatchLoadTasksRequest
    output: ListBatchLoadTasksResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ThrottlingException
        ValidationException
    ]
}

/// Returns a list of your Timestream databases. Service quotas apply. See
/// code sample for details.
@clientDiscoveredEndpoint(required: true)
@paginated(
    inputToken: "NextToken"
    outputToken: "NextToken"
    pageSize: "MaxResults"
)
operation ListDatabases {
    input: ListDatabasesRequest
    output: ListDatabasesResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ThrottlingException
        ValidationException
    ]
}
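Because the operation is marked @paginated, generated SDKs can page through NextToken automatically. A boto3 sketch, assuming the standard paginator is available:

```python
import boto3

client = boto3.client("timestream-write")
paginator = client.get_paginator("list_databases")
# The paginator resubmits NextToken until the listing is exhausted.
for page in paginator.paginate(MaxResults=20):
    for db in page["Databases"]:
        print(db["DatabaseName"], db["TableCount"])
```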

/// Provides a list of tables, along with the name, status, and retention properties of each
/// table. See code sample for details.
@clientDiscoveredEndpoint(required: true)
@paginated(
    inputToken: "NextToken"
    outputToken: "NextToken"
    pageSize: "MaxResults"
)
operation ListTables {
    input: ListTablesRequest
    output: ListTablesResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Lists all tags on a Timestream resource.
@clientDiscoveredEndpoint(required: true)
operation ListTagsForResource {
    input: ListTagsForResourceRequest
    output: ListTagsForResourceResponse
    errors: [
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

@clientDiscoveredEndpoint(required: true)
operation ResumeBatchLoadTask {
    input: ResumeBatchLoadTaskRequest
    output: ResumeBatchLoadTaskResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Associates a set of tags with a Timestream resource. You can then activate
/// these user-defined tags so that they appear on the Billing and Cost Management console for
/// cost allocation tracking.
@clientDiscoveredEndpoint(required: true)
operation TagResource {
    input: TagResourceRequest
    output: TagResourceResponse
    errors: [
        InvalidEndpointException
        ResourceNotFoundException
        ServiceQuotaExceededException
        ThrottlingException
        ValidationException
    ]
}

/// Removes the association of tags from a Timestream resource.
@clientDiscoveredEndpoint(required: true)
operation UntagResource {
    input: UntagResourceRequest
    output: UntagResourceResponse
    errors: [
        InvalidEndpointException
        ResourceNotFoundException
        ServiceQuotaExceededException
        ThrottlingException
        ValidationException
    ]
}

/// Modifies the KMS key for an existing database. While updating the
/// database, you must specify the database name and the identifier of the new KMS
/// key to be used (KmsKeyId). If there are any concurrent UpdateDatabase
/// requests, first writer wins.
///
/// See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation UpdateDatabase {
    input: UpdateDatabaseRequest
    output: UpdateDatabaseResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ServiceQuotaExceededException
        ThrottlingException
        ValidationException
    ]
}

/// Modifies the retention duration of the memory store and magnetic store for your
/// Timestream table. Note that the change in retention duration takes effect immediately.
/// For example, if the retention period of the memory store was initially set to 2 hours and
/// then changed to 24 hours, the memory store will be capable of holding 24 hours of data, but
/// will be populated with 24 hours of data 22 hours after this change was made. Timestream
/// does not retrieve data from the magnetic store to populate the memory store.
///
/// See code sample for details.
@clientDiscoveredEndpoint(required: true)
operation UpdateTable {
    input: UpdateTableRequest
    output: UpdateTableResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}

/// Enables you to write your time-series data into Timestream. You can specify a
/// single data point or a batch of data points to be inserted into the system.
/// Timestream offers you a flexible schema that auto detects the column names and data
/// types for your Timestream tables based on the dimension names and data types of
/// the data points you specify when invoking writes into the database.
///
/// Timestream supports eventual consistency read semantics. This means that when
/// you query data immediately after writing a batch of data into Timestream, the
/// query results might not reflect the results of a recently completed write operation. The
/// results may also include some stale data. If you repeat the query request after a short
/// time, the results should return the latest data. Service quotas apply.
///
/// See code sample for details.
///
/// **Upserts**
///
/// You can use the Version parameter in a WriteRecords request to
/// update data points. Timestream tracks a version number with each record.
/// Version defaults to 1 when it's not specified for the record
/// in the request. Timestream updates an existing record’s measure value along with
/// its Version when it receives a write request with a higher
/// Version number for that record. When it receives an update request where
/// the measure value is the same as that of the existing record, Timestream still
/// updates Version, if it is greater than the existing value of
/// Version. You can update a data point as many times as desired, as long as
/// the value of Version continuously increases.
///
/// For example, suppose you write a new record without indicating Version in
/// the request. Timestream stores this record, and sets Version to
/// 1. Now, suppose you try to update this record with a
/// WriteRecords request of the same record with a different measure value but,
/// like before, do not provide Version. In this case, Timestream will
/// reject this update with a RejectedRecordsException since the updated record’s
/// version is not greater than the existing value of Version.
///
/// However, if you were to resend the update request with Version set to
/// 2, Timestream would then succeed in updating the record’s value,
/// and the Version would be set to 2. Next, suppose you sent a
/// WriteRecords request with this same record and an identical measure value,
/// but with Version set to 3. In this case, Timestream
/// would only update Version to 3. Any further updates would need to
/// send a version number greater than 3, or the update requests would receive a
/// RejectedRecordsException.
@clientDiscoveredEndpoint(required: true)
operation WriteRecords {
    input: WriteRecordsRequest
    output: WriteRecordsResponse
    errors: [
        AccessDeniedException
        InternalServerException
        InvalidEndpointException
        RejectedRecordsException
        ResourceNotFoundException
        ThrottlingException
        ValidationException
    ]
}
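The version-based upsert flow described above can be exercised as in the following boto3 sketch; it is illustrative only, with hypothetical database and table names:

```python
import boto3
import time

client = boto3.client("timestream-write")
record = {
    "Dimensions": [{"Name": "host", "Value": "web-01"}],
    "MeasureName": "cpu_utilization",
    "MeasureValue": "87.5",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),  # milliseconds since the epoch
    "Version": 2,  # must exceed the stored Version for the upsert to succeed
}
try:
    client.write_records(
        DatabaseName="sampleDb", TableName="sampleTable", Records=[record]
    )
except client.exceptions.RejectedRecordsException as err:
    # Each rejected record reports its index and, for version conflicts,
    # the ExistingVersion to beat on retry.
    for rejected in err.response.get("RejectedRecords", []):
        print(rejected["RecordIndex"], rejected.get("ExistingVersion"))
```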

/// You are not authorized to perform this action.
@error("client")
@httpError(403)
structure AccessDeniedException {
    @required
    Message: ErrorMessage
}

/// Details about the progress of a batch load task.
structure BatchLoadProgressReport {
    RecordsProcessed: Long = 0
    RecordsIngested: Long = 0
    ParseFailures: Long = 0
    RecordIngestionFailures: Long = 0
    FileFailures: Long = 0
    BytesMetered: Long = 0
}

/// Details about a batch load task.
structure BatchLoadTask {
    /// The ID of the batch load task.
    TaskId: BatchLoadTaskId

    /// Status of the batch load task.
    TaskStatus: BatchLoadStatus

    /// Database name for the database into which a batch load task loads data.
    DatabaseName: ResourceName

    /// Table name for the table into which a batch load task loads data.
    TableName: ResourceName

    /// The time when the Timestream batch load task was created.
    CreationTime: Date

    /// The time when the Timestream batch load task was last updated.
    LastUpdatedTime: Date

    ResumableUntil: Date
}

/// Details about a batch load task.
structure BatchLoadTaskDescription {
    /// The ID of the batch load task.
    TaskId: BatchLoadTaskId

    ErrorMessage: StringValue2048

    /// Configuration details about the data source for a batch load task.
    DataSourceConfiguration: DataSourceConfiguration

    ProgressReport: BatchLoadProgressReport

    /// Report configuration for a batch load task. This contains details about where error
    /// reports are stored.
    ReportConfiguration: ReportConfiguration

    /// Data model configuration for a batch load task. This contains details about where a data
    /// model for a batch load task is stored.
    DataModelConfiguration: DataModelConfiguration

    TargetDatabaseName: ResourceName

    TargetTableName: ResourceName

    /// Status of the batch load task.
    TaskStatus: BatchLoadStatus

    RecordVersion: RecordVersion = 0

    /// The time when the Timestream batch load task was created.
    CreationTime: Date

    /// The time when the Timestream batch load task was last updated.
    LastUpdatedTime: Date

    ResumableUntil: Date
}

/// Timestream was unable to process this request because it contains a resource that
/// already exists.
@error("client")
@httpError(409)
structure ConflictException {
    @required
    Message: ErrorMessage
}

@input
structure CreateBatchLoadTaskRequest {
    @idempotencyToken
    ClientToken: ClientRequestToken

    DataModelConfiguration: DataModelConfiguration

    /// Defines configuration details about the data source for a batch load task.
    @required
    DataSourceConfiguration: DataSourceConfiguration

    @required
    ReportConfiguration: ReportConfiguration

    /// Target Timestream database for a batch load task.
    @required
    TargetDatabaseName: ResourceCreateAPIName

    /// Target Timestream table for a batch load task.
    @required
    TargetTableName: ResourceCreateAPIName

    RecordVersion: RecordVersion = null
}

@output
structure CreateBatchLoadTaskResponse {
    /// The ID of the batch load task.
    @required
    TaskId: BatchLoadTaskId
}

@input
structure CreateDatabaseRequest {
    /// The name of the Timestream database.
    @required
    DatabaseName: ResourceCreateAPIName

    /// The KMS key for the database. If the KMS key is not
    /// specified, the database will be encrypted with a Timestream managed KMS
    /// key located in your account. For more information, see Amazon Web Services managed keys.
    KmsKeyId: StringValue2048

    /// A list of key-value pairs to label the table.
    Tags: TagList
}

@output
structure CreateDatabaseResponse {
    /// The newly created Timestream database.
    Database: Database
}

@input
structure CreateTableRequest {
    /// The name of the Timestream database.
    @required
    DatabaseName: ResourceCreateAPIName

    /// The name of the Timestream table.
    @required
    TableName: ResourceCreateAPIName

    /// The duration for which your time-series data must be stored in the memory store and the
    /// magnetic store.
    RetentionProperties: RetentionProperties

    /// A list of key-value pairs to label the table.
    Tags: TagList

    /// Contains properties to set on the table when enabling magnetic store writes.
    MagneticStoreWriteProperties: MagneticStoreWriteProperties

    /// The schema of the table.
    Schema: Schema
}

@output
structure CreateTableResponse {
    /// The newly created Timestream table.
    Table: Table
}

/// A delimited data format where the column separator can be a comma and the record
/// separator is a newline character.
structure CsvConfiguration {
    /// Column separator can be one of comma (','), pipe ('|'), semicolon (';'), tab ('\t'), or
    /// blank space (' ').
    ColumnSeparator: StringValue1

    /// Escape character can be one of
    EscapeChar: StringValue1

    /// Can be single quote (') or double quote (").
    QuoteChar: StringValue1

    /// Can be blank space (' ').
    NullValue: StringValue256

    /// Specifies to trim leading and trailing white space.
    TrimWhiteSpace: Boolean
}

/// A top-level container for a table. Databases and tables are the fundamental management
/// concepts in Amazon Timestream. All tables in a database are encrypted with the
/// same KMS key.
structure Database {
    /// The Amazon Resource Name that uniquely identifies this database.
    Arn: String

    /// The name of the Timestream database.
    DatabaseName: ResourceName

    /// The total number of tables found within a Timestream database.
    TableCount: Long = 0

    /// The identifier of the KMS key used to encrypt the data stored in the database.
    KmsKeyId: StringValue2048

    /// The time when the database was created, calculated from the Unix epoch time.
    CreationTime: Date

    /// The last time that this database was updated.
    LastUpdatedTime: Date
}

/// Data model for a batch load task.
structure DataModel {
    /// Source column to be mapped to time.
    TimeColumn: StringValue256

    /// The granularity of the timestamp unit. It indicates if the time value is in seconds,
    /// milliseconds, nanoseconds, or other supported values. Default is MILLISECONDS.
    TimeUnit: TimeUnit

    /// Source to target mappings for dimensions.
    @required
    DimensionMappings: DimensionMappings

    /// Source to target mappings for multi-measure records.
    MultiMeasureMappings: MultiMeasureMappings

    /// Source to target mappings for measures.
    MixedMeasureMappings: MixedMeasureMappingList

    MeasureNameColumn: StringValue256
}

structure DataModelConfiguration {
    DataModel: DataModel

    DataModelS3Configuration: DataModelS3Configuration
}

structure DataModelS3Configuration {
    BucketName: S3BucketName

    ObjectKey: S3ObjectKey
}

/// Defines configuration details about the data source.
structure DataSourceConfiguration {
    /// Configuration of an S3 location for a file which contains data to load.
    @required
    DataSourceS3Configuration: DataSourceS3Configuration

    CsvConfiguration: CsvConfiguration

    /// This is currently CSV.
    @required
    DataFormat: BatchLoadDataFormat
}

structure DataSourceS3Configuration {
    /// The bucket name of the customer S3 bucket.
    @required
    BucketName: S3BucketName

    ObjectKeyPrefix: S3ObjectKey
}

@input
structure DeleteDatabaseRequest {
    /// The name of the Timestream database to be deleted.
    @required
    DatabaseName: ResourceName
}

@input
structure DeleteTableRequest {
    /// The name of the database where the Timestream table is to be deleted.
    @required
    DatabaseName: ResourceName

    /// The name of the Timestream table to be deleted.
    @required
    TableName: ResourceName
}

@input
structure DescribeBatchLoadTaskRequest {
    /// The ID of the batch load task.
    @required
    TaskId: BatchLoadTaskId
}

@output
structure DescribeBatchLoadTaskResponse {
    /// Description of the batch load task.
    @required
    BatchLoadTaskDescription: BatchLoadTaskDescription
}

@input
structure DescribeDatabaseRequest {
    /// The name of the Timestream database.
    @required
    DatabaseName: ResourceName
}

@output
structure DescribeDatabaseResponse {
    /// The Timestream database.
    Database: Database
}

@input
structure DescribeEndpointsRequest {}

@output
structure DescribeEndpointsResponse {
    /// An Endpoints object is returned when a DescribeEndpoints request is made.
    @required
    Endpoints: Endpoints
}

@input
structure DescribeTableRequest {
    /// The name of the Timestream database.
    @required
    DatabaseName: ResourceName

    /// The name of the Timestream table.
    @required
    TableName: ResourceName
}

@output
structure DescribeTableResponse {
    /// The Timestream table.
    Table: Table
}

/// Represents the metadata attributes of the time series. For example, the name and
/// Availability Zone of an EC2 instance or the name of the manufacturer of a wind turbine are
/// dimensions.
structure Dimension {
    /// Dimension represents the metadata attributes of the time series. For example, the name
    /// and Availability Zone of an EC2 instance or the name of the manufacturer of a wind turbine
    /// are dimensions.
    ///
    /// For constraints on dimension names, see Naming Constraints.
    @required
    Name: SchemaName

    /// The value of the dimension.
    @required
    Value: SchemaValue

    /// The data type of the dimension for the time-series data point.
    DimensionValueType: DimensionValueType
}

structure DimensionMapping {
    SourceColumn: SchemaName

    DestinationColumn: SchemaName
}

/// Represents an available endpoint against which to make API calls, as well as the
/// TTL for that endpoint.
structure Endpoint {
    /// An endpoint address.
    @required
    Address: String

    /// The TTL for the endpoint, in minutes.
    @required
    CachePeriodInMinutes: Long = 0
}

/// Timestream was unable to fully process this request because of an internal
/// server error.
@error("server")
@httpError(500)
structure InternalServerException {
    @required
    Message: ErrorMessage
}

/// The requested endpoint was not valid.
@error("client")
@httpError(421)
structure InvalidEndpointException {
    Message: ErrorMessage
}

@input
structure ListBatchLoadTasksRequest {
    /// A token to specify where to start paginating. This is the NextToken from a previously
    /// truncated response.
    NextToken: String

    /// The total number of items to return in the output. If the total number of items
    /// available is more than the value specified, a NextToken is provided in the output. To
    /// resume pagination, provide the NextToken value as argument of a subsequent API
    /// invocation.
    MaxResults: PageLimit

    /// Status of the batch load task.
    TaskStatus: BatchLoadStatus
}

@output
structure ListBatchLoadTasksResponse {
    /// A token to specify where to start paginating. Provide it in the next
    /// ListBatchLoadTasksRequest.
    NextToken: String

    /// A list of batch load task details.
    BatchLoadTasks: BatchLoadTaskList
}

@input
structure ListDatabasesRequest {
    /// The pagination token. To resume pagination, provide the NextToken value as argument of a
    /// subsequent API invocation.
    NextToken: String

    /// The total number of items to return in the output. If the total number of items
    /// available is more than the value specified, a NextToken is provided in the output. To
    /// resume pagination, provide the NextToken value as argument of a subsequent API
    /// invocation.
    MaxResults: PaginationLimit
}

@output
structure ListDatabasesResponse {
    /// A list of database names.
    Databases: DatabaseList

    /// The pagination token. This parameter is returned when the response is truncated.
    NextToken: String
}

@input
structure ListTablesRequest {
    /// The name of the Timestream database.
    DatabaseName: ResourceName

    /// The pagination token. To resume pagination, provide the NextToken value as argument of a
    /// subsequent API invocation.
    NextToken: String

    /// The total number of items to return in the output. If the total number of items
    /// available is more than the value specified, a NextToken is provided in the output. To
    /// resume pagination, provide the NextToken value as argument of a subsequent API
    /// invocation.
    MaxResults: PaginationLimit
}

@output
structure ListTablesResponse {
    /// A list of tables.
    Tables: TableList

    /// A token to specify where to start paginating. This is the NextToken from a previously
    /// truncated response.
    NextToken: String
}

@input
structure ListTagsForResourceRequest {
    /// The Timestream resource with tags to be listed. This value is an Amazon
    /// Resource Name (ARN).
    @required
    ResourceARN: AmazonResourceName
}

@output
structure ListTagsForResourceResponse {
    /// The tags currently associated with the Timestream resource.
    Tags: TagList
}

/// The location to write error reports for records rejected, asynchronously, during
/// magnetic store writes.
structure MagneticStoreRejectedDataLocation {
    /// Configuration of an S3 location to write error reports for records rejected,
    /// asynchronously, during magnetic store writes.
    S3Configuration: S3Configuration
}

/// The set of properties on a table for configuring magnetic store writes.
structure MagneticStoreWriteProperties {
    /// A flag to enable magnetic store writes.
    @required
    EnableMagneticStoreWrites: Boolean

    /// The location to write error reports for records rejected asynchronously during magnetic
    /// store writes.
    MagneticStoreRejectedDataLocation: MagneticStoreRejectedDataLocation
}

/// Represents the data attribute of the time series. For example, the CPU utilization of
/// an EC2 instance or the RPM of a wind turbine are measures. MeasureValue has both name and
/// value.
///
/// MeasureValue is only allowed for type MULTI. Using MULTI
/// type, you can pass multiple data attributes associated with the same time series in a
/// single record.
structure MeasureValue {
    /// The name of the MeasureValue.
    ///
    /// For constraints on MeasureValue names, see Naming Constraints in the
    /// Amazon Timestream Developer Guide.
    @required
    Name: SchemaName

    /// The value for the MeasureValue. For information, see Data types.
    @required
    Value: StringValue2048

    /// Contains the data type of the MeasureValue for the time-series data point.
    @required
    Type: MeasureValueType
}
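To illustrate the MULTI shape described above, a single record can carry several measure values. A boto3 sketch under the same hypothetical names as earlier, not part of the model:

```python
import boto3
import time

client = boto3.client("timestream-write")
client.write_records(
    DatabaseName="sampleDb",
    TableName="sampleTable",
    Records=[{
        "Dimensions": [{"Name": "host", "Value": "web-01"}],
        "MeasureName": "host_metrics",      # one name for the whole multi-measure record
        "MeasureValueType": "MULTI",
        "MeasureValues": [                  # scalar attributes sharing one timestamp
            {"Name": "cpu", "Value": "87.5", "Type": "DOUBLE"},
            {"Name": "memory", "Value": "42.1", "Type": "DOUBLE"},
        ],
        "Time": str(int(time.time() * 1000)),
    }],
)
```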

structure MixedMeasureMapping {
    MeasureName: SchemaName

    SourceColumn: SchemaName

    TargetMeasureName: SchemaName

    @required
    MeasureValueType: MeasureValueType

    MultiMeasureAttributeMappings: MultiMeasureAttributeMappingList
}

structure MultiMeasureAttributeMapping {
    @required
    SourceColumn: SchemaName

    TargetMultiMeasureAttributeName: SchemaName

    MeasureValueType: ScalarMeasureValueType
}

structure MultiMeasureMappings {
    TargetMultiMeasureName: SchemaName

    @required
    MultiMeasureAttributeMappings: MultiMeasureAttributeMappingList
}

/// An attribute used in partitioning data in a table. A dimension key partitions data
/// using the values of the dimension specified by the dimension-name as partition key, while a
/// measure key partitions data using measure names (values of the 'measure_name' column).
structure PartitionKey {
    /// The type of the partition key. Options are DIMENSION (dimension key) and MEASURE
    /// (measure key).
    @required
    Type: PartitionKeyType

    /// The name of the attribute used for a dimension key.
    Name: SchemaName

    /// The level of enforcement for the specification of a dimension key in ingested records.
    /// Options are REQUIRED (dimension key must be specified) and OPTIONAL (dimension key does not
    /// have to be specified).
    EnforcementInRecord: PartitionKeyEnforcementLevel
}

/// Represents a time-series data point being written into Timestream. Each record
/// contains an array of dimensions. Dimensions represent the metadata attributes of a
/// time-series data point, such as the instance name or Availability Zone of an EC2 instance.
/// A record also contains the measure name, which is the name of the measure being collected
/// (for example, the CPU utilization of an EC2 instance). Additionally, a record contains the
/// measure value and the value type, which is the data type of the measure value. Also, the
/// record contains the timestamp of when the measure was collected and the timestamp unit,
/// which represents the granularity of the timestamp.
///
/// Records have a Version field, which is a 64-bit long that you
/// can use for updating data points. Writes of a duplicate record with the same dimension,
/// timestamp, and measure name but different measure value will only succeed if the
/// Version attribute of the record in the write request is higher than that of
/// the existing record. Timestream defaults to a Version of
/// 1 for records without the Version field.
structure Record {
    /// Contains the list of dimensions for time-series data points.
    Dimensions: Dimensions

    /// Measure represents the data attribute of the time series. For example, the CPU
    /// utilization of an EC2 instance or the RPM of a wind turbine are measures.
    MeasureName: SchemaName

    /// Contains the measure value for the time-series data point.
    MeasureValue: StringValue2048

    /// Contains the data type of the measure value for the time-series data point. Default
    /// type is DOUBLE. For more information, see Data types.
    MeasureValueType: MeasureValueType

    /// Contains the time at which the measure value for the data point was collected. The time
    /// value plus the unit provides the time elapsed since the epoch. For example, if the time
    /// value is 12345 and the unit is ms, then 12345 ms
    /// have elapsed since the epoch.
    Time: StringValue256

    /// The granularity of the timestamp unit. It indicates if the time value is in seconds,
    /// milliseconds, nanoseconds, or other supported values. Default is MILLISECONDS.
    TimeUnit: TimeUnit

    /// 64-bit attribute used for record updates. Write requests for duplicate data with a
    /// higher version number will update the existing measure value and version. In cases where
    /// the measure value is the same, Version will still be updated. Default value is 1.
    ///
    /// Version must be 1 or greater, or you will receive a ValidationException error.
    Version: RecordVersion = null

    /// Contains the list of MeasureValue for time-series data points.
    ///
    /// This is only allowed for type MULTI. For scalar values, use
    /// MeasureValue attribute of the record directly.
    MeasureValues: MeasureValues
}

/// Information on the records ingested by this request.
structure RecordsIngested {
    /// Total count of successfully ingested records.
    Total: Integer = 0

    /// Count of records ingested into the memory store.
    MemoryStore: Integer = 0

    /// Count of records ingested into the magnetic store.
    MagneticStore: Integer = 0
}

/// Represents records that were not successfully inserted into Timestream due to
/// data validation issues that must be resolved before reinserting time-series data into the
/// system.
structure RejectedRecord {
    /// The index of the record in the input request for WriteRecords. Indexes begin with 0.
    RecordIndex: RecordIndex = 0

    /// The reason why a record was not successfully inserted into Timestream.
    /// Possible causes of failure include:
    ///
    /// - Records with duplicate data where there are multiple records with the same
    ///   dimensions, timestamps, and measure names but:
    ///   - Measure values are different
    ///   - Version is not present in the request, or the value of
    ///     version in the new record is equal to or lower than the existing value
    ///
    ///   If Timestream rejects data for this case, the
    ///   ExistingVersion field in the RejectedRecords response
    ///   will indicate the current record’s version. To force an update, you can resend the
    ///   request with a version for the record set to a value greater than the
    ///   ExistingVersion.
    /// - Records with timestamps that lie outside the retention duration of the memory
    ///   store.
    ///
    ///   When the retention window is updated, you will receive a
    ///   RejectedRecords exception if you immediately try to ingest data
    ///   within the new window. To avoid a RejectedRecords exception, wait
    ///   until the duration of the new window to ingest new data. For further information,
    ///   see Best Practices for Configuring Timestream and the
    ///   explanation of how storage works in Timestream.
    /// - Records with dimensions or measures that exceed the Timestream defined
    ///   limits.
    ///
    /// For more information, see Access Management in the Timestream Developer Guide.
    Reason: ErrorMessage

    /// The existing version of the record. This value is populated in scenarios where an
    /// identical record exists with a higher version than the version in the write request.
    ExistingVersion: RecordVersion = null
}

/// WriteRecords would throw this exception in the following cases:
///
/// - Records with duplicate data where there are multiple records with the same
///   dimensions, timestamps, and measure names but:
///   - Measure values are different
///   - Version is not present in the request or the value of
///     version in the new record is equal to or lower than the existing value
///
///   In this case, if Timestream rejects data, the
///   ExistingVersion field in the RejectedRecords response
///   will indicate the current record’s version. To force an update, you can resend the
///   request with a version for the record set to a value greater than the
///   ExistingVersion.
/// - Records with timestamps that lie outside the retention duration of the memory
///   store.
/// - Records with dimensions or measures that exceed the Timestream defined
///   limits.
///
/// For more information, see Quotas in the Amazon Timestream Developer Guide.
@error("client")
@httpError(419)
structure RejectedRecordsException {
    Message: ErrorMessage

    RejectedRecords: RejectedRecords
}

/// Report configuration for a batch load task. This contains details about where error
/// reports are stored.
structure ReportConfiguration {
    /// Configuration of an S3 location to write error reports and events for a batch load.
    ReportS3Configuration: ReportS3Configuration
}

structure ReportS3Configuration {
    @required
    BucketName: S3BucketName

    ObjectKeyPrefix: S3ObjectKeyPrefix

    EncryptionOption: S3EncryptionOption

    KmsKeyId: StringValue2048
}

/// The operation tried to access a nonexistent resource. The resource might not be
/// specified correctly, or its status might not be ACTIVE.
@error("client")
@httpError(404)
structure ResourceNotFoundException {
    Message: ErrorMessage
}

@input
structure ResumeBatchLoadTaskRequest {
    /// The ID of the batch load task to resume.
    @required
    TaskId: BatchLoadTaskId
}

@output
structure ResumeBatchLoadTaskResponse {}

/// Retention properties contain the duration for which your time-series data must be stored
/// in the magnetic store and the memory store.
structure RetentionProperties {
    /// The duration for which data must be stored in the memory store.
    @required
    MemoryStoreRetentionPeriodInHours: MemoryStoreRetentionPeriodInHours = 0

    /// The duration for which data must be stored in the magnetic store.
    @required
    MagneticStoreRetentionPeriodInDays: MagneticStoreRetentionPeriodInDays = 0
}
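Both durations are set together when a table is created or updated. A boto3 sketch with hypothetical names, not part of the model:

```python
import boto3

client = boto3.client("timestream-write")
# Keep 24 hours in the memory store, then 7 days in the magnetic store.
client.update_table(
    DatabaseName="sampleDb",
    TableName="sampleTable",
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": 24,
        "MagneticStoreRetentionPeriodInDays": 7,
    },
)
```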

/// The configuration that specifies an S3 location.
structure S3Configuration {
    /// The bucket name of the customer S3 bucket.
    BucketName: S3BucketName

    /// The object key prefix for the customer S3 location.
    ObjectKeyPrefix: S3ObjectKeyPrefix

    /// The encryption option for the customer S3 location. Options are S3 server-side
    /// encryption with an S3 managed key or Amazon Web Services managed key.
    EncryptionOption: S3EncryptionOption

    /// The KMS key ID for the customer S3 location when encrypting with an
    /// Amazon Web Services managed key.
    KmsKeyId: StringValue2048
}

/// A Schema specifies the expected data model of the table.
structure Schema {
    /// A non-empty list of partition keys defining the attributes used to partition the table
    /// data. The order of the list determines the partition hierarchy. The name and type of each
    /// partition key as well as the partition key order cannot be changed after the table is
    /// created. However, the enforcement level of each partition key can be changed.
    CompositePartitionKey: PartitionKeyList
}
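Because the partition key order and types cannot change after creation, the Schema is passed to CreateTable. A boto3 sketch with hypothetical names, not part of the model:

```python
import boto3

client = boto3.client("timestream-write")
client.create_table(
    DatabaseName="sampleDb",
    TableName="sampleTable",
    Schema={
        "CompositePartitionKey": [{
            "Type": "DIMENSION",            # partition on a dimension value
            "Name": "host",                 # hypothetical dimension name
            "EnforcementInRecord": "REQUIRED",
        }]
    },
)
```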

/// The instance quota of the resource has been exceeded for this account.
@error("client")
@httpError(402)
structure ServiceQuotaExceededException {
    Message: ErrorMessage
}

/// Represents a database table in Timestream. Tables contain one or more related
/// time series. You can modify the retention duration of the memory store and the magnetic
/// store for a table.
structure Table {
    /// The Amazon Resource Name that uniquely identifies this table.
    Arn: String

    /// The name of the Timestream table.
    TableName: ResourceName

    /// The name of the Timestream database that contains this table.
    DatabaseName: ResourceName

    /// The current state of the table:
    /// - DELETING - The table is being deleted.
    /// - ACTIVE - The table is ready for use.
    TableStatus: TableStatus

    /// The retention duration for the memory store and magnetic store.
    RetentionProperties: RetentionProperties

    /// The time when the Timestream table was created.
    CreationTime: Date

    /// The time when the Timestream table was last updated.
    LastUpdatedTime: Date

    /// Contains properties to set on the table when enabling magnetic store writes.
    MagneticStoreWriteProperties: MagneticStoreWriteProperties

    /// The schema of the table.
    Schema: Schema
}

/// A tag is a label that you assign to a Timestream database and/or table. Each
/// tag consists of a key and an optional value, both of which you define. With tags, you can
/// categorize databases and/or tables, for example, by purpose, owner, or environment.
structure Tag {
    /// The key of the tag. Tag keys are case sensitive.
    @required
    Key: TagKey

    /// The value of the tag. Tag values are case-sensitive and can be null.
    @required
    Value: TagValue
}

@input
structure TagResourceRequest {
    /// Identifies the Timestream resource to which tags should be added. This value
    /// is an Amazon Resource Name (ARN).
    @required
    ResourceARN: AmazonResourceName

    /// The tags to be assigned to the Timestream resource.
    @required
    Tags: TagList
}

@output
structure TagResourceResponse {}

/// Too many requests were made by a user and they exceeded the service quotas. The request
/// was throttled.
@error("client")
@httpError(429)
structure ThrottlingException {
    @required
    Message: ErrorMessage
}

@input
structure UntagResourceRequest {
    /// The Timestream resource that the tags will be removed from. This value is an
    /// Amazon Resource Name (ARN).
    @required
    ResourceARN: AmazonResourceName

    /// A list of tag keys. Existing tags of the resource whose keys are members of this list
    /// will be removed from the Timestream resource.
    @required
    TagKeys: TagKeyList
}

@output
structure UntagResourceResponse {}

@input
structure UpdateDatabaseRequest {
    /// The name of the database.
    @required
    DatabaseName: ResourceName

    /// The identifier of the new KMS key (KmsKeyId) to be used to
    /// encrypt the data stored in the database. If the KmsKeyId currently registered
    /// with the database is the same as the KmsKeyId in the request, there will not
    /// be any update.
    ///
    /// You can specify the KmsKeyId using any of the following:
    /// - Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
    /// - Key ARN: arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
    /// - Alias name: alias/ExampleAlias
    /// - Alias ARN: arn:aws:kms:us-east-1:111122223333:alias/ExampleAlias
    @required
    KmsKeyId: StringValue2048
}

@output
structure UpdateDatabaseResponse {
    Database: Database
}

@input
structure UpdateTableRequest {

    /// The name of the Timestream database.
    @required
    DatabaseName: ResourceName

    /// The name of the Timestream table.
    @required
    TableName: ResourceName

    /// The retention duration of the memory store and the magnetic store.
    RetentionProperties: RetentionProperties

    /// Contains properties to set on the table when enabling magnetic store writes.
    MagneticStoreWriteProperties: MagneticStoreWriteProperties

    /// The schema of the table.
    Schema: Schema
}

@output
structure UpdateTableResponse {
    /// The updated Timestream table.
    Table: Table
}

/// An invalid or malformed request.
@error("client")
@httpError(400)
structure ValidationException {
    @required
    Message: ErrorMessage
}

@input
structure WriteRecordsRequest {

    /// The name of the Timestream database.
    @required
    DatabaseName: ResourceName

    /// The name of the Timestream table.
    @required
    TableName: ResourceName

    /// A record that contains the common measure, dimension, time, and version attributes
    /// shared across all the records in the request. The measure and dimension attributes
    /// specified will be merged with the measure and dimension attributes in the records object
    /// when the data is written into Timestream. Dimensions may not overlap, or a
    /// ValidationException will be thrown. In other words, a record must contain
    /// dimensions with unique names.
    CommonAttributes: Record

    /// An array of records that contain the unique measure, dimension, time, and version
    /// attributes for each time-series data point.
    @required
    Records: Records
}

@output
structure WriteRecordsResponse {
    /// Information on the records ingested by this request.
    RecordsIngested: RecordsIngested
}

list BatchLoadTaskList {
    member: BatchLoadTask
}

list DatabaseList {
    member: Database
}

@length(min: 1)
list DimensionMappings {
    member: DimensionMapping
}

@length(min: 0, max: 128)
list Dimensions {
    member: Dimension
}

list Endpoints {
    member: Endpoint
}

list MeasureValues {
    member: MeasureValue
}

@length(min: 1)
list MixedMeasureMappingList {
    member: MixedMeasureMapping
}

@length(min: 1)
list MultiMeasureAttributeMappingList {
    member: MultiMeasureAttributeMapping
}

@length(min: 1)
list PartitionKeyList {
    member: PartitionKey
}

@length(min: 1, max: 100)
list Records {
    member: Record
}

list RejectedRecords {
    member: RejectedRecord
}

list TableList {
    member: Table
}

@length(min: 0, max: 200)
list TagKeyList {
    member: TagKey
}

@length(min: 0, max: 200)
list TagList {
    member: Tag
}

@length(min: 1, max: 1011)
string AmazonResourceName

enum BatchLoadDataFormat {
    CSV
}

enum BatchLoadStatus {
    CREATED
    IN_PROGRESS
    FAILED
    SUCCEEDED
    PROGRESS_STOPPED
    PENDING_RESUME
}

@length(min: 3, max: 32)
@pattern("^[A-Z0-9]+$")
string BatchLoadTaskId

boolean Boolean

@length(min: 1, max: 64)
@sensitive
string ClientRequestToken

timestamp Date

enum DimensionValueType {
    VARCHAR
}

string ErrorMessage

@default(0)
integer Integer

@default(0)
long Long

@default(0)
@range(min: 1, max: 73000)
long MagneticStoreRetentionPeriodInDays

enum MeasureValueType {
    DOUBLE
    BIGINT
    VARCHAR
    BOOLEAN
    TIMESTAMP
    MULTI
}

@default(0)
@range(min: 1, max: 8766)
long MemoryStoreRetentionPeriodInHours

@range(min: 1, max: 100)
integer PageLimit

@range(min: 1, max: 20)
integer PaginationLimit

enum PartitionKeyEnforcementLevel {
    REQUIRED
    OPTIONAL
}

enum PartitionKeyType {
    DIMENSION
    MEASURE
}

@default(0)
integer RecordIndex

@default(0)
long RecordVersion

@pattern("^[a-zA-Z0-9_.-]+$")
string ResourceCreateAPIName

string ResourceName

@length(min: 3, max: 63)
@pattern("^[a-z0-9][\\.\\-a-z0-9]{1,61}[a-z0-9]$")
string S3BucketName

enum S3EncryptionOption {
    SSE_S3
    SSE_KMS
}

@length(min: 1, max: 1024)
@pattern("^[a-zA-Z0-9|!\\-_*'\\(\\)]([a-zA-Z0-9]|[!\\-_*'\\(\\)\\/.])+$")
string S3ObjectKey

@length(min: 1, max: 928)
@pattern("^[a-zA-Z0-9|!\\-_*'\\(\\)]([a-zA-Z0-9]|[!\\-_*'\\(\\)\\/.])+$")
string S3ObjectKeyPrefix

enum ScalarMeasureValueType {
    DOUBLE
    BIGINT
    BOOLEAN
    VARCHAR
    TIMESTAMP
}

@length(min: 1)
string SchemaName

string SchemaValue

string String

@length(min: 1, max: 1)
string StringValue1

@length(min: 1, max: 2048)
string StringValue2048

@length(min: 1, max: 256)
string StringValue256

enum TableStatus {
    ACTIVE
    DELETING
    RESTORING
}

@length(min: 1, max: 128)
string TagKey

@length(min: 0, max: 256)
string TagValue

enum TimeUnit {
    MILLISECONDS
    SECONDS
    MICROSECONDS
    NANOSECONDS
}
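As a closing illustration of CommonAttributes from WriteRecordsRequest above: attributes shared by every record can be factored out once and merged server-side. A boto3 sketch with hypothetical names, not part of the model:

```python
import boto3
import time

client = boto3.client("timestream-write")
now_ms = str(int(time.time() * 1000))

# Dimensions, value type, and time shared by every record are stated once;
# Timestream merges them into each record at write time.
client.write_records(
    DatabaseName="sampleDb",
    TableName="sampleTable",
    CommonAttributes={
        "Dimensions": [{"Name": "host", "Value": "web-01"}],
        "MeasureValueType": "DOUBLE",
        "Time": now_ms,
    },
    Records=[
        {"MeasureName": "cpu", "MeasureValue": "87.5"},
        {"MeasureName": "memory", "MeasureValue": "42.1"},
    ],
)
```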



