
com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseAsync Maven / Gradle / Ivy


The AWS Java SDK for Amazon Kinesis module holds the client classes that are used for communicating with Amazon Kinesis Service

/*
 * Copyright 2014-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 * 
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
 * the License. A copy of the License is located at
 * 
 * http://aws.amazon.com/apache2.0
 * 
 * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
 * and limitations under the License.
 */
package com.amazonaws.services.kinesisfirehose;

import javax.annotation.Generated;

import com.amazonaws.services.kinesisfirehose.model.*;

/**
 * Interface for accessing Firehose asynchronously. Each asynchronous method will return a Java Future object
 * representing the asynchronous operation; overloads which accept an {@code AsyncHandler} can be used to receive
 * notification when an asynchronous operation completes.
 * <p>
 * <b>Note:</b> Do not directly implement this interface; new methods are added to it regularly. Extend from
 * {@link com.amazonaws.services.kinesisfirehose.AbstractAmazonKinesisFirehoseAsync} instead.
 * <p>
 * <b>Amazon Kinesis Data Firehose API Reference</b>
 * <p>
 * Amazon Kinesis Data Firehose is a fully managed service that delivers real-time streaming data to destinations such
 * as Amazon Simple Storage Service (Amazon S3), Amazon Elasticsearch Service (Amazon ES), Amazon Redshift, and Splunk.
 */
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public interface AmazonKinesisFirehoseAsync extends AmazonKinesisFirehose {

    /**

     * <p>
     * Creates a Kinesis Data Firehose delivery stream.
     * </p>
     * <p>
     * By default, you can create up to 50 delivery streams per AWS Region.
     * </p>
     * <p>
     * This is an asynchronous operation that immediately returns. The initial status of the delivery stream is
     * <code>CREATING</code>. After the delivery stream is created, its status is <code>ACTIVE</code> and it now accepts
     * data. Attempts to send data to a delivery stream that is not in the <code>ACTIVE</code> state cause an exception.
     * To check the state of a delivery stream, use <a>DescribeDeliveryStream</a>.
     * </p>
     * <p>
     * A Kinesis Data Firehose delivery stream can be configured to receive records directly from providers using
     * <a>PutRecord</a> or <a>PutRecordBatch</a>, or it can be configured to use an existing Kinesis stream as its
     * source. To specify a Kinesis data stream as input, set the <code>DeliveryStreamType</code> parameter to
     * <code>KinesisStreamAsSource</code>, and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in the
     * <code>KinesisStreamSourceConfiguration</code> parameter.
     * </p>
     * <p>
     * A delivery stream is configured with a single destination: Amazon S3, Amazon ES, Amazon Redshift, or Splunk. You
     * must specify only one of the following destination configuration parameters:
     * <code>ExtendedS3DestinationConfiguration</code>, <code>S3DestinationConfiguration</code>,
     * <code>ElasticsearchDestinationConfiguration</code>, <code>RedshiftDestinationConfiguration</code>, or
     * <code>SplunkDestinationConfiguration</code>.
     * </p>
     * <p>
     * When you specify <code>S3DestinationConfiguration</code>, you can also provide the following optional values:
     * <code>BufferingHints</code>, <code>EncryptionConfiguration</code>, and <code>CompressionFormat</code>. By
     * default, if no <code>BufferingHints</code> value is provided, Kinesis Data Firehose buffers data up to 5 MB or
     * for 5 minutes, whichever condition is satisfied first. <code>BufferingHints</code> is a hint, so there are some
     * cases where the service cannot adhere to these conditions strictly. For example, record boundaries might be such
     * that the size is a little over or under the configured buffering size. By default, no encryption is performed. We
     * strongly recommend that you enable encryption to ensure secure data storage in Amazon S3.
     * </p>
     * <p>
     * A few notes about Amazon Redshift as a destination:
     * </p>
     * <ul>
     * <li>
     * <p>
     * An Amazon Redshift destination requires an S3 bucket as an intermediate location. Kinesis Data Firehose first
     * delivers data to Amazon S3 and then uses <code>COPY</code> syntax to load data into an Amazon Redshift table.
     * This is specified in the <code>RedshiftDestinationConfiguration.S3Configuration</code> parameter.
     * </p>
     * </li>
     * <li>
     * <p>
     * The compression formats <code>SNAPPY</code> or <code>ZIP</code> cannot be specified in
     * <code>RedshiftDestinationConfiguration.S3Configuration</code> because the Amazon Redshift <code>COPY</code>
     * operation that reads from the S3 bucket doesn't support these compression formats.
     * </p>
     * </li>
     * <li>
     * <p>
     * We strongly recommend that you use the user name and password you provide exclusively with Kinesis Data Firehose,
     * and that the permissions for the account are restricted to Amazon Redshift <code>INSERT</code> permissions.
     * </p>
     * </li>
     * </ul>
     * <p>
     * Kinesis Data Firehose assumes the IAM role that is configured as part of the destination. The role should allow
     * the Kinesis Data Firehose principal to assume the role, and the role should have permissions that allow the
     * service to deliver the data. For more information, see Grant Kinesis Data Firehose Access to an Amazon S3
     * Destination in the <i>Amazon Kinesis Data Firehose Developer Guide</i>.
     * </p>
     * 
     * @param createDeliveryStreamRequest
     * @return A Java Future containing the result of the CreateDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.CreateDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<CreateDeliveryStreamResult> createDeliveryStreamAsync(CreateDeliveryStreamRequest createDeliveryStreamRequest);
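The create-then-poll flow this Javadoc describes can be sketched with standard-library types only. Everything below (`DeliveryStreamSimulator`, the `Status` enum, the timings) is an illustrative stand-in, not the SDK: it only shows the shape of "the call returns a `Future` immediately; poll the describe call until the stream is `ACTIVE` before sending data."

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch of "create returns immediately; poll until ACTIVE".
// DeliveryStreamSimulator is a made-up stand-in; the real calls are
// createDeliveryStreamAsync(...) and DescribeDeliveryStream on the SDK client.
public class CreatePollSketch {
    enum Status { CREATING, ACTIVE }

    static class DeliveryStreamSimulator {
        private volatile Status status = Status.CREATING;
        private final ExecutorService pool = Executors.newSingleThreadExecutor();

        // Mirrors createDeliveryStreamAsync: returns immediately with a Future,
        // while the stream transitions CREATING -> ACTIVE in the background.
        Future<String> createAsync(String name) {
            return pool.submit(() -> {
                Thread.sleep(50); // simulated provisioning delay
                status = Status.ACTIVE;
                return name;
            });
        }

        Status describe() { return status; } // stands in for DescribeDeliveryStream

        void shutdown() { pool.shutdown(); }
    }

    // Poll until ACTIVE, as the docs recommend before sending any records.
    static Status waitForActive(DeliveryStreamSimulator s, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (s.describe() != Status.ACTIVE && System.currentTimeMillis() < deadline) {
            Thread.sleep(10);
        }
        return s.describe();
    }

    public static void main(String[] args) throws Exception {
        DeliveryStreamSimulator sim = new DeliveryStreamSimulator();
        sim.createAsync("my-stream"); // returns immediately; status is still CREATING
        System.out.println("final status: " + waitForActive(sim, 2000));
        sim.shutdown();
    }
}
```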

    /**
     * <p>
     * Creates a Kinesis Data Firehose delivery stream.
     * </p>
     * <p>
     * By default, you can create up to 50 delivery streams per AWS Region.
     * </p>
     * <p>
     * This is an asynchronous operation that immediately returns. The initial status of the delivery stream is
     * <code>CREATING</code>. After the delivery stream is created, its status is <code>ACTIVE</code> and it now accepts
     * data. Attempts to send data to a delivery stream that is not in the <code>ACTIVE</code> state cause an exception.
     * To check the state of a delivery stream, use <a>DescribeDeliveryStream</a>.
     * </p>
     * <p>
     * A Kinesis Data Firehose delivery stream can be configured to receive records directly from providers using
     * <a>PutRecord</a> or <a>PutRecordBatch</a>, or it can be configured to use an existing Kinesis stream as its
     * source. To specify a Kinesis data stream as input, set the <code>DeliveryStreamType</code> parameter to
     * <code>KinesisStreamAsSource</code>, and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in the
     * <code>KinesisStreamSourceConfiguration</code> parameter.
     * </p>
     * <p>
     * A delivery stream is configured with a single destination: Amazon S3, Amazon ES, Amazon Redshift, or Splunk. You
     * must specify only one of the following destination configuration parameters:
     * <code>ExtendedS3DestinationConfiguration</code>, <code>S3DestinationConfiguration</code>,
     * <code>ElasticsearchDestinationConfiguration</code>, <code>RedshiftDestinationConfiguration</code>, or
     * <code>SplunkDestinationConfiguration</code>.
     * </p>
     * <p>
     * When you specify <code>S3DestinationConfiguration</code>, you can also provide the following optional values:
     * <code>BufferingHints</code>, <code>EncryptionConfiguration</code>, and <code>CompressionFormat</code>. By
     * default, if no <code>BufferingHints</code> value is provided, Kinesis Data Firehose buffers data up to 5 MB or
     * for 5 minutes, whichever condition is satisfied first. <code>BufferingHints</code> is a hint, so there are some
     * cases where the service cannot adhere to these conditions strictly. For example, record boundaries might be such
     * that the size is a little over or under the configured buffering size. By default, no encryption is performed. We
     * strongly recommend that you enable encryption to ensure secure data storage in Amazon S3.
     * </p>
     * <p>
     * A few notes about Amazon Redshift as a destination:
     * </p>
     * <ul>
     * <li>
     * <p>
     * An Amazon Redshift destination requires an S3 bucket as an intermediate location. Kinesis Data Firehose first
     * delivers data to Amazon S3 and then uses <code>COPY</code> syntax to load data into an Amazon Redshift table.
     * This is specified in the <code>RedshiftDestinationConfiguration.S3Configuration</code> parameter.
     * </p>
     * </li>
     * <li>
     * <p>
     * The compression formats <code>SNAPPY</code> or <code>ZIP</code> cannot be specified in
     * <code>RedshiftDestinationConfiguration.S3Configuration</code> because the Amazon Redshift <code>COPY</code>
     * operation that reads from the S3 bucket doesn't support these compression formats.
     * </p>
     * </li>
     * <li>
     * <p>
     * We strongly recommend that you use the user name and password you provide exclusively with Kinesis Data Firehose,
     * and that the permissions for the account are restricted to Amazon Redshift <code>INSERT</code> permissions.
     * </p>
     * </li>
     * </ul>
     * <p>
     * Kinesis Data Firehose assumes the IAM role that is configured as part of the destination. The role should allow
     * the Kinesis Data Firehose principal to assume the role, and the role should have permissions that allow the
     * service to deliver the data. For more information, see Grant Kinesis Data Firehose Access to an Amazon S3
     * Destination in the <i>Amazon Kinesis Data Firehose Developer Guide</i>.
     * </p>
     * 
     * @param createDeliveryStreamRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the CreateDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.CreateDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<CreateDeliveryStreamResult> createDeliveryStreamAsync(CreateDeliveryStreamRequest createDeliveryStreamRequest,
            com.amazonaws.handlers.AsyncHandler<CreateDeliveryStreamRequest, CreateDeliveryStreamResult> asyncHandler);

    /**

     * <p>
     * Deletes a delivery stream and its data.
     * </p>
     * <p>
     * You can delete a delivery stream only if it is in the <code>ACTIVE</code> or <code>DELETING</code> state, and not
     * in the <code>CREATING</code> state. While the deletion request is in process, the delivery stream is in the
     * <code>DELETING</code> state.
     * </p>
     * <p>
     * To check the state of a delivery stream, use <a>DescribeDeliveryStream</a>.
     * </p>
     * <p>
     * While the delivery stream is in the <code>DELETING</code> state, the service might continue to accept records,
     * but it doesn't make any guarantees with respect to delivering the data. Therefore, as a best practice, first stop
     * any applications that are sending records before you delete a delivery stream.
     * </p>
     * 
     * @param deleteDeliveryStreamRequest
     * @return A Java Future containing the result of the DeleteDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.DeleteDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<DeleteDeliveryStreamResult> deleteDeliveryStreamAsync(DeleteDeliveryStreamRequest deleteDeliveryStreamRequest);

    /**

     * <p>
     * Deletes a delivery stream and its data.
     * </p>
     * <p>
     * You can delete a delivery stream only if it is in the <code>ACTIVE</code> or <code>DELETING</code> state, and not
     * in the <code>CREATING</code> state. While the deletion request is in process, the delivery stream is in the
     * <code>DELETING</code> state.
     * </p>
     * <p>
     * To check the state of a delivery stream, use <a>DescribeDeliveryStream</a>.
     * </p>
     * <p>
     * While the delivery stream is in the <code>DELETING</code> state, the service might continue to accept records,
     * but it doesn't make any guarantees with respect to delivering the data. Therefore, as a best practice, first stop
     * any applications that are sending records before you delete a delivery stream.
     * </p>
     * 
     * @param deleteDeliveryStreamRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the DeleteDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.DeleteDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<DeleteDeliveryStreamResult> deleteDeliveryStreamAsync(DeleteDeliveryStreamRequest deleteDeliveryStreamRequest,
            com.amazonaws.handlers.AsyncHandler<DeleteDeliveryStreamRequest, DeleteDeliveryStreamResult> asyncHandler);

    /**

     * <p>
     * Describes the specified delivery stream and gets its status. For example, after your delivery stream is created,
     * call <code>DescribeDeliveryStream</code> to see whether the delivery stream is <code>ACTIVE</code> and therefore
     * ready for data to be sent to it.
     * </p>
     * 
     * @param describeDeliveryStreamRequest
     * @return A Java Future containing the result of the DescribeDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.DescribeDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<DescribeDeliveryStreamResult> describeDeliveryStreamAsync(DescribeDeliveryStreamRequest describeDeliveryStreamRequest);

    /**

     * <p>
     * Describes the specified delivery stream and gets its status. For example, after your delivery stream is created,
     * call <code>DescribeDeliveryStream</code> to see whether the delivery stream is <code>ACTIVE</code> and therefore
     * ready for data to be sent to it.
     * </p>
     * 
     * @param describeDeliveryStreamRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the DescribeDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.DescribeDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<DescribeDeliveryStreamResult> describeDeliveryStreamAsync(DescribeDeliveryStreamRequest describeDeliveryStreamRequest,
            com.amazonaws.handlers.AsyncHandler<DescribeDeliveryStreamRequest, DescribeDeliveryStreamResult> asyncHandler);

    /**

     * <p>
     * Lists your delivery streams in alphabetical order of their names.
     * </p>
     * <p>
     * The number of delivery streams might be too large to return using a single call to
     * <code>ListDeliveryStreams</code>. You can limit the number of delivery streams returned, using the
     * <code>Limit</code> parameter. To determine whether there are more delivery streams to list, check the value of
     * <code>HasMoreDeliveryStreams</code> in the output. If there are more delivery streams to list, you can request
     * them by calling this operation again and setting the <code>ExclusiveStartDeliveryStreamName</code> parameter to
     * the name of the last delivery stream returned in the last call.
     * </p>
     * 
     * @param listDeliveryStreamsRequest
     * @return A Java Future containing the result of the ListDeliveryStreams operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.ListDeliveryStreams
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<ListDeliveryStreamsResult> listDeliveryStreamsAsync(ListDeliveryStreamsRequest listDeliveryStreamsRequest);
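The pagination contract described above (at most `Limit` names per page, `HasMoreDeliveryStreams` signalling more pages, `ExclusiveStartDeliveryStreamName` resuming strictly after the last name seen) can be sketched with an in-memory stand-in. `ListPage`, `listDeliveryStreams`, and `listAll` below are illustrative, not SDK types.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

// Hedged sketch of the ListDeliveryStreams pagination loop, with the service
// replaced by an in-memory sorted set of stream names.
public class ListPaginationSketch {
    static final class ListPage {
        final List<String> names;
        final boolean hasMore; // stands in for HasMoreDeliveryStreams
        ListPage(List<String> names, boolean hasMore) { this.names = names; this.hasMore = hasMore; }
    }

    // In-memory stand-in for the service side: alphabetical order, at most
    // `limit` names, resuming strictly after `exclusiveStart`.
    static ListPage listDeliveryStreams(SortedSet<String> all, String exclusiveStart, int limit) {
        List<String> page = new ArrayList<>();
        for (String name : all) {
            if (exclusiveStart != null && name.compareTo(exclusiveStart) <= 0) continue;
            if (page.size() == limit) return new ListPage(page, true); // another name exists
            page.add(name);
        }
        return new ListPage(page, false);
    }

    // The client-side loop the Javadoc describes: keep calling until hasMore is false,
    // passing the last name returned as the next exclusive start.
    static List<String> listAll(SortedSet<String> all, int limit) {
        List<String> result = new ArrayList<>();
        String start = null;
        ListPage page;
        do {
            page = listDeliveryStreams(all, start, limit);
            result.addAll(page.names);
            if (!page.names.isEmpty()) start = page.names.get(page.names.size() - 1);
        } while (page.hasMore);
        return result;
    }

    public static void main(String[] args) {
        SortedSet<String> streams = new TreeSet<>(Arrays.asList("alpha", "beta", "delta", "gamma", "omega"));
        System.out.println(listAll(streams, 2)); // pages of two until exhausted
    }
}
```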

    /**
     * <p>
     * Lists your delivery streams in alphabetical order of their names.
     * </p>
     * <p>
     * The number of delivery streams might be too large to return using a single call to
     * <code>ListDeliveryStreams</code>. You can limit the number of delivery streams returned, using the
     * <code>Limit</code> parameter. To determine whether there are more delivery streams to list, check the value of
     * <code>HasMoreDeliveryStreams</code> in the output. If there are more delivery streams to list, you can request
     * them by calling this operation again and setting the <code>ExclusiveStartDeliveryStreamName</code> parameter to
     * the name of the last delivery stream returned in the last call.
     * </p>
     * 
     * @param listDeliveryStreamsRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the ListDeliveryStreams operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.ListDeliveryStreams
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<ListDeliveryStreamsResult> listDeliveryStreamsAsync(ListDeliveryStreamsRequest listDeliveryStreamsRequest,
            com.amazonaws.handlers.AsyncHandler<ListDeliveryStreamsRequest, ListDeliveryStreamsResult> asyncHandler);

    /**

     * <p>
     * Lists the tags for the specified delivery stream. This operation has a limit of five transactions per second per
     * account.
     * </p>
     * 
     * @param listTagsForDeliveryStreamRequest
     * @return A Java Future containing the result of the ListTagsForDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.ListTagsForDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<ListTagsForDeliveryStreamResult> listTagsForDeliveryStreamAsync(
            ListTagsForDeliveryStreamRequest listTagsForDeliveryStreamRequest);

    /**

     * <p>
     * Lists the tags for the specified delivery stream. This operation has a limit of five transactions per second per
     * account.
     * </p>
     * 
     * @param listTagsForDeliveryStreamRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the ListTagsForDeliveryStream operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.ListTagsForDeliveryStream
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<ListTagsForDeliveryStreamResult> listTagsForDeliveryStreamAsync(
            ListTagsForDeliveryStreamRequest listTagsForDeliveryStreamRequest,
            com.amazonaws.handlers.AsyncHandler<ListTagsForDeliveryStreamRequest, ListTagsForDeliveryStreamResult> asyncHandler);

    /**

     * <p>
     * Writes a single data record into an Amazon Kinesis Data Firehose delivery stream. To write multiple data records
     * into a delivery stream, use <a>PutRecordBatch</a>. Applications using these operations are referred to as
     * producers.
     * </p>
     * <p>
     * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5
     * MB per second. If you use <a>PutRecord</a> and <a>PutRecordBatch</a>, the limits are an aggregate across these
     * two operations for each delivery stream. For more information about limits and how to request an increase, see
     * Amazon Kinesis Data Firehose Limits.
     * </p>
     * <p>
     * You must specify the name of the delivery stream and the data record when using <a>PutRecord</a>. The data record
     * consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it can be a
     * segment from a log file, geographic location data, website clickstream data, and so on.
     * </p>
     * <p>
     * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs
     * at the destination, a common solution is to use delimiters in the data, such as a newline (<code>\n</code>) or
     * some other character unique within the data. This allows the consumer application to parse individual data items
     * when reading the data from the destination.
     * </p>
     * <p>
     * The <code>PutRecord</code> operation returns a <code>RecordId</code>, which is a unique string assigned to each
     * record. Producer applications can use this ID for purposes such as auditability and investigation.
     * </p>
     * <p>
     * If the <code>PutRecord</code> operation throws a <code>ServiceUnavailableException</code>, back off and retry. If
     * the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
     * </p>
     * <p>
     * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery
     * stream as it tries to send the records to the destination. If the destination is unreachable for more than 24
     * hours, the data is no longer available.
     * </p>
     * <important>
     * <p>
     * Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the
     * raw data, then perform base64 encoding.
     * </p>
     * </important>
     * 
     * @param putRecordRequest
     * @return A Java Future containing the result of the PutRecord operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.PutRecord
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<PutRecordResult> putRecordAsync(PutRecordRequest putRecordRequest);
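The delimiter advice above can be shown end-to-end with plain strings standing in for record data: the producer terminates each record with a newline before sending, and the consumer splits the delivered blob (Firehose concatenates buffered records) back into items. This assumes the delimiter does not occur inside the records themselves, which is exactly the "character unique within the data" condition the Javadoc states.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hedged sketch of the delimiter pattern: append '\n' per record on the producer
// side, split on '\n' at the destination. Plain strings stand in for record blobs.
public class DelimiterSketch {
    // Producer side: terminate every record with '\n' before it goes into PutRecord.
    // Precondition: '\n' must not occur inside the records themselves.
    static byte[] frame(List<String> records) {
        String joined = records.stream().map(r -> r + "\n").collect(Collectors.joining());
        return joined.getBytes(StandardCharsets.UTF_8);
    }

    // Destination side: the delivered object is a concatenation of many framed
    // blobs; splitting on '\n' recovers the individual records.
    static List<String> parse(byte[] delivered) {
        String text = new String(delivered, StandardCharsets.UTF_8);
        return Arrays.asList(text.split("\n"));
    }

    public static void main(String[] args) {
        byte[] blob = frame(Arrays.asList("{\"id\":1}", "{\"id\":2}", "{\"id\":3}"));
        System.out.println(parse(blob)); // three records recovered intact
    }
}
```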

    /**
     * <p>
     * Writes a single data record into an Amazon Kinesis Data Firehose delivery stream. To write multiple data records
     * into a delivery stream, use <a>PutRecordBatch</a>. Applications using these operations are referred to as
     * producers.
     * </p>
     * <p>
     * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5
     * MB per second. If you use <a>PutRecord</a> and <a>PutRecordBatch</a>, the limits are an aggregate across these
     * two operations for each delivery stream. For more information about limits and how to request an increase, see
     * Amazon Kinesis Data Firehose Limits.
     * </p>
     * <p>
     * You must specify the name of the delivery stream and the data record when using <a>PutRecord</a>. The data record
     * consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it can be a
     * segment from a log file, geographic location data, website clickstream data, and so on.
     * </p>
     * <p>
     * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs
     * at the destination, a common solution is to use delimiters in the data, such as a newline (<code>\n</code>) or
     * some other character unique within the data. This allows the consumer application to parse individual data items
     * when reading the data from the destination.
     * </p>
     * <p>
     * The <code>PutRecord</code> operation returns a <code>RecordId</code>, which is a unique string assigned to each
     * record. Producer applications can use this ID for purposes such as auditability and investigation.
     * </p>
     * <p>
     * If the <code>PutRecord</code> operation throws a <code>ServiceUnavailableException</code>, back off and retry. If
     * the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.
     * </p>
     * <p>
     * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery
     * stream as it tries to send the records to the destination. If the destination is unreachable for more than 24
     * hours, the data is no longer available.
     * </p>
     * <important>
     * <p>
     * Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the
     * raw data, then perform base64 encoding.
     * </p>
     * </important>
     * 
     * @param putRecordRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the PutRecord operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.PutRecord
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<PutRecordResult> putRecordAsync(PutRecordRequest putRecordRequest,
            com.amazonaws.handlers.AsyncHandler<PutRecordRequest, PutRecordResult> asyncHandler);
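The base64 warning in the Javadoc is easy to demonstrate with `java.util.Base64` alone: encoding two chunks separately and concatenating the encoded strings leaves `=` padding mid-stream, so the result is not the base64 encoding of the concatenated raw bytes. The helper names below are illustrative, not SDK methods.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hedged illustration: base64(a) + base64(b) is NOT base64(a + b).
public class Base64Sketch {
    // Wrong approach: encode each chunk, then concatenate the encoded strings.
    static String encodeSeparately(byte[] a, byte[] b) {
        return Base64.getEncoder().encodeToString(a) + Base64.getEncoder().encodeToString(b);
    }

    // Right approach, per the docs: concatenate the raw bytes first, encode once.
    static String encodeConcatenated(byte[] a, byte[] b) {
        byte[] raw = new byte[a.length + b.length];
        System.arraycopy(a, 0, raw, 0, a.length);
        System.arraycopy(b, 0, raw, a.length, b.length);
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) {
        byte[] part1 = "hello".getBytes(StandardCharsets.UTF_8);
        byte[] part2 = "world".getBytes(StandardCharsets.UTF_8);

        String wrong = encodeSeparately(part1, part2);   // padding lands mid-stream
        String right = encodeConcatenated(part1, part2); // decodes cleanly

        System.out.println(wrong.equals(right)); // false
        System.out.println(new String(Base64.getDecoder().decode(right), StandardCharsets.UTF_8)); // helloworld
    }
}
```

A strict decoder rejects the "wrong" string outright, because padding characters are only valid at the end of the stream.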

    /**
     * <p>
     * Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per
     * producer than when writing single records. To write single data records into a delivery stream, use
     * <a>PutRecord</a>. Applications using these operations are referred to as producers.
     * </p>
     * <p>
     * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5
     * MB per second. If you use <a>PutRecord</a> and <a>PutRecordBatch</a>, the limits are an aggregate across these
     * two operations for each delivery stream. For more information about limits, see Amazon Kinesis Data Firehose
     * Limits.
     * </p>
     * <p>
     * Each <a>PutRecordBatch</a> request supports up to 500 records. Each record in the request can be as large as
     * 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
     * </p>
     * <p>
     * You must specify the name of the delivery stream and the data record when using <a>PutRecord</a>. The data record
     * consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a
     * segment from a log file, geographic location data, website clickstream data, and so on.
     * </p>
     * <p>
     * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs
     * at the destination, a common solution is to use delimiters in the data, such as a newline (<code>\n</code>) or
     * some other character unique within the data. This allows the consumer application to parse individual data items
     * when reading the data from the destination.
     * </p>
     * <p>
     * The <a>PutRecordBatch</a> response includes a count of failed records, <code>FailedPutCount</code>, and an array
     * of responses, <code>RequestResponses</code>. Even if the <a>PutRecordBatch</a> call succeeds, the value of
     * <code>FailedPutCount</code> may be greater than 0, indicating that there are records for which the operation
     * didn't succeed. Each entry in the <code>RequestResponses</code> array provides additional information about the
     * processed record. It directly correlates with a record in the request array using the same ordering, from the top
     * to the bottom. The response array always includes the same number of records as the request array.
     * <code>RequestResponses</code> includes both successfully and unsuccessfully processed records. Kinesis Data
     * Firehose tries to process all records in each <a>PutRecordBatch</a> request. A single record failure does not
     * stop the processing of subsequent records.
     * </p>
     * <p>
     * A successfully processed record includes a <code>RecordId</code> value, which is unique for the record. An
     * unsuccessfully processed record includes <code>ErrorCode</code> and <code>ErrorMessage</code> values.
     * <code>ErrorCode</code> reflects the type of error, and is one of the following values:
     * <code>ServiceUnavailableException</code> or <code>InternalFailure</code>. <code>ErrorMessage</code> provides
     * more detailed information about the error.
     * </p>
     * <p>
     * If there is an internal server error or a timeout, the write might have completed or it might have failed. If
     * <code>FailedPutCount</code> is greater than 0, retry the request, resending only those records that might have
     * failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and
     * corresponding charges). We recommend that you handle any duplicates at the destination.
     * </p>
     * <p>
     * If <a>PutRecordBatch</a> throws <code>ServiceUnavailableException</code>, back off and retry. If the exception
     * persists, it is possible that the throughput limits have been exceeded for the delivery stream.
     * </p>
     * <p>
     * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery
     * stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24
     * hours, the data is no longer available.
     * </p>
     * <important>
     * <p>
     * Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the
     * raw data, then perform base64 encoding.
     * </p>
     * </important>
     * 
     * @param putRecordBatchRequest
     * @return A Java Future containing the result of the PutRecordBatch operation returned by the service.
     * @sample AmazonKinesisFirehoseAsync.PutRecordBatch
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<PutRecordBatchResult> putRecordBatchAsync(PutRecordBatchRequest putRecordBatchRequest);
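The partial-failure retry guidance above reduces to an index walk: the response array mirrors the request array one-to-one, so when `FailedPutCount` is greater than 0, resend exactly the records whose entries carry an `ErrorCode`. `EntryResult` below is an illustrative stand-in for the SDK's `PutRecordBatchResponseEntry`, not the real type.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of selective retry after a partially failed PutRecordBatch.
public class BatchRetrySketch {
    // Stand-in for PutRecordBatchResponseEntry: errorCode is null on success,
    // e.g. "ServiceUnavailableException" or "InternalFailure" on failure.
    static final class EntryResult {
        final String errorCode;
        EntryResult(String errorCode) { this.errorCode = errorCode; }
    }

    // Select only the records whose matching response entry failed; the two
    // lists correlate positionally, as the Javadoc guarantees.
    static <T> List<T> recordsToRetry(List<T> sent, List<EntryResult> responses) {
        if (sent.size() != responses.size()) {
            throw new IllegalArgumentException("response array must mirror request array");
        }
        List<T> retry = new ArrayList<>();
        for (int i = 0; i < sent.size(); i++) {
            if (responses.get(i).errorCode != null) {
                retry.add(sent.get(i));
            }
        }
        return retry;
    }

    public static void main(String[] args) {
        List<String> sent = Arrays.asList("r0", "r1", "r2");
        List<EntryResult> responses = Arrays.asList(
                new EntryResult(null),
                new EntryResult("ServiceUnavailableException"),
                new EntryResult(null));
        // FailedPutCount would be 1 here; only r1 needs to be resent.
        System.out.println(recordsToRetry(sent, responses)); // [r1]
    }
}
```

Resending only the failed subset, rather than the whole batch, is what keeps duplicates (and billed bytes) to a minimum; duplicates that do slip through are handled at the destination, as the docs recommend.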

    /**
     * <p>
     * Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per
     * producer than when writing single records. To write single data records into a delivery stream, use
     * <a>PutRecord</a>. Applications using these operations are referred to as producers.
     * </p>
     * <p>
     * By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5
     * MB per second. If you use <a>PutRecord</a> and <a>PutRecordBatch</a>, the limits are an aggregate across these
     * two operations for each delivery stream. For more information about limits, see Amazon Kinesis Data Firehose
     * Limits.
     * </p>
     * <p>
     * Each <a>PutRecordBatch</a> request supports up to 500 records. Each record in the request can be as large as
     * 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.
     * </p>
     * <p>
     * You must specify the name of the delivery stream and the data record when using <a>PutRecord</a>. The data record
     * consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a
     * segment from a log file, geographic location data, website clickstream data, and so on.
     * </p>
     * <p>
     * Kinesis Data Firehose buffers records before delivering them to the destination. To disambiguate the data blobs
     * at the destination, a common solution is to use delimiters in the data, such as a newline (<code>\n</code>) or
     * some other character unique within the data. This allows the consumer application to parse individual data items
     * when reading the data from the destination.
     * </p>
     * <p>
     * The <a>PutRecordBatch</a> response includes a count of failed records, <code>FailedPutCount</code>, and an array
     * of responses, <code>RequestResponses</code>. Even if the <a>PutRecordBatch</a> call succeeds, the value of
     * <code>FailedPutCount</code> may be greater than 0, indicating that there are records for which the operation
     * didn't succeed. Each entry in the <code>RequestResponses</code> array provides additional information about the
     * processed record. It directly correlates with a record in the request array using the same ordering, from the top
     * to the bottom. The response array always includes the same number of records as the request array.
     * <code>RequestResponses</code> includes both successfully and unsuccessfully processed records. Kinesis Data
     * Firehose tries to process all records in each <a>PutRecordBatch</a> request. A single record failure does not
     * stop the processing of subsequent records.
     * </p>
     * <p>
     * A successfully processed record includes a <code>RecordId</code> value, which is unique for the record. An
     * unsuccessfully processed record includes <code>ErrorCode</code> and <code>ErrorMessage</code> values.
     * <code>ErrorCode</code> reflects the type of error, and is one of the following values:
     * <code>ServiceUnavailableException</code> or <code>InternalFailure</code>. <code>ErrorMessage</code> provides
     * more detailed information about the error.
     * </p>
     * <p>
     * If there is an internal server error or a timeout, the write might have completed or it might have failed. If
     * <code>FailedPutCount</code> is greater than 0, retry the request, resending only those records that might have
     * failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and
     * corresponding charges). We recommend that you handle any duplicates at the destination.
     * </p>
     * <p>
     * If <a>PutRecordBatch</a> throws <code>ServiceUnavailableException</code>, back off and retry. If the exception
     * persists, it is possible that the throughput limits have been exceeded for the delivery stream.
     * </p>
     * <p>
     * Data records sent to Kinesis Data Firehose are stored for 24 hours from the time they are added to a delivery
     * stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24
     * hours, the data is no longer available.
     * </p>
     * <important>
     * <p>
     * Don't concatenate two or more base64 strings to form the data fields of your records. Instead, concatenate the
     * raw data, then perform base64 encoding.
     * </p>
     * </important>
     * 
     * @param putRecordBatchRequest
     * @param asyncHandler
     *        Asynchronous callback handler for events in the lifecycle of the request. Users can provide an
     *        implementation of the callback methods in this interface to receive notification of successful or
     *        unsuccessful completion of the operation.
     * @return A Java Future containing the result of the PutRecordBatch operation returned by the service.
     * @sample AmazonKinesisFirehoseAsyncHandler.PutRecordBatch
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<PutRecordBatchResult> putRecordBatchAsync(PutRecordBatchRequest putRecordBatchRequest,
            com.amazonaws.handlers.AsyncHandler<PutRecordBatchRequest, PutRecordBatchResult> asyncHandler);
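The repeated "back off and retry" advice for `ServiceUnavailableException` can be sketched generically with exponential backoff. `TransientUnavailableException` and `callWithBackoff` are illustrative stand-ins, not SDK classes; a production client would also add jitter and cap the delay.

```java
import java.util.concurrent.Callable;

// Hedged sketch of "back off and retry on ServiceUnavailableException":
// retry the call with exponentially growing delays, rethrowing once the
// attempt budget is exhausted (at which point limits were likely exceeded).
public class BackoffSketch {
    // Stand-in for the SDK's ServiceUnavailableException.
    static class TransientUnavailableException extends RuntimeException {}

    static <T> T callWithBackoff(Callable<T> call, int maxAttempts, long baseDelayMs) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (TransientUnavailableException e) {
                if (attempt >= maxAttempts) throw e; // exception persists; give up
                long delay = baseDelayMs << (attempt - 1); // base, 2x, 4x, ...
                Thread.sleep(delay);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds -- mimics a transient throughput spike.
        String result = callWithBackoff(() -> {
            if (++calls[0] < 3) throw new TransientUnavailableException();
            return "ok after " + calls[0] + " attempts";
        }, 5, 10);
        System.out.println(result);
    }
}
```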

    /**
     * <p>
     * Enables server-side encryption (SSE) for the delivery stream.
     * </p>
     * <p>
     * This operation is asynchronous. It returns immediately. When you invoke it, Kinesis Data Firehose first sets the
     * status of the stream to <code>ENABLING</code>, and then to <code>ENABLED</code>. You can continue to read and
     * write data to your stream while its status is <code>ENABLING</code>, but the data is not encrypted. It can take
     * up to 5 seconds after the encryption status changes to <code>ENABLED</code> before all records written to the
     * delivery stream are encrypted. To find out whether a record or a batch of records was encrypted, check the
     * response elements <a>PutRecordOutput$Encrypted</a> and <a>PutRecordBatchOutput$Encrypted</a>, respectively.
     * </p>
     * <p>
     * To check the encryption state of a delivery stream, use <a>DescribeDeliveryStream</a>.
     * </p>
     * <p>
     * You can only enable SSE for a delivery stream that uses <code>DirectPut</code> as its source.
     * </p>
     * <p>
     * The <code>StartDeliveryStreamEncryption</code> and <code>StopDeliveryStreamEncryption</code> operations have a
     * combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call
     * <code>StartDeliveryStreamEncryption</code> 13 times and <code>StopDeliveryStreamEncryption</code> 12 times for
     * the same delivery stream in a 24-hour period.
     * </p>
     * 
     * @param startDeliveryStreamEncryptionRequest
     * @return A Java Future containing the result of the StartDeliveryStreamEncryption operation returned by the
     *         service.
     * @sample AmazonKinesisFirehoseAsync.StartDeliveryStreamEncryption
     * @see AWS API Documentation
     */
    java.util.concurrent.Future<StartDeliveryStreamEncryptionResult> startDeliveryStreamEncryptionAsync(
            StartDeliveryStreamEncryptionRequest startDeliveryStreamEncryptionRequest);

    /**

* Enables server-side encryption (SSE) for the delivery stream. *

*

* This operation is asynchronous. It returns immediately. When you invoke it, Kinesis Data Firehose first sets the * status of the stream to ENABLING, and then to ENABLED. You can continue to read and * write data to your stream while its status is ENABLING, but the data is not encrypted. It can take * up to 5 seconds after the encryption status changes to ENABLED before all records written to the * delivery stream are encrypted. To find out whether a record or a batch of records was encrypted, check the * response elements PutRecordOutput$Encrypted and PutRecordBatchOutput$Encrypted, respectively. *

*

* To check the encryption state of a delivery stream, use DescribeDeliveryStream. *

*

* You can only enable SSE for a delivery stream that uses DirectPut as its source. *

*

* The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a * combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call * StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for * the same delivery stream in a 24-hour period. *

* * @param startDeliveryStreamEncryptionRequest * @param asyncHandler * Asynchronous callback handler for events in the lifecycle of the request. Users can provide an * implementation of the callback methods in this interface to receive notification of successful or * unsuccessful completion of the operation. * @return A Java Future containing the result of the StartDeliveryStreamEncryption operation returned by the * service. * @sample AmazonKinesisFirehoseAsyncHandler.StartDeliveryStreamEncryption * @see AWS API Documentation */ java.util.concurrent.Future startDeliveryStreamEncryptionAsync( StartDeliveryStreamEncryptionRequest startDeliveryStreamEncryptionRequest, com.amazonaws.handlers.AsyncHandler asyncHandler); /** *

* Disables server-side encryption (SSE) for the delivery stream. *

*

* This operation is asynchronous. It returns immediately. When you invoke it, Kinesis Data Firehose first sets the * status of the stream to DISABLING, and then to DISABLED. You can continue to read and * write data to your stream while its status is DISABLING. It can take up to 5 seconds after the * encryption status changes to DISABLED before all records written to the delivery stream are no * longer subject to encryption. To find out whether a record or a batch of records was encrypted, check the * response elements PutRecordOutput$Encrypted and PutRecordBatchOutput$Encrypted, respectively. *

*

* To check the encryption state of a delivery stream, use DescribeDeliveryStream. *

*

* The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a * combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call * StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for * the same delivery stream in a 24-hour period. *

* * @param stopDeliveryStreamEncryptionRequest * @return A Java Future containing the result of the StopDeliveryStreamEncryption operation returned by the * service. * @sample AmazonKinesisFirehoseAsync.StopDeliveryStreamEncryption * @see AWS API Documentation */ java.util.concurrent.Future stopDeliveryStreamEncryptionAsync( StopDeliveryStreamEncryptionRequest stopDeliveryStreamEncryptionRequest); /** *

* Disables server-side encryption (SSE) for the delivery stream. *

*

* This operation is asynchronous. It returns immediately. When you invoke it, Kinesis Data Firehose first sets the * status of the stream to DISABLING, and then to DISABLED. You can continue to read and * write data to your stream while its status is DISABLING. It can take up to 5 seconds after the * encryption status changes to DISABLED before all records written to the delivery stream are no * longer subject to encryption. To find out whether a record or a batch of records was encrypted, check the * response elements PutRecordOutput$Encrypted and PutRecordBatchOutput$Encrypted, respectively. *

*

* To check the encryption state of a delivery stream, use DescribeDeliveryStream. *

*

* The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations have a * combined limit of 25 calls per delivery stream per 24 hours. For example, you reach the limit if you call * StartDeliveryStreamEncryption 13 times and StopDeliveryStreamEncryption 12 times for * the same delivery stream in a 24-hour period. *

* * @param stopDeliveryStreamEncryptionRequest * @param asyncHandler * Asynchronous callback handler for events in the lifecycle of the request. Users can provide an * implementation of the callback methods in this interface to receive notification of successful or * unsuccessful completion of the operation. * @return A Java Future containing the result of the StopDeliveryStreamEncryption operation returned by the * service. * @sample AmazonKinesisFirehoseAsyncHandler.StopDeliveryStreamEncryption * @see AWS API Documentation */ java.util.concurrent.Future stopDeliveryStreamEncryptionAsync( StopDeliveryStreamEncryptionRequest stopDeliveryStreamEncryptionRequest, com.amazonaws.handlers.AsyncHandler asyncHandler); /** *

* Adds or updates tags for the specified delivery stream. A tag is a key-value pair that you can define and assign * to AWS resources. If you specify a tag that already exists, the tag value is replaced with the value that you * specify in the request. Tags are metadata. For example, you can add friendly names and descriptions or other * types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation * Tags in the AWS Billing and Cost Management User Guide. *

*

* Each delivery stream can have up to 50 tags. *

*

* This operation has a limit of five transactions per second per account. *

* * @param tagDeliveryStreamRequest * @return A Java Future containing the result of the TagDeliveryStream operation returned by the service. * @sample AmazonKinesisFirehoseAsync.TagDeliveryStream * @see AWS API * Documentation */ java.util.concurrent.Future tagDeliveryStreamAsync(TagDeliveryStreamRequest tagDeliveryStreamRequest); /** *

* Adds or updates tags for the specified delivery stream. A tag is a key-value pair that you can define and assign * to AWS resources. If you specify a tag that already exists, the tag value is replaced with the value that you * specify in the request. Tags are metadata. For example, you can add friendly names and descriptions or other * types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation * Tags in the AWS Billing and Cost Management User Guide. *

*

* Each delivery stream can have up to 50 tags. *

*

* This operation has a limit of five transactions per second per account. *

* * @param tagDeliveryStreamRequest * @param asyncHandler * Asynchronous callback handler for events in the lifecycle of the request. Users can provide an * implementation of the callback methods in this interface to receive notification of successful or * unsuccessful completion of the operation. * @return A Java Future containing the result of the TagDeliveryStream operation returned by the service. * @sample AmazonKinesisFirehoseAsyncHandler.TagDeliveryStream * @see AWS API * Documentation */ java.util.concurrent.Future tagDeliveryStreamAsync(TagDeliveryStreamRequest tagDeliveryStreamRequest, com.amazonaws.handlers.AsyncHandler asyncHandler); /** *

* Removes tags from the specified delivery stream. Removed tags are deleted, and you can't recover them after this * operation successfully completes. *

*

* If you specify a tag that doesn't exist, the operation ignores it. *

*

* This operation has a limit of five transactions per second per account. *

* * @param untagDeliveryStreamRequest * @return A Java Future containing the result of the UntagDeliveryStream operation returned by the service. * @sample AmazonKinesisFirehoseAsync.UntagDeliveryStream * @see AWS * API Documentation */ java.util.concurrent.Future untagDeliveryStreamAsync(UntagDeliveryStreamRequest untagDeliveryStreamRequest); /** *

* Removes tags from the specified delivery stream. Removed tags are deleted, and you can't recover them after this * operation successfully completes. *

*

* If you specify a tag that doesn't exist, the operation ignores it. *

*

* This operation has a limit of five transactions per second per account. *

* * @param untagDeliveryStreamRequest * @param asyncHandler * Asynchronous callback handler for events in the lifecycle of the request. Users can provide an * implementation of the callback methods in this interface to receive notification of successful or * unsuccessful completion of the operation. * @return A Java Future containing the result of the UntagDeliveryStream operation returned by the service. * @sample AmazonKinesisFirehoseAsyncHandler.UntagDeliveryStream * @see AWS * API Documentation */ java.util.concurrent.Future untagDeliveryStreamAsync(UntagDeliveryStreamRequest untagDeliveryStreamRequest, com.amazonaws.handlers.AsyncHandler asyncHandler); /** *

* Updates the specified destination of the specified delivery stream. *

*

* Use this operation to change the destination type (for example, to replace the Amazon S3 destination with Amazon * Redshift) or change the parameters associated with a destination (for example, to change the bucket name of the * Amazon S3 destination). The update might not occur immediately. The target delivery stream remains active while * the configurations are updated, so data writes to the delivery stream can continue during this process. The * updated configurations are usually effective within a few minutes. *

*

* Switching between Amazon ES and other services is not supported. For an Amazon ES destination, you can only * update to another Amazon ES destination. *

*

* If the destination type is the same, Kinesis Data Firehose merges the configuration parameters specified with the * destination configuration that already exists on the delivery stream. If any of the parameters are not specified * in the call, the existing values are retained. For example, in the Amazon S3 destination, if * EncryptionConfiguration is not specified, then the existing EncryptionConfiguration is * maintained on the destination. *

*

* If the destination type is not the same, for example, changing the destination from Amazon S3 to Amazon Redshift, * Kinesis Data Firehose does not merge any parameters. In this case, all parameters must be specified. *

*

* Kinesis Data Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions and conflicting * merges. This is a required field, and the service updates the configuration only if the existing configuration * has a version ID that matches. After the update is applied successfully, the version ID is updated, and can be * retrieved using DescribeDeliveryStream. Use the new version ID to set * CurrentDeliveryStreamVersionId in the next call. *
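The <code>CurrentDeliveryStreamVersionId</code> check described above is classic optimistic concurrency control. The pure-Java sketch below models the contract without a Firehose client; <code>DeliveryStreamConfig</code>, <code>describeVersionId</code>, and <code>updateDestination</code> are illustrative stand-ins for the DescribeDeliveryStream/UpdateDestination pair, not SDK types.

```java
public class OptimisticVersionDemo {
    /** Illustrative stand-in for a delivery stream's destination config. */
    static final class DeliveryStreamConfig {
        private long versionId = 1;
        private String destination = "s3://example-bucket";

        /** Mirrors DescribeDeliveryStream: report the current version ID. */
        synchronized long describeVersionId() {
            return versionId;
        }

        /**
         * Mirrors UpdateDestination: apply the change only if the supplied
         * version ID matches, then bump the version. Returns true on success.
         */
        synchronized boolean updateDestination(long currentVersionId, String newDest) {
            if (currentVersionId != versionId) {
                return false; // stale version: re-describe and retry
            }
            destination = newDest;
            versionId++;
            return true;
        }
    }

    public static void main(String[] args) {
        DeliveryStreamConfig stream = new DeliveryStreamConfig();
        long v = stream.describeVersionId();                                // 1
        System.out.println(stream.updateDestination(v, "s3://new-bucket")); // true
        System.out.println(stream.updateDestination(v, "s3://other"));      // false
        System.out.println(stream.describeVersionId());                     // 2
    }
}
```

The second update fails because it reuses the stale version ID; a caller in that position would call DescribeDeliveryStream again to fetch the new ID before retrying.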

* * @param updateDestinationRequest * @return A Java Future containing the result of the UpdateDestination operation returned by the service. * @sample AmazonKinesisFirehoseAsync.UpdateDestination * @see AWS API * Documentation */ java.util.concurrent.Future updateDestinationAsync(UpdateDestinationRequest updateDestinationRequest); /** *

* Updates the specified destination of the specified delivery stream. *

*

* Use this operation to change the destination type (for example, to replace the Amazon S3 destination with Amazon * Redshift) or change the parameters associated with a destination (for example, to change the bucket name of the * Amazon S3 destination). The update might not occur immediately. The target delivery stream remains active while * the configurations are updated, so data writes to the delivery stream can continue during this process. The * updated configurations are usually effective within a few minutes. *

*

* Switching between Amazon ES and other services is not supported. For an Amazon ES destination, you can only * update to another Amazon ES destination. *

*

* If the destination type is the same, Kinesis Data Firehose merges the configuration parameters specified with the * destination configuration that already exists on the delivery stream. If any of the parameters are not specified * in the call, the existing values are retained. For example, in the Amazon S3 destination, if * EncryptionConfiguration is not specified, then the existing EncryptionConfiguration is * maintained on the destination. *

*

* If the destination type is not the same, for example, changing the destination from Amazon S3 to Amazon Redshift, * Kinesis Data Firehose does not merge any parameters. In this case, all parameters must be specified. *

*

* Kinesis Data Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions and conflicting * merges. This is a required field, and the service updates the configuration only if the existing configuration * has a version ID that matches. After the update is applied successfully, the version ID is updated, and can be * retrieved using DescribeDeliveryStream. Use the new version ID to set * CurrentDeliveryStreamVersionId in the next call. *

* * @param updateDestinationRequest * @param asyncHandler * Asynchronous callback handler for events in the lifecycle of the request. Users can provide an * implementation of the callback methods in this interface to receive notification of successful or * unsuccessful completion of the operation. * @return A Java Future containing the result of the UpdateDestination operation returned by the service. * @sample AmazonKinesisFirehoseAsyncHandler.UpdateDestination * @see AWS API * Documentation */ java.util.concurrent.Future updateDestinationAsync(UpdateDestinationRequest updateDestinationRequest, com.amazonaws.handlers.AsyncHandler asyncHandler); }



