String streamName
The name of the stream.
SdkInternalMap<K,V> tags
A set of up to 10 key-value pairs to use to create the tags.
String consumerName
The name of the consumer is something you choose when you register the consumer.
String consumerARN
When you register a consumer, Kinesis Data Streams generates an ARN for it. You need this ARN to be able to call SubscribeToShard.
If you delete a consumer and then create a new one with the same name, it won't have the same ARN. That's because consumer ARNs contain the creation timestamp. This is important to keep in mind if you have IAM policies that reference consumer ARNs.
String consumerStatus
A consumer can't read data while in the CREATING or DELETING states.
Date consumerCreationTimestamp
String consumerName
The name of the consumer is something you choose when you register the consumer.
String consumerARN
When you register a consumer, Kinesis Data Streams generates an ARN for it. You need this ARN to be able to call SubscribeToShard.
If you delete a consumer and then create a new one with the same name, it won't have the same ARN. That's because consumer ARNs contain the creation timestamp. This is important to keep in mind if you have IAM policies that reference consumer ARNs.
String consumerStatus
A consumer can't read data while in the CREATING or DELETING states.
Date consumerCreationTimestamp
String streamARN
The ARN of the stream with which you registered the consumer.
String streamName
A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by AWS Region. That is, two streams in two different AWS accounts can have the same name. Two streams in the same AWS account but in two different Regions can also have the same name.
Integer shardCount
The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput.
String streamARN
The ARN of the Kinesis data stream that the consumer is registered with. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String consumerName
The name that you gave to the consumer.
String consumerARN
The ARN returned by Kinesis Data Streams when you registered the consumer. If you don't know the ARN of the consumer that you want to deregister, you can use the ListStreamConsumers operation to get a list of the descriptions of all the consumers that are currently registered with a given data stream. The description of a consumer contains its ARN.
String streamARN
The ARN of the Kinesis data stream that the consumer is registered with. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String consumerName
The name that you gave to the consumer.
String consumerARN
The ARN returned by Kinesis Data Streams when you registered the consumer.
ConsumerDescription consumerDescription
An object that represents the details of the consumer.
String streamName
The name of the stream to describe.
Integer limit
The maximum number of shards to return in a single call. The default value is 100. If you specify a value greater than 100, at most 100 shards are returned.
String exclusiveStartShardId
The shard ID of the shard to start with.
StreamDescription streamDescription
The current status of the stream, the stream Amazon Resource Name (ARN), an array of shard objects that comprise the stream, and whether there are more shards available.
String streamName
The name of the stream to describe.
StreamDescriptionSummary streamDescriptionSummary
A StreamDescriptionSummary containing information about the stream.
String streamName
The name of the Kinesis data stream for which to disable enhanced monitoring.
SdkInternalList<T> shardLevelMetrics
List of shard-level metrics to disable.
The following are the valid shard-level metrics. The value "ALL" disables every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatch in the Amazon Kinesis Data Streams Developer Guide.
String streamName
The name of the Kinesis data stream.
SdkInternalList<T> currentShardLevelMetrics
Represents the current state of the metrics that are in the enhanced state before the operation.
SdkInternalList<T> desiredShardLevelMetrics
Represents the list of all the metrics that would be in the enhanced state after the operation.
String streamName
The name of the stream for which to enable enhanced monitoring.
SdkInternalList<T> shardLevelMetrics
List of shard-level metrics to enable.
The following are the valid shard-level metrics. The value "ALL" enables every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatch in the Amazon Kinesis Data Streams Developer Guide.
String streamName
The name of the Kinesis data stream.
SdkInternalList<T> currentShardLevelMetrics
Represents the current state of the metrics that are in the enhanced state before the operation.
SdkInternalList<T> desiredShardLevelMetrics
Represents the list of all the metrics that would be in the enhanced state after the operation.
SdkInternalList<T> shardLevelMetrics
List of shard-level metrics.
The following are the valid shard-level metrics. The value "ALL" enhances every metric.
IncomingBytes
IncomingRecords
OutgoingBytes
OutgoingRecords
WriteProvisionedThroughputExceeded
ReadProvisionedThroughputExceeded
IteratorAgeMilliseconds
ALL
For more information, see Monitoring the Amazon Kinesis Data Streams Service with Amazon CloudWatch in the Amazon Kinesis Data Streams Developer Guide.
String shardIterator
The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.
Integer limit
The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.
SdkInternalList<T> records
The data records retrieved from the shard.
String nextShardIterator
The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator does not return any more data.
Long millisBehindLatest
The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates that record processing is caught up, and there are no new records to process at this moment.
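The consumption pattern these three fields imply can be sketched as a loop that keeps fetching pages until the shard closes (nextShardIterator is null) or the reader catches up (millisBehindLatest reaches zero). This is a minimal local sketch: Page is a hypothetical stand-in for the SDK's GetRecordsResult, not the real class, and the iterator stands in for repeated service calls.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ShardTail {
    // Hypothetical stand-in for one GetRecords result: records plus the
    // pagination/lag metadata described above.
    static class Page {
        List<String> records;
        String nextShardIterator; // null means the shard has been closed
        long millisBehindLatest;  // 0 means caught up with the tip of the stream
        Page(List<String> r, String next, long behind) {
            records = r; nextShardIterator = next; millisBehindLatest = behind;
        }
    }

    // Drain a shard page by page, stopping when the shard closes or we catch up.
    static List<String> drain(Iterator<Page> service) {
        List<String> all = new ArrayList<>();
        while (service.hasNext()) {
            Page page = service.next();
            all.addAll(page.records);
            if (page.nextShardIterator == null) break; // shard closed; no more data
            if (page.millisBehindLatest == 0) break;   // caught up; a real consumer would sleep and poll again
        }
        return all;
    }

    public static void main(String[] args) {
        List<Page> pages = Arrays.asList(
                new Page(Arrays.asList("r1", "r2"), "iter-2", 5000),
                new Page(Arrays.asList("r3"), "iter-3", 0),
                new Page(Arrays.asList("r4"), null, 0));
        System.out.println(drain(pages.iterator())); // [r1, r2, r3]
    }
}
```

A real consumer would also respect the five-minute validity window of shard iterators and back off on ProvisionedThroughputExceededException.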
String streamName
The name of the Amazon Kinesis data stream.
String shardId
The shard ID of the Kinesis Data Streams shard to get the iterator for.
String shardIteratorType
Determines how the shard iterator is used to start reading data records from the shard.
The following are the valid Amazon Kinesis shard iterator types:
AT_SEQUENCE_NUMBER - Start reading from the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.
AFTER_SEQUENCE_NUMBER - Start reading right after the position denoted by a specific sequence number, provided in the value StartingSequenceNumber.
AT_TIMESTAMP - Start reading from the position denoted by a specific time stamp, provided in the value Timestamp.
TRIM_HORIZON - Start reading at the last untrimmed record in the shard in the system, which is the oldest data record in the shard.
LATEST - Start reading just after the most recent record in the shard, so that you always read the most recent data in the shard.
String startingSequenceNumber
The sequence number of the data record in the shard from which to start reading. Used with shard iterator type AT_SEQUENCE_NUMBER and AFTER_SEQUENCE_NUMBER.
Date timestamp
The time stamp of the data record from which to start reading. Used with shard iterator type AT_TIMESTAMP. A time
stamp is the Unix epoch date with precision in milliseconds. For example,
2016-04-04T19:58:46.480-00:00
or 1459799926.480
. If a record with this exact time stamp
does not exist, the iterator returned is for the next (later) record. If the time stamp is older than the current
trim horizon, the iterator returned is for the oldest untrimmed data record (TRIM_HORIZON).
String shardIterator
The position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard.
String streamName
The name of the data stream whose shards you want to list.
You cannot specify this parameter if you specify the NextToken parameter.
String nextToken
When the number of shards in the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of shards in the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListShards to list the next set of shards.
Don't specify StreamName or StreamCreationTimestamp if you specify NextToken because the latter unambiguously identifies the stream.
You can optionally specify a value for the MaxResults parameter when you specify NextToken. If you specify a MaxResults value that is less than the number of shards that the operation returns if you don't specify MaxResults, the response will contain a new NextToken value. You can use the new NextToken value in a subsequent call to the ListShards operation.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListShards, you have 300 seconds to use that value. If you specify an expired token in a call to ListShards, you get ExpiredNextTokenException.
String exclusiveStartShardId
Specify this parameter to indicate that you want to list the shards starting with the shard whose ID immediately follows ExclusiveStartShardId.
If you don't specify this parameter, the default behavior is for ListShards to list the shards starting with the first one in the stream.
You cannot specify this parameter if you specify NextToken.
Integer maxResults
The maximum number of shards to return in a single call to ListShards. The minimum value you can specify for this parameter is 1, and the maximum is 1,000, which is also the default.
When the number of shards to be listed is greater than the value of MaxResults, the response contains a NextToken value that you can use in a subsequent call to ListShards to list the next set of shards.
Date streamCreationTimestamp
Specify this input parameter to distinguish data streams that have the same name. For example, if you create a data stream and then delete it, and you later create another data stream with the same name, you can use this input parameter to specify which of the two streams you want to list the shards for.
You cannot specify this parameter if you specify the NextToken parameter.
SdkInternalList<T> shards
An array of JSON objects. Each object represents one shard and specifies the IDs of the shard, the shard's parent, and the shard that's adjacent to the shard's parent. Each object also contains the starting and ending hash keys and the starting and ending sequence numbers for the shard.
String nextToken
When the number of shards in the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of shards in the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListShards to list the next set of shards.
For more information about the use of this pagination token when calling the ListShards operation, see ListShardsInput$NextToken.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListShards, you have 300 seconds to use that value. If you specify an expired token in a call to ListShards, you get ExpiredNextTokenException.
String streamARN
The ARN of the Kinesis data stream for which you want to list the registered consumers. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String nextToken
When the number of consumers that are registered with the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of consumers that are registered with the data stream, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListStreamConsumers to list the next set of registered consumers.
Don't specify StreamName or StreamCreationTimestamp if you specify NextToken because the latter unambiguously identifies the stream.
You can optionally specify a value for the MaxResults parameter when you specify NextToken. If you specify a MaxResults value that is less than the number of consumers that the operation returns if you don't specify MaxResults, the response will contain a new NextToken value. You can use the new NextToken value in a subsequent call to the ListStreamConsumers operation to list the next set of consumers.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListStreamConsumers, you have 300 seconds to use that value. If you specify an expired token in a call to ListStreamConsumers, you get ExpiredNextTokenException.
Integer maxResults
The maximum number of consumers that you want a single call of ListStreamConsumers to return.
Date streamCreationTimestamp
Specify this input parameter to distinguish data streams that have the same name. For example, if you create a data stream and then delete it, and you later create another data stream with the same name, you can use this input parameter to specify which of the two streams you want to list the consumers for.
You can't specify this parameter if you specify the NextToken parameter.
SdkInternalList<T> consumers
An array of JSON objects. Each object represents one registered consumer.
String nextToken
When the number of consumers that are registered with the data stream is greater than the default value for the MaxResults parameter, or if you explicitly specify a value for MaxResults that is less than the number of registered consumers, the response includes a pagination token named NextToken. You can specify this NextToken value in a subsequent call to ListStreamConsumers to list the next set of registered consumers. For more information about the use of this pagination token when calling the ListStreamConsumers operation, see ListStreamConsumersInput$NextToken.
Tokens expire after 300 seconds. When you obtain a value for NextToken in the response to a call to ListStreamConsumers, you have 300 seconds to use that value. If you specify an expired token in a call to ListStreamConsumers, you get ExpiredNextTokenException.
SdkInternalList<T> streamNames
The names of the streams that are associated with the AWS account making the ListStreams request.
Boolean hasMoreStreams
If set to true, there are more streams available to list.
String streamName
The name of the stream.
String exclusiveStartTagKey
The key to use as the starting point for the list of tags. If this parameter is set, ListTagsForStream gets all tags that occur after ExclusiveStartTagKey.
Integer limit
The number of tags to return. If this number is less than the total number of tags associated with the stream, HasMoreTags is set to true. To list additional tags, set ExclusiveStartTagKey to the last key in the response.
SdkInternalList<T> tags
A list of tags associated with StreamName, starting with the first tag after ExclusiveStartTagKey and up to the specified Limit.
Boolean hasMoreTags
If set to true, more tags are available. To request additional tags, set ExclusiveStartTagKey to the key of the last tag returned.
String streamName
The name of the stream to put the data record into.
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
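The MD5-to-128-bit-integer mapping described above can be reproduced with the standard library, which is useful for predicting which shard's hash key range a record will land in. This is a sketch of the documented hashing scheme, not SDK code; the key name is made up.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PartitionKeyHash {
    // MD5 over the partition key, interpreted as an unsigned 128-bit integer.
    // The shard whose HashKeyRange contains this value receives the record.
    static BigInteger hashKey(String partitionKey) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest); // signum 1: always in [0, 2^128)
    }

    public static void main(String[] args) throws Exception {
        BigInteger h = hashKey("user-42"); // "user-42" is a made-up example key
        // Same key, same hash, same shard - the ordering guarantee per key rests on this.
        System.out.println(h.equals(hashKey("user-42"))); // true
        System.out.println(h.signum() >= 0 && h.bitLength() <= 128); // true
    }
}
```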
String explicitHashKey
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
String sequenceNumberForOrdering
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key.
Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.
String shardId
The shard ID of the shard where the data record was placed.
String sequenceNumber
The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.
String encryptionType
The encryption type to use on the record. This parameter can be one of the following values:
NONE: Do not encrypt the records in the stream.
KMS: Use server-side encryption on the records in the stream using a customer-managed AWS KMS key.
SdkInternalList<T> records
The records associated with the request.
String streamName
The stream name associated with the request.
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
String explicitHashKey
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Integer failedRecordCount
The number of unsuccessfully processed records in a PutRecords request.
SdkInternalList<T> records
An array of successfully and unsuccessfully processed record results, correlated with the request by natural
ordering. A record that is successfully added to a stream includes SequenceNumber
and
ShardId
in the result. A record that fails to be added to a stream includes ErrorCode
and ErrorMessage
in the result.
String encryptionType
The encryption type used on the records. This parameter can be one of the following values:
NONE: Do not encrypt the records.
KMS: Use server-side encryption on the records using a customer-managed AWS KMS key.
String sequenceNumber
The sequence number for an individual record result.
String shardId
The shard ID for an individual record result.
String errorCode
The error code for an individual record result. ErrorCodes can be either ProvisionedThroughputExceededException or InternalFailure.
String errorMessage
The error message for an individual record result. An ErrorCode value of ProvisionedThroughputExceededException has an error message that includes the account ID, stream name, and shard ID. An ErrorCode value of InternalFailure has the error message "Internal Service Failure".
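Because results correlate with request entries by position, the standard recovery for a partial PutRecords failure is to rebuild a request from the entries whose result carries an ErrorCode. A minimal sketch of that selection logic, with Result as a hypothetical stand-in for the SDK's PutRecordsResultEntry:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PutRecordsRetry {
    // Stand-in for one entry's result: errorCode is null on success.
    static class Result {
        final String errorCode;
        Result(String errorCode) { this.errorCode = errorCode; }
    }

    // Keep the request entries whose positionally matching result failed.
    static <T> List<T> entriesToRetry(List<T> requestEntries, List<Result> results) {
        List<T> retry = new ArrayList<>();
        for (int i = 0; i < results.size(); i++) {
            if (results.get(i).errorCode != null) retry.add(requestEntries.get(i));
        }
        return retry;
    }

    public static void main(String[] args) {
        List<String> entries = Arrays.asList("rec-0", "rec-1", "rec-2");
        List<Result> results = Arrays.asList(
                new Result(null),
                new Result("ProvisionedThroughputExceededException"),
                new Result(null));
        System.out.println(entriesToRetry(entries, results)); // [rec-1]
    }
}
```

A caller would resubmit the returned list (typically with backoff) until failedRecordCount reaches zero.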
String sequenceNumber
The unique identifier of the record within its shard.
Date approximateArrivalTimestamp
The approximate time that the record was inserted into the stream.
ByteBuffer data
The data blob. The data in the blob is both opaque and immutable to Kinesis Data Streams, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
String partitionKey
Identifies which shard in the stream the data record is assigned to.
String encryptionType
The encryption type used on the record. This parameter can be one of the following values:
NONE: Do not encrypt the records in the stream.
KMS: Use server-side encryption on the records in the stream using a customer-managed AWS KMS key.
String streamARN
The ARN of the Kinesis data stream that you want to register the consumer with. For more info, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String consumerName
For a given Kinesis data stream, each consumer must have a unique name. However, consumer names don't have to be unique across data streams.
Consumer consumer
An object that represents the details of the consumer you registered. When you register a consumer, it gets an ARN that is generated by Kinesis Data Streams.
String streamName
The name of the stream.
SdkInternalList<T> tagKeys
A list of tag keys. Each corresponding tag is removed from the stream.
String shardId
The unique identifier of the shard within the stream.
String parentShardId
The shard ID of the shard's parent.
String adjacentParentShardId
The shard ID of the shard adjacent to the shard's parent.
HashKeyRange hashKeyRange
The range of possible hash key values for the shard, which is a set of ordered contiguous positive integers.
SequenceNumberRange sequenceNumberRange
The range of possible sequence numbers for the shard.
String streamName
The name of the stream for the shard split.
String shardToSplit
The shard ID of the shard to split.
String newStartingHashKey
A hash key value for the starting hash key of one of the child shards created by the split. The hash key range for a given shard constitutes a set of ordered contiguous positive integers. The value for NewStartingHashKey must be in the range of hash keys being mapped into the shard. The NewStartingHashKey hash key value and all higher hash key values in the hash key range are distributed to one of the child shards. All the lower hash key values in the range are distributed to the other child shard.
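For an even split, NewStartingHashKey is typically the midpoint of the parent shard's hash key range. Since hash keys are 128-bit integers, BigInteger is the natural tool; this sketch assumes a single-shard stream whose range covers the whole space.

```java
import java.math.BigInteger;

public class SplitPoint {
    // Hash keys span [0, 2^128 - 1].
    static final BigInteger MAX_HASH = BigInteger.ONE.shiftLeft(128).subtract(BigInteger.ONE);

    // The midpoint sends [start, mid-1] to one child and [mid, end] to the other.
    static BigInteger newStartingHashKey(BigInteger start, BigInteger end) {
        return start.add(end).divide(BigInteger.valueOf(2));
    }

    public static void main(String[] args) {
        // A single-shard stream covers the whole 128-bit space.
        BigInteger mid = newStartingHashKey(BigInteger.ZERO, MAX_HASH);
        System.out.println(mid); // 2^127 - 1
    }
}
```

The resulting value would be passed as the NewStartingHashKey string in the SplitShard request.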
String streamName
The name of the stream for which to start encrypting records.
String encryptionType
The encryption type to use. The only valid value is KMS.
String keyId
The GUID for the customer-managed AWS KMS key to use for encryption. This value can be a globally unique identifier, a fully specified Amazon Resource Name (ARN) to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
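The four keyId formats listed above are distinguishable by shape. As a rough illustration of the difference (this is not the validation Kinesis or KMS actually performs):

```java
public class KeyIdFormat {
    // Illustrative-only classifier for the keyId formats the docs list.
    static String classify(String keyId) {
        if (keyId.startsWith("arn:aws:kms:")) {
            return keyId.contains(":key/") ? "key ARN" : "alias ARN";
        }
        if (keyId.startsWith("alias/")) return "alias name";
        return "key ID";
    }

    public static void main(String[] args) {
        System.out.println(classify("arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012")); // key ARN
        System.out.println(classify("arn:aws:kms:us-east-1:123456789012:alias/MyAliasName")); // alias ARN
        System.out.println(classify("alias/aws/kinesis")); // alias name
        System.out.println(classify("12345678-1234-1234-1234-123456789012")); // key ID
    }
}
```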
String streamName
The name of the stream on which to stop encrypting records.
String encryptionType
The encryption type. The only valid value is KMS.
String keyId
The GUID for the customer-managed AWS KMS key to use for encryption. This value can be a globally unique identifier, a fully specified Amazon Resource Name (ARN) to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
String streamName
The name of the stream being described.
String streamARN
The Amazon Resource Name (ARN) for the stream being described.
String streamStatus
The current status of the stream being described. The stream status is one of the following states:
CREATING - The stream is being created. Kinesis Data Streams immediately returns and sets StreamStatus to CREATING.
DELETING - The stream is being deleted. The specified stream is in the DELETING state until Kinesis Data Streams completes the deletion.
ACTIVE - The stream exists and is ready for read and write operations or deletion. You should perform read and write operations only on an ACTIVE stream.
UPDATING - Shards in the stream are being merged or split. Read and write operations continue to work while the stream is in the UPDATING state.
SdkInternalList<T> shards
The shards that comprise the stream.
Boolean hasMoreShards
If set to true, more shards in the stream are available to describe.
Integer retentionPeriodHours
The current retention period, in hours.
Date streamCreationTimestamp
The approximate time that the stream was created.
SdkInternalList<T> enhancedMonitoring
Represents the current enhanced monitoring settings of the stream.
String encryptionType
The server-side encryption type used on the stream. This parameter can be one of the following values:
NONE: Do not encrypt the records in the stream.
KMS: Use server-side encryption on the records in the stream using a customer-managed AWS KMS key.
String keyId
The GUID for the customer-managed AWS KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
String streamName
The name of the stream being described.
String streamARN
The Amazon Resource Name (ARN) for the stream being described.
String streamStatus
The current status of the stream being described. The stream status is one of the following states:
CREATING - The stream is being created. Kinesis Data Streams immediately returns and sets StreamStatus to CREATING.
DELETING - The stream is being deleted. The specified stream is in the DELETING state until Kinesis Data Streams completes the deletion.
ACTIVE - The stream exists and is ready for read and write operations or deletion. You should perform read and write operations only on an ACTIVE stream.
UPDATING - Shards in the stream are being merged or split. Read and write operations continue to work while the stream is in the UPDATING state.
Integer retentionPeriodHours
The current retention period, in hours.
Date streamCreationTimestamp
The approximate time that the stream was created.
SdkInternalList<T> enhancedMonitoring
Represents the current enhanced monitoring settings of the stream.
String encryptionType
The encryption type used. This value is one of the following:
KMS
NONE
String keyId
The GUID for the customer-managed AWS KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by "alias/". You can also use a master key owned by Kinesis Data Streams by specifying the alias aws/kinesis.
Key ARN example: arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012
Alias ARN example: arn:aws:kms:us-east-1:123456789012:alias/MyAliasName
Globally unique key ID example: 12345678-1234-1234-1234-123456789012
Alias name example: alias/MyAliasName
Master key owned by Kinesis Data Streams: alias/aws/kinesis
Integer openShardCount
The number of open shards in the stream.
Integer consumerCount
The number of enhanced fan-out consumers registered with the stream.
String key
A unique identifier for the tag. Maximum length: 128 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
String value
An optional string, typically used to describe or define the tag. Maximum length: 256 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
String applicationName
The Kinesis Analytics application name.
Long currentApplicationVersionId
The version ID of the Kinesis Analytics application.
CloudWatchLoggingOption cloudWatchLoggingOption
Provides the CloudWatch log stream Amazon Resource Name (ARN) and the IAM role ARN. Note: To write application messages to CloudWatch, the IAM role that is used must have the PutLogEvents policy action enabled.
String applicationName
Name of the application to which you want to add the input processing configuration.
Long currentApplicationVersionId
Version of the application to which you want to add the input processing configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, a ConcurrentModificationException is returned.
String inputId
The ID of the input configuration to add the input processing configuration to. You can get a list of the input IDs for an application using the DescribeApplication operation.
InputProcessingConfiguration inputProcessingConfiguration
The InputProcessingConfiguration to add to the application.
String applicationName
Name of your existing Amazon Kinesis Analytics application to which you want to add the streaming source.
Long currentApplicationVersionId
Current version of your Amazon Kinesis Analytics application. You can use the DescribeApplication operation to find the current application version.
Input input
The Input to add.
String applicationName
Name of the application to which you want to add the output configuration.
Long currentApplicationVersionId
Version of the application to which you want to add the output configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, a ConcurrentModificationException is returned.
Output output
An array of objects, each describing one output configuration. In the output configuration, you specify the name of an in-application stream, a destination (that is, an Amazon Kinesis stream, an Amazon Kinesis Firehose delivery stream, or an AWS Lambda function), and the record format to use when writing to the destination.
String applicationName
Name of an existing application.
Long currentApplicationVersionId
Version of the application for which you are adding the reference data source. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, a ConcurrentModificationException is returned.
ReferenceDataSource referenceDataSource
The reference data source can be an object in your Amazon S3 bucket. Amazon Kinesis Analytics reads the object and copies the data into the in-application table that is created. You provide an S3 bucket, object key name, and the resulting in-application table that is created. You must also provide an IAM role with the necessary permissions that Amazon Kinesis Analytics can assume to read the object from your S3 bucket on your behalf.
String applicationName
Name of the application.
String applicationDescription
Description of the application.
String applicationARN
ARN of the application.
String applicationStatus
Status of the application.
Date createTimestamp
Time stamp when the application version was created.
Date lastUpdateTimestamp
Time stamp when the application was last updated.
List<E> inputDescriptions
Describes the application input configuration. For more information, see Configuring Application Input.
List<E> outputDescriptions
Describes the application output configuration. For more information, see Configuring Application Output.
List<E> referenceDataSourceDescriptions
Describes reference data sources configured for the application. For more information, see Configuring Application Input.
List<E> cloudWatchLoggingOptionDescriptions
Describes the CloudWatch log streams that are configured to receive application messages. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Working with Amazon CloudWatch Logs.
String applicationCode
Returns the application code that you provided to perform data analysis on any of the in-application streams in your application.
Long applicationVersionId
Provides the current application version.
List<E> inputUpdates
Describes application input configuration updates.
String applicationCodeUpdate
Describes application code updates.
List<E> outputUpdates
Describes application output configuration updates.
List<E> referenceDataSourceUpdates
Describes application reference data source updates.
List<E> cloudWatchLoggingOptionUpdates
Describes application CloudWatch logging option updates.
String cloudWatchLoggingOptionId
ID of the CloudWatch logging option description.
String logStreamARN
ARN of the CloudWatch log to receive application messages.
String roleARN
IAM ARN of the role to use to send application messages. Note: To write application messages to CloudWatch, the IAM role used must have the PutLogEvents policy action enabled.
String cloudWatchLoggingOptionId
ID of the CloudWatch logging option to update.
String logStreamARNUpdate
ARN of the CloudWatch log to receive application messages.
String roleARNUpdate
IAM ARN of the role to use to send application messages. Note: To write application messages to CloudWatch, the IAM role used must have the PutLogEvents policy action enabled.
String applicationName
Name of your Amazon Kinesis Analytics application (for example, sample-app).
String applicationDescription
Summary description of the application.
List<E> inputs
Use this parameter to configure the application input.
You can configure your application to receive input from a single streaming source. In this configuration, you map this streaming source to an in-application stream that is created. Your application code can then query the in-application stream like a table (you can think of it as a constantly updating table).
For the streaming source, you provide its Amazon Resource Name (ARN) and format of data on the stream (for example, JSON, CSV, etc.). You also must provide an IAM role that Amazon Kinesis Analytics can assume to read this stream on your behalf.
To create the in-application stream, you need to specify a schema to transform your data into a schematized version used in SQL. In the schema, you provide the necessary mapping of the data elements in the streaming source to record columns in the in-app stream.
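The input configuration described above can be sketched as a request payload. This is only an illustration of how the pieces relate: the ARNs, stream names, and column mappings are placeholders, and the field names follow the AddApplicationInput request shape as I understand it, not a verified call against the service.

```python
# Illustrative AddApplicationInput payload (all ARNs and names are
# placeholders). The schema maps JSON elements in the streaming source
# to columns of the in-application stream.
add_input_request = {
    "ApplicationName": "sample-app",
    "CurrentApplicationVersionId": 1,
    "Input": {
        "NamePrefix": "MyInApplicationStream",
        "KinesisStreamsInput": {
            "ResourceARN": "arn:aws:kinesis:us-east-1:123456789012:stream/example-stream",
            "RoleARN": "arn:aws:iam::123456789012:role/analytics-read-role",
        },
        "InputSchema": {
            "RecordFormat": {
                "RecordFormatType": "JSON",
                "MappingParameters": {
                    "JSONMappingParameters": {"RecordRowPath": "$"}
                },
            },
            "RecordEncoding": "UTF-8",
            "RecordColumns": [
                {"Name": "ticker", "Mapping": "$.ticker", "SqlType": "VARCHAR(8)"},
                {"Name": "price", "Mapping": "$.price", "SqlType": "DOUBLE"},
            ],
        },
    },
}
```

With an SDK such as boto3, a dict like this could be passed as keyword arguments to the AddApplicationInput operation; here it only illustrates the relationship between the streaming source, the IAM role, and the schema.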
List<E> outputs
You can configure application output to write data from any of the in-application streams to up to three destinations.
These destinations can be Amazon Kinesis streams, Amazon Kinesis Firehose delivery streams, AWS Lambda destinations, or any combination of the three.
In the configuration, you specify the in-application stream name, the destination stream or Lambda function Amazon Resource Name (ARN), and the format to use when writing data. You must also provide an IAM role that Amazon Kinesis Analytics can assume to write to the destination stream or Lambda function on your behalf.
In the output configuration, you also provide the output stream or Lambda function ARN. For stream destinations, you provide the format of data in the stream (for example, JSON, CSV). You also must provide an IAM role that Amazon Kinesis Analytics can assume to write to the stream or Lambda function on your behalf.
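A single output configuration of the kind described above can be sketched as follows; the stream name, ARNs, and role are placeholders, and the field names reflect the Output request shape as an assumption rather than a verified call.

```python
# Illustrative Output configuration (placeholder names and ARNs): the
# in-application stream "DESTINATION_SQL_STREAM" is written to a
# Kinesis stream, with records formatted as JSON.
output_config = {
    "Name": "DESTINATION_SQL_STREAM",
    "KinesisStreamsOutput": {
        "ResourceARN": "arn:aws:kinesis:us-east-1:123456789012:stream/output-stream",
        "RoleARN": "arn:aws:iam::123456789012:role/analytics-write-role",
    },
    "DestinationSchema": {"RecordFormatType": "JSON"},
}
```

For a Lambda destination, the KinesisStreamsOutput block would be replaced by a LambdaOutput block carrying the function ARN and role ARN.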
List<E> cloudWatchLoggingOptions
Use this parameter to configure a CloudWatch log stream to monitor application configuration errors. For more information, see Working with Amazon CloudWatch Logs.
String applicationCode
One or more SQL statements that read input data, transform it, and generate output. For example, you can write a SQL statement that reads data from one in-application stream, generates a running average of the number of advertisement clicks by vendor, and inserts the resulting rows in another in-application stream using pumps. For more information about the typical pattern, see Application Code.
You can provide a series of SQL statements, where the output of one statement can be used as the input for the next statement. You store intermediate results by creating in-application streams and pumps.
Note that the application code must create the streams with names specified in the Outputs. For example, if your Outputs defines output streams named ExampleOutputStream1 and ExampleOutputStream2, then your application code must create these streams.
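The stream-and-pump pattern described above can be sketched as application code held in a string; the stream names, columns, and window are illustrative only, not a tested application.

```python
# Illustrative applicationCode (Kinesis Analytics SQL held as a Python
# string): a pump reads from the mapped in-application input stream and
# inserts a windowed aggregate into a stream named in the Outputs.
application_code = """
CREATE OR REPLACE STREAM "ExampleOutputStream1" (ticker VARCHAR(8), click_count INTEGER);
CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS
    INSERT INTO "ExampleOutputStream1"
    SELECT STREAM ticker, COUNT(*)
    FROM "MyInApplicationStream_001"
    GROUP BY ticker,
        STEP("MyInApplicationStream_001".ROWTIME BY INTERVAL '10' SECOND);
"""
```

The key point the sketch shows is the naming contract: the stream created by the code ("ExampleOutputStream1") must match the name given in the Outputs configuration.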
List<E> tags
A list of one or more tags to assign to the application. A tag is a key-value pair that identifies an application. Note that the maximum number of application tags includes system tags. The maximum number of user-defined application tags is 50. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management Guide.
ApplicationSummary applicationSummary
In response to your CreateApplication request, Amazon Kinesis Analytics returns a response with a summary of the application it created, including the application Amazon Resource Name (ARN), name, and status.
String applicationName
The Kinesis Analytics application name.
Long currentApplicationVersionId
The version ID of the Kinesis Analytics application.
String cloudWatchLoggingOptionId
The CloudWatchLoggingOptionId of the CloudWatch logging option to delete. You can get the CloudWatchLoggingOptionId by using the DescribeApplication operation.
String applicationName
The Kinesis Analytics application name.
Long currentApplicationVersionId
The version ID of the Kinesis Analytics application.
String inputId
The ID of the input configuration from which to delete the input processing configuration. You can get a list of the input IDs for an application by using the DescribeApplication operation.
String applicationName
Amazon Kinesis Analytics application name.
Long currentApplicationVersionId
Amazon Kinesis Analytics application version. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
String outputId
The ID of the configuration to delete. Each output configuration that is added to the application, either when the application is created or later using the AddApplicationOutput operation, has a unique ID. You need to provide the ID to uniquely identify the output configuration that you want to delete from the application configuration. You can use the DescribeApplication operation to get the specific OutputId.
String applicationName
Name of an existing application.
Long currentApplicationVersionId
Version of the application. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.
String referenceId
ID of the reference data source. When you add a reference data source to your application using the AddApplicationReferenceDataSource, Amazon Kinesis Analytics assigns an ID. You can use the DescribeApplication operation to get the reference ID.
String applicationName
Name of the application.
ApplicationDetail applicationDetail
Provides a description of the application, such as the application Amazon Resource Name (ARN), status, latest version, and input and output configuration details.
String recordFormatType
Specifies the format of the records on the output stream.
String resourceARN
Amazon Resource Name (ARN) of the streaming source.
String roleARN
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf.
InputStartingPositionConfiguration inputStartingPositionConfiguration
Point at which you want Amazon Kinesis Analytics to start reading records from the specified streaming source for discovery purposes.
S3Configuration s3Configuration
Specify this parameter to discover a schema from data in an Amazon S3 object.
InputProcessingConfiguration inputProcessingConfiguration
The InputProcessingConfiguration to use to preprocess the records before discovering the schema of the records.
SourceSchema inputSchema
Schema inferred from the streaming source. It identifies the format of the data in the streaming source and how each data element maps to corresponding columns in the in-application stream that you can create.
List<E> parsedInputRecords
An array of elements, where each element corresponds to a row in a stream record (a stream record can have more than one row).
List<E> processedInputRecords
Stream data that was modified by the processor specified in the InputProcessingConfiguration
parameter.
List<E> rawInputRecords
Raw stream data that was sampled to infer the schema.
String namePrefix
Name prefix to use when creating an in-application stream. Suppose that you specify a prefix "MyInApplicationStream". Amazon Kinesis Analytics then creates one or more (as per the InputParallelism count you specified) in-application streams with names "MyInApplicationStream_001", "MyInApplicationStream_002", and so on.
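The naming pattern above is simple enough to sketch directly; this helper is purely illustrative, not part of any SDK.

```python
def in_app_stream_names(prefix: str, parallelism: int) -> list:
    # Mirrors the documented pattern: "<prefix>_001", "<prefix>_002", ...
    # with one name per in-application stream in the InputParallelism count.
    return ["%s_%03d" % (prefix, i) for i in range(1, parallelism + 1)]
```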
InputProcessingConfiguration inputProcessingConfiguration
The InputProcessingConfiguration for the input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor.
KinesisStreamsInput kinesisStreamsInput
If the streaming source is an Amazon Kinesis stream, identifies the stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
KinesisFirehoseInput kinesisFirehoseInput
If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies the delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.
InputParallelism inputParallelism
Describes the number of in-application streams to create.
Data from your source is routed to these in-application input streams.
SourceSchema inputSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
Also used to describe the format of the reference data source.
String id
Input source ID. You can get this ID by calling the DescribeApplication operation.
InputStartingPositionConfiguration inputStartingPositionConfiguration
Point at which you want the application to start processing records from the streaming source.
String inputId
Input ID associated with the application input. This is the ID that Amazon Kinesis Analytics assigns to each input configuration you add to your application.
String namePrefix
In-application name prefix.
List<E> inAppStreamNames
Returns the in-application stream names that are mapped to the stream source.
InputProcessingConfigurationDescription inputProcessingConfigurationDescription
The description of the preprocessor that executes on records in this input before the application's code is run.
KinesisStreamsInputDescription kinesisStreamsInputDescription
If an Amazon Kinesis stream is configured as the streaming source, provides the stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
KinesisFirehoseInputDescription kinesisFirehoseInputDescription
If an Amazon Kinesis Firehose delivery stream is configured as a streaming source, provides the delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.
SourceSchema inputSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.
InputParallelism inputParallelism
Describes the configured parallelism (number of in-application streams mapped to the streaming source).
InputStartingPositionConfiguration inputStartingPositionConfiguration
Point at which the application is configured to read from the input stream.
String resourceARN
The ARN of the AWS Lambda function that operates on records in the stream.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARN
The ARN of the IAM role that is used to access the AWS Lambda function.
String resourceARN
The ARN of the AWS Lambda function that is used to preprocess the records in the stream.
String roleARN
The ARN of the IAM role that is used to access the AWS Lambda function.
String resourceARNUpdate
The Amazon Resource Name (ARN) of the new AWS Lambda function that is used to preprocess the records in the stream.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARNUpdate
The ARN of the new IAM role that is used to access the AWS Lambda function.
Integer countUpdate
Number of in-application streams to create for the specified streaming source.
InputLambdaProcessor inputLambdaProcessor
The InputLambdaProcessor that is used to preprocess the records in the stream before being processed by your application code.
InputLambdaProcessorDescription inputLambdaProcessorDescription
Provides configuration information about the associated InputLambdaProcessorDescription.
InputLambdaProcessorUpdate inputLambdaProcessorUpdate
Provides update information for an InputLambdaProcessor.
RecordFormat recordFormatUpdate
Specifies the format of the records on the streaming source.
String recordEncodingUpdate
Specifies the encoding of the records in the streaming source. For example, UTF-8.
List<E> recordColumnUpdates
A list of RecordColumn objects. Each object describes the mapping of the streaming source element to the corresponding column in the in-application stream.
String inputStartingPosition
The starting position on the stream.
NOW - Start reading just after the most recent record in the stream, starting at the request timestamp that the customer issued.
TRIM_HORIZON - Start reading at the last untrimmed record in the stream, which is the oldest record available in the stream. This option is not available for an Amazon Kinesis Firehose delivery stream.
LAST_STOPPED_POINT - Resume reading from where the application last stopped reading.
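The three starting positions form a small closed set, which a caller can validate before building the configuration; this helper is an illustration, not SDK code.

```python
# The three documented starting positions; anything else is rejected
# before it ever reaches the service.
VALID_STARTING_POSITIONS = {"NOW", "TRIM_HORIZON", "LAST_STOPPED_POINT"}

def starting_position_config(position: str) -> dict:
    if position not in VALID_STARTING_POSITIONS:
        raise ValueError("unknown starting position: %r" % position)
    return {"InputStartingPosition": position}
```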
String inputId
Input ID of the application input to be updated.
String namePrefixUpdate
Name prefix for in-application streams that Amazon Kinesis Analytics creates for the specific streaming source.
InputProcessingConfigurationUpdate inputProcessingConfigurationUpdate
Describes updates for an input processing configuration.
KinesisStreamsInputUpdate kinesisStreamsInputUpdate
If an Amazon Kinesis stream is the streaming source to be updated, provides an updated stream Amazon Resource Name (ARN) and IAM role ARN.
KinesisFirehoseInputUpdate kinesisFirehoseInputUpdate
If an Amazon Kinesis Firehose delivery stream is the streaming source to be updated, provides an updated stream ARN and IAM role ARN.
InputSchemaUpdate inputSchemaUpdate
Describes the data format on the streaming source, and how record elements on the streaming source map to columns of the in-application stream that is created.
InputParallelismUpdate inputParallelismUpdate
Describes the parallelism updates (the number of in-application streams Amazon Kinesis Analytics creates for the specific streaming source).
String recordRowPath
Path to the top-level parent that contains the records.
String resourceARNUpdate
Amazon Resource Name (ARN) of the input Amazon Kinesis Firehose delivery stream to read.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the Amazon Kinesis Firehose delivery stream to write to.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the input Amazon Kinesis stream to read.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the Amazon Kinesis stream where you want to write the output.
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to access the stream on your behalf. You need to grant the necessary permissions to this role.
String resourceARN
Amazon Resource Name (ARN) of the destination Lambda function to write to.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARN
ARN of the IAM role that Amazon Kinesis Analytics can assume to write to the destination function on your behalf. You need to grant the necessary permissions to this role.
String resourceARNUpdate
Amazon Resource Name (ARN) of the destination Lambda function.
To specify an earlier version of the Lambda function than the latest, include the Lambda function version in the Lambda function ARN. For more information about Lambda ARNs, see Example ARNs: AWS Lambda
String roleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to write to the destination function on your behalf. You need to grant the necessary permissions to this role.
Integer limit
Maximum number of applications to list.
String exclusiveStartApplicationName
Name of the application to start the list with. When using pagination to retrieve the list, you don't need to specify this parameter in the first request. However, in subsequent requests, you add the last application name from the previous response to get the next page of applications.
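The pagination scheme described above (pass the last application name from one page as the exclusive start of the next) can be sketched as a loop. The response field names ("ApplicationSummaries", "HasMoreApplications") and the fake page function are assumptions for illustration.

```python
def list_all_applications(list_page, limit=50):
    """Paginate with ExclusiveStartApplicationName as documented above.
    `list_page` stands in for the ListApplications call: it must return
    a dict with "ApplicationSummaries" and "HasMoreApplications"."""
    summaries, start_name = [], None
    while True:
        kwargs = {"Limit": limit}
        if start_name is not None:
            # Subsequent requests carry the last name from the prior page.
            kwargs["ExclusiveStartApplicationName"] = start_name
        page = list_page(**kwargs)
        summaries.extend(page["ApplicationSummaries"])
        if not page.get("HasMoreApplications"):
            return summaries
        start_name = summaries[-1]["ApplicationName"]

# Fake two-page responses to exercise the loop (names are placeholders).
_pages = [
    {"ApplicationSummaries": [{"ApplicationName": "app-a"},
                              {"ApplicationName": "app-b"}],
     "HasMoreApplications": True},
    {"ApplicationSummaries": [{"ApplicationName": "app-c"}],
     "HasMoreApplications": False},
]

def _fake_list(**kwargs):
    return _pages[1] if "ExclusiveStartApplicationName" in kwargs else _pages[0]

all_apps = list_all_applications(_fake_list, limit=2)
```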
String resourceARN
The ARN of the application for which to retrieve tags.
JSONMappingParameters jSONMappingParameters
Provides additional mapping information when JSON is the record format on the streaming source.
CSVMappingParameters cSVMappingParameters
Provides additional mapping information when the record format uses delimiters (for example, CSV).
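The two mapping-parameter variants described above differ only in what they carry: a row path for JSON, and delimiters for CSV. A sketch of both shapes (field names assumed from the documented parameter names):

```python
# For JSON sources: a path to the element that marks one record row.
json_mapping = {"JSONMappingParameters": {"RecordRowPath": "$"}}

# For delimited sources (CSV shown): the row and column delimiters.
csv_mapping = {
    "CSVMappingParameters": {
        "RecordRowDelimiter": "\n",
        "RecordColumnDelimiter": ",",
    }
}
```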
String name
Name of the in-application stream.
KinesisStreamsOutput kinesisStreamsOutput
Identifies an Amazon Kinesis stream as the destination.
KinesisFirehoseOutput kinesisFirehoseOutput
Identifies an Amazon Kinesis Firehose delivery stream as the destination.
LambdaOutput lambdaOutput
Identifies an AWS Lambda function as the destination.
DestinationSchema destinationSchema
Describes the data format when records are written to the destination. For more information, see Configuring Application Output.
String outputId
A unique identifier for the output configuration.
String name
Name of the in-application stream configured as output.
KinesisStreamsOutputDescription kinesisStreamsOutputDescription
Describes Amazon Kinesis stream configured as the destination where output is written.
KinesisFirehoseOutputDescription kinesisFirehoseOutputDescription
Describes the Amazon Kinesis Firehose delivery stream configured as the destination where output is written.
LambdaOutputDescription lambdaOutputDescription
Describes the AWS Lambda function configured as the destination where output is written.
DestinationSchema destinationSchema
Data format used for writing data to the destination.
String outputId
Identifies the specific output configuration that you want to update.
String nameUpdate
If you want to specify a different in-application stream for this output configuration, use this field to specify the new in-application stream name.
KinesisStreamsOutputUpdate kinesisStreamsOutputUpdate
Describes an Amazon Kinesis stream as the destination for the output.
KinesisFirehoseOutputUpdate kinesisFirehoseOutputUpdate
Describes an Amazon Kinesis Firehose delivery stream as the destination for the output.
LambdaOutputUpdate lambdaOutputUpdate
Describes an AWS Lambda function as the destination for the output.
DestinationSchema destinationSchemaUpdate
Describes the data format when records are written to the destination. For more information, see Configuring Application Output.
String name
Name of the column created in the in-application input stream or reference table.
String mapping
Reference to the data element in the streaming input or the reference data source. This element is required if the RecordFormatType is JSON.
String sqlType
Type of column created in the in-application input stream or reference table.
String recordFormatType
The type of record format.
MappingParameters mappingParameters
When configuring application input at the time of creating or updating an application, provides additional mapping information specific to the record format (such as JSON, CSV, or record fields delimited by some delimiter) on the streaming source.
String tableName
Name of the in-application table to create.
S3ReferenceDataSource s3ReferenceDataSource
Identifies the S3 bucket and object that contains the reference data. Also identifies the IAM role Amazon Kinesis Analytics can assume to read this object on your behalf. An Amazon Kinesis Analytics application loads reference data only once. If the data changes, you call the UpdateApplication operation to trigger reloading of data into your application.
SourceSchema referenceSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
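Putting the pieces above together, a reference data source can be sketched as one payload; the bucket, key, role, and columns are placeholders, and the field names follow the documented ReferenceDataSource shape as an assumption.

```python
# Illustrative ReferenceDataSource (placeholder bucket, key, and role):
# Kinesis Analytics reads the S3 object once and copies it into the
# in-application table named here.
reference_data_source = {
    "TableName": "CompanyNames",
    "S3ReferenceDataSource": {
        "BucketARN": "arn:aws:s3:::example-bucket",
        "FileKey": "reference/company-names.csv",
        "ReferenceRoleARN": "arn:aws:iam::123456789012:role/analytics-s3-read",
    },
    "ReferenceSchema": {
        "RecordFormat": {
            "RecordFormatType": "CSV",
            "MappingParameters": {
                "CSVMappingParameters": {
                    "RecordRowDelimiter": "\n",
                    "RecordColumnDelimiter": ",",
                }
            },
        },
        "RecordColumns": [
            {"Name": "ticker", "SqlType": "VARCHAR(8)"},
            {"Name": "company", "SqlType": "VARCHAR(64)"},
        ],
    },
}
```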
String referenceId
ID of the reference data source. This is the ID that Amazon Kinesis Analytics assigns when you add the reference data source to your application using the AddApplicationReferenceDataSource operation.
String tableName
The in-application table name created by the specific reference data source configuration.
S3ReferenceDataSourceDescription s3ReferenceDataSourceDescription
Provides the S3 bucket name and the object key name that contains the reference data. It also provides the Amazon Resource Name (ARN) of the IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object and populate the in-application reference table.
SourceSchema referenceSchema
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
String referenceId
ID of the reference data source being updated. You can use the DescribeApplication operation to get this value.
String tableNameUpdate
In-application table name that is created by this update.
S3ReferenceDataSourceUpdate s3ReferenceDataSourceUpdate
Describes the S3 bucket name, object key name, and IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object on your behalf and populate the in-application reference table.
SourceSchema referenceSchemaUpdate
Describes the format of the data in the streaming source, and how each data element maps to corresponding columns created in the in-application stream.
String bucketARN
Amazon Resource Name (ARN) of the S3 bucket.
String fileKey
Object key name containing reference data.
String referenceRoleARN
ARN of the IAM role that the service can assume to read data on your behalf. This role must have permission for the s3:GetObject action on the object and a trust policy that allows the Amazon Kinesis Analytics service principal to assume this role.
String bucketARN
Amazon Resource Name (ARN) of the S3 bucket.
String fileKey
Amazon S3 object key name.
String referenceRoleARN
ARN of the IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object on your behalf to populate the in-application reference table.
String bucketARNUpdate
Amazon Resource Name (ARN) of the S3 bucket.
String fileKeyUpdate
Object key name.
String referenceRoleARNUpdate
ARN of the IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object and populate the in-application reference table.
RecordFormat recordFormat
Specifies the format of the records on the streaming source.
String recordEncoding
Specifies the encoding of the records in the streaming source. For example, UTF-8.
List<E> recordColumns
A list of RecordColumn objects.
String applicationName
Name of the application.
List<E> inputConfigurations
Identifies the specific input, by ID, that the application starts consuming. Amazon Kinesis Analytics starts reading the streaming source associated with the input. You can also specify where in the streaming source you want Amazon Kinesis Analytics to start reading.
String applicationName
Name of the running application to stop.
String applicationName
Name of the Amazon Kinesis Analytics application to update.
Long currentApplicationVersionId
The current application version ID. You can use the DescribeApplication operation to get this value.
ApplicationUpdate applicationUpdate
Describes application updates.
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300.
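The sizing guidance above (buffer at least the data typically ingested in 10 seconds, never below the 5 MB default) can be expressed as a small calculation; the helper is illustrative, not SDK code.

```python
import math

def recommended_buffer_size_mb(ingest_mb_per_sec, default_mb=5):
    """Apply the documented guidance: sizeInMBs should be at least the
    data ingested in 10 seconds, and no smaller than the 5 MB default."""
    return max(default_mb, math.ceil(ingest_mb_per_sec * 10))
```

For example, a typical ingest rate of 1 MB/sec yields a 10 MB buffer, while a low-volume stream falls back to the 5 MB default.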
Boolean enabled
Enables or disables CloudWatch logging.
String logGroupName
The CloudWatch group name for logging. This value is required if CloudWatch logging is enabled.
String logStreamName
The CloudWatch log stream name for logging. This value is required if CloudWatch logging is enabled.
String dataTableName
The name of the target table. The table must already exist in the database.
String dataTableColumns
A comma-separated list of column names.
String copyOptions
Optional parameters to use with the Amazon Redshift COPY command. For more information, see the "Optional Parameters" section of Amazon Redshift COPY command. Some possible examples that would apply to Kinesis Data Firehose are as follows:
delimiter '\t' lzop; - fields are delimited with "\t" (TAB character) and compressed using lzop.
delimiter '|' - fields are delimited with "|" (this is the default delimiter).
delimiter '|' escape - the delimiter should be escaped.
fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - fields are fixed width in the source, with each width specified after every column in the table.
JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path specified is the format of the data.
For more examples, see Amazon Redshift COPY command examples.
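The three fields above combine into one CopyCommand configuration; the table, columns, and options below are placeholders chosen from the examples, not a tested configuration.

```python
# Illustrative Redshift CopyCommand configuration: target table,
# comma-separated column list, and optional COPY parameters.
copy_command = {
    "DataTableName": "venue",
    "DataTableColumns": "venueid,venuename,venuecity,venuestate,venueseats",
    "CopyOptions": "delimiter '|' escape",
}
```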
String deliveryStreamName
The name of the delivery stream. This name must be unique per AWS account in the same AWS Region. If the delivery streams are in different accounts or different Regions, you can have multiple delivery streams with the same name.
String deliveryStreamType
The delivery stream type. This parameter can be one of the following values:
DirectPut: Provider applications access the delivery stream directly.
KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
KinesisStreamSourceConfiguration kinesisStreamSourceConfiguration
When a Kinesis data stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis data stream Amazon Resource Name (ARN) and the role ARN for the source stream.
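The KinesisStreamAsSource case described above can be sketched as a CreateDeliveryStream request; the ARNs and names are placeholders, and the field names follow the documented configuration shape as an assumption.

```python
# Illustrative CreateDeliveryStream request for the KinesisStreamAsSource
# case (placeholder ARNs). A DirectPut stream would omit the
# KinesisStreamSourceConfiguration block entirely.
create_request = {
    "DeliveryStreamName": "example-delivery-stream",
    "DeliveryStreamType": "KinesisStreamAsSource",
    "KinesisStreamSourceConfiguration": {
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/source-stream",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-source-read",
    },
}
```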
S3DestinationConfiguration s3DestinationConfiguration
[Deprecated] The destination in Amazon S3. You can specify only one destination.
ExtendedS3DestinationConfiguration extendedS3DestinationConfiguration
The destination in Amazon S3. You can specify only one destination.
RedshiftDestinationConfiguration redshiftDestinationConfiguration
The destination in Amazon Redshift. You can specify only one destination.
ElasticsearchDestinationConfiguration elasticsearchDestinationConfiguration
The destination in Amazon ES. You can specify only one destination.
SplunkDestinationConfiguration splunkDestinationConfiguration
The destination in Splunk. You can specify only one destination.
List<E> tags
A set of tags to assign to the delivery stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
You can specify up to 50 tags when creating a delivery stream.
String deliveryStreamARN
The ARN of the delivery stream.
SchemaConfiguration schemaConfiguration
Specifies the AWS Glue Data Catalog table that contains the column information.
InputFormatConfiguration inputFormatConfiguration
Specifies the deserializer that you want Kinesis Data Firehose to use to convert the format of your data from JSON.
OutputFormatConfiguration outputFormatConfiguration
Specifies the serializer that you want Kinesis Data Firehose to use to convert the format of your data to the Parquet or ORC format.
Boolean enabled
Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
String deliveryStreamName
The name of the delivery stream.
String deliveryStreamName
The name of the delivery stream.
String deliveryStreamARN
The Amazon Resource Name (ARN) of the delivery stream. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String deliveryStreamStatus
The status of the delivery stream.
DeliveryStreamEncryptionConfiguration deliveryStreamEncryptionConfiguration
Indicates the server-side encryption (SSE) status for the delivery stream.
String deliveryStreamType
The delivery stream type. This can be one of the following values:
DirectPut: Provider applications access the delivery stream directly.
KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
String versionId
Each time the destination is updated for a delivery stream, the version ID is changed, and the current version ID is required when updating the destination. This is so that the service knows it is applying the changes to the correct version of the delivery stream.
Date createTimestamp
The date and time that the delivery stream was created.
Date lastUpdateTimestamp
The date and time that the delivery stream was last updated.
SourceDescription source
If the DeliveryStreamType parameter is KinesisStreamAsSource, a SourceDescription object describing the source Kinesis data stream.
List<E> destinations
The destinations.
Boolean hasMoreDestinations
Indicates whether there are more destinations available to list.
String status
For a full description of the different values of this status, see StartDeliveryStreamEncryption and StopDeliveryStreamEncryption.
String deliveryStreamName
The name of the delivery stream.
Integer limit
The limit on the number of destinations to return. You can have one destination per delivery stream.
String exclusiveStartDestinationId
The ID of the destination to start returning the destination information. Kinesis Data Firehose supports one destination per delivery stream.
DeliveryStreamDescription deliveryStreamDescription
Information about the delivery stream.
OpenXJsonSerDe openXJsonSerDe
The OpenX SerDe. Used by Kinesis Data Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
HiveJsonSerDe hiveJsonSerDe
The native Hive / HCatalog JsonSerDe. Used by Kinesis Data Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
String destinationId
The ID of the destination.
S3DestinationDescription s3DestinationDescription
[Deprecated] The destination in Amazon S3.
ExtendedS3DestinationDescription extendedS3DestinationDescription
The destination in Amazon S3.
RedshiftDestinationDescription redshiftDestinationDescription
The destination in Amazon Redshift.
ElasticsearchDestinationDescription elasticsearchDestinationDescription
The destination in Amazon ES.
SplunkDestinationDescription splunkDestinationDescription
The destination in Splunk.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
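The sizing guidance above can be expressed as a small calculation. This is an illustrative sketch (the class and method names are ours, not part of the SDK); the 1–128 MB clamp reflects the documented valid range for SizeInMBs.

```java
// Sketch: derive a SizeInMBs buffering hint from a typical ingestion rate,
// following the "10 seconds of data" guidance above. Names are illustrative.
public class BufferSizing {
    public static int recommendedSizeInMBs(double ingestRateMBPerSec) {
        int tenSecondsOfData = (int) Math.ceil(ingestRateMBPerSec * 10); // >= 10 s of data
        return Math.max(1, Math.min(tenSecondsOfData, 128));             // clamp to valid range
    }

    public static void main(String[] args) {
        // Ingesting ~1 MB/sec -> a hint of 10 MB, as the guidance suggests.
        System.out.println(recommendedSizeInMBs(1.0));
    }
}
```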
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination and Amazon Resource Names (ARNs) and AWS Service Namespaces.
String domainARN
The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String indexName
The Elasticsearch index name.
String typeName
The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Kinesis Data Firehose returns an error at runtime.
String indexRotationPeriod
The Elasticsearch index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data. For more information, see Index Rotation for the Amazon ES Destination. The default value is OneDay.
ElasticsearchBufferingHints bufferingHints
The buffering options. If no value is specified, the default values for ElasticsearchBufferingHints are used.
ElasticsearchRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Kinesis Data Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with elasticsearch-failed/ appended to the key prefix. When set to AllDocuments, Kinesis Data Firehose delivers all incoming records to Amazon S3, and also writes failed documents with elasticsearch-failed/ appended to the prefix. For more information, see Amazon S3 Backup for the Amazon ES Destination. The default value is FailedDocumentsOnly.
S3DestinationConfiguration s3Configuration
The configuration for the backup Amazon S3 location.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String domainARN
The ARN of the Amazon ES domain. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String indexName
The Elasticsearch index name.
String typeName
The Elasticsearch type name.
String indexRotationPeriod
The Elasticsearch index rotation period.
ElasticsearchBufferingHints bufferingHints
The buffering options.
ElasticsearchRetryOptions retryOptions
The Amazon ES retry options.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options.
String roleARN
The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination and Amazon Resource Names (ARNs) and AWS Service Namespaces.
String domainARN
The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the IAM role specified in RoleARN. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String indexName
The Elasticsearch index name.
String typeName
The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Kinesis Data Firehose returns an error during runtime.
String indexRotationPeriod
The Elasticsearch index rotation period. Index rotation appends a timestamp to IndexName to facilitate the expiration of old data. For more information, see Index Rotation for the Amazon ES Destination. The default value is OneDay.
ElasticsearchBufferingHints bufferingHints
The buffering options. If no value is specified, the default values of the ElasticsearchBufferingHints object are used.
ElasticsearchRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).
S3DestinationUpdate s3Update
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
Integer durationInSeconds
After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
String noEncryptionConfig
Specifically override existing encryption information to ensure that no encryption is used.
KMSEncryptionConfig kMSEncryptionConfig
The encryption key.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
String errorOutputPrefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name.
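The interaction between a custom prefix and the automatic "YYYY/MM/DD/HH" time prefix can be illustrated with a short sketch. The class, the formatter usage, and the trailing file name are ours for illustration; Firehose generates its own object names.

```java
import java.time.ZonedDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Sketch: how a custom prefix combines with the "YYYY/MM/DD/HH" time prefix
// to form an S3 object key. The file name here is illustrative only.
public class S3KeyPrefix {
    static final DateTimeFormatter TIME_PREFIX = DateTimeFormatter.ofPattern("yyyy/MM/dd/HH");

    public static String objectKey(String customPrefix, ZonedDateTime arrival, String fileName) {
        return customPrefix + TIME_PREFIX.format(arrival) + "/" + fileName;
    }

    public static void main(String[] args) {
        ZonedDateTime t = ZonedDateTime.of(2019, 8, 9, 14, 0, 0, 0, ZoneOffset.UTC);
        // A custom prefix ending in "/" appears as a folder in the S3 bucket.
        System.out.println(objectKey("myApp/", t, "records.gz"));
    }
}
```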
BufferingHints bufferingHints
The buffering option.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationConfiguration s3BackupConfiguration
The configuration for backup in Amazon S3.
DataFormatConversionConfiguration dataFormatConversionConfiguration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
String errorOutputPrefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name.
BufferingHints bufferingHints
The buffering option.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3BackupDescription
The configuration for backup in Amazon S3.
DataFormatConversionConfiguration dataFormatConversionConfiguration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
String errorOutputPrefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name.
BufferingHints bufferingHints
The buffering option.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
Enables or disables Amazon S3 backup mode.
S3DestinationUpdate s3BackupUpdate
The Amazon S3 destination for backup.
DataFormatConversionConfiguration dataFormatConversionConfiguration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
List<E> timestampFormats
Indicates how you want Kinesis Data Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Kinesis Data Firehose uses java.sql.Timestamp::valueOf by default.
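A quick sketch of what these format strings match. It uses java.time rather than Joda-Time (the pattern syntax is very similar for simple patterns like this one), and the class and method names are ours; "millis" corresponds to plain epoch-millisecond parsing.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: what a TimestampFormats entry such as "yyyy-MM-dd HH:mm:ss" matches,
// and what the special value "millis" (epoch milliseconds) means.
public class TimestampFormats {
    public static LocalDateTime parsePattern(String pattern, String value) {
        return LocalDateTime.parse(value, DateTimeFormatter.ofPattern(pattern));
    }

    public static Instant parseMillis(String value) {
        return Instant.ofEpochMilli(Long.parseLong(value));
    }

    public static void main(String[] args) {
        System.out.println(parsePattern("yyyy-MM-dd HH:mm:ss", "2019-08-09 14:30:00"));
        System.out.println(parseMillis("1565361000000"));
    }
}
```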
Deserializer deserializer
Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
String kinesisStreamARN
The ARN of the source Kinesis data stream. For more information, see Amazon Kinesis Data Streams ARN Format.
String roleARN
The ARN of the role that provides access to the source Kinesis data stream. For more information, see AWS Identity and Access Management (IAM) ARN Format.
String kinesisStreamARN
The Amazon Resource Name (ARN) of the source Kinesis data stream. For more information, see Amazon Kinesis Data Streams ARN Format.
String roleARN
The ARN of the role used by the source Kinesis data stream. For more information, see AWS Identity and Access Management (IAM) ARN Format.
Date deliveryStartTimestamp
Kinesis Data Firehose starts retrieving records from the Kinesis data stream starting with this timestamp.
String aWSKMSKeyARN
The Amazon Resource Name (ARN) of the encryption key. Must belong to the same AWS Region as the destination Amazon S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
Integer limit
The maximum number of delivery streams to list. The default value is 10.
String deliveryStreamType
The delivery stream type. This can be one of the following values:
DirectPut: Provider applications access the delivery stream directly.
KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
This parameter is optional. If this parameter is omitted, delivery streams of all types are returned.
String exclusiveStartDeliveryStreamName
The list of delivery streams returned by this call to ListDeliveryStreams
will start with the
delivery stream whose name comes alphabetically immediately after the name you specify in
ExclusiveStartDeliveryStreamName
.
String deliveryStreamName
The name of the delivery stream whose tags you want to list.
String exclusiveStartTagKey
The key to use as the starting point for the list of tags. If you set this parameter, ListTagsForDeliveryStream gets all tags that occur after ExclusiveStartTagKey.
Integer limit
The number of tags to return. If this number is less than the total number of tags associated with the delivery stream, HasMoreTags is set to true in the response. To list additional tags, set ExclusiveStartTagKey to the last key in the response.
List<E> tags
A list of tags associated with DeliveryStreamName, starting with the first tag after ExclusiveStartTagKey and up to the specified Limit.
Boolean hasMoreTags
If this is true in the response, more tags are available. To list the remaining tags, set ExclusiveStartTagKey to the key of the last tag returned and call ListTagsForDeliveryStream again.
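The ExclusiveStartTagKey / Limit / HasMoreTags contract can be sketched as a pagination loop. This simulates the behavior over an in-memory sorted map rather than calling the real service, and all names here are ours; only the pagination semantics come from the description above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch: the ExclusiveStartTagKey / Limit / HasMoreTags pagination loop,
// simulated over an in-memory sorted tag map (tags come back in key order).
public class TagPagination {
    public static List<String> listAllTagKeys(NavigableMap<String, String> tags, int limit) {
        List<String> all = new ArrayList<>();
        String exclusiveStart = null;   // ExclusiveStartTagKey: unset on the first call
        boolean hasMore = true;
        while (hasMore) {
            // One "page": keys strictly after exclusiveStart, up to limit entries.
            NavigableMap<String, String> rest =
                (exclusiveStart == null) ? tags : tags.tailMap(exclusiveStart, false);
            List<String> page =
                new ArrayList<>(rest.keySet()).subList(0, Math.min(limit, rest.size()));
            all.addAll(page);
            hasMore = rest.size() > page.size();                      // HasMoreTags
            if (hasMore) exclusiveStart = page.get(page.size() - 1);  // last key returned
        }
        return all;
    }

    public static void main(String[] args) {
        TreeMap<String, String> tags = new TreeMap<>();
        for (String k : new String[]{"env", "owner", "project", "stage", "team"}) tags.put(k, "v");
        System.out.println(listAllTagKeys(tags, 2));
    }
}
```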
Boolean convertDotsInJsonKeysToUnderscores
When set to true, specifies that the names of the keys include dots and that you want Kinesis Data Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
Boolean caseInsensitive
When set to true, which is the default, Kinesis Data Firehose converts JSON keys to lowercase before deserializing them.
Map<K,V> columnToJsonKeyMappings
Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
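The combined effect of the three OpenX SerDe options above can be sketched as a key-to-column transform. The step ordering and all names here are our assumption for illustration, not a statement about the SerDe's internals.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: how the three OpenX SerDe options shape the column name a JSON key
// maps to. The ordering of the steps is our illustrative assumption.
public class OpenXKeyMapping {
    public static String columnFor(String jsonKey,
                                   boolean caseInsensitive,
                                   boolean dotsToUnderscores,
                                   Map<String, String> columnToJsonKeyMappings) {
        // An explicit column -> JSON key mapping wins (e.g. {"ts": "timestamp"}).
        for (Map.Entry<String, String> e : columnToJsonKeyMappings.entrySet()) {
            if (e.getValue().equals(jsonKey)) return e.getKey();
        }
        String k = caseInsensitive ? jsonKey.toLowerCase() : jsonKey;
        return dotsToUnderscores ? k.replace('.', '_') : k;
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new LinkedHashMap<>();
        mapping.put("ts", "timestamp"); // avoid the Hive keyword "timestamp"
        System.out.println(columnFor("a.b", true, true, mapping));       // a_b
        System.out.println(columnFor("timestamp", true, true, mapping)); // ts
    }
}
```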
Integer stripeSizeBytes
The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
Integer blockSizeBytes
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
Integer rowIndexStride
The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
Boolean enablePadding
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
Double paddingTolerance
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
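The worked example above (5 percent of a 64 MiB stripe reserving up to 3.2 MiB of padding per 256 MiB block) is just this arithmetic; the class and method names are ours.

```java
// Sketch: the padding-tolerance arithmetic described above. With the default
// 0.05 tolerance and 64 MiB stripes, at most 0.05 * 64 MiB = 3.2 MiB of each
// 256 MiB HDFS block may be spent on padding.
public class OrcPadding {
    public static double maxPaddingMiB(double paddingTolerance, double stripeSizeMiB) {
        return paddingTolerance * stripeSizeMiB;
    }

    public static void main(String[] args) {
        System.out.println(maxPaddingMiB(0.05, 64)); // 3.2 MiB within each block
    }
}
```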
String compression
The compression code to use over data blocks. The default is SNAPPY.
List<E> bloomFilterColumns
The column names for which you want Kinesis Data Firehose to create bloom filters. The default is null.
Double bloomFilterFalsePositiveProbability
The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
Double dictionaryKeyThreshold
Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
String formatVersion
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
Serializer serializer
Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
Integer blockSizeBytes
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.
Integer pageSizeBytes
The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
String compression
The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
Boolean enableDictionaryCompression
Indicates whether to enable dictionary compression.
Integer maxPaddingBytes
The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
String writerVersion
Indicates the version of row format to output. The possible values are V1
and V2
. The
default is V1
.
Integer failedPutCount
The number of records that might have failed processing. This number might be greater than 0 even if the PutRecordBatch call succeeds. Check FailedPutCount to determine whether there are records that you need to resend.
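The resend pattern implied by FailedPutCount can be sketched as follows. The types here are plain stand-ins for the SDK's request/response classes; only the indexing contract (response element i corresponds to request record i) comes from the documentation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: selecting the records to resend after a PutRecordBatch call.
// null = success; a non-null string = the error code for that record.
public class BatchRetry {
    public static List<String> recordsToResend(List<String> records, List<String> errorCodes) {
        List<String> retry = new ArrayList<>();
        for (int i = 0; i < records.size(); i++) {
            // Response element i corresponds to request record i.
            if (errorCodes.get(i) != null) retry.add(records.get(i));
        }
        return retry;
    }

    public static void main(String[] args) {
        List<String> records = List.of("r0", "r1", "r2");
        List<String> errors = new ArrayList<>();
        errors.add(null);
        errors.add("ServiceUnavailableException");
        errors.add(null);
        System.out.println(recordsToResend(records, errors)); // only the failed record
    }
}
```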
Boolean encrypted
Indicates whether server-side encryption (SSE) was enabled during this operation.
List<E> requestResponses
The results array. For each record, the index of the response element is the same as the index used in the request array.
ByteBuffer data
The data blob, which is base64-encoded when the blob is serialized. The maximum size of the data blob, before base64-encoding, is 1,000 KiB.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
String password
The user password.
RedshiftRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
S3DestinationConfiguration s3Configuration
The configuration for the intermediate Amazon S3 location from which Amazon Redshift obtains data. Restrictions are described in the topic for CreateDeliveryStream.
The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationConfiguration s3BackupConfiguration
The configuration for backup in Amazon S3.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
RedshiftRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationDescription s3BackupDescription
The configuration for backup in Amazon S3.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
String password
The user password.
RedshiftRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
S3DestinationUpdate s3Update
The Amazon S3 destination.
The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationUpdate.S3Update because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
ProcessingConfiguration processingConfiguration
The data processing configuration.
String s3BackupMode
The Amazon S3 backup mode.
S3DestinationUpdate s3BackupUpdate
The Amazon S3 destination for backup.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
Integer durationInSeconds
The length of time during which Kinesis Data Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Kinesis Data Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
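The DurationInSeconds rule above amounts to a simple retry-window check. This is a conceptual sketch with names of our own choosing, not the service's internal logic.

```java
// Sketch: the DurationInSeconds retry-window rule. Retries happen only while
// the elapsed time since the initial request is inside the window, and a
// value of 0 disables retries entirely.
public class RetryWindow {
    public static boolean shouldRetry(int durationInSeconds, long secondsSinceFirstAttempt) {
        if (durationInSeconds == 0) return false;            // 0 (zero) means no retries
        return secondsSinceFirstAttempt < durationInSeconds; // window includes the first attempt
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(3600, 1800)); // still inside the hour
        System.out.println(shouldRetry(3600, 4000)); // window elapsed
        System.out.println(shouldRetry(0, 1));       // retries disabled
    }
}
```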
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
String errorOutputPrefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
String errorOutputPrefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String roleARN
The Amazon Resource Name (ARN) of the AWS credentials. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String bucketARN
The ARN of the S3 bucket. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered Amazon S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Data Firehose Developer Guide.
String errorOutputPrefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The CloudWatch logging options for your delivery stream.
String roleARN
The role that Kinesis Data Firehose can use to access AWS Glue. This role must be in the same account you use for Kinesis Data Firehose. Cross-account roles aren't allowed.
String catalogId
The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
String databaseName
Specifies the name of the AWS Glue database that contains the schema for the output data.
String tableName
Specifies the AWS Glue table that contains the column information that constitutes your data schema.
String region
If you don't specify an AWS Region, the default is the current Region.
String versionId
Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Kinesis Data Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
ParquetSerDe parquetSerDe
A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
OrcSerDe orcSerDe
A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
KinesisStreamSourceDescription kinesisStreamSourceDescription
The KinesisStreamSourceDescription value for the source Kinesis data stream.
String hECEndpoint
The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends your data.
String hECEndpointType
This type can be either "Raw" or "Event."
String hECToken
This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
Integer hECAcknowledgmentTimeoutInSeconds
The amount of time that Kinesis Data Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Kinesis Data Firehose either tries to send the data again or considers it an error, based on your retry settings.
SplunkRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, Kinesis Data Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedDocumentsOnly.
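The backup-mode rule above can be expressed as a small routing predicate. This is an illustrative sketch of the documented behavior, not Firehose's actual implementation; the method and constant names are assumptions that mirror the documented mode values.

```java
// Hedged sketch: models which records the s3BackupMode setting sends to
// the backup Amazon S3 location, per the description above.
public class S3BackupModeSketch {
    public static final String FAILED_DOCUMENTS_ONLY = "FailedDocumentsOnly";
    public static final String ALL_DOCUMENTS = "AllDocuments";

    // Returns true when a record would be written to the backup S3 location.
    public static boolean backedUpToS3(String s3BackupMode, boolean indexedOk) {
        if (ALL_DOCUMENTS.equals(s3BackupMode)) {
            return true; // all incoming records go to S3, indexed or not
        }
        // Default mode: only records Splunk could not index are backed up.
        return !indexedOk;
    }

    public static void main(String[] args) {
        assert !backedUpToS3(FAILED_DOCUMENTS_ONLY, true);
        assert backedUpToS3(FAILED_DOCUMENTS_ONLY, false);
        assert backedUpToS3(ALL_DOCUMENTS, true);
        System.out.println("ok");
    }
}
```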
S3DestinationConfiguration s3Configuration
The configuration for the backup Amazon S3 location.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String hECEndpoint
The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends your data.
String hECEndpointType
This type can be either "Raw" or "Event."
String hECToken
A GUID you obtain from your Splunk cluster when you create a new HEC endpoint.
Integer hECAcknowledgmentTimeoutInSeconds
The amount of time that Kinesis Data Firehose waits to receive an acknowledgment from Splunk after it sends the data. At the end of the timeout period, Kinesis Data Firehose either tries to send the data again or considers it an error, based on your retry settings.
SplunkRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, Kinesis Data Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedDocumentsOnly.
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
String hECEndpoint
The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends your data.
String hECEndpointType
This type can be either "Raw" or "Event."
String hECToken
A GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
Integer hECAcknowledgmentTimeoutInSeconds
The amount of time that Kinesis Data Firehose waits to receive an acknowledgment from Splunk after it sends data. At the end of the timeout period, Kinesis Data Firehose either tries to send the data again or considers it an error, based on your retry settings.
SplunkRetryOptions retryOptions
The retry behavior in case Kinesis Data Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.
String s3BackupMode
Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, Kinesis Data Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedDocumentsOnly.
S3DestinationUpdate s3Update
Your update to the configuration of the backup Amazon S3 location.
ProcessingConfiguration processingConfiguration
The data processing configuration.
CloudWatchLoggingOptions cloudWatchLoggingOptions
The Amazon CloudWatch logging options for your delivery stream.
Integer durationInSeconds
The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from Splunk after each attempt.
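The accounting rule above (the budget starts after the initial attempt fails and excludes acknowledgment waits) can be sketched as follows. This is an illustrative model of the documented semantics, not the real Firehose retry scheduler; all names are assumptions.

```java
// Hedged sketch: models the durationInSeconds retry budget described above.
// Only time spent retrying counts -- not the initial attempt and not the
// periods spent waiting for a Splunk acknowledgment.
public class SplunkRetryBudgetSketch {
    // Retry time consumed so far: total elapsed time minus the initial
    // attempt and minus every acknowledgment-wait period.
    public static int retrySecondsUsed(int totalElapsed, int initialAttempt, int ackWaitTotal) {
        return totalElapsed - initialAttempt - ackWaitTotal;
    }

    // Another retry is allowed while the budget is not exhausted.
    public static boolean mayRetry(int durationInSeconds, int retrySecondsUsed) {
        return retrySecondsUsed < durationInSeconds;
    }

    public static void main(String[] args) {
        // 300 s elapsed, 10 s initial attempt, 120 s waiting for acks:
        // only 170 s count against the retry budget.
        int used = retrySecondsUsed(300, 10, 120);
        assert used == 170;
        assert mayRetry(180, used);
        assert !mayRetry(170, used);
        System.out.println("ok");
    }
}
```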
String deliveryStreamName
The name of the delivery stream for which you want to enable server-side encryption (SSE).
String deliveryStreamName
The name of the delivery stream for which you want to disable server-side encryption (SSE).
String key
A unique identifier for the tag. Maximum length: 128 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
String value
An optional string, which you can use to describe or define the tag. Maximum length: 256 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
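The tag constraints above (key up to 128 characters and required, value up to 256 characters and optional, both limited to Unicode letters, digits, white space, and _ . / = + - % @) can be checked with a sketch like this. It is a client-side illustration under those stated rules, not the service's own validation.

```java
// Hedged sketch: validates tag keys and values against the character set
// and length limits documented above.
public class TagValidationSketch {
    // Unicode letters, digits, white space, and _ . / = + - % @
    private static final String ALLOWED = "[\\p{L}\\p{N}\\s_./=+\\-%@]+";

    public static boolean validKey(String key) {
        return key != null && !key.isEmpty() && key.length() <= 128
                && key.matches(ALLOWED);
    }

    public static boolean validValue(String value) {
        // The value is optional, so an empty string is acceptable.
        return value != null && value.length() <= 256
                && (value.isEmpty() || value.matches(ALLOWED));
    }

    public static void main(String[] args) {
        assert validKey("env");
        assert validValue("production / us-east-1");
        assert !validKey("bad#key"); // '#' is not in the allowed set
        assert !validKey("");        // a key is required
        System.out.println("ok");
    }
}
```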
String deliveryStreamName
The name of the delivery stream.
String currentDeliveryStreamVersionId
Obtain this value from the VersionId result of DeliveryStreamDescription. This value is required, and it helps the service perform conditional operations. For example, if there is an interleaving update and this value is null, the update destination operation fails. After the update is successful, the VersionId value is updated. The service then performs a merge of the old configuration with the new configuration.
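The conditional-update behavior above is a form of optimistic concurrency control, which can be sketched as follows. The real check and merge happen server-side; this class and its method names are illustrative assumptions only.

```java
// Hedged sketch: models the VersionId check described above. An update is
// accepted only when the caller supplies the current version ID; a stale
// ID means another update interleaved, and the caller must re-read.
public class VersionedDestinationSketch {
    private String config = "initial";
    private long versionId = 1;

    public String getVersionId() { return Long.toString(versionId); }

    public boolean updateDestination(String currentVersionId, String newConfig) {
        if (!Long.toString(versionId).equals(currentVersionId)) {
            return false; // interleaving update detected; reject
        }
        config = newConfig; // the real service merges old + new configuration
        versionId++;        // successful update bumps the version
        return true;
    }

    public static void main(String[] args) {
        VersionedDestinationSketch stream = new VersionedDestinationSketch();
        String v = stream.getVersionId();
        assert stream.updateDestination(v, "splunk-endpoint-a");
        // A second update with the now-stale version ID fails.
        assert !stream.updateDestination(v, "splunk-endpoint-b");
        System.out.println("ok");
    }
}
```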
String destinationId
The ID of the destination.
S3DestinationUpdate s3DestinationUpdate
[Deprecated] Describes an update for a destination in Amazon S3.
ExtendedS3DestinationUpdate extendedS3DestinationUpdate
Describes an update for a destination in Amazon S3.
RedshiftDestinationUpdate redshiftDestinationUpdate
Describes an update for a destination in Amazon Redshift.
ElasticsearchDestinationUpdate elasticsearchDestinationUpdate
Describes an update for a destination in Amazon ES.
SplunkDestinationUpdate splunkDestinationUpdate
Describes an update for a destination in Splunk.