String streamName
The name of the stream.
com.amazonaws.internal.SdkInternalMap<K,V> tags
The set of key-value pairs to use to create the tags.
String streamName
A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by region. That is, two streams in two different AWS accounts can have the same name, and two streams in the same AWS account, but in two different regions, can have the same name.
Integer shardCount
The number of shards that the stream will use. The throughput of the stream is a function of the number of shards; more shards are required for greater provisioned throughput.
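As an illustrative sketch (not part of the reference), creating a stream with the AWS SDK for Java; the stream name and shard count are hypothetical values:

    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.CreateStreamRequest;

    public class CreateStreamExample {
        public static void main(String[] args) {
            AmazonKinesisClient kinesis = new AmazonKinesisClient();
            // Stream names are scoped to the account and region; the shard
            // count determines the provisioned throughput.
            kinesis.createStream(new CreateStreamRequest()
                    .withStreamName("my-stream")   // hypothetical name
                    .withShardCount(2));
        }
    }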
String streamName
The name of the stream to delete.
StreamDescription streamDescription
The current status of the stream, the stream ARN, an array of shard objects that comprise the stream, and states whether there are more shards available.
String shardIterator
The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.
Integer limit
The maximum number of records to return. Specify a value of up to 10,000.
If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.
com.amazonaws.internal.SdkInternalList<T> records
The data records retrieved from the shard.
String nextShardIterator
The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator will not return any more data.
Long millisBehindLatest
The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates record processing is caught up, and there are no new records to process at this moment.
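A minimal sketch of consuming a shard with these fields, assuming a client and an iterator already obtained from GetShardIterator (described below); the limit value is illustrative:

    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.GetRecordsRequest;
    import com.amazonaws.services.kinesis.model.GetRecordsResult;
    import com.amazonaws.services.kinesis.model.Record;

    public class ShardReader {
        // Reads a shard until the consumer catches up to the tip
        // (MillisBehindLatest == 0) or the shard is closed
        // (NextShardIterator == null).
        static void drain(AmazonKinesisClient kinesis, String shardIterator) {
            while (shardIterator != null) {
                GetRecordsResult result = kinesis.getRecords(new GetRecordsRequest()
                        .withShardIterator(shardIterator)
                        .withLimit(1000));   // must not exceed 10,000
                for (Record record : result.getRecords()) {
                    System.out.println(record.getSequenceNumber());
                }
                if (result.getMillisBehindLatest() != null
                        && result.getMillisBehindLatest() == 0) {
                    break;   // caught up; no new records at this moment
                }
                shardIterator = result.getNextShardIterator();
            }
        }
    }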
String streamName
The name of the stream.
String shardId
The shard ID of the shard to get the iterator for.
String shardIteratorType
Determines how the shard iterator is used to start reading data records from the shard.
The following are the valid shard iterator types:
AT_SEQUENCE_NUMBER - Start reading exactly from the position denoted by a specific sequence number.
AFTER_SEQUENCE_NUMBER - Start reading right after the position denoted by a specific sequence number.
TRIM_HORIZON - Start reading at the last untrimmed record in the shard, which is the oldest data record in the shard.
LATEST - Start reading just after the most recent record in the shard, so that you always read the most recent data.
String startingSequenceNumber
The sequence number of the data record in the shard from which to start reading.
String shardIterator
The position in the shard from which to start reading data records sequentially. A shard iterator specifies this position using the sequence number of a data record in a shard.
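A hedged sketch of obtaining a shard iterator; the stream name and shard ID are placeholders, and TRIM_HORIZON is just one of the valid iterator types listed above:

    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.GetShardIteratorRequest;

    public class IteratorExample {
        public static void main(String[] args) {
            AmazonKinesisClient kinesis = new AmazonKinesisClient();
            // TRIM_HORIZON positions the iterator at the oldest untrimmed
            // record; AT_SEQUENCE_NUMBER or AFTER_SEQUENCE_NUMBER would
            // also require withStartingSequenceNumber(...).
            String shardIterator = kinesis.getShardIterator(new GetShardIteratorRequest()
                    .withStreamName("my-stream")             // hypothetical
                    .withShardId("shardId-000000000000")
                    .withShardIteratorType("TRIM_HORIZON"))
                    .getShardIterator();
            System.out.println(shardIterator);
        }
    }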
com.amazonaws.internal.SdkInternalList<T> streamNames
The names of the streams that are associated with the AWS account making the ListStreams request.
Boolean hasMoreStreams
If set to true, there are more streams available to list.
String streamName
The name of the stream.
String exclusiveStartTagKey
The key to use as the starting point for the list of tags. If this parameter is set, ListTagsForStream gets all tags that occur after ExclusiveStartTagKey.
Integer limit
The number of tags to return. If this number is less than the total number of tags associated with the stream, HasMoreTags is set to true. To list additional tags, set ExclusiveStartTagKey to the last key in the response.
com.amazonaws.internal.SdkInternalList<T> tags
A list of tags associated with StreamName, starting with the first tag after ExclusiveStartTagKey and up to the specified Limit.
Boolean hasMoreTags
If set to true, more tags are available. To request additional tags, set ExclusiveStartTagKey to the key of the last tag returned.
String streamName
The name of the stream to put the data record into.
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key will map to the same shard within the stream.
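The MD5 mapping described above can be reproduced directly; the following sketch (the partition key is illustrative) computes the 128-bit hash key that determines the owning shard:

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class PartitionKeyHash {
        // Maps a partition key to the 128-bit unsigned integer described
        // above; the record lands on whichever shard's HashKeyRange
        // contains this value.
        static BigInteger hashKey(String partitionKey) throws NoSuchAlgorithmException {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(partitionKey.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest);   // 1 = interpret bytes as unsigned
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            System.out.println(hashKey("user-42"));   // hypothetical key
        }
    }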
String explicitHashKey
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
String sequenceNumberForOrdering
Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.
String shardId
The shard ID of the shard where the data record was placed.
String sequenceNumber
The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.
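A sketch of the chaining pattern described under SequenceNumberForOrdering; the stream name, partition key, and payloads are illustrative:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.PutRecordRequest;

    public class OrderedProducer {
        public static void main(String[] args) {
            AmazonKinesisClient kinesis = new AmazonKinesisClient();
            String lastSequenceNumber = null;   // unset for the first record
            for (int n = 0; n < 3; n++) {
                // Chain record n to record n-1 to guarantee strictly
                // increasing sequence numbers on this partition key.
                lastSequenceNumber = kinesis.putRecord(new PutRecordRequest()
                        .withStreamName("my-stream")   // hypothetical
                        .withPartitionKey("device-1")
                        .withData(ByteBuffer.wrap(
                                ("event-" + n).getBytes(StandardCharsets.UTF_8)))
                        .withSequenceNumberForOrdering(lastSequenceNumber))
                        .getSequenceNumber();
            }
        }
    }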
com.amazonaws.internal.SdkInternalList<T> records
The records associated with the request.
String streamName
The stream name associated with the request.
ByteBuffer data
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
String explicitHashKey
The hash value used to explicitly determine the shard the data record is assigned to by overriding the partition key hash.
String partitionKey
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Integer failedRecordCount
The number of unsuccessfully processed records in a PutRecords request.
com.amazonaws.internal.SdkInternalList<T> records
An array of successfully and unsuccessfully processed record results, correlated with the request by natural ordering. A record that is successfully added to your Amazon Kinesis stream includes SequenceNumber and ShardId in the result. A record that fails to be added to your Amazon Kinesis stream includes ErrorCode and ErrorMessage in the result.
String sequenceNumber
The sequence number for an individual record result.
String shardId
The shard ID for an individual record result.
String errorCode
The error code for an individual record result. ErrorCodes can be either ProvisionedThroughputExceededException or InternalFailure.
String errorMessage
The error message for an individual record result. An ErrorCode value of ProvisionedThroughputExceededException has an error message that includes the account ID, stream name, and shard ID. An ErrorCode value of InternalFailure has the error message "Internal Service Failure".
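A sketch of the usual retry pattern implied by these fields, re-sending only the entries whose result carries an ErrorCode; the stream name is illustrative, and a production producer would also back off between attempts:

    import java.util.ArrayList;
    import java.util.List;
    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.PutRecordsRequest;
    import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
    import com.amazonaws.services.kinesis.model.PutRecordsResult;
    import com.amazonaws.services.kinesis.model.PutRecordsResultEntry;

    public class BatchProducer {
        // Result entries line up positionally with request entries, so a
        // failed entry can be located and re-sent by index.
        static void putWithRetry(AmazonKinesisClient kinesis,
                                 List<PutRecordsRequestEntry> entries) {
            while (!entries.isEmpty()) {
                PutRecordsResult result = kinesis.putRecords(new PutRecordsRequest()
                        .withStreamName("my-stream")   // hypothetical
                        .withRecords(entries));
                if (result.getFailedRecordCount() == 0) {
                    return;   // all records stored
                }
                List<PutRecordsRequestEntry> failed = new ArrayList<PutRecordsRequestEntry>();
                for (int i = 0; i < result.getRecords().size(); i++) {
                    PutRecordsResultEntry entry = result.getRecords().get(i);
                    if (entry.getErrorCode() != null) {   // this entry failed
                        failed.add(entries.get(i));
                    }
                }
                entries = failed;   // retry only the failures
            }
        }
    }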
String sequenceNumber
The unique identifier of the record in the stream.
Date approximateArrivalTimestamp
The approximate time that the record was inserted into the stream.
ByteBuffer data
The data blob. The data in the blob is both opaque and immutable to the Amazon Kinesis service, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
String partitionKey
Identifies which shard in the stream the data record is assigned to.
String streamName
The name of the stream.
com.amazonaws.internal.SdkInternalList<T> tagKeys
A list of tag keys. Each corresponding tag is removed from the stream.
String shardId
The unique identifier of the shard within the Amazon Kinesis stream.
String parentShardId
The shard ID of the shard's parent.
String adjacentParentShardId
The shard ID of the shard adjacent to the shard's parent.
HashKeyRange hashKeyRange
The range of possible hash key values for the shard, which is a set of ordered contiguous positive integers.
SequenceNumberRange sequenceNumberRange
The range of possible sequence numbers for the shard.
String streamName
The name of the stream for the shard split.
String shardToSplit
The shard ID of the shard to split.
String newStartingHashKey
A hash key value for the starting hash key of one of the child shards created by the split. The hash key range for a given shard constitutes a set of ordered contiguous positive integers. The value for NewStartingHashKey must be in the range of hash keys being mapped into the shard. The NewStartingHashKey hash key value and all higher hash key values in the hash key range are distributed to one of the child shards. All the lower hash key values in the range are distributed to the other child shard.
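A sketch of an even split, computing NewStartingHashKey as the midpoint of the parent shard's hash key range; names are illustrative:

    import java.math.BigInteger;
    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.Shard;
    import com.amazonaws.services.kinesis.model.SplitShardRequest;

    public class EvenSplit {
        // Splits a shard at the midpoint of its hash key range so that
        // each child shard receives roughly half of the key space.
        static void split(AmazonKinesisClient kinesis, String streamName, Shard shard) {
            BigInteger start = new BigInteger(shard.getHashKeyRange().getStartingHashKey());
            BigInteger end = new BigInteger(shard.getHashKeyRange().getEndingHashKey());
            BigInteger midpoint = start.add(end).divide(BigInteger.valueOf(2));
            kinesis.splitShard(new SplitShardRequest()
                    .withStreamName(streamName)
                    .withShardToSplit(shard.getShardId())
                    .withNewStartingHashKey(midpoint.toString()));
        }
    }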
String streamName
The name of the stream being described.
String streamARN
The Amazon Resource Name (ARN) for the stream being described.
String streamStatus
The current status of the stream being described.
The stream status is one of the following states:
CREATING - The stream is being created. Amazon Kinesis immediately returns and sets StreamStatus to CREATING.
DELETING - The stream is being deleted. The specified stream is in the DELETING state until Amazon Kinesis completes the deletion.
ACTIVE - The stream exists and is ready for read and write operations or deletion. You should perform read and write operations only on an ACTIVE stream.
UPDATING - Shards in the stream are being merged or split. Read and write operations continue to work while the stream is in the UPDATING state.
com.amazonaws.internal.SdkInternalList<T> shards
The shards that comprise the stream.
Boolean hasMoreShards
If set to true, more shards in the stream are available to describe.
Integer retentionPeriodHours
The current retention period, in hours.
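A sketch of waiting for the ACTIVE state described above by polling DescribeStream; the sleep interval is an arbitrary choice:

    import com.amazonaws.services.kinesis.AmazonKinesisClient;
    import com.amazonaws.services.kinesis.model.DescribeStreamRequest;

    public class StreamWaiter {
        // Polls DescribeStream until the stream reaches ACTIVE, the only
        // state in which reads and writes are recommended.
        static void waitForActive(AmazonKinesisClient kinesis, String streamName)
                throws InterruptedException {
            while (true) {
                String status = kinesis
                        .describeStream(new DescribeStreamRequest()
                                .withStreamName(streamName))
                        .getStreamDescription()
                        .getStreamStatus();
                if ("ACTIVE".equals(status)) {
                    return;
                }
                Thread.sleep(10000);   // poll sparingly; DescribeStream is rate-limited
            }
        }
    }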
String key
A unique identifier for the tag. Maximum length: 128 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
String value
An optional string, typically used to describe or define the tag. Maximum length: 256 characters. Valid characters: Unicode letters, digits, white space, _ . / = + - % @
Integer sizeInMBs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting SizeInMBs to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, set SizeInMBs to 10 or higher.
Integer intervalInSeconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300.
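Following the SizeInMBs guidance above, a sketch of buffering hints for a stream ingesting roughly 1 MB/sec; the values are illustrative:

    import com.amazonaws.services.kinesisfirehose.model.BufferingHints;

    public class BufferingExample {
        public static void main(String[] args) {
            // For ~1 MB/sec of ingest, 10 MB exceeds a typical 10 seconds
            // of data, per the recommendation above.
            BufferingHints hints = new BufferingHints()
                    .withSizeInMBs(10)
                    .withIntervalInSeconds(300);   // the default interval
            System.out.println(hints);
        }
    }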
String dataTableName
The name of the target table. The table must already exist in the database.
String dataTableColumns
A comma-separated list of column names.
String copyOptions
Optional parameters to use with the Amazon Redshift COPY command. For more information, see the "Optional Parameters" section of the Amazon Redshift COPY command documentation. Some possible examples that would apply to Amazon Kinesis Firehose are as follows:
delimiter '\t' lzop; - fields are delimited with "\t" (TAB character) and compressed using lzop.
delimiter '|' - fields are delimited with "|" (this is the default delimiter).
delimiter '|' escape - the delimiter should be escaped.
fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - fields are fixed width in the source, with each width specified after every column in the table.
JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path specified is the format of the data.
For more examples, see the Amazon Redshift COPY command examples.
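A sketch of a CopyCommand using the first copyOptions example above; the table name and columns are hypothetical (note the doubled backslash needed in Java source to produce the literal \t):

    import com.amazonaws.services.kinesisfirehose.model.CopyCommand;

    public class CopyCommandExample {
        public static void main(String[] args) {
            // Matches the first copyOptions example above: tab-delimited,
            // lzop-compressed input. Table and columns are hypothetical.
            CopyCommand copy = new CopyCommand()
                    .withDataTableName("events")
                    .withDataTableColumns("venueid,venuename,venuecity")
                    .withCopyOptions("delimiter '\\t' lzop;");   // literal \t for Redshift
            System.out.println(copy);
        }
    }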
String deliveryStreamName
The name of the delivery stream.
S3DestinationConfiguration s3DestinationConfiguration
The destination in Amazon S3. This value must be specified if RedshiftDestinationConfiguration is specified (see restrictions listed above).
RedshiftDestinationConfiguration redshiftDestinationConfiguration
The destination in Amazon Redshift. This value cannot be specified if Amazon S3 is the desired destination (see restrictions listed above).
String deliveryStreamARN
The ARN of the delivery stream.
String deliveryStreamName
The name of the delivery stream.
String deliveryStreamName
The name of the delivery stream.
String deliveryStreamARN
The Amazon Resource Name (ARN) of the delivery stream.
String deliveryStreamStatus
The status of the delivery stream.
String versionId
Used when calling the UpdateDestination operation. Each time the destination is updated for the delivery stream, the VersionId is changed, and the current VersionId is required when updating the destination. This is so that the service knows it is applying the changes to the correct version of the delivery stream.
Date createTimestamp
The date and time that the delivery stream was created.
Date lastUpdateTimestamp
The date and time that the delivery stream was last updated.
List<E> destinations
The destinations.
Boolean hasMoreDestinations
Indicates whether there are more destinations available to list.
String deliveryStreamName
The name of the delivery stream.
Integer limit
The limit on the number of destinations to return. Currently, you can have one destination per delivery stream.
String exclusiveStartDestinationId
Specifies the destination ID from which to start returning destination information. Currently, Amazon Kinesis Firehose supports one destination per delivery stream.
DeliveryStreamDescription deliveryStreamDescription
Information about the delivery stream.
String destinationId
The ID of the destination.
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
RedshiftDestinationDescription redshiftDestinationDescription
The destination in Amazon Redshift.
String noEncryptionConfig
Specifically override existing encryption information to ensure no encryption is used.
KMSEncryptionConfig kMSEncryptionConfig
The encryption key.
String aWSKMSKeyARN
The ARN of the encryption key. Must belong to the same region as the destination Amazon S3 bucket.
String recordId
The ID of the record.
ByteBuffer data
The data blob, which is base64-encoded when the blob is serialized. The maximum size of the data blob, before base64-encoding, is 1,000 KB.
String roleARN
The ARN of the AWS credentials.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
String password
The user password.
S3DestinationConfiguration s3Configuration
The S3 configuration for the intermediate location from which Amazon Redshift obtains data. Restrictions are described in the topic for CreateDeliveryStream.
The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
String roleARN
The ARN of the AWS credentials.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY command.
String username
The name of the user.
S3DestinationDescription s3DestinationDescription
The Amazon S3 destination.
String roleARN
The ARN of the AWS credentials.
String clusterJDBCURL
The database connection string.
CopyCommand copyCommand
The COPY
command.
String username
The name of the user.
String password
The user password.
S3DestinationUpdate s3Update
The Amazon S3 destination.
The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationUpdate.S3Update because the Amazon Redshift COPY operation that reads from the S3 bucket doesn't support these compression formats.
String roleARN
The ARN of the AWS credentials.
String bucketARN
The ARN of the S3 bucket.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the guide-fh-dev.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
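A sketch of assembling this configuration; the ARNs and prefix are placeholders, and GZIP is chosen because SNAPPY and ZIP are excluded for Redshift destinations as noted above:

    import com.amazonaws.services.kinesisfirehose.model.BufferingHints;
    import com.amazonaws.services.kinesisfirehose.model.CompressionFormat;
    import com.amazonaws.services.kinesisfirehose.model.S3DestinationConfiguration;

    public class S3ConfigExample {
        public static void main(String[] args) {
            S3DestinationConfiguration s3Config = new S3DestinationConfiguration()
                    .withRoleARN("arn:aws:iam::123456789012:role/firehose-role")   // placeholder
                    .withBucketARN("arn:aws:s3:::my-firehose-bucket")              // placeholder
                    .withPrefix("events/")   // prepended to the YYYY/MM/DD/HH prefix
                    .withBufferingHints(new BufferingHints()
                            .withSizeInMBs(5)
                            .withIntervalInSeconds(300))
                    // GZIP rather than SNAPPY or ZIP, which the Redshift
                    // COPY operation does not support.
                    .withCompressionFormat(CompressionFormat.GZIP);
            System.out.println(s3Config);
        }
    }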
String roleARN
The ARN of the AWS credentials.
String bucketARN
The ARN of the S3 bucket.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the guide-fh-dev.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
String roleARN
The ARN of the AWS credentials.
String bucketARN
The ARN of the S3 bucket.
String prefix
The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the guide-fh-dev.
BufferingHints bufferingHints
The buffering option. If no value is specified, BufferingHints object default values are used.
String compressionFormat
The compression format. If no value is specified, the default is UNCOMPRESSED.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
EncryptionConfiguration encryptionConfiguration
The encryption configuration. If no value is specified, the default is no encryption.
String deliveryStreamName
The name of the delivery stream.
String currentDeliveryStreamVersionId
Obtain this value from the VersionId field of the DeliveryStreamDescription returned by the DescribeDeliveryStream operation. This value is required, and helps the service to perform conditional operations. For example, if there is an interleaving update and this value is null, then the update destination fails. After the update is successful, the VersionId value is updated. The service then performs a merge of the old configuration with the new configuration.
String destinationId
The ID of the destination.
S3DestinationUpdate s3DestinationUpdate
Describes an update for a destination in Amazon S3.
RedshiftDestinationUpdate redshiftDestinationUpdate
Describes an update for a destination in Amazon Redshift.
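A sketch of the version-checked update flow described under CurrentDeliveryStreamVersionId; the delivery stream name and prefix change are illustrative, and the first destination is assumed:

    import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClient;
    import com.amazonaws.services.kinesisfirehose.model.DeliveryStreamDescription;
    import com.amazonaws.services.kinesisfirehose.model.DescribeDeliveryStreamRequest;
    import com.amazonaws.services.kinesisfirehose.model.S3DestinationUpdate;
    import com.amazonaws.services.kinesisfirehose.model.UpdateDestinationRequest;

    public class UpdateDestinationExample {
        public static void main(String[] args) {
            AmazonKinesisFirehoseClient firehose = new AmazonKinesisFirehoseClient();
            // Fetch the current VersionId and destination ID first; the
            // update is conditional on the version still being current.
            DeliveryStreamDescription description = firehose
                    .describeDeliveryStream(new DescribeDeliveryStreamRequest()
                            .withDeliveryStreamName("my-delivery-stream"))   // hypothetical
                    .getDeliveryStreamDescription();
            firehose.updateDestination(new UpdateDestinationRequest()
                    .withDeliveryStreamName("my-delivery-stream")
                    .withCurrentDeliveryStreamVersionId(description.getVersionId())
                    .withDestinationId(description.getDestinations().get(0).getDestinationId())
                    .withS3DestinationUpdate(new S3DestinationUpdate()
                            .withPrefix("updated-events/")));   // illustrative change
        }
    }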