public class PutRecordsRequestEntry extends Object implements Serializable, Cloneable
Represents a single data record in a `PutRecords` request.
| Constructor and Description |
|---|
| `PutRecordsRequestEntry()` |
| Modifier and Type | Method and Description |
|---|---|
| `PutRecordsRequestEntry` | `clone()` |
| `boolean` | `equals(Object obj)` |
| `ByteBuffer` | `getData()` The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| `String` | `getExplicitHashKey()` The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| `String` | `getPartitionKey()` Determines which shard in the stream the data record is assigned to. |
| `int` | `hashCode()` |
| `void` | `setData(ByteBuffer data)` The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| `void` | `setExplicitHashKey(String explicitHashKey)` The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| `void` | `setPartitionKey(String partitionKey)` Determines which shard in the stream the data record is assigned to. |
| `String` | `toString()` Returns a string representation of this object; useful for testing and debugging. |
| `PutRecordsRequestEntry` | `withData(ByteBuffer data)` The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| `PutRecordsRequestEntry` | `withExplicitHashKey(String explicitHashKey)` The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| `PutRecordsRequestEntry` | `withPartitionKey(String partitionKey)` Determines which shard in the stream the data record is assigned to. |
public ByteBuffer getData()

The data blob to put into the record, which is base64-encoded when the blob is serialized.

Constraints:
Length: 0 - 1048576
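The 0 - 1048576 length constraint applies to the raw bytes, before base64 encoding. A JDK-only sketch of validating a payload against that limit and producing the equivalent base64 form (the `MAX_DATA_BYTES` name and `toDataBlob` helper are mine, not part of the SDK):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DataSizeCheck {
    // Constraint from the docs: 0 - 1048576 bytes, measured before base64 encoding.
    static final int MAX_DATA_BYTES = 1_048_576;

    static ByteBuffer toDataBlob(byte[] payload) {
        if (payload.length > MAX_DATA_BYTES) {
            throw new IllegalArgumentException(
                    "payload is " + payload.length + " bytes; limit is " + MAX_DATA_BYTES);
        }
        return ByteBuffer.wrap(payload);
    }

    public static void main(String[] args) {
        ByteBuffer blob = toDataBlob("hello".getBytes(StandardCharsets.UTF_8));
        // The SDK base64-encodes the blob only when serializing the request;
        // the equivalent encoding looks like this:
        String wire = Base64.getEncoder().encodeToString(blob.array());
        System.out.println(wire); // aGVsbG8=
    }
}
```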
public void setData(ByteBuffer data)

Constraints:
Length: 0 - 1048576

Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).

public PutRecordsRequestEntry withData(ByteBuffer data)
Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: 0 - 1048576

Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).

public String getExplicitHashKey()
Constraints:
Pattern: 0|([1-9]\d{0,38})
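The pattern `0|([1-9]\d{0,38})` admits zero or a non-zero decimal integer of at most 39 digits with no leading zeros, which is enough to cover the full 128-bit MD5 hash range (2^128 - 1 is a 39-digit number). A quick regex check, with the class and method names being my own:

```java
import java.util.regex.Pattern;

public class ExplicitHashKeyCheck {
    // Pattern taken verbatim from the constraint: zero, or a non-zero
    // decimal integer of up to 39 digits with no leading zeros.
    static final Pattern EXPLICIT_HASH_KEY = Pattern.compile("0|([1-9]\\d{0,38})");

    static boolean isValid(String key) {
        return EXPLICIT_HASH_KEY.matcher(key).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("0"));                                       // true
        System.out.println(isValid("340282366920938463463374607431768211455")); // true: 2^128 - 1
        System.out.println(isValid("007"));                                     // false: leading zeros
    }
}
```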
public void setExplicitHashKey(String explicitHashKey)

Constraints:
Pattern: 0|([1-9]\d{0,38})

Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

public PutRecordsRequestEntry withExplicitHashKey(String explicitHashKey)
Returns a reference to this object so that method calls can be chained together.

Constraints:
Pattern: 0|([1-9]\d{0,38})

Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

public String getPartitionKey()
Constraints:
Length: 1 - 256
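The shard mapping described for `setPartitionKey` below (MD5 of the partition key, read as a 128-bit integer) can be sketched with the JDK alone. The `shardFor` helper and the final modulo step are illustrative assumptions of mine; the service actually assigns shards by comparing the hash against each shard's hash key range:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PartitionKeyHash {
    // Sketch of the documented mechanism: MD5 of the partition key, read as
    // an unsigned 128-bit integer. The modulo reduction to a shard index is
    // an illustrative simplification, not the service's exact algorithm.
    static int shardFor(String partitionKey, int shardCount) throws Exception {
        byte[] md5 = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        BigInteger hash = new BigInteger(1, md5); // unsigned 128-bit value
        return hash.mod(BigInteger.valueOf(shardCount)).intValue();
    }

    public static void main(String[] args) throws Exception {
        // MD5 is deterministic, so the same partition key always yields the
        // same hash, hence the same shard.
        System.out.println(shardFor("user-42", 4) == shardFor("user-42", 4)); // true
    }
}
```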
public void setPartitionKey(String partitionKey)

Constraints:
Length: 1 - 256

Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

public PutRecordsRequestEntry withPartitionKey(String partitionKey)
Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: 1 - 256

Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

public String toString()
Returns a string representation of this object; useful for testing and debugging.

Overrides:
toString in class Object

See Also:
Object.toString()
public PutRecordsRequestEntry clone()
Copyright © 2015. All rights reserved.