public class PutRecordsRequestEntry extends Object implements Serializable, Cloneable
Represents a single record in a PutRecords request.
| Constructor and Description |
| --- |
| PutRecordsRequestEntry() |
| Modifier and Type | Method and Description |
| --- | --- |
| PutRecordsRequestEntry | clone() |
| boolean | equals(Object obj) |
| ByteBuffer | getData() The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| String | getExplicitHashKey() The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| String | getPartitionKey() Determines which shard in the stream the data record is assigned to. |
| int | hashCode() |
| void | setData(ByteBuffer data) The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| void | setExplicitHashKey(String explicitHashKey) The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| void | setPartitionKey(String partitionKey) Determines which shard in the stream the data record is assigned to. |
| String | toString() Returns a string representation of this object; useful for testing and debugging. |
| PutRecordsRequestEntry | withData(ByteBuffer data) The data blob to put into the record, which is base64-encoded when the blob is serialized. |
| PutRecordsRequestEntry | withExplicitHashKey(String explicitHashKey) The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash. |
| PutRecordsRequestEntry | withPartitionKey(String partitionKey) Determines which shard in the stream the data record is assigned to. |
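For orientation, a minimal usage sketch, assuming the AWS SDK for Java 1.x Kinesis client; the stream name "my-stream" and the payload and key values are hypothetical, and AmazonKinesisClientBuilder is assumed to be available in the SDK version in use:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
import com.amazonaws.services.kinesis.model.PutRecordsResult;

public class PutRecordsExample {
    public static void main(String[] args) {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        List<PutRecordsRequestEntry> entries = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            // The SDK base64-encodes the data blob when the request is
            // serialized; callers pass raw bytes and must not pre-encode them.
            entries.add(new PutRecordsRequestEntry()
                    .withData(ByteBuffer.wrap(
                            ("payload-" + i).getBytes(StandardCharsets.UTF_8)))
                    .withPartitionKey("key-" + i));
        }

        PutRecordsRequest request = new PutRecordsRequest()
                .withStreamName("my-stream") // hypothetical stream name
                .withRecords(entries);

        PutRecordsResult result = kinesis.putRecords(request);
        System.out.println("Failed records: " + result.getFailedRecordCount());
    }
}
```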
public void setData(ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
The AWS SDK for Java performs Base64 encoding on this field before sending the request to the AWS service, so users of the SDK should not Base64-encode this field themselves.
Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call ByteBuffer.duplicate() or ByteBuffer.asReadOnlyBuffer() before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.
Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
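Given the mutability warning above, a short sketch of one defensive pattern when supplying data; the payload bytes and partition key below are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class SetDataExample {
    public static void main(String[] args) {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);

        PutRecordsRequestEntry entry = new PutRecordsRequestEntry();
        // Wrap a private copy so later changes to `payload` elsewhere cannot
        // alter the record: the entry shares whatever buffer it is handed.
        entry.setData(ByteBuffer.wrap(payload.clone()));
        entry.setPartitionKey("order-123"); // hypothetical partition key
    }
}
```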
public ByteBuffer getData()
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
ByteBuffers are stateful. Calling their get methods changes their position. We recommend using ByteBuffer.asReadOnlyBuffer() to create a read-only view of the buffer with an independent position, and calling get methods on this rather than directly on the returned ByteBuffer. Doing so will ensure that anyone else using the ByteBuffer will not be affected by changes to the position.
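A brief sketch of that recommended read pattern, using ByteBuffer.asReadOnlyBuffer() so other readers of the same buffer are unaffected; the payload is illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class GetDataExample {
    public static void main(String[] args) {
        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withData(ByteBuffer.wrap(
                        "hello".getBytes(StandardCharsets.UTF_8)));

        // Read through an independent view so the entry's buffer position
        // is left untouched for anyone else holding a reference to it.
        ByteBuffer view = entry.getData().asReadOnlyBuffer();
        byte[] bytes = new byte[view.remaining()];
        view.get(bytes);
        System.out.println(new String(bytes, StandardCharsets.UTF_8));
    }
}
```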
public PutRecordsRequestEntry withData(ByteBuffer data)
The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).
Parameters:
data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).

public void setExplicitHashKey(String explicitHashKey)
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

public String getExplicitHashKey()
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
public PutRecordsRequestEntry withExplicitHashKey(String explicitHashKey)
The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
Parameters:
explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.
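A sketch of overriding shard selection with an explicit hash key. The explicit hash key is a decimal 128-bit integer string; the record goes to whichever shard's hash key range contains that value, bypassing the MD5 hash of the partition key. The values below are illustrative, and a partition key is still required by the API even when it no longer drives routing:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;

public class ExplicitHashKeyExample {
    public static void main(String[] args) {
        PutRecordsRequestEntry entry = new PutRecordsRequestEntry()
                .withPartitionKey("ignored-for-routing") // required, but not used for shard selection here
                .withData(ByteBuffer.wrap(
                        "payload".getBytes(StandardCharsets.UTF_8)))
                // "0" targets the shard whose hash key range starts at the
                // bottom of the 128-bit space; pick a value inside the
                // target shard's range in practice.
                .withExplicitHashKey("0");
    }
}
```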
public void setPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
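Because the mapping described above is plain MD5, the 128-bit integer can be reproduced client-side. A sketch for a hypothetical partition key; this mirrors the documented mechanism and is not an SDK API:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PartitionKeyHashExample {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        String partitionKey = "user-42"; // hypothetical key

        // Kinesis hashes the partition key with MD5 and treats the digest as
        // an unsigned 128-bit integer, then routes the record to the shard
        // whose hash key range contains that integer.
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(partitionKey.getBytes(StandardCharsets.UTF_8));
        BigInteger hashKey = new BigInteger(1, digest); // unsigned interpretation

        System.out.println(partitionKey + " -> " + hashKey);
        // The same key always yields the same integer, which is why all
        // records sharing a partition key map to the same shard.
    }
}
```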
public String getPartitionKey()
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
public PutRecordsRequestEntry withPartitionKey(String partitionKey)
Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
Parameters:
partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.

public String toString()
Returns a string representation of this object; useful for testing and debugging.
Overrides:
toString in class Object
See Also:
Object.toString()
public PutRecordsRequestEntry clone()