public class GetRecordsRequest extends AmazonWebServiceRequest implements Serializable, Cloneable
Container for the parameters to the GetRecords operation.
Gets data records from a shard. Specify a shard iterator using the ShardIterator parameter. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. If there are no records available in the portion of the shard that the iterator points to, GetRecords returns an empty list. Note that it might take multiple calls to get to a portion of the shard that contains records.

You can scale by provisioning multiple shards. Your application should have one thread per shard, each reading continuously from its stream. To read from a stream continually, call GetRecords in a loop. Use GetShardIterator to get the shard iterator to specify in the first GetRecords call. GetRecords returns a new shard iterator in NextShardIterator. Specify the shard iterator returned in NextShardIterator in subsequent calls to GetRecords. Note that if the shard has been closed, the shard iterator can't return more data and GetRecords returns null in NextShardIterator. You can terminate the loop when the shard is closed, or when the shard iterator reaches the record with the sequence number or other attribute that marks it as the last record to process.
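The read loop described above can be sketched with the v1 AWS SDK for Java. This is a minimal sketch, not a production consumer; the stream name, shard ID, and iterator type below are illustrative assumptions, and the loop terminates when NextShardIterator comes back null:

```java
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.GetRecordsRequest;
import com.amazonaws.services.kinesis.model.GetRecordsResult;
import com.amazonaws.services.kinesis.model.GetShardIteratorRequest;
import com.amazonaws.services.kinesis.model.GetShardIteratorResult;
import com.amazonaws.services.kinesis.model.Record;

public class ShardReader {
    public static void main(String[] args) throws InterruptedException {
        AmazonKinesisClient kinesis = new AmazonKinesisClient();

        // Get the shard iterator for the first GetRecords call.
        GetShardIteratorResult itResult = kinesis.getShardIterator(
                new GetShardIteratorRequest()
                        .withStreamName("my-stream")          // hypothetical stream name
                        .withShardId("shardId-000000000000")  // hypothetical shard ID
                        .withShardIteratorType("TRIM_HORIZON"));
        String shardIterator = itResult.getShardIterator();

        // Loop until the shard is closed (NextShardIterator is null).
        while (shardIterator != null) {
            GetRecordsRequest request = new GetRecordsRequest();
            request.setShardIterator(shardIterator);
            request.setLimit(1000);

            GetRecordsResult result = kinesis.getRecords(request);
            for (Record record : result.getRecords()) {
                System.out.println(record.getSequenceNumber());
            }

            // Feed NextShardIterator into the next GetRecords call.
            shardIterator = result.getNextShardIterator();
            Thread.sleep(1000);  // recommended pause between GetRecords calls
        }
    }
}
```

A real application would also checkpoint its position and handle throttling; this sketch shows only the iterator handoff between calls.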
Each data record can be up to 1 MB in size, and each shard can read up
to 2 MB per second. You can ensure that your calls don't exceed the
maximum supported size or throughput by using the Limit
parameter to specify the maximum number of records that GetRecords can
return. Consider your average record size when determining this limit.
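As a rough illustration of sizing Limit against the 2 MB-per-second shard read rate, the helper below (the class name and the assumed 50 KB average record size are hypothetical) derives a per-call limit for one call per second, clamped to the 1–10,000 range the parameter accepts:

```java
public class LimitCalculator {
    // Derive a GetRecords Limit from an assumed average record size,
    // so one call per second stays under the 2 MB/s shard read rate.
    static int limitFor(long avgRecordSizeBytes) {
        long budgetBytes = 2L * 1024 * 1024;  // 2 MB read throughput per shard per second
        long limit = budgetBytes / avgRecordSizeBytes;
        // GetRecords accepts a Limit between 1 and 10,000.
        return (int) Math.max(1, Math.min(10_000, limit));
    }

    public static void main(String[] args) {
        // Assumed average record size of 50 KB (hypothetical workload).
        System.out.println(limitFor(50L * 1024));  // prints 40
    }
}
```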
The size of the data returned by GetRecords will vary depending on the utilization of the shard. The maximum size of data that GetRecords can return is 10 MB. If a call returns this amount of data, subsequent calls made within the next 5 seconds throw ProvisionedThroughputExceededException. If there is insufficient provisioned throughput on the shard, subsequent calls made within the next 1 second throw ProvisionedThroughputExceededException. Note that GetRecords won't return any data when it throws an exception. For this reason, we recommend that you wait 1 second between calls to GetRecords; however, it's possible that the application will get exceptions for longer than 1 second.
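The recommended one-second wait can be combined with a simple backoff when ProvisionedThroughputExceededException is thrown. A minimal sketch, assuming the v1 client (the helper class and the 10-second cap are hypothetical choices):

```java
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.GetRecordsRequest;
import com.amazonaws.services.kinesis.model.GetRecordsResult;
import com.amazonaws.services.kinesis.model.ProvisionedThroughputExceededException;

public class BackoffReader {
    // Retry GetRecords, backing off when throughput is exceeded.
    // No data is returned when the exception is thrown, so the call
    // can safely be repeated with the same shard iterator.
    static GetRecordsResult getRecordsWithBackoff(AmazonKinesisClient kinesis,
                                                  String shardIterator)
            throws InterruptedException {
        long backoffMillis = 1000;  // start at the recommended 1-second wait
        while (true) {
            try {
                return kinesis.getRecords(
                        new GetRecordsRequest().withShardIterator(shardIterator));
            } catch (ProvisionedThroughputExceededException e) {
                Thread.sleep(backoffMillis);
                backoffMillis = Math.min(backoffMillis * 2, 10_000);  // cap at 10 s
            }
        }
    }
}
```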
To detect whether the application is falling behind in processing, you can use the MillisBehindLatest response attribute. You can also monitor the stream using CloudWatch metrics (see Monitoring Amazon Kinesis in the Amazon Kinesis Developer Guide).
Each Amazon Kinesis record includes a value, ApproximateArrivalTimestamp, that is set when an Amazon Kinesis stream successfully receives and stores a record. This is commonly referred to as a server-side timestamp, as opposed to a client-side timestamp, which is set when a data producer creates or sends the record to a stream. The timestamp has millisecond precision. There are no guarantees about the timestamp accuracy, or that the timestamp is always increasing. For example, records in a shard or across a stream might have timestamps that are out of order.
| Constructor and Description |
| --- |
| GetRecordsRequest() |
| Modifier and Type | Method and Description |
| --- | --- |
| GetRecordsRequest | clone() |
| boolean | equals(Object obj) |
| Integer | getLimit() The maximum number of records to return. |
| String | getShardIterator() The position in the shard from which you want to start sequentially reading data records. |
| int | hashCode() |
| void | setLimit(Integer limit) The maximum number of records to return. |
| void | setShardIterator(String shardIterator) The position in the shard from which you want to start sequentially reading data records. |
| String | toString() Returns a string representation of this object; useful for testing and debugging. |
| GetRecordsRequest | withLimit(Integer limit) The maximum number of records to return. |
| GetRecordsRequest | withShardIterator(String shardIterator) The position in the shard from which you want to start sequentially reading data records. |
Methods inherited from class AmazonWebServiceRequest: copyBaseTo, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestMetricCollector, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestMetricCollector, withGeneralProgressListener, withRequestMetricCollector
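Since withShardIterator and withLimit return the request itself, a request can be built fluently. A minimal sketch (the iterator string is a placeholder, not a real shard iterator):

```java
import com.amazonaws.services.kinesis.model.GetRecordsRequest;

public class BuildRequest {
    public static void main(String[] args) {
        String shardIterator = "AAAA...";  // placeholder; obtain from GetShardIterator

        // The with* setters return this, so the request builds in one expression.
        GetRecordsRequest request = new GetRecordsRequest()
                .withShardIterator(shardIterator)
                .withLimit(500);           // well under the 10,000 maximum

        System.out.println(request.getLimit());
    }
}
```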
public String getShardIterator()

The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.

Constraints:
Length: 1 - 512

Returns: The position in the shard from which you want to start sequentially reading data records.

public void setShardIterator(String shardIterator)

The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.

Constraints:
Length: 1 - 512

Parameters:
shardIterator - The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.

public GetRecordsRequest withShardIterator(String shardIterator)

The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Length: 1 - 512

Parameters:
shardIterator - The position in the shard from which you want to start sequentially reading data records. A shard iterator specifies this position using the sequence number of a data record in the shard.

public Integer getLimit()
The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

Constraints:
Range: 1 - 10000

Returns: The maximum number of records to return.

public void setLimit(Integer limit)

The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

Constraints:
Range: 1 - 10000

Parameters:
limit - The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

public GetRecordsRequest withLimit(Integer limit)

The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

Returns a reference to this object so that method calls can be chained together.

Constraints:
Range: 1 - 10000

Parameters:
limit - The maximum number of records to return. Specify a value of up to 10,000. If you specify a value that is greater than 10,000, GetRecords throws InvalidArgumentException.

public String toString()
Returns a string representation of this object; useful for testing and debugging.

Overrides:
toString in class Object

See Also:
Object.toString()

public GetRecordsRequest clone()

Overrides:
clone in class AmazonWebServiceRequest
Copyright © 2015. All rights reserved.