fs2.aws
package dynamodb

Linear Supertypes
AnyRef, Any

Package Members

  1. package parsers

Type Members

  1. case class CommittableRecord(shardId: String, recordProcessorStartingSequenceNumber: ExtendedSequenceNumber, millisBehindLatest: Long, record: RecordAdapter, recordProcessor: RecordProcessor, checkpointer: IRecordProcessorCheckpointer, inFlightRecordsPhaser: Phaser) extends Product with Serializable

    A message type from Kinesis which has not yet been committed or checkpointed.

    shardId

    the unique identifier for the shard from which this record originated

    millisBehindLatest

    ms behind the latest record, used to detect if the consumer is lagging the producer

    record

    the original record document from Kinesis

    recordProcessor

    reference to the record processor that is responsible for processing this message

    checkpointer

    reference to the checkpointer used to commit this record

  2. class KinesisCheckpointSettings extends AnyRef

    Settings for configuring the Kinesis checkpointer pipe

  3. class KinesisStreamSettings extends AnyRef

    Settings for configuring the Kinesis consumer stream

  4. class RecordProcessor extends IRecordProcessor

    Concrete implementation of the AWS RecordProcessor interface. Wraps incoming records into CommittableRecord values to allow for downstream checkpointing.
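A consumer will often inspect CommittableRecord fields before deciding how to act. A minimal sketch using only the constructor parameters shown in the signature above; the threshold value and helper name are illustrative, not part of the library:

```scala
// Sketch: detecting consumer lag from a CommittableRecord.
// `millisBehindLatest` reports how far this record trails the stream tip;
// the 30000L default is an illustrative threshold, not a library constant.
def isLagging(record: CommittableRecord, thresholdMillis: Long = 30000L): Boolean =
  record.millisBehindLatest > thresholdMillis
```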

Value Members

  1. def checkpointRecords[F[_]](checkpointSettings: KinesisCheckpointSettings = KinesisCheckpointSettings.defaultInstance, parallelism: Int = 10)(implicit arg0: ConcurrentEffect[F], arg1: Timer[F]): Pipe[F, CommittableRecord, Record]

    Pipe to checkpoint records in Kinesis, marking them as processed. Groups records by shard id, so that each shard is subject to its own clustering of records. After accumulating maxBatchSize or reaching maxBatchWait for a given shard, the latest record is checkpointed. By design, all records prior to the checkpointed record are also checkpointed in Kinesis.

    F

    effect type of the fs2 stream

    checkpointSettings

    configure maxBatchSize and maxBatchWait time before triggering a checkpoint

    returns

    a stream of Record types representing checkpointed messages

  2. def checkpointRecords_[F[_]](checkpointSettings: KinesisCheckpointSettings = KinesisCheckpointSettings.defaultInstance)(implicit F: ConcurrentEffect[F], timer: Timer[F]): Pipe[F, CommittableRecord, Unit]

    Sink to checkpoint records in Kinesis, marking them as processed. Groups records by shard id, so that each shard is subject to its own clustering of records. After accumulating maxBatchSize or reaching maxBatchWait for a given shard, the latest record is checkpointed. By design, all records prior to the checkpointed record are also checkpointed in Kinesis.

    F

    effect type of the fs2 stream

    checkpointSettings

    configure maxBatchSize and maxBatchWait time before triggering a checkpoint

    returns

    a Sink that accepts a stream of CommittableRecords

  3. def readFromDynamDBStream[F[_]](appName: String, streamName: String)(implicit arg0: ConcurrentEffect[F], arg1: ContextShift[F]): Stream[F, CommittableRecord]

    Initialize a worker and start streaming records from a Kinesis stream. On stream finish (due to error or otherwise), the worker will be shut down.

    F

    effect type of the fs2 stream

    appName

    name of the Kinesis application. Used by KCL when resharding

    streamName

    name of the Kinesis stream to consume from

    returns

    an infinite fs2 Stream that emits Kinesis Records

  4. def readFromDynamoDBStream[F[_]](workerConfiguration: KinesisClientLibConfiguration, dynamoDBStreamsClient: AmazonDynamoDBStreams = AmazonDynamoDBStreamsClientBuilder.standard().withRegion(Regions.US_EAST_1).build(), dynamoDBClient: AmazonDynamoDB = AmazonDynamoDBClientBuilder.standard().withRegion(Regions.US_EAST_1).build(), cloudWatchClient: AmazonCloudWatch = AmazonCloudWatchClientBuilder.standard().withRegion(Regions.US_EAST_1).build(), streamConfig: KinesisStreamSettings = KinesisStreamSettings.defaultInstance)(implicit arg0: ConcurrentEffect[F], arg1: ContextShift[F]): Stream[F, CommittableRecord]

    Initialize a worker and start streaming records from a Kinesis stream. On stream finish (due to error or otherwise), the worker will be shut down.

    F

    effect type of the fs2 stream

    streamConfig

    configuration for the internal stream

    returns

    an infinite fs2 Stream that emits Kinesis Records

  5. object CommittableRecord extends Serializable
  6. object KinesisCheckpointSettings
  7. object KinesisStreamSettings
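Putting the members above together, a typical pipeline reads committable records, performs the application's processing effect, and then pipes the records through the checkpointer so that checkpointing only happens after processing succeeds. A hedged sketch assuming only the signatures listed on this page; the application name, stream name, and `process` function are placeholders:

```scala
import cats.effect.{ConcurrentEffect, ContextShift, Timer}
import fs2.Stream
import fs2.aws.dynamodb._

// Sketch only: "my-app" / "my-table-stream" and `process` are placeholders.
def checkpointedStream[F[_]: ConcurrentEffect: ContextShift: Timer](
    process: CommittableRecord => F[Unit] // user-supplied per-record effect
): Stream[F, Unit] =
  readFromDynamDBStream[F]("my-app", "my-table-stream") // emits CommittableRecord
    .evalTap(process)                 // do the work first...
    .through(checkpointRecords_[F]()) // ...then checkpoint (Sink variant)
```

Using the `checkpointRecords_` sink discards the checkpointed records; swap in `checkpointRecords` when the downstream pipeline needs the resulting `Record` values.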
