A B C D E F G H I J K L M N O P R S T U V W
All Classes All Packages
A
- abortTransaction() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- abortTransaction(String) - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.TransactionAborter
- abortTransactions(TransactionAbortStrategyImpl.Context) - Method in enum org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl
-
Aborts all transactions that have been created by this subtask in a previous run.
- addGroup(String) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- addGroup(String, String) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- addPostCommitTopology(DataStream<CommittableMessage<KafkaCommittable>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- addReader(int) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
-
NOTE: this happens at startup and failover.
- addReader(int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- addSplits(List<DynamicKafkaSourceSplit>) - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- addSplitsBack(List<DynamicKafkaSourceSplit>, int) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
- addSplitsBack(List<KafkaPartitionSplit>, int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- AdminUtils - Class in org.apache.flink.connector.kafka.util
-
Utility methods for Kafka admin operations.
- ALL - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
- applyReadableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- applyWatermark(WatermarkStrategy<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- applyWritableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- ASSIGNED - org.apache.flink.connector.kafka.source.enumerator.AssignmentStatus
-
Partitions that have been assigned to readers.
- assignedPartitions() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- assignmentStatus() - Method in class org.apache.flink.connector.kafka.source.enumerator.TopicPartitionAndAssignmentStatus
- AssignmentStatus - Enum in org.apache.flink.connector.kafka.source.enumerator
-
Status of partition assignment.
- assignSplits(SplitsAssignment<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Wrap splits with cluster metadata.
- asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
B
- Backchannel - Interface in org.apache.flink.connector.kafka.sink.internal
-
A backchannel for communication from the committer to the writer.
- BackchannelFactory - Class in org.apache.flink.connector.kafka.sink.internal
-
Creates and manages backchannels for the Kafka sink.
- BackchannelImpl<T> - Class in org.apache.flink.connector.kafka.sink.internal
-
A backchannel for communication from the Kafka committer to the writer.
- beginningOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
List beginning offsets for the specified partitions.
- beginningOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- beginTransaction() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- boundedMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The bounded mode for the contained consumer (default is an unbounded data stream).
- BoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.config
-
End modes for the Kafka Consumer.
- boundedTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
- build() - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Construct the source with the configuration that was set.
- build() - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Constructs the KafkaRecordSerializationSchemaBuilder with the configured properties.
- build() - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Constructs the KafkaSink with the configured properties.
- build() - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Build the KafkaSource.
- builder() - Static method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Get a builder for this source.
- builder() - Static method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Creates a default schema builder to provide common building blocks i.e.
- builder() - Static method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
Create a KafkaSinkBuilder to construct a new KafkaSink.
- builder() - Static method in class org.apache.flink.connector.kafka.source.KafkaSource
-
Get a KafkaSourceBuilder to build a KafkaSource.
- buildTransactionalId(long) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- buildTransactionalId(long) - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl.Context
- buildTransactionalId(String, int, long) - Static method in class org.apache.flink.connector.kafka.sink.internal.TransactionalIdFactory
-
Constructs a transactionalId with the following format: transactionalIdPrefix-subtaskId-checkpointOffset.
- BYTES_CONSUMED_TOTAL - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
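The buildTransactionalId entry documents the id layout transactionalIdPrefix-subtaskId-checkpointOffset. A minimal sketch of that naming scheme (a hypothetical helper for illustration, not the actual TransactionalIdFactory code):

```java
// Illustrative sketch of the documented transactionalId format.
// The class and method names here are hypothetical; the real logic
// lives in org.apache.flink.connector.kafka.sink.internal.TransactionalIdFactory.
public class TransactionalIdSketch {
    public static String buildTransactionalId(String prefix, int subtaskId, long checkpointOffset) {
        // Join the three documented components with '-' separators.
        return prefix + "-" + subtaskId + "-" + checkpointOffset;
    }

    public static void main(String[] args) {
        System.out.println(buildTransactionalId("my-sink", 3, 42L)); // my-sink-3-42
    }
}
```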
C
- callAsync(Callable<T>, BiConsumer<T, Throwable>) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Execute the one time callables in the coordinator.
- callAsync(Callable<T>, BiConsumer<T, Throwable>, long, long) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Schedules the task via an internal thread pool and proxies it so that the task handler callback executes in the single-threaded source coordinator thread pool, avoiding the need for synchronization.
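The pattern described for callAsync can be illustrated with a self-contained sketch using plain java.util.concurrent (this is a stand-in, not Flink's actual StoppableKafkaEnumContextProxy): a worker pool runs the potentially blocking task, and the handler callback is re-dispatched onto a single-threaded executor that plays the role of the coordinator thread.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.BiConsumer;

// Hypothetical sketch of running async tasks whose callbacks all land
// on one "coordinator" thread, so handlers never need synchronization.
public class CallAsyncSketch {
    private final ExecutorService worker = Executors.newCachedThreadPool();
    private final ExecutorService coordinator = Executors.newSingleThreadExecutor();

    public <T> void callAsync(Callable<T> callable, BiConsumer<T, Throwable> handler) {
        // Run the (potentially blocking) task on the worker pool...
        worker.submit(() -> {
            try {
                T result = callable.call();
                // ...but deliver the result on the single coordinator thread.
                coordinator.execute(() -> handler.accept(result, null));
            } catch (Throwable t) {
                coordinator.execute(() -> handler.accept(null, t));
            }
        });
    }

    public void shutdown() {
        worker.shutdown();
        coordinator.shutdown();
    }
}
```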
- CheckpointTransaction - Class in org.apache.flink.connector.kafka.sink.internal
-
An immutable class that represents a transactional id and a checkpoint id.
- CheckpointTransaction(String, long) - Constructor for class org.apache.flink.connector.kafka.sink.internal.CheckpointTransaction
- CLIENT_ID_PREFIX - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- close() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.SingleClusterTopicMetadataService
- close() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
- close() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Note that we can't close the source coordinator here, because these contexts can be closed during metadata change when the coordinator still needs to continue to run.
- close() - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroupManager
- close() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- close() - Method in interface org.apache.flink.connector.kafka.sink.internal.Backchannel
- close() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- close() - Method in class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
- close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- close() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- close(String) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroupManager
- close(Duration) - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- ClusterMetadata - Class in org.apache.flink.connector.kafka.dynamic.metadata
-
ClusterMetadata provides readers with information about which topics to read on a cluster and how to connect to it.
- ClusterMetadata(Set<String>, Properties) - Constructor for class org.apache.flink.connector.kafka.dynamic.metadata.ClusterMetadata
-
Constructs the ClusterMetadata with the required properties.
- COMMIT_OFFSETS_ON_CHECKPOINT - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- commitOffsets(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
- COMMITS_FAILED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- COMMITS_SUCCEEDED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- COMMITTED_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- COMMITTED_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- committedOffsets() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the committed offsets.
- committedOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
The group id should be set for the KafkaSource before invoking this method.
- committedOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- committedOffsets(OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the committed offsets.
- commitTransaction() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- consumedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Data type of the consumed data.
- CONSUMER_FETCH_MANAGER_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- copyProperties(Properties, Properties) - Static method in class org.apache.flink.connector.kafka.source.KafkaPropertiesUtil
- counter(String) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- counter(String, C) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- create(SplitEnumeratorContext<DynamicKafkaSourceSplit>, String, KafkaMetadataService, Runnable) - Method in interface org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy.StoppableKafkaEnumContextProxyFactory
- createCommitter(CommitterInitContext) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- createEnumerator(SplitEnumeratorContext<DynamicKafkaSourceSplit>) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Create the DynamicKafkaSourceEnumerator.
- createEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- createKafkaSource(DeserializationSchema<RowData>, DeserializationSchema<RowData>, TypeInformation<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- createKafkaTableSink(DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, KafkaPartitioner<RowData>, DeliveryGuarantee, Integer, String, TransactionNamingStrategy) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createKafkaTableSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<TopicPartition, Long>, long, BoundedMode, Map<TopicPartition, Long>, long, String, Integer) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createObjectMapper() - Static method in class org.apache.flink.connector.kafka.util.JacksonMapperFactory
- createObjectMapper(JsonFactory) - Static method in class org.apache.flink.connector.kafka.util.JacksonMapperFactory
- createReader(SourceReaderContext) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Create the DynamicKafkaSourceReader.
- createReader(SourceReaderContext) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- createRuntimeDecoder(DynamicTableSource.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- createRuntimeEncoder(DynamicTableSink.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- createWriter(WriterInitContext) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- createWriter(WriterInitContext) - Method in interface org.apache.flink.connector.kafka.sink.TwoPhaseCommittingStatefulSink
- CURRENT_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- currentParallelism() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
D
- datasetOf(String, KafkaDatasetFacet) - Static method in class org.apache.flink.connector.kafka.lineage.LineageUtil
- datasetOf(String, KafkaDatasetFacet, TypeDatasetFacet) - Static method in class org.apache.flink.connector.kafka.lineage.LineageUtil
- DecodingFormatWrapper(DecodingFormat<DeserializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- DEFAULT - Static variable in enum org.apache.flink.connector.kafka.sink.TransactionNamingStrategy
-
The default transaction naming strategy.
- DefaultKafkaDatasetFacet - Class in org.apache.flink.connector.kafka.lineage
-
Default implementation of KafkaDatasetFacet.
- DefaultKafkaDatasetFacet(KafkaDatasetIdentifier) - Constructor for class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- DefaultKafkaDatasetFacet(KafkaDatasetIdentifier, Properties) - Constructor for class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- DefaultKafkaDatasetIdentifier - Class in org.apache.flink.connector.kafka.lineage
-
Default implementation of KafkaDatasetIdentifier.
- DefaultKafkaSinkContext - Class in org.apache.flink.connector.kafka.sink
-
Context providing information to assist constructing a ProducerRecord.
- DefaultKafkaSinkContext(int, int, Properties) - Constructor for class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- DefaultTypeDatasetFacet - Class in org.apache.flink.connector.kafka.lineage
-
Default implementation of TypeDatasetFacet.
- DefaultTypeDatasetFacet(TypeInformation) - Constructor for class org.apache.flink.connector.kafka.lineage.DefaultTypeDatasetFacet
- DELIVERY_GUARANTEE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- describeStreams(Collection<String>) - Method in interface org.apache.flink.connector.kafka.dynamic.metadata.KafkaMetadataService
-
Get current metadata for queried streams.
- describeStreams(Collection<String>) - Method in class org.apache.flink.connector.kafka.dynamic.metadata.SingleClusterTopicMetadataService
-
Get current metadata for queried streams.
- deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumStateSerializer
- deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplitSerializer
- deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- deserialize(ConsumerRecord<byte[], byte[]>, Collector<ObjectNode>) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Deserializes the byte message.
- DISABLED - Static variable in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- DYNAMIC_KAFKA_SOURCE_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- DynamicKafkaSource<T> - Class in org.apache.flink.connector.kafka.dynamic.source
-
Factory class for the DynamicKafkaSource components.
- DynamicKafkaSourceBuilder<T> - Class in org.apache.flink.connector.kafka.dynamic.source
-
A builder class to make it easier for users to construct a DynamicKafkaSource.
- DynamicKafkaSourceEnumerator - Class in org.apache.flink.connector.kafka.dynamic.source.enumerator
-
This enumerator manages multiple KafkaSourceEnumerators; it performs no synchronization since it assumes single-threaded execution.
- DynamicKafkaSourceEnumerator(KafkaStreamSubscriber, KafkaMetadataService, SplitEnumeratorContext<DynamicKafkaSourceSplit>, OffsetsInitializer, OffsetsInitializer, Properties, Boundedness, DynamicKafkaSourceEnumState) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
- DynamicKafkaSourceEnumState - Class in org.apache.flink.connector.kafka.dynamic.source.enumerator
-
The enumerator state keeps track of the sub-enumerators' assigned splits and metadata.
- DynamicKafkaSourceEnumState() - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumState
- DynamicKafkaSourceEnumState(Set<KafkaStream>, Map<String, KafkaSourceEnumState>) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumState
- DynamicKafkaSourceEnumStateSerializer - Class in org.apache.flink.connector.kafka.dynamic.source.enumerator
-
(De)serializer for DynamicKafkaSourceEnumState.
- DynamicKafkaSourceEnumStateSerializer() - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumStateSerializer
- DynamicKafkaSourceOptions - Class in org.apache.flink.connector.kafka.dynamic.source
-
The connector options for DynamicKafkaSource that can be passed through the source properties e.g.
- DynamicKafkaSourceReader<T> - Class in org.apache.flink.connector.kafka.dynamic.source.reader
-
Manages state about the underlying KafkaSourceReaders to collect records and commit offsets from multiple Kafka clusters.
- DynamicKafkaSourceReader(SourceReaderContext, KafkaRecordDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- DynamicKafkaSourceSplit - Class in org.apache.flink.connector.kafka.dynamic.source.split
-
Split that wraps KafkaPartitionSplit with Kafka cluster information.
- DynamicKafkaSourceSplit(String, KafkaPartitionSplit) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- DynamicKafkaSourceSplitSerializer - Class in org.apache.flink.connector.kafka.dynamic.source.split
-
(De)serializes the DynamicKafkaSourceSplit.
- DynamicKafkaSourceSplitSerializer() - Constructor for class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplitSerializer
E
- earliest() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the earliest available offsets of each partition.
- EARLIEST - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from the earliest offset possible.
- EARLIEST_OFFSET - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- EARLIEST_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- EARLIEST_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number that defines the partition should start from the earliest offset.
- emitRecord(ConsumerRecord<byte[], byte[]>, SourceOutput<T>, KafkaPartitionSplitState) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter
- EncodingFormatWrapper(EncodingFormat<SerializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- endOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
List end offsets for the specified partitions.
- endOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- equals(Object) - Method in class org.apache.flink.connector.kafka.dynamic.metadata.ClusterMetadata
- equals(Object) - Method in class org.apache.flink.connector.kafka.dynamic.metadata.KafkaStream
- equals(Object) - Method in class org.apache.flink.connector.kafka.dynamic.source.MetadataUpdateEvent
- equals(Object) - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- equals(Object) - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- equals(Object) - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetIdentifier
- equals(Object) - Method in class org.apache.flink.connector.kafka.lineage.DefaultTypeDatasetFacet
- equals(Object) - Method in class org.apache.flink.connector.kafka.sink.internal.CheckpointTransaction
- equals(Object) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- equals(Object) - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- equals(Object) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- erroneously(String) - Static method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- EXCEPT_KEY - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
- EXPLICIT_BY_WRITER_STATE - org.apache.flink.connector.kafka.sink.internal.TransactionOwnership
-
The ownership is determined by the writer state that is recovered.
- extractPrefix(String) - Static method in class org.apache.flink.connector.kafka.sink.internal.TransactionalIdFactory
- extractSubtaskId(String) - Static method in class org.apache.flink.connector.kafka.sink.internal.TransactionalIdFactory
F
- factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- fetch() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.KafkaPartitionSplitReaderWrapper
- fetch() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- FlinkFixedPartitioner<T> - Class in org.apache.flink.streaming.connectors.kafka.partitioner
-
A partitioner ensuring that each internal Flink partition ends up in one Kafka partition.
- FlinkFixedPartitioner() - Constructor for class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
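The fixed mapping described for FlinkFixedPartitioner (each Flink subtask always lands in one Kafka partition) can be sketched as a simple modulo rule. This is an illustrative stand-in with hypothetical names, not the class's actual implementation:

```java
// Hypothetical sketch of a fixed subtask-to-partition assignment:
// each subtask always writes to the same Kafka partition, chosen by
// subtask index modulo the number of available partitions.
public class FixedPartitionSketch {
    public static int pickPartition(int parallelInstanceId, int[] partitions) {
        // Deterministic: the same subtask id always maps to the same partition.
        return partitions[parallelInstanceId % partitions.length];
    }
}
```

A consequence of this rule is that when there are more subtasks than partitions, several subtasks share a partition, and when there are more partitions than subtasks, some partitions receive no data.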
- FlinkKafkaInternalProducer<K,V> - Class in org.apache.flink.connector.kafka.sink.internal
-
A KafkaProducer that exposes private fields to allow resuming producing from a given state.
- FlinkKafkaInternalProducer(Properties) - Constructor for class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- FlinkKafkaInternalProducer(Properties, String) - Constructor for class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- flush() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- flushMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Sink buffer flush config, which is currently only supported in upsert mode.
- forwardOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- forwardOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
G
- gauge(String, G) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getAllStreams() - Method in interface org.apache.flink.connector.kafka.dynamic.metadata.KafkaMetadataService
-
Get current metadata for all streams.
- getAllStreams() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.SingleClusterTopicMetadataService
-
Get current metadata for all streams.
- getAllVariables() - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getAutoOffsetResetStrategy() - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
- getAutoOffsetResetStrategy() - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get the auto offset reset strategy in case the initialized offsets fall out of range.
- getAvailabilityHelperSize() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- getBatchIntervalMs() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- getBatchSize() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- getBoundedness() - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Get the Boundedness.
- getBoundedness() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- getChangelogMode(ChangelogMode) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- getCheckpointId() - Method in class org.apache.flink.connector.kafka.sink.internal.CheckpointTransaction
- getClusterEnumeratorStates() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumState
- getClusterMetadataMap() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.KafkaStream
-
Get the metadata to connect to the various cluster(s).
- getCommittableSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- getCurrentOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
- getCurrentParallelism() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getCurrentParallelism() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
- getCurrentSubtaskId() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getCurrentSubtaskId() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
- getDatasetIdentifier() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetIdentifierProvider
-
Gets the Kafka dataset identifier, or empty if the implementing class is not able to extract one.
- getDefaultFactory() - Static method in interface org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy.StoppableKafkaEnumContextProxyFactory
- getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- getEnumeratorCheckpointSerializer() - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
- getEnumeratorCheckpointSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getEpoch() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- getHeaders(IN) - Method in interface org.apache.flink.connector.kafka.sink.HeaderProvider
- getInstance() - Static method in class org.apache.flink.connector.kafka.sink.internal.BackchannelFactory
-
Gets the singleton instance of the BackchannelFactory.
- getIOMetricGroup() - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getKafkaClusterId() - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- getKafkaDatasetFacet() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetFacetProvider
-
Returns a Kafka dataset facet, or empty if the implementing class is not able to identify a dataset.
- getKafkaMetric(Map<MetricName, ? extends Metric>, String, String) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Tries to find the Kafka Metric in the provided metrics.
- getKafkaMetric(Map<MetricName, ? extends Metric>, Predicate<Map.Entry<MetricName, ? extends Metric>>) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Tries to find the Kafka Metric in the provided metrics matching a given filter.
- getKafkaPartitionSplit() - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- getKafkaProducerConfig() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- getKafkaStreams() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumState
- getKafkaStreams() - Method in class org.apache.flink.connector.kafka.dynamic.source.MetadataUpdateEvent
- getKafkaStreamSubscriber() - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
- getLastCheckpointId() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- getLastCheckpointId() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl.Context
- getLineageVertex() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- getLineageVertex() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getMessage() - Method in exception org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy.HandledFlinkKafkaException
- GetMetadataUpdateEvent - Class in org.apache.flink.connector.kafka.dynamic.source
-
Event to signal to enumerator that a reader needs to know the current metadata.
- GetMetadataUpdateEvent() - Constructor for class org.apache.flink.connector.kafka.dynamic.source.GetMetadataUpdateEvent
- getMetricIdentifier(String) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getMetricIdentifier(String, CharacterFilter) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getNextCheckpointId() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- getNextCheckpointId() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl.Context
- getNumberOfParallelInstances() - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- getNumberOfParallelInstances() - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
- getNumRecordsInErrorsCounter() - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getOngoingTransactions() - Method in interface org.apache.flink.connector.kafka.sink.internal.ProducerPool
-
Returns a snapshot of all ongoing transactions.
- getOngoingTransactions() - Method in class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
- getOngoingTransactions() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- getOngoingTransactions() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl.Context
- getOpenTransactionsForTopics() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getOpenTransactionsForTopics() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
-
Returns the list of all open transactions for the topics retrieved through introspection.
- getOpenTransactionsForTopics(Admin, Collection<String>) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getOption(Properties, ConfigOption<?>, Function<String, T>) - Static method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceOptions
- getOption(Properties, ConfigOption<?>, Function<String, T>) - Static method in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- getOwnedSubtaskId() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- getOwnedSubtaskIds(int, int, Collection<KafkaWriterState>) - Method in enum org.apache.flink.connector.kafka.sink.internal.TransactionOwnership
-
Returns the owned subtask ids for this subtask.
- getOwnership() - Method in enum org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
- getParallelInstanceId() - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- getParallelInstanceId() - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
Get the ID of the subtask the KafkaSink is running on.
- getPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
- getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get the initial offsets for the given Kafka partitions.
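The OffsetsInitializer entry above determines where a KafkaSource starts (and optionally stops) consuming. A brief sketch of the built-in factory methods; the timestamp value is a placeholder:

```java
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class OffsetsInitializerSketch {
    public static void main(String[] args) {
        // Common starting-offset strategies, passed to KafkaSourceBuilder#setStartingOffsets.
        OffsetsInitializer fromEarliest = OffsetsInitializer.earliest();
        OffsetsInitializer fromLatest = OffsetsInitializer.latest();
        // Start from the first record at or after the given epoch-millis timestamp (placeholder value).
        OffsetsInitializer fromTimestamp = OffsetsInitializer.timestamp(1_650_000_000_000L);
        // Start from the consumer group's committed offsets, falling back to EARLIEST if none exist.
        OffsetsInitializer fromGroup =
                OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST);
    }
}
```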
- getPartitionSetSubscriber(Set<TopicPartition>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
- getPartitionsForTopic(String) - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- getPartitionsForTopic(String) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
For a given topic id, retrieves the available partitions.
- getPrecommittedTransactionalIds() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getPrecommittedTransactionalIds() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
-
Returns a list of transactional ids that shouldn't be aborted because they are part of the committer state.
- getPrecommittedTransactionalIds() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- getPrefixesToAbort() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getPrefixesToAbort() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
- getProducedType() - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Get the TypeInformation of the source.
- getProducedType() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getProducedType() - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- getProducer(String) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- getProducer(String) - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl.Context
- getProducerId() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- getProducerIds(Admin, Collection<String>) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getProducers() - Method in class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
- getProducerStates(Admin, Collection<String>) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getProperties() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.ClusterMetadata
-
Get the properties.
- getProperties() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- getProperties() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetFacet
- getReadableBackchannel(int, int, String) - Method in class org.apache.flink.connector.kafka.sink.internal.BackchannelFactory
-
Gets a ReadableBackchannel for the given subtask, attempt, and transactional id prefix.
- getScanRuntimeProvider(ScanTableSource.ScanContext) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- getScopeComponents() - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- getSinkRuntimeProvider(DynamicTableSink.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- getSplitSerializer() - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Get the DynamicKafkaSourceSplitSerializer.
- getSplitSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getStartCheckpointId() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getStartCheckpointId() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
- getStartingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getStateSentinel() - Method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
- getStatusCode() - Method in enum org.apache.flink.connector.kafka.source.enumerator.AssignmentStatus
- getStoppingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getStreamId() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.KafkaStream
-
Get the stream id.
- getSubscribedStreams(KafkaMetadataService) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber.KafkaStreamSetSubscriber
- getSubscribedStreams(KafkaMetadataService) - Method in interface org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber.KafkaStreamSubscriber
-
Get the subscribed KafkaStreams.
- getSubscribedStreams(KafkaMetadataService) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber.StreamPatternSubscriber
- getSubscribedTopicPartitions(AdminClient) - Method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
-
Get a set of subscribed TopicPartitions.
- getTopic() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getTopicIdentifier() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- getTopicIdentifier() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetFacet
- getTopicListSubscriber(List<String>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
- getTopicMetadata(Admin, Collection<String>) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getTopicMetadata(Admin, Pattern) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getTopicPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getTopicPartitions(Admin, Collection<String>) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getTopicPattern() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetIdentifier
- getTopicPattern() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetIdentifier
- getTopicPatternSubscriber(Pattern) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
- getTopics() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.ClusterMetadata
-
Get the topics.
- getTopics() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetIdentifier
- getTopics() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetIdentifier
- getTopicsByPattern(Admin, Pattern) - Static method in class org.apache.flink.connector.kafka.util.AdminUtils
- getTotalNumberOfOwnedSubtasks() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- getTotalNumberOfOwnedSubtasks(int, int, Collection<KafkaWriterState>) - Method in enum org.apache.flink.connector.kafka.sink.internal.TransactionOwnership
-
Returns the total number of owned subtasks across all subtasks.
- getTransactionAborter() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- getTransactionAborter() - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
- getTransactionalId() - Method in class org.apache.flink.connector.kafka.sink.internal.CheckpointTransaction
- getTransactionalId() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- getTransactionalIdPrefix() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- getTransactionalProducer(String, long) - Method in interface org.apache.flink.connector.kafka.sink.internal.ProducerPool
-
Get a producer for the given transactional id and checkpoint id.
- getTransactionalProducer(String, long) - Method in class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
- getTransactionalProducer(TransactionNamingStrategyImpl.Context) - Method in enum org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
-
Returns a FlinkKafkaInternalProducer that will not clash with any ongoing transactions.
- getTransactionId() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- getTransactionOwnership() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- getTypeDatasetFacet() - Method in interface org.apache.flink.connector.kafka.lineage.TypeDatasetFacetProvider
-
Returns a type dataset facet or `Optional.empty` in case the implementing class is not able to resolve the type.
- getTypeInformation() - Method in class org.apache.flink.connector.kafka.lineage.DefaultTypeDatasetFacet
- getTypeInformation() - Method in interface org.apache.flink.connector.kafka.lineage.TypeDatasetFacet
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
- getVersion() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumStateSerializer
- getVersion() - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplitSerializer
- getVersion() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- getVersion() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- getWritableBackchannel(int, int, String) - Method in class org.apache.flink.connector.kafka.sink.internal.BackchannelFactory
-
Gets a WritableBackchannel for the given subtask, attempt, and transactional id prefix.
- getWriterStateSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- GROUP_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number indicating that the partition should start from its committed group offset in Kafka.
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from committed offsets in ZK / Kafka brokers of a specific consumer group.
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from committed offsets in ZK / Kafka brokers of a specific consumer group (default).
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
H
- HandledFlinkKafkaException(Throwable, String) - Constructor for exception org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy.HandledFlinkKafkaException
- handleSourceEvent(int, SourceEvent) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
- handleSourceEvents(SourceEvent) - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
-
Duplicate source events are handled with idempotency.
- handleSplitRequest(int, String) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
-
Multi-cluster Kafka source readers will not request splits.
- handleSplitRequest(int, String) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- handleSplitsChanges(SplitsChange<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- hashCode() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.ClusterMetadata
- hashCode() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.KafkaStream
- hashCode() - Method in class org.apache.flink.connector.kafka.dynamic.source.MetadataUpdateEvent
- hashCode() - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- hashCode() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- hashCode() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetIdentifier
- hashCode() - Method in class org.apache.flink.connector.kafka.lineage.DefaultTypeDatasetFacet
- hashCode() - Method in class org.apache.flink.connector.kafka.sink.internal.CheckpointTransaction
- hashCode() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- hashCode() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- hashCode() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- hasRecordsInTransaction() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- HeaderProvider<IN> - Interface in org.apache.flink.connector.kafka.sink
-
Creates an Iterable of Headers from the input element.
- histogram(String, H) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
I
- IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- IMPLICIT_BY_SUBTASK_ID - org.apache.flink.connector.kafka.sink.internal.TransactionOwnership
-
The ownership is determined by the current subtask ID.
- INCREMENTING - org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
- INCREMENTING - org.apache.flink.connector.kafka.sink.TransactionNamingStrategy
-
The offset of the transaction name is a monotonically increasing number that mostly corresponds to the checkpoint id.
- INITIAL_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- initialDiscoveryFinished() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- initializedState(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- isActivelyConsumingSplits() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- isAvailable() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- isClosed() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- isClusterActive(String) - Method in interface org.apache.flink.connector.kafka.dynamic.metadata.KafkaMetadataService
-
Check if the cluster is active.
- isClusterActive(String) - Method in class org.apache.flink.connector.kafka.dynamic.metadata.SingleClusterTopicMetadataService
-
Check if the cluster is active.
- isEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- isEstablished() - Method in interface org.apache.flink.connector.kafka.sink.internal.Backchannel
-
Check if the backchannel is fully established.
- isInTransaction() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- isNoMoreSplits() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
- isPrecommitted() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- isSentinel(long) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
- isSuccess() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
J
- JacksonMapperFactory - Class in org.apache.flink.connector.kafka.util
-
Factory for Jackson mappers.
- JSONKeyValueDeserializationSchema - Class in org.apache.flink.streaming.util.serialization
-
DeserializationSchema that deserializes a JSON String into an ObjectNode.
- JSONKeyValueDeserializationSchema(boolean) - Constructor for class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
K
- KAFKA_CLUSTER_GROUP_NAME - Static variable in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- KAFKA_CONSUMER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- KAFKA_FACET_NAME - Static variable in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- KAFKA_SOURCE_READER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- KafkaClusterMetricGroup - Class in org.apache.flink.connector.kafka.dynamic.source.metrics
-
A custom proxy metric group in order to group KafkaSourceReaderMetrics by Kafka cluster.
- KafkaClusterMetricGroup(MetricGroup, SourceReaderMetricGroup, String) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- KafkaClusterMetricGroupManager - Class in org.apache.flink.connector.kafka.dynamic.source.metrics
-
Manages metric groups for each cluster.
- KafkaClusterMetricGroupManager() - Constructor for class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroupManager
- KafkaConnectorOptions - Class in org.apache.flink.streaming.connectors.kafka.table
-
Options for the Kafka connector.
- KafkaConnectorOptions.ScanBoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Bounded mode for the Kafka consumer, see KafkaConnectorOptions.SCAN_BOUNDED_MODE.
- KafkaConnectorOptions.ScanStartupMode - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Startup mode for the Kafka consumer, see KafkaConnectorOptions.SCAN_STARTUP_MODE.
- KafkaConnectorOptions.ValueFieldsStrategy - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Strategies to derive the data type of a value format by considering a key format.
- KafkaDatasetFacet - Interface in org.apache.flink.connector.kafka.lineage
-
Facet definition to contain all Kafka specific information on Kafka sources and sinks.
- KafkaDatasetFacetProvider - Interface in org.apache.flink.connector.kafka.lineage
-
Contains a method to extract a KafkaDatasetFacet.
- KafkaDatasetIdentifier - Interface in org.apache.flink.connector.kafka.lineage
-
Kafka dataset identifier which can contain either a list of topics or a topic pattern.
- KafkaDatasetIdentifierProvider - Interface in org.apache.flink.connector.kafka.lineage
-
Contains a method which allows extracting the topic identifier.
- KafkaDynamicSink - Class in org.apache.flink.streaming.connectors.kafka.table
-
A version-agnostic Kafka DynamicTableSink.
- KafkaDynamicSink(DataType, DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, KafkaPartitioner<RowData>, DeliveryGuarantee, boolean, SinkBufferFlushMode, Integer, String, TransactionNamingStrategy) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- KafkaDynamicSource - Class in org.apache.flink.streaming.connectors.kafka.table
-
A version-agnostic Kafka ScanTableSource.
- KafkaDynamicSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<TopicPartition, Long>, long, BoundedMode, Map<TopicPartition, Long>, long, boolean, String, Integer) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- KafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
-
Factory for creating configured instances of KafkaDynamicSource and KafkaDynamicSink.
- KafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- KafkaMetadataService - Interface in org.apache.flink.connector.kafka.dynamic.metadata
-
Metadata service that returns Kafka details.
- KafkaMetricMutableWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
Gauge for getting the current value of a Kafka metric.
- KafkaMetricMutableWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
- KafkaPartitioner<T> - Interface in org.apache.flink.connector.kafka.sink
-
A KafkaPartitioner wraps logic on how to partition records across partitions of multiple Kafka topics.
- KafkaPartitionSplit - Class in org.apache.flink.connector.kafka.source.split
-
A SourceSplit for a Kafka partition.
- KafkaPartitionSplit(TopicPartition, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- KafkaPartitionSplit(TopicPartition, long, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- KafkaPartitionSplitReader - Class in org.apache.flink.connector.kafka.source.reader
-
A SplitReader implementation that reads records from Kafka partitions.
- KafkaPartitionSplitReader(Properties, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- KafkaPartitionSplitReader(Properties, SourceReaderContext, KafkaSourceReaderMetrics, String) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- KafkaPartitionSplitReaderWrapper - Class in org.apache.flink.connector.kafka.dynamic.source.reader
-
This extends the Kafka partition split reader to wrap split ids with the cluster name.
- KafkaPartitionSplitReaderWrapper(Properties, SourceReaderContext, KafkaSourceReaderMetrics, String) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.reader.KafkaPartitionSplitReaderWrapper
- KafkaPartitionSplitSerializer - Class in org.apache.flink.connector.kafka.source.split
-
The serializer for KafkaPartitionSplit.
- KafkaPartitionSplitSerializer() - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- KafkaPartitionSplitState - Class in org.apache.flink.connector.kafka.source.split
-
This class extends KafkaPartitionSplit to track a mutable current offset.
- KafkaPartitionSplitState(KafkaPartitionSplit) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
- KafkaPropertiesUtil - Class in org.apache.flink.connector.kafka.source
-
Utility class for modifying Kafka properties.
- KafkaRecordDeserializationSchema<T> - Interface in org.apache.flink.connector.kafka.source.reader.deserializer
-
An interface for the deserialization of Kafka records.
- KafkaRecordEmitter<T> - Class in org.apache.flink.connector.kafka.source.reader
-
The RecordEmitter implementation for KafkaSourceReader.
- KafkaRecordEmitter(KafkaRecordDeserializationSchema<T>) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter
- KafkaRecordSerializationSchema<T> - Interface in org.apache.flink.connector.kafka.sink
-
A serialization schema which defines how to convert a value of type T to ProducerRecord.
- KafkaRecordSerializationSchema.KafkaSinkContext - Interface in org.apache.flink.connector.kafka.sink
-
Context providing information about the Kafka record's target location.
- KafkaRecordSerializationSchemaBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
-
Builder to construct KafkaRecordSerializationSchema.
- KafkaRecordSerializationSchemaBuilder() - Constructor for class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
- KafkaSink<IN> - Class in org.apache.flink.connector.kafka.sink
-
Flink Sink to produce data into a Kafka topic.
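The KafkaSink entry above is constructed through its builder. A minimal sketch, assuming a broker at localhost:9092 and a topic named output-topic (both placeholders):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class KafkaSinkSketch {
    public static void main(String[] args) {
        // Build a KafkaSink via KafkaSinkBuilder; broker address and topic
        // name are placeholders.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("output-topic")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();
        // The sink is then attached to a stream with stream.sinkTo(sink).
    }
}
```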
- KafkaSinkBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
-
Builder to construct KafkaSink.
- KafkaSource<OUT> - Class in org.apache.flink.connector.kafka.source
-
The Source implementation of Kafka.
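The KafkaSource entry above is likewise assembled via its builder. A minimal construction sketch, assuming a broker at localhost:9092 and a topic named input-topic (both placeholders):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Build a KafkaSource via KafkaSourceBuilder; broker address, topic,
        // and group id are placeholders.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("input-topic")
                .setGroupId("example-group")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
                .print();
        env.execute("KafkaSource sketch");
    }
}
```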
- KafkaSourceBuilder<OUT> - Class in org.apache.flink.connector.kafka.source
-
The builder class for KafkaSource to make it easier for users to construct a KafkaSource.
- KafkaSourceEnumerator - Class in org.apache.flink.connector.kafka.source.enumerator
-
The enumerator class for Kafka source.
- KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness, KafkaSourceEnumState) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl - Class in org.apache.flink.connector.kafka.source.enumerator
-
The implementation for offsets retriever with a consumer and an admin client.
- KafkaSourceEnumState - Class in org.apache.flink.connector.kafka.source.enumerator
-
The state of Kafka source enumerator.
- KafkaSourceEnumState(Set<TopicPartitionAndAssignmentStatus>, boolean) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- KafkaSourceEnumState(Set<TopicPartition>, Set<TopicPartition>, boolean) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- KafkaSourceEnumStateSerializer - Class in org.apache.flink.connector.kafka.source.enumerator
-
The Serializer for the enumerator state of Kafka source.
- KafkaSourceEnumStateSerializer() - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- KafkaSourceFetcherManager - Class in org.apache.flink.connector.kafka.source.reader.fetcher
-
The SplitFetcherManager for Kafka source.
- KafkaSourceFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, Supplier<SplitReader<ConsumerRecord<byte[], byte[]>, KafkaPartitionSplit>>, Consumer<Collection<String>>) - Constructor for class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
-
Creates a new SplitFetcherManager with a single I/O thread.
- KafkaSourceOptions - Class in org.apache.flink.connector.kafka.source
-
Configurations for KafkaSource.
- KafkaSourceOptions() - Constructor for class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- KafkaSourceReader<T> - Class in org.apache.flink.connector.kafka.source.reader
-
The source reader for Kafka partitions.
- KafkaSourceReader(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, KafkaSourceFetcherManager, RecordEmitter<ConsumerRecord<byte[], byte[]>, T, KafkaPartitionSplitState>, Configuration, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- KafkaSourceReaderMetrics - Class in org.apache.flink.connector.kafka.source.metrics
-
A collection class for handling metrics in KafkaSourceReader.
- KafkaSourceReaderMetrics(SourceReaderMetricGroup) - Constructor for class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- KafkaStream - Class in org.apache.flink.connector.kafka.dynamic.metadata
-
A Kafka stream represents multiple topics across multiple Kafka clusters; this class encapsulates all the information needed to initiate Kafka consumers to read the stream.
- KafkaStream(String, Map<String, ClusterMetadata>) - Constructor for class org.apache.flink.connector.kafka.dynamic.metadata.KafkaStream
-
Construct a KafkaStream by passing Kafka information in order to connect to the stream.
- KafkaStreamSetSubscriber - Class in org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber
-
Subscribes to streams based on a set of stream ids.
- KafkaStreamSetSubscriber(Set<String>) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber.KafkaStreamSetSubscriber
- KafkaStreamSubscriber - Interface in org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber
-
The subscriber interacts with KafkaMetadataService to find which Kafka streams the source will subscribe to.
- KafkaSubscriber - Interface in org.apache.flink.connector.kafka.source.enumerator.subscriber
-
The Kafka consumer allows a few different ways to consume from topics, including subscribing to a collection of topics.
- KafkaTopicPartitionStateSentinel - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Magic values used to represent special offset states before partitions are actually read.
- KafkaTopicPartitionStateSentinel() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
- KafkaWriterState - Class in org.apache.flink.connector.kafka.sink
-
The state of the Kafka writer.
- KafkaWriterState(String, int, int, TransactionOwnership, Collection<CheckpointTransaction>) - Constructor for class org.apache.flink.connector.kafka.sink.KafkaWriterState
- KEY_FIELDS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- KEY_FIELDS_PREFIX - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- KEY_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- keyDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Optional format for decoding keys from Kafka.
- keyEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Optional format for encoding keys to Kafka.
- keyPrefix - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Prefix that needs to be removed from fields when constructing the physical data type.
- keyPrefix - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Prefix that needs to be removed from fields when constructing the physical data type.
- keyProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Indices that determine the key fields and the source position in the consumed row.
- keyProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Indices that determine the key fields and the target position in the produced row.
L
- latest() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the latest offsets of each partition.
- LATEST - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from the latest offset.
- LATEST - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from the latest offset.
- LATEST_OFFSET - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- LATEST_OFFSET - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- LATEST_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
Deprecated.
- LATEST_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number that defines the partition should start from the latest offset.
- LineageUtil - Class in org.apache.flink.connector.kafka.lineage
-
Utility class with useful methods for managing lineage objects.
- LineageUtil() - Constructor for class org.apache.flink.connector.kafka.lineage.LineageUtil
- lineageVertexOf(Collection<LineageDataset>) - Static method in class org.apache.flink.connector.kafka.lineage.LineageUtil
- LISTING - org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl
- listReadableMetadata() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- listWritableMetadata() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
M
- maybeAddRecordsLagMetric(KafkaConsumer<?, ?>, TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Add a partition's records-lag metric to the tracking list if this partition has never appeared before.
- metadataKeys - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Metadata that is appended at the end of a physical sink row.
- metadataKeys - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Metadata that is appended at the end of a physical source row.
- MetadataUpdateEvent - Class in org.apache.flink.connector.kafka.dynamic.source
-
Signals the DynamicKafkaSourceReader to stop its underlying readers.
- MetadataUpdateEvent(Set<KafkaStream>) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.MetadataUpdateEvent
- meter(String, M) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- metricGroup() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
- MetricUtil - Class in org.apache.flink.connector.kafka
-
Collection of methods to interact with Kafka's client metric system.
- MetricUtil() - Constructor for class org.apache.flink.connector.kafka.MetricUtil
N
- name() - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- name() - Method in class org.apache.flink.connector.kafka.lineage.DefaultTypeDatasetFacet
- namespaceOf(Properties) - Static method in class org.apache.flink.connector.kafka.lineage.LineageUtil
- NO_STOPPING_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- NoStoppingOffsetsInitializer - Class in org.apache.flink.connector.kafka.source.enumerator.initializer
-
An implementation of OffsetsInitializer which does not initialize anything.
- NoStoppingOffsetsInitializer() - Constructor for class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
- notifyCheckpointComplete(long) - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- notifyCheckpointComplete(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- notifyCheckpointComplete(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- notifyNoMoreSplits() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
O
- OFFSET_NOT_SET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number that defines an unset offset.
- offsets(Map<TopicPartition, Long>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the specified offsets.
- offsets(Map<TopicPartition, Long>, OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the specified offsets.
- offsetsForTimes(Map<TopicPartition, Long>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
List offsets matching a timestamp for the specified partitions.
- offsetsForTimes(Map<TopicPartition, Long>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- OffsetsInitializer - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
-
An interface for users to specify the starting / stopping offset of a KafkaPartitionSplit.
- OffsetsInitializer.PartitionOffsetsRetriever - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
-
An interface that provides necessary information to the OffsetsInitializer to get the initial offsets of the Kafka partitions.
- OffsetsInitializerValidator - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
-
Interface for validating OffsetsInitializer with properties from KafkaSource.
- ofPattern(Pattern) - Static method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetIdentifier
- ofStatusCode(int) - Static method in enum org.apache.flink.connector.kafka.source.enumerator.AssignmentStatus
- ofTopics(List<String>) - Static method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetIdentifier
- onSplitFinished(Map<String, KafkaPartitionSplitState>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- open(int, int) - Method in interface org.apache.flink.connector.kafka.sink.KafkaPartitioner
-
Initializer for the partitioner.
- open(int, int) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- open(DeserializationSchema.InitializationContext) - Method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Initialization method for the schema.
- open(DeserializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- open(SerializationSchema.InitializationContext, KafkaRecordSerializationSchema.KafkaSinkContext) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Initialization method for the schema.
- optionalOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- optionalOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- org.apache.flink.connector.kafka - package org.apache.flink.connector.kafka
- org.apache.flink.connector.kafka.dynamic.metadata - package org.apache.flink.connector.kafka.dynamic.metadata
- org.apache.flink.connector.kafka.dynamic.source - package org.apache.flink.connector.kafka.dynamic.source
- org.apache.flink.connector.kafka.dynamic.source.enumerator - package org.apache.flink.connector.kafka.dynamic.source.enumerator
- org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber - package org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber
- org.apache.flink.connector.kafka.dynamic.source.metrics - package org.apache.flink.connector.kafka.dynamic.source.metrics
- org.apache.flink.connector.kafka.dynamic.source.reader - package org.apache.flink.connector.kafka.dynamic.source.reader
- org.apache.flink.connector.kafka.dynamic.source.split - package org.apache.flink.connector.kafka.dynamic.source.split
- org.apache.flink.connector.kafka.lineage - package org.apache.flink.connector.kafka.lineage
- org.apache.flink.connector.kafka.sink - package org.apache.flink.connector.kafka.sink
- org.apache.flink.connector.kafka.sink.internal - package org.apache.flink.connector.kafka.sink.internal
- org.apache.flink.connector.kafka.source - package org.apache.flink.connector.kafka.source
- org.apache.flink.connector.kafka.source.enumerator - package org.apache.flink.connector.kafka.source.enumerator
- org.apache.flink.connector.kafka.source.enumerator.initializer - package org.apache.flink.connector.kafka.source.enumerator.initializer
- org.apache.flink.connector.kafka.source.enumerator.subscriber - package org.apache.flink.connector.kafka.source.enumerator.subscriber
- org.apache.flink.connector.kafka.source.metrics - package org.apache.flink.connector.kafka.source.metrics
- org.apache.flink.connector.kafka.source.reader - package org.apache.flink.connector.kafka.source.reader
- org.apache.flink.connector.kafka.source.reader.deserializer - package org.apache.flink.connector.kafka.source.reader.deserializer
- org.apache.flink.connector.kafka.source.reader.fetcher - package org.apache.flink.connector.kafka.source.reader.fetcher
- org.apache.flink.connector.kafka.source.split - package org.apache.flink.connector.kafka.source.split
- org.apache.flink.connector.kafka.util - package org.apache.flink.connector.kafka.util
- org.apache.flink.streaming.connectors.kafka.config - package org.apache.flink.streaming.connectors.kafka.config
- org.apache.flink.streaming.connectors.kafka.internals - package org.apache.flink.streaming.connectors.kafka.internals
- org.apache.flink.streaming.connectors.kafka.internals.metrics - package org.apache.flink.streaming.connectors.kafka.internals.metrics
- org.apache.flink.streaming.connectors.kafka.partitioner - package org.apache.flink.streaming.connectors.kafka.partitioner
- org.apache.flink.streaming.connectors.kafka.table - package org.apache.flink.streaming.connectors.kafka.table
- org.apache.flink.streaming.util.serialization - package org.apache.flink.streaming.util.serialization
- ownsTransactionalId(String) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
- ownsTransactionalId(String) - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl.Context
-
Subtasks must abort transactions that they own and must not abort any transactions that they don't own.
P
- parallelism - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Parallelism of the physical Kafka producer.
- parallelism - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Parallelism of the physical Kafka consumer.
- partition(T, byte[], byte[], String, int[]) - Method in interface org.apache.flink.connector.kafka.sink.KafkaPartitioner
-
Determine the id of the partition that the record should be written to.
- partition(T, byte[], byte[], String, int[]) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- PARTITION_DISCOVERY_INTERVAL_MS - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- PARTITION_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- partitioner - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Partitioner to select Kafka partition for each item.
- PartitionOffsetsRetrieverImpl(AdminClient, String) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- partitions() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- pauseOrResumeSplits(Collection<KafkaPartitionSplit>, Collection<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Data type to configure the formats.
- physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Data type to configure the formats.
- poll() - Method in interface org.apache.flink.connector.kafka.sink.internal.ReadableBackchannel
-
Poll the next message from the backchannel.
- pollNext(ReaderOutput<T>) - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- POOLING - org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
- POOLING - org.apache.flink.connector.kafka.sink.TransactionNamingStrategy
-
This strategy reuses transaction names.
- precommitTransaction() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- PROBING - org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl
-
The probing strategy starts with aborting a set of known transactional ids from the recovered state and then continues guessing if more transactions may have been opened between this run and the last successful checkpoint.
- producedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Data type that describes the final output of the source.
- ProducerPool - Interface in org.apache.flink.connector.kafka.sink.internal
-
A pool of producers that can be recycled.
- ProducerPoolImpl - Class in org.apache.flink.connector.kafka.sink.internal
-
Manages a pool of FlinkKafkaInternalProducer instances for reuse in the KafkaWriter and keeps track of the used transactional ids.
- ProducerPoolImpl(Properties, Consumer<FlinkKafkaInternalProducer<byte[], byte[]>>, Collection<CheckpointTransaction>) - Constructor for class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
-
Creates a new ProducerPoolImpl.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Properties for the Kafka producer.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Properties for the Kafka consumer.
- props - Variable in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
- PROPS_BOOTSTRAP_SERVERS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- PROPS_GROUP_ID - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
R
- ReadableBackchannel<T> - Interface in org.apache.flink.connector.kafka.sink.internal
-
The readable portion of a backchannel for communication from the committer to the writer.
- recordCommittedOffset(TopicPartition, long) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Update the latest committed offset of the given TopicPartition.
- recordCurrentOffset(TopicPartition, long) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Update the current consuming offset of the given TopicPartition.
- recordFailedCommit() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Mark a failed commit.
- RECORDS_LAG - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- recordSucceededCommit() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Mark a successful commit.
- recycle(FlinkKafkaInternalProducer<byte[], byte[]>) - Method in interface org.apache.flink.connector.kafka.sink.internal.ProducerPool
-
Explicitly recycle a producer.
- recycle(FlinkKafkaInternalProducer<byte[], byte[]>) - Method in class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
- recycle(FlinkKafkaInternalProducer<byte[], byte[]>) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- recycle(FlinkKafkaInternalProducer<byte[], byte[]>) - Method in interface org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl.Context
- recycleByTransactionId(String, boolean) - Method in interface org.apache.flink.connector.kafka.sink.internal.ProducerPool
-
Notify the pool that a transaction has finished.
- recycleByTransactionId(String, boolean) - Method in class org.apache.flink.connector.kafka.sink.internal.ProducerPoolImpl
- register(String, KafkaClusterMetricGroup) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroupManager
- REGISTER_KAFKA_CONSUMER_METRICS - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- registeredReaders() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
- registerKafkaConsumerMetrics(KafkaConsumer<?, ?>) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Register metrics of KafkaConsumer in Kafka metric group.
- registerNumBytesIn(KafkaConsumer<?, ?>) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Register MetricNames.IO_NUM_BYTES_IN.
- registerTopicPartition(TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Register metric groups for the given TopicPartition.
- removeRecordsLagMetric(TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Remove a partition's records-lag metric from the tracking list.
- requiredOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- requiredOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- requiresKnownTopics() - Method in enum org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
- restoreEnumerator(SplitEnumeratorContext<DynamicKafkaSourceSplit>, DynamicKafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSource
-
Restore the DynamicKafkaSourceEnumerator.
- restoreEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>, KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- restoreWriter(WriterInitContext, Collection<KafkaWriterState>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- restoreWriter(WriterInitContext, Collection<WriterStateT>) - Method in interface org.apache.flink.connector.kafka.sink.TwoPhaseCommittingStatefulSink
- resumeTransaction(long, short) - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
-
Instead of obtaining the producerId and epoch from the transaction coordinator, re-use previously obtained ones so that we can resume transactions after a restart.
- runInCoordinatorThread(Runnable) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
S
- SCAN_BOUNDED_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_BOUNDED_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_BOUNDED_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_PARALLELISM - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_STARTUP_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_STARTUP_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_STARTUP_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_TOPIC_PARTITION_DISCOVERY - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- send(ProducerRecord<K, V>, Callback) - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- send(T) - Method in interface org.apache.flink.connector.kafka.sink.internal.WritableBackchannel
-
Send a message to the backchannel.
- sendEventToSourceReader(int, SourceEvent) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
- serialize(DynamicKafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumStateSerializer
- serialize(DynamicKafkaSourceSplit) - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplitSerializer
- serialize(KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- serialize(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- serialize(T, KafkaRecordSerializationSchema.KafkaSinkContext, Long) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Serializes the given element and returns it as a ProducerRecord.
- serializeTopicPartitions(Collection<TopicPartition>) - Static method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the Kafka bootstrap servers.
- setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the bootstrap servers for the KafkaConsumer of the KafkaSource.
- setBounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the source in bounded mode and specify what offsets to end at.
- setBounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
By default, the KafkaSource runs as Boundedness.CONTINUOUS_UNBOUNDED and thus never stops until the Flink job fails or is canceled.
- setClientIdPrefix(String) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the client id prefix.
- setClientIdPrefix(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the client id prefix of this KafkaSource.
- setClientIdPrefix(Properties, String) - Static method in class org.apache.flink.connector.kafka.source.KafkaPropertiesUtil
-
client.id is used for Kafka server side logging, see https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#consumerconfigs_client.id
- setCurrentOffset(long) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
- setDeliveryGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the desired DeliveryGuarantee.
- setDeserializer(KafkaRecordDeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the deserializer of the ConsumerRecord for the KafkaSource.
- setDeserializer(KafkaRecordDeserializationSchema<T>) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the KafkaRecordDeserializationSchema.
- setGroupId(String) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the property for CommonClientConfigs.GROUP_ID_CONFIG.
- setGroupId(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the consumer group id of the KafkaSource.
- setHeaderProvider(HeaderProvider<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a HeaderProvider which is used to add headers to the ProducerRecord for the current element.
- setKafkaKeySerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets Kafka's Serializer to serialize incoming elements to the key of the ProducerRecord.
- setKafkaKeySerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming elements to the key of the ProducerRecord.
- setKafkaMetadataService(KafkaMetadataService) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the KafkaMetadataService.
- setKafkaMetric(Metric) - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
- setKafkaProducerConfig(Properties) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the configuration used to instantiate all KafkaProducer instances.
- setKafkaStreamSubscriber(KafkaStreamSubscriber) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set a custom Kafka stream subscriber.
- setKafkaSubscriber(KafkaSubscriber) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a custom Kafka subscriber to use to discover new splits.
- setKafkaValueSerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets Kafka's Serializer to serialize incoming elements to the value of the ProducerRecord.
- setKafkaValueSerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming elements to the value of the ProducerRecord.
- setKeySerializationSchema(SerializationSchema<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a SerializationSchema which is used to serialize the incoming element to the key of the ProducerRecord.
- setLastCheckpointId(long) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- setNextCheckpointId(long) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- setOngoingTransactions(Collection<String>) - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
- setPartitioner(KafkaPartitioner<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a custom partitioner determining the target partition of the target topic.
- setPartitions(Set<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a set of partitions to consume from.
- setPendingBytesGauge(Gauge<Long>) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- setPendingRecordsGauge(Gauge<Long>) - Method in class org.apache.flink.connector.kafka.dynamic.source.metrics.KafkaClusterMetricGroup
- setProperties(Properties) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the properties of the consumers.
- setProperties(Properties) - Method in class org.apache.flink.connector.kafka.lineage.DefaultKafkaDatasetFacet
- setProperties(Properties) - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetFacet
- setProperties(Properties) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set arbitrary properties for the KafkaSource and KafkaConsumer.
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set a property for the consumers.
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set an arbitrary property for the KafkaSource and KafkaConsumer.
- setRackIdSupplier(SerializableSupplier<String>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set the clientRackId supplier to be passed down to the KafkaPartitionSplitReader.
- setRecordSerializer(KafkaRecordSerializationSchema<IN>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the KafkaRecordSerializationSchema that transforms incoming records to ProducerRecords.
- setStartingOffsets(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the starting offsets of the stream.
- setStartingOffsets(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Specify the offsets from which the KafkaSource should start consuming by providing an OffsetsInitializer.
- setStreamIds(Set<String>) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the stream ids belonging to the KafkaMetadataService.
- setStreamPattern(Pattern) - Method in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceBuilder
-
Set the stream pattern to determine stream ids belonging to the KafkaMetadataService.
- setTopic(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a fixed topic which is used as the destination for all records.
- setTopicPattern(Pattern) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a topic pattern to consume from, using a Java Pattern.
- setTopics(String...) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a list of topics the KafkaSource should consume from.
- setTopics(List<String>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a list of topics the KafkaSource should consume from.
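The builder entries above accept either an explicit topic list or a java.util.regex.Pattern. A minimal sketch (the topic names are illustrative, not from the source) of how a pattern subscription narrows a discovered topic set:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternSketch {
    // Returns the subset of discovered topics whose full name matches the pattern,
    // mirroring how a pattern-based subscription filters discovered topics.
    public static List<String> matchingTopics(List<String> discovered, Pattern pattern) {
        return discovered.stream()
                .filter(t -> pattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> discovered = List.of("orders-v1", "orders-v2", "audit-log");
        // Only the "orders-*" topics survive the pattern filter.
        System.out.println(matchingTopics(discovered, Pattern.compile("orders-.*")));
    }
}
```

Note that `Pattern.matcher(...).matches()` requires the whole topic name to match, so `orders` alone would not match `orders-.*`.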
- setTopicSelector(TopicSelector<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a topic selector which computes the target topic for every incoming record.
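A topic selector is conceptually a function from an incoming record to a topic name. A hedged sketch with an invented record type (the `Event` type and topic naming are assumptions, not part of the connector API):

```java
import java.util.function.Function;

public class TopicSelectorSketch {
    // Invented record type for illustration; the real selector operates on user elements.
    record Event(String region, String payload) {}

    // Routes each record to a per-region topic, analogous to what a TopicSelector computes.
    static final Function<Event, String> byRegion = e -> "events-" + e.region();

    public static void main(String[] args) {
        System.out.println(byRegion.apply(new Event("eu", "x"))); // events-eu
    }
}
```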
- setTransactionalIdPrefix(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the prefix for all created transactionalIds if DeliveryGuarantee.EXACTLY_ONCE is configured.
- setTransactionId(String) - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
-
Sets the transactional id and sets the transaction manager state to uninitialized.
- setTransactionNamingStrategy(TransactionNamingStrategy) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the TransactionNamingStrategy that is used to name the transactions.
- setUnbounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
By default the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus never stops until the Flink job fails or is canceled.
- setValueOnlyDeserializer(DeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the deserializer of the ConsumerRecord for the KafkaSource.
- setValueSerializationSchema(SerializationSchema<T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a SerializationSchema which is used to serialize the incoming element to the value of the ProducerRecord.
- signalNoMoreSplits(int) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
- SingleClusterTopicMetadataService - Class in org.apache.flink.connector.kafka.dynamic.metadata
-
A KafkaMetadataService that delegates metadata fetching to a single AdminClient, which is scoped to a single cluster.
- SingleClusterTopicMetadataService(String, Properties) - Constructor for class org.apache.flink.connector.kafka.dynamic.metadata.SingleClusterTopicMetadataService
-
Create a SingleClusterTopicMetadataService.
- SINK_BUFFER_FLUSH_INTERVAL - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SINK_BUFFER_FLUSH_MAX_ROWS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SINK_CHANGELOG_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- SINK_PARALLELISM - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SINK_PARTITIONER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SinkBufferFlushMode - Class in org.apache.flink.streaming.connectors.kafka.table
-
Sink buffer flush configuration.
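The SinkBufferFlushMode constructor takes a max row count and a flush interval. A sketch, under the assumption that the buffer is flushed when either bound is hit (the method and parameter names here are made up for illustration):

```java
public class FlushDecisionSketch {
    // Sketch of the flush condition implied by SinkBufferFlushMode(int, long):
    // flush when the buffered row count reaches the maximum, or when the
    // configured flush interval has elapsed since the last flush.
    public static boolean shouldFlush(int bufferedRows, int maxRows,
                                      long lastFlushMillis, long nowMillis, long intervalMillis) {
        return bufferedRows >= maxRows || (nowMillis - lastFlushMillis) >= intervalMillis;
    }

    public static void main(String[] args) {
        System.out.println(shouldFlush(100, 100, 0L, 50L, 1000L));  // true: row limit hit
        System.out.println(shouldFlush(10, 100, 0L, 1500L, 1000L)); // true: interval elapsed
        System.out.println(shouldFlush(10, 100, 0L, 50L, 1000L));   // false: neither bound hit
    }
}
```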
- SinkBufferFlushMode(int, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
-
Besides checkpointing, this method is used in the restart sequence to retain the relevant assigned splits so that no reader receives duplicate split assignments.
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- sourceLineageVertexOf(Collection<LineageDataset>) - Static method in class org.apache.flink.connector.kafka.lineage.LineageUtil
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from user-supplied specific offsets for each partition.
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from user-supplied specific offsets for each partition.
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- specificBoundedOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
- specificStartupOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
- splitId() - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- splitId() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- start() - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
-
Discover Kafka clusters and initialize sub enumerators.
- start() - Method in class org.apache.flink.connector.kafka.dynamic.source.reader.DynamicKafkaSourceReader
-
This is invoked first, but only at reader startup without state.
- start() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
Start the enumerator.
- startupMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
- StartupMode - Enum in org.apache.flink.streaming.connectors.kafka.config
-
Startup modes for the Kafka Consumer.
- startupTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
- StoppableKafkaEnumContextProxy - Class in org.apache.flink.connector.kafka.dynamic.source.enumerator
-
A proxy enumerator context that supports life cycle management of underlying threads related to a sub KafkaSourceEnumerator.
- StoppableKafkaEnumContextProxy(String, KafkaMetadataService, SplitEnumeratorContext<DynamicKafkaSourceSplit>, Runnable) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Constructor for the enumerator context.
- StoppableKafkaEnumContextProxy.HandledFlinkKafkaException - Exception in org.apache.flink.connector.kafka.dynamic.source.enumerator
-
General exception to signal to internal exception handling mechanisms that a benign error occurred.
- StoppableKafkaEnumContextProxy.StoppableKafkaEnumContextProxyFactory - Interface in org.apache.flink.connector.kafka.dynamic.source.enumerator
-
This factory exposes a way to override the StoppableKafkaEnumContextProxy used in the enumerator.
- STREAM_METADATA_DISCOVERY_FAILURE_THRESHOLD - Static variable in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceOptions
- STREAM_METADATA_DISCOVERY_INTERVAL_MS - Static variable in class org.apache.flink.connector.kafka.dynamic.source.DynamicKafkaSourceOptions
- StreamPatternSubscriber - Class in org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber
-
A subscriber that selects streams based on a pattern.
- StreamPatternSubscriber(Pattern) - Constructor for class org.apache.flink.connector.kafka.dynamic.source.enumerator.subscriber.StreamPatternSubscriber
- successful(String) - Static method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- supportsMetadataProjection() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- sync(Metric, Counter) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Ensures that the counter has the same value as the given Kafka metric.
T
- tableIdentifier - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- timestamp(long) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets in each partition so that the initialized offset is the offset of the first record whose record timestamp is greater than or equal to the given timestamp (milliseconds).
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from user-supplied timestamp for each partition.
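The timestamp-based OffsetsInitializer above resolves, per partition, the offset of the first record whose timestamp is greater than or equal to the given timestamp. A sketch of that lookup over an in-memory list of (offset, timestamp) pairs; the types here are invented for illustration (in a real cluster, Kafka answers this question itself):

```java
import java.util.List;

public class TimestampOffsetSketch {
    // Illustrative stand-in for a partition's records.
    record OffsetAndTimestamp(long offset, long timestamp) {}

    // Returns the offset of the first record with timestamp >= target, or -1 if none,
    // assuming records are ordered by offset with non-decreasing timestamps.
    public static long offsetForTimestamp(List<OffsetAndTimestamp> records, long target) {
        for (OffsetAndTimestamp r : records) {
            if (r.timestamp() >= target) {
                return r.offset();
            }
        }
        return -1L;
    }

    public static void main(String[] args) {
        List<OffsetAndTimestamp> records = List.of(
                new OffsetAndTimestamp(0, 1000),
                new OffsetAndTimestamp(1, 2000),
                new OffsetAndTimestamp(2, 3000));
        System.out.println(offsetForTimestamp(records, 1500)); // 1
    }
}
```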
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from user-supplied timestamp for each partition.
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- toKafkaPartitionSplit() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
-
Use the current offset as the starting offset to create a new KafkaPartitionSplit.
- toLineageName() - Method in interface org.apache.flink.connector.kafka.lineage.KafkaDatasetIdentifier
-
Returns the lineage dataset's name: the topic pattern if present, otherwise a comma-separated list of topics.
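The naming rule described for toLineageName() can be sketched directly; the method signature below is an illustration of the rule, not the interface's actual signature:

```java
import java.util.List;
import java.util.Optional;
import java.util.regex.Pattern;

public class LineageNameSketch {
    // Mirrors the described rule: use the pattern string if a topic pattern is
    // present, otherwise join the topic list with commas.
    public static String toLineageName(Optional<Pattern> topicPattern, List<String> topics) {
        return topicPattern.map(Pattern::pattern)
                .orElseGet(() -> String.join(",", topics));
    }

    public static void main(String[] args) {
        System.out.println(toLineageName(Optional.empty(), List.of("a", "b"))); // a,b
        System.out.println(toLineageName(Optional.of(Pattern.compile("t-.*")), List.of())); // t-.*
    }
}
```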
- TOPIC - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- TOPIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- TOPIC_PATTERN - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- topicPartition() - Method in class org.apache.flink.connector.kafka.source.enumerator.TopicPartitionAndAssignmentStatus
- TopicPartitionAndAssignmentStatus - Class in org.apache.flink.connector.kafka.source.enumerator
-
Kafka partition with assignment status.
- TopicPartitionAndAssignmentStatus(TopicPartition, AssignmentStatus) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.TopicPartitionAndAssignmentStatus
- topicPattern - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
The pattern of Kafka topics that the sink is allowed to produce to.
- topicPattern - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The Kafka topic pattern to consume.
- topics - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
The Kafka topics allowed for producing.
- topics - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The Kafka topics to consume.
- TopicSelector<IN> - Interface in org.apache.flink.connector.kafka.sink
-
Selects a topic for the incoming record.
- toSplitId(TopicPartition) - Static method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- toSplitType(String, KafkaPartitionSplitState) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- toString() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.ClusterMetadata
- toString() - Method in class org.apache.flink.connector.kafka.dynamic.metadata.KafkaStream
- toString() - Method in class org.apache.flink.connector.kafka.dynamic.source.MetadataUpdateEvent
- toString() - Method in class org.apache.flink.connector.kafka.dynamic.source.split.DynamicKafkaSourceSplit
- toString() - Method in class org.apache.flink.connector.kafka.sink.internal.CheckpointTransaction
- toString() - Method in class org.apache.flink.connector.kafka.sink.internal.FlinkKafkaInternalProducer
- toString() - Method in class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- toString() - Method in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- toString() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- toString() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- toString() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- TRANSACTION_NAMING_STRATEGY - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
-
The strategy to name transactions.
- TransactionAbortStrategyContextImpl - Class in org.apache.flink.connector.kafka.sink.internal
-
Implementation of TransactionAbortStrategyImpl.Context.
- TransactionAbortStrategyContextImpl(Supplier<Collection<String>>, int, int, int[], int, List<String>, long, TransactionAbortStrategyImpl.TransactionAborter, Supplier<Admin>, Set<String>) - Constructor for class org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyContextImpl
-
Creates a new TransactionAbortStrategyContextImpl.
- TransactionAbortStrategyImpl - Enum in org.apache.flink.connector.kafka.sink.internal
-
Implementations of an abort strategy for transactions left over from previous runs.
- TransactionAbortStrategyImpl.Context - Interface in org.apache.flink.connector.kafka.sink.internal
-
Context for the TransactionAbortStrategyImpl.
- TransactionAbortStrategyImpl.TransactionAborter - Interface in org.apache.flink.connector.kafka.sink.internal
-
Injects the actual abort of the transactional ids generated by one of the strategies.
- TRANSACTIONAL_ID_PREFIX - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- TransactionalIdFactory - Class in org.apache.flink.connector.kafka.sink.internal
-
Utility class for constructing transactionalIds for Kafka transactions.
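TransactionalIdFactory builds per-subtask, per-checkpoint transactionalIds. The concrete format is an internal implementation detail; the sketch below only illustrates the typical prefix + subtask + checkpoint scheme such factories use:

```java
public class TransactionalIdSketch {
    // Illustrative only: the real TransactionalIdFactory format may differ.
    // Combining the user-set prefix (see setTransactionalIdPrefix) with the
    // subtask id and checkpoint id yields ids that are unique per transaction.
    public static String buildTransactionalId(String prefix, int subtaskId, long checkpointId) {
        return prefix + "-" + subtaskId + "-" + checkpointId;
    }

    public static void main(String[] args) {
        System.out.println(buildTransactionalId("my-sink", 3, 42)); // my-sink-3-42
    }
}
```

Uniqueness across subtasks and checkpoints is what lets an abort strategy enumerate and abort leftover transactions from a previous run.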
- TransactionalIdFactory() - Constructor for class org.apache.flink.connector.kafka.sink.internal.TransactionalIdFactory
- TransactionFinished - Class in org.apache.flink.connector.kafka.sink.internal
-
Represents the end of a transaction.
- TransactionFinished(String, boolean) - Constructor for class org.apache.flink.connector.kafka.sink.internal.TransactionFinished
- TransactionNamingStrategy - Enum in org.apache.flink.connector.kafka.sink
-
The strategy to name transactions.
- TransactionNamingStrategyContextImpl - Class in org.apache.flink.connector.kafka.sink.internal
-
Implementation of TransactionNamingStrategyImpl.Context.
- TransactionNamingStrategyContextImpl(String, int, long, ProducerPool) - Constructor for class org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyContextImpl
-
Creates a new TransactionNamingStrategyContextImpl.
- TransactionNamingStrategyImpl - Enum in org.apache.flink.connector.kafka.sink.internal
-
Implementation of TransactionNamingStrategy.
- TransactionNamingStrategyImpl.Context - Interface in org.apache.flink.connector.kafka.sink.internal
-
Context for the transaction naming strategy.
- TransactionOwnership - Enum in org.apache.flink.connector.kafka.sink.internal
-
Describes the ownership model of transactional ids and, with that, ownership of the transactions.
- TwoPhaseCommittingStatefulSink<InputT,WriterStateT,CommT> - Interface in org.apache.flink.connector.kafka.sink
-
A combination of SupportsCommitter and SupportsWriterState.
- TwoPhaseCommittingStatefulSink.PrecommittingStatefulSinkWriter<InputT,WriterStateT,CommT> - Interface in org.apache.flink.connector.kafka.sink
-
A combination of StatefulSinkWriter.
- TYPE_FACET_NAME - Static variable in class org.apache.flink.connector.kafka.lineage.DefaultTypeDatasetFacet
- TypeDatasetFacet - Interface in org.apache.flink.connector.kafka.lineage
-
Facet definition to contain type information of source and sink.
- TypeDatasetFacetProvider - Interface in org.apache.flink.connector.kafka.lineage
-
Contains a method to extract a TypeDatasetFacet.
U
- UNASSIGNED_INITIAL - org.apache.flink.connector.kafka.source.enumerator.AssignmentStatus
-
The partitions that have been discovered during initialization but not assigned to readers yet.
- unassignedInitialPartitions() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- UNBOUNDED - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Do not end consuming.
- UNBOUNDED - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- UNKNOWN - Static variable in class org.apache.flink.connector.kafka.sink.KafkaWriterState
- updateNumBytesInCounter() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Update MetricNames.IO_NUM_BYTES_IN.
- UpsertKafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
-
Upsert-Kafka factory.
- UpsertKafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- UpsertKafkaDynamicTableFactory.DecodingFormatWrapper - Class in org.apache.flink.streaming.connectors.kafka.table
-
It is used to wrap the decoding format and expose the desired changelog mode.
- UpsertKafkaDynamicTableFactory.EncodingFormatWrapper - Class in org.apache.flink.streaming.connectors.kafka.table
-
It is used to wrap the encoding format and expose the desired changelog mode.
- upsertMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Flag to determine sink mode.
- upsertMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Flag to determine source mode.
V
- VALID_STARTING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- VALID_STOPPING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- validate(Properties) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializerValidator
-
Validate offsets initializer with properties of Kafka source.
- VALUE_FIELDS_INCLUDE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- VALUE_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- valueDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Format for decoding values from Kafka.
- valueEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Format for encoding values to Kafka.
- valueOf(String) - Static method in enum org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.connector.kafka.sink.internal.TransactionOwnership
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.connector.kafka.sink.TransactionNamingStrategy
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.connector.kafka.source.enumerator.AssignmentStatus
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
-
Returns the enum constant of this type with the specified name.
- valueOnly(Class<? extends Deserializer<V>>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a Kafka Deserializer into a KafkaRecordDeserializationSchema.
- valueOnly(Class<D>, Map<String, String>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a Kafka Deserializer into a KafkaRecordDeserializationSchema.
- valueOnly(DeserializationSchema<V>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a DeserializationSchema as the value deserialization schema of the ConsumerRecords.
- valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Indices that determine the value fields and the source position in the consumed row.
- valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Indices that determine the value fields and the target position in the produced row.
- values() - Static method in enum org.apache.flink.connector.kafka.sink.internal.TransactionAbortStrategyImpl
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.connector.kafka.sink.internal.TransactionNamingStrategyImpl
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.connector.kafka.sink.internal.TransactionOwnership
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.connector.kafka.sink.TransactionNamingStrategy
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.connector.kafka.source.enumerator.AssignmentStatus
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
-
Returns an array containing the constants of this enum type, in the order they are declared.
W
- wakeUp() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- watermarkStrategy - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Watermark strategy that is used to generate per-partition watermark.
- wrapCallAsyncCallable(Callable<T>) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Wraps a callable for asynchronous execution in the worker thread pool, with exception propagation, so that I/O is done off the coordinator thread.
- wrapCallAsyncCallableHandler(BiConsumer<T, Throwable>) - Method in class org.apache.flink.connector.kafka.dynamic.source.enumerator.StoppableKafkaEnumContextProxy
-
Handles an exception propagated by a callable; executed on the coordinator thread.
- WritableBackchannel<T> - Interface in org.apache.flink.connector.kafka.sink.internal
-
The writable portion of a Backchannel for communication from the committer to the writer.