Interface DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder
- All Superinterfaces:
org.apache.camel.builder.EndpointConsumerBuilder, org.apache.camel.EndpointConsumerResolver
- Enclosing interface:
DebeziumDb2EndpointBuilderFactory
public static interface DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder
extends org.apache.camel.builder.EndpointConsumerBuilder
Builder for endpoint for the Debezium DB2 Connector component.
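The builder is normally obtained through the debeziumDb2(...) factory method that the enclosing DebeziumDb2EndpointBuilderFactory contributes to the Camel endpoint DSL (available, for example, from org.apache.camel.builder.endpoint.EndpointRouteBuilder). A minimal consumer route might look like the following sketch; the endpoint name, connection values and offset file path are placeholders, and the availability of the factory method assumes camel-endpointdsl and camel-debezium-db2 are on the classpath.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2CdcRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        // Consume change events from Db2 and log them (all values are placeholders).
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org")
                .databasePort(50000)                       // default Db2 port
                .databaseUser("db2inst1")
                .databasePassword("secret")
                .databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")               // logical namespace for the captured server
                .offsetStorageFileName("/tmp/db2-offsets.dat"))
            .log("Received change event: ${body}");
    }
}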
-
Method Summary
- additionalProperties(String key, Object value): Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g. setting Kafka Connect properties needed by the Debezium engine, for example setting KafkaOffsetBackingStore); the properties have to be prefixed with additionalProperties..
- additionalProperties(Map values): Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g. setting Kafka Connect properties needed by the Debezium engine, for example setting KafkaOffsetBackingStore); the properties have to be prefixed with additionalProperties..
- advanced()
- cdcChangeTablesSchema(String cdcChangeTablesSchema): The name of the schema where CDC change tables are located; defaults to 'ASNCDC'.
- cdcControlSchema(String cdcControlSchema): The name of the schema where CDC control structures are located; defaults to 'ASNCDC'.
- columnExcludeList(String columnExcludeList): Regular expressions matching columns to exclude from change events.
- columnIncludeList(String columnIncludeList): Regular expressions matching columns to include in change events.
- columnPropagateSourceType(String columnPropagateSourceType): A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns' original type and original length as parameters to the corresponding field schemas in the emitted change records.
- converters(String converters): Optional list of custom converters that would be used instead of default ones.
- customMetricTags(String customMetricTags): The custom metric tags accept key-value pairs to customize the MBean object name; they are appended to the end of the regular name, each key representing a tag for the MBean object name and the corresponding value being the value of that tag.
- databaseDbname(String databaseDbname): The name of the database from which the connector should capture changes.
- databaseHostname(String databaseHostname): Resolvable hostname or IP address of the database server.
- databasePassword(String databasePassword): Password of the database user to be used when connecting to the database.
- databasePort(int databasePort): Port of the database server.
- databasePort(String databasePort): Port of the database server.
- databaseUser(String databaseUser): Name of the database user to be used when connecting to the database.
- datatypePropagateSourceType(String datatypePropagateSourceType): A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.
- db2Platform(String db2Platform): Informs the connector which Db2 implementation platform it is connected to.
- decimalHandlingMode(String decimalHandlingMode): Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers.
- errorsMaxRetries(int errorsMaxRetries): The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries).
- errorsMaxRetries(String errorsMaxRetries): The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries).
- eventProcessingFailureHandlingMode(String eventProcessingFailureHandlingMode): Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped.
- heartbeatIntervalMs(int heartbeatIntervalMs): Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic.
- heartbeatIntervalMs(String heartbeatIntervalMs): Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic.
- heartbeatTopicsPrefix(String heartbeatTopicsPrefix): The prefix that is used to name heartbeat topics. Defaults to __debezium-heartbeat.
- includeSchemaChanges(boolean includeSchemaChanges): Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID.
- includeSchemaChanges(String includeSchemaChanges): Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID.
- incrementalSnapshotChunkSize(int incrementalSnapshotChunkSize): The maximum size of chunk (number of documents/rows) for incremental snapshotting.
- incrementalSnapshotChunkSize(String incrementalSnapshotChunkSize): The maximum size of chunk (number of documents/rows) for incremental snapshotting.
- incrementalSnapshotWatermarkingStrategy(String incrementalSnapshotWatermarkingStrategy): Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both open and close signals are written into the signal data collection (default); 'insert_delete' only the open signal is written to the signal data collection, and the close will delete the relative open signal.
- internalKeyConverter(String internalKeyConverter): The Converter class that should be used to serialize and deserialize key data for offsets.
- internalValueConverter(String internalValueConverter): The Converter class that should be used to serialize and deserialize value data for offsets.
- maxBatchSize(int maxBatchSize): Maximum size of each batch of source records.
- maxBatchSize(String maxBatchSize): Maximum size of each batch of source records.
- maxQueueSize(int maxQueueSize): Maximum size of the queue for change events read from the database log but not yet recorded or forwarded.
- maxQueueSize(String maxQueueSize): Maximum size of the queue for change events read from the database log but not yet recorded or forwarded.
- maxQueueSizeInBytes(long maxQueueSizeInBytes): Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded.
- maxQueueSizeInBytes(String maxQueueSizeInBytes): Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded.
- messageKeyColumns(String messageKeyColumns): A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key.
- notificationEnabledChannels(String notificationEnabledChannels): List of notification channel names that are enabled.
- notificationSinkTopicName(String notificationSinkTopicName): The name of the topic for the notifications.
- offsetCommitPolicy(String offsetCommitPolicy): The name of the Java class of the commit policy.
- offsetCommitTimeoutMs(long offsetCommitTimeoutMs): Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.
- offsetCommitTimeoutMs(String offsetCommitTimeoutMs): Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.
- offsetFlushIntervalMs(long offsetFlushIntervalMs): Interval at which to try committing offsets.
- offsetFlushIntervalMs(String offsetFlushIntervalMs): Interval at which to try committing offsets.
- offsetStorage(String offsetStorage): The name of the Java class that is responsible for persistence of connector offsets.
- offsetStorageFileName(String offsetStorageFileName): Path to file where offsets are to be stored.
- offsetStoragePartitions(int offsetStoragePartitions): The number of partitions used when creating the offset storage topic.
- offsetStoragePartitions(String offsetStoragePartitions): The number of partitions used when creating the offset storage topic.
- offsetStorageReplicationFactor(int offsetStorageReplicationFactor): Replication factor used when creating the offset storage topic.
- offsetStorageReplicationFactor(String offsetStorageReplicationFactor): Replication factor used when creating the offset storage topic.
- offsetStorageTopic(String offsetStorageTopic): The name of the Kafka topic where offsets are to be stored.
- pollIntervalMs(long pollIntervalMs): Time to wait for new change events to appear after receiving no events, given in milliseconds.
- pollIntervalMs(String pollIntervalMs): Time to wait for new change events to appear after receiving no events, given in milliseconds.
- postProcessors(String postProcessors): Optional list of post processors.
- provideTransactionMetadata(boolean provideTransactionMetadata): Enables transaction metadata extraction together with event counting.
- provideTransactionMetadata(String provideTransactionMetadata): Enables transaction metadata extraction together with event counting.
- queryFetchSize(int queryFetchSize): The maximum number of records that should be loaded into memory while streaming.
- queryFetchSize(String queryFetchSize): The maximum number of records that should be loaded into memory while streaming.
- retriableRestartConnectorWaitMs(long retriableRestartConnectorWaitMs): Time to wait before restarting the connector after a retriable exception occurs.
- retriableRestartConnectorWaitMs(String retriableRestartConnectorWaitMs): Time to wait before restarting the connector after a retriable exception occurs.
- schemaHistoryInternal(String schemaHistoryInternal): The name of the SchemaHistory class that should be used to store and recover database schema changes.
- schemaHistoryInternalFileFilename(String schemaHistoryInternalFileFilename): The path to the file that will be used to record the database schema history.
- schemaHistoryInternalSkipUnparseableDdl(boolean schemaHistoryInternalSkipUnparseableDdl): Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse.
- schemaHistoryInternalSkipUnparseableDdl(String schemaHistoryInternalSkipUnparseableDdl): Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse.
- schemaHistoryInternalStoreOnlyCapturedDatabasesDdl(boolean schemaHistoryInternalStoreOnlyCapturedDatabasesDdl): Controls what DDL Debezium will store in the database schema history.
- schemaHistoryInternalStoreOnlyCapturedDatabasesDdl(String schemaHistoryInternalStoreOnlyCapturedDatabasesDdl): Controls what DDL Debezium will store in the database schema history.
- schemaHistoryInternalStoreOnlyCapturedTablesDdl(boolean schemaHistoryInternalStoreOnlyCapturedTablesDdl): Controls what DDL Debezium will store in the database schema history.
- schemaHistoryInternalStoreOnlyCapturedTablesDdl(String schemaHistoryInternalStoreOnlyCapturedTablesDdl): Controls what DDL Debezium will store in the database schema history.
- schemaNameAdjustmentMode(String schemaNameAdjustmentMode): Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx.
- signalDataCollection(String signalDataCollection): The name of the data collection that is used to send signals/commands to Debezium.
- signalEnabledChannels(String signalEnabledChannels): List of channel names that are enabled.
- signalPollIntervalMs(long signalPollIntervalMs): Interval for looking for new signals in registered channels, given in milliseconds.
- signalPollIntervalMs(String signalPollIntervalMs): Interval for looking for new signals in registered channels, given in milliseconds.
- skippedOperations(String skippedOperations): The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped.
- snapshotDelayMs(long snapshotDelayMs): A delay period before a snapshot will begin, given in milliseconds.
- snapshotDelayMs(String snapshotDelayMs): A delay period before a snapshot will begin, given in milliseconds.
- snapshotFetchSize(int snapshotFetchSize): The maximum number of records that should be loaded into memory while performing a snapshot.
- snapshotFetchSize(String snapshotFetchSize): The maximum number of records that should be loaded into memory while performing a snapshot.
- snapshotIncludeCollectionList(String snapshotIncludeCollectionList): This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.
- snapshotLockTimeoutMs(long snapshotLockTimeoutMs): The maximum number of milliseconds to wait for table locks at the beginning of a snapshot.
- snapshotLockTimeoutMs(String snapshotLockTimeoutMs): The maximum number of milliseconds to wait for table locks at the beginning of a snapshot.
- snapshotMode(String snapshotMode): The criteria for running a snapshot upon startup of the connector.
- snapshotModeConfigurationBasedSnapshotData(boolean snapshotModeConfigurationBasedSnapshotData): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the data should be snapshotted or not.
- snapshotModeConfigurationBasedSnapshotData(String snapshotModeConfigurationBasedSnapshotData): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the data should be snapshotted or not.
- snapshotModeConfigurationBasedSnapshotOnDataError(boolean snapshotModeConfigurationBasedSnapshotOnDataError): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the data should be snapshotted or not in case of error.
- snapshotModeConfigurationBasedSnapshotOnDataError(String snapshotModeConfigurationBasedSnapshotOnDataError): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the data should be snapshotted or not in case of error.
- snapshotModeConfigurationBasedSnapshotOnSchemaError(boolean snapshotModeConfigurationBasedSnapshotOnSchemaError): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the schema should be snapshotted or not in case of error.
- snapshotModeConfigurationBasedSnapshotOnSchemaError(String snapshotModeConfigurationBasedSnapshotOnSchemaError): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the schema should be snapshotted or not in case of error.
- snapshotModeConfigurationBasedSnapshotSchema(boolean snapshotModeConfigurationBasedSnapshotSchema): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the schema should be snapshotted or not.
- snapshotModeConfigurationBasedSnapshotSchema(String snapshotModeConfigurationBasedSnapshotSchema): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the schema should be snapshotted or not.
- snapshotModeConfigurationBasedStartStream(boolean snapshotModeConfigurationBasedStartStream): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the stream should start or not after snapshot.
- snapshotModeConfigurationBasedStartStream(String snapshotModeConfigurationBasedStartStream): When 'snapshot.mode' is set as configuration_based, this setting permits specifying whether the stream should start or not after snapshot.
- snapshotModeCustomName(String snapshotModeCustomName): When 'snapshot.mode' is set as custom, this setting must be set to specify the name of the custom implementation provided in the 'name()' method.
- snapshotSelectStatementOverrides(String snapshotSelectStatementOverrides): This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connectors.
- snapshotTablesOrderByRowCount(String snapshotTablesOrderByRowCount): Controls the order in which tables are processed in the initial snapshot.
- sourceinfoStructMaker(String sourceinfoStructMaker): The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct.
- streamingDelayMs(long streamingDelayMs): A delay period after the snapshot is completed and the streaming begins, given in milliseconds.
- streamingDelayMs(String streamingDelayMs): A delay period after the snapshot is completed and the streaming begins, given in milliseconds.
- tableExcludeList(String tableExcludeList): A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring.
- tableIgnoreBuiltin(boolean tableIgnoreBuiltin): Flag specifying whether built-in tables should be ignored.
- tableIgnoreBuiltin(String tableIgnoreBuiltin): Flag specifying whether built-in tables should be ignored.
- tableIncludeList(String tableIncludeList): The tables for which changes are to be captured.
- timePrecisionMode(String timePrecisionMode): Time, date, and timestamps can be represented with different kinds of precisions, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive_time_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.
- tombstonesOnDelete(boolean tombstonesOnDelete): Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false).
- tombstonesOnDelete(String tombstonesOnDelete): Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false).
- topicNamingStrategy(String topicNamingStrategy): The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc.
- topicPrefix(String topicPrefix): Topic prefix that identifies and provides a namespace for the particular database server/cluster in which Debezium is capturing changes.
- transactionMetadataFactory(String transactionMetadataFactory): Class to make transaction context & transaction struct/schemas.
Methods inherited from interface org.apache.camel.builder.EndpointConsumerBuilder
doSetMultiValueProperties, doSetMultiValueProperty, doSetProperty, expr, getRawUri, getUri
Methods inherited from interface org.apache.camel.EndpointConsumerResolver
resolve, resolve
-
Method Details
-
advanced
-
additionalProperties
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder additionalProperties(String key, Object value)
Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g. setting Kafka Connect properties needed by the Debezium engine, for example setting KafkaOffsetBackingStore); the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. The option is a: java.util.Map<java.lang.String, java.lang.Object> type. The option is multivalued, and you can use the additionalProperties(String, Object) method to add a value (call the method multiple times to set more values). Group: common
- Parameters: key - the option key; value - the option value
- Returns: the dsl builder
-
additionalProperties
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder additionalProperties(Map values)
Additional properties for debezium components in case they can't be set directly on the camel configurations (e.g. setting Kafka Connect properties needed by the Debezium engine, for example setting KafkaOffsetBackingStore); the properties have to be prefixed with additionalProperties.. E.g: additionalProperties.transactional.id=12345&additionalProperties.schema.registry.url=http://localhost:8811/avro. The option is a: java.util.Map<java.lang.String, java.lang.Object> type. The option is multivalued, and you can use the additionalProperties(String, Object) method to add a value (call the method multiple times to set more values). Group: common
- Parameters: values - the values
- Returns: the dsl builder
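Both overloads feed the same underlying map. A sketch of how extra Kafka Connect / Debezium engine properties could be passed through follows; the property keys shown are illustrative examples, not a verified engine configuration, and the debeziumDb2(...) factory method and connection values are placeholders.

import java.util.Map;
import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2AdditionalPropsRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                // single key/value pairs, added one at a time
                .additionalProperties("transactional.id", "12345")
                // or several at once via the Map overload
                .additionalProperties(Map.of("schema.registry.url", "http://localhost:8811/avro")))
            .log("${body}");
    }
}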
-
internalKeyConverter
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder internalKeyConverter(String internalKeyConverter)
The Converter class that should be used to serialize and deserialize key data for offsets. The default is JSON converter. The option is a: java.lang.String type. Default: org.apache.kafka.connect.json.JsonConverter. Group: consumer
- Parameters: internalKeyConverter - the value to set
- Returns: the dsl builder
-
internalValueConverter
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder internalValueConverter(String internalValueConverter)
The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter. The option is a: java.lang.String type. Default: org.apache.kafka.connect.json.JsonConverter. Group: consumer
- Parameters: internalValueConverter - the value to set
- Returns: the dsl builder
-
offsetCommitPolicy
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetCommitPolicy(String offsetCommitPolicy)
The name of the Java class of the commit policy. It defines when offsets commit has to be triggered based on the number of events processed and the time elapsed since the last commit. This class must implement the interface 'OffsetCommitPolicy'. The default is a periodic commit policy based upon time intervals. The option is a: java.lang.String type. Group: consumer
- Parameters: offsetCommitPolicy - the value to set
- Returns: the dsl builder
-
offsetCommitTimeoutMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetCommitTimeoutMs(long offsetCommitTimeoutMs)
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds. The option is a: long type. Default: 5000. Group: consumer
- Parameters: offsetCommitTimeoutMs - the value to set
- Returns: the dsl builder
-
offsetCommitTimeoutMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetCommitTimeoutMs(String offsetCommitTimeoutMs)
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. The default is 5 seconds. The option will be converted to a long type. Default: 5000. Group: consumer
- Parameters: offsetCommitTimeoutMs - the value to set
- Returns: the dsl builder
-
offsetFlushIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetFlushIntervalMs(long offsetFlushIntervalMs)
Interval at which to try committing offsets. The default is 1 minute. The option is a: long type. Default: 60000. Group: consumer
- Parameters: offsetFlushIntervalMs - the value to set
- Returns: the dsl builder
-
offsetFlushIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetFlushIntervalMs(String offsetFlushIntervalMs)
Interval at which to try committing offsets. The default is 1 minute. The option will be converted to a long type. Default: 60000. Group: consumer
- Parameters: offsetFlushIntervalMs - the value to set
- Returns: the dsl builder
-
offsetStorage
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStorage(String offsetStorage)
The name of the Java class that is responsible for persistence of connector offsets. The option is a: java.lang.String type. Default: org.apache.kafka.connect.storage.FileOffsetBackingStore. Group: consumer
- Parameters: offsetStorage - the value to set
- Returns: the dsl builder
-
offsetStorageFileName
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStorageFileName(String offsetStorageFileName)
Path to file where offsets are to be stored. Required when offset.storage is set to the FileOffsetBackingStore. The option is a: java.lang.String type. Group: consumer
- Parameters: offsetStorageFileName - the value to set
- Returns: the dsl builder
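A sketch combining the offset-storage options above for the default file-backed store; the paths, timings and factory method are placeholders rather than recommended values.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2FileOffsetRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                // file-based offset persistence (the documented default backing store)
                .offsetStorage("org.apache.kafka.connect.storage.FileOffsetBackingStore")
                .offsetStorageFileName("/var/camel/db2-offsets.dat")
                .offsetFlushIntervalMs(60000)      // try committing offsets every minute
                .offsetCommitTimeoutMs(5000))      // give the flush up to 5 seconds
            .log("${body}");
    }
}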
-
offsetStoragePartitions
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStoragePartitions(int offsetStoragePartitions)
The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'. The option is a: int type. Group: consumer
- Parameters: offsetStoragePartitions - the value to set
- Returns: the dsl builder
-
offsetStoragePartitions
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStoragePartitions(String offsetStoragePartitions)
The number of partitions used when creating the offset storage topic. Required when offset.storage is set to the 'KafkaOffsetBackingStore'. The option will be converted to an int type. Group: consumer
- Parameters: offsetStoragePartitions - the value to set
- Returns: the dsl builder
-
offsetStorageReplicationFactor
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStorageReplicationFactor(int offsetStorageReplicationFactor)
Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore. The option is a: int type. Group: consumer
- Parameters: offsetStorageReplicationFactor - the value to set
- Returns: the dsl builder
-
offsetStorageReplicationFactor
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStorageReplicationFactor(String offsetStorageReplicationFactor)
Replication factor used when creating the offset storage topic. Required when offset.storage is set to the KafkaOffsetBackingStore. The option will be converted to an int type. Group: consumer
- Parameters: offsetStorageReplicationFactor - the value to set
- Returns: the dsl builder
-
offsetStorageTopic
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder offsetStorageTopic(String offsetStorageTopic)
The name of the Kafka topic where offsets are to be stored. Required when offset.storage is set to the KafkaOffsetBackingStore. The option is a: java.lang.String type. Group: consumer
- Parameters: offsetStorageTopic - the value to set
- Returns: the dsl builder
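For the Kafka-backed store, the topic-related options above apply together. The sketch below is an assumption-heavy illustration: the broker address passed through additionalProperties is only an example, and the exact engine property keys should be verified against the Debezium embedded-engine documentation for the version in use.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2KafkaOffsetRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                .offsetStorage("org.apache.kafka.connect.storage.KafkaOffsetBackingStore")
                .offsetStorageTopic("db2-connector-offsets")
                .offsetStoragePartitions(1)
                .offsetStorageReplicationFactor(1)
                // hypothetical pass-through property for the backing store's Kafka client
                .additionalProperties("bootstrap.servers", "kafka:9092"))
            .log("${body}");
    }
}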
-
cdcChangeTablesSchema
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder cdcChangeTablesSchema(String cdcChangeTablesSchema)
The name of the schema where CDC change tables are located; defaults to 'ASNCDC'. The option is a: java.lang.String type. Default: ASNCDC. Group: db2
- Parameters: cdcChangeTablesSchema - the value to set
- Returns: the dsl builder
-
cdcControlSchema
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder cdcControlSchema(String cdcControlSchema)
The name of the schema where CDC control structures are located; defaults to 'ASNCDC'. The option is a: java.lang.String type. Default: ASNCDC. Group: db2
- Parameters: cdcControlSchema - the value to set
- Returns: the dsl builder
-
columnExcludeList
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder columnExcludeList(String columnExcludeList)
Regular expressions matching columns to exclude from change events. The option is a: java.lang.String type. Group: db2
- Parameters: columnExcludeList - the value to set
- Returns: the dsl builder
-
columnIncludeList
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder columnIncludeList(String columnIncludeList)
Regular expressions matching columns to include in change events. The option is a: java.lang.String type. Group: db2
- Parameters: columnIncludeList - the value to set
- Returns: the dsl builder
-
columnPropagateSourceType
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder columnPropagateSourceType(String columnPropagateSourceType)
A comma-separated list of regular expressions matching fully-qualified names of columns that adds the columns' original type and original length as parameters to the corresponding field schemas in the emitted change records. The option is a: java.lang.String type. Group: db2
- Parameters: columnPropagateSourceType - the value to set
- Returns: the dsl builder
-
converters
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder converters(String converters)
Optional list of custom converters that would be used instead of default ones. The converters are defined using '.type' config option and configured using options '.'. The option is a: java.lang.String type. Group: db2
- Parameters: converters - the value to set
- Returns: the dsl builder
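A sketch of how the CDC schemas and column filters above can be combined; the schema name and regular expressions are examples only, and the factory method and connection values are placeholders.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2ColumnFilterRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                .cdcChangeTablesSchema("ASNCDC")                       // documented default CDC change-table schema
                .cdcControlSchema("ASNCDC")                            // documented default CDC control schema
                .columnExcludeList("DB2INST1\\.CUSTOMERS\\.SSN")       // drop a sensitive column from events
                .columnPropagateSourceType("DB2INST1\\.ORDERS\\..*"))  // keep original type/length metadata
            .log("${body}");
    }
}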
-
customMetricTags
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder customMetricTags(String customMetricTags)
The custom metric tags accept key-value pairs to customize the MBean object name; they are appended to the end of the regular name, each key representing a tag for the MBean object name and the corresponding value being the value of that tag. For example: k1=v1,k2=v2. The option is a: java.lang.String type. Group: db2
- Parameters: customMetricTags - the value to set
- Returns: the dsl builder
-
databaseDbname
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder databaseDbname(String databaseDbname)
The name of the database from which the connector should capture changes. The option is a: java.lang.String type. Group: db2
- Parameters: databaseDbname - the value to set
- Returns: the dsl builder
-
databaseHostname
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder databaseHostname(String databaseHostname)
Resolvable hostname or IP address of the database server. The option is a: java.lang.String type. Group: db2
- Parameters: databaseHostname - the value to set
- Returns: the dsl builder
-
databasePassword
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder databasePassword(String databasePassword)
Password of the database user to be used when connecting to the database. The option is a: java.lang.String type. Required: true. Group: db2
- Parameters: databasePassword - the value to set
- Returns: the dsl builder
-
databasePort
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder databasePort(int databasePort)
Port of the database server. The option is a: int type. Default: 50000. Group: db2
- Parameters: databasePort - the value to set
- Returns: the dsl builder
-
databasePort
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder databasePort(String databasePort)
Port of the database server. The option will be converted to an int type. Default: 50000. Group: db2
- Parameters: databasePort - the value to set
- Returns: the dsl builder
-
databaseUser
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder databaseUser(String databaseUser)
Name of the database user to be used when connecting to the database. The option is a: java.lang.String type. Group: db2
- Parameters: databaseUser - the value to set
- Returns: the dsl builder
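The int and String overloads of databasePort (and of the other numeric and boolean options) differ only in that the String variant is converted at runtime, which makes it convenient with Camel property placeholders. A sketch follows; the placeholder keys are assumptions, not predefined properties.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2ConnectionRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("{{db2.host}}")       // resolved from Camel properties
                .databasePort("{{db2.port}}")           // String overload, converted to an int
                .databaseUser("{{db2.user}}")
                .databasePassword("{{db2.password}}")
                .databaseDbname("{{db2.dbname}}")
                .topicPrefix("db2-server-1"))
            .log("${body}");
    }
}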
-
datatypePropagateSourceType
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder datatypePropagateSourceType(String datatypePropagateSourceType)
A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records. The option is a: java.lang.String type. Group: db2
- Parameters: datatypePropagateSourceType - the value to set
- Returns: the dsl builder
-
db2Platform
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder db2Platform(String db2Platform)
Informs the connector which Db2 implementation platform it is connected to. The default is 'LUW', which means Windows, UNIX, Linux. Using a value of 'Z' ensures that the Db2 for z/OS specific SQL statements are used. The option is a: java.lang.String type. Default: LUW. Group: db2
- Parameters: db2Platform - the value to set
- Returns: the dsl builder
-
decimalHandlingMode
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder decimalHandlingMode(String decimalHandlingMode)
Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in consumers. The option is a: java.lang.String type. Default: precise. Group: db2
- Parameters: decimalHandlingMode - the value to set
- Returns: the dsl builder
-
errorsMaxRetries
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder errorsMaxRetries(int errorsMaxRetries)
The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries). The option is a: int type. Default: -1. Group: db2
- Parameters: errorsMaxRetries - the value to set
- Returns: the dsl builder
-
errorsMaxRetries
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder errorsMaxRetries(String errorsMaxRetries)
The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries). The option will be converted to an int type. Default: -1. Group: db2
- Parameters: errorsMaxRetries - the value to set
- Returns: the dsl builder
-
eventProcessingFailureHandlingMode
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder eventProcessingFailureHandlingMode(String eventProcessingFailureHandlingMode)
Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped. The option is a: java.lang.String type. Default: fail. Group: db2
- Parameters: eventProcessingFailureHandlingMode - the value to set
- Returns: the dsl builder
-
heartbeatIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder heartbeatIntervalMs(int heartbeatIntervalMs)
Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. The option is a: int type. Default: 0ms. Group: db2
- Parameters: heartbeatIntervalMs - the value to set
- Returns: the dsl builder
-
heartbeatIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder heartbeatIntervalMs(String heartbeatIntervalMs)
Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. The option will be converted to an int type. Default: 0ms. Group: db2
- Parameters: heartbeatIntervalMs - the value to set
- Returns: the dsl builder
-
heartbeatTopicsPrefix
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder heartbeatTopicsPrefix(String heartbeatTopicsPrefix)
The prefix that is used to name heartbeat topics. Defaults to __debezium-heartbeat. The option is a: java.lang.String type. Default: __debezium-heartbeat. Group: db2
- Parameters: heartbeatTopicsPrefix - the value to set
- Returns: the dsl builder
-
includeSchemaChanges
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder includeSchemaChanges(boolean includeSchemaChanges)
Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history. The option is a: boolean type. Default: true. Group: db2
- Parameters: includeSchemaChanges - the value to set
- Returns: the dsl builder
-
includeSchemaChanges
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder includeSchemaChanges(String includeSchemaChanges)
Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history. The option will be converted to a boolean type. Default: true. Group: db2
- Parameters: includeSchemaChanges - the value to set
- Returns: the dsl builder
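A sketch that enables heartbeats and turns off schema-change events; the interval and prefix values are arbitrary examples, and the factory method and connection values are placeholders.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2HeartbeatRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                .heartbeatIntervalMs(30000)                      // send a heartbeat every 30 s (0 disables)
                .heartbeatTopicsPrefix("__debezium-heartbeat")   // the documented default prefix
                .includeSchemaChanges(false))                    // do not emit schema-change events
            .log("${body}");
    }
}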
-
incrementalSnapshotChunkSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder incrementalSnapshotChunkSize(int incrementalSnapshotChunkSize)
The maximum size of chunk (number of documents/rows) for incremental snapshotting. The option is a: int type. Default: 1024. Group: db2
- Parameters: incrementalSnapshotChunkSize - the value to set
- Returns: the dsl builder
-
incrementalSnapshotChunkSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder incrementalSnapshotChunkSize(String incrementalSnapshotChunkSize)
The maximum size of chunk (number of documents/rows) for incremental snapshotting. The option will be converted to an int type. Default: 1024. Group: db2
- Parameters: incrementalSnapshotChunkSize - the value to set
- Returns: the dsl builder
-
incrementalSnapshotWatermarkingStrategy
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder incrementalSnapshotWatermarkingStrategy(String incrementalSnapshotWatermarkingStrategy)
Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both open and close signals are written into the signal data collection (default); 'insert_delete' only the open signal is written to the signal data collection, and the close will delete the relative open signal. The option is a: java.lang.String type. Default: INSERT_INSERT. Group: db2
- Parameters: incrementalSnapshotWatermarkingStrategy - the value to set
- Returns: the dsl builder
-
maxBatchSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder maxBatchSize(int maxBatchSize)
Maximum size of each batch of source records. Defaults to 2048. The option is a: int type. Default: 2048. Group: db2
- Parameters: maxBatchSize - the value to set
- Returns: the dsl builder
-
maxBatchSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder maxBatchSize(String maxBatchSize)
Maximum size of each batch of source records. Defaults to 2048. The option will be converted to an int type. Default: 2048. Group: db2
- Parameters: maxBatchSize - the value to set
- Returns: the dsl builder
-
maxQueueSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder maxQueueSize(int maxQueueSize)
Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. The option is a: int type. Default: 8192. Group: db2
- Parameters: maxQueueSize - the value to set
- Returns: the dsl builder
-
maxQueueSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder maxQueueSize(String maxQueueSize)
Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. The option will be converted to an int type. Default: 8192. Group: db2
- Parameters: maxQueueSize - the value to set
- Returns: the dsl builder
-
maxQueueSizeInBytes
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder maxQueueSizeInBytes(long maxQueueSizeInBytes)
Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled. The option is a: long type. Default: 0. Group: db2
- Parameters: maxQueueSizeInBytes - the value to set
- Returns: the dsl builder
-
maxQueueSizeInBytes
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder maxQueueSizeInBytes(String maxQueueSizeInBytes)
Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, meaning the feature is not enabled. The option will be converted to a long type. Default: 0. Group: db2
- Parameters: maxQueueSizeInBytes - the value to set
- Returns: the dsl builder
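The batch and queue options interact: maxQueueSize should stay larger than maxBatchSize, and maxQueueSizeInBytes adds an optional byte-based bound. A tuning sketch with arbitrary values and placeholder connection settings:

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2TuningRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                .maxBatchSize(4096)                     // records per batch
                .maxQueueSize(16384)                    // must exceed maxBatchSize
                .maxQueueSizeInBytes(64 * 1024 * 1024)  // optional byte limit (0 = disabled)
                .pollIntervalMs(250))                   // wait 250 ms between empty polls
            .log("${body}");
    }
}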
-
messageKeyColumns
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder messageKeyColumns(String messageKeyColumns)
A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':', where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id. The option is a: java.lang.String type. Group: db2
- Parameters: messageKeyColumns - the value to set
- Returns: the dsl builder
-
notificationEnabledChannels
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder notificationEnabledChannels(String notificationEnabledChannels)
List of notification channel names that are enabled. The option is a: java.lang.String type. Group: db2
- Parameters: notificationEnabledChannels - the value to set
- Returns: the dsl builder
-
notificationSinkTopicName
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder notificationSinkTopicName(String notificationSinkTopicName)
The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels. The option is a: java.lang.String type. Group: db2
- Parameters: notificationSinkTopicName - the value to set
- Returns: the dsl builder
-
pollIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder pollIntervalMs(long pollIntervalMs)
Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. The option is a: long type. Default: 500ms. Group: db2
- Parameters: pollIntervalMs - the value to set
- Returns: the dsl builder
-
pollIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder pollIntervalMs(String pollIntervalMs)
Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. The option will be converted to a long type. Default: 500ms. Group: db2
- Parameters: pollIntervalMs - the value to set
- Returns: the dsl builder
-
postProcessors
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder postProcessors(String postProcessors)
Optional list of post processors. The processors are defined using '.type' config option and configured using options ''. The option is a: java.lang.String type. Group: db2
- Parameters: postProcessors - the value to set
- Returns: the dsl builder
-
provideTransactionMetadata
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder provideTransactionMetadata(boolean provideTransactionMetadata)
Enables transaction metadata extraction together with event counting. The option is a: boolean type. Default: false. Group: db2
- Parameters: provideTransactionMetadata - the value to set
- Returns: the dsl builder
-
provideTransactionMetadata
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder provideTransactionMetadata(String provideTransactionMetadata)
Enables transaction metadata extraction together with event counting. The option will be converted to a boolean type. Default: false. Group: db2
- Parameters: provideTransactionMetadata - the value to set
- Returns: the dsl builder
-
queryFetchSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder queryFetchSize(int queryFetchSize)
The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size. The default value is '10000'. The option is a: int type. Default: 10000. Group: db2
- Parameters: queryFetchSize - the value to set
- Returns: the dsl builder
-
queryFetchSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder queryFetchSize(String queryFetchSize)
The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size. The default value is '10000'. The option will be converted to an int type. Default: 10000. Group: db2
- Parameters: queryFetchSize - the value to set
- Returns: the dsl builder
-
retriableRestartConnectorWaitMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder retriableRestartConnectorWaitMs(long retriableRestartConnectorWaitMs)
Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000ms. The option is a: long type. Default: 10s. Group: db2
- Parameters: retriableRestartConnectorWaitMs - the value to set
- Returns: the dsl builder
-
retriableRestartConnectorWaitMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder retriableRestartConnectorWaitMs(String retriableRestartConnectorWaitMs)
Time to wait before restarting the connector after a retriable exception occurs. Defaults to 10000ms. The option will be converted to a long type. Default: 10s. Group: db2
- Parameters: retriableRestartConnectorWaitMs - the value to set
- Returns: the dsl builder
-
schemaHistoryInternal
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternal(String schemaHistoryInternal)
The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string. The option is a: java.lang.String type. Default: io.debezium.storage.kafka.history.KafkaSchemaHistory. Group: db2
- Parameters: schemaHistoryInternal - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalFileFilename
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalFileFilename(String schemaHistoryInternalFileFilename)
The path to the file that will be used to record the database schema history. The option is a: java.lang.String type. Group: db2
- Parameters: schemaHistoryInternalFileFilename - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalSkipUnparseableDdl
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalSkipUnparseableDdl(boolean schemaHistoryInternalSkipUnparseableDdl)
Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes. The option is a: boolean type. Default: false. Group: db2
- Parameters: schemaHistoryInternalSkipUnparseableDdl - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalSkipUnparseableDdl
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalSkipUnparseableDdl(String schemaHistoryInternalSkipUnparseableDdl)
Controls the action Debezium will take when it meets a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes. The option will be converted to a boolean type. Default: false. Group: db2
- Parameters: schemaHistoryInternalSkipUnparseableDdl - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalStoreOnlyCapturedDatabasesDdl
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalStoreOnlyCapturedDatabasesDdl(boolean schemaHistoryInternalStoreOnlyCapturedDatabasesDdl)
Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from the captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements. The option is a: boolean type. Default: false. Group: db2
- Parameters: schemaHistoryInternalStoreOnlyCapturedDatabasesDdl - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalStoreOnlyCapturedDatabasesDdl
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalStoreOnlyCapturedDatabasesDdl(String schemaHistoryInternalStoreOnlyCapturedDatabasesDdl)
Controls what DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from the captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements. The option will be converted to a boolean type. Default: false. Group: db2
- Parameters: schemaHistoryInternalStoreOnlyCapturedDatabasesDdl - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalStoreOnlyCapturedTablesDdl
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalStoreOnlyCapturedTablesDdl(boolean schemaHistoryInternalStoreOnlyCapturedTablesDdl)
Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored. The option is a: boolean type. Default: false. Group: db2
- Parameters: schemaHistoryInternalStoreOnlyCapturedTablesDdl - the value to set
- Returns: the dsl builder
-
schemaHistoryInternalStoreOnlyCapturedTablesDdl
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaHistoryInternalStoreOnlyCapturedTablesDdl(String schemaHistoryInternalStoreOnlyCapturedTablesDdl)
Controls what DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored. The option will be converted to a boolean type. Default: false. Group: db2
- Parameters: schemaHistoryInternalStoreOnlyCapturedTablesDdl - the value to set
- Returns: the dsl builder
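A sketch of a file-based schema-history setup using the options above; the FileSchemaHistory class name is assumed from Debezium's storage module and should be verified against the Debezium version in use, and the file path and connection values are placeholders.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2SchemaHistoryRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                // assumed class name for Debezium's file-based schema history store
                .schemaHistoryInternal("io.debezium.storage.file.history.FileSchemaHistory")
                .schemaHistoryInternalFileFilename("/var/camel/db2-schema-history.dat")
                .schemaHistoryInternalStoreOnlyCapturedTablesDdl(true)   // keep only DDL for captured tables
                .schemaHistoryInternalSkipUnparseableDdl(false))         // fail on unparseable DDL (default)
            .log("${body}");
    }
}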
-
schemaNameAdjustmentMode
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder schemaNameAdjustmentMode(String schemaNameAdjustmentMode)
Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro_unicode' replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default). The option is a: java.lang.String type. Default: none. Group: db2
- Parameters: schemaNameAdjustmentMode - the value to set
- Returns: the dsl builder
-
signalDataCollection
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder signalDataCollection(String signalDataCollection)
The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set. The option is a: java.lang.String type. Group: db2
- Parameters: signalDataCollection - the value to set
- Returns: the dsl builder
-
signalEnabledChannels
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder signalEnabledChannels(String signalEnabledChannels)
List of channel names that are enabled. The source channel is enabled by default. The option is a: java.lang.String type. Default: source. Group: db2
- Parameters: signalEnabledChannels - the value to set
- Returns: the dsl builder
-
signalPollIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder signalPollIntervalMs(long signalPollIntervalMs)
Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. The option is a: long type. Default: 5s. Group: db2
- Parameters: signalPollIntervalMs - the value to set
- Returns: the dsl builder
-
signalPollIntervalMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder signalPollIntervalMs(String signalPollIntervalMs)
Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. The option will be converted to a long type. Default: 5s. Group: db2
- Parameters: signalPollIntervalMs - the value to set
- Returns: the dsl builder
-
skippedOperations
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder skippedOperations(String skippedOperations)
The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped. The option is a: java.lang.String type. Default: t. Group: db2
- Parameters: skippedOperations - the value to set
- Returns: the dsl builder
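A sketch wiring the signalling options together with skipped operations; the signal table name is a placeholder that must exist in the captured database, and the factory method and connection values are likewise placeholders.

import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

public class Db2SignalsRoute extends EndpointRouteBuilder {
    @Override
    public void configure() {
        from(debeziumDb2("myDb2Connector")
                .databaseHostname("db2.example.org").databaseUser("db2inst1")
                .databasePassword("secret").databaseDbname("TESTDB")
                .topicPrefix("db2-server-1")
                .signalDataCollection("DB2INST1.DEBEZIUM_SIGNAL")  // placeholder signal table
                .signalEnabledChannels("source")                   // documented default channel
                .signalPollIntervalMs(5000)                        // check for signals every 5 s
                .skippedOperations("none"))                        // capture truncates as well
            .log("${body}");
    }
}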
-
snapshotDelayMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotDelayMs(long snapshotDelayMs) A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. The option is a:long
type. Default: 0ms Group: db2- Parameters:
snapshotDelayMs
- the value to set- Returns:
- the dsl builder
-
snapshotDelayMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotDelayMs(String snapshotDelayMs) A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. The option will be converted to along
type. Default: 0ms Group: db2- Parameters:
snapshotDelayMs
- the value to set- Returns:
- the dsl builder
-
snapshotFetchSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotFetchSize(int snapshotFetchSize) The maximum number of records that should be loaded into memory while performing a snapshot. The option is a:int
type. Group: db2- Parameters:
snapshotFetchSize
- the value to set- Returns:
- the dsl builder
-
snapshotFetchSize
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotFetchSize(String snapshotFetchSize) The maximum number of records that should be loaded into memory while performing a snapshot. The option will be converted to aint
type. Group: db2- Parameters:
snapshotFetchSize
- the value to set- Returns:
- the dsl builder
-
snapshotIncludeCollectionList
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotIncludeCollectionList(String snapshotIncludeCollectionList) This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector. The option is a:java.lang.String
type. Group: db2- Parameters:
snapshotIncludeCollectionList
- the value to set- Returns:
- the dsl builder
-
snapshotLockTimeoutMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotLockTimeoutMs(long snapshotLockTimeoutMs) The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds. The option is a:long
type. Default: 10s Group: db2- Parameters:
snapshotLockTimeoutMs
- the value to set- Returns:
- the dsl builder
-
snapshotLockTimeoutMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotLockTimeoutMs(String snapshotLockTimeoutMs) The maximum number of milliseconds to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds. The option will be converted to a long type.
Default: 10s Group: db2
- Parameters: snapshotLockTimeoutMs - the value to set
- Returns:
- the dsl builder
-
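A hypothetical fragment combining the snapshot tuning options above (delay, fetch size, lock timeout); the surrounding route skeleton and connection options are the same as in the earlier sketch:

    // inside EndpointRouteBuilder#configure(); connection options omitted (see the earlier sketch)
    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .snapshotDelayMs(10_000)        // wait 10 s before the initial snapshot starts
            .snapshotFetchSize(2_000)       // keep at most 2000 records in memory while snapshotting
            .snapshotLockTimeoutMs(30_000)) // abort the snapshot if table locks are not acquired within 30 s
        .to("log:db2-cdc");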
snapshotMode
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotMode(String snapshotMode) The criteria for running a snapshot upon startup of the connector. Options include: 'initial' (the default) to specify the connector should run a snapshot only when no offsets are available for the logical server name; 'schema_only' to specify the connector should run a snapshot of the schema when no offsets are available for the logical server name. The option is a: java.lang.String type.
Default: initial Group: db2
- Parameters: snapshotMode - the value to set
- Returns:
- the dsl builder
-
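For example (hypothetical fragment, same assumptions as the earlier sketch), to capture only the table schemas at first start and then stream changes without an initial data snapshot:

    // 'schema_only': no initial data snapshot, only the schema is read before streaming
    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .snapshotMode("schema_only"))
        .to("log:db2-cdc");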
snapshotModeConfigurationBasedSnapshotData
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotData(boolean snapshotModeConfigurationBasedSnapshotData) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the data should be snapshotted or not. The option is a: boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotData - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotData
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotData(String snapshotModeConfigurationBasedSnapshotData) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the data should be snapshotted or not. The option will be converted to a boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotData - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotOnDataError
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotOnDataError(boolean snapshotModeConfigurationBasedSnapshotOnDataError) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the data should be snapshotted or not in case of error. The option is a: boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotOnDataError - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotOnDataError
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotOnDataError(String snapshotModeConfigurationBasedSnapshotOnDataError) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the data should be snapshotted or not in case of error. The option will be converted to a boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotOnDataError - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotOnSchemaError
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotOnSchemaError(boolean snapshotModeConfigurationBasedSnapshotOnSchemaError) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the schema should be snapshotted or not in case of error. The option is a: boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotOnSchemaError - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotOnSchemaError
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotOnSchemaError(String snapshotModeConfigurationBasedSnapshotOnSchemaError) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the schema should be snapshotted or not in case of error. The option will be converted to a boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotOnSchemaError - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotSchema
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotSchema(boolean snapshotModeConfigurationBasedSnapshotSchema) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the schema should be snapshotted or not. The option is a: boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotSchema - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedSnapshotSchema
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedSnapshotSchema(String snapshotModeConfigurationBasedSnapshotSchema) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the schema should be snapshotted or not. The option will be converted to a boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedSnapshotSchema - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedStartStream
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedStartStream(boolean snapshotModeConfigurationBasedStartStream) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the stream should start or not after the snapshot. The option is a: boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedStartStream - the value to set
- Returns:
- the dsl builder
-
snapshotModeConfigurationBasedStartStream
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeConfigurationBasedStartStream(String snapshotModeConfigurationBasedStartStream) When 'snapshot.mode' is set to 'configuration_based', this setting specifies whether the stream should start or not after the snapshot. The option will be converted to a boolean type.
Default: false Group: db2
- Parameters: snapshotModeConfigurationBasedStartStream - the value to set
- Returns:
- the dsl builder
-
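Taken together, the configuration_based flags let the snapshot behaviour be assembled explicitly. A hypothetical sketch, assuming 'configuration_based' is an accepted snapshotMode value as the option names above imply:

    // snapshot the schema but not the data, start streaming afterwards,
    // and take a schema snapshot again if a schema error is encountered
    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .snapshotMode("configuration_based")
            .snapshotModeConfigurationBasedSnapshotData(false)
            .snapshotModeConfigurationBasedSnapshotSchema(true)
            .snapshotModeConfigurationBasedStartStream(true)
            .snapshotModeConfigurationBasedSnapshotOnSchemaError(true))
        .to("log:db2-cdc");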
snapshotModeCustomName
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotModeCustomName(String snapshotModeCustomName) When 'snapshot.mode' is set to 'custom', this setting must be set to the name of the custom implementation, as provided by its 'name()' method. The implementation must implement the 'Snapshotter' interface and is called on each application boot to determine whether to take a snapshot. The option is a: java.lang.String type.
Group: db2
- Parameters: snapshotModeCustomName - the value to set
- Returns:
- the dsl builder
-
snapshotSelectStatementOverrides
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotSelectStatementOverrides(String snapshotSelectStatementOverrides) This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.DB_NAME.TABLE_NAME' or 'snapshot.select.statement.overrides.SCHEMA_NAME.TABLE_NAME', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point at which to start (resume) snapshotting, in case a previous snapshot was interrupted. The option is a: java.lang.String type.
Group: db2
- Parameters: snapshotSelectStatementOverrides - the value to set
- Returns:
- the dsl builder
-
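Because the per-table select statements are plain Debezium properties with no dedicated endpoint option, one plausible way to pass them from Camel is via additionalProperties(...); whether that is the right channel for these properties is an assumption, and the schema and table names below are hypothetical:

    // restrict the snapshot of a large append-only table to recent rows (hypothetical names)
    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .snapshotSelectStatementOverrides("MYSCHEMA.ORDERS")
            .additionalProperties("snapshot.select.statement.overrides.MYSCHEMA.ORDERS",
                    "SELECT * FROM MYSCHEMA.ORDERS WHERE ORDER_ID > 1000000"))
        .to("log:db2-cdc");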
snapshotTablesOrderByRowCount
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder snapshotTablesOrderByRowCount(String snapshotTablesOrderByRowCount) Controls the order in which tables are processed in the initial snapshot. A 'descending' value orders the tables by row count descending. An 'ascending' value orders the tables by row count ascending. A value of 'disabled' (the default) disables ordering by row count. The option is a: java.lang.String type.
Default: disabled Group: db2
- Parameters: snapshotTablesOrderByRowCount - the value to set
- Returns:
- the dsl builder
-
sourceinfoStructMaker
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder sourceinfoStructMaker(String sourceinfoStructMaker) The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct. The option is a: java.lang.String type.
Default: io.debezium.connector.db2.Db2SourceInfoStructMaker Group: db2
- Parameters: sourceinfoStructMaker - the value to set
- Returns:
- the dsl builder
-
streamingDelayMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder streamingDelayMs(long streamingDelayMs) A delay period after the snapshot is completed and before streaming begins, given in milliseconds. Defaults to 0 ms. The option is a: long type.
Default: 0ms Group: db2
- Parameters: streamingDelayMs - the value to set
- Returns:
- the dsl builder
-
streamingDelayMs
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder streamingDelayMs(String streamingDelayMs) A delay period after the snapshot is completed and before streaming begins, given in milliseconds. Defaults to 0 ms. The option will be converted to a long type.
Default: 0ms Group: db2
- Parameters: streamingDelayMs - the value to set
- Returns:
- the dsl builder
-
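For example (hypothetical fragment, same assumptions as the earlier sketch), adding a short grace period between snapshot completion and the start of streaming:

    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .streamingDelayMs(5_000)) // wait 5 s after the snapshot before streaming begins
        .to("log:db2-cdc");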
tableExcludeList
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder tableExcludeList(String tableExcludeList) A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring. The option is a: java.lang.String type.
Group: db2
- Parameters: tableExcludeList - the value to set
- Returns:
- the dsl builder
-
tableIgnoreBuiltin
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder tableIgnoreBuiltin(boolean tableIgnoreBuiltin) Flag specifying whether built-in tables should be ignored. The option is a: boolean type.
Default: true Group: db2
- Parameters: tableIgnoreBuiltin - the value to set
- Returns:
- the dsl builder
-
tableIgnoreBuiltin
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder tableIgnoreBuiltin(String tableIgnoreBuiltin) Flag specifying whether built-in tables should be ignored. The option will be converted to a boolean type.
Default: true Group: db2
- Parameters: tableIgnoreBuiltin - the value to set
- Returns:
- the dsl builder
-
tableIncludeList
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder tableIncludeList(String tableIncludeList) The tables for which changes are to be captured. The option is a: java.lang.String type.
Group: db2
- Parameters: tableIncludeList - the value to set
- Returns:
- the dsl builder
-
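The include/exclude lists are regular expressions against fully-qualified table identifiers (assumed here to have the form SCHEMA.TABLE); a hypothetical fragment capturing two tables while ignoring built-ins:

    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .tableIgnoreBuiltin(true)                                    // the default: skip built-in tables
            .tableIncludeList("MYSCHEMA\\.ORDERS,MYSCHEMA\\.CUSTOMERS")) // one regex per table, dots escaped
        .to("log:db2-cdc");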
timePrecisionMode
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder timePrecisionMode(String timePrecisionMode) Time, date, and timestamps can be represented with different kinds of precision, including: 'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive_time_microseconds' is like 'adaptive' mode, but TIME fields always use microseconds precision; 'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision. The option is a: java.lang.String type.
Default: adaptive Group: db2
- Parameters: timePrecisionMode - the value to set
- Returns:
- the dsl builder
-
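For instance (hypothetical fragment), forcing the Kafka Connect built-in temporal types with millisecond precision:

    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .timePrecisionMode("connect")) // Time/Date/Timestamp with millisecond precision
        .to("log:db2-cdc");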
tombstonesOnDelete
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder tombstonesOnDelete(boolean tombstonesOnDelete) Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted. The option is a: boolean type.
Default: false Group: db2
- Parameters: tombstonesOnDelete - the value to set
- Returns:
- the dsl builder
-
tombstonesOnDelete
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder tombstonesOnDelete(String tombstonesOnDelete) Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record has been deleted. The option will be converted to a boolean type.
Default: false Group: db2
- Parameters: tombstonesOnDelete - the value to set
- Returns:
- the dsl builder
-
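If downstream topics use log compaction, emitting tombstones lets Kafka eventually remove all records for a deleted key; a hypothetical fragment:

    from(debeziumDb2("db2-connector")
            .topicPrefix("db2-server-1")
            .tombstonesOnDelete(true)) // delete event followed by a tombstone event for the same key
        .to("log:db2-cdc");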
topicNamingStrategy
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder topicNamingStrategy(String topicNamingStrategy) The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events, etc. The option is a: java.lang.String type.
Default: io.debezium.schema.SchemaTopicNamingStrategy Group: db2
- Parameters: topicNamingStrategy - the value to set
- Returns:
- the dsl builder
-
topicPrefix
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder topicPrefix(String topicPrefix) Topic prefix that identifies and provides a namespace for the particular database server/cluster from which changes are captured. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores are accepted. The option is a: java.lang.String type.
Required: true Group: db2
- Parameters: topicPrefix - the value to set
- Returns:
- the dsl builder
-
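Since topicPrefix is the only required option in this group, a minimal configuration needs little more than the connection settings and a prefix built from the allowed characters. A hypothetical sketch, again assuming the debeziumDb2(...) entry point and placeholder values:

    import org.apache.camel.builder.endpoint.EndpointRouteBuilder;

    public class MinimalDb2CdcRoute extends EndpointRouteBuilder {
        @Override
        public void configure() {
            from(debeziumDb2("orders-connector")
                    .databaseHostname("db2.example.internal") // placeholder connection values
                    .databasePort(50000)
                    .databaseUser("db2inst1")
                    .databasePassword("secret")
                    .databaseDbname("ORDERSDB")
                    .topicPrefix("orders-db2"))               // alphanumerics, hyphens, dots, underscores only
                .to("log:orders-changes");                    // any Camel endpoint can consume the events
        }
    }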
transactionMetadataFactory
default DebeziumDb2EndpointBuilderFactory.DebeziumDb2EndpointBuilder transactionMetadataFactory(String transactionMetadataFactory) The class used to create the transaction context and the transaction struct/schemas. The option is a: java.lang.String type.
Default: io.debezium.pipeline.txmetadata.DefaultTransactionMetadataFactory Group: db2
- Parameters: transactionMetadataFactory - the value to set
- Returns:
- the dsl builder
-