Class MySqlConnectorEmbeddedDebeziumConfiguration
java.lang.Object
org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
org.apache.camel.component.debezium.configuration.MySqlConnectorEmbeddedDebeziumConfiguration
- All Implemented Interfaces:
Cloneable
@UriParams
public class MySqlConnectorEmbeddedDebeziumConfiguration
extends org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
-
Constructor Summary
Constructors
MySqlConnectorEmbeddedDebeziumConfiguration()
Method Summary
protected Class configureConnectorClass()
protected io.debezium.config.Configuration createConnectorConfiguration()
protected org.apache.camel.component.debezium.configuration.ConfigurationValidation validateConnectorConfiguration()
void setBigintUnsignedHandlingMode(String bigintUnsignedHandlingMode) - Specify how BIGINT UNSIGNED columns should be represented in change events, including: 'precise' uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'long' (the default) represents values using Java's 'long', which may not offer the same precision but is far easier to use in consumers.
void setBinaryHandlingMode(String binaryHandlingMode) - Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as a byte array (default); 'base64' represents binary data as a base64-encoded string; 'base64-url-safe' represents binary data as a base64-url-safe-encoded string; 'hex' represents binary data as a hex-encoded (base16) string.
void setBinlogBufferSize(int binlogBufferSize) - The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be committed or rolled back.
void setColumnExcludeList(String columnExcludeList) - Regular expressions matching columns to exclude from change events.
void setColumnIncludeList(String columnIncludeList) - Regular expressions matching columns to include in change events.
void setColumnPropagateSourceType(String columnPropagateSourceType) - A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original type and original length as parameters to the corresponding field schemas in the emitted change records.
void setConnectKeepAlive(boolean connectKeepAlive) - Whether a separate thread should be used to ensure the connection is kept alive.
void setConnectKeepAliveIntervalMs(long connectKeepAliveIntervalMs) - Interval for connection checking if the keep-alive thread is used, given in milliseconds. Defaults to 1 minute (60,000 ms).
void setConnectorAdapter(String connectorAdapter) - Specifies the connection adapter to be used.
void setConnectTimeoutMs(int connectTimeoutMs) - Maximum time to wait after trying to connect to the database before timing out, given in milliseconds.
void setConverters(String converters) - Optional list of custom converters that would be used instead of default ones.
void setCustomMetricTags(String customMetricTags) - Custom metric tags accept key-value pairs that are appended to the end of the regular MBean object name; each key represents a tag for the MBean object name and the corresponding value is the value of that tag.
void setDatabaseExcludeList(String databaseExcludeList) - A comma-separated list of regular expressions that match database names to be excluded from monitoring.
void setDatabaseHostname(String databaseHostname) - Resolvable hostname or IP address of the database server.
void setDatabaseIncludeList(String databaseIncludeList) - The databases for which changes are to be captured.
void setDatabaseInitialStatements(String databaseInitialStatements) - A semicolon-separated list of SQL statements to be executed when a JDBC connection (not the binlog reading connection) to the database is established.
void setDatabaseJdbcDriver(String databaseJdbcDriver) - JDBC driver class name used to connect to the MySQL database server.
void setDatabasePassword(String databasePassword) - Password of the database user to be used when connecting to the database.
void setDatabasePort(int databasePort) - Port of the database server.
void setDatabaseServerId(long databaseServerId) - A numeric ID of this database client, which must be unique across all currently-running database processes in the cluster.
void setDatabaseServerIdOffset(long databaseServerIdOffset) - Only relevant if parallel snapshotting is configured.
void setDatabaseSslKeystore(String databaseSslKeystore) - The location of the key store file.
void setDatabaseSslKeystorePassword(String databaseSslKeystorePassword) - The password for the key store file.
void setDatabaseSslMode(String databaseSslMode) - Whether to use an encrypted connection to MySQL.
void setDatabaseSslTruststore(String databaseSslTruststore) - The location of the trust store file for the server certificate verification.
void setDatabaseSslTruststorePassword(String databaseSslTruststorePassword) - The password for the trust store file.
void setDatabaseUser(String databaseUser) - Name of the database user to be used when connecting to the database.
void setDatatypePropagateSourceType(String datatypePropagateSourceType) - A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records.
void setDecimalHandlingMode(String decimalHandlingMode) - Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the same precision but is far easier to use in consumers.
void setEnableTimeAdjuster(boolean enableTimeAdjuster) - MySQL allows the user to insert a year value as either 2-digit or 4-digit.
void setErrorsMaxRetries(int errorsMaxRetries) - The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = number of retries).
void setEventDeserializationFailureHandlingMode(String eventDeserializationFailureHandlingMode) - Specify how failures during deserialization of binlog events (i.e. when encountering a corrupted event) should be handled.
void setEventProcessingFailureHandlingMode(String eventProcessingFailureHandlingMode) - Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled.
void setGtidSourceExcludes(String gtidSourceExcludes) - The source UUIDs used to exclude GTID ranges when determining the starting position in the MySQL server's binlog.
void setGtidSourceFilterDmlEvents(boolean gtidSourceFilterDmlEvents) - If set to true, DML events are only produced into Kafka for transactions that were written on MySQL servers with UUIDs matching the filters defined by the gtid.source.includes or gtid.source.excludes configuration options, if they are specified.
void setGtidSourceIncludes(String gtidSourceIncludes) - The source UUIDs used to include GTID ranges when determining the starting position in the MySQL server's binlog.
void setHeartbeatActionQuery(String heartbeatActionQuery) - The query executed with every heartbeat.
void setHeartbeatIntervalMs(int heartbeatIntervalMs) - Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic.
void setHeartbeatTopicsPrefix(String heartbeatTopicsPrefix) - The prefix that is used to name heartbeat topics. Defaults to __debezium-heartbeat.
void setIncludeQuery(boolean includeQuery) - Whether the connector should include the original SQL query that generated the change event.
void setIncludeSchemaChanges(boolean includeSchemaChanges) - Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID.
void setIncludeSchemaComments(boolean includeSchemaComments) - Whether the connector should parse table and column comments into metadata objects.
void setInconsistentSchemaHandlingMode(String inconsistentSchemaHandlingMode) - Specify how binlog events that belong to a table missing from the internal schema representation (i.e. the internal representation is not consistent with the database) should be handled.
void setIncrementalSnapshotAllowSchemaChanges(boolean incrementalSnapshotAllowSchemaChanges) - Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs.
void setIncrementalSnapshotChunkSize(int incrementalSnapshotChunkSize) - The maximum size of a chunk (number of documents/rows) for incremental snapshotting.
void setIncrementalSnapshotWatermarkingStrategy(String incrementalSnapshotWatermarkingStrategy) - Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both the open and close signals are written into the signal data collection (default); 'insert_delete' only the open signal is written to the signal data collection, and the close deletes the corresponding open signal.
void setMaxBatchSize(int maxBatchSize) - Maximum size of each batch of source records.
void setMaxQueueSize(int maxQueueSize) - Maximum size of the queue for change events read from the database log but not yet recorded or forwarded.
void setMaxQueueSizeInBytes(long maxQueueSizeInBytes) - Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded.
void setMessageKeyColumns(String messageKeyColumns) - A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key.
void setMinRowCountToStreamResults(int minRowCountToStreamResults) - The number of rows a table must contain to stream results rather than pulling all into memory during snapshots.
void setNotificationEnabledChannels(String notificationEnabledChannels) - List of notification channel names that are enabled.
void setNotificationSinkTopicName(String notificationSinkTopicName) - The name of the topic for the notifications.
void setPollIntervalMs(long pollIntervalMs) - Time to wait for new change events to appear after receiving no events, given in milliseconds.
void setPostProcessors(String postProcessors) - Optional list of post processors.
void setProvideTransactionMetadata(boolean provideTransactionMetadata) - Enables transaction metadata extraction together with event counting.
void setQueryFetchSize(int queryFetchSize) - The maximum number of records that should be loaded into memory while streaming.
void setRetriableRestartConnectorWaitMs(long retriableRestartConnectorWaitMs) - Time to wait before restarting the connector after a retriable exception occurs.
void setSchemaHistoryInternal(String schemaHistoryInternal) - The name of the SchemaHistory class that should be used to store and recover database schema changes.
void setSchemaHistoryInternalFileFilename(String schemaHistoryInternalFileFilename) - The path to the file that will be used to record the database schema history.
void setSchemaHistoryInternalSkipUnparseableDdl(boolean schemaHistoryInternalSkipUnparseableDdl) - Controls the action Debezium takes when it encounters a DDL statement in the binlog that it cannot parse. By default the connector stops operating, but by changing the setting it can ignore the statements it cannot parse.
void setSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl(boolean schemaHistoryInternalStoreOnlyCapturedDatabasesDdl) - Controls which DDL Debezium will store in the database schema history.
void setSchemaHistoryInternalStoreOnlyCapturedTablesDdl(boolean schemaHistoryInternalStoreOnlyCapturedTablesDdl) - Controls which DDL Debezium will store in the database schema history.
void setSchemaNameAdjustmentMode(String schemaNameAdjustmentMode) - Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro_unicode' replaces the underscore or characters that cannot be used in the Avro type name with the corresponding unicode like _uxxxx.
void setSignalDataCollection(String signalDataCollection) - The name of the data collection that is used to send signals/commands to Debezium.
void setSignalEnabledChannels(String signalEnabledChannels) - List of channel names that are enabled.
void setSignalPollIntervalMs(long signalPollIntervalMs) - Interval for looking for new signals in registered channels, given in milliseconds.
void setSkippedOperations(String skippedOperations) - The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes; 't' for truncates; and 'none' to indicate nothing is skipped.
void setSnapshotDelayMs(long snapshotDelayMs) - A delay period before a snapshot will begin, given in milliseconds.
void setSnapshotFetchSize(int snapshotFetchSize) - The maximum number of records that should be loaded into memory while performing a snapshot.
void setSnapshotIncludeCollectionList(String snapshotIncludeCollectionList) - This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector.
void setSnapshotLockingMode(String snapshotLockingMode) - Controls how long the connector holds onto the global read lock while it is performing a snapshot.
void setSnapshotLockTimeoutMs(long snapshotLockTimeoutMs) - The maximum number of milliseconds to wait for table locks at the beginning of a snapshot.
void setSnapshotMaxThreads(int snapshotMaxThreads) - The maximum number of threads used to perform the snapshot.
void setSnapshotMode(String snapshotMode) - The criteria for running a snapshot upon startup of the connector.
void setSnapshotNewTables(String snapshotNewTables) - BETA FEATURE: On connector restart, the connector will check if there have been any new tables added to the configuration, and snapshot them.
void setSnapshotSelectStatementOverrides(String snapshotSelectStatementOverrides) - This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector.
void setSnapshotTablesOrderByRowCount(String snapshotTablesOrderByRowCount) - Controls the order in which tables are processed in the initial snapshot.
void setSourceinfoStructMaker(String sourceinfoStructMaker) - The name of the SourceInfoStructMaker class that returns the SourceInfo schema and struct.
void setTableExcludeList(String tableExcludeList) - A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring.
void setTableIgnoreBuiltin(boolean tableIgnoreBuiltin) - Flag specifying whether built-in tables should be ignored.
void setTableIncludeList(String tableIncludeList) - The tables for which changes are to be captured.
void setTimePrecisionMode(String timePrecisionMode) - Time, date and timestamps can be represented with different kinds of precision, including: 'adaptive_time_microseconds': the precision of date and timestamp values is based on the database column's precision, but time fields always use microseconds precision; 'connect': always represents time, date and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision.
void setTombstonesOnDelete(boolean tombstonesOnDelete) - Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false).
void setTopicNamingStrategy(String topicNamingStrategy) - The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat events etc.
void setTopicPrefix(String topicPrefix) - Topic prefix that identifies and provides a namespace for the particular database server/cluster from which changes are being captured.
Methods inherited from class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
addPropertyIfNotNull, addPropertyIfNotNull, copy, createDebeziumConfiguration, getAdditionalProperties, getConnectorClass, getInternalKeyConverter, getInternalValueConverter, getName, getOffsetCommitPolicy, getOffsetCommitTimeoutMs, getOffsetFlushIntervalMs, getOffsetStorage, getOffsetStorageFileName, getOffsetStoragePartitions, getOffsetStorageReplicationFactor, getOffsetStorageTopic, isFieldValueNotSet, setAdditionalProperties, setConnectorClass, setInternalKeyConverter, setInternalValueConverter, setName, setOffsetCommitPolicy, setOffsetCommitTimeoutMs, setOffsetFlushIntervalMs, setOffsetStorage, setOffsetStorageFileName, setOffsetStoragePartitions, setOffsetStorageReplicationFactor, setOffsetStorageTopic, validateConfiguration
-
Constructor Details
-
MySqlConnectorEmbeddedDebeziumConfiguration
public MySqlConnectorEmbeddedDebeziumConfiguration()
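A minimal configuration sketch, assuming standalone use of this class; hostname, credentials, server id, table list and file paths are illustrative placeholders, and the offset and schema-history setters shown at the end are inherited from EmbeddedDebeziumConfiguration:

MySqlConnectorEmbeddedDebeziumConfiguration config =
        new MySqlConnectorEmbeddedDebeziumConfiguration();

// Connection settings (placeholder values).
config.setDatabaseHostname("localhost");
config.setDatabasePort(3306);
config.setDatabaseUser("debezium");
config.setDatabasePassword("dbz-password");
config.setDatabaseServerId(184054L);

// Namespace for emitted topics and the tables to capture.
config.setTopicPrefix("dbserver1");
config.setTableIncludeList("inventory.orders,inventory.customers");

// Where offsets and the database schema history are persisted (inherited setters).
config.setOffsetStorageFileName("/tmp/offsets.dat");
config.setSchemaHistoryInternalFileFilename("/tmp/schemahistory.dat");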
-
-
Method Details
-
setSnapshotLockingMode
Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is 'minimal', which means the connector holds the global read lock (and thus prevents any updates) for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this can be done using the snapshot process' REPEATABLE READ transaction even when the lock is no longer held and other operations are updating the database. However, in some cases it may be desirable to block all writes for the entire duration of the snapshot; in such cases set this property to 'extended'. Using a value of 'none' will prevent the connector from acquiring any table locks during the snapshot process. This mode can only be used in combination with snapshot.mode values of 'schema_only' or 'schema_only_recovery' and is only safe to use if no schema changes are happening while the snapshot is taken. -
getSnapshotLockingMode
-
setMessageKeyColumns
A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern '<fully-qualified table name>:<key columns>', where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector, and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) will be used as message key. Example: dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id -
getMessageKeyColumns
-
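For illustration, continuing the configuration sketch from the constructor example and reusing the documented example value (the table and column names are placeholders):

// orderId and orderLineId form the key for orderlines, id for orders;
// tables without an explicit entry fall back to their primary key columns.
config.setMessageKeyColumns(
        "dbserver1.inventory.orderlines:orderId,orderLineId;dbserver1.inventory.orders:id");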
setCustomMetricTags
Custom metric tags accept key-value pairs that customize the MBean object name; they are appended to the end of the regular name, each key represents a tag for the MBean object name, and the corresponding value is the value of that tag. For example: k1=v1,k2=v2 -
getCustomMetricTags
-
setQueryFetchSize
public void setQueryFetchSize(int queryFetchSize) The maximum number of records that should be loaded into memory while streaming. A value of '0' uses the default JDBC fetch size. -
getQueryFetchSize
public int getQueryFetchSize() -
setSignalEnabledChannels
List of channel names that are enabled. The source channel is enabled by default. -
getSignalEnabledChannels
-
setIncludeSchemaChanges
public void setIncludeSchemaChanges(boolean includeSchemaChanges) Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes a logical description of the new schema and optionally the DDL statement(s). The default is 'true'. This is independent of how the connector internally records database schema history. -
isIncludeSchemaChanges
public boolean isIncludeSchemaChanges() -
setConnectorAdapter
Specifies the connection adapter to be used -
getConnectorAdapter
-
setGtidSourceIncludes
The source UUIDs used to include GTID ranges when determining the starting position in the MySQL server's binlog. -
getGtidSourceIncludes
-
setDatabaseJdbcDriver
JDBC Driver class name used to connect to the MySQL database server. -
getDatabaseJdbcDriver
-
setHeartbeatActionQuery
The query executed with every heartbeat. -
getHeartbeatActionQuery
-
setPollIntervalMs
public void setPollIntervalMs(long pollIntervalMs) Time to wait for new change events to appear after receiving no events, given in milliseconds. Defaults to 500 ms. -
getPollIntervalMs
public long getPollIntervalMs() -
setSignalDataCollection
The name of the data collection that is used to send signals/commands to Debezium. Signaling is disabled when not set. -
getSignalDataCollection
-
setDatabaseInitialStatements
A semicolon separated list of SQL statements to be executed when a JDBC connection (not binlog reading connection) to the database is established. Note that the connector may establish JDBC connections at its own discretion, so this should typically be used for configuration of session parameters only, but not for executing DML statements. Use doubled semicolon (';;') to use a semicolon as a character and not as a delimiter. -
getDatabaseInitialStatements
-
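A hedged example of the semicolon handling, continuing the sketch above (the SET statements are arbitrary session settings, not a recommendation):

// Two session-level statements separated by a single ';'.
// A literal semicolon inside a statement would be written as ';;'.
config.setDatabaseInitialStatements(
        "SET SESSION wait_timeout=2000;SET SESSION time_zone='+00:00'");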
setConverters
Optional list of custom converters that would be used instead of default ones. The converters are defined using the '<converter.prefix>.type' config option and configured using options with the '<converter.prefix>.' prefix. -
getConverters
-
setHeartbeatTopicsPrefix
The prefix that is used to name heartbeat topics. Defaults to __debezium-heartbeat. -
getHeartbeatTopicsPrefix
-
setBinlogBufferSize
public void setBinlogBufferSize(int binlogBufferSize) The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0 (i.e. buffering is disabled). -
getBinlogBufferSize
public int getBinlogBufferSize() -
setSnapshotFetchSize
public void setSnapshotFetchSize(int snapshotFetchSize) The maximum number of records that should be loaded into memory while performing a snapshot. -
getSnapshotFetchSize
public int getSnapshotFetchSize() -
setSnapshotLockTimeoutMs
public void setSnapshotLockTimeoutMs(long snapshotLockTimeoutMs) The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds -
getSnapshotLockTimeoutMs
public long getSnapshotLockTimeoutMs() -
setDatabaseUser
Name of the database user to be used when connecting to the database. -
getDatabaseUser
-
setDatatypePropagateSourceType
A comma-separated list of regular expressions matching the database-specific data type names that adds the data type's original type and original length as parameters to the corresponding field schemas in the emitted change records. -
getDatatypePropagateSourceType
-
setSnapshotTablesOrderByRowCount
Controls the order in which tables are processed in the initial snapshot. A `descending` value will order the tables by row count descending. An `ascending` value will order the tables by row count ascending. A value of `disabled` (the default) will disable ordering by row count. -
getSnapshotTablesOrderByRowCount
-
setGtidSourceExcludes
The source UUIDs used to exclude GTID ranges when determining the starting position in the MySQL server's binlog. -
getGtidSourceExcludes
-
setIncrementalSnapshotWatermarkingStrategy
public void setIncrementalSnapshotWatermarkingStrategy(String incrementalSnapshotWatermarkingStrategy) Specify the strategy used for watermarking during an incremental snapshot: 'insert_insert' both the open and close signals are written into the signal data collection (default); 'insert_delete' only the open signal is written to the signal data collection, and the close deletes the corresponding open signal. -
getIncrementalSnapshotWatermarkingStrategy
-
setSnapshotSelectStatementOverrides
This property contains a comma-separated list of fully-qualified tables (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connectors. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id 'snapshot.select.statement.overrides.[DB_NAME].[TABLE_NAME]' or 'snapshot.select.statement.overrides.[SCHEMA_NAME].[TABLE_NAME]', respectively. The value of those properties is the select statement to use when retrieving data from the specific table during snapshotting. A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted. -
getSnapshotSelectStatementOverrides
-
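A sketch of a per-table override, continuing the earlier example. The table name and SELECT statement are placeholders, and the per-table 'snapshot.select.statement.overrides.[DB_NAME].[TABLE_NAME]' property is assumed to be passed through the inherited additionalProperties map:

Map<String, Object> extra = new HashMap<>();  // java.util.Map / java.util.HashMap
extra.put("snapshot.select.statement.overrides.inventory.orders",
        "SELECT * FROM inventory.orders WHERE id > 1000000");

config.setSnapshotSelectStatementOverrides("inventory.orders");
config.setAdditionalProperties(extra);  // inherited setter, assumed to accept Map<String, Object>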
setDatabaseSslKeystore
The location of the key store file. This is optional and can be used for two-way authentication between the client and the MySQL Server. -
getDatabaseSslKeystore
-
setHeartbeatIntervalMs
public void setHeartbeatIntervalMs(int heartbeatIntervalMs) Length of an interval in milliseconds in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. -
getHeartbeatIntervalMs
public int getHeartbeatIntervalMs() -
setDatabaseSslTruststorePassword
The password for the trust store file. Used to check the integrity of the truststore, and unlock the truststore. -
getDatabaseSslTruststorePassword
-
setIncrementalSnapshotAllowSchemaChanges
public void setIncrementalSnapshotAllowSchemaChanges(boolean incrementalSnapshotAllowSchemaChanges) Detect schema change during an incremental snapshot and re-select a current chunk to avoid locking DDLs. Note that changes to a primary key are not supported and can cause incorrect results if performed during an incremental snapshot. Another limitation is that if a schema change affects only columns' default values, then the change won't be detected until the DDL is processed from the binlog stream. This doesn't affect the snapshot events' values, but the schema of snapshot events may have outdated defaults. -
isIncrementalSnapshotAllowSchemaChanges
public boolean isIncrementalSnapshotAllowSchemaChanges() -
setSchemaHistoryInternalSkipUnparseableDdl
public void setSchemaHistoryInternalSkipUnparseableDdl(boolean schemaHistoryInternalSkipUnparseableDdl) Controls the action Debezium will take when it encounters a DDL statement in the binlog that it cannot parse. By default the connector will stop operating, but by changing the setting it can ignore the statements it cannot parse. If skipping is enabled then Debezium can miss metadata changes. -
isSchemaHistoryInternalSkipUnparseableDdl
public boolean isSchemaHistoryInternalSkipUnparseableDdl() -
setColumnIncludeList
Regular expressions matching columns to include in change events -
getColumnIncludeList
-
setEnableTimeAdjuster
public void setEnableTimeAdjuster(boolean enableTimeAdjuster) MySQL allows the user to insert a year value as either 2-digit or 4-digit. In the case of two digits the value is automatically mapped into the range 1970 - 2069. false - delegates the implicit conversion to the database; true - (the default) Debezium makes the conversion. -
isEnableTimeAdjuster
public boolean isEnableTimeAdjuster() -
setColumnPropagateSourceType
A comma-separated list of regular expressions matching fully-qualified names of columns that adds the column's original type and original length as parameters to the corresponding field schemas in the emitted change records. -
getColumnPropagateSourceType
-
setInconsistentSchemaHandlingMode
Specify how binlog events that belong to a table missing from internal schema representation (i.e. internal representation is not consistent with database) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its binlog position is raised, causing the connector to be stopped; 'warn' the problematic event and its binlog position will be logged and the event will be skipped; 'skip' the problematic event will be skipped. -
getInconsistentSchemaHandlingMode
-
setMinRowCountToStreamResults
public void setMinRowCountToStreamResults(int minRowCountToStreamResults) The number of rows a table must contain to stream results rather than pull all into memory during snapshots. Defaults to 1,000. Use 0 to stream all results and completely avoid checking the size of each table. -
getMinRowCountToStreamResults
public int getMinRowCountToStreamResults() -
setErrorsMaxRetries
public void setErrorsMaxRetries(int errorsMaxRetries) The maximum number of retries on connection errors before failing (-1 = no limit, 0 = disabled, > 0 = num of retries). -
getErrorsMaxRetries
public int getErrorsMaxRetries() -
setTableExcludeList
A comma-separated list of regular expressions that match the fully-qualified names of tables to be excluded from monitoring -
getTableExcludeList
-
setDatabasePassword
Password of the database user to be used when connecting to the database. -
getDatabasePassword
-
setDatabaseExcludeList
A comma-separated list of regular expressions that match database names to be excluded from monitoring -
getDatabaseExcludeList
-
setGtidSourceFilterDmlEvents
public void setGtidSourceFilterDmlEvents(boolean gtidSourceFilterDmlEvents) If set to true, DML events will only be produced into Kafka for transactions that were written on MySQL servers with UUIDs matching the filters defined by the gtid.source.includes or gtid.source.excludes configuration options, if they are specified. -
isGtidSourceFilterDmlEvents
public boolean isGtidSourceFilterDmlEvents() -
setMaxBatchSize
public void setMaxBatchSize(int maxBatchSize) Maximum size of each batch of source records. Defaults to 2048. -
getMaxBatchSize
public int getMaxBatchSize() -
setSkippedOperations
The comma-separated list of operations to skip during streaming, defined as: 'c' for inserts/create; 'u' for updates; 'd' for deletes, 't' for truncates, and 'none' to indicate nothing skipped. By default, only truncate operations will be skipped. -
getSkippedOperations
-
setTopicNamingStrategy
The name of the TopicNamingStrategy class that should be used to determine the topic name for data change, schema change, transaction, heartbeat event etc. -
getTopicNamingStrategy
-
setConnectKeepAlive
public void setConnectKeepAlive(boolean connectKeepAlive) Whether a separate thread should be used to ensure the connection is kept alive. -
isConnectKeepAlive
public boolean isConnectKeepAlive() -
setSnapshotMode
The criteria for running a snapshot upon startup of the connector. Select one of the following snapshot options: 'when_needed': On startup, the connector runs a snapshot if one is needed. 'schema_only': If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures only the schema (table structures), but not any table data. After the snapshot completes, the connector begins to stream changes from the binlog. 'schema_only_recovery': The connector performs a snapshot that captures only the database schema history. The connector then transitions back to streaming. Use this setting to restore a corrupted or lost database schema history topic. Do not use if the database schema was modified after the connector stopped. 'initial' (default): If the connector does not detect any offsets for the logical server name, it runs a snapshot that captures the current full state of the configured tables. After the snapshot completes, the connector begins to stream changes from the binlog. 'initial_only': The connector performs a snapshot as it does for the 'initial' option, but after the connector completes the snapshot, it stops, and does not stream changes from the binlog. 'never': The connector does not run a snapshot. Upon first startup, the connector immediately begins reading from the beginning of the binlog. The 'never' mode should be used with care, and only when the binlog is known to contain all history. -
getSnapshotMode
-
setConnectTimeoutMs
public void setConnectTimeoutMs(int connectTimeoutMs) Maximum time to wait after trying to connect to the database before timing out, given in milliseconds. Defaults to 30 seconds (30,000 ms). -
getConnectTimeoutMs
public int getConnectTimeoutMs() -
setMaxQueueSize
public void setMaxQueueSize(int maxQueueSize) Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. -
getMaxQueueSize
public int getMaxQueueSize() -
setIncrementalSnapshotChunkSize
public void setIncrementalSnapshotChunkSize(int incrementalSnapshotChunkSize) The maximum size of chunk (number of documents/rows) for incremental snapshotting -
getIncrementalSnapshotChunkSize
public int getIncrementalSnapshotChunkSize() -
setRetriableRestartConnectorWaitMs
public void setRetriableRestartConnectorWaitMs(long retriableRestartConnectorWaitMs) Time to wait before restarting connector after retriable exception occurs. Defaults to 10000ms. -
getRetriableRestartConnectorWaitMs
public long getRetriableRestartConnectorWaitMs() -
setSnapshotDelayMs
public void setSnapshotDelayMs(long snapshotDelayMs) A delay period before a snapshot will begin, given in milliseconds. Defaults to 0 ms. -
getSnapshotDelayMs
public long getSnapshotDelayMs() -
setProvideTransactionMetadata
public void setProvideTransactionMetadata(boolean provideTransactionMetadata) Enables transaction metadata extraction together with event counting -
isProvideTransactionMetadata
public boolean isProvideTransactionMetadata() -
setSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl
public void setSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl(boolean schemaHistoryInternalStoreOnlyCapturedDatabasesDdl) Controls which DDL Debezium will store in the database schema history. By default (true) only DDL that manipulates a table from the captured schema/database will be stored. If set to false, then Debezium will store all incoming DDL statements. -
isSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl
public boolean isSchemaHistoryInternalStoreOnlyCapturedDatabasesDdl() -
setSchemaHistoryInternalStoreOnlyCapturedTablesDdl
public void setSchemaHistoryInternalStoreOnlyCapturedTablesDdl(boolean schemaHistoryInternalStoreOnlyCapturedTablesDdl) Controls which DDL Debezium will store in the database schema history. By default (false) Debezium will store all incoming DDL statements. If set to true, then only DDL that manipulates a captured table will be stored. -
isSchemaHistoryInternalStoreOnlyCapturedTablesDdl
public boolean isSchemaHistoryInternalStoreOnlyCapturedTablesDdl() -
setSchemaHistoryInternalFileFilename
The path to the file that will be used to record the database schema history -
getSchemaHistoryInternalFileFilename
-
setTombstonesOnDelete
public void setTombstonesOnDelete(boolean tombstonesOnDelete) Whether delete operations should be represented by a delete event and a subsequent tombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted. -
isTombstonesOnDelete
public boolean isTombstonesOnDelete() -
setTopicPrefix
Topic prefix that identifies and provides a namespace for the particular database server/cluster from which changes are being captured. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector. Only alphanumeric characters, hyphens, dots and underscores are accepted. -
getTopicPrefix
-
setDecimalHandlingMode
Specify how DECIMAL and NUMERIC columns should be represented in change events, including: 'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the same precision but is far easier to use in consumers. -
getDecimalHandlingMode
-
setBinaryHandlingMode
Specify how binary (blob, binary, etc.) columns should be represented in change events, including: 'bytes' represents binary data as byte array (default); 'base64' represents binary data as base64-encoded string; 'base64-url-safe' represents binary data as base64-url-safe-encoded string; 'hex' represents binary data as hex-encoded (base16) string -
getBinaryHandlingMode
-
setIncludeSchemaComments
public void setIncludeSchemaComments(boolean includeSchemaComments) Whether the connector should parse table and column comments into metadata objects. Note: enabling this option has implications on memory usage. The number and size of ColumnImpl objects is what largely impacts how much memory is consumed by the Debezium connectors, and adding a String to each of them can potentially be quite heavy. The default is 'false'. -
isIncludeSchemaComments
public boolean isIncludeSchemaComments() -
setSnapshotNewTables
BETA FEATURE: On connector restart, the connector will check if there have been any new tables added to the configuration, and snapshot them. There are presently only two options: 'off': Default behavior. Do not snapshot new tables. 'parallel': The snapshot of the new tables will occur in parallel to the continued binlog reading of the old tables. When the snapshot completes, an independent binlog reader will begin reading the events for the new tables until it catches up to the present time. At this point, both old and new binlog readers will be momentarily halted and a new binlog reader will start that will read the binlog for all configured tables. The parallel binlog reader will have a configured server id of 10000 + the primary binlog reader's server id. -
getSnapshotNewTables
-
setSourceinfoStructMaker
The name of the SourceInfoStructMaker class that returns SourceInfo schema and struct. -
getSourceinfoStructMaker
-
setTableIgnoreBuiltin
public void setTableIgnoreBuiltin(boolean tableIgnoreBuiltin) Flag specifying whether built-in tables should be ignored. -
isTableIgnoreBuiltin
public boolean isTableIgnoreBuiltin() -
setSnapshotIncludeCollectionList
This setting must be set to specify a list of tables/collections whose snapshot must be taken on creating or restarting the connector. -
getSnapshotIncludeCollectionList
-
setBigintUnsignedHandlingMode
Specify how BIGINT UNSIGNED columns should be represented in change events, including: 'precise' uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'long' (the default) represents values using Java's 'long', which may not offer the same precision but is far easier to use in consumers. -
getBigintUnsignedHandlingMode
-
setDatabaseServerId
public void setDatabaseServerId(long databaseServerId) A numeric ID of this database client, which must be unique across all currently-running database processes in the cluster. This connector joins the MySQL database cluster as another server (with this unique ID) so it can read the binlog. -
getDatabaseServerId
public long getDatabaseServerId() -
setMaxQueueSizeInBytes
public void setMaxQueueSizeInBytes(long maxQueueSizeInBytes) Maximum size of the queue in bytes for change events read from the database log but not yet recorded or forwarded. Defaults to 0, which means the feature is not enabled. -
getMaxQueueSizeInBytes
public long getMaxQueueSizeInBytes() -
setTimePrecisionMode
Time, date and timestamps can be represented with different kinds of precision, including: 'adaptive_time_microseconds': the precision of date and timestamp values is based on the database column's precision, but time fields always use microseconds precision; 'connect': always represents time, date and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns' precision. -
getTimePrecisionMode
-
setSignalPollIntervalMs
public void setSignalPollIntervalMs(long signalPollIntervalMs) Interval for looking for new signals in registered channels, given in milliseconds. Defaults to 5 seconds. -
getSignalPollIntervalMs
public long getSignalPollIntervalMs() -
setEventDeserializationFailureHandlingMode
public void setEventDeserializationFailureHandlingMode(String eventDeserializationFailureHandlingMode) Specify how failures during deserialization of binlog events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its binlog position is raised, causing the connector to be stopped; 'warn' the problematic event and its binlog position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped. -
getEventDeserializationFailureHandlingMode
-
setPostProcessors
Optional list of post processors. The processors are defined using the '<post.processor.prefix>.type' config option and configured using options with the '<post.processor.prefix>.' prefix. -
getPostProcessors
-
setNotificationEnabledChannels
List of notification channels names that are enabled. -
getNotificationEnabledChannels
-
setEventProcessingFailureHandlingMode
Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including: 'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped; 'ignore' the problematic event will be skipped. -
getEventProcessingFailureHandlingMode
-
setSnapshotMaxThreads
public void setSnapshotMaxThreads(int snapshotMaxThreads) The maximum number of threads used to perform the snapshot. Defaults to 1. -
getSnapshotMaxThreads
public int getSnapshotMaxThreads() -
setDatabasePort
public void setDatabasePort(int databasePort) Port of the database server. -
getDatabasePort
public int getDatabasePort() -
setDatabaseSslTruststore
The location of the trust store file for the server certificate verification. -
getDatabaseSslTruststore
-
setNotificationSinkTopicName
The name of the topic for the notifications. This is required in case 'sink' is in the list of enabled channels -
getNotificationSinkTopicName
-
setDatabaseSslMode
Whether to use an encrypted connection to MySQL. Options include: 'disabled' to use an unencrypted connection; 'preferred' (the default) to establish a secure (encrypted) connection if the server supports secure connections, but fall back to an unencrypted connection otherwise; 'required' to use a secure (encrypted) connection, and fail if one cannot be established; 'verify_ca' like 'required' but additionally verify the server TLS certificate against the configured Certificate Authority (CA) certificates, or fail if no valid matching CA certificates are found; or 'verify_identity' like 'verify_ca' but additionally verify that the server certificate matches the host to which the connection is attempted. -
getDatabaseSslMode
-
setDatabaseSslKeystorePassword
The password for the key store file. This is optional and only needed if 'database.ssl.keystore' is configured. -
getDatabaseSslKeystorePassword
-
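For illustration, a fragment combining the SSL-related options described in this section, continuing the earlier sketch; store locations and passwords are placeholders:

// Require encryption and verify the server certificate against the configured CA.
config.setDatabaseSslMode("verify_ca");
config.setDatabaseSslTruststore("/etc/mysql/client-truststore.p12");
config.setDatabaseSslTruststorePassword("changeit");

// Optional two-way (client) authentication.
config.setDatabaseSslKeystore("/etc/mysql/client-keystore.p12");
config.setDatabaseSslKeystorePassword("changeit");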
setSchemaHistoryInternal
The name of the SchemaHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'schema.history.internal.' string. -
getSchemaHistoryInternal
-
setColumnExcludeList
Regular expressions matching columns to exclude from change events -
getColumnExcludeList
-
setDatabaseHostname
Resolvable hostname or IP address of the database server. -
getDatabaseHostname
-
setSchemaNameAdjustmentMode
Specify how schema names should be adjusted for compatibility with the message converter used by the connector, including: 'avro' replaces the characters that cannot be used in the Avro type name with underscore; 'avro_unicode' replaces the underscore or characters that cannot be used in the Avro type name with the corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java; 'none' does not apply any adjustment (default). -
getSchemaNameAdjustmentMode
-
setDatabaseServerIdOffset
public void setDatabaseServerIdOffset(long databaseServerIdOffset) Only relevant if parallel snapshotting is configured. During parallel snapshotting, multiple (4) connections open to the database client, and they each need their own unique connection ID. This offset is used to generate those IDs from the base configured cluster ID. -
getDatabaseServerIdOffset
public long getDatabaseServerIdOffset() -
setConnectKeepAliveIntervalMs
public void setConnectKeepAliveIntervalMs(long connectKeepAliveIntervalMs) Interval for connection checking if the keep-alive thread is used, given in milliseconds. Defaults to 1 minute (60,000 ms). -
getConnectKeepAliveIntervalMs
public long getConnectKeepAliveIntervalMs() -
setTableIncludeList
The tables for which changes are to be captured -
getTableIncludeList
-
setIncludeQuery
public void setIncludeQuery(boolean includeQuery) Whether the connector should include the original SQL query that generated the change event. Note: This option requires MySQL to be configured with the binlog_rows_query_log_events option set to ON. If using MariaDB, the binlog_annotate_row_events option must be set to ON. The query will not be present for events generated from the snapshot. WARNING: Enabling this option may expose tables or fields explicitly excluded or masked by including the original SQL statement in the change event. For this reason the default value is 'false'. -
isIncludeQuery
public boolean isIncludeQuery() -
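A short fragment for this option, continuing the sketch above; the server-side variables are those quoted in the note and must be set on the database server, not through this class:

// Requires binlog_rows_query_log_events=ON on MySQL
// (binlog_annotate_row_events=ON on MariaDB).
config.setIncludeQuery(true);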
setDatabaseIncludeList
The databases for which changes are to be captured -
getDatabaseIncludeList
-
createConnectorConfiguration
protected io.debezium.config.Configuration createConnectorConfiguration()
Specified by: createConnectorConfiguration in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
-
configureConnectorClass
Specified by: configureConnectorClass in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
-
validateConnectorConfiguration
protected org.apache.camel.component.debezium.configuration.ConfigurationValidation validateConnectorConfiguration()
Specified by: validateConnectorConfiguration in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
-
getConnectorDatabaseType
Specified by: getConnectorDatabaseType in class org.apache.camel.component.debezium.configuration.EmbeddedDebeziumConfiguration
-