Describes a connection to a JDBC-accessible database engine.
Source / sink format (jdbc by default). See spark.format for the possible values.
Spark SaveMode to use. If not present, the save mode will be computed from the write disposition set in the YAML file.
Any options required by the format used to ingest / transform / compute the data. For JDBC, for example, uri, user and password are required:
- uri: the URI of the database engine; it must start with "jdbc:"
- user: the username under which to connect to the database engine
- password: the password to use in order to connect to the database engine
The index into the Comet.jdbcEngines map of the underlying database engine, for cases where the engine name cannot be derived from the uri.
The use case for engineOverride is when you need an alternate schema definition (e.g. non-standard table names) alongside the regular schema definition, on the same underlying engine.
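A connection honoring these settings might be sketched in YAML as follows. This is an illustrative assumption, not taken from the source: the field names (format, mode, options, engineOverride) follow the parameter descriptions above, and all URI and credential values are placeholders.

```yaml
# Hypothetical connection declaration; field names mirror the
# parameter descriptions above.
connection:
  format: "jdbc"        # source / sink format (jdbc is the default)
  mode: "Append"        # Spark SaveMode; omit to derive it from the write disposition
  options:
    uri: "jdbc:postgresql://localhost:5432/mydb"   # must start with "jdbc:"
    user: "dbuser"
    password: "dbpassword"
  engineOverride: "postgresql"   # index into the Comet.jdbcEngines map
```

When engineOverride is omitted, the engine would be resolved from the uri scheme (here, the "postgresql" part of "jdbc:postgresql:"); setting it explicitly supports the alternate-schema-definition use case described above.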