Automatically enables or disables the Snowflake query pushdown function.
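A minimal sketch of toggling pushdown for a session, assuming the connector's SnowflakeConnectorUtils helpers (enablePushdownSession/disablePushdownSession, as in the open-source spark-snowflake connector):

    import org.apache.spark.sql.SparkSession
    import net.snowflake.spark.snowflake.SnowflakeConnectorUtils

    def togglePushdown(spark: SparkSession, enable: Boolean): Unit =
      // Enable or disable query pushdown for this Spark session.
      if (enable) SnowflakeConnectorUtils.enablePushdownSession(spark)
      else SnowflakeConnectorUtils.disablePushdownSession(spark)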
Returns true if the bucket lifecycle configuration should be checked.
Retrieves the column mapping data, or None if empty.
Sets the on_error parameter to continue in the COPY command. TODO: create a data-validation function on the Spark side instead of relying on the COPY command.
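For illustration, a hedged sketch of the statement shape this produces; the table and stage identifiers are placeholders:

    // Hypothetical shape of the generated COPY statement; identifiers are placeholders.
    def copyWithOnErrorContinue(table: String, stage: String): String =
      s"COPY INTO $table FROM @$stage ON_ERROR = 'CONTINUE'"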
Creates a per-query subdirectory in the rootTempDir, with a random UUID.
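A minimal sketch of deriving such a subdirectory with java.util.UUID (the helper name is illustrative, not the connector's actual code):

    import java.util.UUID

    // Append a random UUID to the root temp dir to isolate each query's files.
    def makeTempPath(rootTempDir: String): String =
      rootTempDir.stripSuffix("/") + "/" + UUID.randomUUID().toString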
Extra options to append to the Snowflake COPY command (e.g. "MAXERROR 100").
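For example, assuming the option key extracopyoptions, the value could be supplied on write like this (table name is a placeholder):

    import org.apache.spark.sql.{DataFrame, SaveMode}

    // "extracopyoptions" is the option key assumed from this description.
    def writeWithCopyOptions(df: DataFrame, sfOptions: Map[String, String]): Unit =
      df.write
        .format("net.snowflake.spark.snowflake")
        .options(sfOptions)
        .option("extracopyoptions", "MAXERROR 100")
        .option("dbtable", "TARGET_TABLE")
        .mode(SaveMode.Append)
        .save()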
Number of threads used for PUT/GET.
List of semicolon-separated SQL statements to run after successful write operations. This can be useful for running GRANT operations to make your new tables readable to other users and groups (see the example below).
If the action string contains %s, the table name will be substituted in, in case a staging table is being used.
Defaults to empty.
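A hedged example of such a value, assuming the postactions option key; the role name is a placeholder:

    // %s is replaced with the (possibly staging) table name by the connector.
    val postActions = "GRANT SELECT ON %s TO ROLE ANALYST_ROLE"
    // Supplied alongside the other write options:
    //   .option("postactions", postActions)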
List of semicolon-separated SQL statements to run before write operations. This can be useful for running DELETE operations to clean up data (see the example below).
If the action string contains %s, the table name will be substituted in, in case a staging table is being used.
Defaults to empty.
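Similarly, a hedged example assuming the preactions option key; the retention predicate is a placeholder:

    // Clean up stale rows before the write runs; %s becomes the table name.
    val preActions = "DELETE FROM %s WHERE load_date < CURRENT_DATE - 30"
    //   .option("preactions", preActions)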
Generates a private key from a PEM key value.
Returns the private key object.
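A minimal sketch of parsing a PKCS#8 PEM value into a PrivateKey with the JDK's KeyFactory; the connector's own parsing may differ:

    import java.security.{KeyFactory, PrivateKey}
    import java.security.spec.PKCS8EncodedKeySpec
    import java.util.Base64

    def parsePrivateKey(pem: String): PrivateKey = {
      // Strip the PEM header/footer and whitespace, then Base64-decode the body.
      val body = pem
        .replace("-----BEGIN PRIVATE KEY-----", "")
        .replace("-----END PRIVATE KEY-----", "")
        .replaceAll("\\s", "")
      val keyBytes = Base64.getDecoder.decode(body)
      KeyFactory.getInstance("RSA").generatePrivate(new PKCS8EncodedKeySpec(keyBytes))
    }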
Whether or not to include PURGE in the COPY statement generated by the Spark connector.
The Snowflake query to be used as the target when loading data.
A root directory to be used for intermediate data exchange, expected to be on cloud storage (S3 or Azure storage), or somewhere that can be written to and read from by Snowflake. Make sure that credentials are available for this cloud provider.
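For example, assuming the tempdir option key, with a placeholder S3 path:

    import org.apache.spark.sql.DataFrameReader

    // "tempdir" option key assumed from this description; the location must be
    // readable and writable by both Spark and Snowflake.
    def withTempDir(reader: DataFrameReader): DataFrameReader =
      reader.option("tempdir", "s3a://my-bucket/snowflake-temp/")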
Maximum file size used when moving data out of Snowflake.
Snowflake account - optional
Snowflake compression on/off - "on" by default
Snowflake database name
Returns a map of options that are not known to the connector, and are passed verbatim to the JDBC driver.
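An illustrative sketch of the idea, with a hypothetical set of known keys:

    // Illustrative only: split user-supplied options into connector-known keys
    // and pass-through JDBC properties. The key set here is hypothetical.
    val knownKeys = Set("sfurl", "sfuser", "sfpassword", "sfdatabase", "sfschema")

    def jdbcPassThrough(params: Map[String, String]): Map[String, String] =
      params.filter { case (key, _) => !knownKeys.contains(key.toLowerCase) }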
Snowflake password
Snowflake role - optional
Snowflake SSL on/off - "on" by default
Snowflake schema
Snowflake timezone - optional
URL pointing to the Snowflake database, in the form host:port
Snowflake user
Snowflake warehouse
The Snowflake table to be used as the target when loading or writing data.
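Taken together, these connection parameters are supplied as reader options; a hedged end-to-end read sketch using the documented sfXxx option keys, with placeholder values:

    import org.apache.spark.sql.{DataFrame, SparkSession}

    def readFromSnowflake(spark: SparkSession): DataFrame = {
      val sfOptions = Map(
        "sfURL"       -> "account.snowflakecomputing.com", // host[:port]
        "sfUser"      -> "USER",
        "sfPassword"  -> "PASSWORD",
        "sfDatabase"  -> "DB",
        "sfSchema"    -> "PUBLIC",
        "sfWarehouse" -> "WH",   // optional
        "sfRole"      -> "ROLE", // optional
        "sfSSL"       -> "on",   // "on" by default
        "sfCompress"  -> "on"    // "on" by default
      )
      spark.read
        .format("net.snowflake.spark.snowflake")
        .options(sfOptions)
        .option("dbtable", "MY_TABLE") // or .option("query", "SELECT ...")
        .load()
    }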
Temporary AWS credentials which are passed to Snowflake. These only need to be supplied by the user when Hadoop is configured to authenticate to S3 via IAM roles assigned to EC2 instances.
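A sketch of supplying them, assuming the temporary_aws_* option keys:

    import org.apache.spark.sql.DataFrameReader

    // Option keys assumed from this description; only needed when Hadoop
    // authenticates to S3 via IAM roles on EC2 instances.
    def withTemporaryAwsCredentials(
        reader: DataFrameReader,
        keyId: String,
        secretKey: String,
        sessionToken: String): DataFrameReader =
      reader
        .option("temporary_aws_access_key_id", keyId)
        .option("temporary_aws_secret_access_key", secretKey)
        .option("temporary_aws_session_token", sessionToken)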
SAS Token to be passed to Snowflake to access data in Azure storage. We currently don't support full storage account keys, so this must be provided if the customer wants to load data through their storage account directly.
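A sketch of supplying the token; the temporary_azure_sas_token key and the wasb:// path shape are assumptions here:

    import org.apache.spark.sql.DataFrameReader

    // Option key and Azure path shape are assumptions for illustration.
    def withAzureSasToken(reader: DataFrameReader, sasToken: String): DataFrameReader =
      reader
        .option("temporary_azure_sas_token", sasToken)
        .option("tempdir", "wasb://container@account.blob.core.windows.net/snowflake-temp")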
Whether or not to include TRUNCATE_COLUMNS in the COPY statement generated by the Spark connector.
Truncate the table when overwriting it, keeping the table schema.
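A hedged overwrite example, assuming the truncate_table option key with "on"/"off" values:

    import org.apache.spark.sql.{DataFrame, SaveMode}

    // "truncate_table" -> "on" keeps the existing schema (and grants) on overwrite.
    def overwriteKeepingSchema(df: DataFrame, sfOptions: Map[String, String]): Unit =
      df.write
        .format("net.snowflake.spark.snowflake")
        .options(sfOptions)
        .option("dbtable", "MY_TABLE")        // table name is a placeholder
        .option("truncate_table", "on")       // option key/value assumed from docs
        .mode(SaveMode.Overwrite)
        .save()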
When true, data is always loaded into a new temporary table when performing an overwrite. This is to ensure that the whole load process succeeds before dropping any data from Snowflake, which can be useful if, in the event of failures, stale data is better than no data for your systems.
Defaults to true.
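A sketch of setting it explicitly, assuming the usestagingtable option key:

    import org.apache.spark.sql.{DataFrameWriter, Row}

    // "usestagingtable" option key assumed; "on" matches the default behavior.
    def withStagingTable(writer: DataFrameWriter[Row]): DataFrameWriter[Row] =
      writer.option("usestagingtable", "on")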
Adds validators and accessors to a string map.
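An illustrative sketch of the pattern; the class and method names are hypothetical, not the connector's actual API:

    // Hypothetical wrapper adding validated accessors over a string map.
    case class MergedParameters(parameters: Map[String, String]) {
      // Accessor with validation: fail fast when a required key is missing.
      def requiredParam(key: String): String =
        parameters.getOrElse(key, sys.error(s"Missing required parameter: $key"))

      // Boolean accessor treating "on"/"true" (case-insensitively) as true.
      def isOn(key: String): Boolean =
        parameters.get(key).exists(v =>
          v.equalsIgnoreCase("on") || v.equalsIgnoreCase("true"))
    }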