case class JdbcExecute(session: SparkSession) extends DataFrameReader with Product with Serializable
Implicit class to conveniently invoke the JDBC provider on a SparkSession while avoiding double
execution of pushdown queries (once for schema determination and once for the actual query).
Instead of spark.read.jdbc(jdbcUrl, "(pushdown query) q1", properties) one can simply write
spark.snappyQuery(query). This also registers dialects that avoid double execution and
passes the proper JDBC driver argument to avoid ClassNotFoundException errors. In addition, it
provides "snappyExecute" implicits for non-query executions that return an update count.
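A minimal sketch of the usage described above. It assumes the snappydata-jdbc extension is on the classpath and a cluster is reachable; the import path, table name, and connection setup are illustrative assumptions, not part of this API's documentation:

```scala
import org.apache.spark.sql.SparkSession
// Assumed import path for the implicits that add snappyQuery/snappyExecute
// to SparkSession; adjust to match the snappydata-jdbc artifact in use.
import io.snappydata.sql.implicits._

object SnappyJdbcExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("snappy-jdbc-example")
      .master("local[*]")
      .getOrCreate()

    // Single-execution pushdown query, instead of:
    //   spark.read.jdbc(jdbcUrl, "(select ...) q1", properties)
    // "app.customers" is a hypothetical table used for illustration.
    val df = spark.snappyQuery("select id, name from app.customers where id < 100")
    df.show()

    // Non-query execution returning an update count.
    val updated = spark.snappyExecute(
      "update app.customers set name = 'renamed' where id = 1")
    println(s"rows updated: $updated")

    spark.stop()
  }
}
```

The value of the implicit route is that the pushdown SQL runs once, whereas the plain spark.read.jdbc path issues an extra query just to infer the schema.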
Linear Supertypes
Serializable, Serializable, Product, Equals, DataFrameReader, internal.Logging, AnyRef, Any