package transform
Type Members
case class AutoTaskJob(name: String, defaultArea: StorageArea, format: Option[String], coalesce: Boolean, udf: Option[String], views: Views, engine: Engine, task: AutoTaskDesc, sqlParameters: Map[String, String])(implicit settings: Settings, storageHandler: StorageHandler, schemaHandler: SchemaHandler) extends SparkJob with Product with Serializable
Executes the SQL task and stores the result as parquet/orc/.... If Hive support is enabled, the result is also registered as a Hive table. If analyze support is active, basic statistics are also computed for the dataset.
name: Job name as defined in the YML job description file
defaultArea: Where the resulting dataset is stored by default if not specified in the task
task: Task to run
sqlParameters: SQL parameters to pass to SQL statements
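To illustrate what passing SQL parameters to a statement typically means, here is a minimal, self-contained sketch of placeholder substitution. This is an assumption for illustration only, not Starlake's actual implementation: the real placeholder syntax and substitution logic used by AutoTaskJob may differ.

```scala
// Illustrative sketch: substituting named parameters (as in `sqlParameters`)
// into a SQL template before execution. The `{{key}}` placeholder syntax is
// an assumption, not necessarily what the library uses.
object SqlParamSubstitution {
  // Replace each `{{key}}` placeholder with its value from the parameter map.
  def substitute(sql: String, params: Map[String, String]): String =
    params.foldLeft(sql) { case (acc, (k, v)) => acc.replace(s"{{$k}}", v) }

  def main(args: Array[String]): Unit = {
    val template = "SELECT * FROM sales WHERE dt = '{{processing_date}}'"
    val rendered = substitute(template, Map("processing_date" -> "2021-01-15"))
    println(rendered)
  }
}
```

A real job would render the task's SQL this way (or with a template engine) before handing it to the Spark session for execution.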