DPJob

gcp4zio.dp.DPJob$
See the DPJob companion trait
object DPJob

Attributes

Companion:
trait
Supertypes
class Object
trait Matchable
class Any
Self type
DPJob.type

Members list


Value members

Concrete methods

def executeHiveJob(query: String, trackingInterval: Duration): RIO[DPJob, Job]

Submits a Hive job to the Dataproc cluster and waits for job completion, polling the job status at trackingInterval. (To submit without waiting, use submitHiveJob.)

Attributes

query

Hive SQL query to run

trackingInterval

Interval between successive job status checks
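As a usage sketch (the query text and interval below are placeholders, and the `Job` result type is assumed to be `com.google.cloud.dataproc.v1.Job`):

```scala
import com.google.cloud.dataproc.v1.Job // assumed Job type
import gcp4zio.dp.DPJob
import zio._

// Runs a Hive query on the cluster configured in the DPJob layer and
// polls the job status every 10 seconds until it completes.
val hiveJob: RIO[DPJob, Job] =
  DPJob.executeHiveJob(
    query = "SELECT COUNT(*) FROM my_db.my_table", // placeholder query
    trackingInterval = 10.seconds
  )
```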

def executeSparkJob(args: List[String], mainClass: String, libs: List[String], conf: Map[String, String], trackingInterval: Duration): RIO[DPJob, Job]

Submits a Spark job to the Dataproc cluster and waits for job completion.

Attributes

args

Command-line arguments passed to the Spark application

conf

Key-value pairs of Spark configuration properties

libs

List of JARs required to run the Spark application (including the application JAR)

mainClass

Main class to run

trackingInterval

Interval between successive job status checks
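A usage sketch: the JAR path, main class, and Spark settings below are placeholders (the stock SparkPi example shipped with Dataproc images).

```scala
import gcp4zio.dp.DPJob
import zio._

// Submits the SparkPi example and (effectfully) waits until it finishes,
// checking the job status every 30 seconds.
val sparkJob =
  DPJob.executeSparkJob(
    args = List("1000"),
    mainClass = "org.apache.spark.examples.SparkPi",
    libs = List("file:///usr/lib/spark/examples/jars/spark-examples.jar"),
    conf = Map("spark.executor.memory" -> "2g"),
    trackingInterval = 30.seconds
  )
```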

def live(cluster: String, project: String, region: String, endpoint: String): TaskLayer[DPJob]

Creates the live layer required by all DPJob APIs.

Attributes

cluster

Dataproc cluster name

endpoint

GCP Dataproc API endpoint

project

GCP project ID

region

GCP Region name
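Wiring the layer into a ZIO application might look like the sketch below; every value passed to live is a placeholder for illustration.

```scala
import gcp4zio.dp.DPJob
import zio._

object HiveQueryApp extends ZIOAppDefault {
  // Placeholder cluster, project, region, and endpoint values.
  private val dpLayer: TaskLayer[DPJob] = DPJob.live(
    cluster  = "my-cluster",
    project  = "my-gcp-project",
    region   = "us-central1",
    endpoint = "us-central1-dataproc.googleapis.com:443"
  )

  // Provide the layer to satisfy the RIO[DPJob, *] environment.
  override def run =
    DPJob
      .executeHiveJob("SELECT 1", trackingInterval = 10.seconds)
      .provide(dpLayer)
}
```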

def submitHiveJob(query: String): RIO[DPJob, Job]

Submits a Hive job to the Dataproc cluster without waiting for completion; use trackJobProgress to wait for the job to finish.

Attributes

query

Hive SQL query to run
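A fire-and-forget sketch (the SQL is a placeholder; the `Job` result type is assumed to be `com.google.cloud.dataproc.v1.Job`):

```scala
import com.google.cloud.dataproc.v1.Job // assumed Job type
import gcp4zio.dp.DPJob
import zio._

// Submits the query and returns immediately with the Job handle;
// the job keeps running in Dataproc after this effect completes.
val submitted: RIO[DPJob, Job] =
  DPJob.submitHiveJob("INSERT OVERWRITE TABLE tgt SELECT * FROM src") // placeholder SQL
```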

def submitSparkJob(args: List[String], mainClass: String, libs: List[String], conf: Map[String, String]): RIO[DPJob, Job]

Submits a Spark job to the Dataproc cluster without waiting for completion; use trackJobProgress to wait for the job to finish.

Attributes

args

Command-line arguments passed to the Spark application

conf

Key-value pairs of Spark configuration properties

libs

List of JARs required to run the Spark application (including the application JAR)

mainClass

Main class to run

def trackJobProgress(job: Job, interval: Duration): RIO[DPJob, Unit]

Tracks the given Dataproc job until completion.

Attributes

interval

Interval between successive job status checks

job

Dataproc job to track
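Combining submitSparkJob with trackJobProgress gives roughly the behavior the execute* methods describe, while letting other effects run between submission and tracking. A sketch, with all job parameters as placeholders:

```scala
import gcp4zio.dp.DPJob
import zio._

// Submit a Spark job, then poll it to completion every 30 seconds.
val submitThenTrack: RIO[DPJob, Unit] =
  for {
    job <- DPJob.submitSparkJob(
             args      = List("1000"),
             mainClass = "org.apache.spark.examples.SparkPi",
             libs      = List("file:///usr/lib/spark/examples/jars/spark-examples.jar"),
             conf      = Map.empty
           )
    _   <- DPJob.trackJobProgress(job, interval = 30.seconds)
  } yield ()
```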