: Output Dataset Domain
: Topic Name
: List of globally defined types
: Unused
: Storage Handler
: Input Dataset Domain
Where the magic happens
Input dataset as an RDD of String
Load the JSON as an RDD of String
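A minimal sketch of this step, assuming newline-delimited JSON and a plain SparkSession; the object name and path are hypothetical:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object JsonAsRdd {
  // Each line of the input file becomes one raw String element; parsing and
  // validation happen later, so malformed records are still visible here.
  def loadJsonLines(spark: SparkSession, path: String): RDD[String] =
    spark.sparkContext.textFile(path)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("json-as-rdd").master("local[*]").getOrCreate()
    val lines = loadJsonLines(spark, "/tmp/input.json") // hypothetical path
    println(s"loaded ${lines.count()} raw JSON lines")
    spark.stop()
  }
}
```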
Spark DataFrame loaded using metadata options
Load the dataset using the Spark CSV reader and all metadata. Does not infer the schema; columns not defined in the schema are dropped from the dataset (requires datasets with a header).
Spark DataFrame where each row holds a single string
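A minimal sketch of such a loader, with a hypothetical two-column schema standing in for the metadata-driven one:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object CsvWithSchema {
  // Hypothetical schema; in the framework this would come from the metadata.
  val schema: StructType = StructType(Seq(
    StructField("id", IntegerType, nullable = false),
    StructField("name", StringType, nullable = true)
  ))

  def load(spark: SparkSession, path: String): DataFrame = {
    // Read everything as strings; a header is required so columns can be
    // matched by name rather than by position.
    val raw = spark.read.option("header", "true").csv(path)
    // Keep only the columns declared in the schema (all others are dropped)
    // and cast them to their declared types. The schema is never inferred.
    val kept = schema.fields.filter(f => raw.columns.contains(f.name))
    raw.select(kept.map(f => raw.col(f.name).cast(f.dataType)): _*)
  }
}
```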
Merged metadata
in the form [SinkType:[configName:]]viewName
(SinkType, configName, viewName)
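A minimal sketch of parsing such a reference into that tuple; the object name and the use of Option for the optional parts are assumptions:

```scala
object ViewRef {
  // Splits [SinkType:[configName:]]viewName into (sinkType, configName, viewName);
  // the optional leading parts default to None when absent.
  def parse(ref: String): (Option[String], Option[String], String) =
    ref.split(':') match {
      case Array(view)               => (None, None, view)
      case Array(sink, view)         => (Some(sink), None, view)
      case Array(sink, config, view) => (Some(sink), Some(config), view)
      case parts                     => (Some(parts(0)), Some(parts(1)), parts.drop(2).mkString(":"))
    }
}

// ViewRef.parse("myView")         == (None, None, "myView")
// ViewRef.parse("FS:myView")      == (Some("FS"), None, "myView")
// ViewRef.parse("FS:prod:myView") == (Some("FS"), Some("prod"), "myView")
```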
Partition a dataset using dataset columns. To partition the dataset by ingestion time, use the reserved ingestion-time column names (see the sketch after this parameter list).
: Input dataset
: List of columns to use for partitioning
The Spark session used to run this job
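A minimal sketch of a partitioned writer, assuming Parquet output; the method name, paths, and the illustrative ingestion-time columns are placeholders, not the framework's reserved names:

```scala
import org.apache.spark.sql.{DataFrame, SaveMode}
import org.apache.spark.sql.functions.{current_timestamp, dayofmonth, month, year}

object PartitionedWriter {
  // Write the dataset with one directory level per partition column value.
  def write(df: DataFrame, partitionColumns: List[String], path: String): Unit =
    df.write
      .mode(SaveMode.Overwrite)
      .partitionBy(partitionColumns: _*)
      .parquet(path)

  // For ingestion-time partitioning, derived columns of the ingestion
  // timestamp can be added first (column names here are illustrative).
  def withIngestionTime(df: DataFrame): DataFrame = {
    val ts = current_timestamp()
    df.withColumn("year", year(ts))
      .withColumn("month", month(ts))
      .withColumn("day", dayofmonth(ts))
  }
}
```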
: Input dataset path
Main entry point as required by the Spark Job interface (see the sketch below)
: Spark Session used for the job
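A minimal sketch of what such an interface might look like; the trait and method names are assumptions, not the project's actual API:

```scala
import org.apache.spark.sql.SparkSession
import scala.util.Try

trait SparkJob {
  def name: String
  def session: SparkSession
  // Main entry point: all of the job's work happens here,
  // with the outcome surfaced as a Try.
  def run(): Try[Unit]
}

class CountJob(val session: SparkSession, path: String) extends SparkJob {
  val name = "count-job"
  def run(): Try[Unit] = Try {
    val n = session.read.text(path).count()
    println(s"$name: $n rows")
  }
}
```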
Merge the new and existing datasets if required, then save using Overwrite / Append mode
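A minimal sketch of this merge-then-save logic, assuming Parquet storage and deduplication as the merge rule; the names are hypothetical:

```scala
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

object MergeAndSave {
  def save(spark: SparkSession, incoming: DataFrame, path: String, merge: Boolean): Unit =
    if (merge) {
      // Union with the existing data and deduplicate, then overwrite.
      // localCheckpoint materializes the merged result before the path
      // it was read from is rewritten.
      val existing = spark.read.parquet(path)
      val merged = existing.unionByName(incoming).dropDuplicates().localCheckpoint()
      merged.write.mode(SaveMode.Overwrite).parquet(path)
    } else {
      // No merge required: simply append the incoming rows.
      incoming.write.mode(SaveMode.Append).parquet(path)
    }
}
```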
: Input Dataset Schema
: Storage Handler
: List of globally defined types
Main class to ingest JSON messages from Kafka
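A minimal sketch of such a class using Spark Structured Streaming (requires the spark-sql-kafka connector on the classpath); the topic, broker, schema, and paths are all assumptions:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object KafkaJsonIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-json-ingest").master("local[*]").getOrCreate()

    // Hypothetical message schema.
    val schema = StructType(Seq(
      StructField("id", StringType),
      StructField("payload", StringType)
    ))

    val messages = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
      .option("subscribe", "ingest-topic")                 // hypothetical topic
      .load()
      .selectExpr("CAST(value AS STRING) AS json")         // raw JSON text
      .select(from_json(col("json"), schema).as("data"))   // parse against the schema
      .select("data.*")

    messages.writeStream
      .format("parquet")
      .option("path", "/tmp/ingested")                 // hypothetical sink
      .option("checkpointLocation", "/tmp/checkpoint") // required for streaming
      .start()
      .awaitTermination()
  }
}
```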