Ingestion algorithm
Dataset loading strategy (JSON / CSV / ...)
Spark DataFrame loaded using the metadata options
Merge incoming and existing dataframes using merge options
Merged DataFrame
Merged metadata
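The load-then-merge step above can be sketched in plain Python. This is an illustrative stand-in for the Spark merge, not the job's actual implementation; the function name `merge_datasets`, the key column `id`, and the "incoming rows replace existing rows with the same key" rule are assumptions about what the merge options typically express.

```python
def merge_datasets(existing, incoming, key):
    # Illustrative merge semantics: index existing rows by key, then let
    # incoming rows replace any existing row that shares the same key.
    merged = {row[key]: row for row in existing}
    merged.update({row[key]: row for row in incoming})
    return list(merged.values())

existing = [{"id": 1, "v": "old"}, {"id": 2, "v": "keep"}]
incoming = [{"id": 1, "v": "new"}, {"id": 3, "v": "add"}]
result = merge_datasets(existing, incoming, "id")
# id 1 is replaced by the incoming row, id 2 is kept, id 3 is added
```

In Spark the same effect is usually obtained with a keyed join or union-and-deduplicate over the two DataFrames.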
Partition a dataset using dataset columns. To partition the dataset using the ingestion time, use the reserved column names:
: Input dataset
: list of columns to use for partitioning.
The Spark session used to run this job
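Column-based partitioning can be illustrated by the Hive-style directory layout that Spark produces when writing a partitioned dataset. The helper below is a hypothetical sketch (the function name `partition_path` and the sample columns are assumptions), showing only how partition column values map to output directories:

```python
def partition_path(row, partition_columns):
    # Hive-style layout: one "col=value" directory segment per partition column
    return "/".join(f"{col}={row[col]}" for col in partition_columns)

row = {"country": "FR", "year": 2024, "name": "alice"}
partition_path(row, ["country", "year"])  # -> "country=FR/year=2024"
```

With Spark itself, the equivalent is `df.write.partitionBy("country", "year")`, which groups rows into these directories automatically.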
Main entry point as required by the Spark Job interface
: Spark Session used for the job
Merge the new and existing datasets if required, then save using Overwrite / Append mode
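The entry-point decision above can be sketched as follows. This is a plain-Python sketch under stated assumptions, not the actual Spark job: the function name `ingest` is hypothetical, and the rule "a merge rewrites the whole dataset (overwrite), otherwise new rows are appended" is an assumption about how the two save modes pair with the merge step.

```python
def ingest(incoming, existing, merge_required):
    # Sketch of the entry point: merge if required, then choose the save mode.
    # A merged result must replace the stored dataset, hence "overwrite";
    # without a merge, the new rows can simply be appended.
    if merge_required and existing:
        return existing + incoming, "overwrite"
    return incoming, "append"
```

Usage: `ingest(new_rows, old_rows, merge_required=True)` returns the combined dataset with mode `"overwrite"`, while a non-merge run returns only the new rows with mode `"append"`.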
Save typed dataset in parquet. If Hive support is active, also register it as a Hive table; if analyze is active, also compute basic statistics
: dataset to save
: absolute path
: Append or Overwrite save mode
: accepted or rejected area
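The save sequence described above can be sketched as a plain-Python outline. The function name `save` and the recorded action strings are hypothetical stand-ins for the real Spark and Hive calls; the nesting of the analyze step under Hive support follows the description above and is otherwise an assumption.

```python
def save(dataset, path, mode, hive_support=False, analyze=False):
    # Sketch of the save steps; actions are recorded as strings instead of
    # issuing real Spark calls (write.parquet, saveAsTable, ANALYZE TABLE).
    actions = [f"write parquet to {path} (mode={mode})"]
    if hive_support:
        actions.append("register Hive table")
        if analyze:
            actions.append("compute basic table statistics")
    return actions
```

In Spark terms these steps correspond roughly to `df.write.mode(mode).parquet(path)`, a Hive table registration, and a `ANALYZE TABLE ... COMPUTE STATISTICS` statement.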