Saves a dataset. If the path is empty (the first time metrics are computed for the schema), the dataset can be written directly. If Parquet files are already stored there, compute on a temporary directory instead, then flush it onto the path so the updated metrics replace the old ones (a sketch follows the parameters below).
: dataset to be saved
: Path to save the file at
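A minimal sketch of that save logic, assuming a Parquet-backed metrics path; the names (MetricsSink, saveMetrics) and the unionByName-based merge are illustrative assumptions, not the framework's actual API:
{{{
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

object MetricsSink {
  def saveMetrics(session: SparkSession, dataset: DataFrame, savePath: Path): Unit = {
    val fs = savePath.getFileSystem(session.sparkContext.hadoopConfiguration)
    if (!fs.exists(savePath) || fs.listStatus(savePath).isEmpty) {
      // First call for this schema: the path is empty, write directly.
      dataset.write.mode(SaveMode.Overwrite).parquet(savePath.toString)
    } else {
      // Parquet files already stored: compute the merged metrics on a
      // temporary directory, then flush it onto the target path.
      val tmpPath  = new Path(savePath.toString + ".tmp")
      val existing = session.read.parquet(savePath.toString)
      existing
        .unionByName(dataset)
        .write.mode(SaveMode.Overwrite)
        .parquet(tmpPath.toString)
      fs.delete(savePath, true)
      fs.rename(tmpPath, savePath)
    }
  }
}
}}}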
ingestion algorithm
Dataset loading strategy (JSON / CSV / ...)
Spark DataFrame loaded using the metadata options
Merged metadata
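As a hedged illustration of such a loading strategy, the sketch below maps a format taken from the metadata to the matching Spark reader; the option values are placeholders for what the real metadata would supply:
{{{
import org.apache.spark.sql.{DataFrame, SparkSession}

def loadDataFrame(session: SparkSession, path: String, format: String): DataFrame =
  format match {
    case "CSV" =>
      session.read
        .option("header", "true")   // header / separator come from metadata
        .option("delimiter", ";")
        .csv(path)
    case "JSON" =>
      session.read.json(path)
    case other =>
      throw new IllegalArgumentException(s"Unsupported load format: $other")
  }
}}}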
Partition a dataset using dataset columns. To partition the dataset using the ingestion time instead, use the reserved ingestion-time column names (see the sketch after the parameters below).
: Input dataset
: list of columns to use for partitioning.
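A minimal sketch of column-based partitioning with Spark, assuming the reserved ingestion-time columns have already been materialized on the DataFrame; the function name is illustrative:
{{{
import org.apache.spark.sql.{DataFrame, DataFrameWriter, Row}

def partitionedWriter(dataset: DataFrame, cols: List[String]): DataFrameWriter[Row] =
  if (cols.isEmpty) dataset.write
  else dataset.write.partitionBy(cols: _*)
}}}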
The Spark session used to run this job
Main entry point as required by the Spark Job interface
: Spark Session used for the job
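A sketch of the contract this entry point implies; the trait and member names are assumptions, not the framework's exact interface:
{{{
import org.apache.spark.sql.SparkSession
import scala.util.Try

trait SparkJob {
  /** Job name, used for logging and for naming the Spark application. */
  def name: String

  /** Spark Session used for the job. */
  def session: SparkSession

  /** Main entry point, invoked by the job runner. */
  def run(): Try[Unit]
}
}}}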
Merge the new and existing datasets if required, then save using Overwrite or Append mode (see the sketch below).
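A hedged sketch of that step, assuming Parquet storage and a single merge key ("id"); both are illustrative. Spark refuses to overwrite a path it is also reading from, so the merged result goes through a temporary directory:
{{{
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

def saveAccepted(session: SparkSession, incoming: DataFrame, target: String, merge: Boolean): Unit =
  if (!merge) {
    // No merge required: append the new rows to the existing dataset.
    incoming.write.mode(SaveMode.Append).parquet(target)
  } else {
    // Merge new and existing rows, then overwrite the target path.
    // A real merge would prefer incoming rows over existing duplicates.
    val existing = session.read.parquet(target)
    val merged   = existing.unionByName(incoming).dropDuplicates("id")
    val fs  = new Path(target).getFileSystem(session.sparkContext.hadoopConfiguration)
    val tmp = new Path(target + ".tmp")
    merged.write.mode(SaveMode.Overwrite).parquet(tmp.toString)
    fs.delete(new Path(target), true)
    fs.rename(tmp, new Path(target))
  }
}}}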
Used only to apply data masking (privacy) rules to one or more simple elements in XML data. The input XML file is read as a text file, privacy rules are applied to the resulting DataFrame, and the result is saved to the accepted area. In the definition of the XML Schema:
- schema.metadata.format should be set to TEXT_XML
- schema.attributes should only contain the attributes on which privacy should be applied
Comet.defaultWriteFormat should be set to text in order to get an XML-formatted output file. Comet.privacyOnly should be set to true to save the result in a single file (coalesce 1).
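A minimal sketch of that TEXT_XML path: the file is read as text, a masking rule is applied to a simple element, and the result is coalesced to a single file. The element name (email) and the masking regex are illustrative only:
{{{
import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions.{col, regexp_replace}

def maskXml(session: SparkSession, inputPath: String, outputPath: String): Unit = {
  // Read the XML document line by line as plain text.
  val lines = session.read.text(inputPath)

  // Apply a privacy rule to a simple element, keeping the XML layout intact.
  val masked = lines.withColumn(
    "value",
    regexp_replace(col("value"), "<email>[^<]*</email>", "<email>***</email>")
  )

  // coalesce(1) so the output is one XML-formatted text file.
  masked.coalesce(1).write.mode(SaveMode.Overwrite).text(outputPath)
}
}}}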