Package za.co.absa.cobrix.spark.cobol.source

package parameters


Type Members

  1. case class CobolParameters(copybookPath: Option[String], multiCopybookPath: Seq[String], copybookContent: Option[String], sourcePath: Option[String], isEbcdic: Boolean, ebcdicCodePage: String, ebcdicCodePageClass: Option[String], asciiCharset: String, floatingPointFormat: FloatingPointFormat, recordStartOffset: Int, recordEndOffset: Int, variableLengthParams: Option[VariableLengthParameters], schemaRetentionPolicy: SchemaRetentionPolicy, stringTrimmingPolicy: StringTrimmingPolicy, multisegmentParams: Option[MultisegmentParameters], commentPolicy: CommentPolicy, dropGroupFillers: Boolean, nonTerminals: Seq[String], debugIgnoreFileSize: Boolean) extends Product with Serializable


This class holds parameters for the job.

    copybookPath

    String containing the path to the copybook in a given file system.

    multiCopybookPath

    Sequence containing the paths to the copybooks.

    copybookContent

    String containing the actual content of the copybook. Either this, the copybookPath, or multiCopybookPath parameter must be specified.

    sourcePath

    String containing the path to the Cobol file to be parsed.

    isEbcdic

If true, the input data file encoding is EBCDIC; otherwise it is ASCII

    ebcdicCodePage

    Specifies what code page to use for EBCDIC to ASCII/Unicode conversions

    ebcdicCodePageClass

    An optional custom code page conversion class provided by a user

    asciiCharset

    A charset for ASCII data

    floatingPointFormat

    A format of floating-point numbers

    recordStartOffset

The number of bytes to skip at the beginning of each record before parsing it according to the copybook

    recordEndOffset

    A number of bytes to skip at the end of each record

    variableLengthParams

    VariableLengthParameters containing the specifications for the consumption of variable-length Cobol records.

    schemaRetentionPolicy

A copybook usually has a root group struct element that acts like a row tag in XML. It can be retained in the Spark schema or collapsed

    stringTrimmingPolicy

    Specify if and how strings should be trimmed when parsed

    multisegmentParams

    Parameters for reading multisegment mainframe files

    commentPolicy

    A comment truncation policy

    dropGroupFillers

    If true the parser will drop all FILLER fields, even GROUP FILLERS that have non-FILLER nested fields

    nonTerminals

    A list of non-terminals (GROUPS) to combine and parse as primitive fields

    debugIgnoreFileSize

    If true the fixed length file reader won't check file size divisibility. Useful for debugging binary file / copybook mismatches.
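In practice, most of these parameters are populated from Spark reader options rather than constructed directly. A minimal sketch of a fixed-length EBCDIC read, assuming the option names documented in the Cobrix README (paths are placeholders; exact option names may vary by Cobrix version):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CobolRead").getOrCreate()

// Each option below maps onto a CobolParameters field:
// copybookPath, isEbcdic, ebcdicCodePage, schemaRetentionPolicy,
// stringTrimmingPolicy, dropGroupFillers.
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")         // copybookPath
  .option("encoding", "ebcdic")                        // isEbcdic
  .option("ebcdic_code_page", "common")                // ebcdicCodePage
  .option("schema_retention_policy", "collapse_root")  // schemaRetentionPolicy
  .option("string_trimming_policy", "both")            // stringTrimmingPolicy
  .option("drop_group_fillers", "true")                // dropGroupFillers
  .load("/path/to/data")
```

The parser (see CobolParametersParser below) turns these options into a CobolParameters instance; end users normally never instantiate the case class themselves.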

  2. case class LocalityParameters(improveLocality: Boolean, optimizeAllocation: Boolean) extends Product with Serializable

  3. case class VariableLengthParameters(isRecordSequence: Boolean, isRdwBigEndian: Boolean, isRdwPartRecLength: Boolean, rdwAdjustment: Int, recordHeaderParser: Option[String], rhpAdditionalInfo: Option[String], recordLengthField: String, fileStartOffset: Int, fileEndOffset: Int, variableSizeOccurs: Boolean, generateRecordId: Boolean, isUsingIndex: Boolean, inputSplitRecords: Option[Int], inputSplitSizeMB: Option[Int], improveLocality: Boolean, optimizeAllocation: Boolean, inputFileNameColumn: String) extends Product with Serializable


This class holds the parameters currently used for parsing variable-length records.

    isRecordSequence

Whether input files have 4-byte record length (RDW) headers

    isRdwBigEndian

Whether the RDW is big-endian. This may depend on the mainframe flavor and/or the mainframe-to-PC transfer method

    isRdwPartRecLength

Whether the RDW counts itself as part of the record length

    rdwAdjustment

Compensates for a mismatch between the RDW and the actual record length

    recordHeaderParser

    An optional custom record header parser for non-standard RDWs

    rhpAdditionalInfo

    An optional additional option string passed to a custom record header parser

    recordLengthField

    A field that stores record length

    fileStartOffset

    A number of bytes to skip at the beginning of each file

    fileEndOffset

    A number of bytes to skip at the end of each file

    variableSizeOccurs

If true, the size of OCCURS DEPENDING ON data will depend on the actual number of elements

    generateRecordId

If true, generates a sequential record number for each record so that the order of the original data can be retained

    isUsingIndex

Whether to index the input file before processing

    inputSplitRecords

The number of records to include in each partition. Note that mainframe records may have variable sizes, so inputSplitSizeMB is the recommended option

    inputSplitSizeMB

The target partition size. In certain circumstances partitions may not match this size exactly, but the library will make a best effort to target it

    improveLocality

    Tries to improve locality by extracting preferred locations for variable-length records

    optimizeAllocation

    Optimizes cluster usage in case of optimization for locality in the presence of new nodes (nodes that do not contain any blocks of the files being processed)

    inputFileNameColumn

The name of a column to add to the DataFrame. The column will contain the input file name for each record, similar to the 'input_file_name()' function
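As with CobolParameters, these values usually come from Spark reader options. A sketch of reading variable-length records with RDW headers, again assuming the option names from the Cobrix README (paths are placeholders; availability of options may depend on the Cobrix version):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CobolVarLen").getOrCreate()

// Each option below maps onto a VariableLengthParameters field:
// isRecordSequence, isRdwBigEndian, generateRecordId,
// inputSplitSizeMB, inputFileNameColumn.
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .option("is_record_sequence", "true")            // isRecordSequence
  .option("is_rdw_big_endian", "true")             // isRdwBigEndian
  .option("generate_record_id", "true")            // generateRecordId
  .option("input_split_size_mb", "100")            // inputSplitSizeMB
  .option("with_input_file_name_col", "file_name") // inputFileNameColumn
  .load("/path/to/data")
```

When generate_record_id is enabled, the resulting DataFrame includes record-ordering columns in addition to the copybook-derived schema, which is useful because Spark does not otherwise guarantee row order across partitions.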

Value Members

  1. object CobolParametersParser


This object provides methods for parsing the parameters set as Spark options.

  2. object CobolParametersValidator


This object provides methods for validating the Spark job options after they are parsed.

  3. object LocalityParameters extends Serializable

