These are properties for customizing mainframe binary data reader.
The record format (F, V, VB, D)
If true, the input data file encoding is EBCDIC; otherwise it is ASCII
If true, line ending characters (LF / CRLF) will be used as the record separator
Specifies what code page to use for EBCDIC to ASCII/Unicode conversions
An optional custom code page conversion class provided by a user
A charset for ASCII data
If true UTF-16 strings are considered big-endian.
A format of floating-point numbers
If true, OCCURS DEPENDING ON data size will depend on the number of elements
Specifies the length of the record, disregarding the copybook record size. Implies the file has a fixed record length.
The name of a field that contains the record length. Optional; if not set, the copybook record length is used.
Specifies whether input files have 4-byte record length headers
Block descriptor word (if specified), for FB and VB record formats
Specifies whether the RDW counts itself as part of the record length
Controls how a mismatch between the RDW and the record length is handled
Specifies whether the input file should be indexed before processing
The number of records to include in each partition. Note that mainframe records may have variable size; inputSplitSizeMB is the recommended option
A partition size to target. In certain circumstances the actual size may differ, but the library will make a best effort to target that size
Default HDFS block size for the HDFS filesystem used. This value is used as the default split size if inputSplitSizeMB is not specified
An offset to the start of the record in each binary data block.
An offset from the end of the record to the end of the binary data block.
A number of bytes to skip at the beginning of each file
A number of bytes to skip at the end of each file
If true, a record id field will be prepended to each record.
Specifies a policy to transform the input schema. The default policy is to keep the schema exactly as it is in the copybook.
Specifies if and how strings should be trimmed when parsed.
If true, partial ASCII records can be parsed (in cases when LF character is missing for example)
Parameters specific to reading multisegment files
A comment truncation policy
If true, string values that contain only zero bytes (0x0) will be considered null.
If true the parser will drop all FILLER fields, even GROUP FILLERS that have non-FILLER nested fields
If true the parser will drop all value FILLER fields
Specifies the strategy of renaming FILLER names to make them unique
A list of non-terminals (GROUPS) to combine and parse as primitive fields
Specifies if debugging fields need to be added and what should they contain (false, hex, raw).
A parser used to parse data field record headers
An optional additional option string passed to a custom record header parser
A column name to add to the dataframe. The column will contain the input file name for each record, similar to the 'input_file_name()' function
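Many of the reader parameters above map directly to Spark data source options. A minimal sketch of a fixed-length read, assuming the `spark-cobol` data source is on the classpath; paths are placeholders, and option names follow the Cobrix documentation, so verify them against the version you use:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("cobrix-fixed-length-example")
  .getOrCreate()

// Read a fixed-length EBCDIC file described by a copybook.
// Paths and option values are illustrative.
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")        // copybook path
  .option("encoding", "ebcdic")                       // isEbcdic = true
  .option("ebcdic_code_page", "cp037")                // EBCDIC-to-ASCII/Unicode conversion
  .option("schema_retention_policy", "collapse_root") // schema transform policy
  .option("generate_record_id", "true")               // prepend a record id field
  .load("/path/to/data")

df.printSchema()
```

The options are passed as strings, so boolean and numeric values are quoted; the reader converts them to the corresponding parameter types described above.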
This class holds the parameters currently used for parsing variable-length records.
Specifies whether input files have 4-byte record length headers
Block descriptor word (if specified), for FB and VB record formats
Specifies whether the RDW is big-endian. It may depend on the flavor of the mainframe and/or the mainframe-to-PC transfer method
Specifies whether the RDW counts itself as part of the record length
Controls how a mismatch between the RDW and the record length is handled
An optional custom record header parser for non-standard RDWs
An optional custom raw record parser class for non-standard record types
An optional additional option string passed to a custom record header parser
An optional additional option string passed to a custom record extractor
A field that stores record length
A number of bytes to skip at the beginning of each file
A number of bytes to skip at the end of each file
Generates a sequential record number for each record so that the order of the original data can be retained
Specifies whether the input file should be indexed before processing
The number of records to include in each partition. Note that mainframe records may have variable size; inputSplitSizeMB is the recommended option
A partition size to target. In certain circumstances the actual size may differ, but the library will make a best effort to target that size
Tries to improve locality by extracting preferred locations for variable-length records
Optimizes cluster usage when locality optimization is enabled and new nodes are present (nodes that do not contain any blocks of the files being processed)
A column name to add to the dataframe. The column will contain the input file name for each record, similar to the 'input_file_name()' function
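For variable-length records, the RDW-related parameters above control how record boundaries are found. A hedged sketch of reading an RDW-prefixed file, with placeholder paths and option names taken from the Cobrix documentation (check them against your version):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("cobrix-variable-length-example")
  .getOrCreate()

// Read a variable-length file whose records carry 4-byte RDW headers.
// Option values are illustrative.
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .option("record_format", "V")         // records have RDW headers
  .option("is_rdw_big_endian", "true")  // depends on the mainframe-to-PC transfer method
  .option("generate_record_id", "true") // retain the original record order
  .option("input_split_size_mb", "100") // target partition size
  .load("/path/to/data")
```

Because variable-length records cannot be split at arbitrary byte offsets, the reader indexes the file first (when requested) and then uses the split-size options to build partitions.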
This class holds parameters for the job.
String containing the path to the copybook in a given file system.
Sequence containing the paths to the copybooks.
String containing the actual content of the copybook. Either this, the copybookPath, or multiCopybookPath parameter must be specified.
The list of source file paths.
The record format (F, V, VB, D)
[deprecated by recordFormat] If true the input data consists of text files where records are separated by a line ending character
If true, the input data file encoding is EBCDIC; otherwise it is ASCII
Specifies what code page to use for EBCDIC to ASCII/Unicode conversions
An optional custom code page conversion class provided by a user
A charset for ASCII data
If true UTF-16 is considered big-endian.
A format of floating-point numbers
A number of bytes to skip at the beginning of the record before parsing a record according to a copybook
A number of bytes to skip at the end of each record
Specifies the length of the record, disregarding the copybook record size. Implies the file has a fixed record length.
VariableLengthParameters containing the specifications for the consumption of variable-length Cobol records.
If true, OCCURS DEPENDING ON data size will depend on the number of elements
A copybook usually has a root group struct element that acts like a rowtag in XML. This can be retained in Spark schema or can be collapsed
Specifies if and how strings should be trimmed when parsed
If true, partial ASCII records can be parsed (in cases when LF character is missing for example)
Parameters for reading multisegment mainframe files
A comment truncation policy
If true, string values that contain only zero bytes (0x0) will be considered null.
If true the parser will drop all FILLER fields, even GROUP FILLERS that have non-FILLER nested fields
If true the parser will drop all value FILLER fields
A list of non-terminals (GROUPS) to combine and parse as primitive fields
Specifies if debugging fields need to be added and what should they contain (false, hex, raw).
If true the fixed length file reader won't check file size divisibility. Useful for debugging binary file / copybook mismatches.
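Instead of a copybook path, the copybook contents can be supplied inline, which is convenient for tests. A minimal sketch, assuming the `spark-cobol` data source is available; the copybook layout, paths, and option names are illustrative placeholders:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("cobrix-copybook-contents-example")
  .getOrCreate()

// A hypothetical copybook passed as a string (copybookContent).
val copybook =
  """       01  RECORD.
    |           05  ID        PIC 9(4).
    |           05  NAME      PIC X(10).
    |""".stripMargin

val df = spark.read
  .format("cobol")
  .option("copybook_contents", copybook)    // copybook content instead of a path
  .option("encoding", "ebcdic")
  .option("string_trimming_policy", "both") // trim strings on both sides
  .option("drop_group_fillers", "true")     // drop GROUP FILLER fields
  .load("/path/to/data1", "/path/to/data2") // multiple source paths
```

Exactly one of the copybook path, the list of copybook paths, or the copybook contents must be provided, matching the three copybook parameters described above.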