Modifier and Type | Class and Description |
---|---|
class | FileSplit<br>A section of an input file. |
class | MultiFileSplit<br>A sub-collection of input files. |
Modifier and Type | Method and Description |
---|---|
RecordReader<K,V> | CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
Modifier and Type | Method and Description |
---|---|
InputSplit | MapContext.getInputSplit()<br>Get the input split for this map. |
Modifier and Type | Method and Description |
---|---|
abstract List<InputSplit> | InputFormat.getSplits(JobContext context)<br>Logically split the set of input files for the job. |
Modifier and Type | Method and Description |
---|---|
abstract RecordReader<K,V> | InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)<br>Create a record reader for a given split. |
abstract void | RecordReader.initialize(InputSplit split, TaskAttemptContext context)<br>Called once at initialization. |
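The three methods above define the read-side contract: the framework asks the InputFormat for splits, then for each split creates a RecordReader and calls initialize() exactly once before iterating records. The following is a hypothetical, self-contained sketch of that lifecycle using toy Split and Reader types, not the real Hadoop API:

```java
// Hypothetical mini-version of the getSplits / createRecordReader / initialize
// contract described above. Split and Reader are stand-ins, not Hadoop classes.
import java.util.ArrayList;
import java.util.List;

public class InputFormatLifecycleSketch {
    interface Split { String describe(); }

    interface Reader {
        void initialize(Split s);   // called once before reading, like RecordReader.initialize
        boolean nextKeyValue();
        String current();
    }

    // Toy "input format": one split per input string.
    static List<Split> getSplits(List<String> inputs) {
        List<Split> splits = new ArrayList<>();
        for (String s : inputs) {
            splits.add(() -> s);
        }
        return splits;
    }

    // Toy "record reader": yields its split's content exactly once.
    static Reader createReader() {
        return new Reader() {
            private String value;
            private boolean consumed;
            public void initialize(Split s) { value = s.describe(); consumed = false; }
            public boolean nextKeyValue() { if (consumed) return false; consumed = true; return true; }
            public String current() { return value; }
        };
    }

    public static void main(String[] args) {
        for (Split s : getSplits(List.of("a", "b"))) {
            Reader r = createReader();
            r.initialize(s);                 // once per split
            while (r.nextKeyValue()) {
                System.out.println(r.current());
            }
        }
    }
}
```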
Modifier and Type | Class and Description |
---|---|
static class | DataDrivenDBInputFormat.DataDrivenDBInputSplit<br>An InputSplit that spans a set of rows. |
static class | DBInputFormat.DBInputSplit<br>An InputSplit that spans a set of rows. |
Modifier and Type | Method and Description |
---|---|
List<InputSplit> | DataDrivenDBInputFormat.getSplits(JobContext job)<br>Logically split the set of input files for the job. |
List<InputSplit> | DBInputFormat.getSplits(JobContext job)<br>Logically split the set of input files for the job. |
List<InputSplit> | IntegerSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName) |
List<InputSplit> | FloatSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName) |
List<InputSplit> | DateSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName) |
List<InputSplit> | DBSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)<br>Given a ResultSet containing one record (and already advanced to that record) with two columns (a low value and a high value, both of the same type), determine a set of splits that span the given values. |
List<InputSplit> | BooleanSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName) |
List<InputSplit> | BigDecimalSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName) |
List<InputSplit> | TextSplitter.split(org.apache.hadoop.conf.Configuration conf, ResultSet results, String colName)<br>This method needs to determine the splits between two user-provided strings. |
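The core idea behind the DBSplitter implementations above is to divide a [low, high] range taken from the ResultSet into contiguous sub-ranges. A minimal sketch of that range-splitting step, in the spirit of IntegerSplitter but with hypothetical names and plain long values rather than the Hadoop API:

```java
// Sketch of DBSplitter-style range splitting: divide [low, high] into
// contiguous ranges of roughly equal size. Not the Hadoop implementation.
import java.util.ArrayList;
import java.util.List;

public class RangeSplitSketch {
    /** Divide the inclusive range [low, high] into chunks sized for numSplits. */
    static List<long[]> split(long low, long high, int numSplits) {
        List<long[]> splits = new ArrayList<>();
        long total = high - low + 1;
        long chunk = Math.max(1, total / numSplits);
        long start = low;
        while (start <= high) {
            long end = Math.min(high, start + chunk - 1);
            splits.add(new long[] { start, end });  // inclusive bounds
            start = end + 1;
        }
        return splits;
    }

    public static void main(String[] args) {
        // 100 values across 4 splits: 0-24, 25-49, 50-74, 75-99
        for (long[] r : split(0, 99, 4)) {
            System.out.println(r[0] + " - " + r[1]);
        }
    }
}
```

Real splitters must also handle types without a uniform notion of distance (dates, strings, booleans), which is why each column type gets its own DBSplitter implementation.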
Modifier and Type | Method and Description |
---|---|
RecordReader<org.apache.hadoop.io.LongWritable,T> | DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)<br>Create a record reader for a given split. |
void | DBRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
Modifier and Type | Class and Description |
---|---|
class | CombineFileSplit<br>A sub-collection of input files. |
Modifier and Type | Method and Description |
---|---|
List<InputSplit> | CombineFileInputFormat.getSplits(JobContext job) |
List<InputSplit> | DelegatingInputFormat.getSplits(JobContext job) |
List<InputSplit> | FileInputFormat.getSplits(JobContext job)<br>Generate the list of files and make them into FileSplits. |
List<InputSplit> | NLineInputFormat.getSplits(JobContext job)<br>Logically splits the set of input files for the job, treating every N lines of the input as one split. |
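The grouping rule NLineInputFormat's description states, every N lines become one split, can be sketched without any Hadoop dependencies. The following is a toy illustration over an in-memory list of lines; the real implementation works over file byte offsets:

```java
// Minimal sketch of N-line grouping as described for NLineInputFormat:
// consecutive chunks of at most n lines each. Names are illustrative only.
import java.util.ArrayList;
import java.util.List;

public class NLineSketch {
    /** Group lines into consecutive chunks of at most n lines. */
    static List<List<String>> splitByLines(List<String> lines, int n) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += n) {
            splits.add(lines.subList(i, Math.min(lines.size(), i + n)));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("l1", "l2", "l3", "l4", "l5");
        System.out.println(splitByLines(lines, 2));  // [[l1, l2], [l3, l4], [l5]]
    }
}
```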
Modifier and Type | Method and Description |
---|---|
abstract RecordReader<K,V> | CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)<br>This is not implemented yet. |
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | SequenceFileAsTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | CombineTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<K,V> | DelegatingInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<K,V> | SequenceFileInputFilter.createRecordReader(InputSplit split, TaskAttemptContext context)<br>Create a record reader for the given split. |
RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> | KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
RecordReader<K,V> | SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<K,V> | CombineSequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.BytesWritable> | FixedLengthInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> | SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> | NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
void | CombineFileRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | KeyValueLineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
void | DelegatingRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | LineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
void | FixedLengthRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
void | SequenceFileAsTextRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | CombineFileRecordReaderWrapper.initialize(InputSplit split, TaskAttemptContext context) |
void | SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | SequenceFileRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
Constructor and Description |
---|
DelegatingRecordReader(InputSplit split, TaskAttemptContext context)<br>Constructs the DelegatingRecordReader. |
Modifier and Type | Class and Description |
---|---|
class | CompositeInputSplit<br>This InputSplit contains a set of child InputSplits. |
Modifier and Type | Method and Description |
---|---|
InputSplit | CompositeInputSplit.get(int i)<br>Get the ith child InputSplit. |
Modifier and Type | Method and Description |
---|---|
List<InputSplit> | CompositeInputFormat.getSplits(JobContext job)<br>Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |
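The pairing rule described above is essentially a zip: the ith split from every child input format lands in the ith composite split. A self-contained sketch of that composition step, using plain strings in place of real InputSplits (all names here are illustrative):

```java
// Sketch of CompositeInputFormat's split pairing: combine the i-th split
// from each child list into the i-th composite split. Toy types only.
import java.util.ArrayList;
import java.util.List;

public class CompositeSplitSketch {
    /** Zip the i-th element of every child list into one composite list. */
    static List<List<String>> compose(List<List<String>> childSplits) {
        int n = childSplits.get(0).size();
        for (List<String> child : childSplits) {
            if (child.size() != n) {
                // joins only make sense when every child yields the same split count
                throw new IllegalArgumentException("children must produce equal split counts");
            }
        }
        List<List<String>> composite = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            List<String> slot = new ArrayList<>();
            for (List<String> child : childSplits) {
                slot.add(child.get(i));
            }
            composite.add(slot);
        }
        return composite;
    }

    public static void main(String[] args) {
        List<List<String>> children = List.of(List.of("a0", "a1"), List.of("b0", "b1"));
        System.out.println(compose(children));  // [[a0, b0], [a1, b1]]
    }
}
```

This equal-count requirement is why composite joins need the child formats to partition their inputs identically.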
Modifier and Type | Method and Description |
---|---|
void | CompositeInputSplit.add(InputSplit s)<br>Add an InputSplit to this collection. |
RecordReader<K,TupleWritable> | CompositeInputFormat.createRecordReader(InputSplit split, TaskAttemptContext taskContext)<br>Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
abstract ComposableRecordReader<K,V> | ComposableInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
void | MultiFilterRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | WrappedRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
void | CompositeRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
Modifier and Type | Method and Description |
---|---|
InputSplit | WrappedMapper.Context.getInputSplit()<br>Get the input split for this map. |
Modifier and Type | Method and Description |
---|---|
static <T extends InputSplit> | JobSplitWriter.createSplitFiles(org.apache.hadoop.fs.Path jobSubmitDir, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, List<InputSplit> splits) |
static <T extends InputSplit> | JobSplitWriter.createSplitFiles(org.apache.hadoop.fs.Path jobSubmitDir, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, T[] splits) |
Constructor and Description |
---|
SplitMetaInfo(InputSplit split, long startOffset) |
TaskSplitMetaInfo(InputSplit split, long startOffset) |
Modifier and Type | Method and Description |
---|---|
InputSplit | MapContextImpl.getInputSplit()<br>Get the input split for this map. |
Constructor and Description |
---|
MapContextImpl(org.apache.hadoop.conf.Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split) |
Copyright © 2017 Apache Software Foundation. All Rights Reserved.