Uses of Class
org.apache.hadoop.mapreduce.RecordReader

Packages that use RecordReader
org.apache.hadoop.mapred.lib   
org.apache.hadoop.mapred.lib.db   
org.apache.hadoop.mapreduce   
org.apache.hadoop.mapreduce.lib.db   
org.apache.hadoop.mapreduce.lib.input   
org.apache.hadoop.mapreduce.lib.join   
org.apache.hadoop.mapreduce.task   
 

Uses of RecordReader in org.apache.hadoop.mapred.lib
 

Methods in org.apache.hadoop.mapred.lib that return RecordReader
 RecordReader<K,V> CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 

Uses of RecordReader in org.apache.hadoop.mapred.lib.db
 

Subclasses of RecordReader in org.apache.hadoop.mapred.lib.db
protected  class DBInputFormat.DBRecordReader
          A RecordReader that reads records from a SQL table.
 

Uses of RecordReader in org.apache.hadoop.mapreduce
 

Methods in org.apache.hadoop.mapreduce that return RecordReader
abstract  RecordReader<K,V> InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for a given split.
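As an illustrative sketch only (the class name MyTextInputFormat and the reuse of LineRecordReader are assumptions, not part of this page), a custom InputFormat typically implements this method by instantiating a concrete RecordReader and letting the framework initialize it:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Hypothetical InputFormat that returns line-oriented records.
public class MyTextInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    // The framework calls initialize(split, context) on the returned
    // reader before the first call to nextKeyValue().
    return new LineRecordReader();
  }
}
```

Note that createRecordReader only constructs the reader; per the RecordReader contract, initialization against the split happens in a separate initialize call.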
 

Methods in org.apache.hadoop.mapreduce with parameters of type RecordReader
static <K1,V1,K2,V2> Mapper.Context ContextFactory.cloneMapContext(MapContext<K1,V1,K2,V2> context, org.apache.hadoop.conf.Configuration conf, RecordReader<K1,V1> reader, RecordWriter<K2,V2> writer)
          Copy a custom WrappedMapper.Context, optionally replacing the input and output.
 

Uses of RecordReader in org.apache.hadoop.mapreduce.lib.db
 

Subclasses of RecordReader in org.apache.hadoop.mapreduce.lib.db
 class DataDrivenDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a SQL table, using data-driven WHERE clause splits.
 class DBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a SQL table.
 class MySQLDataDrivenDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a MySQL table via DataDrivenDBRecordReader.
 class MySQLDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from a MySQL table.
 class OracleDataDrivenDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from an Oracle table via DataDrivenDBRecordReader.
 class OracleDBRecordReader<T extends DBWritable>
          A RecordReader that reads records from an Oracle SQL table.
 

Methods in org.apache.hadoop.mapreduce.lib.db that return RecordReader
protected  RecordReader<org.apache.hadoop.io.LongWritable,T> DBInputFormat.createDBRecordReader(DBInputFormat.DBInputSplit split, org.apache.hadoop.conf.Configuration conf)
           
protected  RecordReader<org.apache.hadoop.io.LongWritable,T> DataDrivenDBInputFormat.createDBRecordReader(DBInputFormat.DBInputSplit split, org.apache.hadoop.conf.Configuration conf)
           
protected  RecordReader<org.apache.hadoop.io.LongWritable,T> OracleDataDrivenDBInputFormat.createDBRecordReader(DBInputFormat.DBInputSplit split, org.apache.hadoop.conf.Configuration conf)
           
 RecordReader<org.apache.hadoop.io.LongWritable,T> DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for a given split.
 

Uses of RecordReader in org.apache.hadoop.mapreduce.lib.input
 

Subclasses of RecordReader in org.apache.hadoop.mapreduce.lib.input
 class CombineFileRecordReader<K,V>
          A generic RecordReader that can hand out different RecordReaders for each chunk in a CombineFileSplit.
 class DelegatingRecordReader<K,V>
          A delegating RecordReader that forwards all calls to the underlying record reader defined in the TaggedInputSplit.
 class KeyValueLineRecordReader
          This class treats a line in the input as a key/value pair separated by a separator character.
 class LineRecordReader
          Treats keys as offsets in the file and values as lines.
static class SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
          Read records from a SequenceFile as binary (raw) bytes.
 class SequenceFileAsTextRecordReader
          This class converts the input keys and values to their String forms by calling the toString() method.
 class SequenceFileRecordReader<K,V>
          A RecordReader for SequenceFiles.
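The RecordReader subclass a job uses is selected indirectly, by choosing an input format. As a hedged sketch (the class name ReaderSelection and the input path are assumptions; Job.getInstance is the Hadoop 2.x factory, older releases used new Job(conf)):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class ReaderSelection {
  // Configure a job whose map tasks will read input through a
  // LineRecordReader: TextInputFormat.createRecordReader returns one,
  // so each key is a byte offset and each value is one line of text.
  public static Job configure(Configuration conf, Path input) throws Exception {
    Job job = Job.getInstance(conf, "line-count");
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, input);
    return job;
  }
}
```

Swapping in KeyValueTextInputFormat instead would give each map task a KeyValueLineRecordReader, with lines split on a separator character rather than keyed by offset.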
 

Fields in org.apache.hadoop.mapreduce.lib.input declared as RecordReader
protected  RecordReader<K,V> CombineFileRecordReader.curReader
           
 

Fields in org.apache.hadoop.mapreduce.lib.input with type parameters of type RecordReader
protected  Constructor<? extends RecordReader<K,V>> CombineFileRecordReader.rrConstructor
           
 

Methods in org.apache.hadoop.mapreduce.lib.input that return RecordReader
 RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context)
           
 RecordReader<K,V> SequenceFileInputFilter.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for the given split.
 RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context)
           
 RecordReader<K,V> SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text> TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<K,V> DelegatingInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> SequenceFileAsTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
abstract  RecordReader<K,V> CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          This is not implemented yet.
 RecordReader<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable> SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 

Constructor parameters in org.apache.hadoop.mapreduce.lib.input with type arguments of type RecordReader
CombineFileRecordReader(CombineFileSplit split, TaskAttemptContext context, Class<? extends RecordReader<K,V>> rrClass)
          A generic RecordReader that can hand out different RecordReaders for each chunk in the CombineFileSplit.
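A sketch of how this constructor is typically used (the classes SmallFilesInputFormat and ChunkReader are hypothetical, not part of this page). The per-chunk reader class passed as rrClass is instantiated reflectively through the rrConstructor field, so it must expose a constructor taking (CombineFileSplit, TaskAttemptContext, Integer):

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

// Hypothetical format that packs many small files into one split and
// lets CombineFileRecordReader hand out one ChunkReader per file chunk.
public class SmallFilesInputFormat extends CombineFileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) throws IOException {
    return new CombineFileRecordReader<LongWritable, Text>(
        (CombineFileSplit) split, context, ChunkReader.class);
  }

  // Minimal per-chunk reader stub; a real one would open the file at
  // split.getPath(idx) and emit its records. The three-argument
  // constructor is the shape CombineFileRecordReader looks up.
  public static class ChunkReader extends RecordReader<LongWritable, Text> {
    public ChunkReader(CombineFileSplit split, TaskAttemptContext context, Integer idx) { }
    @Override public void initialize(InputSplit split, TaskAttemptContext context) { }
    @Override public boolean nextKeyValue() { return false; }
    @Override public LongWritable getCurrentKey() { return null; }
    @Override public Text getCurrentValue() { return null; }
    @Override public float getProgress() { return 0f; }
    @Override public void close() { }
  }
}
```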
 

Uses of RecordReader in org.apache.hadoop.mapreduce.lib.join
 

Subclasses of RecordReader in org.apache.hadoop.mapreduce.lib.join
 class ComposableRecordReader<K extends WritableComparable<?>,V extends Writable>
          Additional operations required of a RecordReader to participate in a join.
 class CompositeRecordReader<K extends WritableComparable<?>,V extends Writable,X extends Writable>
          A RecordReader that can effect joins of RecordReaders sharing a common key type and partitioning.
 class InnerJoinRecordReader<K extends WritableComparable<?>>
          Full inner join.
 class JoinRecordReader<K extends WritableComparable<?>>
          Base class for Composite joins returning Tuples of arbitrary Writables.
 class MultiFilterRecordReader<K extends WritableComparable<?>,V extends Writable>
          Base class for Composite joins returning values derived from multiple sources, but generally not tuples.
 class OuterJoinRecordReader<K extends WritableComparable<?>>
          Full outer join.
 class OverrideRecordReader<K extends WritableComparable<?>,V extends Writable>
          Prefer the "rightmost" data source for this key.
 class WrappedRecordReader<K extends WritableComparable<?>,U extends Writable>
          Proxy class for a RecordReader participating in the join framework.
 

Methods in org.apache.hadoop.mapreduce.lib.join that return RecordReader
 RecordReader<K,TupleWritable> CompositeInputFormat.createRecordReader(InputSplit split, TaskAttemptContext taskContext)
          Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression.
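As a hedged sketch of how the init expression is usually supplied (the class name JoinSetup and the input paths are assumptions; the expression is commonly built with CompositeInputFormat.compose and stored under the configuration key named by CompositeInputFormat.JOIN_EXPR):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

public class JoinSetup {
  // Build the init expression for an inner join of two sorted,
  // identically partitioned SequenceFile data sets.
  public static String joinExpr(Path left, Path right) {
    return CompositeInputFormat.compose(
        "inner", SequenceFileInputFormat.class, left, right);
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set(CompositeInputFormat.JOIN_EXPR,
        joinExpr(new Path(args[0]), new Path(args[1])));
    Job job = Job.getInstance(conf, "composite-join");
    // createRecordReader will then build a CompositeRecordReader
    // whose children are defined by the expression above.
    job.setInputFormatClass(CompositeInputFormat.class);
  }
}
```

Both inputs must already be sorted on the join key and partitioned identically; the join framework merges them per split rather than re-sorting.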
 

Uses of RecordReader in org.apache.hadoop.mapreduce.task
 

Constructors in org.apache.hadoop.mapreduce.task with parameters of type RecordReader
MapContextImpl(org.apache.hadoop.conf.Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split)
           
 



Copyright © 2013 Apache Software Foundation. All Rights Reserved.