Use Job.addArchiveToClassPath(Path) instead.
Use Job.addCacheArchive(URI) instead.
Use Job.addCacheFile(URI) instead.
Use Job.addFileToClassPath(Path) instead.
Add a Path to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
Add a Path to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
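The six entries above are the mapred and mapreduce variants of the same idea: inputs are registered per path, optionally with a per-path InputFormat and Mapper. A minimal sketch against the org.apache.hadoop.mapreduce API (the paths and the TextLineMapper stub are hypothetical):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class InputSetup {
      // Hypothetical mapper stub; identity map is type-correct here.
      static class TextLineMapper extends Mapper<LongWritable, Text, LongWritable, Text> { }

      public static void configure(Job job) throws Exception {
        // Simplest case: one path, read with the job's single InputFormat/Mapper.
        FileInputFormat.addInputPath(job, new Path("/data/logs"));
        // Per-path InputFormat and Mapper, as in the MultipleInputs entries above.
        MultipleInputs.addInputPath(job, new Path("/data/text"),
            TextInputFormat.class, TextLineMapper.class);
      }
    }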
Adds a Mapper class to the chain mapper.
Adds a Mapper class to the chain reducer.
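A sketch of how addMapper builds a chain in the new API. The ParseMapper and FilterMapper classes are hypothetical; each stage's output key/value classes must match the next stage's input classes:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;

    public class ChainSetup {
      // Hypothetical first stage: (LongWritable, Text) -> (Text, Text).
      static class ParseMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override protected void map(LongWritable key, Text value, Context ctx)
            throws IOException, InterruptedException {
          ctx.write(new Text("k"), value); // hypothetical parsing
        }
      }
      // Hypothetical second stage; identity map is type-correct here.
      static class FilterMapper extends Mapper<Text, Text, Text, Text> { }

      public static void configure(Job job) throws IOException {
        ChainMapper.addMapper(job, ParseMapper.class,
            LongWritable.class, Text.class, Text.class, Text.class, new Configuration(false));
        ChainMapper.addMapper(job, FilterMapper.class,
            Text.class, Text.class, Text.class, Text.class, new Configuration(false));
      }
    }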
BackupStore is a utility class that is used to support the mark-reset functionality of the values iterator.
Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead.
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead.
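A sketch of configuring BinaryPartitioner to partition on a slice of the key bytes; the offset values are illustrative:

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner;

    public class BinaryPartitionSetup {
      public static void configure(Job job) {
        job.setPartitionerClass(BinaryPartitioner.class);
        // Partition on bytes [0, 3] of each BinaryComparable key.
        BinaryPartitioner.setOffsets(job.getConfiguration(), 0, 3);
      }
    }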
The Chain class provides all the common functionality for the ChainMapper and the ChainReducer classes.
Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, int) instead.
Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead.
Use OutputCommitter.commitJob(JobContext) and OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
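The three deprecation notes above all point the same way: per-job cleanup moved from cleanupJob to the commitJob/abortJob pair. A skeleton committer, as a sketch only:

    import java.io.IOException;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.JobStatus;
    import org.apache.hadoop.mapreduce.OutputCommitter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Skeleton committer showing the commitJob/abortJob pair that replaces cleanupJob.
    public class SketchCommitter extends OutputCommitter {
      @Override public void setupJob(JobContext context) throws IOException { }
      @Override public void commitJob(JobContext context) throws IOException {
        // Promote job output to its final location here.
      }
      @Override public void abortJob(JobContext context, JobStatus.State state) throws IOException {
        // Clean up partial output for a failed or killed job here.
      }
      @Override public void setupTask(TaskAttemptContext context) throws IOException { }
      @Override public boolean needsTaskCommit(TaskAttemptContext context) throws IOException { return false; }
      @Override public void commitTask(TaskAttemptContext context) throws IOException { }
      @Override public void abortTask(TaskAttemptContext context) throws IOException { }
    }

commitJob runs once for the whole job after all tasks have committed; abortJob is its failure-path counterpart.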
Clone a JobContext or TaskAttemptContext with a new configuration.
Close the JobClient.
Close this RecordWriter to future operations.
Close this InputSplit to future operations.
Close this RecordWriter to future operations.
Close the Cluster.
Close this RecordWriter to future operations.
Close this RecordWriter to future operations.
Default implementation offers MultiFilterRecordReader.emit(org.apache.hadoop.mapred.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
Default implementation offers MultiFilterRecordReader.emit(org.apache.hadoop.mapreduce.lib.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
An InputFormat that returns CombineFileSplits in the InputFormat.getSplits(JobConf, int) method.
An InputFormat that returns CombineFileSplits in the InputFormat.getSplits(JobContext) method.
CombineFileSplit.
CombineFileSplit.
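A sketch of subclassing CombineFileInputFormat to pack many small files into combined splits; SmallFilesInputFormat, the 64 MB cap, and the stub FileLineReader are all hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

    public class SmallFilesInputFormat extends CombineFileInputFormat<LongWritable, Text> {
      public SmallFilesInputFormat() {
        setMaxSplitSize(64 * 1024 * 1024); // cap each combined split at ~64 MB (illustrative)
      }

      @Override
      public RecordReader<LongWritable, Text> createRecordReader(
          InputSplit split, TaskAttemptContext context) throws IOException {
        return new CombineFileRecordReader<LongWritable, Text>(
            (CombineFileSplit) split, context, FileLineReader.class);
      }

      // Hypothetical per-chunk reader stub; a real one would open
      // split.getPath(index) and iterate its records.
      public static class FileLineReader extends RecordReader<LongWritable, Text> {
        public FileLineReader(CombineFileSplit split, TaskAttemptContext context, Integer index) { }
        @Override public void initialize(InputSplit split, TaskAttemptContext context) { }
        @Override public boolean nextKeyValue() { return false; }
        @Override public LongWritable getCurrentKey() { return null; }
        @Override public Text getCurrentValue() { return null; }
        @Override public float getProgress() { return 1.0f; }
        @Override public void close() { }
      }
    }

CombineFileRecordReader instantiates the per-chunk reader reflectively, which is why the stub needs that three-argument constructor.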
JobConf.
JobConf.
Configuration.
A group of Counters that logically belong together.
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
A group of counters, comprising counters from a particular counter Enum class.
Use CombineFileInputFormat.createPool(List).
Use CombineFileInputFormat.createPool(PathFilter...).
DBWritable.
An InputFormat that delegates behaviour of paths to multiple other InputFormats.
An InputFormat that delegates behavior of paths to multiple other InputFormats.
A Mapper that delegates behaviour of paths to multiple other mappers.
A Mapper that delegates behavior of paths to multiple other mappers.
TaggedInputSplit.
extendInternal at least once.
InputFormat.
InputFormats.
An OutputCommitter that commits files specified in the job output directory, i.e.
An OutputCommitter that commits files specified in the job output directory, i.e.
OutputFormat.
A base class for OutputFormats that read from FileSystems.
FilterRecordWriter is a convenience wrapper class that implements RecordWriter.
FilterRecordWriter is a convenience wrapper class that extends the RecordWriter.
Use Counters.findCounter(String, String) instead.
Convert a stringified (by Counters.makeEscapedCompactString()) counter representation into a counter object.
Use Cluster.getAllJobStatuses() instead.
Use JobContext.getArchiveClassPaths() instead.
Use JobContext.getArchiveTimestamps() instead.
Get the flag which, if set to true, means SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
Get the flag which, if set to true, means SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
Use JobContext.getCacheArchives() instead.
Use JobContext.getCacheFiles() instead.
Creates a Configuration for the Map or Reduce in the chain.
Use Counters.Group.findCounter(String) instead.
Get the Counters.Counter of the given group with the given name.
Get the Counters.Counter of the given group with the given name.
Get the Counter for the given counterName.
Get the Counter for the given groupName and counterName.
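The getCounter entries above come in two flavors, by enum and by group/name. A hypothetical mapper using both:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CountingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      // Hypothetical counter enum; the framework groups these under the enum's class name.
      enum Records { GOOD, BAD }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        if (value.getLength() == 0) {
          // Counter looked up by enum...
          context.getCounter(Records.BAD).increment(1);
        } else {
          // ...or by explicit group and counter name.
          context.getCounter("app", "good-records").increment(1);
          context.write(value, new LongWritable(1));
        }
      }
    }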
Use JobContext.getFileClassPaths() instead.
Use JobContext.getFileTimestamps() instead.
Get the user-defined RawComparator comparator for grouping keys of inputs to the reduce.
Get the user-defined RawComparator comparator for grouping keys of inputs to the reduce.
Get the InputFormat implementation for the map-reduce job, defaults to TextInputFormat if not specified explicitly.
Get the InputFormat class for the job.
Get the InputFormat class for the job.
Get the list of input Paths for the map-reduce job.
Get the list of input Paths for the map-reduce job.
Get the InputSplit object for a map.
Creates a new Job with no particular Cluster.
Creates a new Job with no particular Cluster and a given Configuration.
Creates a new Job with no particular Cluster and a given jobName.
Creates a new Job with no particular Cluster and given Configuration and JobStatus.
Use Job.getInstance().
Use Job.getInstance(Configuration).
Creates a new Job with no particular Cluster and given Configuration and JobStatus.
Get a RunningJob object to track an ongoing job.
Use JobClient.getJob(JobID).
Use RunningJob.getID().
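Per the deprecation notes above, jobs are obtained from the Job.getInstance factory methods rather than the Job constructors. A minimal sketch (the job name is arbitrary):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobSetup {
      public static Job newJob() throws Exception {
        Configuration conf = new Configuration();
        // Preferred over the deprecated Job constructors.
        return Job.getInstance(conf, "example-job");
      }
    }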
Returns the JobID object that this task attempt belongs to.
Returns the JobID object that this tip belongs to.
Get the JobPriority for this job.
Returns a snapshot of the current status, JobStatus, of the Job.
Use SequenceFileRecordReader.next(Object, Object).
Get the KeyFieldBasedComparator options.
Get the KeyFieldBasedComparator options.
Get the KeyFieldBasedPartitioner options.
Get the KeyFieldBasedPartitioner options.
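The comparator and partitioner options use a sort(1)-like key specification. A sketch of setting both on a JobConf (the -k specs are illustrative):

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.KeyFieldBasedComparator;
    import org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner;

    public class KeyFieldSetup {
      public static void configure(JobConf conf) {
        conf.setOutputKeyComparatorClass(KeyFieldBasedComparator.class);
        conf.setPartitionerClass(KeyFieldBasedPartitioner.class);
        // Sort on the second field, numerically and in reverse.
        conf.setKeyFieldComparatorOptions("-k2,2nr");
        // Partition on the first field only.
        conf.setKeyFieldPartitionerOptions("-k1,1");
      }
    }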
InputSplit.
Use JobContext.getLocalCacheArchives() instead.
Use JobContext.getCacheArchives().
Use JobContext.getLocalCacheFiles() instead.
Use JobContext.getCacheFiles().
Get a WrappedMapper.Context for custom implementations.
Get the CompressionCodec for compressing the map outputs.
Get the Mapper class for the job.
Get the Mapper class for the job.
Get the Mapper class for the job.
Get the MapRunnable class for the job.
Defaults to true.
Use JobClient.getMapTaskReports(JobID).
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapreduce.map.maxattempts property.
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapreduce.reduce.maxattempts property.
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
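A sketch of the setter side of these properties, using the JobConf convenience methods (the attempt count is illustrative):

    import org.apache.hadoop.mapred.JobConf;

    public class RetrySetup {
      public static void configure(JobConf conf) {
        // Equivalent to setting the mapreduce.map.maxattempts /
        // mapreduce.reduce.maxattempts properties directly.
        conf.setMaxMapAttempts(4);
        conf.setMaxReduceAttempts(4);
      }
    }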
Use TaskStatus.getMaxStringSize() to control the max-size of strings in TaskStatus.
Use JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask().
Get the OutputCommitter implementation for the map-reduce job, defaults to FileOutputCommitter if not specified explicitly.
Get the OutputCommitter for the task-attempt.
Get the SequenceFile.CompressionType for the output SequenceFile.
Get the SequenceFile.CompressionType for the output SequenceFile.
Get the CompressionCodec for compressing the job outputs.
Get the CompressionCodec for compressing the job outputs.
Get the OutputFormat implementation for the map-reduce job, defaults to TextOutputFormat if not specified explicitly.
Get the OutputFormat class for the job.
Get the OutputFormat class for the job.
Get the RawComparator comparator used to compare keys.
Get the Path to the output directory for the map-reduce job.
Get the Path to the output directory for the map-reduce job.
Get the user-defined WritableComparable comparator for grouping keys of inputs to the reduce.
Use Object.hashCode() to partition.
Use BinaryComparable.getBytes() to partition.
Use Object.hashCode() to partition.
Get the Partitioner used to partition Mapper-outputs to be sent to the Reducers.
Get the Partitioner class for the job.
Get the Partitioner class for the job.
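A hypothetical Partitioner using the same Object.hashCode() scheme the HashPartitioner entries describe:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class WordPartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Mask off the sign bit so the result is non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }

It is registered with job.setPartitionerClass(WordPartitioner.class).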
Helper function to generate a Path for a file that is unique for the task within the job output directory.
Helper function to generate a Path for a file that is unique for the task within the job output directory.
RecordReader consumed, i.e.
Get the RecordReader for the given InputSplit.
Get the RecordReader for the given InputSplit.
Get the RecordWriter for the given job.
Get the RecordWriter for the given job.
Get the RecordWriter for the given task.
Get the RecordWriter for the given task.
Get the Reducer class for the job.
Get the Reducer class for the job.
Get the Reducer class for the job.
Get a WrappedReducer.Context for custom implementations.
Defaults to true.
Use JobClient.getReduceTaskReports(JobID).
Gets the character representing the TaskType.
Get the key class for the SequenceFile.
Get the key class for the SequenceFile.
Get the value class for the SequenceFile.
Get the value class for the SequenceFile.
Get the RawComparator comparator used to compare keys.
Get the RawComparator comparator used to compare keys.
Defaults to true.
Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big.
Use TaskCompletionEvent.getTaskAttemptId() instead.
Returns the TaskID object that this task attempt belongs to.
Use TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer).
Gets the TaskType corresponding to the character.
Use SequenceFileRecordReader.next(Object, Object).
Get the Path to the task's temporary output directory for the map-reduce job.
Get the Path to the task's temporary output directory for the map-reduce job.
Get the QueueACL name for the given queue.
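Side files written under the task's temporary output directory are promoted by the committer only if the task commits, and discarded otherwise. A sketch using the mapred API (the file name and contents are hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class SideFiles {
      public static void writeSideFile(JobConf conf) throws IOException {
        // Resolve the task's temporary output directory.
        Path dir = FileOutputFormat.getWorkOutputPath(conf);
        Path file = new Path(dir, "side-data"); // hypothetical file name
        FSDataOutputStream out = file.getFileSystem(conf).create(file);
        out.writeUTF("auxiliary output");
        out.close();
      }
    }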
Partition keys by their Object.hashCode().
Partition keys by their Object.hashCode().
IFile is the simple format for the intermediate map-outputs in Map-Reduce.
IFile.Reader to read intermediate map-outputs.
IFile.Writer to write out intermediate map-outputs.
Increments the counter identified by the key, which can be of any Enum type, by the specified amount.
IFile.InMemoryReader to read map-outputs present in-memory.
InputFormat describes the input-specification for a Map-Reduce job.
InputFormat describes the input-specification for a Map-Reduce job.
Utility for collecting samples and writing a partition file for TotalOrderPartitioner.
InputFormat.
InputSplit represents the data to be processed by an individual Mapper.
InputSplit represents the data to be processed by an individual Mapper.
A Mapper that swaps keys and values.
A Mapper that swaps keys and values.
JobClient is the primary interface for the user-job to interact with the cluster.
Build a job client with the given JobConf, and connect to the default cluster.
Build a job client with the given Configuration, and connect to the default cluster.
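A sketch of wiring InverseMapper into a job, e.g. as the map stage of a sort-by-value pass; the output classes assume (Text, LongWritable) input records, so this is illustrative only:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.map.InverseMapper;

    public class SwapSetup {
      public static void configure(Job job) {
        // Emit (value, key) pairs, e.g. to sort word counts by count in a second job.
        job.setMapperClass(InverseMapper.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
      }
    }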