Use Job.addArchiveToClassPath(Path) instead.
Use Job.addCacheArchive(URI) instead.
Use Job.addCacheFile(URI) instead.
Use Job.addFileToClassPath(Path) instead.
Add a Path to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
Add a Path to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
Add a Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
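A minimal sketch of how the per-path overloads above are typically used through the new-API MultipleInputs helper; the mapper classes are supplied by the caller, and the paths shown are placeholders.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class MultiInputSetup {
    // Route each input path to its own InputFormat and Mapper in one job.
    static void configureInputs(Job job,
                                Class<? extends Mapper> textMapper,
                                Class<? extends Mapper> seqMapper) {
        MultipleInputs.addInputPath(job, new Path("/data/logs"),
            TextInputFormat.class, textMapper);
        MultipleInputs.addInputPath(job, new Path("/data/archive"),
            SequenceFileInputFormat.class, seqMapper);
    }
}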
Adds a Mapper class to the chain mapper.
Adds a Mapper class to the chain reducer.
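A minimal sketch of building a mapper/reducer chain with the new-API ChainMapper and ChainReducer; identity Mapper and Reducer classes stand in for real implementations here.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

public class ChainSetup {
    // Pipeline MAP+ / REDUCE MAP*: two chained mappers, one reducer, one post-reduce mapper.
    static void configureChain(Job job) throws Exception {
        Configuration none = new Configuration(false);
        ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, none);
        ChainMapper.addMapper(job, Mapper.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, none);
        ChainReducer.setReducer(job, Reducer.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, none);
        ChainReducer.addMapper(job, Mapper.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, none);
    }
}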
BackupStore is a utility class that is used to support the mark-reset functionality of the values iterator.
Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable.getBytes().
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead.
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead.
The Chain class provides all the common functionality for the ChainMapper and the ChainReducer classes.
Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, int) instead.
Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead.
Use OutputCommitter.commitJob(JobContext) and OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
Clone a JobContext or TaskAttemptContext with a new configuration.
Close the JobClient.
Close this RecordWriter to future operations.
Close this InputSplit to future operations.
Close this RecordWriter to future operations.
Close the Cluster.
Close this RecordWriter to future operations.
Close this RecordWriter to future operations.
Default implementation offers MultiFilterRecordReader.emit(org.apache.hadoop.mapred.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
Default implementation offers MultiFilterRecordReader.emit(org.apache.hadoop.mapreduce.lib.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
An abstract InputFormat that returns CombineFileSplit's in the InputFormat.getSplits(JobConf, int) method.
An abstract InputFormat that returns CombineFileSplit's in the InputFormat.getSplits(JobContext) method.
A generic RecordReader that can hand out different record readers for each chunk in a CombineFileSplit.
A generic RecordReader that can hand out different record readers for each chunk in a CombineFileSplit.
A CombineFileInputFormat-equivalent for SequenceFileInputFormat.
A CombineFileInputFormat-equivalent for SequenceFileInputFormat.
A CombineFileInputFormat-equivalent for TextInputFormat.
A CombineFileInputFormat-equivalent for TextInputFormat.
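A minimal sketch of using CombineTextInputFormat so many small files are packed into fewer splits; the 128 MB cap is an illustrative value, assuming the limit is read from mapreduce.input.fileinputformat.split.maxsize.

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

public class SmallFilesInput {
    // Pack many small files into fewer splits, capped at 128 MB per combined split.
    static void configure(Job job) {
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);
    }
}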
JobConf.
JobConf.
Configuration.
A group of Counters that logically belong together.
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
A group of counters, comprising counters from a particular counter Enum class.
Use CombineFileInputFormat.createPool(List) instead.
Use CombineFileInputFormat.createPool(PathFilter...) instead.
DBWritable.
An InputFormat that delegates behaviour of paths to multiple other InputFormats.
An InputFormat that delegates behavior of paths to multiple other InputFormats.
A Mapper that delegates behaviour of paths to multiple other mappers.
A Mapper that delegates behavior of paths to multiple other mappers.
TaggedInputSplit
extendInternal at least once.
A base class for file-based InputFormat.
A base class for file-based InputFormats.
An OutputCommitter that commits files specified in the job output directory.
An OutputCommitter that commits files specified in the job output directory.
A base class for OutputFormat.
A base class for OutputFormats that read from FileSystems.
FilterRecordWriter is a convenience wrapper class that implements RecordWriter.
FilterRecordWriter is a convenience wrapper class that extends the RecordWriter.
Use Counters.findCounter(String, String) instead.
Convert a stringified (by Counters.makeEscapedCompactString()) counter representation into a counter object.
Use Cluster.getAllJobStatuses() instead.
Use JobContext.getArchiveClassPaths() instead.
Use JobContext.getArchiveTimestamps() instead.
Get the flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
Get the flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
Use JobContext.getCacheArchives() instead.
Use JobContext.getCacheFiles() instead.
The Configuration for the Map or Reduce in the chain.
Get the user-defined WritableComparable comparator for grouping keys of inputs to the combiner.
Get the user-defined RawComparator comparator for grouping keys of inputs to the combiner.
Get the user-defined RawComparator comparator for grouping keys of inputs to the combiner.
Use Counters.Group.findCounter(String) instead.
Get the Counters.Counter of the given group with the given name.
Get the Counters.Counter of the given group with the given name.
Get the Counter for the given counterName.
Get the Counter for the given groupName and counterName.
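A minimal sketch of reading counters after a job completes; the "MyApp"/"BAD_RECORDS" group and counter names are hypothetical.

import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class CounterLookup {
    // Read a framework counter and a custom counter once the job has finished.
    static void report(Job job) throws Exception {
        Counters counters = job.getCounters();
        long mapInputs = counters.findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
        Counter bad = counters.findCounter("MyApp", "BAD_RECORDS"); // hypothetical group/name
        System.out.println("map input records = " + mapInputs + ", bad = " + bad.getValue());
    }
}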
Use Credentials.getToken(org.apache.hadoop.io.Text) instead; this method is included for compatibility against Hadoop-1.
Use JobContext.getFileClassPaths() instead.
Use JobContext.getFileTimestamps() instead.
Get the user-defined RawComparator comparator for grouping keys of inputs to the reduce.
Get the user-defined RawComparator comparator for grouping keys of inputs to the reduce.
Get the InputFormat implementation for the map-reduce job, defaults to TextInputFormat if not specified explicitly.
Get the InputFormat class for the job.
Get the InputFormat class for the job.
Get the list of input Paths for the map-reduce job.
Get the list of input Paths for the map-reduce job.
Get the InputSplit object for a map.
Creates a new Job with no particular Cluster.
Creates a new Job with no particular Cluster and a given Configuration.
Creates a new Job with no particular Cluster and a given jobName.
Creates a new Job with no particular Cluster and given Configuration and JobStatus.
Use Job.getInstance() instead.
Use Job.getInstance(Configuration) instead.
Creates a new Job with no particular Cluster and given Configuration and JobStatus.
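A minimal sketch of the recommended Job.getInstance(Configuration, String) factory in a driver; input and output paths come from the command line, and no explicit Mapper or Reducer is set so the identity defaults apply.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "example job");   // preferred over the deprecated constructors
        job.setJarByClass(JobSetup.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}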
Get a RunningJob object to track an ongoing job.
Use JobClient.getJob(JobID) instead.
Use RunningJob.getID() instead.
Returns the JobID object that this task attempt belongs to.
Returns the JobID object that this tip belongs to.
Get the JobPriority for this job.
Get the JobStatus of the Job.
Use SequenceFileRecordReader.next(Object, Object) instead.
Get the KeyFieldBasedComparator options.
Get the KeyFieldBasedComparator options.
Get the KeyFieldBasedPartitioner options.
Get the KeyFieldBasedPartitioner options.
InputSplit.
Use JobContext.getLocalCacheArchives() instead.
Use JobContext.getCacheArchives() instead.
Use JobContext.getLocalCacheFiles() instead.
Use JobContext.getCacheFiles() instead.
Get a WrappedMapper.Context for custom implementations.
Get the CompressionCodec for compressing the map outputs.
Get the Mapper class for the job.
Get the Mapper class for the job.
Get the Mapper class for the job.
Get the MapRunnable class for the job.
Should speculative execution be used for this job for map tasks? Defaults to true.
Use JobClient.getMapTaskReports(JobID) instead.
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapreduce.map.maxattempts property.
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapreduce.reduce.maxattempts property.
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
Use TaskStatus.getMaxStringSize() to control the max-size of strings in TaskStatus.
Use JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask() instead.
Get the OutputCommitter implementation for the map-reduce job, defaults to FileOutputCommitter if not specified explicitly.
Get the OutputCommitter for the task-attempt.
Get the SequenceFile.CompressionType for the output SequenceFile.
Get the SequenceFile.CompressionType for the output SequenceFile.
Get the CompressionCodec for compressing the job outputs.
Get the CompressionCodec for compressing the job outputs.
Get the OutputFormat implementation for the map-reduce job, defaults to TextOutputFormat if not specified explicitly.
Get the OutputFormat class for the job.
Get the OutputFormat class for the job.
Get the RawComparator comparator used to compare keys.
Get the Path to the output directory for the map-reduce job.
Get the Path to the output directory for the map-reduce job.
Get the user-defined WritableComparable comparator for grouping keys of inputs to the reduce.
Use Object.hashCode() to partition.
Use BinaryComparable.getBytes() to partition.
Use Object.hashCode() to partition.
Get the Partitioner used to partition Mapper-outputs to be sent to the Reducers.
Get the Partitioner class for the job.
Get the Partitioner class for the job.
Use TotalOrderPartitioner.getPartitionFile(Configuration) instead.
Helper function to generate a Path for a file that is unique for the task within the job output directory.
Helper function to generate a Path for a file that is unique for the task within the job output directory.
How much of the input has the RecordReader consumed, i.e. been processed?
Get the RecordReader for the given InputSplit.
Get the RecordReader for the given InputSplit.
Get the RecordWriter for the given job.
Get the RecordWriter for the given job.
Get the RecordWriter for the given task.
Get the RecordWriter for the given task.
Get the Reducer class for the job.
Get the Reducer class for the job.
Get the Reducer class for the job.
Get a WrappedReducer.Context for custom implementations.
Should speculative execution be used for this job for reduce tasks? Defaults to true.
Use JobClient.getReduceTaskReports(JobID) instead.
TaskType
Get the key class for the SequenceFile.
Get the key class for the SequenceFile.
Get the value class for the SequenceFile.
Get the value class for the SequenceFile.
Get the RawComparator comparator used to compare keys.
Get the RawComparator comparator used to compare keys.
Should speculative execution be used for this job? Defaults to true.
Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big.
TaskCompletionEvent.Status
Use TaskCompletionEvent.getTaskAttemptId() instead.
Returns the TaskID object that this task attempt belongs to.
Use TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer) instead.
Use HostUtil.getTaskLogUrl(String, String, String, String) to construct the taskLogUrl.
TaskCompletionEvent.Status
Gets the TaskType corresponding to the character.
Use SequenceFileRecordReader.next(Object, Object) instead.
Get the Path to the task's temporary output directory for the map-reduce job.
Get the Path to the task's temporary output directory for the map-reduce job.
The QueueACL name for the given queue.
Object.hashCode().
Object.hashCode().
IFile is the simple <key-len, value-len, key, value> format for the intermediate map-outputs.
IFile.Reader to read intermediate map-outputs.
IFile.Writer to write out intermediate map-outputs.
Increments the counter identified by the key, which can be of any Enum type, by the specified amount.
IFile.InMemoryReader to read map-outputs present in-memory.
InputFormat describes the input-specification for a Map-Reduce job.
InputFormat describes the input-specification for a Map-Reduce job.
Utility for collecting samples and writing a partition file for TotalOrderPartitioner.
InputFormat.
InputFormat.
InputSplit represents the data to be processed by an individual Mapper.
InputSplit represents the data to be processed by an individual Mapper.
A Mapper that swaps keys and values.
A Mapper that swaps keys and values.
Use OutputCommitter.isRecoverySupported(JobContext) instead.
Use OutputCommitter.isRecoverySupported(JobContext) instead.
Use CombineFileInputFormat.isSplitable(FileSystem, Path) instead.
Use Job.getInstance() instead.
Use Job.getInstance(Configuration) instead.
Use Job.getInstance(Configuration, String) instead.
JobClient is the primary interface for the user-job to interact with the cluster.
Build a job client with the given JobConf, and connect to the default cluster.
Build a job client with the given Configuration, and connect to the default cluster.
Use JobCounter instead.
JobProfile.
Construct a JobProfile from the userid, jobid, job config-file, job-details url and job name.
Construct a JobProfile from the userid, jobid, job config-file, job-details url and job name.
JobTracker is no longer used since M/R 2.x.
State is no longer used since M/R 2.x.
Defines a way to partition keys based on certain key fields (also see KeyFieldBasedComparator).
Defines a way to partition keys based on certain key fields (also see KeyFieldBasedComparator).
An InputFormat for plain text files.
An InputFormat for plain text files.
Use RunningJob.killTask(TaskAttemptID, boolean) instead.
Use LineReader instead.
Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included for compatibility against Hadoop-1.
Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included for compatibility against Hadoop-1.
A Reducer that sums long values.
Chains the map(...) methods of the Mappers in the chain.
See ProgressSplitsBlock for an explanation of the meaning of that parameter.
Create an event for successful completion of map attempts.
Mapper.
Mapper.
An OutputFormat that writes MapFiles.
An OutputFormat that writes MapFiles.
The Context passed on to the Mapper implementations.
The logging Level for the map task.
The logging Level for the reduce task.
Use JobConf.MAPRED_MAP_TASK_ENV or JobConf.MAPRED_REDUCE_TASK_ENV instead.
Use JobConf.MAPRED_MAP_TASK_JAVA_OPTS or JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS instead.
Use JobConf.MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY and JobConf.MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY instead.
Base class for Mapper and Reducer implementations.
Expert: Generic interface for Mappers.
Default MapRunnable implementation.
MarkableIterator is a wrapper iterator class that implements the MarkableIteratorInterface.
An abstract InputFormat that returns MultiFileSplit's in the MultiFileInputFormat.getSplits(JobConf, int) method.
This class supports MapReduce jobs that have multiple input paths with a different InputFormat and Mapper for each path.
This class supports MapReduce jobs that have multiple input paths with a different InputFormat and Mapper for each path.
The MultipleOutputs class simplifies writing to additional outputs other than the job default output, via the OutputCollector passed to the map() and reduce() methods of the Mapper and Reducer implementations.
Use DBRecordReader.nextKeyValue() instead.
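A minimal sketch of the MultipleOutputs usage described above (new API), assuming a named output "rejects" has been registered on the Job via MultipleOutputs.addNamedOutput; the class name and the emptiness test are illustrative.

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Reducer that writes rejected records to a side output named "rejects".
public class SplitReducer extends Reducer<Text, Text, Text, Text> {
    private MultipleOutputs<Text, Text> out;

    @Override protected void setup(Context ctx) {
        out = new MultipleOutputs<>(ctx);
    }

    @Override protected void reduce(Text key, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        for (Text v : values) {
            if (v.getLength() == 0) {
                out.write("rejects", key, v);   // named output, assumed to be registered
            } else {
                ctx.write(key, v);
            }
        }
    }

    @Override protected void cleanup(Context ctx)
            throws IOException, InterruptedException {
        out.close();
    }
}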
Collects the <key, value> pairs output by Mappers and Reducers.
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputFormat describes the output-specification for a Map-Reduce job.
OutputFormat describes the output-specification for a Map-Reduce job.
An OutputCommitter that commits files specified in the job output directory.
Interface for an OutputCommitter implementing partial commit of task output, as during preemption.
Configuration object.
RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data.
Reads the fields of the object from the ResultSet.
RecordReader reads <key, value> pairs from an InputSplit.
The record reader breaks the data into key/value pairs for input to the Mapper.
RecordWriter writes the output <key, value> pairs to an output file.
RecordWriter writes the output <key, value> pairs to an output file.
Chains the reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain.
See ProgressSplitsBlock for an explanation of the meaning of that parameter.
Create an event to record completion of a reduce attempt.
The context passed to the Reducer.
Iterator to iterate over values for a given group of records.
Reducer.
The Context passed on to the Reducer implementations.
A Mapper that extracts text matching a regular expression.
A Mapper that extracts text matching a regular expression.
Use Token.renew(org.apache.hadoop.conf.Configuration) instead.
Use Token.renew(org.apache.hadoop.conf.Configuration) instead.
TaskCompletionEvent from the event stream.
Advanced application writers can use the Reducer.run(org.apache.hadoop.mapreduce.Reducer.Context) method to control how the reduce task works.
RunningJob is the user-interface to query for details on a running Map-Reduce job.
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format.
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format.
An InputFormat for SequenceFiles.
An InputFormat for SequenceFiles.
An OutputFormat that writes SequenceFiles.
An OutputFormat that writes SequenceFiles.
A RecordReader for SequenceFiles.
A RecordReader for SequenceFiles.
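A minimal sketch of configuring a job to read and write SequenceFiles with block-compressed output, using the new-API classes referenced above.

import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SeqFileJobSetup {
    // Read and write SequenceFiles, block-compressing the job output.
    static void configure(Job job) {
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        SequenceFileOutputFormat.setOutputCompressionType(job, SequenceFile.CompressionType.BLOCK);
    }
}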
Set the flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
Set the flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by the framework after invoking the reduce function.
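A minimal sketch of enabling skipping mode with SkipBadRecords; the thresholds are illustrative values, not recommendations.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.SkipBadRecords;

public class SkipSetup {
    // Start skipping after 2 failed attempts and tolerate up to
    // 100 bad map records / 10 bad reduce groups per task.
    static void configure(Configuration conf) {
        SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
        SkipBadRecords.setMapperMaxSkipRecords(conf, 100);
        SkipBadRecords.setReducerMaxSkipGroups(conf, 10);
    }
}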
Use Job.setCacheArchives(URI[]) instead.
Use Job.setCacheFiles(URI[]) instead.
Set the user-defined RawComparator comparator for grouping keys in the input to the combiner.
Reducer.reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context)
Reducer.reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context)
Set the InputFormat implementation for the map-reduce job.
Set the InputFormat for the job.
Set the array of Paths as the list of inputs for the map-reduce job.
Set the array of Paths as the list of inputs for the map-reduce job.
Set the JobPriority for this job.
Set the KeyFieldBasedComparator options used to compare keys.
Set the KeyFieldBasedComparator options used to compare keys.
Set the KeyFieldBasedPartitioner options used for the Partitioner.
Set the KeyFieldBasedPartitioner options used for the Partitioner.
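A minimal sketch of the KeyFieldBasedComparator and KeyFieldBasedPartitioner options above on an old-API JobConf; the -k specifications are illustrative (sort on field 2 numerically and reversed, partition on field 1).

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.KeyFieldBasedComparator;
import org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner;

public class KeyFieldSetup {
    // Unix sort style key specifications for comparing and partitioning.
    static void configure(JobConf conf) {
        conf.setOutputKeyComparatorClass(KeyFieldBasedComparator.class);
        conf.setKeyFieldComparatorOptions("-k2,2nr");
        conf.setPartitionerClass(KeyFieldBasedPartitioner.class);
        conf.setKeyFieldPartitionerOptions("-k1,1");
    }
}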
Set the subarray to be used for partitioning to bytes[offset:] in Python syntax.
Set the CompressionCodec for the map outputs.
Set the Mapper class for the job.
Set the Mapper for the job.
Set the MapRunnable class for the job.
Use JobConf.setMemoryForMapTask(long mem) and JobConf.setMemoryForReduceTask(long mem) instead.
Set the subarray to be used for partitioning to bytes[left:(right+1)] in Python syntax.
Set the OutputCommitter implementation for the map-reduce job.
Set the SequenceFile.CompressionType for the output SequenceFile.
Set the SequenceFile.CompressionType for the output SequenceFile.
Set the CompressionCodec to be used to compress job outputs.
Set the CompressionCodec to be used to compress job outputs.
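A minimal sketch of compressing job outputs with a specific codec via the new-API FileOutputFormat helpers; GzipCodec is just one possible choice.

import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class OutputCompression {
    // Gzip-compress the job output files.
    static void configure(Job job) {
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
    }
}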
Set the OutputFormat implementation for the map-reduce job.
Set the OutputFormat for the job.
Set the RawComparator comparator used to compare keys.
Set the Path of the output directory for the map-reduce job.
Set the Path of the output directory for the map-reduce job.
Set the user-defined RawComparator comparator for grouping keys in the input to the reduce.
Set the Partitioner class used to partition Mapper-outputs to be sent to the Reducers.
Set the Partitioner for the job.
Use TotalOrderPartitioner.setPartitionFile(Configuration, Path) instead.
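A minimal sketch of pairing TotalOrderPartitioner with InputSampler to produce globally sorted output; the partition file location and the RandomSampler parameters are illustrative.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class TotalOrderSetup {
    // Sample the input, write a partition file, and partition keys into
    // globally ordered ranges across reducers.
    static void configure(Job job, Path partitionFile) throws Exception {
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile);
        InputSampler.writePartitionFile(job,
            new InputSampler.RandomSampler<>(0.1, 10000, 10));
    }
}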
Sets the Reducer class to the chain job.
Set the Reducer class for the job.
Set the Reducer for the job.
Set the subarray to be used for partitioning to bytes[:(offset+1)] in Python syntax.
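A minimal sketch of the BinaryPartitioner offset configuration described above; offsets 4 and 7 (inclusive) select bytes[4:8] of each key and are purely illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner;

public class BinaryPartitionSetup {
    // Partition on bytes[4:8] of each BinaryComparable key, skipping a
    // 4-byte prefix and ignoring everything after the 8th byte.
    static void configure(Configuration conf) {
        BinaryPartitioner.setOffsets(conf, 4, 7);   // left and right offsets are inclusive
    }
}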
Set the key class for the SequenceFile.
Set the key class for the SequenceFile.
Set the value class for the SequenceFile.
Set the value class for the SequenceFile.
Define the comparator that controls how the keys are sorted before they are passed to the Reducer.
TaskStatus.
Use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
Use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead.
Set the Path of the task's temporary output directory for the map-reduce job.
Use AbstractCounters.countCounters() instead.
Use Submitter.runJob(JobConf) instead.
Use TaskCounter instead.
Constructs a TaskAttemptID object from the given TaskID.
Use TaskAttemptID.TaskAttemptID(String, int, TaskType, int, int) instead.
Constructs a TaskAttemptID object from the given TaskID.
See ProgressSplitsBlock for an explanation of the meaning of that parameter.
Create an event to record the unsuccessful completion of attempts.
Use TaskID.TaskID(String, int, TaskType, int) instead.
Use TaskID.TaskID(org.apache.hadoop.mapreduce.JobID, TaskType, int) instead.
Constructs a TaskID object from the given JobID.
Constructs a TaskID object from the given JobID.
An InputFormat for plain text files.
An InputFormat for plain text files.
An OutputFormat that writes plain text files.
An OutputFormat that writes plain text files.
A Mapper that maps text values into <token, freq> pairs.
Use TTConfig.TT_RESOURCE_CALCULATOR_PLUGIN instead.
Writable type storing multiple Writables.
Writable type storing multiple Writables.
A Mapper which wraps a given one to allow custom WrappedMapper.Context implementations.
A Reducer which wraps a given one to allow for custom WrappedReducer.Context implementations.
Sets the fields of the object in the PreparedStatement.
Writes each Writable to out.