Deprecated Methods |
org.apache.hadoop.mapreduce.filecache.DistributedCache.addArchiveToClassPath(Path, Configuration)
Use Job.addArchiveToClassPath(Path) instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.addCacheArchive(URI, Configuration)
Use Job.addCacheArchive(URI) instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.addCacheFile(URI, Configuration)
Use Job.addCacheFile(URI) instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.addFileToClassPath(Path, Configuration)
Use Job.addFileToClassPath(Path) instead |
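The four DistributedCache entries above all move to instance methods on Job. A minimal migration sketch, assuming job-submission code that already builds a Configuration; the URIs and paths are hypothetical:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    public class CacheMigration {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration());
            // was DistributedCache.addCacheFile(URI, Configuration)
            job.addCacheFile(new URI("/shared/lookup.dat"));
            // was DistributedCache.addCacheArchive(URI, Configuration)
            job.addCacheArchive(new URI("/shared/dictionary.zip"));
            // was DistributedCache.addFileToClassPath(Path, Configuration)
            job.addFileToClassPath(new Path("/shared/extra.jar"));
            // was DistributedCache.addArchiveToClassPath(Path, Configuration)
            job.addArchiveToClassPath(new Path("/shared/libs.tgz"));
        }
    }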
org.apache.hadoop.filecache.DistributedCache.addLocalArchives(Configuration, String)
|
org.apache.hadoop.filecache.DistributedCache.addLocalFiles(Configuration, String)
|
org.apache.hadoop.mapred.JobClient.cancelDelegationToken(Token)
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead |
org.apache.hadoop.mapreduce.Cluster.cancelDelegationToken(Token)
Use Token.cancel(org.apache.hadoop.conf.Configuration) instead |
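Both cancelDelegationToken entries (and the matching renewDelegationToken entries later in this list) move to the token itself. A sketch, assuming the token was obtained elsewhere, e.g. from job credentials:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.token.Token;

    public class TokenOps {
        // was JobClient.cancelDelegationToken / Cluster.cancelDelegationToken
        static void cancel(Token<?> token, Configuration conf) throws Exception {
            token.cancel(conf);
        }

        // was JobClient.renewDelegationToken / Cluster.renewDelegationToken
        static long renew(Token<?> token, Configuration conf) throws Exception {
            return token.renew(conf); // returns the new expiry time
        }
    }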
org.apache.hadoop.mapred.OutputCommitter.cleanupJob(JobContext)
Use OutputCommitter.commitJob(JobContext) or
OutputCommitter.abortJob(JobContext, int) instead. |
org.apache.hadoop.mapred.OutputCommitter.cleanupJob(org.apache.hadoop.mapreduce.JobContext)
Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext)
or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State)
instead. |
org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(JobContext)
|
org.apache.hadoop.mapreduce.OutputCommitter.cleanupJob(JobContext)
Use OutputCommitter.commitJob(JobContext) and
OutputCommitter.abortJob(JobContext, JobStatus.State) instead. |
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(JobContext)
|
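The cleanupJob entries above are all replaced by the commitJob/abortJob pair, which splits end-of-job handling into a success path and a failure path. A sketch using a hypothetical committer that extends FileOutputCommitter:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.JobStatus;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

    public class AuditingCommitter extends FileOutputCommitter {
        public AuditingCommitter(Path outputPath, TaskAttemptContext context)
                throws IOException {
            super(outputPath, context);
        }

        @Override
        public void commitJob(JobContext context) throws IOException {
            super.commitJob(context); // promotes task output and, by default, writes _SUCCESS
            // success-only cleanup goes here, where cleanupJob used to run
        }

        @Override
        public void abortJob(JobContext context, JobStatus.State state)
                throws IOException {
            super.abortJob(context, state); // removes temporary output
            // failure handling can branch on state (FAILED vs. KILLED)
        }
    }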
org.apache.hadoop.mapred.Counters.Counter.contentEquals(Counters.Counter)
|
org.apache.hadoop.filecache.DistributedCache.createAllSymlink(Configuration, File, File)
Internal to MapReduce framework. Use DistributedCacheManager
instead. |
org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, List)
Use CombineFileInputFormat.createPool(List). |
org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, PathFilter...)
Use CombineFileInputFormat.createPool(PathFilter...). |
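Both createPool replacements drop the JobConf parameter; the pools are otherwise declared the same way. A sketch in a hypothetical subclass, with a made-up pool criterion:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.InputSplit;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RecordReader;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.lib.CombineFileInputFormat;

    public class LogInputFormat extends CombineFileInputFormat<LongWritable, Text> {
        public LogInputFormat() {
            // was createPool(JobConf, PathFilter...)
            createPool(new PathFilter() {
                public boolean accept(Path path) {
                    return path.getName().endsWith(".log"); // hypothetical criterion
                }
            });
        }

        @Override
        public RecordReader<LongWritable, Text> getRecordReader(InputSplit split,
                JobConf conf, Reporter reporter) throws IOException {
            // record-reader wiring is omitted in this sketch
            throw new UnsupportedOperationException("sketch only");
        }
    }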
org.apache.hadoop.mapreduce.Job.createSymlink()
|
org.apache.hadoop.mapreduce.filecache.DistributedCache.createSymlink(Configuration)
This is a NO-OP. |
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.createValue()
|
org.apache.hadoop.mapred.JobConf.deleteLocalFiles()
|
org.apache.hadoop.mapred.Counters.findCounter(String, int, String)
use Counters.findCounter(String, String) instead |
org.apache.hadoop.mapreduce.Cluster.getAllJobs()
Use Cluster.getAllJobStatuses() instead. |
org.apache.hadoop.mapreduce.filecache.DistributedCache.getArchiveClassPaths(Configuration)
Use JobContext.getArchiveClassPaths() instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.getArchiveTimestamps(Configuration)
Use JobContext.getArchiveTimestamps() instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.getCacheArchives(Configuration)
Use JobContext.getCacheArchives() instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.getCacheFiles(Configuration)
Use JobContext.getCacheFiles() instead |
org.apache.hadoop.mapred.Counters.Group.getCounter(int, String)
use Counters.Group.findCounter(String) instead |
org.apache.hadoop.mapreduce.security.TokenCache.getDelegationToken(Credentials, String)
Use Credentials.getToken(org.apache.hadoop.io.Text)
instead; this method is retained only for compatibility with Hadoop-1 |
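A sketch of the Credentials-based lookup; the service alias is hypothetical:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.token.Token;

    public class TokenLookup {
        // was TokenCache.getDelegationToken(Credentials, String)
        static Token<?> lookup(Credentials credentials) {
            return credentials.getToken(new Text("my-namenode-alias")); // hypothetical alias
        }
    }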
org.apache.hadoop.mapreduce.filecache.DistributedCache.getFileClassPaths(Configuration)
Use JobContext.getFileClassPaths() instead |
org.apache.hadoop.filecache.DistributedCache.getFileStatus(Configuration, URI)
|
org.apache.hadoop.mapreduce.filecache.DistributedCache.getFileTimestamps(Configuration)
Use JobContext.getFileTimestamps() instead |
org.apache.hadoop.mapred.ClusterStatus.getGraylistedTrackerNames()
|
org.apache.hadoop.mapred.ClusterStatus.getGraylistedTrackers()
|
org.apache.hadoop.mapreduce.Job.getInstance(Cluster)
Use Job.getInstance() |
org.apache.hadoop.mapreduce.Job.getInstance(Cluster, Configuration)
Use Job.getInstance(Configuration) |
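Job construction no longer involves a Cluster; a minimal sketch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobCreation {
        public static void main(String[] args) throws Exception {
            Job fromConf = Job.getInstance(new Configuration()); // was Job.getInstance(Cluster, Configuration)
            Job fromDefaults = Job.getInstance();                // was Job.getInstance(Cluster)
        }
    }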
org.apache.hadoop.mapred.JobClient.getJob(String)
Applications should rather use JobClient.getJob(JobID). |
org.apache.hadoop.mapred.JobStatus.getJobId()
use getJobID() instead |
org.apache.hadoop.mapred.JobProfile.getJobId()
use getJobID() instead |
org.apache.hadoop.mapred.RunningJob.getJobID()
This method is deprecated and will be removed. Applications should
rather use RunningJob.getID(). |
org.apache.hadoop.mapred.JobID.getJobIDsPattern(String, Integer)
|
org.apache.hadoop.mapred.ClusterStatus.getJobTrackerState()
|
org.apache.hadoop.mapreduce.JobContext.getLocalCacheArchives()
the array returned only includes the items that were
downloaded. There is no way to map this to what is returned by
JobContext.getCacheArchives(). |
org.apache.hadoop.mapreduce.filecache.DistributedCache.getLocalCacheArchives(Configuration)
Use JobContext.getLocalCacheArchives() instead |
org.apache.hadoop.mapreduce.JobContext.getLocalCacheFiles()
the array returned only includes the items that were
downloaded. There is no way to map this to what is returned by
JobContext.getCacheFiles(). |
org.apache.hadoop.mapreduce.filecache.DistributedCache.getLocalCacheFiles(Configuration)
Use JobContext.getLocalCacheFiles() instead |
org.apache.hadoop.mapred.JobClient.getMapTaskReports(String)
Applications should rather use JobClient.getMapTaskReports(JobID) |
org.apache.hadoop.mapred.ClusterStatus.getMaxMemory()
|
org.apache.hadoop.mapred.JobConf.getMaxPhysicalMemoryForTask()
this variable is deprecated and no longer in use. |
org.apache.hadoop.mapred.JobConf.getMaxVirtualMemoryForTask()
Use JobConf.getMemoryForMapTask() and
JobConf.getMemoryForReduceTask() |
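Virtual-memory limits give way to per-task-type memory settings (the matching setter entries appear later in this list). A sketch; the values are hypothetical and in megabytes:

    import org.apache.hadoop.mapred.JobConf;

    public class TaskMemory {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            conf.setMemoryForMapTask(1536);    // was setMaxVirtualMemoryForTask(long)
            conf.setMemoryForReduceTask(3072);
            // was getMaxVirtualMemoryForTask()
            System.out.println(conf.getMemoryForMapTask() + " / "
                    + conf.getMemoryForReduceTask());
        }
    }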
org.apache.hadoop.mapred.lib.TotalOrderPartitioner.getPartitionFile(JobConf)
Use
TotalOrderPartitioner.getPartitionFile(Configuration)
instead |
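Both the getter here and the setter listed later take a Configuration in the new API. A sketch with a hypothetical partition-file path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class PartitionFileSetup {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // was setPartitionFile(JobConf, Path) / getPartitionFile(JobConf)
            TotalOrderPartitioner.setPartitionFile(conf, new Path("/tmp/partitions.lst"));
            String file = TotalOrderPartitioner.getPartitionFile(conf);
            System.out.println(file);
        }
    }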
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.getPos()
|
org.apache.hadoop.mapred.JobQueueInfo.getQueueState()
|
org.apache.hadoop.mapred.JobClient.getReduceTaskReports(String)
Applications should rather use JobClient.getReduceTaskReports(JobID) |
org.apache.hadoop.mapred.JobConf.getSessionId()
|
org.apache.hadoop.mapreduce.JobContext.getSymlink()
|
org.apache.hadoop.mapreduce.filecache.DistributedCache.getSymlink(Configuration)
symlinks are always created. |
org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer)
|
org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, TaskType, Integer, Integer)
|
org.apache.hadoop.mapred.TaskCompletionEvent.getTaskId()
use TaskCompletionEvent.getTaskAttemptId() instead. |
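A sketch of the typed accessor that replaces the String-based getTaskId() (the matching setter entries appear later in this list):

    import org.apache.hadoop.mapred.TaskAttemptID;
    import org.apache.hadoop.mapred.TaskCompletionEvent;

    public class EventIds {
        // was event.getTaskId(), which returned a String
        static TaskAttemptID attemptOf(TaskCompletionEvent event) {
            return event.getTaskAttemptId();
        }
    }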
org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, Boolean, Integer)
Use TaskID.getTaskIDsPattern(String, Integer, TaskType,
Integer) |
org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer)
|
org.apache.hadoop.mapreduce.util.HostUtil.getTaskLogUrl(String, String, String)
Use HostUtil.getTaskLogUrl(String, String, String, String)
to construct the taskLogUrl. |
org.apache.hadoop.mapred.JobClient.getTaskOutputFilter()
|
org.apache.hadoop.filecache.DistributedCache.getTimestamp(Configuration, URI)
|
org.apache.hadoop.mapred.ClusterStatus.getUsedMemory()
|
org.apache.hadoop.mapreduce.TaskID.isMap()
|
org.apache.hadoop.mapreduce.TaskAttemptID.isMap()
|
org.apache.hadoop.mapred.OutputCommitter.isRecoverySupported()
Use OutputCommitter.isRecoverySupported(JobContext) instead. |
org.apache.hadoop.mapred.FileOutputCommitter.isRecoverySupported()
|
org.apache.hadoop.mapreduce.OutputCommitter.isRecoverySupported()
Use OutputCommitter.isRecoverySupported(JobContext) instead. |
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.isRecoverySupported()
|
org.apache.hadoop.mapred.RunningJob.killTask(String, boolean)
Applications should rather use RunningJob.killTask(TaskAttemptID, boolean) |
org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, Configuration)
Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead;
this method is retained only for compatibility with Hadoop-1. |
org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, JobConf)
Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead;
this method is retained only for compatibility with Hadoop-1. |
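A sketch of the Credentials-based load that covers both loadTokens overloads; the token-file path is hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.Credentials;

    public class TokenFileLoad {
        // was TokenCache.loadTokens(String, Configuration) / loadTokens(String, JobConf)
        static Credentials load(Configuration conf) throws IOException {
            return Credentials.readTokenStorageFile(new Path("/user/alice/job.tokens"), conf);
        }
    }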
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.next(LongWritable, T)
Use DBRecordReader.nextKeyValue() |
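The new-API read loop replaces next(LongWritable, T), createValue(), and getPos() together. A sketch, assuming an already-initialized reader:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.mapreduce.lib.db.DBRecordReader;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    public class DbRead {
        // was: while (reader.next(key, value)) { ... } with value from createValue()
        static <T extends DBWritable> void readAll(DBRecordReader<T> reader)
                throws IOException, InterruptedException {
            while (reader.nextKeyValue()) {
                LongWritable key = reader.getCurrentKey();
                T value = reader.getCurrentValue();
                // process key/value here
            }
        }
    }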
org.apache.hadoop.mapred.TaskID.read(DataInput)
|
org.apache.hadoop.mapred.TaskAttemptID.read(DataInput)
|
org.apache.hadoop.mapred.JobID.read(DataInput)
|
org.apache.hadoop.mapred.JobClient.renewDelegationToken(Token)
Use Token.renew(org.apache.hadoop.conf.Configuration) instead |
org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Token)
Use Token.renew(org.apache.hadoop.conf.Configuration) instead |
org.apache.hadoop.filecache.DistributedCache.setArchiveTimestamps(Configuration, String)
|
org.apache.hadoop.mapred.jobcontrol.Job.setAssignedJobID(JobID)
setAssignedJobID should not be called;
the JobID is set by the framework. |
org.apache.hadoop.mapreduce.filecache.DistributedCache.setCacheArchives(URI[], Configuration)
Use Job.setCacheArchives(URI[]) instead |
org.apache.hadoop.mapreduce.filecache.DistributedCache.setCacheFiles(URI[], Configuration)
Use Job.setCacheFiles(URI[]) instead |
org.apache.hadoop.mapreduce.Counter.setDisplayName(String)
A no-op by default. |
org.apache.hadoop.mapreduce.counters.GenericCounter.setDisplayName(String)
|
org.apache.hadoop.mapreduce.counters.AbstractCounter.setDisplayName(String)
|
org.apache.hadoop.filecache.DistributedCache.setFileTimestamps(Configuration, String)
|
org.apache.hadoop.filecache.DistributedCache.setLocalArchives(Configuration, String)
|
org.apache.hadoop.filecache.DistributedCache.setLocalFiles(Configuration, String)
|
org.apache.hadoop.mapred.jobcontrol.Job.setMapredJobID(String)
|
org.apache.hadoop.mapred.JobConf.setMaxPhysicalMemoryForTask(long)
|
org.apache.hadoop.mapred.JobConf.setMaxVirtualMemoryForTask(long)
Use JobConf.setMemoryForMapTask(long mem) and
JobConf.setMemoryForReduceTask(long mem) |
org.apache.hadoop.mapred.lib.TotalOrderPartitioner.setPartitionFile(JobConf, Path)
Use
TotalOrderPartitioner.setPartitionFile(Configuration, Path)
instead |
org.apache.hadoop.mapred.JobConf.setSessionId(String)
|
org.apache.hadoop.mapred.jobcontrol.Job.setState(int)
|
org.apache.hadoop.mapred.TaskCompletionEvent.setTaskId(String)
use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead. |
org.apache.hadoop.mapred.TaskCompletionEvent.setTaskID(TaskAttemptID)
use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead. |
org.apache.hadoop.mapred.JobClient.setTaskOutputFilter(JobClient.TaskStatusFilter)
|
org.apache.hadoop.mapred.Counters.size()
use AbstractCounters.countCounters() instead |
org.apache.hadoop.mapred.pipes.Submitter.submitJob(JobConf)
Use Submitter.runJob(JobConf) |
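A sketch of the replacement call; the pipes-specific configuration is omitted:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.pipes.Submitter;

    public class PipesLaunch {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            // ... pipes executable and input/output formats would be set here ...
            RunningJob running = Submitter.runJob(conf); // was Submitter.submitJob(JobConf)
            System.out.println(running.getID());
        }
    }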