Deprecated API


Deprecated Classes
org.apache.hadoop.filecache.DistributedCache
           
org.apache.hadoop.mapreduce.filecache.DistributedCache
           
org.apache.hadoop.mapred.LineRecordReader.LineReader
          Use org.apache.hadoop.util.LineReader instead (see the sketch below). 
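
A minimal usage sketch for the replacement class org.apache.hadoop.util.LineReader. This example is not part of the original Javadoc; the class name and input path are placeholders.

    // Sketch (assumption, not from the Javadoc): reading a text file with the
    // replacement class org.apache.hadoop.util.LineReader.
    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.util.LineReader;

    public class LineReaderExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);                // placeholder input path
        FileSystem fs = input.getFileSystem(conf);
        InputStream in = fs.open(input);
        LineReader reader = new LineReader(in, conf);  // replaces the deprecated nested class
        Text line = new Text();
        try {
          // readLine returns the number of bytes consumed; 0 signals end of stream.
          while (reader.readLine(line) > 0) {
            System.out.println(line);
          }
        } finally {
          reader.close();
        }
      }
    }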
 

Deprecated Enums
org.apache.hadoop.mapred.FileInputFormat.Counter
           
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.Counter
           
org.apache.hadoop.mapred.FileOutputFormat.Counter
           
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.Counter
           
org.apache.hadoop.mapred.JobInProgress.Counter
          Provided for compatibility. Use JobCounter instead. 
org.apache.hadoop.mapred.Task.Counter
          Provided for compatibility. Use TaskCounter instead (see the sketch below). 
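
A sketch of reading job statistics through the replacement enums org.apache.hadoop.mapreduce.JobCounter and TaskCounter. This example is not part of the original Javadoc; the helper class and the specific counters chosen are illustrative only.

    // Sketch (assumption, not from the Javadoc): reading statistics of a
    // completed org.apache.hadoop.mapreduce.Job through the new counter enums.
    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.JobCounter;
    import org.apache.hadoop.mapreduce.TaskCounter;

    public class CounterExample {
      static void printCounters(Job job) throws Exception {
        Counters counters = job.getCounters();
        long launchedMaps =
            counters.findCounter(JobCounter.TOTAL_LAUNCHED_MAPS).getValue();
        long mapInputRecords =
            counters.findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
        System.out.println("launched maps:     " + launchedMaps);
        System.out.println("map input records: " + mapInputRecords);
      }
    }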
 

Deprecated Fields
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.acls
           
org.apache.hadoop.mapreduce.jobhistory.AMStarted.applicationAttemptId
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.attemptId
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.attemptId
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.attemptId
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.attemptId
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.attemptId
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.avataar
           
org.apache.hadoop.filecache.DistributedCache.CACHE_ARCHIVES
           
org.apache.hadoop.filecache.DistributedCache.CACHE_ARCHIVES_SIZES
           
org.apache.hadoop.filecache.DistributedCache.CACHE_ARCHIVES_TIMESTAMPS
           
org.apache.hadoop.filecache.DistributedCache.CACHE_FILES
           
org.apache.hadoop.filecache.DistributedCache.CACHE_FILES_SIZES
           
org.apache.hadoop.filecache.DistributedCache.CACHE_FILES_TIMESTAMPS
           
org.apache.hadoop.filecache.DistributedCache.CACHE_LOCALARCHIVES
           
org.apache.hadoop.filecache.DistributedCache.CACHE_LOCALFILES
           
org.apache.hadoop.filecache.DistributedCache.CACHE_SYMLINK
           
org.apache.hadoop.mapreduce.MRJobConfig.CACHE_SYMLINK
          Symlinks are always on and cannot be disabled. 
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.clockSplits
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.clockSplits
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.clockSplits
           
org.apache.hadoop.mapreduce.jobhistory.AMStarted.containerId
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.containerId
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.counters
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.counters
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.counters
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.counters
           
org.apache.hadoop.mapreduce.jobhistory.TaskFinished.counters
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.counters
           
org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.counts
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.cpuUsages
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.cpuUsages
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.cpuUsages
           
org.apache.hadoop.mapred.JobConf.DEFAULT_MAPREDUCE_RECOVER_JOB
           
org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.diagnostics
           
org.apache.hadoop.mapreduce.jobhistory.JhCounter.displayName
           
org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.displayName
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.error
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.error
           
org.apache.hadoop.mapreduce.jobhistory.Event.event
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.failedDueToAttempt
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.failedMaps
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.failedReduces
           
org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.finishedMaps
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.finishedMaps
           
org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.finishedReduces
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.finishedReduces
           
org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskFinished.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.finishTime
           
org.apache.hadoop.mapreduce.jobhistory.JhCounters.groups
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.hostname
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.hostname
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.hostname
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.hostname
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.httpPort
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.jobConfPath
           
org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobInited.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.jobid
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.jobName
           
org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.jobQueueName
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.jobQueueName
           
org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.jobStatus
           
org.apache.hadoop.mapreduce.jobhistory.JobInited.jobStatus
           
org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.jobStatus
           
org.apache.hadoop.mapreduce.server.jobtracker.JTConfig.JT_SUPERGROUP
          Use MR_SUPERGROUP instead 
org.apache.hadoop.mapreduce.jobhistory.JobInited.launchTime
           
org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.launchTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.locality
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.mapCounters
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.mapFinishTime
           
org.apache.hadoop.mapred.JobConf.MAPRED_JOB_MAP_MEMORY_MB_PROPERTY
           
org.apache.hadoop.mapred.JobConf.MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY
           
org.apache.hadoop.mapred.JobConf.MAPRED_MAP_TASK_ULIMIT
          Configuration key to set the maximum virtual memory available to the map tasks (in kilo-bytes). This has been deprecated and will no longer have any effect. 
org.apache.hadoop.mapred.JobConf.MAPRED_REDUCE_TASK_ULIMIT
          Configuration key to set the maximum virtual memory available to the reduce tasks (in kilo-bytes). This has been deprecated and will no longer have any effect. 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY
            
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_ENV
          Use JobConf.MAPRED_MAP_TASK_ENV or JobConf.MAPRED_REDUCE_TASK_ENV 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_JAVA_OPTS
          Use JobConf.MAPRED_MAP_TASK_JAVA_OPTS or JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS (see the sketch at the end of this list) 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_MAXPMEM_PROPERTY
            
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_MAXVMEM_PROPERTY
          Use JobConf.MAPREDUCE_JOB_MAP_MEMORY_MB_PROPERTY and JobConf.MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_ULIMIT
          Configuration key to set the maximum virtual memory available to the child map and reduce tasks (in kilo-bytes). This has been deprecated and will no longer have any effect. 
org.apache.hadoop.mapred.JobConf.MAPREDUCE_RECOVER_JOB
           
org.apache.hadoop.mapreduce.MRConfig.MR_SUPERGROUP
           
org.apache.hadoop.mapreduce.jobhistory.JhCounters.name
           
org.apache.hadoop.mapreduce.jobhistory.JhCounter.name
           
org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.name
           
org.apache.hadoop.mapreduce.jobhistory.AMStarted.nodeManagerHost
           
org.apache.hadoop.mapreduce.jobhistory.AMStarted.nodeManagerHttpPort
           
org.apache.hadoop.mapreduce.jobhistory.AMStarted.nodeManagerPort
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.physMemKbytes
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.physMemKbytes
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.physMemKbytes
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.port
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.port
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.port
           
org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.priority
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.rackname
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.rackname
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.rackname
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.rackname
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.reduceCounters
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.shuffleFinishTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.shufflePort
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.sortFinishTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskStarted.splitLocations
           
org.apache.hadoop.mapreduce.jobhistory.TaskStarted.startTime
           
org.apache.hadoop.mapreduce.jobhistory.AMStarted.startTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.startTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.state
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.state
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.state
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.status
           
org.apache.hadoop.mapreduce.jobhistory.TaskFinished.status
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.status
           
org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.submitTime
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.submitTime
           
org.apache.hadoop.mapreduce.jobhistory.TaskFinished.successfulAttemptId
           
org.apache.hadoop.mapreduce.jobhistory.TaskStarted.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.taskid
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.taskid
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskFinished.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.taskid
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.taskStatus
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.taskStatus
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.taskStatus
           
org.apache.hadoop.mapreduce.jobhistory.TaskStarted.taskType
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.taskType
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.taskType
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.taskType
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.taskType
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.taskType
           
org.apache.hadoop.mapreduce.jobhistory.TaskFinished.taskType
           
org.apache.hadoop.mapreduce.jobhistory.TaskFailed.taskType
           
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.TEMP_DIR_NAME
           
org.apache.hadoop.mapreduce.jobhistory.JobFinished.totalCounters
           
org.apache.hadoop.mapreduce.jobhistory.JobInited.totalMaps
           
org.apache.hadoop.mapreduce.jobhistory.JobInited.totalReduces
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.trackerName
           
org.apache.hadoop.mapreduce.server.tasktracker.TTConfig.TT_MEMORY_CALCULATOR_PLUGIN
          Use TTConfig.TT_RESOURCE_CALCULATOR_PLUGIN instead 
org.apache.hadoop.mapreduce.jobhistory.Event.type
           
org.apache.hadoop.mapreduce.jobhistory.JobInited.uberized
           
org.apache.hadoop.mapred.JobConf.UPPER_LIMIT_ON_TASK_VMEM_PROPERTY
            
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.userName
           
org.apache.hadoop.mapreduce.jobhistory.JhCounter.value
           
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.vMemKbytes
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.vMemKbytes
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.vMemKbytes
           
org.apache.hadoop.mapred.JobConf.WORKFLOW_ADJACENCY_PREFIX_PATTERN
           
org.apache.hadoop.mapred.JobConf.WORKFLOW_ADJACENCY_PREFIX_STRING
           
org.apache.hadoop.mapred.JobConf.WORKFLOW_ID
           
org.apache.hadoop.mapred.JobConf.WORKFLOW_NAME
           
org.apache.hadoop.mapred.JobConf.WORKFLOW_NODE_NAME
           
org.apache.hadoop.mapred.JobConf.WORKFLOW_TAGS
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.workflowAdjacencies
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.workflowId
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.workflowName
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.workflowNodeName
           
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.workflowTags
           
 
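
Several of the JobConf keys above (MAPRED_TASK_JAVA_OPTS, MAPRED_TASK_ENV, and the maxvmem/ulimit keys) are superseded by per-map and per-reduce settings. A configuration sketch, not part of the original Javadoc; the heap sizes, environment value, and memory figures are placeholders.

    // Sketch (assumption, not from the Javadoc): per-map and per-reduce task
    // settings replacing the deprecated combined keys.
    import org.apache.hadoop.mapred.JobConf;

    public class TaskSettingsExample {
      static JobConf configure() {
        JobConf conf = new JobConf();
        // Instead of the deprecated JobConf.MAPRED_TASK_JAVA_OPTS:
        conf.set(JobConf.MAPRED_MAP_TASK_JAVA_OPTS, "-Xmx1024m");
        conf.set(JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS, "-Xmx2048m");
        // Instead of the deprecated JobConf.MAPRED_TASK_ENV:
        conf.set(JobConf.MAPRED_MAP_TASK_ENV, "LD_LIBRARY_PATH=/usr/local/lib");
        conf.set(JobConf.MAPRED_REDUCE_TASK_ENV, "LD_LIBRARY_PATH=/usr/local/lib");
        // Instead of the deprecated maxvmem/ulimit keys, request per-task
        // memory in megabytes:
        conf.setMemoryForMapTask(1536);
        conf.setMemoryForReduceTask(3072);
        return conf;
      }
    }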

Deprecated Methods
org.apache.hadoop.mapreduce.filecache.DistributedCache.addArchiveToClassPath(Path, Configuration)
          Use Job.addArchiveToClassPath(Path) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.addCacheArchive(URI, Configuration)
          Use Job.addCacheArchive(URI) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.addCacheFile(URI, Configuration)
          Use Job.addCacheFile(URI) instead (see the sketch at the end of this list) 
org.apache.hadoop.mapreduce.filecache.DistributedCache.addFileToClassPath(Path, Configuration)
          Use Job.addFileToClassPath(Path) instead 
org.apache.hadoop.filecache.DistributedCache.addLocalArchives(Configuration, String)
           
org.apache.hadoop.filecache.DistributedCache.addLocalFiles(Configuration, String)
           
org.apache.hadoop.mapred.JobClient.cancelDelegationToken(Token)
          Use Token.cancel(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.mapreduce.Cluster.cancelDelegationToken(Token)
          Use Token.cancel(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.mapred.OutputCommitter.cleanupJob(JobContext)
          Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, int) instead. 
org.apache.hadoop.mapred.OutputCommitter.cleanupJob(JobContext)
          Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead. 
org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(JobContext)
           
org.apache.hadoop.mapreduce.OutputCommitter.cleanupJob(JobContext)
          Use OutputCommitter.commitJob(JobContext) and OutputCommitter.abortJob(JobContext, JobStatus.State) instead. 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(JobContext)
           
org.apache.hadoop.mapred.Counters.Counter.contentEquals(Counters.Counter)
            
org.apache.hadoop.filecache.DistributedCache.createAllSymlink(Configuration, File, File)
          Internal to the MapReduce framework. Use DistributedCacheManager instead. 
org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, List)
          Use CombineFileInputFormat.createPool(List). 
org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, PathFilter...)
          Use CombineFileInputFormat.createPool(PathFilter...). 
org.apache.hadoop.mapreduce.Job.createSymlink()
           
org.apache.hadoop.mapreduce.filecache.DistributedCache.createSymlink(Configuration)
          This is a NO-OP. 
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.createValue()
            
org.apache.hadoop.mapred.JobConf.deleteLocalFiles()
           
org.apache.hadoop.mapred.Counters.findCounter(String, int, String)
          use Counters.findCounter(String, String) instead 
org.apache.hadoop.mapreduce.Cluster.getAllJobs()
          Use Cluster.getAllJobStatuses() instead. 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getArchiveClassPaths(Configuration)
          Use JobContext.getArchiveClassPaths() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getArchiveTimestamps(Configuration)
          Use JobContext.getArchiveTimestamps() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getCacheArchives(Configuration)
          Use JobContext.getCacheArchives() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getCacheFiles(Configuration)
          Use JobContext.getCacheFiles() instead 
org.apache.hadoop.mapred.Counters.Group.getCounter(int, String)
          use Counters.Group.findCounter(String) instead 
org.apache.hadoop.mapreduce.security.TokenCache.getDelegationToken(Credentials, String)
          Use Credentials.getToken(org.apache.hadoop.io.Text) instead; this method is included only for compatibility with Hadoop 1. 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getFileClassPaths(Configuration)
          Use JobContext.getFileClassPaths() instead 
org.apache.hadoop.filecache.DistributedCache.getFileStatus(Configuration, URI)
           
org.apache.hadoop.mapreduce.filecache.DistributedCache.getFileTimestamps(Configuration)
          Use JobContext.getFileTimestamps() instead 
org.apache.hadoop.mapred.ClusterStatus.getGraylistedTrackerNames()
           
org.apache.hadoop.mapred.ClusterStatus.getGraylistedTrackers()
           
org.apache.hadoop.mapreduce.Job.getInstance(Cluster)
          Use Job.getInstance() 
org.apache.hadoop.mapreduce.Job.getInstance(Cluster, Configuration)
          Use Job.getInstance(Configuration) 
org.apache.hadoop.mapred.JobClient.getJob(String)
          Applications should use JobClient.getJob(JobID) instead. 
org.apache.hadoop.mapred.JobProfile.getJobId()
          use getJobID() instead 
org.apache.hadoop.mapred.JobStatus.getJobId()
          use getJobID instead 
org.apache.hadoop.mapred.RunningJob.getJobID()
          This method is deprecated and will be removed. Applications should use RunningJob.getID() instead. 
org.apache.hadoop.mapred.JobID.getJobIDsPattern(String, Integer)
           
org.apache.hadoop.mapred.ClusterStatus.getJobTrackerState()
           
org.apache.hadoop.mapreduce.JobContext.getLocalCacheArchives()
          The array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheArchives(). 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getLocalCacheArchives(Configuration)
          Use JobContext.getLocalCacheArchives() instead 
org.apache.hadoop.mapreduce.JobContext.getLocalCacheFiles()
          The array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheFiles(). 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getLocalCacheFiles(Configuration)
          Use JobContext.getLocalCacheFiles() instead 
org.apache.hadoop.mapred.JobClient.getMapTaskReports(String)
          Applications should use JobClient.getMapTaskReports(JobID) instead. 
org.apache.hadoop.mapred.ClusterStatus.getMaxMemory()
           
org.apache.hadoop.mapred.JobConf.getMaxPhysicalMemoryForTask()
          this variable is deprecated and no longer in use. 
org.apache.hadoop.mapred.JobConf.getMaxVirtualMemoryForTask()
          Use JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask() 
org.apache.hadoop.mapred.lib.TotalOrderPartitioner.getPartitionFile(JobConf)
          Use TotalOrderPartitioner.getPartitionFile(Configuration) instead 
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.getPos()
            
org.apache.hadoop.mapred.JobQueueInfo.getQueueState()
           
org.apache.hadoop.mapred.JobClient.getReduceTaskReports(String)
          Applications should use JobClient.getReduceTaskReports(JobID) instead. 
org.apache.hadoop.mapred.JobConf.getSessionId()
           
org.apache.hadoop.mapreduce.JobContext.getSymlink()
           
org.apache.hadoop.mapreduce.filecache.DistributedCache.getSymlink(Configuration)
          symlinks are always created. 
org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer)
           
org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, TaskType, Integer, Integer)
           
org.apache.hadoop.mapred.TaskCompletionEvent.getTaskId()
          use TaskCompletionEvent.getTaskAttemptId() instead. 
org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, Boolean, Integer)
          Use TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer) 
org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer)
           
org.apache.hadoop.mapred.JobClient.getTaskOutputFilter()
           
org.apache.hadoop.filecache.DistributedCache.getTimestamp(Configuration, URI)
           
org.apache.hadoop.mapred.ClusterStatus.getUsedMemory()
           
org.apache.hadoop.mapreduce.TaskID.isMap()
           
org.apache.hadoop.mapreduce.TaskAttemptID.isMap()
           
org.apache.hadoop.mapred.RunningJob.killTask(String, boolean)
          Applications should use RunningJob.killTask(TaskAttemptID, boolean) instead. 
org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, Configuration)
          Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included only for compatibility with Hadoop 1. 
org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(String, JobConf)
          Use Credentials.readTokenStorageFile(org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration) instead; this method is included only for compatibility with Hadoop 1. 
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.next(LongWritable, T)
          Use DBRecordReader.nextKeyValue() 
org.apache.hadoop.mapred.TaskID.read(DataInput)
           
org.apache.hadoop.mapred.TaskAttemptID.read(DataInput)
           
org.apache.hadoop.mapred.JobID.read(DataInput)
           
org.apache.hadoop.mapred.JobClient.renewDelegationToken(Token)
          Use Token.renew(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Token)
          Use Token.renew(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.filecache.DistributedCache.setArchiveTimestamps(Configuration, String)
           
org.apache.hadoop.mapred.jobcontrol.Job.setAssignedJobID(JobID)
          setAssignedJobID should not be called. JOBID is set by the framework. 
org.apache.hadoop.mapreduce.filecache.DistributedCache.setCacheArchives(URI[], Configuration)
          Use Job.setCacheArchives(URI[]) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.setCacheFiles(URI[], Configuration)
          Use Job.setCacheFiles(URI[]) instead 
org.apache.hadoop.mapreduce.Counter.setDisplayName(String)
          This method is a no-op by default. 
org.apache.hadoop.mapreduce.counters.GenericCounter.setDisplayName(String)
           
org.apache.hadoop.mapreduce.counters.AbstractCounter.setDisplayName(String)
           
org.apache.hadoop.filecache.DistributedCache.setFileTimestamps(Configuration, String)
           
org.apache.hadoop.filecache.DistributedCache.setLocalArchives(Configuration, String)
           
org.apache.hadoop.filecache.DistributedCache.setLocalFiles(Configuration, String)
           
org.apache.hadoop.mapred.jobcontrol.Job.setMapredJobID(String)
           
org.apache.hadoop.mapred.JobConf.setMaxPhysicalMemoryForTask(long)
           
org.apache.hadoop.mapred.JobConf.setMaxVirtualMemoryForTask(long)
          Use JobConf.setMemoryForMapTask(long mem) and JobConf.setMemoryForReduceTask(long mem) 
org.apache.hadoop.mapred.lib.TotalOrderPartitioner.setPartitionFile(JobConf, Path)
          Use TotalOrderPartitioner.setPartitionFile(Configuration, Path) instead 
org.apache.hadoop.mapred.JobConf.setSessionId(String)
           
org.apache.hadoop.mapred.jobcontrol.Job.setState(int)
           
org.apache.hadoop.mapred.TaskCompletionEvent.setTaskId(String)
          use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead. 
org.apache.hadoop.mapred.TaskCompletionEvent.setTaskID(TaskAttemptID)
          use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead. 
org.apache.hadoop.mapred.JobClient.setTaskOutputFilter(JobClient.TaskStatusFilter)
           
org.apache.hadoop.mapred.Counters.size()
          use AbstractCounters.countCounters() instead 
org.apache.hadoop.mapred.pipes.Submitter.submitJob(JobConf)
          Use Submitter.runJob(JobConf) 
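
Most of the DistributedCache helpers above are replaced by methods on org.apache.hadoop.mapreduce.Job. A migration sketch, not part of the original Javadoc; the file paths and job name are placeholders.

    // Sketch (assumption, not from the Javadoc): distributing side files
    // through org.apache.hadoop.mapreduce.Job instead of the deprecated
    // DistributedCache helpers.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    public class CacheSetupExample {
      static Job configure(Configuration conf) throws Exception {
        // Job.getInstance(Configuration, String) replaces the deprecated Job constructors.
        Job job = Job.getInstance(conf, "cache-example");
        // Replaces DistributedCache.addCacheFile/addCacheArchive(URI, Configuration):
        job.addCacheFile(new URI("/apps/lookup/terms.txt"));
        job.addCacheArchive(new URI("/apps/lookup/dictionaries.zip"));
        // Replaces DistributedCache.addFileToClassPath/addArchiveToClassPath(Path, Configuration):
        job.addFileToClassPath(new Path("/apps/lib/helper.jar"));
        job.addArchiveToClassPath(new Path("/apps/lib/helpers.tgz"));
        // No call replaces createSymlink: symlinks are always created.
        return job;
      }
    }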
 

Deprecated Constructors
org.apache.hadoop.mapred.FileSplit(Path, long, long, JobConf)
            
org.apache.hadoop.mapreduce.Job()
           
org.apache.hadoop.mapreduce.Job(Configuration)
           
org.apache.hadoop.mapreduce.Job(Configuration, String)
           
org.apache.hadoop.mapred.JobProfile(String, String, String, String, String)
          use JobProfile(String, JobID, String, String, String) instead 
org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, float, int, JobPriority)
           
org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, int)
           
org.apache.hadoop.mapred.JobStatus(JobID, float, float, float, int, JobPriority)
           
org.apache.hadoop.mapred.JobStatus(JobID, float, float, int)
           
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent(TaskAttemptID, TaskType, String, long, long, String, String, Counters)
          Use the constructor that takes an additional argument, an array of splits arrays. See ProgressSplitsBlock for an explanation of the meaning of that parameter. 
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent(TaskAttemptID, TaskType, String, long, long, long, String, String, Counters)
          Use the constructor that takes an additional argument, an array of splits arrays. See ProgressSplitsBlock for an explanation of the meaning of that parameter. 
org.apache.hadoop.mapred.TaskAttemptID(String, int, boolean, int, int)
          Use TaskAttemptID.TaskAttemptID(String, int, TaskType, int, int) instead (see the sketch at the end of this list). 
org.apache.hadoop.mapreduce.TaskAttemptID(String, int, boolean, int, int)
           
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent(TaskAttemptID, TaskType, String, long, String, String)
          Use the constructor that takes an additional argument, an array of splits arrays. See ProgressSplitsBlock for an explanation of the meaning of that parameter. 
org.apache.hadoop.mapred.TaskID(JobID, boolean, int)
          Use TaskID.TaskID(String, int, TaskType, int) 
org.apache.hadoop.mapreduce.TaskID(JobID, boolean, int)
           
org.apache.hadoop.mapred.TaskID(String, int, boolean, int)
          Use TaskID.TaskID(org.apache.hadoop.mapreduce.JobID, TaskType, int) 
org.apache.hadoop.mapreduce.TaskID(String, int, boolean, int)
           
 
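
The deprecated boolean "isMap" constructors for TaskID and TaskAttemptID above are replaced by the TaskType-based forms. A construction sketch, not part of the original Javadoc; the identifier values are placeholders.

    // Sketch (assumption, not from the Javadoc): building task identifiers
    // with the TaskType-based constructors instead of the deprecated boolean
    // "isMap" forms.
    import org.apache.hadoop.mapreduce.JobID;
    import org.apache.hadoop.mapreduce.TaskAttemptID;
    import org.apache.hadoop.mapreduce.TaskID;
    import org.apache.hadoop.mapreduce.TaskType;

    public class TaskIdExample {
      public static void main(String[] args) {
        JobID jobId = new JobID("201401011200", 42);
        // Replaces TaskID(JobID, boolean, int):
        TaskID mapTask = new TaskID(jobId, TaskType.MAP, 3);
        // Replaces TaskAttemptID(String, int, boolean, int, int):
        TaskAttemptID attempt =
            new TaskAttemptID("201401011200", 42, TaskType.REDUCE, 7, 0);
        System.out.println(mapTask);   // e.g. task_201401011200_0042_m_000003
        System.out.println(attempt);   // e.g. attempt_201401011200_0042_r_000007_0
      }
    }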

Deprecated Enum Constants
org.apache.hadoop.mapreduce.JobCounter.FALLOW_SLOTS_MILLIS_MAPS
           
org.apache.hadoop.mapreduce.JobCounter.FALLOW_SLOTS_MILLIS_REDUCES
           
org.apache.hadoop.mapreduce.JobCounter.SLOTS_MILLIS_MAPS
           
org.apache.hadoop.mapreduce.JobCounter.SLOTS_MILLIS_REDUCES
           
 



Copyright © 2014 Apache Software Foundation. All Rights Reserved.