Deprecated API


Deprecated Classes
org.apache.hadoop.mapreduce.filecache.DistributedCache
          Use the equivalent methods on Job and JobContext instead.
org.apache.hadoop.mapred.LineRecordReader.LineReader
          Use org.apache.hadoop.util.LineReader instead. 
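
For illustration, a minimal sketch of reading lines with the replacement class; the helper name and stream argument are invented:

    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.util.LineReader;

    public class LineReaderExample {
      // Hypothetical helper: prints every line from a stream using
      // org.apache.hadoop.util.LineReader.
      static void printLines(InputStream in) throws IOException {
        LineReader reader = new LineReader(in);
        Text line = new Text();
        // readLine returns the number of bytes consumed; 0 signals end of stream.
        while (reader.readLine(line) > 0) {
          System.out.println(line);
        }
        reader.close();
      }
    }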
 

Deprecated Enums
org.apache.hadoop.mapred.JobInProgress.Counter
          Provided for compatibility. Use JobCounter instead. 
org.apache.hadoop.mapred.Task.Counter
          Provided for compatibility. Use TaskCounter instead. 
 

Deprecated Fields
org.apache.hadoop.mapreduce.MRJobConfig.CACHE_SYMLINK
          Symlinks are always on and cannot be disabled. 
org.apache.hadoop.mapreduce.server.jobtracker.JTConfig.JT_SUPERGROUP
          Use MR_SUPERGROUP instead 
org.apache.hadoop.mapred.JobConf.MAPRED_MAP_TASK_ULIMIT
          Configuration key to set the maximum virtual memory available to the map tasks (in kilobytes). This has been deprecated and will no longer have any effect. 
org.apache.hadoop.mapred.JobConf.MAPRED_REDUCE_TASK_ULIMIT
          Configuration key to set the maximum virtual memory available to the reduce tasks (in kilobytes). This has been deprecated and will no longer have any effect. 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY
            
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_ENV
          Use JobConf.MAPRED_MAP_TASK_ENV or JobConf.MAPRED_REDUCE_TASK_ENV 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_JAVA_OPTS
          Use JobConf.MAPRED_MAP_TASK_JAVA_OPTS or JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS 
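
As a hedged illustration of the per-task-type keys above, a minimal sketch; the environment and JVM values are made-up placeholders:

    import org.apache.hadoop.mapred.JobConf;

    public class TaskEnvExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Separate settings for map and reduce tasks replace the combined
        // MAPRED_TASK_ENV and MAPRED_TASK_JAVA_OPTS keys.
        conf.set(JobConf.MAPRED_MAP_TASK_ENV, "LD_LIBRARY_PATH=/opt/native");
        conf.set(JobConf.MAPRED_REDUCE_TASK_JAVA_OPTS, "-Xmx512m");
      }
    }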
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_MAXPMEM_PROPERTY
            
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_MAXVMEM_PROPERTY
          Use JobConf.MAPRED_JOB_MAP_MEMORY_MB_PROPERTY and JobConf.MAPRED_JOB_REDUCE_MEMORY_MB_PROPERTY 
org.apache.hadoop.mapred.JobConf.MAPRED_TASK_ULIMIT
          Configuration key to set the maximum virtual memory available to the child map and reduce tasks (in kilobytes). This has been deprecated and will no longer have any effect. 
org.apache.hadoop.mapreduce.MRConfig.MR_SUPERGROUP
           
org.apache.hadoop.mapreduce.server.tasktracker.TTConfig.TT_MEMORY_CALCULATOR_PLUGIN
          Use TTConfig.TT_RESOURCE_CALCULATOR_PLUGIN instead 
org.apache.hadoop.mapred.JobConf.UPPER_LIMIT_ON_TASK_VMEM_PROPERTY
            
 

Deprecated Methods
org.apache.hadoop.mapreduce.filecache.DistributedCache.addArchiveToClassPath(Path, Configuration)
          Use Job.addArchiveToClassPath(Path) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.addCacheArchive(URI, Configuration)
          Use Job.addCacheArchive(URI) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.addCacheFile(URI, Configuration)
          Use Job.addCacheFile(URI) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.addFileToClassPath(Path, Configuration)
          Use Job.addFileToClassPath(Path) instead 
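
A minimal sketch of the Job-based replacements for the four add* methods above; the URIs and paths are invented:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;

    public class CacheSetupExample {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        // Each call replaces the corresponding DistributedCache static method.
        job.addCacheFile(new URI("hdfs:///apps/dict.txt"));
        job.addCacheArchive(new URI("hdfs:///apps/lib.zip"));
        job.addFileToClassPath(new Path("/apps/extra.jar"));
        job.addArchiveToClassPath(new Path("/apps/deps.zip"));
      }
    }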
org.apache.hadoop.mapreduce.Cluster.cancelDelegationToken(Token)
          Use Token.cancel(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.mapred.JobClient.cancelDelegationToken(Token)
          Use Token.cancel(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.mapred.TaskLog.captureOutAndError(List, List, File, File, long, String)
          pidFiles are no longer used. Instead, the pid is exported to the environment variable JVM_PID. 
org.apache.hadoop.mapreduce.OutputCommitter.cleanupJob(JobContext)
          Use OutputCommitter.commitJob(JobContext) and OutputCommitter.abortJob(JobContext, JobStatus.State) instead. 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(JobContext)
           
org.apache.hadoop.mapred.OutputCommitter.cleanupJob(JobContext)
          Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, int) instead. 
org.apache.hadoop.mapred.OutputCommitter.cleanupJob(org.apache.hadoop.mapreduce.JobContext)
          Use OutputCommitter.commitJob(org.apache.hadoop.mapreduce.JobContext) or OutputCommitter.abortJob(org.apache.hadoop.mapreduce.JobContext, org.apache.hadoop.mapreduce.JobStatus.State) instead. 
org.apache.hadoop.mapred.FileOutputCommitter.cleanupJob(JobContext)
           
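
For the cleanupJob replacements above, a sketch of a committer that splits the success and failure paths; the class name and auditing intent are hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.JobStatus;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

    public class AuditingCommitter extends FileOutputCommitter {
      public AuditingCommitter(Path outputPath, TaskAttemptContext context)
          throws IOException {
        super(outputPath, context);
      }
      @Override
      public void commitJob(JobContext context) throws IOException {
        super.commitJob(context); // runs only when the job succeeds
      }
      @Override
      public void abortJob(JobContext context, JobStatus.State state)
          throws IOException {
        super.abortJob(context, state); // runs on failure or kill
      }
    }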
org.apache.hadoop.mapred.Counters.Counter.contentEquals(Counters.Counter)
            
org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, List)
          Use CombineFileInputFormat.createPool(List). 
org.apache.hadoop.mapred.lib.CombineFileInputFormat.createPool(JobConf, PathFilter...)
          Use CombineFileInputFormat.createPool(PathFilter...). 
org.apache.hadoop.mapreduce.Job.createSymlink()
          This is a no-op; symlinks are always created.
org.apache.hadoop.mapreduce.filecache.DistributedCache.createSymlink(Configuration)
          This is a NO-OP. 
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.createValue()
            
org.apache.hadoop.mapred.JobConf.deleteLocalFiles()
           
org.apache.hadoop.mapred.Counters.findCounter(String, int, String)
          use Counters.findCounter(String, String) instead 
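
A small sketch of the two-argument lookup; the group and counter names are invented:

    import org.apache.hadoop.mapred.Counters;

    public class CounterLookup {
      // Hypothetical helper: the (String, String) form replaces the
      // deprecated (String, int, String) variant.
      static long recordsSeen(Counters counters) {
        return counters.findCounter("MyApp", "RECORDS_SEEN").getCounter();
      }
    }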
org.apache.hadoop.mapreduce.Cluster.getAllJobs()
          Use Cluster.getAllJobStatuses() instead. 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getArchiveClassPaths(Configuration)
          Use JobContext.getArchiveClassPaths() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getArchiveTimestamps(Configuration)
          Use JobContext.getArchiveTimestamps() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getCacheArchives(Configuration)
          Use JobContext.getCacheArchives() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getCacheFiles(Configuration)
          Use JobContext.getCacheFiles() instead 
org.apache.hadoop.mapred.Counters.Group.getCounter(int, String)
          use Counters.Group.findCounter(String) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getFileClassPaths(Configuration)
          Use JobContext.getFileClassPaths() instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getFileTimestamps(Configuration)
          Use JobContext.getFileTimestamps() instead 
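
As a hedged sketch of the JobContext-based getters above, a hypothetical mapper that reads cache metadata from the task context in setup:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {
      @Override
      protected void setup(Context context)
          throws IOException, InterruptedException {
        URI[] cacheFiles = context.getCacheFiles();     // was getCacheFiles(conf)
        Path[] classpath = context.getFileClassPaths(); // was getFileClassPaths(conf)
      }
    }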
org.apache.hadoop.mapreduce.Job.getInstance(Cluster)
          Use Job.getInstance() 
org.apache.hadoop.mapreduce.Job.getInstance(Cluster, Configuration)
          Use Job.getInstance(Configuration) 
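
A one-line sketch of the Cluster-free factory methods; the job name is arbitrary:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class JobCreationExample {
      public static void main(String[] args) throws Exception {
        // Replaces the Cluster-based overloads of Job.getInstance.
        Job job = Job.getInstance(new Configuration(), "word-count");
      }
    }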
org.apache.hadoop.mapred.JobClient.getJob(String)
          Applications should use JobClient.getJob(JobID) instead. 
org.apache.hadoop.mapred.JobStatus.getJobId()
          use getJobID() instead 
org.apache.hadoop.mapred.JobProfile.getJobId()
          use getJobID() instead 
org.apache.hadoop.mapred.RunningJob.getJobID()
          This method is deprecated and will be removed. Applications should use RunningJob.getID() instead. 
org.apache.hadoop.mapred.JobID.getJobIDsPattern(String, Integer)
           
org.apache.hadoop.mapreduce.JobContext.getLocalCacheArchives()
          the array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheArchives(). 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getLocalCacheArchives(Configuration)
          Use JobContext.getLocalCacheArchives() instead 
org.apache.hadoop.mapreduce.JobContext.getLocalCacheFiles()
          the array returned only includes the items that were downloaded. There is no way to map this to what is returned by JobContext.getCacheFiles(). 
org.apache.hadoop.mapreduce.filecache.DistributedCache.getLocalCacheFiles(Configuration)
          Use JobContext.getLocalCacheFiles() instead 
org.apache.hadoop.mapred.JobClient.getMapTaskReports(String)
          Applications should use JobClient.getMapTaskReports(JobID) instead 
org.apache.hadoop.mapred.JobConf.getMaxPhysicalMemoryForTask()
          this variable is deprecated and no longer in use. 
org.apache.hadoop.mapred.JobConf.getMaxVirtualMemoryForTask()
          Use JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask() 
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.getPos()
            
org.apache.hadoop.mapred.JobClient.getReduceTaskReports(String)
          Applications should use JobClient.getReduceTaskReports(JobID) instead 
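
As an illustration of the JobID-based lookups, a sketch that resolves a job and its task reports; the job ID string is fictitious:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.TaskReport;

    public class TaskReportExample {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        JobID id = JobID.forName("job_201201010000_0001"); // made-up ID
        RunningJob job = client.getJob(id);                // was getJob(String)
        TaskReport[] maps = client.getMapTaskReports(id);  // was getMapTaskReports(String)
        TaskReport[] reduces = client.getReduceTaskReports(id);
      }
    }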
org.apache.hadoop.mapred.JobConf.getSessionId()
           
org.apache.hadoop.mapreduce.JobContext.getSymlink()
           
org.apache.hadoop.mapreduce.filecache.DistributedCache.getSymlink(Configuration)
          symlinks are always created. 
org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer)
           
org.apache.hadoop.mapred.TaskAttemptID.getTaskAttemptIDsPattern(String, Integer, TaskType, Integer, Integer)
           
org.apache.hadoop.mapred.TaskCompletionEvent.getTaskId()
          use TaskCompletionEvent.getTaskAttemptId() instead. 
org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, Boolean, Integer)
          Use TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer) 
org.apache.hadoop.mapred.TaskID.getTaskIDsPattern(String, Integer, TaskType, Integer)
           
org.apache.hadoop.mapred.JobClient.getTaskOutputFilter()
           
org.apache.hadoop.mapreduce.TaskID.isMap()
           
org.apache.hadoop.mapreduce.TaskAttemptID.isMap()
           
org.apache.hadoop.mapred.RunningJob.killTask(String, boolean)
          Applications should use RunningJob.killTask(TaskAttemptID, boolean) instead 
org.apache.hadoop.mapreduce.lib.db.DBRecordReader.next(LongWritable, T)
          Use DBRecordReader.nextKeyValue() 
org.apache.hadoop.mapred.JobID.read(DataInput)
           
org.apache.hadoop.mapred.TaskID.read(DataInput)
           
org.apache.hadoop.mapred.TaskAttemptID.read(DataInput)
           
org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Token)
          Use Token.renew(org.apache.hadoop.conf.Configuration) instead 
org.apache.hadoop.mapred.JobClient.renewDelegationToken(Token)
          Use Token.renew(org.apache.hadoop.conf.Configuration) instead 
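
A hedged sketch of operating on the token directly, covering both the cancel and renew replacements above; the helper name is invented:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.token.Token;

    public class TokenMaintenance {
      // Hypothetical helper: Token.renew/Token.cancel replace the deprecated
      // Cluster and JobClient delegation-token methods.
      static void renewThenCancel(Token<?> token, Configuration conf)
          throws Exception {
        long nextExpiry = token.renew(conf); // replaces renewDelegationToken(Token)
        token.cancel(conf);                  // replaces cancelDelegationToken(Token)
      }
    }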
org.apache.hadoop.mapred.jobcontrol.Job.setAssignedJobID(JobID)
          setAssignedJobID should not be called. The job ID is set by the framework. 
org.apache.hadoop.mapreduce.filecache.DistributedCache.setCacheArchives(URI[], Configuration)
          Use Job.setCacheArchives(URI[]) instead 
org.apache.hadoop.mapreduce.filecache.DistributedCache.setCacheFiles(URI[], Configuration)
          Use Job.setCacheFiles(URI[]) instead 
org.apache.hadoop.mapreduce.Counter.setDisplayName(String)
          This method is a no-op by default. 
org.apache.hadoop.mapreduce.counters.GenericCounter.setDisplayName(String)
           
org.apache.hadoop.mapreduce.counters.AbstractCounter.setDisplayName(String)
           
org.apache.hadoop.mapred.JobConf.setMaxPhysicalMemoryForTask(long)
           
org.apache.hadoop.mapred.JobConf.setMaxVirtualMemoryForTask(long)
          Use JobConf.setMemoryForMapTask(long mem) and JobConf.setMemoryForReduceTask(long mem) 
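
A brief sketch of the per-task-type setters; the megabyte values are placeholders:

    import org.apache.hadoop.mapred.JobConf;

    public class TaskMemoryExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Memory in MB, set separately per task type, replaces the single
        // virtual-memory limit.
        conf.setMemoryForMapTask(1024L);
        conf.setMemoryForReduceTask(2048L);
      }
    }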
org.apache.hadoop.mapred.JobConf.setSessionId(String)
           
org.apache.hadoop.mapreduce.util.ProcfsBasedProcessTree.setSigKillInterval(long)
          Use ProcfsBasedProcessTree.ProcfsBasedProcessTree(String, boolean, long) instead 
org.apache.hadoop.mapred.TaskCompletionEvent.setTaskId(String)
          use TaskCompletionEvent.setTaskAttemptId(TaskAttemptID) instead. 
org.apache.hadoop.mapred.JobClient.setTaskOutputFilter(JobClient.TaskStatusFilter)
           
org.apache.hadoop.mapred.Counters.size()
          use AbstractCounters.countCounters() instead 
org.apache.hadoop.mapred.pipes.Submitter.submitJob(JobConf)
          Use Submitter.runJob(JobConf) 
 

Deprecated Constructors
org.apache.hadoop.mapred.FileSplit(Path, long, long, JobConf)
            
org.apache.hadoop.mapreduce.Job()
          Use Job.getInstance() instead. 
org.apache.hadoop.mapreduce.Job(Configuration)
          Use Job.getInstance(Configuration) instead. 
org.apache.hadoop.mapreduce.Job(Configuration, String)
          Use Job.getInstance(Configuration, String) instead. 
org.apache.hadoop.mapred.JobProfile(String, String, String, String, String)
          use JobProfile(String, JobID, String, String, String) instead 
org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent(TaskAttemptID, TaskType, String, long, long, String, String, Counters)
          Please use the constructor that takes an additional argument, an array of split arrays. See ProgressSplitsBlock for an explanation of the meaning of that parameter. 
org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent(TaskAttemptID, TaskType, String, long, long, long, String, String, Counters)
          Please use the constructor that takes an additional argument, an array of split arrays. See ProgressSplitsBlock for an explanation of the meaning of that parameter. 
org.apache.hadoop.mapred.TaskAttemptID(String, int, boolean, int, int)
          Use TaskAttemptID.TaskAttemptID(String, int, TaskType, int, int). 
org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent(TaskAttemptID, TaskType, String, long, String, String)
          Please use the constructor that takes an additional argument, an array of split arrays. See ProgressSplitsBlock for an explanation of the meaning of that parameter. 
org.apache.hadoop.mapred.TaskID(JobID, boolean, int)
          Use TaskID.TaskID(org.apache.hadoop.mapreduce.JobID, TaskType, int) 
org.apache.hadoop.mapred.TaskID(String, int, boolean, int)
          Use TaskID.TaskID(String, int, TaskType, int) 
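
A short sketch of the TaskType-based constructors; the identifiers and indices are invented:

    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.TaskAttemptID;
    import org.apache.hadoop.mapred.TaskID;
    import org.apache.hadoop.mapreduce.TaskType;

    public class IdExample {
      public static void main(String[] args) {
        JobID jobId = new JobID("201201010000", 1);
        TaskID taskId = new TaskID(jobId, TaskType.MAP, 3);
        TaskAttemptID attemptId =
            new TaskAttemptID("201201010000", 1, TaskType.REDUCE, 3, 0);
      }
    }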
 



Copyright © 2012 Apache Software Foundation. All Rights Reserved.