org.apache.accumulo.core.client.mapred
Class AccumuloMultiTableInputFormat
java.lang.Object
org.apache.accumulo.core.client.mapred.AbstractInputFormat<Key,Value>
org.apache.accumulo.core.client.mapred.AccumuloMultiTableInputFormat
- All Implemented Interfaces:
- org.apache.hadoop.mapred.InputFormat<Key,Value>
public class AccumuloMultiTableInputFormat
- extends AbstractInputFormat<Key,Value>
This class allows MapReduce jobs to use multiple Accumulo tables as the source of data. This InputFormat provides keys and values of type Key and Value to the Map function.
The user must specify the following via static configurator methods:
- AbstractInputFormat.setConnectorInfo(JobConf, String, AuthenticationToken)
- AbstractInputFormat.setScanAuthorizations(JobConf, Authorizations)
- AbstractInputFormat.setZooKeeperInstance(JobConf, ClientConfiguration) OR AbstractInputFormat.setMockInstance(JobConf, String)
- setInputTableConfigs(JobConf, Map)
Other static methods are optional.
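For illustration, a minimal job setup using this input format might look like the sketch below. The instance name, ZooKeeper hosts, principal, password, and table names ("table1", "table2") are placeholder assumptions, not values prescribed by this API:

import java.util.HashMap;
import java.util.Map;

import org.apache.accumulo.core.client.ClientConfiguration;
import org.apache.accumulo.core.client.mapred.AccumuloMultiTableInputFormat;
import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.mapred.JobConf;

public class MultiTableJobSetup {
  public static void configure(JobConf job) throws Exception {
    // Connector info: principal and token (hypothetical credentials).
    AccumuloMultiTableInputFormat.setConnectorInfo(job, "user", new PasswordToken("pass"));

    // Instance: ZooKeeper-backed Accumulo instance (hypothetical instance name and hosts).
    AccumuloMultiTableInputFormat.setZooKeeperInstance(job,
        ClientConfiguration.loadDefault().withInstance("myInstance").withZkHosts("zk1:2181"));

    // Scan authorizations applied to every table read by this job.
    AccumuloMultiTableInputFormat.setScanAuthorizations(job, new Authorizations("public"));

    // One InputTableConfig per source table.
    Map<String,InputTableConfig> configs = new HashMap<String,InputTableConfig>();
    configs.put("table1", new InputTableConfig());
    configs.put("table2", new InputTableConfig());
    AccumuloMultiTableInputFormat.setInputTableConfigs(job, configs);

    job.setInputFormat(AccumuloMultiTableInputFormat.class);
  }
}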
Methods inherited from class org.apache.accumulo.core.client.mapred.AbstractInputFormat:
getAuthenticationToken, getInputTableConfig, getInputTableConfigs, getInstance, getLogLevel, getPrincipal, getScanAuthorizations, getSplits, getTabletLocator, getToken, getTokenClass, isConnectorInfoSet, setConnectorInfo, setConnectorInfo, setLogLevel, setMockInstance, setScanAuthorizations, setZooKeeperInstance, setZooKeeperInstance, validateOptions
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
AccumuloMultiTableInputFormat
public AccumuloMultiTableInputFormat()
setInputTableConfigs
public static void setInputTableConfigs(org.apache.hadoop.mapred.JobConf job,
Map<String,InputTableConfig> configs)
- Sets the InputTableConfig objects on the given Hadoop configuration.
- Parameters:
  - job - the Hadoop job instance to be configured
  - configs - the table query configs to be set on the configuration
- Since:
  - 1.6.0
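As an illustration, each InputTableConfig can carry per-table scan settings (ranges, fetched columns, range auto-adjustment) before being passed to this method. The table names ("events", "users"), range bounds, and column family below are hypothetical:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.accumulo.core.client.mapred.AccumuloMultiTableInputFormat;
import org.apache.accumulo.core.client.mapreduce.InputTableConfig;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.util.Pair;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

public class TableConfigExample {
  public static void setConfigs(JobConf job) {
    // "events": restrict the scan to one row range and a single column family.
    InputTableConfig events = new InputTableConfig();
    events.setRanges(Collections.singletonList(new Range("2015-01", "2015-12")));
    events.fetchColumns(Collections.singletonList(new Pair<Text,Text>(new Text("meta"), null)));

    // "users": scan the whole table; auto-adjust merges overlapping ranges and
    // splits them on tablet boundaries.
    InputTableConfig users = new InputTableConfig();
    users.setAutoAdjustRanges(true);

    Map<String,InputTableConfig> configs = new HashMap<String,InputTableConfig>();
    configs.put("events", events);
    configs.put("users", users);
    AccumuloMultiTableInputFormat.setInputTableConfigs(job, configs);
  }
}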
getRecordReader
public org.apache.hadoop.mapred.RecordReader<Key,Value> getRecordReader(org.apache.hadoop.mapred.InputSplit split,
org.apache.hadoop.mapred.JobConf job,
org.apache.hadoop.mapred.Reporter reporter)
throws IOException
- Throws:
IOException
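The record reader returned here feeds one Accumulo entry at a time to the map function as a Key/Value pair. A minimal old-API (org.apache.hadoop.mapred) mapper consuming that input might look like this sketch; the output types and the row-counting logic are illustrative only:

import java.io.IOException;

import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class RowCountMapper extends MapReduceBase implements Mapper<Key,Value,Text,LongWritable> {
  private static final LongWritable ONE = new LongWritable(1);

  @Override
  public void map(Key key, Value value, OutputCollector<Text,LongWritable> output, Reporter reporter)
      throws IOException {
    // Each record handed to the mapper by the record reader is one Accumulo entry;
    // emit the row ID with a count of one.
    output.collect(key.getRow(), ONE);
  }
}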
Copyright © 2015 Apache Accumulo Project. All rights reserved.