| Modifier and Type | Method and Description |
|---|---|
| `BatchDeleter` | `AccumuloClient.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads)`. Factory method to create a BatchDeleter. |
| `abstract BatchDeleter` | `Connector.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)`. *Deprecated.* Factory method to create a BatchDeleter. |
| `BatchDeleter` | `AccumuloClient.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)`. Factory method to create a BatchDeleter. |
| `abstract BatchDeleter` | `Connector.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, long maxMemory, long maxLatency, int maxWriteThreads)`. *Deprecated.* Since 1.5.0; use `Connector.createBatchDeleter(String, Authorizations, int, BatchWriterConfig)` instead. |
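The `createBatchDeleter` factory methods above can be exercised as in this minimal sketch. The table name, range endpoints, and thread count are hypothetical, and the `AccumuloClient` is assumed to have been obtained elsewhere (e.g. from `Accumulo.newClient()`); the deprecated `Connector` variants behave similarly but should be avoided in new code.

```java
import java.util.Collections;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchDeleter;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.security.Authorizations;

class BatchDeleterExample {
  // Deletes every entry whose row falls within [startRow, endRow] of the
  // given table, scanning with 4 query threads.
  static void deleteRowRange(AccumuloClient client, String table,
      String startRow, String endRow)
      throws TableNotFoundException, MutationsRejectedException {
    try (BatchDeleter deleter =
        client.createBatchDeleter(table, Authorizations.EMPTY, 4)) {
      deleter.setRanges(Collections.singletonList(new Range(startRow, endRow)));
      deleter.delete();
    }
  }
}
```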
| `BatchScanner` | `AccumuloClient.createBatchScanner(String tableName)`. Factory method to create a BatchScanner with all of the user's authorizations and the number of query threads configured when the AccumuloClient was created. |
| `BatchScanner` | `AccumuloClient.createBatchScanner(String tableName, Authorizations authorizations)`. Factory method to create a BatchScanner connected to Accumulo. |
| `abstract BatchScanner` | `Connector.createBatchScanner(String tableName, Authorizations authorizations, int numQueryThreads)`. *Deprecated.* Factory method to create a BatchScanner connected to Accumulo. |
| `BatchScanner` | `AccumuloClient.createBatchScanner(String tableName, Authorizations authorizations, int numQueryThreads)`. Factory method to create a BatchScanner connected to Accumulo. |
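A BatchScanner fetches many ranges in parallel but returns entries in no particular order. The sketch below, with a hypothetical table name and thread count, scans the entire table by passing a single unbounded `Range`:

```java
import java.util.Collections;
import java.util.Map.Entry;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

class BatchScannerExample {
  // Reads all entries of the table in parallel with 8 query threads.
  static void printAll(AccumuloClient client, String table)
      throws TableNotFoundException {
    try (BatchScanner scanner =
        client.createBatchScanner(table, Authorizations.EMPTY, 8)) {
      scanner.setRanges(Collections.singletonList(new Range())); // all rows
      for (Entry<Key,Value> e : scanner) {
        System.out.println(e.getKey() + " -> " + e.getValue());
      }
    }
  }
}
```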
| `BatchWriter` | `AccumuloClient.createBatchWriter(String tableName)`. Factory method to create a BatchWriter. |
| `abstract BatchWriter` | `Connector.createBatchWriter(String tableName, BatchWriterConfig config)`. *Deprecated.* Factory method to create a BatchWriter connected to Accumulo. |
| `BatchWriter` | `AccumuloClient.createBatchWriter(String tableName, BatchWriterConfig config)`. Factory method to create a BatchWriter connected to Accumulo. |
| `abstract BatchWriter` | `Connector.createBatchWriter(String tableName, long maxMemory, long maxLatency, int maxWriteThreads)`. *Deprecated.* Since 1.5.0; use `Connector.createBatchWriter(String, BatchWriterConfig)` instead. |
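Note that the raw `maxMemory`/`maxLatency`/`maxWriteThreads` parameters of the deprecated overload moved into `BatchWriterConfig`. A minimal sketch, with illustrative row/column/value names and buffer sizes:

```java
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.MutationsRejectedException;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.data.Mutation;

class BatchWriterExample {
  // Writes a single cell; the config values shown are arbitrary examples.
  static void writeOne(AccumuloClient client, String table)
      throws TableNotFoundException, MutationsRejectedException {
    BatchWriterConfig cfg = new BatchWriterConfig()
        .setMaxMemory(10_000_000L)  // buffer up to ~10 MB before flushing
        .setMaxWriteThreads(4);
    try (BatchWriter writer = client.createBatchWriter(table, cfg)) {
      Mutation m = new Mutation("row1");
      m.put("family", "qualifier", "value");
      writer.addMutation(m);
    } // close() flushes any buffered mutations
  }
}
```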
| `abstract ConditionalWriter` | `Connector.createConditionalWriter(String tableName, ConditionalWriterConfig config)`. *Deprecated.* Factory method to create a ConditionalWriter connected to Accumulo. |
| `ConditionalWriter` | `AccumuloClient.createConditionalWriter(String tableName, ConditionalWriterConfig config)`. Factory method to create a ConditionalWriter connected to Accumulo. |
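A ConditionalWriter applies a mutation only if its conditions hold, which enables simple compare-and-set patterns. In this sketch the `meta`/`lock` column names are hypothetical; a `Condition` with no value set requires the column to be absent, so the write acts as an optimistic lock acquisition:

```java
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.ConditionalWriter;
import org.apache.accumulo.core.client.ConditionalWriterConfig;
import org.apache.accumulo.core.data.Condition;
import org.apache.accumulo.core.data.ConditionalMutation;

class ConditionalWriterExample {
  // Sets meta:lock only if it is currently absent; returns true on success.
  static boolean tryLock(AccumuloClient client, String table, String row)
      throws Exception {
    ConditionalWriter writer =
        client.createConditionalWriter(table, new ConditionalWriterConfig());
    try {
      ConditionalMutation cm = new ConditionalMutation(row,
          new Condition("meta", "lock")); // condition: column must be absent
      cm.put("meta", "lock", "held");
      return writer.write(cm).getStatus() == ConditionalWriter.Status.ACCEPTED;
    } finally {
      writer.close();
    }
  }
}
```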
| `Scanner` | `AccumuloClient.createScanner(String tableName)`. Factory method to create a Scanner with all of the user's authorizations. |
| `abstract Scanner` | `Connector.createScanner(String tableName, Authorizations authorizations)`. *Deprecated.* Factory method to create a Scanner connected to Accumulo. |
| `Scanner` | `AccumuloClient.createScanner(String tableName, Authorizations authorizations)`. Factory method to create a Scanner connected to Accumulo. |
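Unlike a BatchScanner, a plain Scanner reads a single range sequentially and in sorted order. A minimal sketch with a hypothetical table and row:

```java
import java.util.Map.Entry;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.TableNotFoundException;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

class ScannerExample {
  // Reads every entry in one row, in sorted column order.
  static void printRow(AccumuloClient client, String table, String row)
      throws TableNotFoundException {
    try (Scanner scanner = client.createScanner(table, Authorizations.EMPTY)) {
      scanner.setRange(Range.exact(row));
      for (Entry<Key,Value> e : scanner) {
        System.out.println(e.getKey().getColumnQualifier() + " = " + e.getValue());
      }
    }
  }
}
```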
| `void` | `BatchDeleter.delete()`. Deletes the ranges specified by `BatchDeleter.setRanges(java.util.Collection<org.apache.accumulo.core.data.Range>)`. |
| `BatchWriter` | `MultiTableBatchWriter.getBatchWriter(String table)`. Returns a BatchWriter for a particular table. |
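`MultiTableBatchWriter.getBatchWriter` hands out per-table writers that share one buffer, so mutations for several tables can be batched together. A sketch with hypothetical table names; it assumes the no-argument `AccumuloClient.createMultiTableBatchWriter()` factory, which is not listed in the table above:

```java
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.MultiTableBatchWriter;
import org.apache.accumulo.core.data.Mutation;

class MultiTableExample {
  // Writes the same mutation to two tables through one shared buffer.
  static void writeBoth(AccumuloClient client) throws Exception {
    MultiTableBatchWriter mtbw = client.createMultiTableBatchWriter();
    try {
      BatchWriter a = mtbw.getBatchWriter("table_a");
      BatchWriter b = mtbw.getBatchWriter("table_b");
      Mutation m = new Mutation("row1");
      m.put("f", "q", "v");
      a.addMutation(m);
      b.addMutation(m);
    } finally {
      mtbw.close(); // flushes the writers for all tables
    }
  }
}
```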
| Modifier and Type | Method and Description |
|---|---|
| `int` | `TableOperations.addConstraint(String tableName, String constraintClassName)`. Add a new constraint to a table. |
| `void` | `TableOperations.addSplits(String tableName, SortedSet<org.apache.hadoop.io.Text> partitionKeys)`. Ensures that tablets are split along a set of keys. |
| `void` | `TableOperations.addSummarizers(String tableName, SummarizerConfiguration... summarizers)`. Enables summary generation for this table for future compactions. |
| `void` | `TableOperations.attachIterator(String tableName, IteratorSetting setting)`. Add an iterator to a table on all scopes. |
| `void` | `TableOperations.attachIterator(String tableName, IteratorSetting setting, EnumSet<IteratorUtil.IteratorScope> scopes)`. Add an iterator to a table on the given scopes. |
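Attaching a server-side iterator typically means building an `IteratorSetting` and handing it to `attachIterator`, checking for name/priority conflicts first. The priority, iterator name, and one-day TTL below are arbitrary examples; `AgeOffFilter` is one of the stock iterators shipped with Accumulo:

```java
import java.util.EnumSet;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.client.admin.TableOperations;
import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
import org.apache.accumulo.core.iterators.user.AgeOffFilter;

class IteratorExample {
  // Attaches an age-off filter (priority 25, ~1 day TTL) at scan scope only,
  // so older entries are hidden from scans but still present on disk.
  static void addAgeOff(AccumuloClient client, String table) throws Exception {
    IteratorSetting setting = new IteratorSetting(25, "ageoff", AgeOffFilter.class);
    AgeOffFilter.setTTL(setting, 86_400_000L); // TTL in milliseconds
    TableOperations ops = client.tableOperations();
    ops.checkIteratorConflicts(table, setting, EnumSet.of(IteratorScope.scan));
    ops.attachIterator(table, setting, EnumSet.of(IteratorScope.scan));
  }
}
```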
| `void` | `TableOperations.cancelCompaction(String tableName)`. Cancels a user-initiated major compaction of a table initiated with `TableOperations.compact(String, Text, Text, boolean, boolean)` or `TableOperations.compact(String, Text, Text, List, boolean, boolean)`. |
| `void` | `TableOperations.checkIteratorConflicts(String tableName, IteratorSetting setting, EnumSet<IteratorUtil.IteratorScope> scopes)`. Check whether a given iterator configuration conflicts with existing configuration; in particular, determine whether the name or priority is already in use for the specified scopes. |
| `void` | `TableOperations.clearLocatorCache(String tableName)`. Clears the tablet locator cache for a specified table. |
| `void` | `TableOperations.clearSamplerConfiguration(String tableName)`. Clear all sampling configuration properties on the table. |
| `void` | `TableOperations.clone(String srcTableName, String newTableName, boolean flush, Map<String,String> propertiesToSet, Set<String> propertiesToExclude)`. Clone a table from an existing table. |
| `void` | `TableOperations.compact(String tableName, CompactionConfig config)`. Starts a full major compaction of the tablets in the range (start, end]. |
| `void` | `TableOperations.compact(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end, boolean flush, boolean wait)`. Starts a full major compaction of the tablets in the range (start, end]. |
| `void` | `TableOperations.compact(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end, List<IteratorSetting> iterators, boolean flush, boolean wait)`. Starts a full major compaction of the tablets in the range (start, end]. |
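The `CompactionConfig` overload above subsumes the positional `(start, end, flush, wait)` variants. A sketch that compacts an entire hypothetical table, flushing in-memory data first and blocking until the compaction finishes:

```java
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.admin.CompactionConfig;

class CompactExample {
  // Forces a full major compaction of the whole table (null start/end rows),
  // flushing memory first and waiting for completion.
  static void compactAll(AccumuloClient client, String table) throws Exception {
    client.tableOperations().compact(table,
        new CompactionConfig().setFlush(true).setWait(true));
  }
}
```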
| `void` | `TableOperations.delete(String tableName)`. Delete a table. |
| `void` | `TableOperations.deleteRows(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end)`. Delete rows between (start, end]. |
| `void` | `ReplicationOperations.drain(String tableName)`. Waits for a table to be fully replicated, given the state of files pending replication for the provided table at the point in time at which this method is invoked. |
| `void` | `ReplicationOperations.drain(String tableName, Set<String> files)`. Given the provided set of files that are pending replication for a table, waits for those files to be fully replicated to all configured peers. |
| `void` | `TableOperations.exportTable(String tableName, String exportDir)`. Exports a table. |
| `static org.apache.hadoop.io.Text` | `FindMax.findMax(Scanner scanner, org.apache.hadoop.io.Text start, boolean is, org.apache.hadoop.io.Text end, boolean ie)` |
| `void` | `TableOperations.flush(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end, boolean wait)`. Flush a table's data that is currently in memory. |
| `List<DiskUsage>` | `TableOperations.getDiskUsage(Set<String> tables)`. Gets the number of bytes being used in the files for a set of tables. |
| `IteratorSetting` | `TableOperations.getIteratorSetting(String tableName, String name, IteratorUtil.IteratorScope scope)`. Get the settings for an iterator. |
| `Map<String,Set<org.apache.hadoop.io.Text>>` | `TableOperations.getLocalityGroups(String tableName)`. Gets the locality groups currently set for a table. |
| `org.apache.hadoop.io.Text` | `TableOperations.getMaxRow(String tableName, Authorizations auths, org.apache.hadoop.io.Text startRow, boolean startInclusive, org.apache.hadoop.io.Text endRow, boolean endInclusive)`. Finds the max row within a given range. |
| `Iterable<Map.Entry<String,String>>` | `TableOperations.getProperties(String tableName)`. Gets properties of a table. |
| `SamplerConfiguration` | `TableOperations.getSamplerConfiguration(String tableName)`. Reads the sampling configuration properties for a table. |
| `abstract String` | `ActiveCompaction.getTable()` |
| `void` | `TableOperations.importDirectory(String tableName, String dir, String failureDir, boolean setTime)`. *Deprecated.* Since 2.0.0; use `TableOperations.importDirectory(String)` instead. |
| `Map<String,Integer>` | `TableOperations.listConstraints(String tableName)`. List constraints on a table with their assigned numbers. |
| `Map<String,EnumSet<IteratorUtil.IteratorScope>>` | `TableOperations.listIterators(String tableName)`. Get a list of iterators for this table. |
| `Collection<org.apache.hadoop.io.Text>` | `TableOperations.listSplits(String tableName)` |
| `Collection<org.apache.hadoop.io.Text>` | `TableOperations.listSplits(String tableName, int maxSplits)` |
| `List<SummarizerConfiguration>` | `TableOperations.listSummarizers(String tableName)` |
| `void` | `TableOperations.ImportOptions.load()`. Loads the files into the table. |
| `Locations` | `TableOperations.locate(String tableName, Collection<Range> ranges)`. Locates the tablet servers and tablets that would service a collection of ranges. |
|
void |
TableOperations.merge(String tableName,
org.apache.hadoop.io.Text start,
org.apache.hadoop.io.Text end)
Merge tablets between (start, end]
|
void |
TableOperations.offline(String tableName)
Initiates taking a table offline, but does not wait for action to complete
|
void |
TableOperations.offline(String tableName,
boolean wait) |
void |
TableOperations.online(String tableName)
Initiates bringing a table online, but does not wait for action to complete
|
void |
TableOperations.online(String tableName,
boolean wait) |
Set<String> |
ReplicationOperations.referencedFiles(String tableName)
Gets all of the referenced files for a table from the metadata table.
|
void |
TableOperations.removeIterator(String tableName,
String name,
EnumSet<IteratorUtil.IteratorScope> scopes)
Remove an iterator from a table by name.
|
void |
TableOperations.removeSummarizers(String tableName,
Predicate<SummarizerConfiguration> predicate)
Removes summary generation for this table for the matching summarizers.
|
void |
TableOperations.rename(String oldTableName,
String newTableName)
Rename a table
|
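`addSplits` (listed earlier) and `merge` are inverse tablet-management operations. The split points below are illustrative; passing `null` for both endpoints of `merge` merges the whole table back into a single tablet:

```java
import java.util.TreeSet;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.hadoop.io.Text;

class SplitMergeExample {
  // Pre-splits a table at rows "g" and "p", then merges all tablets back
  // together by giving merge() a null start and end row.
  static void splitThenMerge(AccumuloClient client, String table)
      throws Exception {
    TreeSet<Text> splits = new TreeSet<>();
    splits.add(new Text("g"));
    splits.add(new Text("p"));
    client.tableOperations().addSplits(table, splits);
    // ... later, undo the pre-splitting:
    client.tableOperations().merge(table, null, null);
  }
}
```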
| `List<Summary>` | `SummaryRetriever.retrieve()` |
| `void` | `TableOperations.setLocalityGroups(String tableName, Map<String,Set<org.apache.hadoop.io.Text>> groups)`. Sets a table's locality groups. |
| `void` | `TableOperations.setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration)`. Set or update the sampler configuration for a table. |
| `Set<Range>` | `TableOperations.splitRangeByTablets(String tableName, Range range, int maxSplits)` |
| `SummaryRetriever` | `TableOperations.summaries(String tableName)`. Entry point for retrieving summaries with optional restrictions. |
| `boolean` | `TableOperations.testClassLoad(String tableName, String className, String asTypeName)`. Test to see if the instance can load the given class as the given type. |
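`setLocalityGroups` partitions column families into groups stored separately on disk, so scans restricted to one group avoid reading the others. The group names and column families below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.hadoop.io.Text;

class LocalityGroupExample {
  // Separates small "meta" columns from bulky "content" columns so that
  // metadata-only scans never touch the content files.
  static void configure(AccumuloClient client, String table) throws Exception {
    Map<String,Set<Text>> groups = new HashMap<>();
    groups.put("metadata", Set.of(new Text("meta")));
    groups.put("payload", Set.of(new Text("content")));
    client.tableOperations().setLocalityGroups(table, groups);
  }
}
```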
| Modifier and Type | Method and Description |
|---|---|
| `static Table.ID` | `Tables._getTableId(ClientContext context, String tableName)`. Lookup table ID in ZK. |
| `protected TabletLocator.TabletLocation` | `TabletLocatorImpl._locateTablet(ClientContext context, org.apache.hadoop.io.Text row, boolean skipRow, boolean retry, boolean lock, org.apache.accumulo.core.clientImpl.TabletLocatorImpl.LockCheckerSession lcSession)` |
| `int` | `TableOperationsImpl.addConstraint(String tableName, String constraintClassName)` |
| `int` | `TableOperationsHelper.addConstraint(String tableName, String constraintClassName)` |
| `void` | `TableOperationsImpl.addSplits(String tableName, SortedSet<org.apache.hadoop.io.Text> partitionKeys)` |
| `void` | `TableOperationsImpl.addSummarizers(String tableName, SummarizerConfiguration... newConfigs)` |
| `void` | `TableOperationsHelper.attachIterator(String tableName, IteratorSetting setting)` |
| `void` | `TableOperationsImpl.attachIterator(String tableName, IteratorSetting setting, EnumSet<IteratorUtil.IteratorScope> scopes)` |
| `void` | `TableOperationsHelper.attachIterator(String tableName, IteratorSetting setting, EnumSet<IteratorUtil.IteratorScope> scopes)` |
| `<T extends Mutation> void` | `TimeoutTabletLocator.binMutations(ClientContext context, List<T> mutations, Map<String,TabletLocator.TabletServerMutations<T>> binnedMutations, List<T> failures)` |
| `<T extends Mutation> void` | `TabletLocatorImpl.binMutations(ClientContext context, List<T> mutations, Map<String,TabletLocator.TabletServerMutations<T>> binnedMutations, List<T> failures)` |
| `abstract <T extends Mutation> void` | `TabletLocator.binMutations(ClientContext context, List<T> mutations, Map<String,TabletLocator.TabletServerMutations<T>> binnedMutations, List<T> failures)` |
| `<T extends Mutation> void` | `SyncingTabletLocator.binMutations(ClientContext context, List<T> mutations, Map<String,TabletLocator.TabletServerMutations<T>> binnedMutations, List<T> failures)` |
| `List<Range>` | `TimeoutTabletLocator.binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges)` |
| `List<Range>` | `TabletLocatorImpl.binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges)` |
| `abstract List<Range>` | `TabletLocator.binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges)` |
| `List<Range>` | `SyncingTabletLocator.binRanges(ClientContext context, List<Range> ranges, Map<String,Map<KeyExtent,List<Range>>> binnedRanges)` |
| `void` | `TableOperationsImpl.cancelCompaction(String tableName)` |
| `void` | `TableOperationsHelper.checkIteratorConflicts(String tableName, IteratorSetting setting, EnumSet<IteratorUtil.IteratorScope> scopes)` |
| `void` | `TableOperationsImpl.clearLocatorCache(String tableName)` |
| `void` | `TableOperationsImpl.clearSamplerConfiguration(String tableName)` |
| `void` | `TableOperationsImpl.clone(String srcTableName, String newTableName, boolean flush, Map<String,String> propertiesToSet, Set<String> propertiesToExclude)` |
| `void` | `TableOperationsImpl.compact(String tableName, CompactionConfig config)` |
| `void` | `TableOperationsImpl.compact(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end, boolean flush, boolean wait)` |
| `void` | `TableOperationsImpl.compact(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end, List<IteratorSetting> iterators, boolean flush, boolean wait)` |
| `BatchDeleter` | `ClientContext.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads)` |
| `BatchDeleter` | `ConnectorImpl.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)`. *Deprecated.* |
| `BatchDeleter` | `ClientContext.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, BatchWriterConfig config)` |
| `BatchDeleter` | `ConnectorImpl.createBatchDeleter(String tableName, Authorizations authorizations, int numQueryThreads, long maxMemory, long maxLatency, int maxWriteThreads)`. *Deprecated.* |
| `BatchScanner` | `ClientContext.createBatchScanner(String tableName)` |
| `BatchScanner` | `ClientContext.createBatchScanner(String tableName, Authorizations authorizations)` |
| `BatchScanner` | `ConnectorImpl.createBatchScanner(String tableName, Authorizations authorizations, int numQueryThreads)`. *Deprecated.* |
| `BatchScanner` | `ClientContext.createBatchScanner(String tableName, Authorizations authorizations, int numQueryThreads)` |
| `BatchWriter` | `ClientContext.createBatchWriter(String tableName)` |
| `BatchWriter` | `ConnectorImpl.createBatchWriter(String tableName, BatchWriterConfig config)`. *Deprecated.* |
| `BatchWriter` | `ClientContext.createBatchWriter(String tableName, BatchWriterConfig config)` |
| `BatchWriter` | `ConnectorImpl.createBatchWriter(String tableName, long maxMemory, long maxLatency, int maxWriteThreads)`. *Deprecated.* |
| `ConditionalWriter` | `ConnectorImpl.createConditionalWriter(String tableName, ConditionalWriterConfig config)`. *Deprecated.* |
| `ConditionalWriter` | `ClientContext.createConditionalWriter(String tableName, ConditionalWriterConfig config)` |
| `Scanner` | `ClientContext.createScanner(String tableName)` |
| `Scanner` | `ConnectorImpl.createScanner(String tableName, Authorizations authorizations)`. *Deprecated.* |
| `Scanner` | `ClientContext.createScanner(String tableName, Authorizations authorizations)` |
| `void` | `TableOperationsImpl.delete(String tableName)` |
| `void` | `TableOperationsImpl.deleteRows(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end)` |
| `void` | `ReplicationOperationsImpl.drain(String tableName)` |
| `void` | `ReplicationOperationsImpl.drain(String tableName, Set<String> wals)` |
| `static <T> T` | `MasterClient.execute(ClientContext context, ClientExecReturn<T,MasterClientService.Client> exec)` |
| `static void` | `MasterClient.executeGeneric(ClientContext context, ClientExec<MasterClientService.Client> exec)` |
| `static void` | `MasterClient.executeTable(ClientContext context, ClientExec<MasterClientService.Client> exec)` |
| `void` | `TableOperationsImpl.exportTable(String tableName, String exportDir)` |
| `void` | `TableOperationsImpl.flush(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end, boolean wait)` |
| `BatchWriter` | `MultiTableBatchWriterImpl.getBatchWriter(String tableName)` |
| `List<DiskUsage>` | `TableOperationsImpl.getDiskUsage(Set<String> tableNames)` |
| `IteratorSetting` | `TableOperationsHelper.getIteratorSetting(String tableName, String name, IteratorUtil.IteratorScope scope)` |
| `Map<String,Set<org.apache.hadoop.io.Text>>` | `TableOperationsImpl.getLocalityGroups(String tableName)` |
| `protected boolean` | `ReplicationOperationsImpl.getMasterDrain(TInfo tinfo, TCredentials rpcCreds, String tableName, Set<String> wals)` |
| `org.apache.hadoop.io.Text` | `TableOperationsImpl.getMaxRow(String tableName, Authorizations auths, org.apache.hadoop.io.Text startRow, boolean startInclusive, org.apache.hadoop.io.Text endRow, boolean endInclusive)` |
| `static Namespace.ID` | `Tables.getNamespaceId(ClientContext context, Table.ID tableId)`. Returns the namespace ID for a given table ID. |
| `Iterable<Map.Entry<String,String>>` | `TableOperationsImpl.getProperties(String tableName)` |
| `SamplerConfiguration` | `TableOperationsImpl.getSamplerConfiguration(String tableName)` |
| `String` | `ActiveCompactionImpl.getTable()` |
| `protected Table.ID` | `ReplicationOperationsImpl.getTableId(AccumuloClient client, String tableName)` |
| `static Table.ID` | `Tables.getTableId(ClientContext context, String tableName)`. Lookup table ID in ZK. |
| `static String` | `Tables.getTableName(ClientContext context, Table.ID tableId)` |
| `void` | `TableOperationsImpl.importDirectory(String tableName, String dir, String failureDir, boolean setTime)`. *Deprecated.* |
| `Map<String,Integer>` | `TableOperationsHelper.listConstraints(String tableName)` |
| `Map<String,EnumSet<IteratorUtil.IteratorScope>>` | `TableOperationsHelper.listIterators(String tableName)` |
| `Collection<org.apache.hadoop.io.Text>` | `TableOperationsImpl.listSplits(String tableName)` |
| `Collection<org.apache.hadoop.io.Text>` | `TableOperationsImpl.listSplits(String tableName, int maxSplits)` |
| `List<SummarizerConfiguration>` | `TableOperationsImpl.listSummarizers(String tableName)` |
| `Locations` | `TableOperationsImpl.locate(String tableName, Collection<Range> ranges)` |
| `TabletLocator.TabletLocation` | `TimeoutTabletLocator.locateTablet(ClientContext context, org.apache.hadoop.io.Text row, boolean skipRow, boolean retry)` |
| `TabletLocator.TabletLocation` | `TabletLocatorImpl.locateTablet(ClientContext context, org.apache.hadoop.io.Text row, boolean skipRow, boolean retry)` |
| `abstract TabletLocator.TabletLocation` | `TabletLocator.locateTablet(ClientContext context, org.apache.hadoop.io.Text row, boolean skipRow, boolean retry)` |
| `TabletLocator.TabletLocation` | `SyncingTabletLocator.locateTablet(ClientContext context, org.apache.hadoop.io.Text row, boolean skipRow, boolean retry)` |
| `void` | `TableOperationsImpl.merge(String tableName, org.apache.hadoop.io.Text start, org.apache.hadoop.io.Text end)` |
| `void` | `TableOperationsImpl.offline(String tableName)` |
| `void` | `TableOperationsImpl.offline(String tableName, boolean wait)` |
| `void` | `TableOperationsImpl.online(String tableName)` |
| `void` | `TableOperationsImpl.online(String tableName, boolean wait)` |
| `Set<String>` | `ReplicationOperationsImpl.referencedFiles(String tableName)` |
| `void` | `TableOperationsHelper.removeIterator(String tableName, String name, EnumSet<IteratorUtil.IteratorScope> scopes)` |
| `void` | `TableOperationsImpl.removeSummarizers(String tableName, Predicate<SummarizerConfiguration> predicate)` |
| `void` | `TableOperationsImpl.rename(String oldTableName, String newTableName)` |
| `static List<KeyValue>` | `ThriftScanner.scan(ClientContext context, ThriftScanner.ScanState scanState, long timeOut)` |
| `void` | `TableOperationsImpl.setLocalityGroups(String tableName, Map<String,Set<org.apache.hadoop.io.Text>> groups)` |
| `void` | `TableOperationsImpl.setSamplerConfiguration(String tableName, SamplerConfiguration samplerConfiguration)` |
| `Set<Range>` | `TableOperationsImpl.splitRangeByTablets(String tableName, Range range, int maxSplits)` |
| `boolean` | `TableOperationsImpl.testClassLoad(String tableName, String className, String asTypeName)` |
| `void` | `Writer.update(Mutation m)` |
| Modifier and Type | Method and Description |
|---|---|
| `static List<KeyExtent>` | `BulkImport.findOverlappingTablets(BulkImport.KeyExtentCache extentCache, FileSKVIterator reader)` |
| `static List<KeyExtent>` | `BulkImport.findOverlappingTablets(ClientContext context, BulkImport.KeyExtentCache extentCache, org.apache.hadoop.fs.Path file, org.apache.hadoop.fs.FileSystem fs, com.google.common.cache.Cache<String,Long> fileLenCache, CryptoService cs)` |
| `void` | `BulkImport.load()` |
| `KeyExtent` | `BulkImport.KeyExtentCache.lookup(org.apache.hadoop.io.Text row)` |
| Modifier and Type | Method and Description |
|---|---|
| `static Map<String,Map<KeyExtent,List<Range>>>` | `InputConfigurator.binOffline(Table.ID tableId, List<Range> ranges, ClientContext context)` |
| Modifier and Type | Method and Description |
|---|---|
| `abstract void` | `MetadataServicer.getTabletLocations(SortedMap<KeyExtent,String> tablets)`. Populate the provided data structure with the known tablets for the table being serviced. |
Copyright © 2011–2019 The Apache Software Foundation. All rights reserved.