Deprecated Methods
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(String, String)
Use addPeer(String, ReplicationPeerConfig, Map) instead. |
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(String, String, String)
|
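Both String-based addPeer overloads above give way to the ReplicationPeerConfig form. A minimal migration sketch, assuming an HBase 0.98+ client on the classpath; the peer id, cluster key, and table/column-family names are placeholders:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
    import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

    public class AddPeerMigration {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        ReplicationAdmin admin = new ReplicationAdmin(conf);
        try {
          // was: admin.addPeer("1", "zk1:2181:/hbase");
          ReplicationPeerConfig peerConfig = new ReplicationPeerConfig();
          peerConfig.setClusterKey("zk1:2181:/hbase");
          // Optionally restrict replication to specific tables/column families.
          Map<TableName, Collection<String>> tableCfs =
              new HashMap<TableName, Collection<String>>();
          Collection<String> cfs = new ArrayList<String>();
          cfs.add("cf1");
          tableCfs.put(TableName.valueOf("mytable"), cfs);
          admin.addPeer("1", peerConfig, tableCfs);
        } finally {
          admin.close();
        }
      }
    }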
org.apache.hadoop.hbase.client.HTable.batch(List<? extends Row>)
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use HTable.batch(List, Object[]) instead. |
org.apache.hadoop.hbase.client.HTableInterface.batch(List<? extends Row>)
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use HTableInterface.batch(List, Object[]) instead. |
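A sketch of the replacement form for both batch entries above; the results array stays populated even when an action throws, so partial results survive. Table, row, and cell names are placeholders:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Row;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BatchWithResults {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, TableName.valueOf("mytable"));
        try {
          List<Row> actions = new ArrayList<Row>();
          actions.add(new Put(Bytes.toBytes("row1"))
              .add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
          actions.add(new Get(Bytes.toBytes("row2")));
          Object[] results = new Object[actions.size()];
          try {
            table.batch(actions, results);
          } finally {
            // Filled slots remain inspectable even if batch() threw.
            for (Object r : results) {
              System.out.println(r);
            }
          }
        } finally {
          table.close();
        }
      }
    }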
org.apache.hadoop.hbase.client.HTable.batchCallback(List<? extends Row>, Batch.Callback)
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use
HTable.batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)
instead. |
org.apache.hadoop.hbase.client.HTableInterface.batchCallback(List<? extends Row>, Batch.Callback)
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use
HTableInterface.batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)
instead. |
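Likewise for the callback variants, a sketch of batchCallback with an explicit results array; the callback fires once per action as results arrive, and the printed output is illustrative:

    import java.util.List;

    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Row;
    import org.apache.hadoop.hbase.client.coprocessor.Batch;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BatchCallbackExample {
      static void runWithCallback(HTable table, List<Row> actions) throws Exception {
        Object[] results = new Object[actions.size()];
        table.batchCallback(actions, results, new Batch.Callback<Object>() {
          @Override
          public void update(byte[] region, byte[] row, Object result) {
            System.out.println("row " + Bytes.toString(row) + " -> " + result);
          }
        });
      }
    }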
org.apache.hadoop.hbase.client.HConnection.clearRegionCache(byte[])
|
org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteException)
Use RemoteException.unwrapRemoteException() instead.
In fact we should look into deprecating this whole class - St.Ack 2010929 |
org.apache.hadoop.hbase.client.HConnectionManager.deleteAllConnections()
kept for backward compatibility, but the behavior is broken. HBASE-8983 |
org.apache.hadoop.hbase.client.HConnectionManager.deleteAllConnections(boolean)
|
org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(Configuration)
|
org.apache.hadoop.hbase.client.HConnectionManager.deleteStaleConnection(HConnection)
|
org.apache.hadoop.hbase.filter.FilterWrapper.filterRow(List<KeyValue>)
|
org.apache.hadoop.hbase.filter.FilterList.filterRow(List<KeyValue>)
|
org.apache.hadoop.hbase.filter.Filter.filterRow(List<KeyValue>)
|
org.apache.hadoop.hbase.filter.FilterBase.filterRow(List<KeyValue>)
|
org.apache.hadoop.hbase.client.Query.getACLStrategy()
No effect |
org.apache.hadoop.hbase.client.Mutation.getACLStrategy()
No effect |
org.apache.hadoop.hbase.client.HConnection.getAdmin(ServerName, boolean)
You can pass the master flag, but nothing special is done with it. |
org.apache.hadoop.hbase.zookeeper.ZKUtil.getChildDataAndWatchForNewChildren(ZooKeeperWatcher, String)
Unused |
org.apache.hadoop.hbase.client.Result.getColumn(byte[], byte[])
Use Result.getColumnCells(byte[], byte[]) instead. |
org.apache.hadoop.hbase.client.Result.getColumnLatest(byte[], byte[])
Use Result.getColumnLatestCell(byte[], byte[]) instead. |
org.apache.hadoop.hbase.client.Result.getColumnLatest(byte[], int, int, byte[], int, int)
Use Result.getColumnLatestCell(byte[], int, int, byte[], int, int) instead. |
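A sketch of the Cell-based accessors that replace the three getColumn/getColumnLatest entries above; the family and qualifier are supplied by the caller:

    import java.util.List;

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ResultCellAccess {
      static void printColumn(Result result, byte[] family, byte[] qualifier) {
        // was: result.getColumn(family, qualifier)
        List<Cell> versions = result.getColumnCells(family, qualifier);
        for (Cell cell : versions) {
          System.out.println(Bytes.toString(CellUtil.cloneValue(cell)));
        }
        // was: result.getColumnLatest(family, qualifier)
        Cell latest = result.getColumnLatestCell(family, qualifier);
        if (latest != null) {
          System.out.println("latest: " + Bytes.toString(CellUtil.cloneValue(latest)));
        }
      }
    }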
org.apache.hadoop.hbase.client.HTable.getConnection()
This method will be changed from public to package protected. |
org.apache.hadoop.hbase.client.HConnectionManager.getConnection(Configuration)
|
org.apache.hadoop.hbase.client.HConnection.getCurrentNrHRS()
This method will be changed from public to package protected. |
org.apache.hadoop.hbase.HColumnDescriptor.getDataBlockEncodingOnDisk()
|
org.apache.hadoop.hbase.client.Mutation.getFamilyMap()
use Mutation.getFamilyCellMap() instead. |
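A sketch of iterating a mutation's cells through getFamilyCellMap(), the Cell-based replacement:

    import java.util.List;
    import java.util.Map;
    import java.util.NavigableMap;

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FamilyCellMapExample {
      static void dump(Put put) {
        // was: put.getFamilyMap()
        NavigableMap<byte[], List<Cell>> map = put.getFamilyCellMap();
        for (Map.Entry<byte[], List<Cell>> entry : map.entrySet()) {
          System.out.println(Bytes.toString(entry.getKey()) + " -> "
              + entry.getValue().size() + " cell(s)");
        }
      }
    }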
org.apache.hadoop.hbase.client.HConnection.getHTableDescriptor(byte[])
|
org.apache.hadoop.hbase.client.HConnection.getHTableDescriptors(List<String>)
|
org.apache.hadoop.hbase.client.HConnection.getKeepAliveMasterService()
Since 0.96.0 |
org.apache.hadoop.hbase.filter.FilterWrapper.getNextKeyHint(KeyValue)
|
org.apache.hadoop.hbase.filter.FilterList.getNextKeyHint(KeyValue)
|
org.apache.hadoop.hbase.filter.Filter.getNextKeyHint(KeyValue)
|
org.apache.hadoop.hbase.filter.FilterBase.getNextKeyHint(KeyValue)
|
org.apache.hadoop.hbase.HTableDescriptor.getOwnerString()
|
org.apache.hadoop.hbase.client.HConnection.getRegionLocation(byte[], byte[], boolean)
|
org.apache.hadoop.hbase.client.HTableInterface.getRowOrBefore(byte[], byte[])
As of version 0.92 this method is deprecated without replacement.
getRowOrBefore is used internally to find entries in hbase:meta and, to be
efficient, makes various assumptions about the table that hold for hbase:meta
but not in general. |
org.apache.hadoop.hbase.client.HTable.getScannerCaching()
Use Scan.setCaching(int) and Scan.getCaching() |
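A sketch of per-scan caching, which also covers HTable.setScannerCaching(int) later in this list; the table name and caching value are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class ScanCachingExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, TableName.valueOf("mytable"));
        try {
          Scan scan = new Scan();
          scan.setCaching(500); // rows fetched per RPC; was HTable.setScannerCaching
          ResultScanner scanner = table.getScanner(scan);
          try {
            for (Result r : scanner) {
              System.out.println(r);
            }
          } finally {
            scanner.close();
          }
        } finally {
          table.close();
        }
      }
    }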
org.apache.hadoop.hbase.ClusterStatus.getServerInfo()
Use ClusterStatus.getServers() |
org.apache.hadoop.hbase.HTableDescriptor.getTableDir(Path, byte[])
|
org.apache.hadoop.hbase.client.ClientScanner.getTableName()
Since 0.96.0; use ClientScanner.getTable() |
org.apache.hadoop.hbase.HRegionInfo.getTableName()
Since 0.96.0; use HRegionInfo.getTable() |
org.apache.hadoop.hbase.HRegionInfo.getTableName(byte[])
Since 0.96.0; use HRegionInfo.getTable(byte[]) |
org.apache.hadoop.hbase.client.HBaseAdmin.getTableNames()
|
org.apache.hadoop.hbase.client.HConnection.getTableNames()
|
org.apache.hadoop.hbase.client.HBaseAdmin.getTableNames(Pattern)
|
org.apache.hadoop.hbase.client.HBaseAdmin.getTableNames(String)
|
org.apache.hadoop.hbase.coprocessor.ColumnInterpreter.getValue(byte[], byte[], KeyValue)
|
org.apache.hadoop.hbase.HRegionInfo.getVersion()
HRI is no longer a VersionedWritable |
org.apache.hadoop.hbase.client.HTable.getWriteBuffer()
Since 0.96. This is an internal buffer that should not be read from nor written to. |
org.apache.hadoop.hbase.client.Mutation.getWriteToWAL()
Use Mutation.getDurability() instead. |
org.apache.hadoop.hbase.security.access.AccessControlClient.grant(Configuration, TableName, String, byte[], byte[], AccessControlProtos.Permission.Action...)
Use AccessControlClient.grant(Configuration, TableName, String, byte[], byte[], Permission.Action...) instead. |
org.apache.hadoop.hbase.client.HTable.incrementColumnValue(byte[], byte[], byte[], long, boolean)
Use HTable.incrementColumnValue(byte[], byte[], byte[], long, Durability) |
org.apache.hadoop.hbase.client.HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, boolean)
Use HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability) |
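A sketch of the Durability-based overload for both incrementColumnValue entries above; row, family, and qualifier are placeholders:

    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class IncrementExample {
      static long bump(HTable table) throws Exception {
        // was: table.incrementColumnValue(row, family, qualifier, 1L, true)
        return table.incrementColumnValue(
            Bytes.toBytes("row1"), Bytes.toBytes("cf"), Bytes.toBytes("counter"),
            1L, Durability.SYNC_WAL);
      }
    }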
org.apache.hadoop.hbase.HTableDescriptor.isDeferredLogFlush()
|
org.apache.hadoop.hbase.client.HConnection.isTableAvailable(byte[])
|
org.apache.hadoop.hbase.client.HConnection.isTableAvailable(byte[], byte[][])
|
org.apache.hadoop.hbase.client.HConnection.isTableDisabled(byte[])
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(byte[])
use HBaseAdmin.isTableEnabled(byte[]) |
org.apache.hadoop.hbase.client.HConnection.isTableEnabled(byte[])
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(Configuration, byte[])
use HBaseAdmin.isTableEnabled(byte[]) |
org.apache.hadoop.hbase.client.HTable.isTableEnabled(Configuration, String)
use HBaseAdmin.isTableEnabled(byte[]) |
org.apache.hadoop.hbase.client.HTable.isTableEnabled(Configuration, TableName)
use HBaseAdmin.isTableEnabled(TableName) |
org.apache.hadoop.hbase.client.HTable.isTableEnabled(String)
use HBaseAdmin.isTableEnabled(byte[]) |
org.apache.hadoop.hbase.client.HTable.isTableEnabled(TableName)
use HBaseAdmin.isTableEnabled(TableName) |
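A sketch of the HBaseAdmin-based check these entries point to; the table name is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class TableEnabledCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          // was: HTable.isTableEnabled(conf, "mytable")
          System.out.println(admin.isTableEnabled(TableName.valueOf("mytable")));
        } finally {
          admin.close();
        }
      }
    }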
org.apache.hadoop.hbase.client.Result.list()
as of 0.96, use Result.listCells() |
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.listPeers()
use ReplicationAdmin.listPeerConfigs() |
org.apache.hadoop.hbase.client.HConnection.locateRegion(byte[], byte[])
|
org.apache.hadoop.hbase.client.HConnection.locateRegions(byte[])
|
org.apache.hadoop.hbase.client.HConnection.locateRegions(byte[], boolean, boolean)
|
org.apache.hadoop.hbase.security.token.TokenUtil.obtainAndCacheToken(Configuration, UserGroupInformation)
Replaced by TokenUtil.obtainAndCacheToken(HConnection, User) |
org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(Configuration)
Replaced by TokenUtil.obtainToken(HConnection) |
org.apache.hadoop.hbase.security.token.TokenUtil.obtainTokenForJob(Configuration, UserGroupInformation, Job)
Replaced by TokenUtil.obtainTokenForJob(HConnection, User, Job) |
org.apache.hadoop.hbase.security.token.TokenUtil.obtainTokenForJob(JobConf, UserGroupInformation)
Replaced by TokenUtil.obtainTokenForJob(HConnection, JobConf, User) |
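A sketch of the connection-based replacement for the MapReduce case, assuming a secured cluster; obtaining the caller's identity via User.getCurrent() is one option, assumed here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.security.User;
    import org.apache.hadoop.hbase.security.token.TokenUtil;
    import org.apache.hadoop.mapreduce.Job;

    public class TokenMigration {
      static void addToken(Job job) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HConnection connection = HConnectionManager.createConnection(conf);
        try {
          // was: TokenUtil.obtainTokenForJob(conf, ugi, job)
          TokenUtil.obtainTokenForJob(connection, User.getCurrent(), job);
        } finally {
          connection.close();
        }
      }
    }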
org.apache.hadoop.hbase.client.HConnectionManager.HConnectionImplementation.processBatch(List<? extends Row>, byte[], ExecutorService, Object[])
|
org.apache.hadoop.hbase.client.HConnection.processBatch(List<? extends Row>, byte[], ExecutorService, Object[])
|
org.apache.hadoop.hbase.client.HConnectionManager.HConnectionImplementation.processBatch(List<? extends Row>, TableName, ExecutorService, Object[])
|
org.apache.hadoop.hbase.client.HConnection.processBatch(List<? extends Row>, TableName, ExecutorService, Object[])
since 0.96 - Use HTableInterface.batch(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[]) instead |
org.apache.hadoop.hbase.client.HConnectionManager.HConnectionImplementation.processBatchCallback(List<? extends Row>, byte[], ExecutorService, Object[], Batch.Callback)
|
org.apache.hadoop.hbase.client.HConnection.processBatchCallback(List<? extends Row>, byte[], ExecutorService, Object[], Batch.Callback)
|
org.apache.hadoop.hbase.client.HConnectionManager.HConnectionImplementation.processBatchCallback(List<? extends Row>, TableName, ExecutorService, Object[], Batch.Callback)
since 0.96 - Use HTable.processBatchCallback(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead |
org.apache.hadoop.hbase.client.HConnection.processBatchCallback(List<? extends Row>, TableName, ExecutorService, Object[], Batch.Callback)
since 0.96 - Use HTableInterface.batchCallback(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead |
org.apache.hadoop.hbase.client.HTableMultiplexer.put(byte[], List<Put>)
|
org.apache.hadoop.hbase.client.HTableMultiplexer.put(byte[], Put)
|
org.apache.hadoop.hbase.client.HTableMultiplexer.put(byte[], Put, int)
|
org.apache.hadoop.hbase.client.HTablePool.putTable(HTableInterface)
|
org.apache.hadoop.hbase.client.Result.raw()
as of 0.96, use Result.rawCells() |
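A sketch of the Cell-based replacements for raw() here and list() earlier in this list:

    import java.util.List;

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ResultIteration {
      static void print(Result result) {
        List<Cell> cells = result.listCells(); // was: result.list(); null when empty
        if (cells != null) {
          for (Cell cell : cells) {
            System.out.println(Bytes.toString(CellUtil.cloneQualifier(cell)));
          }
        }
        Cell[] raw = result.rawCells();        // was: result.raw()
        System.out.println(raw.length + " cell(s)");
      }
    }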
org.apache.hadoop.hbase.HTableDescriptor.readFields(DataInput)
Writables are going away. Use the protobuf-based HTableDescriptor.parseFrom(byte[]) instead. |
org.apache.hadoop.hbase.HRegionInfo.readFields(DataInput)
Use protobuf deserialization instead. |
org.apache.hadoop.hbase.HColumnDescriptor.readFields(DataInput)
Writables are going away. Use the protobuf-based HColumnDescriptor.parseFrom(byte[]) instead. |
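A sketch of the protobuf round trip that replaces Writable (de)serialization; the same pattern applies to HColumnDescriptor and HRegionInfo, whose write(DataOutput) counterparts close out this list:

    import org.apache.hadoop.hbase.HTableDescriptor;

    public class DescriptorSerialization {
      static HTableDescriptor roundTrip(HTableDescriptor htd) throws Exception {
        byte[] bytes = htd.toByteArray();          // was: htd.write(DataOutput)
        return HTableDescriptor.parseFrom(bytes);  // was: htd.readFields(DataInput)
      }
    }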
org.apache.hadoop.hbase.client.HConnection.relocateRegion(byte[], byte[])
|
org.apache.hadoop.hbase.security.access.AccessControlClient.revoke(Configuration, String, TableName, byte[], byte[], AccessControlProtos.Permission.Action...)
Use AccessControlClient.revoke(Configuration, TableName, String, byte[], byte[], Permission.Action...) instead |
org.apache.hadoop.hbase.client.Query.setACLStrategy(boolean)
No effect |
org.apache.hadoop.hbase.client.Mutation.setACLStrategy(boolean)
No effect |
org.apache.hadoop.hbase.client.HTable.setAutoFlush(boolean)
|
org.apache.hadoop.hbase.client.HTableInterface.setAutoFlush(boolean)
In 0.96. When called as setAutoFlush(false), this method also sets
clearBufferOnFail to true, which is unexpected but kept for historical reasons.
Replace it with setAutoFlush(false, false) if that is exactly what you want, or with
HTableInterface.setAutoFlushTo(boolean) for all other cases. |
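A sketch of the two explicit replacements:

    import org.apache.hadoop.hbase.client.HTableInterface;

    public class AutoFlushMigration {
      static void keepHistoricalBehavior(HTableInterface table) {
        // was: table.setAutoFlush(false); // silently also set clearBufferOnFail
        table.setAutoFlush(false, false);  // both flags stated explicitly
      }

      static void setFlushModeOnly(HTableInterface table) {
        table.setAutoFlushTo(false);       // for all other cases
      }
    }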
org.apache.hadoop.hbase.HTableDescriptor.setDeferredLogFlush(boolean)
|
org.apache.hadoop.hbase.HColumnDescriptor.setEncodeOnDisk(boolean)
|
org.apache.hadoop.hbase.client.Mutation.setFamilyMap(NavigableMap<byte[], List<KeyValue>>)
use Mutation.setFamilyCellMap(NavigableMap) instead. |
org.apache.hadoop.hbase.HColumnDescriptor.setKeepDeletedCells(boolean)
use HColumnDescriptor.setKeepDeletedCells(KeepDeletedCells) |
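A sketch using the KeepDeletedCells enum; the column family name is a placeholder:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.KeepDeletedCells;

    public class KeepDeletedCellsExample {
      static HColumnDescriptor family() {
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        // was: cf.setKeepDeletedCells(true)
        cf.setKeepDeletedCells(KeepDeletedCells.TRUE); // TRUE, FALSE, or TTL
        return cf;
      }
    }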
org.apache.hadoop.hbase.HTableDescriptor.setName(byte[])
|
org.apache.hadoop.hbase.HTableDescriptor.setName(TableName)
|
org.apache.hadoop.hbase.HTableDescriptor.setOwner(User)
|
org.apache.hadoop.hbase.HTableDescriptor.setOwnerString(String)
|
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.setPeerTableCFs(String, String)
use ReplicationAdmin.setPeerTableCFs(String, Map) |
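A sketch of the Map-based form; table and column-family names are placeholders, and using a null collection to mean "all column families" is an assumption here:

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;

    public class SetPeerTableCFsExample {
      static void restrictPeer(ReplicationAdmin admin) throws Exception {
        // was: admin.setPeerTableCFs("1", "mytable:cf1;othertable")
        Map<TableName, Collection<String>> tableCfs =
            new HashMap<TableName, Collection<String>>();
        tableCfs.put(TableName.valueOf("mytable"), Arrays.asList("cf1"));
        tableCfs.put(TableName.valueOf("othertable"), null); // assumed: all CFs
        admin.setPeerTableCFs("1", tableCfs);
      }
    }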
org.apache.hadoop.hbase.client.HTable.setScannerCaching(int)
Use Scan.setCaching(int) |
org.apache.hadoop.hbase.client.Mutation.setWriteToWAL(boolean)
Use Mutation.setDurability(Durability) instead. |
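A sketch of the Durability-based replacement, which also covers getWriteToWAL() earlier in this list:

    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DurabilityExample {
      static Put makePut() {
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        // was: put.setWriteToWAL(false)
        put.setDurability(Durability.SKIP_WAL);
        // was: put.getWriteToWAL()
        System.out.println(put.getDurability());
        return put;
      }
    }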
org.apache.hadoop.hbase.filter.FilterWrapper.transform(KeyValue)
|
org.apache.hadoop.hbase.filter.FilterList.transform(KeyValue)
|
org.apache.hadoop.hbase.filter.Filter.transform(KeyValue)
|
org.apache.hadoop.hbase.filter.FilterBase.transform(KeyValue)
|
org.apache.hadoop.hbase.client.HConnection.updateCachedLocations(byte[], byte[], Object, HRegionLocation)
|
org.apache.hadoop.hbase.zookeeper.ZKUtil.updateExistingNodeData(ZooKeeperWatcher, String, byte[], int)
Unused |
org.apache.hadoop.hbase.catalog.CatalogTracker.waitForMetaServerConnection(long)
Use CatalogTracker.getMetaServerConnection(long) |
org.apache.hadoop.hbase.HTableDescriptor.write(DataOutput)
Writables are going away.
Use MessageLite.toByteArray() instead. |
org.apache.hadoop.hbase.HRegionInfo.write(DataOutput)
Use protobuf serialization instead. See HRegionInfo.toByteArray() and
HRegionInfo.toDelimitedByteArray() |
org.apache.hadoop.hbase.HColumnDescriptor.write(DataOutput)
Writables are going away. Use HColumnDescriptor.toByteArray() instead. |