All Implemented Interfaces: org.apache.hadoop.hbase.client.Attributes

public class BigtableExtendedScan
extends org.apache.hadoop.hbase.client.Scan

A Bigtable extension of Scan. The Cloud Bigtable ReadRows API
allows an arbitrary set of ranges and row keys as part of a scan. An instance of
BigtableExtendedScan can be used in Table.getScanner(Scan).

Constructor | Description |
---|---|
BigtableExtendedScan() |
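As a sketch of how these pieces fit together (assuming the bigtable-hbase client on the classpath; the connection, table name, and row keys below are placeholders, not part of this API):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import com.google.cloud.bigtable.hbase.BigtableExtendedScan;

public class ExtendedScanExample {

  // `connection` is assumed to be an already-open, Bigtable-backed HBase Connection.
  static void printMatchingRows(Connection connection) throws Exception {
    BigtableExtendedScan scan = new BigtableExtendedScan();
    // A single ReadRows request can carry several disjoint ranges plus
    // individual row keys -- something a plain Scan cannot express.
    scan.addRange(Bytes.toBytes("user-100"), Bytes.toBytes("user-200"));
    scan.addRange(Bytes.toBytes("user-500"), Bytes.toBytes("user-600"));
    scan.addRowKey(Bytes.toBytes("user-999"));

    try (Table table = connection.getTable(TableName.valueOf("my-table"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        System.out.println(Bytes.toString(result.getRow()));
      }
    }
  }
}
```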
Modifier and Type | Method | Description |
---|---|---|
void | addRange(byte[] startRow, byte[] stopRow) | Adds a range to scan. |
void | addRange(com.google.bigtable.v2.RowRange range) | Adds an arbitrary RowRange to the request. |
void | addRangeWithPrefix(byte[] prefix) | Creates a RowRange based on a prefix. |
void | addRowKey(byte[] rowKey) | Adds a single row key to the output. |
com.google.bigtable.v2.RowSet | getRowSet() | |
org.apache.hadoop.hbase.client.Scan | setRowPrefixFilter(byte[] rowPrefix) | |
org.apache.hadoop.hbase.client.Scan | setStartRow(byte[] startRow) | |
org.apache.hadoop.hbase.client.Scan | setStopRow(byte[] stopRow) | |
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from class org.apache.hadoop.hbase.client.Operation:
toJSON, toJSON, toMap, toString, toString

Methods inherited from class org.apache.hadoop.hbase.client.OperationWithAttributes:
getAttribute, getAttributeSize, getAttributesMap, getId, getPriority

Methods inherited from class org.apache.hadoop.hbase.client.Query:
doLoadColumnFamiliesOnDemand, getACL, getAuthorizations, getColumnFamilyTimeRange, getConsistency, getIsolationLevel, getLoadColumnFamiliesOnDemandValue, getReplicaId

Methods inherited from class org.apache.hadoop.hbase.client.Scan:
addColumn, addFamily, createScanFromCursor, getAllowPartialResults, getBatch, getCacheBlocks, getCaching, getFamilies, getFamilyMap, getFilter, getFingerprint, getLimit, getMaxResultSize, getMaxResultsPerColumnFamily, getMaxVersions, getReadType, getRowOffsetPerColumnFamily, getScanMetrics, getStartRow, getStopRow, getTimeRange, hasFamilies, hasFilter, includeStartRow, includeStopRow, isGetScan, isNeedCursorResult, isRaw, isReversed, isScanMetricsEnabled, isSmall, numFamilies, setACL, setACL, setAllowPartialResults, setAttribute, setAuthorizations, setBatch, setCacheBlocks, setCaching, setColumnFamilyTimeRange, setConsistency, setFamilyMap, setFilter, setId, setIsolationLevel, setLimit, setLoadColumnFamiliesOnDemand, setMaxResultSize, setMaxResultsPerColumnFamily, setMaxVersions, setMaxVersions, setNeedCursorResult, setOneRowLimit, setPriority, setRaw, setReadType, setReplicaId, setReversed, setRowOffsetPerColumnFamily, setScanMetricsEnabled, setSmall, setTimeRange, setTimeStamp, toMap, withStartRow, withStartRow, withStopRow, withStopRow
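Note that the single-range setters inherited from Scan are intentionally disabled on this class; as the method details describe, they throw UnsupportedOperationException in favor of the additive range methods. A minimal sketch of that behavior (the keys and prefix are placeholders):

```java
import org.apache.hadoop.hbase.util.Bytes;
import com.google.cloud.bigtable.hbase.BigtableExtendedScan;

public class RangeSetters {
  public static void main(String[] args) {
    BigtableExtendedScan scan = new BigtableExtendedScan();

    // setStartRow, setStopRow, and setRowPrefixFilter throw to avoid
    // confusion with multi-range scans.
    try {
      scan.setStartRow(Bytes.toBytes("a"));
    } catch (UnsupportedOperationException expected) {
      // Use the additive equivalents instead:
      scan.addRange(Bytes.toBytes("a"), Bytes.toBytes("b"));
      scan.addRangeWithPrefix(Bytes.toBytes("order#2024-"));
    }
  }
}
```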
public org.apache.hadoop.hbase.client.Scan setStartRow(byte[] startRow)

Overrides:
setStartRow in class org.apache.hadoop.hbase.client.Scan

Throws:
UnsupportedOperationException - to avoid confusion. Use addRange(byte[], byte[]) instead.

public org.apache.hadoop.hbase.client.Scan setStopRow(byte[] stopRow)
Overrides:
setStopRow in class org.apache.hadoop.hbase.client.Scan

Throws:
UnsupportedOperationException - to avoid confusion. Use addRange(byte[], byte[]) instead.

public org.apache.hadoop.hbase.client.Scan setRowPrefixFilter(byte[] rowPrefix)
Overrides:
setRowPrefixFilter in class org.apache.hadoop.hbase.client.Scan

Throws:
UnsupportedOperationException - to avoid confusion. Use addRangeWithPrefix(byte[]) instead.

public void addRangeWithPrefix(byte[] prefix)
Creates a RowRange based on a prefix. This is similar to Scan.setRowPrefixFilter(byte[]).

Parameters:
prefix -

public void addRange(byte[] startRow, byte[] stopRow)
Adds a range to scan. This is similar to calling a combination of Scan.setStartRow(byte[]) and Scan.setStopRow(byte[]). Other ranges can be constructed by creating a RowRange and calling addRange(RowRange).

Parameters:
startRow -
stopRow -

public void addRange(com.google.bigtable.v2.RowRange range)
Adds an arbitrary RowRange to the request. Ranges can have empty start keys or end keys. Ranges can also be inclusive/closed or exclusive/open. The default range is inclusive start and exclusive end.

Parameters:
range -

public void addRowKey(byte[] rowKey)
Adds a single row key to the output.

Parameters:
rowKey -

public com.google.bigtable.v2.RowSet getRowSet()
RowSet
built until now, which includes lists of individual keys and row ranges.
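For instance, an explicit RowRange with a closed start key and an open end key can be built with the generated protobuf builders and passed to addRange(RowRange); the key values below are placeholders:

```java
import com.google.bigtable.v2.RowRange;
import com.google.bigtable.v2.RowSet;
import com.google.protobuf.ByteString;
import com.google.cloud.bigtable.hbase.BigtableExtendedScan;

public class RowSetExample {
  public static void main(String[] args) {
    // Inclusive/closed start key, exclusive/open end key -- the same
    // combination addRange(byte[], byte[]) produces by default.
    RowRange range = RowRange.newBuilder()
        .setStartKeyClosed(ByteString.copyFromUtf8("row-a"))
        .setEndKeyOpen(ByteString.copyFromUtf8("row-z"))
        .build();

    BigtableExtendedScan scan = new BigtableExtendedScan();
    scan.addRange(range);

    // getRowSet() exposes everything accumulated so far.
    RowSet rowSet = scan.getRowSet();
    System.out.println(rowSet.getRowRangesCount());
  }
}
```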