Class org.apache.hadoop.hbase.spark.HBaseContext


class HBaseContext extends Serializable with Logging

HBaseContext is a façade for HBase operations like bulk put, get, increment, delete, and scan.

HBaseContext takes responsibility for disseminating the configuration information to the worker nodes and for managing the life cycle of Connections.

Annotations
@Public()
Linear Supertypes
Logging, Serializable, AnyRef, Any

Instance Constructors

  1. new HBaseContext(sc: SparkContext, config: Configuration, tmpHdfsConfgFile: String = null)

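    Example (a minimal sketch; the Spark setup and app name are hypothetical, and HBaseConfiguration.create() assumes hbase-site.xml is on the classpath; the member examples below reuse sc and hbaseContext):

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.spark.HBaseContext
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("HBaseContextExample"))
      val conf = HBaseConfiguration.create()

      // tmpHdfsConfgFile defaults to null, so the two-argument form suffices
      val hbaseContext = new HBaseContext(sc, conf)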

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. var appliedCredentials: Boolean

  5. def applyCreds[T](): Unit

  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. val broadcastedConf: Broadcast[SerializableWritable[Configuration]]

  8. def bulkDelete[T](rdd: RDD[T], tableName: TableName, f: (T) ⇒ Delete, batchSize: Integer): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate Deletes from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to delete from

    f

    Function to convert a value in the RDD to an HBase Delete

    batchSize

    The number of Deletes to batch before sending to HBase
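    Example (a minimal sketch, reusing the sc and hbaseContext from the constructor example; the table "t1" and the row keys are hypothetical):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.Delete
      import org.apache.hadoop.hbase.util.Bytes

      // Hypothetical RDD of row keys to remove
      val deleteKeys = sc.parallelize(Seq(Bytes.toBytes("row1"), Bytes.toBytes("row2")))

      hbaseContext.bulkDelete[Array[Byte]](
        deleteKeys,
        TableName.valueOf("t1"),
        rowKey => new Delete(rowKey), // one Delete per record
        4)                            // send Deletes in batches of 4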

  9. def bulkGet[T, U](tableName: TableName, batchSize: Integer, rdd: RDD[T], makeGet: (T) ⇒ Get, convertResult: (Result) ⇒ U)(implicit arg0: ClassTag[U]): RDD[U]

    A simple abstraction over the HBaseContext.mapPartition method.

    It allows a user to take an RDD and generate a new RDD based on Gets and the results they bring back from HBase.

    tableName

    The name of the table to get from

    batchSize

    The number of Gets to batch before sending to HBase

    rdd

    Original RDD with data to iterate over

    makeGet

    Function to convert a value in the RDD to an HBase Get

    convertResult

    This will convert the HBase Result object to whatever the user wants to put in the resulting RDD

    returns

    New RDD that is created by the Gets to HBase
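    Example (a minimal sketch; the table "t1" and row keys are hypothetical, and each Result is reduced to its row key as a String):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.{Get, Result}
      import org.apache.hadoop.hbase.util.Bytes

      val getKeys = sc.parallelize(Seq(Bytes.toBytes("row1"), Bytes.toBytes("row2")))

      val keysFound = hbaseContext.bulkGet[Array[Byte], String](
        TableName.valueOf("t1"),
        2,                                                  // batch size per fetch
        getKeys,
        rowKey => new Get(rowKey),                          // makeGet
        (result: Result) => Bytes.toString(result.getRow))  // convertResult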

  10. def bulkPut[T](rdd: RDD[T], tableName: TableName, f: (T) ⇒ Put): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate Puts from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to put into

    f

    Function to convert a value in the RDD to an HBase Put
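    Example (a minimal sketch; the table "t1" and the column family "cf" are hypothetical):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.Put
      import org.apache.hadoop.hbase.util.Bytes

      // Hypothetical records of (rowKey, qualifier, value)
      val putRecords = sc.parallelize(Seq(
        (Bytes.toBytes("row1"), Bytes.toBytes("a"), Bytes.toBytes("v1")),
        (Bytes.toBytes("row2"), Bytes.toBytes("b"), Bytes.toBytes("v2"))))

      hbaseContext.bulkPut[(Array[Byte], Array[Byte], Array[Byte])](
        putRecords,
        TableName.valueOf("t1"),
        rec => new Put(rec._1).addColumn(Bytes.toBytes("cf"), rec._2, rec._3))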

  11. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @HotSpotIntrinsicCandidate() @throws( ... )
  12. def close(): Unit

  13. val config: Configuration

  14. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  16. def foreachPartition[T](rdd: RDD[T], f: (Iterator[T], Connection) ⇒ Unit): Unit

    A simple enrichment of the traditional Spark RDD foreachPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method.

    rdd

    Original RDD with data to iterate over

    f

    Function given an iterator over the RDD values and a Connection object to interact with HBase
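    Example (a minimal sketch, reusing the hypothetical putRecords RDD from the bulkPut example; a BufferedMutator batches the writes for each partition):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.Put
      import org.apache.hadoop.hbase.util.Bytes

      hbaseContext.foreachPartition(putRecords,
        (it, connection) => {
          val mutator = connection.getBufferedMutator(TableName.valueOf("t1"))
          it.foreach { rec =>
            mutator.mutate(new Put(rec._1).addColumn(Bytes.toBytes("cf"), rec._2, rec._3))
          }
          mutator.flush()
          mutator.close() // close the mutator, but never the Connection itself
        })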

  17. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
    Annotations
    @HotSpotIntrinsicCandidate()
  18. def hashCode(): Int

    Definition Classes
    AnyRef → Any
    Annotations
    @HotSpotIntrinsicCandidate()
  19. def hbaseMapPartition[K, U](configBroadcast: Broadcast[SerializableWritable[Configuration]], it: Iterator[K], mp: (Iterator[K], Connection) ⇒ Iterator[U]): Iterator[U]

    Underlying wrapper for all mapPartition functions in HBaseContext.

  20. def hbaseRDD(tableName: TableName, scans: Scan): RDD[(ImmutableBytesWritable, Result)]

    An overloaded version of HBaseContext.hbaseRDD that defines the type of the resulting RDD.

    tableName

    the name of the table to scan

    scans

    the HBase Scan object to use to read data from HBase

    returns

    New RDD with results from scan
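    Example (a minimal sketch; the table "t1" and the caching value are hypothetical):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.Scan

      val scan = new Scan()
      scan.setCaching(100) // rows fetched per RPC; tune for the workload

      // RDD of (row key, Result) pairs, one per row the scan returns
      val scanRdd = hbaseContext.hbaseRDD(TableName.valueOf("t1"), scan)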

  21. def hbaseRDD[U](tableName: TableName, scan: Scan, f: ((ImmutableBytesWritable, Result)) ⇒ U)(implicit arg0: ClassTag[U]): RDD[U]

    This function will use the native HBase TableInputFormat with the given Scan object to generate a new RDD.

    tableName

    the name of the table to scan

    scan

    the HBase Scan object to use to read data from HBase

    f

    function to convert a Result object from HBase into what the user wants in the final generated RDD

    returns

    New RDD with results from the scan
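    Example (a minimal sketch; each (ImmutableBytesWritable, Result) pair is reduced to its row key as a String):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.Scan
      import org.apache.hadoop.hbase.util.Bytes

      val rowKeyRdd = hbaseContext.hbaseRDD[String](
        TableName.valueOf("t1"),
        new Scan(),
        { case (_, result) => Bytes.toString(result.getRow) })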

  22. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  23. def initializeLogIfNecessary(isInterpreter: Boolean): Unit

    Attributes
    protected
    Definition Classes
    Logging
  24. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  25. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  26. val job: Job

  27. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  28. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  32. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  33. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  34. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  35. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  39. def mapPartitions[T, R](rdd: RDD[T], mp: (Iterator[T], Connection) ⇒ Iterator[R])(implicit arg0: ClassTag[R]): RDD[R]

    A simple enrichment of the traditional Spark RDD mapPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method.

    rdd

    Original RDD with data to iterate over

    mp

    Function given an iterator over the RDD values and a Connection object to interact with HBase

    returns

    A new RDD generated by the user-defined function, just like a normal mapPartition
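    Example (a minimal sketch, reusing the hypothetical getKeys RDD from the bulkGet example; each partition shares one Table handle):

      import org.apache.hadoop.hbase.TableName
      import org.apache.hadoop.hbase.client.Get
      import org.apache.hadoop.hbase.util.Bytes

      val fetched = hbaseContext.mapPartitions[Array[Byte], String](getKeys,
        (it, connection) => {
          val table = connection.getTable(TableName.valueOf("t1"))
          // Batching the Gets would be faster; one at a time keeps the sketch short
          it.map(rowKey => Bytes.toString(table.get(new Get(rowKey)).getRow))
        })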

  40. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  41. final def notify(): Unit

    Definition Classes
    AnyRef
    Annotations
    @HotSpotIntrinsicCandidate()
  42. final def notifyAll(): Unit

    Definition Classes
    AnyRef
    Annotations
    @HotSpotIntrinsicCandidate()
  43. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  44. val tmpHdfsConfgFile: String

  45. var tmpHdfsConfiguration: Configuration

  46. def toString(): String

    Definition Classes
    AnyRef → Any
  47. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  48. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  49. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @Deprecated @deprecated @throws( classOf[java.lang.Throwable] )
    Deprecated

    See the corresponding Javadoc for more information.
