com.cloudera.spark.hbase

HBaseContext

class HBaseContext extends Serializable with Logging

HBaseContext is a façade for simple and complex HBase operations such as bulk put, get, increment, delete, and scan

HBaseContext takes on the complexity of disseminating the configuration information to the workers and managing the life cycle of Connections.

config: serializable Configuration object

Linear Supertypes
Logging, Serializable, AnyRef, Any

Instance Constructors

  1. new HBaseContext(sc: SparkContext, config: Configuration, tmpHdfsConfgFile: String = null)
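
    Example: a minimal construction sketch. The local master, app name, and configuration source are illustrative; HBaseConfiguration.create() reads hbase-site.xml from the classpath.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.spark.{SparkConf, SparkContext}

      // HBaseContext broadcasts the Configuration so executors can open
      // Connections without re-reading configuration files themselves.
      val sc = new SparkContext(new SparkConf().setAppName("HBaseContextExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())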

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. var appliedCredentials: Boolean

  7. def applyCreds[T](configBroadcast: Broadcast[SerializableWritable[Configuration]]): Unit

  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. val broadcastedConf: Broadcast[SerializableWritable[Configuration]]

  10. def bulkCheckAndPut[T](rdd: RDD[T], tableName: String, f: (T) ⇒ (Array[Byte], Array[Byte], Array[Byte], Array[Byte], Put), autoFlush: Boolean): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate checkAndPuts from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to put into

    f

    Function to convert a value in the RDD to an HBase checkAndPut, expressed as a (row, family, qualifier, expected value, Put) tuple

    autoFlush

    Whether autoFlush should be turned on
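
    Example: a minimal sketch of sending checkAndPuts from an RDD. The table "t1", family "cf", qualifier "q", and the values are illustrative, and the tuple is assumed to follow checkAndPut's (row, family, qualifier, expected value, Put) argument order.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Put
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("bulkCheckAndPutExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq(("row1", "newValue1"), ("row2", "newValue2")))

      // Each Put is applied only if column cf:q of its row still holds "old".
      hbaseContext.bulkCheckAndPut[(String, String)](rdd, "t1",
        (r) => (Bytes.toBytes(r._1),   // row to check
                Bytes.toBytes("cf"),   // column family
                Bytes.toBytes("q"),    // qualifier
                Bytes.toBytes("old"),  // expected current value
                new Put(Bytes.toBytes(r._1)).addColumn(
                  Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(r._2))),
        false)                         // autoFlush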

  11. def bulkCheckDelete[T](rdd: RDD[T], tableName: String, f: (T) ⇒ (Array[Byte], Array[Byte], Array[Byte], Array[Byte], Delete)): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate checkAndDeletes from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to delete from

    f

    Function to convert a value in the RDD to an HBase checkAndDelete, expressed as a (row, family, qualifier, expected value, Delete) tuple
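
    Example: a minimal sketch of sending checkAndDeletes from an RDD. The table, family, qualifier, and expected value are illustrative, and the tuple is assumed to follow checkAndDelete's (row, family, qualifier, expected value, Delete) argument order.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Delete
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("bulkCheckDeleteExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq("row1", "row2"))

      // Each row is deleted only if column cf:q still holds "stale".
      hbaseContext.bulkCheckDelete[String](rdd, "t1",
        (rowKey) => (Bytes.toBytes(rowKey),
                     Bytes.toBytes("cf"),
                     Bytes.toBytes("q"),
                     Bytes.toBytes("stale"),
                     new Delete(Bytes.toBytes(rowKey))))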

  12. def bulkDelete[T](rdd: RDD[T], tableName: String, f: (T) ⇒ Delete, batchSize: Integer): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate Deletes from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to delete from

    f

    Function to convert a value in the RDD to an HBase Delete

    batchSize

    The number of Deletes to batch before sending to HBase
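
    Example: a minimal sketch of bulk-deleting rows; the table name and batch size are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Delete
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("bulkDeleteExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq("row1", "row2", "row3"))

      // Deletes are buffered and sent to HBase 100 at a time.
      hbaseContext.bulkDelete[String](rdd, "t1",
        (rowKey) => new Delete(Bytes.toBytes(rowKey)),
        100)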

  13. def bulkGet[T, U](tableName: String, batchSize: Integer, rdd: RDD[T], makeGet: (T) ⇒ Get, convertResult: (Result) ⇒ U): RDD[U]

    A simple abstraction over the HBaseContext.mapPartition method.

    It allows a user to take an RDD and generate a new RDD based on Gets and the results they bring back from HBase.

    tableName

    The name of the table to get from

    rdd

    Original RDD with data to iterate over

    makeGet

    Function to convert a value in the RDD to an HBase Get

    convertResult

    This will convert the HBase Result object into whatever the user wants to put in the resulting RDD

    returns

    New RDD created by the Gets to HBase
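
    Example: a minimal sketch of a distributed multi-get; the table name, batch size, and conversion are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.{Get, Result}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("bulkGetExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rowKeys = sc.parallelize(Seq("row1", "row2"))

      // Issues Gets two at a time and converts each Result to a String.
      val resultRdd = hbaseContext.bulkGet[String, String](
        "t1",
        2,
        rowKeys,
        (rowKey) => new Get(Bytes.toBytes(rowKey)),
        (result: Result) => if (result.isEmpty) "(no row)" else Bytes.toString(result.getRow))

      resultRdd.collect().foreach(println)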

  14. def bulkIncrement[T](rdd: RDD[T], tableName: String, f: (T) ⇒ Increment, batchSize: Integer): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate Increments from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to send increments to

    f

    Function to convert a value in the RDD to an HBase Increment

    batchSize

    The number of increments to batch before sending to HBase
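
    Example: a minimal sketch of bulk increments; the counter column and batch size are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Increment
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("bulkIncrementExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq(("row1", 1L), ("row2", 5L)))

      // Adds the given amount to cf:counter for each row, 100 Increments per batch.
      hbaseContext.bulkIncrement[(String, Long)](rdd, "t1",
        (r) => new Increment(Bytes.toBytes(r._1)).addColumn(
          Bytes.toBytes("cf"), Bytes.toBytes("counter"), r._2),
        100)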

  15. def bulkPut[T](rdd: RDD[T], tableName: String, f: (T) ⇒ Put, autoFlush: Boolean): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take an RDD, generate Puts from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    rdd

    Original RDD with data to iterate over

    tableName

    The name of the table to put into

    f

    Function to convert a value in the RDD to an HBase Put

    autoFlush

    Whether autoFlush should be turned on
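
    Example: a minimal sketch of a bulk put; the table name, column family, and autoFlush choice are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Put
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("bulkPutExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq(("row1", "value1"), ("row2", "value2")))

      // Writes one cell per record into t1, buffering writes (autoFlush off).
      hbaseContext.bulkPut[(String, String)](rdd, "t1",
        (r) => new Put(Bytes.toBytes(r._1)).addColumn(
          Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(r._2)),
        false)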

  16. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  17. var credentials: Credentials

  18. val credentialsConf: Broadcast[SerializableWritable[Credentials]]

  19. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  20. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  21. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  22. def foreachPartition[T](rdd: RDD[T], f: (Iterator[T], Connection) ⇒ Unit): Unit

    A simple enrichment of the traditional Spark RDD foreachPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method.

    rdd

    Original RDD with data to iterate over

    f

    Function to be given an iterator over the RDD values and a Connection object to interact with HBase
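
    Example: a minimal sketch that writes each partition through the shared Connection; the table and columns are illustrative.

      import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
      import org.apache.hadoop.hbase.client.{Connection, Put}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("foreachPartitionExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq("row1", "row2"))

      hbaseContext.foreachPartition(rdd, (it: Iterator[String], connection: Connection) => {
        // Close the Table when done, but never the Connection itself.
        val table = connection.getTable(TableName.valueOf("t1"))
        try {
          it.foreach(row => table.put(new Put(Bytes.toBytes(row)).addColumn(
            Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(row))))
        } finally {
          table.close()
        }
      })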

  23. def foreachRDD[T](dstream: DStream[T], f: (Iterator[T], Connection) ⇒ Unit): Unit

    A simple enrichment of the traditional Spark Streaming DStream foreach. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method.

    dstream

    Original DStream with data to iterate over

    f

    Function to be given an iterator over the DStream values and a Connection object to interact with HBase
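
    Example: a minimal sketch that writes each batch of a stream through the shared Connection; the socket source, table, and columns are illustrative.

      import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
      import org.apache.hadoop.hbase.client.{Connection, Put}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val sc = new SparkContext(new SparkConf().setAppName("foreachRDDExample").setMaster("local[2]"))
      val ssc = new StreamingContext(sc, Seconds(1))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val lines = ssc.socketTextStream("localhost", 9999)

      hbaseContext.foreachRDD[String](lines, (it: Iterator[String], connection: Connection) => {
        // Use the shared Connection; close the Table, never the Connection.
        val table = connection.getTable(TableName.valueOf("t1"))
        try {
          it.foreach(line => table.put(new Put(Bytes.toBytes(line)).addColumn(
            Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(line))))
        } finally {
          table.close()
        }
      })

      ssc.start()
      ssc.awaitTermination()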

  24. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  25. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  26. def hbaseRDD(tableName: String, scans: Scan): RDD[(Array[Byte], List[(Array[Byte], Array[Byte], Array[Byte])])]

    An overloaded version of HBaseContext.hbaseRDD that predefines the type of the resulting RDD.

    tableName

    the name of the table to scan

    scans

    the HBase scan object to use to read data from HBase

    returns

    New RDD with results from scan
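
    Example: a minimal sketch of a full scan; the table name and caching value are illustrative, and the cell tuples are assumed to be (family, qualifier, value).

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Scan
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("hbaseRDDExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val scan = new Scan()
      scan.setCaching(100)  // rows fetched per RPC

      // Each record pairs the raw row key with a list of cell tuples.
      val scanRdd = hbaseContext.hbaseRDD("t1", scan)
      scanRdd.collect().foreach { case (rowKey, cells) =>
        println(Bytes.toString(rowKey) + " -> " + cells.length + " cells")
      }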

  27. def hbaseRDD[U](tableName: String, scan: Scan, f: ((ImmutableBytesWritable, Result)) ⇒ U)(implicit arg0: ClassTag[U]): RDD[U]

    This function will use the native HBase TableInputFormat with the given scan object to generate a new RDD.

    tableName

    the name of the table to scan

    scan

    the HBase scan object to use to read data from HBase

    f

    function to convert a Result object from HBase into what the user wants in the final generated RDD

    returns

    new RDD with results from scan
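
    Example: a minimal sketch that scans a table and keeps only the row keys; the table name and conversion function are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.{Result, Scan}
      import org.apache.hadoop.hbase.io.ImmutableBytesWritable
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("hbaseRDDExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      // Converts each (ImmutableBytesWritable, Result) pair to just the row key string.
      val rowKeys = hbaseContext.hbaseRDD[String]("t1", new Scan(),
        (pair: (ImmutableBytesWritable, Result)) => Bytes.toString(pair._2.getRow))

      rowKeys.collect().foreach(println)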

  28. def hbaseScanRDD(tableName: String, scan: Scan): RDD[(Array[Byte], List[(Array[Byte], Array[Byte], Array[Byte])])]

  29. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  30. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  31. val job: Job

  32. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  33. def logCredInformation[T](credentials2: Credentials): Unit

  34. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  35. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  39. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  40. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  41. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  42. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  43. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  44. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  45. def mapPartition[T, R](rdd: RDD[T], mp: (Iterator[T], Connection) ⇒ Iterator[R])(implicit arg0: ClassTag[R]): RDD[R]

    A simple enrichment of the traditional Spark RDD mapPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method.

    Note: Make sure to partition correctly to avoid memory issues when getting data from HBase.

    rdd

    Original RDD with data to iterate over

    mp

    Function to be given an iterator over the RDD values and a Connection object to interact with HBase

    returns

    A new RDD generated by the user-defined function, just like a normal mapPartition
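
    Example: a minimal sketch that checks, per partition, whether each row exists in HBase; the table "t1" is illustrative.

      import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
      import org.apache.hadoop.hbase.client.{Connection, Get}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}

      val sc = new SparkContext(new SparkConf().setAppName("mapPartitionExample").setMaster("local[2]"))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rdd = sc.parallelize(Seq("row1", "row2"))

      val exists = hbaseContext.mapPartition[String, (String, Boolean)](rdd,
        (it: Iterator[String], connection: Connection) => {
          val table = connection.getTable(TableName.valueOf("t1"))
          try {
            // Materialize inside try so the Table is still open while rows are checked.
            it.map(row => (row, table.exists(new Get(Bytes.toBytes(row))))).toList.iterator
          } finally {
            table.close()
          }
        })

      exists.collect().foreach(println)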

  46. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  47. final def notify(): Unit

    Definition Classes
    AnyRef
  48. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  49. def streamBulkCheckAndDelete[T](dstream: DStream[T], tableName: String, f: (T) ⇒ (Array[Byte], Array[Byte], Array[Byte], Array[Byte], Delete)): Unit

    A simple abstraction over the bulkCheckDelete method.

    It allows a user to take a DStream, generate checkAndDeletes from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    dstream

    Original DStream with data to iterate over

    tableName

    The name of the table to delete from

    f

    Function to convert a value in the DStream to an HBase checkAndDelete

  50. def streamBulkCheckAndPut[T](dstream: DStream[T], tableName: String, f: (T) ⇒ (Array[Byte], Array[Byte], Array[Byte], Array[Byte], Put), autoFlush: Boolean): Unit

    A simple abstraction over the bulkCheckAndPut method.

    It allows a user to take a DStream, generate checkAndPuts from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    dstream

    Original DStream with data to iterate over

    tableName

    The name of the table to checkAndPut into

    f

    Function to convert a value in the DStream to an HBase checkAndPut

    autoFlush

    Whether autoFlush should be turned on

  51. def streamBulkDelete[T](dstream: DStream[T], tableName: String, f: (T) ⇒ Delete, batchSize: Integer): Unit

    A simple abstraction over the streamBulkMutation method.

    It allows a user to take a DStream, generate Deletes from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    dstream

    Original DStream with data to iterate over

    tableName

    The name of the table to delete from

    f

    Function to convert a value in the DStream to an HBase Delete

    batchSize

    The number of Deletes to batch before sending to HBase
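
    Example: a minimal sketch of streaming deletes; the socket source, table name, and batch size are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Delete
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val sc = new SparkContext(new SparkConf().setAppName("streamBulkDeleteExample").setMaster("local[2]"))
      val ssc = new StreamingContext(sc, Seconds(1))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      // Each line received is treated as a row key to delete, 100 Deletes per batch.
      val rowKeys = ssc.socketTextStream("localhost", 9999)
      hbaseContext.streamBulkDelete[String](rowKeys, "t1",
        (rowKey) => new Delete(Bytes.toBytes(rowKey)),
        100)

      ssc.start()
      ssc.awaitTermination()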

  52. def streamBulkGet[T, U](tableName: String, batchSize: Integer, dstream: DStream[T], makeGet: (T) ⇒ Get, convertResult: (Result) ⇒ U)(implicit arg0: ClassTag[U]): DStream[U]

    A simple abstraction over the HBaseContext.streamMap method.

    It allows a user to take a DStream and generate a new DStream based on Gets and the results they bring back from HBase.

    tableName

    The name of the table to get from

    dstream

    Original DStream with data to iterate over

    makeGet

    Function to convert a value in the DStream to an HBase Get

    convertResult

    This will convert the HBase Result object into whatever the user wants to put in the resulting DStream

    returns

    New DStream created by the Gets to HBase
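
    Example: a minimal sketch of streaming gets; the socket source, table name, and batch size are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.{Get, Result}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val sc = new SparkContext(new SparkConf().setAppName("streamBulkGetExample").setMaster("local[2]"))
      val ssc = new StreamingContext(sc, Seconds(1))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      // Each incoming line is a row key; the result DStream carries what was found.
      val rowKeys = ssc.socketTextStream("localhost", 9999)
      val results = hbaseContext.streamBulkGet[String, String](
        "t1",
        2,
        rowKeys,
        (rowKey) => new Get(Bytes.toBytes(rowKey)),
        (result: Result) => if (result.isEmpty) "(no row)" else Bytes.toString(result.getRow))

      results.print()
      ssc.start()
      ssc.awaitTermination()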

  53. def streamBulkIncrement[T](dstream: DStream[T], tableName: String, f: (T) ⇒ Increment, batchSize: Int): Unit

    A simple abstraction over the streamBulkMutation method.

    It allows a user to take a DStream, generate Increments from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    dstream

    Original DStream with data to iterate over

    tableName

    The name of the table to send increments to

    f

    Function to convert a value in the DStream to an HBase Increment

    batchSize

    The number of increments to batch before sending to HBase

  54. def streamBulkPut[T](dstream: DStream[T], tableName: String, f: (T) ⇒ Put, autoFlush: Boolean): Unit

    A simple abstraction over the bulkPut method.

    It allows a user to take a DStream, generate Puts from it, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    dstream

    Original DStream with data to iterate over

    tableName

    The name of the table to put into

    f

    Function to convert a value in the DStream to an HBase Put

    autoFlush

    Whether autoFlush should be turned on
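
    Example: a minimal sketch of streaming puts; the socket source, table, and columns are illustrative.

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.Put
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val sc = new SparkContext(new SparkConf().setAppName("streamBulkPutExample").setMaster("local[2]"))
      val ssc = new StreamingContext(sc, Seconds(1))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      // Every line received in a batch becomes one Put into t1.
      val lines = ssc.socketTextStream("localhost", 9999)
      hbaseContext.streamBulkPut[String](lines, "t1",
        (line) => new Put(Bytes.toBytes(line)).addColumn(
          Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(line)),
        false)  // autoFlush

      ssc.start()
      ssc.awaitTermination()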

  55. def streamMap[T, U](dstream: DStream[T], mp: (Iterator[T], Connection) ⇒ Iterator[U])(implicit arg0: ClassTag[U]): DStream[U]

    A simple enrichment of the traditional Spark Streaming DStream mapPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method.

    Note: Make sure to partition correctly to avoid memory issues when getting data from HBase.

    dstream

    Original DStream with data to iterate over

    mp

    Function to be given an iterator over the DStream values and a Connection object to interact with HBase

    returns

    A new DStream generated by the user-defined function, just like a normal mapPartition
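
    Example: a minimal sketch that checks, per partition of each batch, whether each row exists in HBase; the table "t1" is illustrative.

      import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
      import org.apache.hadoop.hbase.client.{Connection, Get}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      val sc = new SparkContext(new SparkConf().setAppName("streamMapExample").setMaster("local[2]"))
      val ssc = new StreamingContext(sc, Seconds(1))
      val hbaseContext = new HBaseContext(sc, HBaseConfiguration.create())

      val rowKeys = ssc.socketTextStream("localhost", 9999)

      val flagged = hbaseContext.streamMap[String, (String, Boolean)](rowKeys,
        (it: Iterator[String], connection: Connection) => {
          val table = connection.getTable(TableName.valueOf("t1"))
          try {
            // Materialize inside try so the Table stays open while rows are checked.
            it.map(row => (row, table.exists(new Get(Bytes.toBytes(row))))).toList.iterator
          } finally {
            table.close()
          }
        })

      flagged.print()
      ssc.start()
      ssc.awaitTermination()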

  56. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  57. val tmpHdfsConfgFile: String

  58. var tmpHdfsConfiguration: Configuration

  59. def toString(): String

    Definition Classes
    AnyRef → Any
  60. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  61. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  62. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
