
org.apache.hadoop.hbase.spark

JavaHBaseContext

class JavaHBaseContext extends Serializable

This is the Java wrapper over HBaseContext, which is written in Scala. This class will be used by developers who want to work with Spark or Spark Streaming in Java

Annotations
@Public()
Linear Supertypes
Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new JavaHBaseContext(jsc: JavaSparkContext, config: Configuration)

    jsc

    This is the JavaSparkContext that we will wrap

    config

    This is the config information for our HBase cluster
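
    A minimal construction sketch (not part of the generated Scaladoc; the application name is an illustrative assumption). Later examples on this page reuse jsc and hbaseContext from this snippet and assume the usual HBase client classes (TableName, Get, Put, Delete, Result, Bytes) are imported:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.spark.JavaHBaseContext;
      import org.apache.spark.SparkConf;
      import org.apache.spark.api.java.JavaSparkContext;

      SparkConf sparkConf = new SparkConf().setAppName("JavaHBaseContextExample");
      JavaSparkContext jsc = new JavaSparkContext(sparkConf);

      // Picks up hbase-site.xml from the classpath; cluster-specific settings
      // (e.g. hbase.zookeeper.quorum) can also be set here.
      Configuration conf = HBaseConfiguration.create();

      JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);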

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def bulkDelete[T](javaRdd: JavaRDD[T], tableName: TableName, f: Function[T, Delete], batchSize: Integer): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take a JavaRDD, generate Deletes from its elements, and send them to HBase.

    The complexity of managing the Connection is removed from the developer.

    javaRdd

    Original JavaRDD with data to iterate over

    tableName

    The name of the table to delete from

    f

    Function to convert a value in the JavaRDD to an HBase Delete

    batchSize

    The number of deletes to batch before sending to HBase
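
    A minimal usage sketch (reusing the jsc and hbaseContext set up in the constructor example above; the table name "exampleTable" is an illustrative assumption):

      JavaRDD<byte[]> rowKeysToDelete = jsc.parallelize(
          Arrays.asList(Bytes.toBytes("1"), Bytes.toBytes("2")));

      // Each byte[] row key becomes a Delete; deletes are sent to HBase in batches of 4.
      hbaseContext.bulkDelete(rowKeysToDelete,
          TableName.valueOf("exampleTable"),
          rowKey -> new Delete(rowKey),
          4);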

  6. def bulkGet[T, U](tableName: TableName, batchSize: Integer, javaRdd: JavaRDD[T], makeGet: Function[T, Get], convertResult: Function[Result, U]): JavaRDD[U]

    A simple abstraction over the HBaseContext.mapPartition method.

    It allows a user to take a JavaRDD and generate a new RDD based on Gets and the results they bring back from HBase.

    tableName

    The name of the table to get from

    batchSize

    batch size of how many gets to retrieve in a single fetch

    javaRdd

    Original JavaRDD with data to iterate over

    makeGet

    Function to convert a value in the JavaRDD to an HBase Get

    convertResult

    This will convert the HBase Result object to whatever the user wants to put in the resulting JavaRDD

    returns

    New JavaRDD created from the results of the Gets to HBase
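
    A minimal usage sketch (reusing the setup from the constructor example above; the table name, column family "cf", and qualifier "q" are illustrative assumptions):

      JavaRDD<byte[]> rowKeys = jsc.parallelize(
          Arrays.asList(Bytes.toBytes("1"), Bytes.toBytes("2")));

      // Function here is org.apache.spark.api.java.function.Function.
      JavaRDD<String> values = hbaseContext.bulkGet(
          TableName.valueOf("exampleTable"),
          2,                                   // fetch Gets in batches of 2
          rowKeys,
          (Function<byte[], Get>) rowKey -> new Get(rowKey),
          (Function<Result, String>) result ->
              Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"))));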

  7. def bulkPut[T](javaRdd: JavaRDD[T], tableName: TableName, f: Function[T, Put]): Unit

    A simple abstraction over the HBaseContext.foreachPartition method.

    It allows a user to take a JavaRDD, generate Puts from its elements, and send them to HBase. The complexity of managing the Connection is removed from the developer.

    javaRdd

    Original JavaRDD with data to iterate over

    tableName

    The name of the table to put into

    f

    Function to convert a value in the JavaRDD to an HBase Put
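
    A minimal usage sketch (reusing the setup from the constructor example above; the "rowKey,value" record format and column names are illustrative assumptions):

      JavaRDD<String> records = jsc.parallelize(Arrays.asList("1,foo", "2,bar"));

      // Each record is converted to a Put against column cf:q.
      hbaseContext.bulkPut(records,
          TableName.valueOf("exampleTable"),
          record -> {
            String[] parts = record.split(",");
            return new Put(Bytes.toBytes(parts[0]))
                .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(parts[1]));
          });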

  8. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  9. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. def foreachPartition[T](javaRdd: JavaRDD[T], f: VoidFunction[(Iterator[T], Connection)]): Unit

    A simple enrichment of the traditional Spark javaRdd foreachPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method

    javaRdd

    Original javaRdd with data to iterate over

    f

    Function to be given an iterator to iterate through the RDD values and a Connection object to interact with HBase
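
    A minimal usage sketch (reusing the records RDD from the bulkPut example above; writing through a BufferedMutator is one possible way to use the partition and Connection):

      hbaseContext.foreachPartition(records,
          pair -> {
            // pair._1() is the partition iterator, pair._2() is the managed Connection.
            Connection connection = pair._2();
            BufferedMutator mutator =
                connection.getBufferedMutator(TableName.valueOf("exampleTable"));
            while (pair._1().hasNext()) {
              String[] parts = pair._1().next().split(",");
              mutator.mutate(new Put(Bytes.toBytes(parts[0]))
                  .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(parts[1])));
            }
            mutator.flush();
            mutator.close();
            // Do not close the Connection; it is managed by JavaHBaseContext.
          });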

  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  14. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  15. val hbaseContext: HBaseContext
  16. def hbaseRDD(tableName: TableName, scans: Scan): JavaRDD[(ImmutableBytesWritable, Result)]

    An overloaded version of HBaseContext hbaseRDD that defines the type of the resulting JavaRDD.

    tableName

    The name of the table to scan

    scans

    The HBase scan object to use to read data from HBase

    returns

    New JavaRDD with results from scan
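
    A minimal usage sketch (reusing the setup from the constructor example above; the caching value is an illustrative assumption):

      Scan scan = new Scan();
      scan.setCaching(100);

      JavaRDD<Tuple2<ImmutableBytesWritable, Result>> scanRdd =
          hbaseContext.hbaseRDD(TableName.valueOf("exampleTable"), scan);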

  17. def hbaseRDD[U](tableName: TableName, scans: Scan, f: Function[(ImmutableBytesWritable, Result), U]): JavaRDD[U]

    This function will use the native HBase TableInputFormat with the given scan object to generate a new JavaRDD

    tableName

    The name of the table to scan

    scans

    The HBase scan object to use to read data from HBase

    f

    Function to convert a Result object from HBase into what the user wants in the final generated JavaRDD

    returns

    New JavaRDD with results from scan
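
    A minimal usage sketch (reusing the setup from the constructor example above):

      // Extract only the row keys from the scan results.
      JavaRDD<String> scannedRowKeys = hbaseContext.hbaseRDD(
          TableName.valueOf("exampleTable"),
          new Scan(),
          (Function<Tuple2<ImmutableBytesWritable, Result>, String>)
              tuple -> Bytes.toString(tuple._2().getRow()));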

  18. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  19. def mapPartitions[T, R](javaRdd: JavaRDD[T], f: FlatMapFunction[(Iterator[T], Connection), R]): JavaRDD[R]

    A simple enrichment of the traditional Spark JavaRDD mapPartition. This function differs from the original in that it offers the developer access to an already connected Connection object.

    Note: Do not close the Connection object. All Connection management is handled outside this method

    Note: Make sure to partition correctly to avoid memory issues when getting data from HBase

    javaRdd

    Original JavaRDD with data to iterate over

    f

    Function to be given an iterator to iterate through the RDD values and a Connection object to interact with HBase

    returns

    Returns a new RDD generated by the user-defined function, just like a normal mapPartition
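
    A minimal usage sketch (reusing the rowKeys RDD from the bulkGet example above; note that recent Spark versions expect a FlatMapFunction to return an Iterator, while older ones expect an Iterable):

      JavaRDD<String> fetched = hbaseContext.mapPartitions(rowKeys,
          pair -> {
            // pair._1() is the partition iterator, pair._2() is the managed Connection.
            Table table = pair._2().getTable(TableName.valueOf("exampleTable"));
            List<String> results = new ArrayList<>();
            while (pair._1().hasNext()) {
              Result result = table.get(new Get(pair._1().next()));
              results.add(Bytes.toString(result.getRow()));
            }
            table.close();
            return results.iterator();
          });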

  20. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  21. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  22. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  23. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  24. def toString(): String
    Definition Classes
    AnyRef → Any
  25. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
