org.apache.spark.sql.hive

SnappyStoreHiveCatalog

class SnappyStoreHiveCatalog extends Catalog with Logging

A Catalog that uses Hive for persistence and adds Snappy extensions, such as stream/topK tables, returning a LogicalPlan to materialize these entities.
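
A minimal usage sketch, assuming a local SnappyContext; the master URL and the table name app.airline are placeholders, and in practice the catalog is obtained from the context rather than constructed directly:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SnappyContext

    // Assumed setup for illustration only.
    val sc = new SparkContext(
      new SparkConf().setAppName("catalog-demo").setMaster("local[*]"))
    val snc = SnappyContext(sc)

    // Resolve a table to the LogicalPlan that materializes it.
    val catalog = new SnappyStoreHiveCatalog(snc)
    val plan = catalog.lookupRelation(
      catalog.newQualifiedTableName("app.airline"))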

Linear Supertypes
Logging, Catalog, AnyRef, Any

Instance Constructors

  1. new SnappyStoreHiveCatalog(context: SnappyContext)

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def alterTableToAddIndexProp(inTable: QualifiedTableName, index: QualifiedTableName): Unit

  7. def alterTableToRemoveIndexProp(inTable: QualifiedTableName, index: QualifiedTableName): Unit

  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. var client: ClientWrapper

    Hive client that is used to retrieve metadata from the Hive MetaStore. The version of the Hive client that is used here must match the meta-store that is configured in the hive-site.xml file.

    Attributes
    protected[org.apache.spark.sql]
  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. def compatibleSchema(schema1: StructType, schema2: StructType): Boolean

  12. val conf: SQLConf

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  13. def configure(): Map[String, String]

    Overridden by child classes that need to set configuration before client init (but after hive-site.xml). A subclassing sketch appears after this member list.

    Attributes
    protected
  14. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  16. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  18. def getDataSourceRelations[T](tableTypes: Seq[Type], baseTable: Option[String] = None): Seq[T]

  19. def getDataSourceTables(tableTypes: Seq[Type], baseTable: Option[String] = None): Seq[QualifiedTableName]

  20. def getTableName(tableIdent: TableIdentifier): String

    Attributes
    protected
    Definition Classes
    Catalog
  21. def getTableType(relation: BaseRelation): Type

  22. def getTables(db: Option[String]): Seq[(String, Boolean)]

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  23. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  24. def hiveMetastoreBarrierPrefixes(): Seq[String]

    A comma-separated list of class prefixes that should explicitly be reloaded for each version of Hive that Spark SQL is communicating with. For example, Hive UDFs that are declared in a prefix that typically would be shared (i.e. org.apache.spark.*).

    Attributes
    protected[org.apache.spark.sql]
  25. def hiveMetastoreJars(): String

    The location of the jars that should be used to instantiate the Hive meta-store client. This property can be one of three options:

    a classpath in the standard format for both Hive and Hadoop.

    builtin - attempt to discover the jars that were used to load Spark SQL and use those. This option is only valid when using the execution version of Hive.

    maven - download the correct version of Hive on demand from Maven.

    A configuration sketch appears after this member list.

    Attributes
    protected[org.apache.spark.sql]
  26. def hiveMetastoreSharedPrefixes(): Seq[String]

    A comma-separated list of class prefixes that should be loaded using the ClassLoader that is shared between Spark SQL and a specific version of Hive. An example of classes that should be shared is JDBC drivers that are needed to talk to the meta-store. Other classes that need to be shared are those that interact with classes that are already shared, for example, a custom appender used by log4j.

    Attributes
    protected[org.apache.spark.sql]
  27. val hiveMetastoreVersion: String

    The version of the Hive client that will be used to communicate with the meta-store for the catalog.

    Attributes
    protected[org.apache.spark.sql]
  28. val internalHiveclient: Hive

    Attributes
    protected
  29. def invalidateTable(tableIdent: QualifiedTableName): Unit

  30. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  31. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  32. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  33. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  34. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  35. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  39. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  40. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  41. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  42. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  43. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  44. def lookupRelation(tableIdent: TableIdentifier, alias: Option[String]): LogicalPlan

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  45. final def lookupRelation(tableIdent: QualifiedTableName): LogicalPlan

  46. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  47. def newQualifiedTableName(tableIdent: String): QualifiedTableName

  48. def newQualifiedTableName(tableIdent: TableIdentifier): QualifiedTableName

  49. def normalizeSchema(schema: StructType): StructType

  50. final def notify(): Unit

    Definition Classes
    AnyRef
  51. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  52. def processTableIdentifier(tableIdentifier: String): String

  53. def refreshTable(tableIdent: TableIdentifier): Unit

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  54. def registerDataSourceTable(tableIdent: QualifiedTableName, userSpecifiedSchema: Option[StructType], partitionColumns: Array[String], provider: String, options: Map[String, String], relation: BaseRelation): Unit

    Creates a data source table (a table created with the USING clause) in Hive's meta-store. A usage sketch appears after this member list.

  55. def registerTable(tableName: QualifiedTableName, plan: LogicalPlan): Unit

  56. def registerTable(tableIdentifier: TableIdentifier, plan: LogicalPlan): Unit

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  57. def removeIndexProp(inTable: QualifiedTableName, index: QualifiedTableName): Unit

  58. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  59. def tableExists(tableName: QualifiedTableName): Boolean

  60. def tableExists(tableIdentifier: String): Boolean

  61. def tableExists(tableIdentifier: TableIdentifier): Boolean

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  62. val tempTables: Map[QualifiedTableName, LogicalPlan]

  63. def toString(): String

    Definition Classes
    AnyRef → Any
  64. def unregisterAllTables(): Unit

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  65. def unregisterDataSourceTable(tableIdent: QualifiedTableName, relation: Option[BaseRelation]): Unit

    Drops a data source table from Hive's meta-store.

  66. def unregisterTable(tableIdent: QualifiedTableName): Unit

  67. def unregisterTable(tableIdentifier: TableIdentifier): Unit

    Definition Classes
    SnappyStoreHiveCatalog → Catalog
  68. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  69. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  70. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  71. def withHiveExceptionHandling[T](function: ⇒ T): T
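
Examples

The sketches below are illustrative only; any class name, property key, schema, or table name not shown in the member list above is an assumption.

Overriding configure in a subclass to set extra Hive properties before the metastore client initializes (the retry property is a standard Hive setting, used here purely as an example):

    import org.apache.spark.sql.SnappyContext

    // Hypothetical subclass; configure runs after hive-site.xml is read
    // but before the metastore client is created.
    class CustomHiveCatalog(context: SnappyContext)
        extends SnappyStoreHiveCatalog(context) {

      override protected def configure(): Map[String, String] =
        super.configure() + ("hive.metastore.connect.retries" -> "5")
    }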
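
Configuring the metastore jars location and shared class prefixes; the property keys mirror the standard Spark SQL settings and are an assumption for SnappyData:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // One of: an explicit Hive/Hadoop classpath, "builtin", or "maven".
      .set("spark.sql.hive.metastore.jars", "builtin")
      // Classes shared between Spark SQL and Hive, e.g. a JDBC driver
      // needed to talk to the meta-store.
      .set("spark.sql.hive.metastore.sharedPrefixes", "com.mysql.jdbc")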
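
Registering a data source table with registerDataSourceTable; the schema, provider name, and options are placeholders, and a real BaseRelation would normally be supplied (catalog is the instance from the sketch near the top of this page):

    import org.apache.spark.sql.types._

    val schema = StructType(Seq(
      StructField("id", IntegerType, nullable = false),
      StructField("name", StringType)))

    catalog.registerDataSourceTable(
      tableIdent = catalog.newQualifiedTableName("app.people"),
      userSpecifiedSchema = Some(schema),
      partitionColumns = Array.empty[String],
      provider = "column",             // placeholder provider
      options = Map("BUCKETS" -> "8"), // placeholder option
      relation = null)                 // a real BaseRelation goes here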
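
Wrapping a metastore call in withHiveExceptionHandling so Hive-level failures pass through the catalog's handling logic (the exact recovery behavior is assumed from the name):

    val exists: Boolean = catalog.withHiveExceptionHandling {
      catalog.tableExists("app.airline")
    }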
