object Utils

org.apache.spark.sql.collection

Linear Supertypes: AnyRef, Any

Value Members

  1. final def !=(arg0: Any): Boolean
      Definition Classes: AnyRef → Any
  2. final def ##(): Int
      Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean
      Definition Classes: AnyRef → Any
  4. def ERROR_NO_QCS(module: String): String
  5. final val SKIP_ANALYSIS_PREFIX: String("SAMPLE_")
  6. final val WEIGHTAGE_COLUMN_NAME: String("SNAPPY_SAMPLER_WEIGHTAGE")
  7. final val Z95Percent: Double
  8. final val Z95Squared: Double
  9. def analysisException(msg: String, cause: Option[Throwable] = None): AnalysisException
  10. final def asInstanceOf[T0]: T0
      Definition Classes: Any
  11. def charMetadata(size: Int, md: Metadata): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is stored (and rendered) as CHAR by SnappyStore.

      size     the size parameter of the CHAR() column type
      md       optional Metadata object to be merged into the result
      returns  the result Metadata object to use for StructField

  12. def charMetadata(size: Int): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is stored (and rendered) as CHAR by SnappyStore.

      size     the size parameter of the CHAR() column type
      returns  the result Metadata object to use for StructField

  13. def charMetadata(): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is stored (and rendered) as CHAR by SnappyStore.

      returns  the result Metadata object to use for StructField
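
      For illustration, a minimal sketch of attaching this metadata to a schema (the column name and size below are hypothetical):

        import org.apache.spark.sql.collection.Utils
        import org.apache.spark.sql.types.{StringType, StructField, StructType}

        // Store (and render) "airport_code" as CHAR(3) in SnappyStore.
        val schema = StructType(Seq(
          StructField("airport_code", StringType, nullable = false,
            metadata = Utils.charMetadata(3))))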

  14. def classForName(className: String): Class[_]
  15. def clearDefaultSerializerAndCodec(): Unit
  16. def clone(): AnyRef
      Attributes: protected[java.lang]
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  17. def codecCompress(codec: CompressionCodec, input: Array[Byte], inputLen: Int): Array[Byte]
  18. def codecDecompress(codec: CompressionCodec, input: Array[Byte], inputOffset: Int, inputLen: Int, outputLen: Int): Array[Byte]
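
      A hedged sketch of round-tripping bytes through these two helpers. It assumes access to Spark's internal CompressionCodec factory (createCodec is spark-private, so this is illustrative only):

        import org.apache.spark.SparkConf
        import org.apache.spark.io.CompressionCodec
        import org.apache.spark.sql.collection.Utils

        val codec = CompressionCodec.createCodec(new SparkConf())
        val input = "some repetitive payload, some repetitive payload".getBytes("UTF-8")
        val compressed = Utils.codecCompress(codec, input, input.length)
        // Decompression needs the offset, the compressed length and the original length.
        val restored = Utils.codecDecompress(codec, compressed, 0, compressed.length, input.length)
        assert(java.util.Arrays.equals(input, restored))
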
  19. def columnIndex(col: String, cols: Array[String], module: String): Int
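
      A sketch under assumed semantics (returns the position of col within cols, with module presumably used in any error message; all values below are illustrative):

        import org.apache.spark.sql.collection.Utils

        val idx = Utils.columnIndex("city", Array("id", "city", "state"), "SampleModule")
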
  20. def compare(left: UTF8String, right: UTF8String): Int
  21. def createCatalystConverter(dataType: DataType): (Any) ⇒ Any
  22. def createScalaConverter(dataType: DataType): (Any) ⇒ Any
  23. final def eq(arg0: AnyRef): Boolean
      Definition Classes: AnyRef
  24. def equals(arg0: Any): Boolean
      Definition Classes: AnyRef → Any
  25. def fieldIndex(relationOutput: Seq[Attribute], columnName: String, caseSensitive: Boolean): Int
  26. def fieldName(f: StructField): String
  27. def fillArray[T](a: Array[_ >: T], v: T, start: Int, endP1: Int): Unit
  28. def finalize(): Unit
      Attributes: protected[java.lang]
      Definition Classes: AnyRef
      Annotations: @throws( classOf[java.lang.Throwable] )
  29. def generateJson(dataType: DataType, gen: JsonGenerator, row: InternalRow): Unit
  30. def getAllExecutorsMemoryStatus(sc: SparkContext): Map[BlockManagerId, (Long, Long)]
  31. final def getClass(): Class[_]
      Definition Classes: AnyRef → Any
  32. def getClientHostPort(netServer: String): String
  33. def getDriverClassName(url: String): String
  34. def getFields(o: Any): Map[String, Any]
  35. def getFixedPartitionRDD[T](sc: SparkContext, f: (TaskContext, Partition) ⇒ Iterator[T], partitioner: Partitioner, numPartitions: Int)(implicit arg0: ClassTag[T]): RDD[T]
  36. def getGenericRowValues(row: GenericRow): Array[Any]
  37. def getHostExecutorId(blockId: BlockManagerId): String
  38. def getInternalType(dataType: DataType): Class[_]
  39. def getNumColumns(partitioning: Partitioning): Int
  40. def getPartitionData(blockId: BlockId, bm: BlockManager): ByteBuffer
  41. def getSQLDataType(dataType: DataType): DataType
      Annotations: @tailrec()
  42. def getSchemaAndPlanFromBase(schemaOpt: Option[StructType], baseTableOpt: Option[String], catalog: SnappyStoreHiveCatalog, asSelect: Boolean, table: String, tableType: String): (StructType, Option[LogicalPlan])

      Get the result schema given an optional explicit schema and base table. If both are specified, check compatibility between the two.

  43. def hasLowerCase(k: String): Boolean
  44. def hashCode(): Int
      Definition Classes: AnyRef → Any
  45. def immutableMap[A, B](m: Map[A, B]): Map[A, B]
  46. final def isInstanceOf[T0]: Boolean
      Definition Classes: Any
  47. final def isLoner(sc: SparkContext): Boolean
  48. def mapExecutors[T](sc: SparkContext, f: (TaskContext, ExecutorLocalPartition) ⇒ Iterator[T])(implicit arg0: ClassTag[T]): RDD[T]
  49. def mapExecutors[T](sqlContext: SQLContext, f: () ⇒ Iterator[T])(implicit arg0: ClassTag[T]): RDD[T]
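
      For illustration, a sketch using the SQLContext variant to run a serializable function once per executor (sqlContext is assumed to exist in scope):

        import org.apache.spark.sql.collection.Utils

        // Collect the JVM version reported by every executor.
        val jvmVersions: Array[String] = Utils.mapExecutors[String](sqlContext,
          () => Iterator(System.getProperty("java.version"))).collect()
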
  50. def matchOption(optName: String, options: Map[String, Any]): Option[(String, Any)]
  51. def metricMethods: ((String) ⇒ String, (String) ⇒ String)
  52. def millisToDays(millisUtc: Long, tz: TimeZone): Int
  53. final def ne(arg0: AnyRef): Boolean
      Definition Classes: AnyRef
  54. def newChunkedByteBuffer(chunks: Array[ByteBuffer]): ChunkedByteBuffer
  55. final def notify(): Unit
      Definition Classes: AnyRef
  56. final def notifyAll(): Unit
      Definition Classes: AnyRef
  57. def parseColumn(cv: Any, cols: Array[String], module: String, option: String): Int
  58. def parseColumnsAsClob(s: String): (Boolean, Set[String])
  59. def parseDouble(v: Any, module: String, option: String, min: Double, max: Double, exclusive: Boolean = true): Double
  60. def parseInteger(v: Any, module: String, option: String, min: Int = 1, max: Int = Int.MaxValue): Int
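
      A sketch of how the two bounds-checked parsers might be invoked; the module and option names are illustrative, and acceptance of string values is assumed from the Any-typed parameter:

        import org.apache.spark.sql.collection.Utils

        // parseDouble defaults to exclusive bounds, so the value must lie in (0.0, 1.0).
        val fraction = Utils.parseDouble("0.05", "SampleModule", "fraction", 0.0, 1.0)
        // parseInteger defaults to the range [1, Int.MaxValue].
        val buckets = Utils.parseInteger("57", "SampleModule", "buckets")
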
  61. def parseTimeInterval(optV: Any, module: String): Long

      Parse the given time interval value as long milliseconds.

      See also: timeIntervalSpec for the allowed string specification
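
      A sketch of a call, assuming a string form accepted by timeIntervalSpec (the exact literal below is a guess at that syntax):

        import org.apache.spark.sql.collection.Utils

        // Returns the interval converted to milliseconds.
        val windowMillis: Long = Utils.parseTimeInterval("10s", "SampleModule")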

  62. def parseTimestamp(ts: String, module: String, col: String): Long
  63. def qcsOf(qa: Array[String], cols: Array[String], module: String): (Array[Int], Array[String])
  64. def registerDriver(driver: String): Unit

      Register the given driver class with Spark's loader.

  65. def registerDriverUrl(url: String): String

      Register the driver for the given JDBC URL and return the driver class name.
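
      A sketch of both entry points; the driver class and URL below are illustrative and not necessarily ones these methods recognize:

        import org.apache.spark.sql.collection.Utils

        // Register an explicit driver class with Spark's loader.
        Utils.registerDriver("org.postgresql.Driver")

        // Or resolve the driver from a JDBC URL and register it in one step.
        val driverClass: String = Utils.registerDriverUrl("jdbc:postgresql://host:5432/db")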

  66. def resolveQCS(options: Map[String, Any], fieldNames: Array[String], module: String): (Array[Int], Array[String])
  67. def resolveQCS(qcsV: Option[Any], fieldNames: Array[String], module: String): (Array[Int], Array[String])
  68. def schemaFields(schema: StructType): Map[String, StructField]
  69. def setDefaultConfProperty(conf: SparkConf, name: String, default: String): Unit
  70. def setDefaultSerializerAndCodec(conf: SparkConf): Unit
  71. def stringMetadata(md: Metadata = Metadata.empty): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is rendered as CLOB by SnappyStore.

      md       optional Metadata object to be merged into the result
      returns  the result Metadata object to use for StructField

  72. final def synchronized[T0](arg0: ⇒ T0): T0
      Definition Classes: AnyRef
  73. def taskMemoryManager(context: TaskContext): TaskMemoryManager
  74. final val timeIntervalSpec: Regex

      String specification for time intervals.

  75. def toLowerCase(k: String): String
  76. def toString(): String
      Definition Classes: AnyRef → Any
  77. def toUnsafeRow(buffer: ByteBuffer, numColumns: Int): UnsafeRow
  78. def toUpperCase(k: String): String
  79. lazy val usingEnhancedSpark: Boolean
  80. def varcharMetadata(size: Int, md: Metadata): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is stored (and rendered) as VARCHAR by SnappyStore.

      size     the size parameter of the VARCHAR() column type
      md       optional Metadata object to be merged into the result
      returns  the result Metadata object to use for StructField

  81. def varcharMetadata(size: Int): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is stored (and rendered) as VARCHAR by SnappyStore.

      size     the size parameter of the VARCHAR() column type
      returns  the result Metadata object to use for StructField

  82. def varcharMetadata(): Metadata

      Utility function to return a Metadata object for a StructField of StringType, to ensure that the field is stored (and rendered) as VARCHAR by SnappyStore.

      returns  the result Metadata object to use for StructField
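
      As with charMetadata, a minimal sketch combining varcharMetadata and stringMetadata (column names and size are hypothetical):

        import org.apache.spark.sql.collection.Utils
        import org.apache.spark.sql.types.{StringType, StructField, StructType}

        val schema = StructType(Seq(
          // Stored (and rendered) as VARCHAR(64) by SnappyStore.
          StructField("name", StringType, metadata = Utils.varcharMetadata(64)),
          // Rendered as CLOB by SnappyStore.
          StructField("notes", StringType, metadata = Utils.stringMetadata())))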

  83. final def wait(): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  84. final def wait(arg0: Long, arg1: Int): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  85. final def wait(arg0: Long): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  86. def withNewExecutionId[T](df: DataFrame, body: ⇒ T): T

      Wrap a DataFrame action to track all Spark jobs in the body so that we can connect them with an execution.
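
      A sketch (df is assumed to be an existing DataFrame):

        import org.apache.spark.sql.collection.Utils

        // Every Spark job triggered inside the body is tied to one SQL execution,
        // so they appear together in the SQL tab of the Spark UI.
        val rows = Utils.withNewExecutionId(df, {
          df.cache()
          df.collect()
        })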
