package functions
Type Members
- trait AggregateFunctions extends AnyRef
- trait CatalystExplodableCollection[V[_]] extends AnyRef
- trait CatalystSizableCollection[V[_]] extends AnyRef
- trait CatalystSortableCollection[V[_]] extends AnyRef
- case class FramelessUdf[T, R](function: AnyRef, encoders: Seq[TypedEncoder[_]], children: Seq[Expression], rencoder: TypedEncoder[R]) extends Expression with NonSQLExpression with Product with Serializable
NB: Implementation detail, not intended to be used directly.
Our own implementation of ScalaUDF from Catalyst, compatible with TypedEncoder.
- trait NonAggregateFunctions extends AnyRef
- case class Spark2_4_LambdaVariable(value: String, isNull: String, dataType: DataType, nullable: Boolean = true) extends LeafExpression with NonSQLExpression with Product with Serializable
- trait Udf extends AnyRef
Documentation marked "apache/spark" is thanks to apache/spark Contributors at https://github.com/apache/spark, licensed under Apache v2.0 available at http://www.apache.org/licenses/LICENSE-2.0
- trait UnaryFunctions extends AnyRef
Value Members
- def lit[A, T](value: A)(implicit encoder: TypedEncoder[A]): TypedColumn[T, A]
Creates a frameless.TypedColumn of literal value. If A is to be encoded using an Injection make sure the injection instance is in scope.
apache/spark
- A
the literal value type
- T
the row type
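A minimal sketch of how lit is typically used (the Person case class and sample data are hypothetical, and an implicit SparkSession is assumed to be in scope):

```scala
import frameless.TypedDataset
import frameless.functions.lit

case class Person(name: String, age: Int)

// Assumes an implicit SparkSession; TypedDataset.create derives the encoder.
val ds: TypedDataset[Person] = TypedDataset.create(Seq(Person("Ada", 36)))

// Append a constant column; the literal's TypedEncoder is resolved implicitly,
// and the row type T is inferred from the select context.
val withFlag = ds.select(ds('name), lit(true))
```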
- def litAggr[A, T](value: A)(implicit i0: TypedEncoder[A], i1: Refute[IsValueClass[A]]): TypedAggregate[T, A]
Creates a frameless.TypedAggregate of literal value. If A is to be encoded using an Injection make sure the injection instance is in scope.
apache/spark
- def litValue[A, T, G <: ::[_, HNil], H <: ::[_ <: FieldType[_ <: Symbol, _], HNil], K <: Symbol, V, KS <: ::[_ <: Symbol, HNil], VS <: HList](value: Option[A])(implicit arg0: IsValueClass[A], i0: Aux[A, G], i1: Aux[G, H], i2: Aux[H, _ <: FieldType[K, V], HNil], i3: Aux[H, KS], i4: Aux[H, VS], i5: Aux[KS, K, HNil], i6: Aux[VS, V, HNil], i7: TypedEncoder[V], i8: ClassTag[A]): TypedColumn[T, Option[A]]
Creates a frameless.TypedColumn of literal value for an optional Value class A.
- A
the value class
- T
the row type
- def litValue[A, T, G <: ::[_, HNil], H <: ::[_ <: FieldType[_ <: Symbol, _], HNil], K <: Symbol, V, KS <: ::[_ <: Symbol, HNil], VS <: HList](value: A)(implicit arg0: IsValueClass[A], i0: Aux[A, G], i1: Aux[G, H], i2: Aux[H, _ <: FieldType[K, V], HNil], i3: Aux[H, KS], i4: Aux[H, VS], i5: Aux[KS, K, HNil], i6: Aux[VS, V, HNil], i7: TypedEncoder[V], i8: ClassTag[A]): TypedColumn[T, A]
Creates a frameless.TypedColumn of literal value for a Value class A.
- A
the value class
- T
the row type
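A sketch of litValue with a Value class (UserId and Row are hypothetical; the IsValueClass evidence and the shapeless machinery in the signature are resolved implicitly):

```scala
import frameless.TypedDataset
import frameless.functions.litValue

// A value class wrapping a single encodable field.
final case class UserId(value: Int) extends AnyVal

case class Row(name: String)

// Assumes an implicit SparkSession is in scope.
val ds: TypedDataset[Row] = TypedDataset.create(Seq(Row("x")))

// litValue unwraps the value class and encodes the underlying Int;
// the Option overload works the same way for Option[UserId] literals.
val withId = ds.select(ds('name), litValue(UserId(7)))
```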
- def size[T, A, B](column: TypedColumn[T, Map[A, B]]): TypedColumn[T, Int]
Returns the length of a Map.
apache/spark
- Definition Classes
- UnaryFunctions
- def size[T, A, V[_]](column: TypedColumn[T, V[A]])(implicit arg0: CatalystSizableCollection[V]): TypedColumn[T, Int]
Returns the length of an array.
apache/spark
- Definition Classes
- UnaryFunctions
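A sketch of the collection overload of size (the Event case class is hypothetical; Vector has a CatalystSizableCollection instance, which is what the implicit parameter requires):

```scala
import frameless.TypedDataset
import frameless.functions.size

case class Event(tags: Vector[String])

// Assumes an implicit SparkSession is in scope.
val ds: TypedDataset[Event] = TypedDataset.create(Seq(Event(Vector("b", "a"))))

// size resolves CatalystSizableCollection[Vector] implicitly and
// yields a TypedColumn[Event, Int] with each row's element count.
val lengths = ds.select(size(ds('tags)))
```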
- def sortAscending[T, A, V[_]](column: TypedColumn[T, V[A]])(implicit arg0: Ordering[A], arg1: CatalystSortableCollection[V]): TypedColumn[T, V[A]]
Sorts the input array for the given column in ascending order, according to the natural ordering of the array elements.
apache/spark
- Definition Classes
- UnaryFunctions
- def sortDescending[T, A, V[_]](column: TypedColumn[T, V[A]])(implicit arg0: Ordering[A], arg1: CatalystSortableCollection[V]): TypedColumn[T, V[A]]
Sorts the input array for the given column in descending order, according to the natural ordering of the array elements.
apache/spark
- Definition Classes
- UnaryFunctions
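A sketch of the sorting functions (same hypothetical Event shape as above; both require an Ordering for the element type plus a CatalystSortableCollection instance for the container):

```scala
import frameless.TypedDataset
import frameless.functions.{ sortAscending, sortDescending }

case class Event(tags: Vector[String])

// Assumes an implicit SparkSession is in scope.
val ds: TypedDataset[Event] = TypedDataset.create(Seq(Event(Vector("b", "a"))))

// Ordering[String] and CatalystSortableCollection[Vector] are found implicitly;
// the result column keeps the original container type V[A].
val asc  = ds.select(sortAscending(ds('tags)))
val desc = ds.select(sortDescending(ds('tags)))
```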
- def udf[T, A1, A2, A3, A4, A5, R](f: (A1, A2, A3, A4, A5) => R)(implicit arg0: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3], TypedColumn[T, A4], TypedColumn[T, A5]) => TypedColumn[T, R]
Defines a user-defined function (UDF) of 5 arguments. The data types are automatically inferred based on the function's signature.
apache/spark
- Definition Classes
- Udf
- def udf[T, A1, A2, A3, A4, R](f: (A1, A2, A3, A4) => R)(implicit arg0: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3], TypedColumn[T, A4]) => TypedColumn[T, R]
Defines a user-defined function (UDF) of 4 arguments. The data types are automatically inferred based on the function's signature.
apache/spark
- Definition Classes
- Udf
- def udf[T, A1, A2, A3, R](f: (A1, A2, A3) => R)(implicit arg0: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2], TypedColumn[T, A3]) => TypedColumn[T, R]
Defines a user-defined function (UDF) of 3 arguments. The data types are automatically inferred based on the function's signature.
apache/spark
- Definition Classes
- Udf
- def udf[T, A1, A2, R](f: (A1, A2) => R)(implicit arg0: TypedEncoder[R]): (TypedColumn[T, A1], TypedColumn[T, A2]) => TypedColumn[T, R]
Defines a user-defined function (UDF) of 2 arguments. The data types are automatically inferred based on the function's signature.
apache/spark
- Definition Classes
- Udf
- def udf[T, A, R](f: (A) => R)(implicit arg0: TypedEncoder[R]): (TypedColumn[T, A]) => TypedColumn[T, R]
Defines a user-defined function (UDF) of 1 argument. The data types are automatically inferred based on the function's signature.
apache/spark
- Definition Classes
- Udf
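A sketch of the single-argument udf overload (City and the conversion function are hypothetical; the higher-arity overloads follow the same pattern with more column arguments):

```scala
import frameless.TypedDataset
import frameless.functions.udf

case class City(name: String, population: Long)

// Assumes an implicit SparkSession is in scope.
val ds: TypedDataset[City] = TypedDataset.create(Seq(City("Oslo", 700000L)))

// Lift an ordinary Scala function into a typed UDF; T is the row type,
// then the argument and result types, matching udf[T, A, R].
val inMillions = udf[City, Long, Double](_ / 1e6)

val result = ds.select(ds('name), inMillions(ds('population)))
```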
- object CatalystExplodableCollection
- object CatalystSizableCollection
- object CatalystSortableCollection
- object FramelessUdf extends Serializable
- object aggregate extends AggregateFunctions
- object nonAggregate extends NonAggregateFunctions
Deprecated Value Members
- def explode[T, A, V[_]](column: TypedColumn[T, V[A]])(implicit arg0: TypedEncoder[A], arg1: CatalystExplodableCollection[V]): TypedColumn[T, A]
Creates a new row for each element in the given collection. The column types eligible for this operation are constrained by CatalystExplodableCollection.
apache/spark
- Definition Classes
- UnaryFunctions
- Annotations
- @deprecated
- Deprecated
(Since version 0.6.2) Use explode() from the TypedDataset instead. This method will result in a runtime error if applied to two columns in the same select statement.
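A sketch of the recommended replacement, TypedDataset's own explode, under the assumption that the method is called with a column symbol as shown (the Post case class is hypothetical):

```scala
import frameless.TypedDataset

case class Post(id: Long, tags: Vector[String])

// Assumes an implicit SparkSession is in scope.
val ds: TypedDataset[Post] = TypedDataset.create(Seq(Post(1L, Vector("a", "b"))))

// TypedDataset#explode produces one output row per element of the
// collection column, avoiding the runtime-error pitfall of the
// deprecated functions.explode when used twice in one select.
val exploded = ds.explode('tags)
```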