Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
1.6.0
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
1.6.0
Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.
1.6.0
Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
1.6.0
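A minimal sketch of the agg overloads, assuming a SparkSession with spark.implicits._ in scope; the input Dataset and values are illustrative:

```scala
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.expressions.scalalang.typed

// Hypothetical input of (word, count) pairs.
val ds: Dataset[(String, Int)] = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()

// One tuple per unique key: (key, number of elements, sum of the values).
val aggregated: Dataset[(String, Long, Double)] =
  ds.groupByKey(_._1)
    .agg(typed.count[(String, Int)](_._2), typed.sum[(String, Int)](_._2))
```

The overloads that take two, three, or four aggregations simply widen the result tuple by one column per aggregation.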
(Scala-specific) Applies the given function to each cogrouped data. For each unique group, the function will be passed the grouping key and two iterators containing all elements in the group from this Dataset and the other Dataset. The function can return an iterator containing elements of an arbitrary type, which will be returned as a new Dataset.
1.6.0
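A sketch of cogroup as a join-like combination of two grouped Datasets; the Click and Purchase case classes and field names are hypothetical, and spark.implicits._ is assumed in scope:

```scala
import org.apache.spark.sql.Dataset

// Hypothetical element types.
case class Click(user: String, url: String)
case class Purchase(user: String, amount: Double)

def activity(clicks: Dataset[Click],
             purchases: Dataset[Purchase]): Dataset[(String, Int, Double)] =
  clicks.groupByKey(_.user).cogroup(purchases.groupByKey(_.user)) {
    (user, cs, ps) =>
      // One output row per user seen in either Dataset:
      // the user, their click count, and their total spend.
      Iterator((user, cs.size, ps.map(_.amount).sum))
  }
```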
Returns a Dataset that contains a tuple with each key and the number of items present for that key.
1.6.0
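For example, with spark.implicits._ in scope and illustrative data:

```scala
// Grouping ("a", 1), ("a", 2), ("b", 3) by the first field yields
// one (key, count) tuple per key: ("a", 2) and ("b", 1).
val counts: Dataset[(String, Long)] =
  Seq(("a", 1), ("a", 2), ("b", 3)).toDS().groupByKey(_._1).count()
```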
(Scala-specific) Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all of the elements in the group. The function can return an iterator containing elements of an arbitrary type which will be returned as a new Dataset.
This function does not support partial aggregation, and as a result requires shuffling all the data in the Dataset. If an application intends to perform an aggregation over each key, it is best to use the reduce function or an org.apache.spark.sql.expressions.Aggregator.
Internally, the implementation will spill to disk if any given group is too large to fit into memory. However, users must take care to avoid materializing the whole iterator for a group (for example, by calling toList) unless they are sure that this is possible given the memory constraints of their cluster.
1.6.0
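A sketch of flatMapGroups emitting zero-or-more output elements per group, assuming spark.implicits._ in scope and an illustrative Dataset:

```scala
val ds: Dataset[(String, Int)] = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()

// Each group contributes one output string per element,
// tagged with the group key; the iterator is consumed lazily.
val tagged: Dataset[String] =
  ds.groupByKey(_._1).flatMapGroups { (key, values) =>
    values.map(v => s"$key=${v._2}")
  }
```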
::Experimental:: (Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state. The result Dataset will represent the objects returned by the function. For a static batch Dataset, the function will be invoked once per group. For a streaming Dataset, the function will be invoked for each group repeatedly in every trigger, and updates to each group's state will be saved across invocations. See org.apache.spark.sql.streaming.GroupState for more details.
S: The type of the user-defined state. Must be encodable to Spark SQL types.
U: The type of the output objects. Must be encodable to Spark SQL types.
outputMode: The output mode of the function.
timeoutConf: Timeout configuration for groups that do not receive data for a while.
func: Function to be called on every group.
See Encoder for more details on what types are encodable to Spark SQL.
2.2.0
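A sketch of a per-key running count over a stream; the Event case class and the eventStream Dataset are hypothetical:

```scala
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

// Hypothetical streaming element type.
case class Event(key: String, value: Long)

// Zero-or-more output rows per group; the Long state is saved across triggers.
def updateCount(key: String, events: Iterator[Event],
                state: GroupState[Long]): Iterator[(String, Long)] = {
  val count = state.getOption.getOrElse(0L) + events.size
  state.update(count)
  Iterator((key, count))
}

val counts = eventStream // a streaming Dataset[Event], assumed defined elsewhere
  .groupByKey(_.key)
  .flatMapGroupsWithState(OutputMode.Update(), GroupStateTimeout.NoTimeout())(updateCount)
```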
Returns a new KeyValueGroupedDataset where the type of the key has been mapped to the specified type. The mapping of key columns to the new type follows the same rules as the as operation on Dataset.
1.6.0
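For instance, an upcast of the key type, which Dataset.as permits; the Dataset here is illustrative:

```scala
val ds: Dataset[(Int, String)] = Seq((1, "a"), (2, "b")).toDS()

val byIntKey = ds.groupByKey(_._1)    // KeyValueGroupedDataset[Int, (Int, String)]
val byLongKey = byIntKey.keyAs[Long]  // KeyValueGroupedDataset[Long, (Int, String)]
```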
Returns a Dataset that contains each unique key. This is equivalent to mapping over the Dataset to extract the keys and then running a distinct operation on them.
1.6.0
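For example, with illustrative data:

```scala
// Yields one row per distinct key: "a" and "b".
val distinctKeys: Dataset[String] =
  Seq(("a", 1), ("a", 2), ("b", 3)).toDS().groupByKey(_._1).keys
```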
(Scala-specific) Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all of the elements in the group. The function can return an element of arbitrary type which will be returned as a new Dataset.
This function does not support partial aggregation, and as a result requires shuffling all the data in the Dataset. If an application intends to perform an aggregation over each key, it is best to use the reduce function or an org.apache.spark.sql.expressions.Aggregator.
Internally, the implementation will spill to disk if any given group is too large to fit into memory. However, users must take care to avoid materializing the whole iterator for a group (for example, by calling toList) unless they are sure that this is possible given the memory constraints of their cluster.
1.6.0
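Unlike flatMapGroups, mapGroups produces exactly one output element per group. A sketch with an illustrative Dataset:

```scala
val ds: Dataset[(String, Int)] = Seq(("a", 1), ("a", 2), ("b", 3)).toDS()

// One output row per group: the key paired with its largest value.
val maxima: Dataset[(String, Int)] =
  ds.groupByKey(_._1).mapGroups { (key, values) =>
    (key, values.map(_._2).max)
  }
```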
::Experimental:: (Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state. The result Dataset will represent the objects returned by the function. For a static batch Dataset, the function will be invoked once per group. For a streaming Dataset, the function will be invoked for each group repeatedly in every trigger, and updates to each group's state will be saved across invocations. See org.apache.spark.sql.streaming.GroupState for more details.
S: The type of the user-defined state. Must be encodable to Spark SQL types.
U: The type of the output objects. Must be encodable to Spark SQL types.
timeoutConf: Timeout configuration for groups that do not receive data for a while.
func: Function to be called on every group.
See Encoder for more details on what types are encodable to Spark SQL.
2.2.0
::Experimental:: (Scala-specific) Applies the given function to each group of data, while maintaining a user-defined per-group state. The result Dataset will represent the objects returned by the function. For a static batch Dataset, the function will be invoked once per group. For a streaming Dataset, the function will be invoked for each group repeatedly in every trigger, and updates to each group's state will be saved across invocations. See org.apache.spark.sql.streaming.GroupState for more details.
S: The type of the user-defined state. Must be encodable to Spark SQL types.
U: The type of the output objects. Must be encodable to Spark SQL types.
func: Function to be called on every group.
See Encoder for more details on what types are encodable to Spark SQL.
2.2.0
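A sketch of the stateful variant that returns exactly one output object per group per invocation; the Event case class and the eventStream Dataset are hypothetical:

```scala
import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout}

// Hypothetical streaming element type.
case class Event(key: String, value: Long)

// One output object per group per invocation; state survives across triggers.
def runningCount(key: String, events: Iterator[Event],
                 state: GroupState[Long]): (String, Long) = {
  val count = state.getOption.getOrElse(0L) + events.size
  state.update(count)
  (key, count)
}

val counts = eventStream // a streaming Dataset[Event], assumed defined elsewhere
  .groupByKey(_.key)
  .mapGroupsWithState(GroupStateTimeout.NoTimeout())(runningCount)
```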
Returns a new KeyValueGroupedDataset where the given function func has been applied to the data. The grouping key is unchanged by this.
// Create values grouped by key from a Dataset[(K, V)]
ds.groupByKey(_._1).mapValues(_._2) // Scala
2.1.0
(Scala-specific) Reduces the elements of each group of data using the specified binary function. The given function must be commutative and associative or the result may be non-deterministic.
1.6.0
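A sketch of reduceGroups, with illustrative data; note the result pairs each key with a fully reduced element of the original type:

```scala
// The merge function must be commutative and associative.
val totals: Dataset[(String, (String, Int))] =
  Seq(("a", 1), ("a", 2), ("b", 3)).toDS()
    .groupByKey(_._1)
    .reduceGroups((a, b) => (a._1, a._2 + b._2))
```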
Applies a transformation to the underlying KeyValueGroupedDataset.
Unpacks the underlying KeyValueGroupedDataset into a DataFrame; this is used for transformations that can fail due to an AnalysisException.