Aggregate the values of each key with an Aggregator. First, each value V is mapped to A, then we reduce with a Semigroup of A, and finally we present the results as U. This could be more powerful and better optimized in some cases.
Aggregate the values of each key, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of the values in this SCollection, V. Thus, we need one operation for merging a V into a U and one operation for merging two U's. To avoid memory allocation, both of these functions are allowed to modify and return their first argument instead of creating a new U.
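To make the contract concrete, here is a minimal sketch of the semantics on plain Scala collections (the def is illustrative, not Scio's API; a real SCollection evaluates this distributed and per window). Two simulated "partitions" show where seqOp and combOp each apply:

```scala
// Sketch of aggregateByKey semantics on an in-memory Seq of pairs.
// zeroValue: neutral U; seqOp merges a V into a U; combOp merges two U's.
def aggregateByKey[K, V, U](pairs: Seq[(K, V)], zeroValue: U)(
  seqOp: (U, V) => U,
  combOp: (U, U) => U
): Map[K, U] = {
  // Simulate two "partitions", each folded from the zero value with seqOp...
  val (p1, p2) = pairs.splitAt(pairs.size / 2)
  def fold(p: Seq[(K, V)]): Map[K, U] =
    p.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).foldLeft(zeroValue)(seqOp) }
  val m1 = fold(p1)
  val m2 = fold(p2)
  // ...then per-key partial results are merged with combOp.
  (m1.keySet ++ m2.keySet).map { k =>
    (m1.get(k), m2.get(k)) match {
      case (Some(a), Some(b)) => k -> combOp(a, b)
      case (Some(a), None)    => k -> a
      case (None, Some(b))    => k -> b
      case _                  => k -> zeroValue // unreachable: k is in m1 or m2
    }
  }.toMap
}

// Example: U differs from V — collect Int values into a Set per key.
val data = Seq("a" -> 1, "a" -> 2, "b" -> 3)
val result = aggregateByKey(data, Set.empty[Int])(_ + _, _ ++ _)
```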
For each key, compute the values' data distribution using approximate N-tiles.
a new SCollection whose values are Iterables of the approximate N-tiles of the elements.
Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, value], to be used with SCollection.withSideInputs. It is required that each key of the input be associated with a single value.
Convert this SCollection to a SideInput, mapping key-value pairs of each window to a Map[key, Iterable[value]], to be used with SCollection.withSideInputs. It is not required that the keys in the input collection be unique.
For each key k in this or that1 or that2 or that3, return a resulting SCollection that contains a tuple with the list of values for that key in this, that1, that2 and that3.
For each key k in this or that1 or that2, return a resulting SCollection that contains a tuple with the list of values for that key in this, that1 and that2.
For each key k in this or that, return a resulting SCollection that contains a tuple with the list of values for that key in this as well as that.
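The two-input case can be sketched on in-memory pairs (illustrative names, not Scio's implementation; every key present on either side appears once, paired with the Iterables of its values):

```scala
// Sketch of two-collection cogroup semantics: for each key on either side,
// pair up the Iterables of values; a side lacking the key contributes empty.
def cogroup[K, V, W](
  lhs: Seq[(K, V)],
  rhs: Seq[(K, W)]
): Map[K, (Iterable[V], Iterable[W])] = {
  val l = lhs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  val r = rhs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  (l.keySet ++ r.keySet).map { k =>
    k -> (l.getOrElse(k, Nil): Iterable[V], r.getOrElse(k, Nil): Iterable[W])
  }.toMap
}

val grouped = cogroup(Seq("a" -> 1, "a" -> 2), Seq("a" -> "x", "b" -> "y"))
```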
Generic function to combine the elements for each key using a custom set of aggregation functions. Turns an SCollection[(K, V)] into a result of type SCollection[(K, C)], for a "combined type" C. Note that V and C can be different -- for example, one might group an SCollection of type (Int, Int) into one of type (Int, Seq[Int]). Users provide three functions:
- createCombiner, which turns a V into a C (e.g., creates a one-element list)
- mergeValue, to merge a V into a C (e.g., adds it to the end of a list)
- mergeCombiners, to combine two C's into a single one.
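How the three functions fit together can be sketched on an in-memory Seq (illustrative only; in a real distributed run, mergeCombiners would additionally merge per-worker partial C's):

```scala
// Sketch of combineByKey: build a C per key from its V's.
def combineByKey[K, V, C](pairs: Seq[(K, V)])(
  createCombiner: V => C
)(mergeValue: (C, V) => C)(mergeCombiners: (C, C) => C): Map[K, C] =
  pairs.foldLeft(Map.empty[K, C]) { case (acc, (k, v)) =>
    acc.updated(k, acc.get(k) match {
      case Some(c) => mergeValue(c, v)   // key seen before: fold V into C
      case None    => createCombiner(v)  // first V for this key
    })
  }

// Example: group Int values into a List per key (V = Int, C = List[Int]).
val grouped =
  combineByKey(Seq(1 -> 10, 1 -> 20, 2 -> 30))(List(_))((c, v) => c :+ v)(_ ++ _)
```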
Count approximate number of distinct values for each key in the SCollection.
the maximum estimation error, which should be in the range [0.01, 0.5].
Count approximate number of distinct values for each key in the SCollection.
the number of entries in the statistical sample; the higher this number, the more accurate the estimate will be; should be >= 16.
Count the number of elements for each key.
a new SCollection of (key, count) pairs
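On plain Scala collections the result looks like this (a sketch, not Scio's implementation):

```scala
// Sketch of countByKey: one (key, count) pair per distinct key.
def countByKey[K, V](pairs: Seq[(K, V)]): Map[K, Long] =
  pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.size.toLong }
```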
Pass each value in the key-value pair SCollection through a flatMap function without changing the keys.
Fold by key with Monoid, which defines the associative function and "zero value" for V. This could be more powerful and better optimized in some cases.
Merge the values for each key using an associative function and a neutral "zero value" which may be added to the result an arbitrary number of times, and must not change the result (e.g., Nil for list concatenation, 0 for addition, or 1 for multiplication).
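A per-key sketch on in-memory pairs (illustrative; the zero value must be neutral with respect to the function, exactly as described above):

```scala
// Sketch of foldByKey: per key, fold the values with an associative op,
// starting from a neutral zero value (0 for +, Nil for ++, 1 for *).
def foldByKey[K, V](pairs: Seq[(K, V)], zero: V)(op: (V, V) => V): Map[K, V] =
  pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).foldLeft(zero)(op) }

val sums = foldByKey(Seq("a" -> 1, "a" -> 2, "b" -> 5), 0)(_ + _)
```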
Perform a full outer join of this and that. For each element (k, v) in this, the resulting SCollection will either contain all pairs (k, (Some(v), Some(w))) for w in that, or the pair (k, (Some(v), None)) if no elements in that have key k. Similarly, for each element (k, w) in that, the resulting SCollection will either contain all pairs (k, (Some(v), Some(w))) for v in this, or the pair (k, (None, Some(w))) if no elements in this have key k.
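The full outer join contract, sketched on in-memory pairs (illustrative names, not Scio's implementation):

```scala
// Sketch of fullOuterJoin: every key from either side appears, with None
// standing in for the side that lacks it.
def fullOuterJoin[K, V, W](
  lhs: Seq[(K, V)],
  rhs: Seq[(K, W)]
): Seq[(K, (Option[V], Option[W]))] = {
  val l = lhs.groupBy(_._1)
  val r = rhs.groupBy(_._1)
  (l.keySet ++ r.keySet).toSeq.flatMap { k =>
    (l.get(k), r.get(k)) match {
      case (Some(vs), Some(ws)) =>
        for ((_, v) <- vs; (_, w) <- ws) yield k -> (Option(v), Option(w))
      case (Some(vs), None) => vs.map { case (_, v) => k -> (Option(v), Option.empty[W]) }
      case (None, Some(ws)) => ws.map { case (_, w) => k -> (Option.empty[V], Option(w)) }
      case (None, None)     => Nil
    }
  }
}

val fo = fullOuterJoin(Seq("a" -> 1, "b" -> 2), Seq("b" -> "x", "c" -> "y")).toSet
```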
Group the values for each key in the SCollection into a single sequence. The ordering of elements within each group is not guaranteed, and may even differ each time the resulting SCollection is evaluated.
Note: This operation may be very expensive. If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using PairSCollectionFunctions.aggregateByKey or PairSCollectionFunctions.reduceByKey will provide much better performance.
Note: As currently implemented, groupByKey must be able to hold all the key-value pairs for any key in memory. If a key has too many values, it can result in an OutOfMemoryError.
Alias for cogroup.
Alias for cogroup.
Alias for cogroup.
Perform an inner join by replicating that to all workers. The right side should be tiny and fit in memory.
Perform a left outer join by replicating that to all workers. The right side should be tiny and fit in memory.
Return an SCollection with the pairs from this whose keys are in that.
Return an SCollection containing all pairs of elements with matching keys in this and that. Each pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in this and (k, v2) is in that.
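The inner join semantics on in-memory pairs (a sketch; one output pair per matching combination of values):

```scala
// Sketch of inner join: emit (k, (v, w)) for every v on the left and w on
// the right that share key k; unmatched keys are dropped.
def join[K, V, W](lhs: Seq[(K, V)], rhs: Seq[(K, W)]): Seq[(K, (V, W))] = {
  val r = rhs.groupBy(_._1)
  lhs.flatMap { case (k, v) =>
    r.getOrElse(k, Nil).map { case (_, w) => k -> (v, w) }
  }
}

val joined = join(Seq(1 -> "a", 2 -> "b"), Seq(1 -> "x", 1 -> "y")).toSet
```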
Return an SCollection with the keys of each tuple.
Perform a left outer join of this and that. For each element (k, v) in this, the resulting SCollection will either contain all pairs (k, (v, Some(w))) for w in that, or the pair (k, (v, None)) if no elements in that have key k.
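A sketch of the left outer join contract on in-memory pairs (illustrative, not Scio's implementation):

```scala
// Sketch of leftOuterJoin: every left element survives; the right value is
// Some(w) per match, or None when the key is absent on the right.
def leftOuterJoin[K, V, W](
  lhs: Seq[(K, V)],
  rhs: Seq[(K, W)]
): Seq[(K, (V, Option[W]))] = {
  val r = rhs.groupBy(_._1)
  lhs.flatMap { case (k, v) =>
    r.get(k) match {
      case Some(ws) => ws.map { case (_, w) => k -> (v, Option(w)) }
      case None     => Seq(k -> (v, Option.empty[W]))
    }
  }
}

val lo = leftOuterJoin(Seq("a" -> 1, "b" -> 2), Seq("a" -> "x")).toSet
```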
Pass each value in the key-value pair SCollection through a map function without changing the keys.
Return the max of values for each key as defined by the implicit Ordering[T].
a new SCollection of (key, maximum value) pairs
Return the min of values for each key as defined by the implicit Ordering[T].
a new SCollection of (key, minimum value) pairs
Merge the values for each key using an associative reduce function. This will also perform the merging locally on each mapper before sending results to a reducer, similarly to a "combiner" in MapReduce.
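Per key, this amounts to a reduce over the values (a sketch on in-memory pairs; associativity is what makes local combiner-style pre-merging valid):

```scala
// Sketch of reduceByKey: per key, combine values with an associative op.
// Because op is associative, workers can pre-merge partial results locally.
def reduceByKey[K, V](pairs: Seq[(K, V)])(op: (V, V) => V): Map[K, V] =
  pairs.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2).reduce(op) }
```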
Perform a right outer join of this and that. For each element (k, w) in that, the resulting SCollection will either contain all pairs (k, (Some(v), w)) for v in this, or the pair (k, (None, w)) if no elements in this have key k.
Return a subset of this SCollection sampled by key (via stratified sampling).
Create a sample of this SCollection using variable sampling rates for different keys as specified by fractions, a key to sampling rate map, via simple random sampling with one pass over the SCollection, to produce a sample of size that's approximately equal to the sum of math.ceil(numItems * samplingRate) over all key values.
whether to sample with or without replacement
map of specific keys to sampling rates
SCollection containing the sampled subset
Return a sampled subset of values for each key of this SCollection.
a new SCollection of (key, sampled values) pairs
N-to-1 skew-proof flavor of PairSCollectionFunctions.join().
Perform a skewed join where some keys on the left hand side may be hot, i.e. appear more than hotKeyThreshold times. The frequency of a key is estimated with 1 - delta probability, and the estimate is within eps * N of the true frequency: true frequency <= estimate <= true frequency + eps * N, where N is the total size of the left hand side stream so far.
keys with more than hotKeyThreshold values will be considered hot. Some runners have inefficient GroupByKey implementations for groups with more than 10K values, so it is recommended to set hotKeyThreshold below 10K; keep the upper estimation error in mind.
CMS of the left hand side keys; see com.twitter.algebird.CMSMonoid.
// Implicits that enable CMS-hashing
import com.twitter.algebird.CMSHasherImplicits._
val keyAggregator = CMS.aggregator[K](eps, delta, seed)
val hotKeyCMS = self.keys.aggregate(keyAggregator)
val p = logs.skewedJoin(logMetadata, hotKeyThreshold = 8500, cms = hotKeyCMS)
Read more about CMS -> com.twitter.algebird.CMSMonoid
Make sure to import com.twitter.algebird.CMSHasherImplicits before using this join
N-to-1 skew-proof flavor of PairSCollectionFunctions.join().
Perform a skewed join where some keys on the left hand side may be hot, i.e. appear more than hotKeyThreshold times. The frequency of a key is estimated with 1 - delta probability, and the estimate is within eps * N of the true frequency: true frequency <= estimate <= true frequency + eps * N, where N is the total size of the left hand side stream so far.
keys with more than hotKeyThreshold values will be considered hot. Some runners have inefficient GroupByKey implementations for groups with more than 10K values, so it is recommended to set hotKeyThreshold below 10K; keep the upper estimation error in mind.
One-sided error bound on the error of each point query, i.e. frequency estimate. Must lie in (0, 1).
A seed to initialize the random number generator used to create the pairwise independent hash functions.
A bound on the probability that a query estimate does not lie within some small interval (an interval that depends on eps) around the truth. Must lie in (0, 1).
left side sample fraction. Default is 1.0 (no sampling).
whether to use sampling with replacement, see SCollection.sample()
// Implicits that enable CMS-hashing
import com.twitter.algebird.CMSHasherImplicits._
val p = logs.skewedJoin(logMetadata, hotKeyThreshold = 8500, eps = 0.0005, seed = 1)
Read more about CMS -> com.twitter.algebird.CMSMonoid
Make sure to import com.twitter.algebird.CMSHasherImplicits before using this join
Return an SCollection with the pairs from this whose keys are not in that.
Reduce by key with Semigroup. This could be more powerful and better optimized in some cases.
Swap the keys with the values.
Return the top k (largest) values for each key from this SCollection as defined by the specified implicit Ordering[T].
a new SCollection of (key, top k) pairs
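Per key, this is a descending sort-and-take under the implicit Ordering (an in-memory sketch, not Scio's implementation):

```scala
// Sketch of topByKey: the num largest values per key by the implicit Ordering.
def topByKey[K, V: Ordering](pairs: Seq[(K, V)], num: Int): Map[K, Seq[V]] =
  pairs.groupBy(_._1).map { case (k, kvs) =>
    k -> kvs.map(_._2).sorted(Ordering[V].reverse).take(num)
  }
```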
Return an SCollection with the values of each tuple.
Convert this SCollection to an SCollectionWithHotKeyFanout that uses an intermediate node to combine "hot" keys partially before performing the full combine.
constant value for every key
Convert this SCollection to an SCollectionWithHotKeyFanout that uses an intermediate node to combine "hot" keys partially before performing the full combine.
a function from keys to an integer N, where the key will be spread among N intermediate nodes for partial combining. If N is less than or equal to 1, this key will not be sent through an intermediate node.
Extra functions available on SCollections of (key, value) pairs through an implicit conversion.