Merge two TypedPipes (no order is guaranteed). This is only realized when a group (or join) is performed.
If any errors happen below this line, but before a groupBy, write to a TypedSink
Aggregate all items in this pipe into a single ValuePipe
Aggregators are composable reductions that allow you to glue together several reductions and process them in one pass.
Same as groupAll.aggregate.values
Put the items of this pipe into the keys, with Unit as the value, in a Group. In some sense, this is the dual of groupAll.
Provide the internal implementation to get from a typed pipe to a cascading Pipe
Filter and map. See scala.collection.List.collect.
collect { case Some(x) => fn(x) }
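A sketch of these semantics using plain Scala collections, with List standing in for TypedPipe (the names `opts` and `fn` are illustrative):

```scala
// collect is filter + map in one pass: the partial function keeps
// only the matching elements and transforms them.
val opts: List[Option[Int]] = List(Some(1), None, Some(3))
val fn: Int => Int = _ * 10

val collected = opts.collect { case Some(x) => fn(x) }
// collected == List(10, 30): the None is dropped, the rest are mapped
```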
Implements a cross product. The right side should be tiny. This gives the same results as: for { l <- list1; l2 <- list2 } yield (l, l2)
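The equivalence with the for-comprehension above, sketched with plain Scala Lists standing in for the pipes:

```scala
// Every element of the left side is paired with every element of the right.
val list1 = List(1, 2)
val list2 = List("a", "b")

val crossed = for { l <- list1; l2 <- list2 } yield (l, l2)
// crossed == List((1,"a"), (1,"b"), (2,"a"), (2,"b"))
```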
Attach a ValuePipe to each element of this TypedPipe
prints the current pipe to stdout
Returns the set of distinct elements in the TypedPipe
This is the same as: .map((_, ())).group.sum.keys
If you want a distinct while joining, instead of:
a.join(b.distinct.asKeys)
consider manually doing the distinct:
a.join(b.asKeys.sum)
The latter creates 1 map/reduce phase rather than 2
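A sketch of why .map((_, ())).group.sum.keys yields distinct elements, using plain Scala collections as a stand-in (grouping by key collapses duplicates; summing the Unit values is a no-op):

```scala
val pipe = List(3, 1, 3, 2, 1)

// Pair each element with (), group by the element, keep only the keys.
val distinctKeys = pipe.map((_, ())).groupBy(_._1).keys.toSet
// distinctKeys == Set(1, 2, 3)
```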
Returns the set of distinct elements identified by a given lambda extractor in the TypedPipe
Merge two TypedPipes of different types by using Either
Sometimes useful for implementing custom joins with groupBy + mapValueStream when you know that the value/key can fit in memory. Beware.
Keep only items that satisfy this predicate
If T is a (K, V) for some V, then we can use this function to filter. Prefer to use this if your filter only touches the key.
This is here to match the function in KeyedListLike, where it is optimized
Keep only items that don't satisfy the predicate. filterNot is the same as filter with a negated predicate.
common pattern of attaching a value and then filter
recommended style:
filterWithValue(vpu) {
case (t, Some(u)) => op(t, u)
case (t, None) => // if you never expect this:
sys.error("unexpected empty value pipe")
}
This is the fundamental mapper operation. It behaves in a way similar to List.flatMap, which means that each item is fed to the input function, which can return 0, 1, or many outputs (as a TraversableOnce) per input. The returned results will be iterated through once and then flattened into a single TypedPipe which is passed to the next step in the pipeline.
This behavior makes it a powerful operator -- it can be used to filter records (by returning 0 items for a given input), it can be used the way map is used (by returning 1 item per input), it can be used to explode 1 input into many outputs, or even a combination of all of the above at once.
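The three uses described above, sketched with List.flatMap (whose behavior TypedPipe.flatMap mirrors):

```scala
val xs = List(1, 2, 3, 4)

// As a filter: return 0 or 1 outputs per input
val evens = xs.flatMap(x => if (x % 2 == 0) List(x) else Nil)
// As a map: return exactly 1 output per input
val doubled = xs.flatMap(x => List(x * 2))
// As an explode: return many outputs per input
val repeated = xs.flatMap(x => List.fill(x)(x))
```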
Similar to mapValues, but allows returning a collection of outputs for each input value
common pattern of attaching a value and then flatMap
recommended style:
flatMapWithValue(vpu) {
case (t, Some(u)) => op(t, u)
case (t, None) => // if you never expect this:
sys.error("unexpected empty value pipe")
}
flatten an Iterable
Flatten just the values. This is more useful on KeyedListLike, but added here to reduce asymmetry in the APIs.
Force a materialization of this pipe prior to the next operation. This is useful if you filter almost everything before a hashJoin, for instance. This is useful for experts who see some heuristic of the planner causing slower performance.
This is used when you are working with Execution[T] to create loops. You might do this to checkpoint and then flatMap Execution to continue from there. Probably only useful if you need to flatMap it twice to fan out the data into two children jobs.
This writes the current TypedPipe into a temporary file and then opens it after it completes, so that you can continue from that point
If you are going to create two branches or forks, it may be more efficient to call this method first, which will create a node in the cascading graph. Without this, both full branches of the fork will be put into separate cascading pipes, which can, in some cases, be slower.
Ideally the planner would see this
This is the default means of grouping all pairs with the same key. Generally this triggers 1 Map/Reduce transition.
Send all items to a single reducer
Given a key function, add the key, then call .group
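A sketch of the semantics with plain Scala collections (the key function `_.head` is illustrative):

```scala
// groupBy(fn) adds fn(t) as the key for each element t, then groups by it.
val words = List("apple", "ant", "bee")

val byFirstChar = words.groupBy(_.head)
// 'a' -> List("apple", "ant"), 'b' -> List("bee")
```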
Forces a shuffle by randomly assigning each item into one of the partitions.
This is for the case where your mappers take a long time, and it is faster to shuffle them to more reducers and then operate.
You probably want shard if you are just forcing a shuffle.
Group using an explicit Ordering on the key.
These operations look like joins, but they do not force any communication of the current TypedPipe. They are mapping operations where this pipe is streamed through one item at a time.
WARNING: These behave semantically very differently from cogroup. This is because we handle (K, V) tuples on the left as we see them. The iterable on the right is over all elements with a matching key K, and it may be empty if there are no values for this key K.
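A sketch of the map-side join semantics using plain Scala collections, with a Map standing in for the small, replicated right side (the names here are illustrative):

```scala
val left = List(("a", 1), ("b", 2), ("c", 3))   // streamed one item at a time
val small = Map("a" -> 10, "b" -> 20)           // tiny side, held in memory

// inner hashJoin: each left tuple is looked up as it streams by;
// keys missing from the right side are dropped.
val inner = left.collect { case (k, v) if small.contains(k) => (k, (v, small(k))) }

// left hashJoin: every left tuple is kept, with an Option for the right value.
val leftJoined = left.map { case (k, v) => (k, (v, small.get(k))) }
```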
Do an inner-join without shuffling this TypedPipe, but replicating the argument to all tasks
Do a leftJoin without shuffling this TypedPipe, but replicating the argument to all tasks
For each element, do a map-side (hash) left join to look up a value
Just keep the keys, or ._1 (if this type is a Tuple2)
uses hashJoin but attaches None if thatPipe is empty
ValuePipe may be empty, so this attaches it as an Option. cross is the same as: leftCross(p).collect { case (t, Some(v)) => (t, v) }
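A sketch of the equivalence, with an Option standing in for the possibly-empty ValuePipe:

```scala
val ts = List(1, 2, 3)
val value: Option[String] = Some("v")   // the ValuePipe; None models an empty pipe

// leftCross attaches the value as an Option to every element
val leftCrossed = ts.map(t => (t, value))
// cross keeps only the pairs where the value was present
val crossed = leftCrossed.collect { case (t, Some(v)) => (t, v) }
// crossed == List((1,"v"), (2,"v"), (3,"v"))
```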
limit the output to at most count items, if at least count items exist.
If you want to writeThrough to a specific file if it doesn't already exist, and otherwise just read from it going forward, use this.
Transform each element via the function f
Transform only the values (sometimes requires giving the types due to scala type inference)
common pattern of attaching a value and then map
recommended style:
mapWithValue(vpu) {
case (t, Some(u)) => op(t, u)
case (t, None) => // if you never expect this:
sys.error("unexpected empty value pipe")
}
This attaches a function that is called at the end of the map phase on EACH of the tasks that are executing. This is for expert use only. You probably won't ever need it. Try hard to avoid it. Execution also has onComplete that can run when an Execution has completed.
Partitions this into two pipes according to a predicate.
Sometimes what you really want is a groupBy in these cases.
If T <:< U, then this is safe to treat as TypedPipe[U] due to covariance
Sample a fraction (between 0 and 1) of the elements of the pipe, uniformly and independently at random, with a given seed. Does not require a reduce step.
Sample a fraction (between 0 and 1) of the elements of the pipe, uniformly and independently at random. Does not require a reduce step.
Used to force a shuffle into a given size of nodes. Only use this if your mappers are taking far longer than the time to shuffle.
Enables joining when this TypedPipe has some keys with very many values but most keys with very few values. For instance, a graph where some nodes have millions of neighbors, but most have only a few.
We build a (count-min) sketch of each key's frequency, and we use that to shard the heavy keys across many reducers. This increases communication cost in order to reduce the maximum time needed to complete the join.
pipe.sketch(100).join(thatPipe)
will add an extra map/reduce job over a standard join to create the count-min-sketch.
This will generally only be beneficial if you have really heavy skew, where without this you have 1 or 2 reducers taking hours longer than the rest.
Reasonably common shortcut for cases of total associative/commutative reduction. Returns a ValuePipe with only one element if there is any input, otherwise EmptyValue.
Reasonably common shortcut for cases of associative/commutative reduction by key
This does a sum of values WITHOUT triggering a shuffle. The contract is: if followed by a group.sum, the result is the same with or without this present, and it never increases the number of items. BUT due to the cost of caching, it might not be faster if there is poor key locality.
It is only useful for expert tuning, and best avoided unless you are struggling with performance problems. If you are not sure you need this, you probably don't.
The main use case is to reduce the values down before a key expansion such as is often done in a data cube.
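A sketch of the contract using plain Scala collections: pre-summing any local subset of the pairs must not change the result of a later group.sum (the split point below is arbitrary and purely illustrative):

```scala
val pairs = List(("a", 1), ("a", 2), ("b", 3), ("a", 4))

// The eventual group.sum, sketched on Lists.
def groupSum(xs: List[(String, Int)]): Map[String, Int] =
  xs.groupBy(_._1).map { case (k, vs) => (k, vs.map(_._2).sum) }

// A "local" partial sum over a bounded window (here, the first two items),
// as a map-side cache might produce before the shuffle.
val (cached, rest) = pairs.splitAt(2)
val preSummed = groupSum(cached).toList ++ rest

// The contract: group.sum gives the same answer either way.
// groupSum(preSummed) == groupSum(pairs) == Map("a" -> 7, "b" -> 3)
```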
swap the keys with the values
This gives an Execution that when run evaluates the TypedPipe, writes it to disk, and then gives you an Iterable that reads from disk on the submit node each time .iterator is called. Because of how scala Iterables work, mapping/flatMapping/filtering the Iterable forces a read of the entire thing. If you need it to be lazy, call .iterator and use the Iterator inside instead.
Export back to a raw cascading Pipe. Useful for interop with the scalding Fields API or with Cascading code. Avoid this if possible. Prefer to write to a TypedSink.
use a TupleUnpacker to flatten U out into a cascading Tuple
Just keep the values, or ._2 (if this type is a Tuple2)
adds a description to the pipe
Safely write to a TypedSink[T]. If you want to write to a Source (not a Sink) you need to do something like: toPipe(fieldNames).write(dest)
a pipe equivalent to the current pipe.
This is the functionally pure approach to building jobs. Note that you have to call run on the result, or flatMap/zip it into an Execution that is run, for anything to happen here.
If you want to write to a specific location, and then read from that location going forward, use this.