KeyedStream

@Public
class KeyedStream[T, K](javaStream: KeyedStream[T, K]) extends DataStream[T]
class DataStream[T]
class Object
trait Matchable
class Any

Type members

Classlikes

@PublicEvolving
class IntervalJoin[IN1, IN2, KEY](val streamOne: KeyedStream[IN1, KEY], val streamTwo: KeyedStream[IN2, KEY])

Perform a join over a time interval.

Type parameters:
IN1

The type parameter of the elements in the first stream

IN2

The type parameter of the elements in the second stream

@PublicEvolving
class IntervalJoined[IN1, IN2, KEY](firstStream: KeyedStream[IN1, KEY], secondStream: KeyedStream[IN2, KEY], lowerBound: Long, upperBound: Long)

IntervalJoined is a container for two streams that have keys for both sides as well as the time boundaries over which elements should be joined.

Type parameters:
IN1

Input type of elements from the first stream

IN2

Input type of elements from the second stream

KEY

The type of the key

Value members

Concrete methods

@PublicEvolving
def asQueryableState(queryableStateName: String): QueryableStateStream[K, T]

Publishes the keyed stream as a queryable ValueState instance.

Value parameters:
queryableStateName

Name under which to publish the queryable state instance

Returns:

Queryable state instance

@PublicEvolving
def asQueryableState(queryableStateName: String, stateDescriptor: ValueStateDescriptor[T]): QueryableStateStream[K, T]

Publishes the keyed stream as a queryable ValueState instance.

Value parameters:
queryableStateName

Name under which to publish the queryable state instance

stateDescriptor

State descriptor to create state instance from

Returns:

Queryable state instance

@PublicEvolving
def asQueryableState(queryableStateName: String, stateDescriptor: ReducingStateDescriptor[T]): QueryableStateStream[K, T]

Publishes the keyed stream as a queryable ReducingState instance.

Value parameters:
queryableStateName

Name under which to publish the queryable state instance

stateDescriptor

State descriptor to create state instance from

Returns:

Queryable state instance
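
For illustration, a minimal sketch of the variants above, assuming a case class Event(userId: String, value: Long) and a keyed: KeyedStream[Event, String] (the state names and Event type are illustrative, not part of this API):

  import org.apache.flink.api.common.functions.ReduceFunction
  import org.apache.flink.api.common.state.ReducingStateDescriptor
  import org.apache.flink.streaming.api.scala._

  case class Event(userId: String, value: Long)

  def publish(keyed: KeyedStream[Event, String]): Unit = {
    // ValueState variant: publishes each key's latest element under the given name
    keyed.asQueryableState("latest-event")

    // ReducingState variant: publishes a per-key aggregate maintained by a ReduceFunction
    val sumDescriptor = new ReducingStateDescriptor[Event](
      "event-sum",
      new ReduceFunction[Event] {
        override def reduce(a: Event, b: Event): Event = Event(a.userId, a.value + b.value)
      },
      createTypeInformation[Event])
    keyed.asQueryableState("event-sum", sumDescriptor)
  }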

def countWindow(size: Long, slide: Long): WindowedStream[T, K, GlobalWindow]

Windows this KeyedStream into sliding count windows.

Value parameters:
size

The size of the windows in number of elements.

slide

The slide interval in number of elements.

def countWindow(size: Long): WindowedStream[T, K, GlobalWindow]

Windows this KeyedStream into tumbling count windows.

Value parameters:
size

The size of the windows in number of elements.
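
A minimal sketch of both overloads, assuming a keyed: KeyedStream[Event, String] as above:

  // Tumbling count windows: one window per 100 elements per key
  val tumbling = keyed.countWindow(100)

  // Sliding count windows: windows of 100 elements, evaluated every 10 elements
  val sliding = keyed.countWindow(100, 10)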

def filterWithState[S : TypeInformation](fun: (T, Option[S]) => (Boolean, Option[S])): DataStream[T]

Creates a new DataStream that contains only the elements satisfying the given stateful filter predicate. To use state partitioning, a key must be defined using .keyBy(..), in which case an independent state will be kept per key.

Note that the user state object needs to be serializable.

def flatMapWithState[R : TypeInformation, S : TypeInformation](fun: (T, Option[S]) => (IterableOnce[R], Option[S])): DataStream[R]

Creates a new DataStream by applying the given stateful function to every element and flattening the results. To use state partitioning, a key must be defined using .keyBy(..), in which case an independent state will be kept per key.

Note that the user state object needs to be serializable.
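
As a hedged sketch of the stateful filter, here is per-key deduplication that keeps only the first element seen for each key (Event and keyed are assumptions as above; the Boolean state is serializable):

  val firstPerKey: DataStream[Event] =
    keyed.filterWithState[Boolean] { (event, seen: Option[Boolean]) =>
      // pass the element through only if no state exists yet for this key
      (seen.isEmpty, Some(true))
    }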

@Internal
def getKeyType: TypeInformation[K]

Gets the type of the key by which this stream is keyed.

@PublicEvolving
def intervalJoin[OTHER](otherStream: KeyedStream[OTHER, K]): IntervalJoin[T, OTHER, K]

Join elements of this KeyedStream with elements of another KeyedStream over a time interval that can be specified with IntervalJoin.between.

Type parameters:
OTHER

Type parameter of elements in the other stream

Value parameters:
otherStream

The other keyed stream to join this keyed stream with

Returns:

An instance of IntervalJoin with this keyed stream and the other keyed stream
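
A minimal sketch of the interval-join pattern; the Click and Impression types and the bounds are illustrative assumptions:

  import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction
  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.streaming.api.windowing.time.Time
  import org.apache.flink.util.Collector

  case class Click(id: String, ts: Long)
  case class Impression(id: String, ts: Long)

  def joinStreams(clicks: KeyedStream[Click, String],
                  impressions: KeyedStream[Impression, String]): DataStream[String] =
    clicks
      .intervalJoin(impressions)
      .between(Time.milliseconds(-500), Time.milliseconds(500)) // lower and upper bound
      .process(new ProcessJoinFunction[Click, Impression, String] {
        override def processElement(
            left: Click,
            right: Impression,
            ctx: ProcessJoinFunction[Click, Impression, String]#Context,
            out: Collector[String]): Unit =
          out.collect(s"${left.id} joined with ${right.id}")
      })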

def mapWithState[R : TypeInformation, S : TypeInformation](fun: (T, Option[S]) => (R, Option[S])): DataStream[R]

Creates a new DataStream by applying the given stateful function to every element of this DataStream. To use state partitioning, a key must be defined using .keyBy(..), in which case an independent state will be kept per key.

Note that the user state object needs to be serializable.
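
A minimal sketch: emit each element together with a per-key running count held in state (Event and keyed are assumptions as above):

  val counted: DataStream[(Event, Long)] =
    keyed.mapWithState[(Event, Long), Long] { (event, count: Option[Long]) =>
      val next = count.getOrElse(0L) + 1
      ((event, next), Some(next)) // (output element, new state)
    }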

def max(position: Int): DataStream[T]

Applies an aggregation that gives the current maximum of the data stream at the given position by the given key. An independent aggregate is kept per key.

Value parameters:
position

The field position in the data points to maximize. This is applicable to Tuple types, Scala case classes, and primitive types (which are considered as having one field).

def max(field: String): DataStream[T]

Applies an aggregation that gives the current maximum of the data stream at the given field by the given key. An independent aggregate is kept per key.

Value parameters:
field

In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

def maxBy(position: Int): DataStream[T]

Applies an aggregation that gives the current maximum element of the data stream at the given position by the given key. An independent aggregate is kept per key. In case of equality, the first element with the maximal value is returned.

Value parameters:
position

The field position in the data points to maximize. This is applicable to Tuple types, Scala case classes, and primitive types (which are considered as having one field).

def maxBy(field: String): DataStream[T]

Applies an aggregation that gives the current maximum element of the data stream at the given field by the given key. An independent aggregate is kept per key. In case of equality, the first element with the maximal value is returned.

Value parameters:
field

In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

def min(position: Int): DataStream[T]

Applies an aggregation that gives the current minimum of the data stream at the given position by the given key. An independent aggregate is kept per key.

Value parameters:
position

The field position in the data points to minimize. This is applicable to Tuple types, Scala case classes, and primitive types (which are considered as having one field).

def min(field: String): DataStream[T]

Applies an aggregation that gives the current minimum of the data stream at the given field by the given key. An independent aggregate is kept per key.

Value parameters:
field

In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

def minBy(position: Int): DataStream[T]

Applies an aggregation that gives the current minimum element of the data stream at the given position by the given key. An independent aggregate is kept per key. In case of equality, the first element with the minimal value is returned.

Value parameters:
position

The field position in the data points to minimize. This is applicable to Tuple types, Scala case classes, and primitive types (which are considered as having one field).

def minBy(field: String): DataStream[T]

Applies an aggregation that gives the current minimum element of the data stream at the given field by the given key. An independent aggregate is kept per key. In case of equality, the first element with the minimal value is returned.

Value parameters:
field

In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).
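
A sketch contrasting the aggregation families above, assuming an Event(userId, value) stream keyed by userId: max tracks the running maximum of the field but may combine it with other fields from earlier elements, while maxBy returns the complete element that carried the maximum (min and minBy are symmetric):

  val runningMax     = keyed.max("value")   // maximum of "value"; other fields may come from an earlier element
  val elementWithMax = keyed.maxBy("value") // the whole element holding the maximum "value"
  val runningMin     = keyed.min("value")
  val elementWithMin = keyed.minBy("value")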

@PublicEvolving
def process[R : TypeInformation](keyedProcessFunction: KeyedProcessFunction[K, T, R]): DataStream[R]

Applies the given KeyedProcessFunction on the input stream, thereby creating a transformed output stream.

The function will be called for every element in the input streams and can produce zero or more output elements. Contrary to the DataStream#flatMap function, this function can also query the time and set timers. When reacting to the firing of set timers the function can directly emit elements and/or register yet more timers.

Value parameters:
keyedProcessFunction

The KeyedProcessFunction that is called for each element in the stream.
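
A hedged sketch of a KeyedProcessFunction that counts elements per key and emits the count when a processing-time timer fires one minute later (Event, the String key, and all names are illustrative assumptions):

  import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
  import org.apache.flink.configuration.Configuration
  import org.apache.flink.streaming.api.functions.KeyedProcessFunction
  import org.apache.flink.util.Collector

  class CountThenEmit extends KeyedProcessFunction[String, Event, (String, Long)] {
    @transient private var count: ValueState[java.lang.Long] = _

    override def open(parameters: Configuration): Unit =
      count = getRuntimeContext.getState(
        new ValueStateDescriptor[java.lang.Long]("count", classOf[java.lang.Long]))

    override def processElement(
        value: Event,
        ctx: KeyedProcessFunction[String, Event, (String, Long)]#Context,
        out: Collector[(String, Long)]): Unit = {
      val next = Option(count.value()).map(_.longValue()).getOrElse(0L) + 1
      count.update(next)
      // register a timer; onTimer fires for this key one minute from now
      ctx.timerService().registerProcessingTimeTimer(
        ctx.timerService().currentProcessingTime() + 60 * 1000)
    }

    override def onTimer(
        timestamp: Long,
        ctx: KeyedProcessFunction[String, Event, (String, Long)]#OnTimerContext,
        out: Collector[(String, Long)]): Unit =
      out.collect((ctx.getCurrentKey, Option(count.value()).map(_.longValue()).getOrElse(0L)))
  }

  // usage: keyed.process(new CountThenEmit)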

def reduce(reducer: ReduceFunction[T]): DataStream[T]

Creates a new DataStream by reducing the elements of this DataStream using an associative reduce function. An independent aggregate is kept per key.

def reduce(fun: (T, T) => T): DataStream[T]

Creates a new DataStream by reducing the elements of this DataStream using an associative reduce function. An independent aggregate is kept per key.
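
A minimal sketch: a per-key running sum via the functional overload (Event and keyed are assumptions as above):

  val runningSums: DataStream[Event] =
    keyed.reduce((a, b) => Event(a.userId, a.value + b.value))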

def sum(position: Int): DataStream[T]

Applies an aggregation that sums the data stream at the given position by the given key. An independent aggregate is kept per key.

Value parameters:
position

The field position in the data points to sum. This is applicable to Tuple types, Scala case classes, and primitive types (which are considered as having one field).

def sum(field: String): DataStream[T]

Applies an aggregation that sums the data stream at the given field by the given key. An independent aggregate is kept per key.

Value parameters:
field

In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

@PublicEvolving
def window[W <: Window](assigner: WindowAssigner[_ >: T, W]): WindowedStream[T, K, W]

Windows this data stream to a WindowedStream, which evaluates windows over a key grouped stream. Elements are put into windows by a WindowAssigner. The grouping of elements is done both by key and by window.

A org.apache.flink.streaming.api.windowing.triggers.Trigger can be defined to specify when windows are evaluated. However, each WindowAssigner has a default Trigger that is used if no Trigger is specified.

Value parameters:
assigner

The WindowAssigner that assigns elements to windows.

Returns:

The trigger windows data stream.
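
A minimal sketch: tumbling five-second event-time windows per key, aggregated with reduce (Event and keyed are assumptions as above):

  import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
  import org.apache.flink.streaming.api.windowing.time.Time

  val windowedSums: DataStream[Event] =
    keyed
      .window(TumblingEventTimeWindows.of(Time.seconds(5)))
      .reduce((a, b) => Event(a.userId, a.value + b.value))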

Deprecated methods

@deprecated("will be removed in a future version") @PublicEvolving
override def process[R : TypeInformation](processFunction: ProcessFunction[T, R]): DataStream[R]

Applies the given ProcessFunction on the input stream, thereby creating a transformed output stream.

The function will be called for every element in the input streams and can produce zero or more output elements. Contrary to the DataStream#flatMap function, this function can also query the time and set timers. When reacting to the firing of set timers the function can directly emit elements and/or register yet more timers.

Value parameters:
processFunction

The ProcessFunction that is called for each element in the stream.

Deprecated

Use KeyedStream#process

Definition Classes
DataStream

@deprecated
def timeWindow(size: Time): WindowedStream[T, K, TimeWindow]

Windows this KeyedStream into tumbling time windows.

This is a shortcut for either .window(TumblingEventTimeWindows.of(size)) or .window(TumblingProcessingTimeWindows.of(size)) depending on the time characteristic set using StreamExecutionEnvironment.setStreamTimeCharacteristic()

Value parameters:
size

The size of the window.

Deprecated

Please use window with either TumblingEventTimeWindows or TumblingProcessingTimeWindows. For more information, see the deprecation notice on org.apache.flink.streaming.api.TimeCharacteristic.

@deprecated
def timeWindow(size: Time, slide: Time): WindowedStream[T, K, TimeWindow]

Windows this KeyedStream into sliding time windows.

This is a shortcut for either .window(SlidingEventTimeWindows.of(size)) or .window(SlidingProcessingTimeWindows.of(size)) depending on the time characteristic set using StreamExecutionEnvironment.setStreamTimeCharacteristic()

Value parameters:
size

The size of the window.

Deprecated

Please use window with either SlidingEventTimeWindows or SlidingProcessingTimeWindows. For more information, see the deprecation notice on org.apache.flink.streaming.api.TimeCharacteristic.
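
A sketch of the recommended replacements for the two deprecated calls above (window sizes are illustrative):

  import org.apache.flink.streaming.api.windowing.assigners.{SlidingEventTimeWindows, TumblingEventTimeWindows}
  import org.apache.flink.streaming.api.windowing.time.Time

  // instead of keyed.timeWindow(Time.minutes(1))
  val tumbling = keyed.window(TumblingEventTimeWindows.of(Time.minutes(1)))

  // instead of keyed.timeWindow(Time.minutes(1), Time.seconds(10))
  val sliding = keyed.window(SlidingEventTimeWindows.of(Time.minutes(1), Time.seconds(10)))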

Inherited methods

def addSink(fun: T => Unit): DataStreamSink[T]

Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.

Inherited from:
DataStream
def addSink(sinkFunction: SinkFunction[T]): DataStreamSink[T]

Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.

Inherited from:
DataStream
@PublicEvolving
def assignAscendingTimestamps(extractor: T => Long): DataStream[T]

Assigns timestamps to the elements in the data stream and periodically creates watermarks to signal event time progress.

This method is a shortcut for data streams where the element timestamps are known to be monotonically ascending within each parallel stream. In that case, the system can generate watermarks automatically and perfectly by tracking the ascending timestamps.

For cases where the timestamps are not monotonically increasing, use the more general method assignTimestampsAndWatermarks.

Inherited from:
DataStream
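
A minimal sketch, assuming an events: DataStream[Event] whose hypothetical timestampMillis field ascends monotonically within each parallel stream:

  val withTimestamps = events.assignAscendingTimestamps(_.timestampMillis)
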
def assignTimestampsAndWatermarks(watermarkStrategy: WatermarkStrategy[T]): DataStream[T]

Assigns timestamps to the elements in the data stream and generates watermarks to signal event time progress. The given WatermarkStrategy is used to create a TimestampAssigner and a org.apache.flink.api.common.eventtime.WatermarkGenerator.

For each event in the data stream, the TimestampAssigner#extractTimestamp(Object, long) method is called to assign an event timestamp.

For each event in the data stream, the WatermarkGenerator#onEvent(Object, long, WatermarkOutput) method will be called.

Periodically (defined by the ExecutionConfig#getAutoWatermarkInterval()), the WatermarkGenerator#onPeriodicEmit(WatermarkOutput) method will be called.

Common watermark generation patterns can be found as static methods in the org.apache.flink.api.common.eventtime.WatermarkStrategy class.

Inherited from:
DataStream
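
A hedged sketch using a bounded-out-of-orderness strategy with a five-second bound (events and the timestampMillis field are assumptions as above):

  import java.time.Duration
  import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

  val strategy = WatermarkStrategy
    .forBoundedOutOfOrderness[Event](Duration.ofSeconds(5))
    .withTimestampAssigner(new SerializableTimestampAssigner[Event] {
      override def extractTimestamp(e: Event, recordTimestamp: Long): Long = e.timestampMillis
    })

  val withWatermarks = events.assignTimestampsAndWatermarks(strategy)
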
@PublicEvolving
def broadcast(broadcastStateDescriptors: MapStateDescriptor[_, _]*): BroadcastStream[T]

Sets the partitioning of the DataStream so that the output elements are broadcast to every parallel instance of the next operation. In addition, it implicitly creates as many broadcast states as the specified descriptors, which can be used to store the elements of the stream.

Value parameters:
broadcastStateDescriptors

the descriptors of the broadcast states to create.

Returns:

A BroadcastStream which can be used in the DataStream.connect to create a BroadcastConnectedStream for further processing of the elements.

Inherited from:
DataStream
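
A hedged sketch: broadcast a rules stream with one MapStateDescriptor and connect it to this keyed stream (ruleStream and the state name are assumptions):

  import org.apache.flink.api.common.state.MapStateDescriptor
  import org.apache.flink.api.common.typeinfo.BasicTypeInfo

  val rulesDescriptor = new MapStateDescriptor[String, String](
    "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)

  val broadcastRules = ruleStream.broadcast(rulesDescriptor)
  val connected = keyed.connect(broadcastRules) // BroadcastConnectedStream for further processing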

def broadcast: DataStream[T]

Sets the partitioning of the DataStream so that the output tuples are broadcast to every parallel instance of the next component.

Inherited from:
DataStream
def coGroup[T2](otherStream: DataStream[T2]): CoGroupedStreams[T, T2]

Creates a co-group operation. See CoGroupedStreams for an example of how the keys and window can be specified.

Inherited from:
DataStream
@PublicEvolving
def connect[R](broadcastStream: BroadcastStream[R]): BroadcastConnectedStream[T, R]

Creates a new BroadcastConnectedStream by connecting the current DataStream or KeyedStream with a BroadcastStream.

The latter can be created using the broadcast method.

The resulting stream can be further processed using the broadcastConnectedStream.process(myFunction) method, where myFunction can be either a org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction or a org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction depending on the current stream being a KeyedStream or not.

Value parameters:
broadcastStream

The broadcast stream with the broadcast state to be connected with this stream.

Returns:

The BroadcastConnectedStream.

Inherited from:
DataStream
def connect[T2](dataStream: DataStream[T2]): ConnectedStreams[T, T2]

Creates a new ConnectedStreams by connecting DataStream outputs of different types with each other. The DataStreams connected using this operator can be used with CoFunctions.

Inherited from:
DataStream
def countWindowAll(size: Long): AllWindowedStream[T, GlobalWindow]

Windows this DataStream into tumbling count windows.

Note: This operation can be inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows is it possible to perform this operation in parallel).

Value parameters:
size

The size of the windows in number of elements.

Inherited from:
DataStream
def countWindowAll(size: Long, slide: Long): AllWindowedStream[T, GlobalWindow]

Windows this DataStream into sliding count windows.

Note: This operation can be inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows is it possible to perform this operation in parallel).

Value parameters:
size

The size of the windows in number of elements.

slide

The slide interval in number of elements.

Inherited from:
DataStream
def dataType: TypeInformation[T]

Returns the TypeInformation for the elements of this DataStream.

Inherited from:
DataStream
@PublicEvolving
def disableChaining(): DataStream[T]

Turns off chaining for this operator so thread co-location will not be used as an optimization. Chaining can be turned off for the whole job via StreamExecutionEnvironment.disableOperatorChaining; however, this is not advised for performance reasons.

Inherited from:
DataStream
def executeAndCollect(jobExecutionName: String, limit: Int): List[T]

Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.

The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

Inherited from:
DataStream
def executeAndCollect(limit: Int): List[T]

Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.

The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

Inherited from:
DataStream
def executeAndCollect(jobExecutionName: String): CloseableIterator[T]

Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.

The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

IMPORTANT The returned iterator must be closed to free all cluster resources.

Inherited from:
DataStream

def executeAndCollect(): CloseableIterator[T]

Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.

The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

IMPORTANT The returned iterator must be closed to free all cluster resources.

Inherited from:
DataStream
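
A minimal sketch of the iterator variant; note the iterator must be closed to free cluster resources (stream and the job name are assumptions):

  val it = stream.executeAndCollect("debug-run")
  try {
    while (it.hasNext) println(it.next())
  } finally {
    it.close()
  }
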
def executionConfig: ExecutionConfig

Returns the execution config.

Inherited from:
DataStream

def executionEnvironment: StreamExecutionEnvironment

Returns the StreamExecutionEnvironment associated with this data stream.

Inherited from:
DataStream
def filter(fun: T => Boolean): DataStream[T]

Creates a new DataStream that contains only the elements satisfying the given filter predicate.

Inherited from:
DataStream
def filter(filter: FilterFunction[T]): DataStream[T]

Creates a new DataStream that contains only the elements satisfying the given filter predicate.

Inherited from:
DataStream
def flatMap[R : TypeInformation](fun: T => IterableOnce[R]): DataStream[R]

Creates a new DataStream by applying the given function to every element and flattening the results.

Inherited from:
DataStream
def flatMap[R : TypeInformation](fun: (T, Collector[R]) => Unit): DataStream[R]

Creates a new DataStream by applying the given function to every element and flattening the results.

Inherited from:
DataStream
def flatMap[R : TypeInformation](flatMapper: FlatMapFunction[T, R]): DataStream[R]

Creates a new DataStream by applying the given function to every element and flattening the results.

Inherited from:
DataStream

def forward: DataStream[T]

Sets the partitioning of the DataStream so that the output tuples are forwarded to the local subtask of the next component (whenever possible).

Inherited from:
DataStream
@PublicEvolving
def getSideOutput[X : TypeInformation](tag: OutputTag[X]): DataStream[X]
Inherited from:
DataStream
@PublicEvolving
def global: DataStream[T]

Sets the partitioning of the DataStream so that the output values all go to the first instance of the next processing operator. Use this setting with care, since it might cause a serious performance bottleneck in the application.

Inherited from:
DataStream
@PublicEvolving
def iterate[R, F : TypeInformation](stepFunction: ConnectedStreams[T, F] => (DataStream[F], DataStream[R]), maxWaitTimeMillis: Long): DataStream[R]

Initiates an iterative part of the program that creates a loop by feeding back data streams. To create a streaming iteration the user needs to define a transformation that creates two DataStreams. The first one is the output that will be fed back to the start of the iteration and the second is the output stream of the iterative part.

The input stream of the iterate operator and the feedback stream will be treated as a ConnectedStreams where the input is connected with the feedback stream.

This allows the user to distinguish standard input from feedback inputs.

stepfunction: initialStream => (feedback, output)

The user must set the max waiting time for the iteration head. If no data is received within the set time, the stream terminates. If this parameter is set to 0, the iteration sources will wait indefinitely, so the job must be killed to stop.

Inherited from:
DataStream
@PublicEvolving
def iterate[R](stepFunction: DataStream[T] => (DataStream[T], DataStream[R]), maxWaitTimeMillis: Long): DataStream[R]

Initiates an iterative part of the program that creates a loop by feeding back data streams. To create a streaming iteration the user needs to define a transformation that creates two DataStreams. The first one is the output that will be fed back to the start of the iteration and the second is the output stream of the iterative part.

stepfunction: initialStream => (feedback, output)

A common pattern is to use output splitting to create the feedback and output DataStreams. Please see the side outputs of ProcessFunction on DataStream.

By default a DataStream with iteration will never terminate, but the user can use the maxWaitTime parameter to set a max waiting time for the iteration head. If no data is received within the set time, the stream terminates.

Parallelism of the feedback stream must match the parallelism of the original stream. Please refer to the setParallelism method for parallelism modification.

Inherited from:
DataStream
def join[T2](otherStream: DataStream[T2]): JoinedStreams[T, T2]

Creates a join operation. See JoinedStreams for an example of how the keys and window can be specified.

Inherited from:
DataStream
def keyBy[K : TypeInformation](fun: KeySelector[T, K]): KeyedStream[T, K]

Groups the elements of a DataStream by the given K key to be used with grouped operators like grouped reduce or grouped aggregations.

Inherited from:
DataStream
def keyBy[K : TypeInformation](fun: T => K): KeyedStream[T, K]

Groups the elements of a DataStream by the given K key to be used with grouped operators like grouped reduce or grouped aggregations.

Inherited from:
DataStream
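
A minimal sketch of keying a stream by a field via a key-selector function (Event is an illustrative assumption):

  import org.apache.flink.streaming.api.scala._

  case class Event(userId: String, value: Long)

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val events = env.fromElements(Event("a", 1L), Event("a", 2L), Event("b", 3L))
  val keyed: KeyedStream[Event, String] = events.keyBy(_.userId)
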
def map[R : TypeInformation](mapper: MapFunction[T, R]): DataStream[R]

Creates a new DataStream by applying the given function to every element of this DataStream.

Inherited from:
DataStream
def map[R : TypeInformation](fun: T => R): DataStream[R]

Creates a new DataStream by applying the given function to every element of this DataStream.

Inherited from:
DataStream
@PublicEvolving
def minResources: ResourceSpec

Returns the minimum resources of this operation.

Inherited from:
DataStream
def name(name: String): DataStream[T]

Sets the name of the current data stream. This name is used by the visualization and logging during runtime.

Returns:

The named operator

Inherited from:
DataStream
def name: String

Gets the name of the current data stream. This name is used by the visualization and logging during runtime.

Returns:

Name of the stream.

Inherited from:
DataStream
def parallelism: Int

Returns the parallelism of this operation.

Inherited from:
DataStream
def partitionCustom[K : TypeInformation](partitioner: Partitioner[K], fun: T => K): DataStream[T]

Partitions a DataStream on the key returned by the selector, using a custom partitioner. This method takes the key selector to get the key to partition on, and a partitioner that accepts the key type.

Note: This method works only on single field keys, i.e. the selector cannot return tuples of fields.

Inherited from:
DataStream
@PublicEvolving
def preferredResources: ResourceSpec

Returns the preferred resources of this operation.

Inherited from:
DataStream
@PublicEvolving
def printToErr(sinkIdentifier: String): DataStreamSink[T]

Writes a DataStream to the standard error stream (stderr).

For each element of the DataStream the result of AnyRef.toString is written.

Value parameters:
sinkIdentifier

The string to prefix the output with.

Returns:

The closed DataStream.

Inherited from:
DataStream
@PublicEvolving
def printToErr(): DataStreamSink[T]

Writes a DataStream to the standard error stream (stderr).

For each element of the DataStream the result of AnyRef.toString is written.

Returns:

The closed DataStream.

Inherited from:
DataStream

def rebalance: DataStream[T]

Sets the partitioning of the DataStream so that the output tuples are distributed evenly to the next component.

Inherited from:
DataStream
@PublicEvolving
def rescale: DataStream[T]

Sets the partitioning of the DataStream so that the output tuples are distributed evenly to a subset of instances of the downstream operation.

The subset of downstream operations to which the upstream operation sends elements depends on the degree of parallelism of both the upstream and downstream operation. For example, if the upstream operation has parallelism 2 and the downstream operation has parallelism 4, then one upstream operation would distribute elements to two downstream operations while the other upstream operation would distribute to the other two downstream operations. If, on the other hand, the downstream operation has parallelism 2 while the upstream operation has parallelism 4 then two upstream operations will distribute to one downstream operation while the other two upstream operations will distribute to the other downstream operations.

In cases where the different parallelisms are not multiples of each other one or several downstream operations will have a differing number of inputs from upstream operations.

Inherited from:
DataStream
def setBufferTimeout(timeoutMillis: Long): DataStream[T]

Sets the maximum time frequency (ms) for the flushing of the output buffer. By default the output buffers flush only when they are full.

Value parameters:
timeoutMillis

The maximum time between two output flushes.

Returns:

The operator with buffer timeout set.

Inherited from:
DataStream
@PublicEvolving
def setDescription(description: String): DataStream[T]

Sets the description of this data stream.

Description is used in json plan and web ui, but not in logging and metrics where only name is available. Description is expected to provide detailed information about this operation, while name is expected to be more simple, providing summary information only, so that we can have more user-friendly logging messages and metric tags without losing useful messages for debugging.

Returns:

The operator with new description

Inherited from:
DataStream
def setMaxParallelism(maxParallelism: Int): DataStream[T]
Inherited from:
DataStream
def setParallelism(parallelism: Int): DataStream[T]

Sets the parallelism of this operation. This must be at least 1.

Inherited from:
DataStream
@PublicEvolving
def setUidHash(hash: String): DataStream[T]

Sets a user-provided hash for this operator. This will be used AS IS to create the JobVertexID.

The user provided hash is an alternative to the generated hashes, that is considered when identifying an operator through the default hash mechanics fails (e.g. because of changes between Flink versions).

Important: this should be used as a workaround or for troubleshooting. The provided hash needs to be unique per transformation and job. Otherwise, job submission will fail. Furthermore, you cannot assign a user-specified hash to intermediate nodes in an operator chain; trying to do so will make your job fail.

Value parameters:
hash

the user provided hash for this operator.

Returns:

The operator with the user provided hash.

Inherited from:
DataStream
@PublicEvolving
def shuffle: DataStream[T]

Sets the partitioning of the DataStream so that the output tuples are shuffled to the next component.

Inherited from:
DataStream
def sinkTo(sink: Sink[T]): DataStreamSink[T]

Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.

Inherited from:
DataStream
def sinkTo(sink: Sink[T, _, _, _]): DataStreamSink[T]

Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.

Inherited from:
DataStream
@PublicEvolving
def slotSharingGroup(slotSharingGroup: SlotSharingGroup): DataStream[T]

Sets the slot sharing group of this operation. Parallel instances of operations that are in the same slot sharing group will be co-located in the same TaskManager slot, if possible.

Operations inherit the slot sharing group of input operations if all input operations are in the same slot sharing group and no slot sharing group was explicitly specified.

Initially an operation is in the default slot sharing group. An operation can be put into the default group explicitly by setting the slot sharing group to "default".

Value parameters:
slotSharingGroup

Which contains name and its resource spec.

Inherited from:
DataStream
@PublicEvolving
def slotSharingGroup(slotSharingGroup: String): DataStream[T]

Sets the slot sharing group of this operation. Parallel instances of operations that are in the same slot sharing group will be co-located in the same TaskManager slot, if possible.

Operations inherit the slot sharing group of input operations if all input operations are in the same slot sharing group and no slot sharing group was explicitly specified.

Initially an operation is in the default slot sharing group. An operation can be put into the default group explicitly by setting the slot sharing group to "default".

Value parameters:
slotSharingGroup

The slot sharing group name.

Inherited from:
DataStream
@PublicEvolving
def startNewChain(): DataStream[T]

Starts a new task chain beginning at this operator. This operator will not be chained (thread co-located for increased performance) to any previous tasks, even if possible.

Inherited from:
DataStream
@PublicEvolving
def transform[R : TypeInformation](operatorName: String, operator: OneInputStreamOperator[T, R]): DataStream[R]

Transforms the DataStream by using a custom OneInputStreamOperator.

Type parameters:
R

the type of elements emitted by the operator

Value parameters:
operator

the object containing the transformation logic

operatorName

name of the operator, for logging purposes

Inherited from:
DataStream
@PublicEvolving
def uid(uid: String): DataStream[T]

Sets an ID for this operator.

The specified ID is used to assign the same operator ID across job submissions (for example when starting a job from a savepoint).

Important: this ID needs to be unique per transformation and job. Otherwise, job submission will fail.

Value parameters:
uid

The unique user-specified ID of this transformation.

Returns:

The operator with the specified ID.

Inherited from:
DataStream
def union(dataStreams: DataStream[T]*): DataStream[T]

Creates a new DataStream by merging DataStream outputs of the same type with each other. The DataStreams merged using this operator will be transformed simultaneously.

Inherited from:
DataStream
@PublicEvolving
def windowAll[W <: Window](assigner: WindowAssigner[_ >: T, W]): AllWindowedStream[T, W]

Windows this data stream to an AllWindowedStream, which evaluates windows over a non key grouped stream. Elements are put into windows by a WindowAssigner. The grouping of elements is done by window.

A org.apache.flink.streaming.api.windowing.triggers.Trigger can be defined to specify when windows are evaluated. However, each WindowAssigner has a default Trigger that is used if no Trigger is specified.

Note: This operation can be inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows is it possible to perform this operation in parallel).

Value parameters:
assigner

The WindowAssigner that assigns elements to windows.

Returns:

The trigger windows data stream.

Inherited from:
DataStream
@PublicEvolving
def writeToSocket(hostname: String, port: Integer, schema: SerializationSchema[T]): DataStreamSink[T]

Writes the DataStream to a socket as a byte array. The format of the output is specified by a SerializationSchema.

Inherited from:
DataStream
@PublicEvolving
def writeUsingOutputFormat(format: OutputFormat[T]): DataStreamSink[T]

Writes a DataStream using the given OutputFormat.

Inherited from:
DataStream

Deprecated and Inherited methods

@deprecated @PublicEvolving
def getExecutionConfig: ExecutionConfig

Returns the execution config.

Deprecated

Use executionConfig instead.

Inherited from:
DataStream
@deprecated @PublicEvolving

def getExecutionEnvironment: StreamExecutionEnvironment

Returns the StreamExecutionEnvironment associated with the current DataStream.

Returns:

associated execution environment

Deprecated

Use executionEnvironment instead

Inherited from:
DataStream
@deprecated @PublicEvolving
def getName: String

Gets the name of the current data stream. This name is used by the visualization and logging during runtime.

Returns:

Name of the stream.

Deprecated

Use name instead

Inherited from:
DataStream
@deprecated @PublicEvolving

def getParallelism: Int

Returns the parallelism of this operation.

Deprecated

Use parallelism instead.

Inherited from:
DataStream
@deprecated @PublicEvolving
def getType(): TypeInformation[T]

Returns the TypeInformation for the elements of this DataStream.

Deprecated

Use dataType instead.

Inherited from:
DataStream
@deprecated("use [[DataStream.keyBy(KeySelector)]] instead")
def keyBy(firstField: String, otherFields: String*): KeyedStream[T, Tuple]

Groups the elements of a DataStream by the given field expressions to be used with grouped operators like grouped reduce or grouped aggregations.

Deprecated
Inherited from:
DataStream
@deprecated("use [[DataStream.keyBy(KeySelector)]] instead")
def keyBy(fields: Int*): KeyedStream[T, Tuple]

Groups the elements of a DataStream by the given key positions (for tuple/array types) to be used with grouped operators like grouped reduce or grouped aggregations.

Deprecated
Inherited from:
DataStream
@deprecated("Use [[DataStream.partitionCustom(Partitioner, Function1)]] instead")
def partitionCustom[K : TypeInformation](partitioner: Partitioner[K], field: String): DataStream[T]

Partitions a POJO DataStream on the specified key fields using a custom partitioner. This method takes the key expression to partition on, and a partitioner that accepts the key type.

Note: This method works only on single field keys.

Deprecated
Inherited from:
DataStream
@deprecated("Use [[DataStream.partitionCustom(Partitioner, Function1)]] instead")
def partitionCustom[K : TypeInformation](partitioner: Partitioner[K], field: Int): DataStream[T]

Partitions a tuple DataStream on the specified key fields using a custom partitioner. This method takes the key position to partition on, and a partitioner that accepts the key type.

Note: This method works only on single field keys.

Deprecated
Inherited from:
DataStream