DataStream
Value members
Concrete methods
Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.
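A minimal sketch using the function-based addSink overload of the Scala API (the element values and job name are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

object AddSinkExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val stream: DataStream[Int] = env.fromElements(1, 2, 3)

    // Nothing runs until execute() is called; this sink just prints each element.
    stream.addSink(x => println(s"sink got: $x"))
    env.execute("add-sink-example")
  }
}
```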
Assigns timestamps to the elements in the data stream and periodically creates watermarks to signal event time progress.
This method is a shortcut for data streams where the element timestamps are known to be monotonically ascending within each parallel stream. In that case, the system can generate watermarks automatically and perfectly by tracking the ascending timestamps.
For cases where the timestamps are not monotonically increasing, use the more general assignTimestampsAndWatermarks method.
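A minimal sketch, assuming the Scala API; the event type and field names are illustrative:

```scala
import org.apache.flink.streaming.api.scala._

// Hypothetical event type whose timestamps ascend within each parallel stream.
case class Reading(sensor: String, timestampMillis: Long)

val env = StreamExecutionEnvironment.getExecutionEnvironment
val readings: DataStream[Reading] =
  env.fromElements(Reading("a", 1000L), Reading("a", 2000L))

// The extractor returns the event-time timestamp in milliseconds.
val withTimestamps = readings.assignAscendingTimestamps(_.timestampMillis)
```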
Assigns timestamps to the elements in the data stream and generates watermarks to signal event time progress. The given WatermarkStrategy is used to create a TimestampAssigner and an org.apache.flink.api.common.eventtime.WatermarkGenerator.
For each event in the data stream, the TimestampAssigner#extractTimestamp(Object, long) method is called to assign an event timestamp.
For each event in the data stream, the WatermarkGenerator#onEvent(Object, long, WatermarkOutput) method will be called.
Periodically (defined by the ExecutionConfig#getAutoWatermarkInterval()), the WatermarkGenerator#onPeriodicEmit(WatermarkOutput) method will be called.
Common watermark generation patterns can be found as static methods in the org.apache.flink.api.common.eventtime.WatermarkStrategy class.
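A minimal sketch with a bounded-out-of-orderness strategy (the event type and the 5-second bound are illustrative assumptions):

```scala
import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}
import org.apache.flink.streaming.api.scala._

case class Event(id: String, ts: Long) // hypothetical event type

val env = StreamExecutionEnvironment.getExecutionEnvironment
val events: DataStream[Event] = env.fromElements(Event("a", 1L), Event("b", 5L))

// Tolerate up to 5 seconds of out-of-orderness and extract timestamps from each event.
val withWatermarks = events.assignTimestampsAndWatermarks(
  WatermarkStrategy
    .forBoundedOutOfOrderness[Event](Duration.ofSeconds(5))
    .withTimestampAssigner(new SerializableTimestampAssigner[Event] {
      override def extractTimestamp(e: Event, recordTs: Long): Long = e.ts
    })
)
```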
Sets the partitioning of the DataStream so that the output tuples are broadcast to every parallel instance of the next component.
Sets the partitioning of the DataStream so that the output elements are broadcast to every parallel instance of the next operation. In addition, it implicitly creates as many broadcast states as the specified descriptors, which can be used to store the elements of the stream.
- Value parameters:
- broadcastStateDescriptors
the descriptors of the broadcast states to create.
- Returns:
A BroadcastStream which can be used in the DataStream.connect to create a BroadcastConnectedStream for further processing of the elements.
Creates a co-group operation. See CoGroupedStreams for an example of how the keys and window can be specified.
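A minimal sketch, assuming the Scala API and processing-time windows (keys and window size are illustrative):

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment
val left: DataStream[(String, Int)] = env.fromElements(("a", 1))
val right: DataStream[(String, Int)] = env.fromElements(("a", 2))

// Co-group both sides by the first tuple field within 5-second windows.
val counts: DataStream[Int] = left
  .coGroup(right)
  .where(_._1)
  .equalTo(_._1)
  .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
  .apply((ls, rs) => ls.size + rs.size)
```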
Creates a new ConnectedStreams by connecting DataStream outputs of different types with each other. The DataStreams connected using this operator can be used with CoFunctions.
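A minimal sketch of connecting two streams of different types and applying a CoMap (stream contents are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val ints: DataStream[Int] = env.fromElements(1, 2)
val strs: DataStream[String] = env.fromElements("a", "b")

// Each function of the CoMap handles one of the two input types.
val merged: DataStream[String] =
  ints.connect(strs).map(i => s"int: $i", s => s"str: $s")
```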
Creates a new BroadcastConnectedStream by connecting the current DataStream or KeyedStream with a BroadcastStream.
The latter can be created using the broadcast method.
The resulting stream can be further processed using the broadcastConnectedStream.process(myFunction) method, where myFunction can be either a org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction or a org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction, depending on whether the current stream is a KeyedStream or not.
- Value parameters:
- broadcastStream
The broadcast stream with the broadcast state to be connected with this stream.
- Returns:
The BroadcastConnectedStream.
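A minimal sketch combining broadcast with a BroadcastProcessFunction, under the assumption of the Scala API (stream contents and the state key are illustrative):

```scala
import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.api.common.typeinfo.BasicTypeInfo
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.Collector

val env = StreamExecutionEnvironment.getExecutionEnvironment
val events: DataStream[String] = env.fromElements("a", "b")
val rules: DataStream[String] = env.fromElements("rule-1")

val rulesDesc = new MapStateDescriptor(
  "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)

// Broadcast the rules stream, implicitly creating one broadcast state per descriptor.
val broadcastRules = rules.broadcast(rulesDesc)

val tagged: DataStream[String] = events
  .connect(broadcastRules)
  .process(new BroadcastProcessFunction[String, String, String] {
    override def processElement(
        value: String,
        ctx: BroadcastProcessFunction[String, String, String]#ReadOnlyContext,
        out: Collector[String]): Unit = {
      // Read-only view of the broadcast state on the non-broadcast side.
      val rule = ctx.getBroadcastState(rulesDesc).get("current")
      out.collect(s"$value via $rule")
    }

    override def processBroadcastElement(
        rule: String,
        ctx: BroadcastProcessFunction[String, String, String]#Context,
        out: Collector[String]): Unit =
      // The broadcast side may update the state.
      ctx.getBroadcastState(rulesDesc).put("current", rule)
  })
```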
Windows this DataStream into sliding count windows.
Note: This operation is inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows, is it possible to perform this operation in parallel.)
- Value parameters:
- size
The size of the windows in number of elements.
- slide
The slide interval in number of elements.
Windows this DataStream into tumbling count windows.
Note: This operation is inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows, is it possible to perform this operation in parallel.)
- Value parameters:
- size
The size of the windows in number of elements.
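A minimal sketch covering both the tumbling and sliding variants, assuming the Scala API, where these are exposed as countWindowAll (window sizes are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val nums: DataStream[Long] = env.fromElements(1L, 2L, 3L, 4L)

// Tumbling count windows of 100 elements.
val tumblingSums = nums.countWindowAll(100).reduce(_ + _)

// Sliding count windows of 100 elements, sliding by 10 elements.
val slidingSums = nums.countWindowAll(100, 10).reduce(_ + _)
```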
Returns the TypeInformation for the elements of this DataStream.
Turns off chaining for this operator so that thread co-location will not be used as an optimization. Chaining can be turned off for the whole job via StreamExecutionEnvironment.disableOperatorChaining; however, this is not advised for performance reasons.
Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.
The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.
IMPORTANT The returned iterator must be closed to free all cluster resources.
Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.
The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.
IMPORTANT The returned iterator must be closed to free all cluster resources.
Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.
The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.
Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.
The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.
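A minimal sketch of collecting results back to the client, assuming the Scala API (element values are illustrative); note the explicit close in a finally block:

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val stream: DataStream[Int] = env.fromElements(1, 2, 3)

// Triggers execution; the iterator must be closed to free cluster resources.
val iter = stream.executeAndCollect()
try {
  while (iter.hasNext) println(iter.next())
} finally {
  iter.close()
}
```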
Returns the StreamExecutionEnvironment associated with this data stream.
Creates a new DataStream that contains only the elements satisfying the given filter predicate.
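A minimal sketch (element values are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val nums: DataStream[Int] = env.fromElements(1, 2, 3, 4)

// Keep only the even numbers.
val evens = nums.filter(_ % 2 == 0)
```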
Creates a new DataStream by applying the given function to every element and flattening the results.
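A minimal sketch where one input line fans out to zero or more words (element values are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val lines: DataStream[String] = env.fromElements("to be", "or not")

// Each line is split into words; the results are flattened into one stream.
val words = lines.flatMap(_.split(" "))
```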
Sets the partitioning of the DataStream so that the output tuples are forwarded to the local subtask of the next component (whenever possible).
Sets the partitioning of the DataStream so that the output values all go to the first instance of the next processing operator. Use this setting with care since it might cause a serious performance bottleneck in the application.
Initiates an iterative part of the program that creates a loop by feeding back data streams. To create a streaming iteration the user needs to define a transformation that creates two DataStreams. The first one is the output that will be fed back to the start of the iteration and the second is the output stream of the iterative part.
stepfunction: initialStream => (feedback, output)
A common pattern is to use output splitting to create the feedback and output streams; see the side outputs of the ProcessFunction methods of the DataStream.
By default a DataStream with an iteration will never terminate, but the user can use the maxWaitTime parameter to set a maximum waiting time for the iteration head. If no data is received within that time, the stream terminates.
The parallelism of the feedback stream must match the parallelism of the original stream. Please refer to the setParallelism method for parallelism modification.
Initiates an iterative part of the program that creates a loop by feeding back data streams. To create a streaming iteration the user needs to define a transformation that creates two DataStreams. The first one is the output that will be fed back to the start of the iteration and the second is the output stream of the iterative part.
The input stream of the iterate operator and the feedback stream will be treated as a ConnectedStreams where the input is connected with the feedback stream.
This allows the user to distinguish standard input from feedback inputs.
stepfunction: initialStream => (feedback, output)
The user must set the maximum waiting time for the iteration head. If no data is received within that time, the stream terminates. If this parameter is set to 0, the iteration sources will run indefinitely and the job must be killed to stop it.
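A minimal sketch of a step function, assuming the Scala API (the decrement-until-zero logic is a hypothetical example):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val start: DataStream[Long] = env.fromElements(5L, 10L)

// Decrement values; those still above 0 are fed back, the rest are emitted.
val done: DataStream[Long] = start.iterate { input =>
  val decremented = input.map(_ - 1)
  val feedback = decremented.filter(_ > 0)
  val output = decremented.filter(_ <= 0)
  (feedback, output)
}
```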
Creates a join operation. See JoinedStreams for an example of how the keys and window can be specified.
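A minimal sketch, assuming the Scala API and processing-time windows (keys and window size are illustrative):

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment
val left: DataStream[(String, Int)] = env.fromElements(("a", 1))
val right: DataStream[(String, Int)] = env.fromElements(("a", 2))

// Join both sides on the first tuple field within 5-second windows.
val joined = left
  .join(right)
  .where(_._1)
  .equalTo(_._1)
  .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
  .apply((l, r) => (l._1, l._2 + r._2))
```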
Groups the elements of a DataStream by the given K key to be used with grouped operators like grouped reduce or grouped aggregations.
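A minimal sketch grouping by a key selector and aggregating per key (element values are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val pairs: DataStream[(String, Int)] = env.fromElements(("a", 1), ("a", 2))

// Group by the first tuple field, then sum the second field per key.
val sums = pairs.keyBy(_._1).sum(1)
```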
Creates a new DataStream by applying the given function to every element of this DataStream.
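A minimal sketch (element values are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val nums: DataStream[Int] = env.fromElements(1, 2, 3)

// Exactly one output element per input element.
val doubled = nums.map(_ * 2)
```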
Returns the minimum resources of this operation.
Gets the name of the current data stream. This name is used by the visualization and logging during runtime.
- Returns:
Name of the stream.
Sets the name of the current data stream. This name is used by the visualization and logging during runtime.
- Returns:
The named operator.
Partitions a DataStream on the key returned by the selector, using a custom partitioner. This method takes the key selector to get the key to partition on, and a partitioner that accepts the key type.
Note: This method works only on single field keys, i.e. the selector cannot return tuples of fields.
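A minimal sketch, assuming the Scala API; the hash-based partitioner here is an illustrative example, not a recommended scheme:

```scala
import org.apache.flink.api.common.functions.Partitioner
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val pairs: DataStream[(String, Int)] = env.fromElements(("a", 1), ("b", 2))

// Route each element to a partition derived from its key's hash code.
val byHash = new Partitioner[String] {
  override def partition(key: String, numPartitions: Int): Int =
    math.abs(key.hashCode) % numPartitions
}
val partitioned = pairs.partitionCustom(byHash, _._1)
```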
Returns the preferred resources of this operation.
Writes a DataStream to the standard output stream (stdout). For each element of the DataStream the result of AnyRef.toString is written.
Writes a DataStream to the standard output stream (stdout). For each element of the DataStream the result of AnyRef.toString is written.
- Value parameters:
- sinkIdentifier
The string to prefix the output with.
- Returns:
The closed DataStream.
Writes a DataStream to the standard error stream (stderr).
For each element of the DataStream the result of AnyRef.toString is written.
- Returns:
The closed DataStream.
Writes a DataStream to the standard error stream (stderr).
For each element of the DataStream the result of AnyRef.toString is written.
- Value parameters:
- sinkIdentifier
The string to prefix the output with.
- Returns:
The closed DataStream.
Applies the given ProcessFunction on the input stream, thereby creating a transformed output stream.
The function will be called for every element in the stream and can produce zero or more output elements.
- Value parameters:
- processFunction
The ProcessFunction that is called for each element in the stream.
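A minimal sketch where empty strings are dropped and the rest are upper-cased (the filtering logic is an illustrative example):

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

val env = StreamExecutionEnvironment.getExecutionEnvironment
val words: DataStream[String] = env.fromElements("a", "", "b")

// Emit zero or more elements per input element.
val nonEmptyUpper = words.process(new ProcessFunction[String, String] {
  override def processElement(
      value: String,
      ctx: ProcessFunction[String, String]#Context,
      out: Collector[String]): Unit =
    if (value.nonEmpty) out.collect(value.toUpperCase)
})
```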
Sets the partitioning of the DataStream so that the output tuples are distributed evenly to the next component.
Sets the partitioning of the DataStream so that the output tuples are distributed evenly to a subset of instances of the downstream operation.
The subset of downstream operations to which the upstream operation sends elements depends on the degree of parallelism of both the upstream and downstream operation. For example, if the upstream operation has parallelism 2 and the downstream operation has parallelism 4, then one upstream operation would distribute elements to two downstream operations while the other upstream operation would distribute to the other two downstream operations. If, on the other hand, the downstream operation has parallelism 2 while the upstream operation has parallelism 4, then two upstream operations will distribute to one downstream operation while the other two upstream operations will distribute to the other downstream operation.
In cases where the different parallelisms are not multiples of each other one or several downstream operations will have a differing number of inputs from upstream operations.
Sets the maximum time frequency (ms) for the flushing of the output buffer. By default the output buffers flush only when they are full.
- Value parameters:
- timeoutMillis
The maximum time between two output flushes.
- Returns:
The operator with buffer timeout set.
Sets the description of this data stream.
The description is used in the JSON plan and web UI, but not in logging and metrics, where only the name is available. The description is expected to provide detailed information about this operation, while the name is expected to be simpler, providing only summary information, so that we can have more user-friendly logging messages and metric tags without losing useful messages for debugging.
- Returns:
The operator with the new description.
Sets the parallelism of this operation. This must be at least 1.
Sets a user-provided hash for this operator. This will be used AS IS to create the JobVertexID.
The user-provided hash is an alternative to the generated hashes, used when identifying an operator through the default hash mechanics fails (e.g. because of changes between Flink versions).
Important: this should be used as a workaround or for troubleshooting. The provided hash needs to be unique per transformation and job; otherwise, job submission will fail. Furthermore, you cannot assign a user-specified hash to intermediate nodes in an operator chain, and trying to do so will make your job fail.
- Value parameters:
- hash
the user provided hash for this operator.
- Returns:
The operator with the user provided hash.
Sets the partitioning of the DataStream so that the output tuples are shuffled to the next component.
Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.
Sets the slot sharing group of this operation. Parallel instances of operations that are in the same slot sharing group will be co-located in the same TaskManager slot, if possible.
Operations inherit the slot sharing group of input operations if all input operations are in the same slot sharing group and no slot sharing group was explicitly specified.
Initially an operation is in the default slot sharing group. An operation can be put into the default group explicitly by setting the slot sharing group to "default".
- Value parameters:
- slotSharingGroup
The slot sharing group name.
Sets the slot sharing group of this operation. Parallel instances of operations that are in the same slot sharing group will be co-located in the same TaskManager slot, if possible.
Operations inherit the slot sharing group of input operations if all input operations are in the same slot sharing group and no slot sharing group was explicitly specified.
Initially an operation is in the default slot sharing group. An operation can be put into the default group explicitly by setting the slot sharing group to "default".
- Value parameters:
- slotSharingGroup
The slot sharing group, which contains the name and its resource spec.
Starts a new task chain beginning at this operator. This operator will not be chained (thread co-located for increased performance) to any previous tasks even if possible.
Transforms the DataStream by using a custom OneInputStreamOperator.
- Type parameters:
- R
the type of elements emitted by the operator
- Value parameters:
- operator
the object containing the transformation logic
- operatorName
name of the operator, for logging purposes
Sets an ID for this operator.
The specified ID is used to assign the same operator ID across job submissions (for example when starting a job from a savepoint).
Important: this ID needs to be unique per transformation and job. Otherwise, job submission will fail.
- Value parameters:
- uid
The unique user-specified ID of this transformation.
- Returns:
The operator with the specified ID.
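A minimal sketch; the uid and name values are illustrative. A stable uid keeps operator state addressable across savepoint restores:

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val nums: DataStream[Int] = env.fromElements(1, 2, 3)

val doubled = nums
  .map(_ * 2)
  .uid("double-values") // must be unique per transformation and job
  .name("Double Values")
```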
Creates a new DataStream by merging DataStream outputs of the same type with each other. The DataStreams merged using this operator will be transformed simultaneously.
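A minimal sketch merging three streams of the same type (element values are illustrative):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val a: DataStream[Int] = env.fromElements(1)
val b: DataStream[Int] = env.fromElements(2)
val c: DataStream[Int] = env.fromElements(3)

// All three streams are merged into one stream of the same type.
val all = a.union(b, c)
```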
Windows this data stream to an AllWindowedStream, which evaluates windows over a non-key-grouped stream. Elements are put into windows by a WindowAssigner. The grouping of elements is done by window.
A org.apache.flink.streaming.api.windowing.triggers.Trigger can be defined to specify when windows are evaluated. However, WindowAssigners have a default Trigger that is used if a Trigger is not specified.
Note: This operation is inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows, is it possible to perform this operation in parallel.)
- Value parameters:
- assigner
The WindowAssigner that assigns elements to windows.
- Returns:
The trigger windows data stream.
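A minimal sketch using a processing-time assigner with its default trigger, assuming the Scala API (window size is illustrative):

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

val env = StreamExecutionEnvironment.getExecutionEnvironment
val nums: DataStream[Long] = env.fromElements(1L, 2L, 3L)

// Non-parallel windowing over the whole stream; the assigner's default trigger fires the windows.
val windowedSums = nums
  .windowAll(TumblingProcessingTimeWindows.of(Time.seconds(10)))
  .reduce(_ + _)
```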
Writes the DataStream to a socket as a byte array. The format of the output is specified by a SerializationSchema.
Deprecated methods
Returns the execution config.
- Deprecated
Use executionConfig instead.
Returns the StreamExecutionEnvironment associated with the current DataStream.
- Returns:
associated execution environment
- Deprecated
Use executionEnvironment instead.
Gets the name of the current data stream. This name is used by the visualization and logging during runtime.
- Returns:
Name of the stream.
- Deprecated
Use name instead.
Returns the parallelism of this operation.
- Deprecated
Use parallelism instead.
Returns the TypeInformation for the elements of this DataStream.
- Deprecated
Use dataType instead.
Groups the elements of a DataStream by the given key positions (for tuple/array types) to be used with grouped operators like grouped reduce or grouped aggregations.
- Deprecated
Groups the elements of a DataStream by the given field expressions to be used with grouped operators like grouped reduce or grouped aggregations.
- Deprecated
Partitions a tuple DataStream on the specified key fields using a custom partitioner. This method takes the key position to partition on, and a partitioner that accepts the key type.
Note: This method works only on single field keys.
- Deprecated
Partitions a POJO DataStream on the specified key fields using a custom partitioner. This method takes the key expression to partition on, and a partitioner that accepts the key type.
Note: This method works only on single field keys.
- Deprecated