Interface StreamTableEnvironment
- All Superinterfaces:
  org.apache.flink.table.api.TableEnvironment
- All Known Implementing Classes:
  StreamTableEnvironmentImpl

@PublicEvolving
public interface StreamTableEnvironment
extends org.apache.flink.table.api.TableEnvironment

This table environment is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.
A stream table environment is responsible for:
- Converting a DataStream into a Table and vice versa.
- Connecting to external systems.
- Registering and retrieving Tables and other meta objects from a catalog.
- Executing SQL statements.
- Offering further configuration options.
 
Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
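For orientation, a minimal end-to-end sketch (assuming the flink-table-api-java-bridge dependency is on the classpath; names are illustrative):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.Table;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

    // Mix SQL with DataStream transformations in one pipeline.
    Table result = tableEnv.sqlQuery("SELECT 'Hello' AS greeting");
    tableEnv.toDataStream(result).print();

    env.execute();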
Method Summary
- static StreamTableEnvironment create(StreamExecutionEnvironment executionEnvironment)
  Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API.
- static StreamTableEnvironment create(StreamExecutionEnvironment executionEnvironment, EnvironmentSettings settings)
  Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API.
- StreamStatementSet createStatementSet()
  Returns a StatementSet that integrates with the Java-specific DataStream API.
- <T> void createTemporaryView(String path, DataStream<T> dataStream)
  Creates a view from the given DataStream in a given path.
- <T> void createTemporaryView(String path, DataStream<T> dataStream, Schema schema)
  Creates a view from the given DataStream in a given path.
- <T> void createTemporaryView(String path, DataStream<T> dataStream, Expression... fields)
  Deprecated. Use createTemporaryView(String, DataStream, Schema) instead.
- Table fromChangelogStream(DataStream<Row> dataStream)
  Converts the given DataStream of changelog entries into a Table.
- Table fromChangelogStream(DataStream<Row> dataStream, Schema schema)
  Converts the given DataStream of changelog entries into a Table.
- Table fromChangelogStream(DataStream<Row> dataStream, Schema schema, ChangelogMode changelogMode)
  Converts the given DataStream of changelog entries into a Table.
- <T> Table fromDataStream(DataStream<T> dataStream)
  Converts the given DataStream into a Table.
- <T> Table fromDataStream(DataStream<T> dataStream, Schema schema)
  Converts the given DataStream into a Table.
- <T> Table fromDataStream(DataStream<T> dataStream, Expression... fields)
  Deprecated. Use fromDataStream(DataStream, Schema) instead.
- <T> void registerDataStream(String name, DataStream<T> dataStream)
  Deprecated.
- <T,ACC> void registerFunction(String name, AggregateFunction<T,ACC> aggregateFunction)
  Deprecated. Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead.
- <T,ACC> void registerFunction(String name, TableAggregateFunction<T,ACC> tableAggregateFunction)
  Deprecated. Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead.
- <T> void registerFunction(String name, TableFunction<T> tableFunction)
  Deprecated. Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead.
- <T> DataStream<T> toAppendStream(Table table, Class<T> clazz)
  Deprecated. Use toDataStream(Table, Class) instead.
- <T> DataStream<T> toAppendStream(Table table, TypeInformation<T> typeInfo)
  Deprecated. Use toDataStream(Table, Class) instead.
- DataStream<Row> toChangelogStream(Table table)
  Converts the given Table into a DataStream of changelog entries.
- DataStream<Row> toChangelogStream(Table table, Schema targetSchema)
  Converts the given Table into a DataStream of changelog entries.
- DataStream<Row> toChangelogStream(Table table, Schema targetSchema, ChangelogMode changelogMode)
  Converts the given Table into a DataStream of changelog entries.
- DataStream<Row> toDataStream(Table table)
  Converts the given Table into a DataStream.
- <T> DataStream<T> toDataStream(Table table, Class<T> targetClass)
  Converts the given Table into a DataStream of the given Class.
- <T> DataStream<T> toDataStream(Table table, AbstractDataType<?> targetDataType)
  Converts the given Table into a DataStream of the given DataType.
- <T> DataStream<Tuple2<Boolean,T>> toRetractStream(Table table, Class<T> clazz)
  Deprecated. Use toChangelogStream(Table, Schema) instead.
- <T> DataStream<Tuple2<Boolean,T>> toRetractStream(Table table, TypeInformation<T> typeInfo)
  Deprecated. Use toChangelogStream(Table, Schema) instead.
Methods inherited from interface org.apache.flink.table.api.TableEnvironment
compilePlanSql, createCatalog, createFunction, createFunction, createFunction, createFunction, createTable, createTemporaryFunction, createTemporaryFunction, createTemporaryFunction, createTemporarySystemFunction, createTemporarySystemFunction, createTemporarySystemFunction, createTemporaryTable, createTemporaryView, dropFunction, dropTemporaryFunction, dropTemporarySystemFunction, dropTemporaryTable, dropTemporaryView, executePlan, executeSql, explainSql, explainSql, from, from, fromValues, fromValues, fromValues, fromValues, fromValues, fromValues, getCatalog, getCompletionHints, getConfig, getCurrentCatalog, getCurrentDatabase, listCatalogs, listDatabases, listFullModules, listFunctions, listModules, listTables, listTables, listTemporaryTables, listTemporaryViews, listUserDefinedFunctions, listViews, loadModule, loadPlan, registerCatalog, registerFunction, registerTable, scan, sqlQuery, unloadModule, useCatalog, useDatabase, useModules 
Method Detail
- 
create
static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment)
Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.
A stream table environment is responsible for:
- Converting a DataStream into a Table and vice versa.
- Connecting to external systems.
- Registering and retrieving Tables and other meta objects from a catalog.
- Executing SQL statements.
- Offering further configuration options.
 
Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
- Parameters:
  executionEnvironment - The Java StreamExecutionEnvironment of the TableEnvironment.
 
- 
create
static StreamTableEnvironment create(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment executionEnvironment, org.apache.flink.table.api.EnvironmentSettings settings)
Creates a table environment that is the entry point and central context for creating Table and SQL API programs that integrate with the Java-specific DataStream API. It is unified for bounded and unbounded data processing.
A stream table environment is responsible for:
- Converting a DataStream into a Table and vice versa.
- Connecting to external systems.
- Registering and retrieving Tables and other meta objects from a catalog.
- Executing SQL statements.
- Offering further configuration options.
 
Note: If you don't intend to use the DataStream API, TableEnvironment is meant for pure table programs.
- Parameters:
  executionEnvironment - The Java StreamExecutionEnvironment of the TableEnvironment.
  settings - The environment settings used to instantiate the TableEnvironment.
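For illustration, a hedged sketch of passing settings, e.g. to pin the environment to streaming mode:

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // inBatchMode() would instead bound the pipeline.
    EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .inStreamingMode()
            .build();

    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);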
 
- 
registerFunction
@Deprecated <T> void registerFunction(String name, org.apache.flink.table.functions.TableFunction<T> tableFunction)
Deprecated. Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Please note that the new method also uses the new type system and reflective extraction logic. It might be necessary to update the function implementation as well. See the documentation of TableFunction for more information on the new function design.
Registers a TableFunction under a unique name in the TableEnvironment's catalog. Registered functions can be referenced in Table API and SQL queries.
- Type Parameters:
  T - The type of the output row.
- Parameters:
  name - The name under which the function is registered.
  tableFunction - The TableFunction to register.
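For migration context, a hedged sketch of the recommended replacement (SplitFunction is a hypothetical TableFunction implementation):

    // Hypothetical function that splits a comma-separated string into rows.
    public static class SplitFunction extends TableFunction<String> {
        public void eval(String input) {
            for (String s : input.split(",")) {
                collect(s);
            }
        }
    }

    // Deprecated style:
    // tableEnv.registerFunction("split", new SplitFunction());

    // Recommended replacement:
    tableEnv.createTemporarySystemFunction("split", SplitFunction.class);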
 
- 
registerFunction
@Deprecated <T,ACC> void registerFunction(String name, org.apache.flink.table.functions.AggregateFunction<T,ACC> aggregateFunction)
Deprecated. Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Please note that the new method also uses the new type system and reflective extraction logic. It might be necessary to update the function implementation as well. See the documentation of AggregateFunction for more information on the new function design.
Registers an AggregateFunction under a unique name in the TableEnvironment's catalog. Registered functions can be referenced in Table API and SQL queries.
- Type Parameters:
  T - The type of the output value.
  ACC - The type of aggregate accumulator.
- Parameters:
  name - The name under which the function is registered.
  aggregateFunction - The AggregateFunction to register.
 
- 
registerFunction
@Deprecated <T,ACC> void registerFunction(String name, org.apache.flink.table.functions.TableAggregateFunction<T,ACC> tableAggregateFunction)
Deprecated. Use TableEnvironment.createTemporarySystemFunction(String, UserDefinedFunction) instead. Please note that the new method also uses the new type system and reflective extraction logic. It might be necessary to update the function implementation as well. See the documentation of TableAggregateFunction for more information on the new function design.
Registers a TableAggregateFunction under a unique name in the TableEnvironment's catalog. Registered functions can only be referenced in Table API.
- Type Parameters:
  T - The type of the output value.
  ACC - The type of aggregate accumulator.
- Parameters:
  name - The name under which the function is registered.
  tableAggregateFunction - The TableAggregateFunction to register.
 
- 
fromDataStream
<T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Converts the given DataStream into a Table.
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the stream-to-table conversion. Records of type Row must describe RowKind.INSERT changes.
By default, the stream record's timestamp and watermarks are not propagated to downstream table operations unless explicitly declared via fromDataStream(DataStream, Schema).
If the returned table is converted back to a DataStream via toDataStream(Table), the input DataStream of this method is returned.
- Type Parameters:
  T - The external type of the DataStream.
- Parameters:
  dataStream - The DataStream to be converted.
- Returns:
  The converted Table.
- See Also:
  fromChangelogStream(DataStream)
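For illustration, a minimal sketch of the automatic derivation (assuming env and tableEnv as in the examples above):

    DataStream<Tuple2<String, Long>> stream = env.fromElements(
            Tuple2.of("alice", 12L),
            Tuple2.of("bob", 10L));

    // Columns f0 (STRING) and f1 (BIGINT) are derived from the Tuple2 type information.
    Table table = tableEnv.fromDataStream(stream);
    table.printSchema();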
 
- 
fromDataStream
<T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, org.apache.flink.table.api.Schema schema)
Converts the given DataStream into a Table.
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the stream-to-table conversion. Records of class Row must describe RowKind.INSERT changes.
By default, the stream record's timestamp and watermarks are not propagated to downstream table operations unless explicitly declared in the input schema.
This method allows declaring a Schema for the resulting table. The declaration is similar to a CREATE TABLE DDL in SQL and allows to:
- enrich or overwrite automatically derived columns with a custom DataType
- reorder columns
- add computed or metadata columns next to the physical columns
- access a stream record's timestamp
- declare a watermark strategy or propagate the DataStream watermarks
It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
The following examples illustrate common schema declarations and their semantics:
    // given a DataStream of Tuple2<String, BigDecimal>

    // === EXAMPLE 1 ===
    // no physical columns defined, they will be derived automatically,
    // e.g. BigDecimal becomes DECIMAL(38, 18)

    Schema.newBuilder()
        .columnByExpression("c1", "f1 + 42")
        .columnByExpression("c2", "f1 - 1")
        .build()

    // equal to: CREATE TABLE (f0 STRING, f1 DECIMAL(38, 18), c1 AS f1 + 42, c2 AS f1 - 1)

    // === EXAMPLE 2 ===
    // physical columns defined, input fields and columns will be mapped by name,
    // columns are reordered and their data type overwritten,
    // all columns must be defined to show up in the final table's schema

    Schema.newBuilder()
        .column("f1", "DECIMAL(10, 2)")
        .columnByExpression("c", "f1 - 1")
        .column("f0", "STRING")
        .build()

    // equal to: CREATE TABLE (f1 DECIMAL(10, 2), c AS f1 - 1, f0 STRING)

    // === EXAMPLE 3 ===
    // timestamp and watermarks can be added from the DataStream API,
    // physical columns will be derived automatically

    Schema.newBuilder()
        .columnByMetadata("rowtime", "TIMESTAMP_LTZ(3)") // extract timestamp into a column
        .watermark("rowtime", "SOURCE_WATERMARK()")      // declare watermarks propagation
        .build()

    // equal to:
    // CREATE TABLE (
    //   f0 STRING,
    //   f1 DECIMAL(38, 18),
    //   rowtime TIMESTAMP_LTZ(3) METADATA,
    //   WATERMARK FOR rowtime AS SOURCE_WATERMARK()
    // )

- Type Parameters:
  T - The external type of the DataStream.
- Parameters:
  dataStream - The DataStream to be converted.
  schema - The customized schema for the final table.
- Returns:
  The converted Table.
- See Also:
  fromChangelogStream(DataStream, Schema)
 
- 
fromChangelogStream
org.apache.flink.table.api.Table fromChangelogStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream)
Converts the given DataStream of changelog entries into a Table.
Compared to fromDataStream(DataStream), this method consumes instances of Row and evaluates the RowKind flag that is contained in every record during runtime. The runtime behavior is similar to that of a DynamicTableSource.
This method expects a changelog containing all kinds of changes (enumerated in RowKind) as the default ChangelogMode. Use fromChangelogStream(DataStream, Schema, ChangelogMode) to limit the kinds of changes (e.g. for upsert mode).
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
By default, the stream record's timestamp and watermarks are not propagated to downstream table operations unless explicitly declared via fromChangelogStream(DataStream, Schema).
- Parameters:
  dataStream - The changelog stream of Row.
- Returns:
  The converted Table.
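For illustration, a hedged sketch of interpreting a small changelog (values are illustrative):

    DataStream<Row> changelog = env.fromElements(
            Row.ofKind(RowKind.INSERT, "alice", 12),
            Row.ofKind(RowKind.INSERT, "bob", 5),
            Row.ofKind(RowKind.UPDATE_BEFORE, "alice", 12),
            Row.ofKind(RowKind.UPDATE_AFTER, "alice", 100));

    // The RowKind of each record is evaluated at runtime,
    // so the resulting table behaves like an updating source.
    Table table = tableEnv.fromChangelogStream(changelog);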
 
- 
fromChangelogStream
org.apache.flink.table.api.Table fromChangelogStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream, org.apache.flink.table.api.Schema schema)
Converts the given DataStream of changelog entries into a Table.
Compared to fromDataStream(DataStream), this method consumes instances of Row and evaluates the RowKind flag that is contained in every record during runtime. The runtime behavior is similar to that of a DynamicTableSource.
This method expects a changelog containing all kinds of changes (enumerated in RowKind) as the default ChangelogMode. Use fromChangelogStream(DataStream, Schema, ChangelogMode) to limit the kinds of changes (e.g. for upsert mode).
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
By default, the stream record's timestamp and watermarks are not propagated to downstream table operations unless explicitly declared in the input schema.
This method allows declaring a Schema for the resulting table. The declaration is similar to a CREATE TABLE DDL in SQL and allows to:
- enrich or overwrite automatically derived columns with a custom DataType
- reorder columns
- add computed or metadata columns next to the physical columns
- access a stream record's timestamp
- declare a watermark strategy or propagate the DataStream watermarks
- declare a primary key
 
See fromDataStream(DataStream, Schema) for more information and examples on how to declare a Schema.
- Parameters:
  dataStream - The changelog stream of Row.
  schema - The customized schema for the final table.
- Returns:
  The converted Table.
 
- 
fromChangelogStream
org.apache.flink.table.api.Table fromChangelogStream(org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> dataStream, org.apache.flink.table.api.Schema schema, org.apache.flink.table.connector.ChangelogMode changelogMode)
Converts the given DataStream of changelog entries into a Table.
Compared to fromDataStream(DataStream), this method consumes instances of Row and evaluates the RowKind flag that is contained in every record during runtime. The runtime behavior is similar to that of a DynamicTableSource.
This method requires an explicitly declared ChangelogMode. For example, use ChangelogMode.upsert() if the stream will not contain RowKind.UPDATE_BEFORE, or ChangelogMode.insertOnly() for non-updating streams.
Column names and types of the Table are automatically derived from the TypeInformation of the DataStream. If the outermost record's TypeInformation is a CompositeType, it will be flattened in the first level. TypeInformation that cannot be represented as one of the listed DataTypes will be treated as a black-box DataTypes.RAW(Class, TypeSerializer) type. Thus, composite nested fields will not be accessible.
By default, the stream record's timestamp and watermarks are not propagated to downstream table operations unless explicitly declared in the input schema.
This method allows declaring a Schema for the resulting table. The declaration is similar to a CREATE TABLE DDL in SQL and allows to:
- enrich or overwrite automatically derived columns with a custom DataType
- reorder columns
- add computed or metadata columns next to the physical columns
- access a stream record's timestamp
- declare a watermark strategy or propagate the DataStream watermarks
- declare a primary key
 
See fromDataStream(DataStream, Schema) for more information and examples of how to declare a Schema.
- Parameters:
  dataStream - The changelog stream of Row.
  schema - The customized schema for the final table.
  changelogMode - The expected kinds of changes in the incoming changelog.
- Returns:
  The converted Table.
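For illustration, a hedged sketch of upsert mode, where UPDATE_BEFORE messages are omitted and changes are keyed by a declared primary key (names and values are illustrative):

    DataStream<Row> upsertStream = env.fromElements(
            Row.ofKind(RowKind.UPDATE_AFTER, "alice", 12),
            Row.ofKind(RowKind.UPDATE_AFTER, "bob", 5),
            Row.ofKind(RowKind.UPDATE_AFTER, "alice", 100));

    // The primary key lets the runtime interpret UPDATE_AFTER records
    // without accompanying UPDATE_BEFORE messages.
    Table table = tableEnv.fromChangelogStream(
            upsertStream,
            Schema.newBuilder()
                    .column("f0", "STRING")
                    .column("f1", "INT")
                    .primaryKey("f0")
                    .build(),
            ChangelogMode.upsert());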
 
- 
createTemporaryView
<T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Creates a view from the given DataStream in a given path. Registered views can be referenced in SQL queries.
See fromDataStream(DataStream) for more information on how a DataStream is translated into a table.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
- Type Parameters:
  T - The type of the DataStream.
- Parameters:
  path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
  dataStream - The DataStream out of which to create the view.
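For illustration, a minimal sketch of registering a stream and querying it with SQL (names are illustrative):

    DataStream<Tuple2<String, Long>> stream = env.fromElements(
            Tuple2.of("alice", 1L),
            Tuple2.of("alice", 2L));

    // Register the stream under a path and reference it from SQL.
    tableEnv.createTemporaryView("clicks", stream);
    Table result = tableEnv.sqlQuery("SELECT f0, SUM(f1) FROM clicks GROUP BY f0");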
 
- 
createTemporaryView
<T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, org.apache.flink.table.api.Schema schema)
Creates a view from the given DataStream in a given path. Registered views can be referenced in SQL queries.
See fromDataStream(DataStream, Schema) for more information on how a DataStream is translated into a table.
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
- Type Parameters:
  T - The type of the DataStream.
- Parameters:
  path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
  dataStream - The DataStream out of which to create the view.
  schema - The customized schema for the final table.
 
- 
toDataStream
org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> toDataStream(org.apache.flink.table.api.Table table)
Converts the given Table into a DataStream.
Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the table-to-stream conversion. The records of class Row will always describe RowKind.INSERT changes. Updating tables are not supported by this method and will produce an exception.
If you want to convert the Table to a specific class or data type, use toDataStream(Table, Class) or toDataStream(Table, AbstractDataType) instead.
Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well.
- Parameters:
  table - The Table to convert. It must be insert-only.
- Returns:
  The converted DataStream.
- See Also:
  toDataStream(Table, AbstractDataType), toChangelogStream(Table)
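For illustration, a minimal sketch of the insert-only conversion (the orders view is assumed to be registered):

    Table table = tableEnv.sqlQuery("SELECT name, amount FROM orders WHERE amount > 10");

    // Every record is a Row with RowKind.INSERT; an updating query,
    // e.g. a non-windowed aggregation, would fail here.
    DataStream<Row> stream = tableEnv.toDataStream(table);
    stream.print();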
 
- 
toDataStream
<T> org.apache.flink.streaming.api.datastream.DataStream<T> toDataStream(org.apache.flink.table.api.Table table, Class<T> targetClass)
Converts the given Table into a DataStream of the given Class.
See toDataStream(Table, AbstractDataType) for more information on how a Table is translated into a DataStream.
This method is a shortcut for:

    tableEnv.toDataStream(table, DataTypes.of(targetClass))

Calling this method with a class of Row will redirect to toDataStream(Table).
- Type Parameters:
  T - External record.
- Parameters:
  table - The Table to convert. It must be insert-only.
  targetClass - The Class that decides about the final external representation in DataStream records.
- Returns:
  The converted DataStream.
- See Also:
  toChangelogStream(Table, Schema)
 
- 
toDataStream
<T> org.apache.flink.streaming.api.datastream.DataStream<T> toDataStream(org.apache.flink.table.api.Table table, org.apache.flink.table.types.AbstractDataType<?> targetDataType)
Converts the given Table into a DataStream of the given DataType.
The given DataType is used to configure the table runtime to convert columns and internal data structures to the desired representation. The following example shows how to convert the table columns into the fields of a POJO type.

    // given a Table of (name STRING, age INT)

    public static class MyPojo {
        public String name;
        public Integer age;

        // default constructor for DataStream API
        public MyPojo() {}

        // fully assigning constructor for field order in Table API
        public MyPojo(String name, Integer age) {
            this.name = name;
            this.age = age;
        }
    }

    tableEnv.toDataStream(table, DataTypes.of(MyPojo.class));

Since the DataStream API does not support changelog processing natively, this method assumes append-only/insert-only semantics during the table-to-stream conversion. Updating tables are not supported by this method and will produce an exception.
Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well.
- Type Parameters:
  T - External record.
- Parameters:
  table - The Table to convert. It must be insert-only.
  targetDataType - The DataType that decides about the final external representation in DataStream records.
- Returns:
  The converted DataStream.
- See Also:
  toDataStream(Table), toChangelogStream(Table, Schema)
 
- 
toChangelogStream
org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> toChangelogStream(org.apache.flink.table.api.Table table)
Converts the given Table into a DataStream of changelog entries.
Compared to toDataStream(Table), this method produces instances of Row and sets the RowKind flag that is contained in every record during runtime. The runtime behavior is similar to that of a DynamicTableSink.
This method can emit a changelog containing all kinds of changes (enumerated in RowKind) that the given updating table requires as the default ChangelogMode. Use toChangelogStream(Table, Schema, ChangelogMode) to limit the kinds of changes (e.g. for upsert mode).
Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well.
- Parameters:
  table - The Table to convert. It can be updating or insert-only.
- Returns:
  The converted changelog stream of Row.
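For illustration, a hedged sketch of consuming an updating result as a changelog (the clicks view is assumed to be registered):

    // A non-windowed aggregation produces an updating table.
    Table counts = tableEnv.sqlQuery("SELECT f0, COUNT(*) AS cnt FROM clicks GROUP BY f0");

    // Each emitted Row carries a RowKind such as +I, -U, or +U.
    DataStream<Row> changelog = tableEnv.toChangelogStream(counts);
    changelog.map(row -> row.getKind().shortString() + " " + row).print();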
 
- 
toChangelogStream
org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> toChangelogStream(org.apache.flink.table.api.Table table, org.apache.flink.table.api.Schema targetSchema)
Converts the given Table into a DataStream of changelog entries.
Compared to toDataStream(Table), this method produces instances of Row and sets the RowKind flag that is contained in every record during runtime. The runtime behavior is similar to that of a DynamicTableSink.
This method can emit a changelog containing all kinds of changes (enumerated in RowKind) that the given updating table requires as the default ChangelogMode. Use toChangelogStream(Table, Schema, ChangelogMode) to limit the kinds of changes (e.g. for upsert mode).
The given Schema is used to configure the table runtime to convert columns and internal data structures to the desired representation. The following example shows how to convert a table column into a POJO type.

    // given a Table of (id BIGINT, payload ROW<name STRING, age INT>)

    public static class MyPojo {
        public String name;
        public Integer age;

        // default constructor for DataStream API
        public MyPojo() {}

        // fully assigning constructor for field order in Table API
        public MyPojo(String name, Integer age) {
            this.name = name;
            this.age = age;
        }
    }

    tableEnv.toChangelogStream(
        table,
        Schema.newBuilder()
            .column("id", DataTypes.BIGINT())
            .column("payload", DataTypes.of(MyPojo.class)) // force an implicit conversion
            .build());

Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well.
If the rowtime should not be a concrete field in the final Row anymore, or the schema should be symmetrical for both fromChangelogStream(DataStream) and toChangelogStream(Table), the rowtime can also be declared as a metadata column that will be propagated into a stream record's timestamp. It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
The following examples illustrate common schema declarations and their semantics:

    // given a Table of (id INT, name STRING, my_rowtime TIMESTAMP_LTZ(3))

    // === EXAMPLE 1 ===
    // no physical columns defined, they will be derived automatically,
    // the last derived physical column will be skipped in favor of the metadata column

    Schema.newBuilder()
        .columnByMetadata("rowtime", "TIMESTAMP_LTZ(3)")
        .build()

    // equal to: CREATE TABLE (id INT, name STRING, rowtime TIMESTAMP_LTZ(3) METADATA)

    // === EXAMPLE 2 ===
    // physical columns defined, all columns must be defined

    Schema.newBuilder()
        .column("id", "INT")
        .column("name", "STRING")
        .columnByMetadata("rowtime", "TIMESTAMP_LTZ(3)")
        .build()

    // equal to: CREATE TABLE (id INT, name STRING, rowtime TIMESTAMP_LTZ(3) METADATA)

- Parameters:
  table - The Table to convert. It can be updating or insert-only.
  targetSchema - The Schema that decides about the final external representation in DataStream records.
- Returns:
  The converted changelog stream of Row.
 
- 
toChangelogStream
org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.types.Row> toChangelogStream(org.apache.flink.table.api.Table table, org.apache.flink.table.api.Schema targetSchema, org.apache.flink.table.connector.ChangelogMode changelogMode)
Converts the given Table into a DataStream of changelog entries.
Compared to toDataStream(Table), this method produces instances of Row and sets the RowKind flag that is contained in every record during runtime. The runtime behavior is similar to that of a DynamicTableSink.
This method requires an explicitly declared ChangelogMode. For example, use ChangelogMode.upsert() if the stream will not contain RowKind.UPDATE_BEFORE, or ChangelogMode.insertOnly() for non-updating streams.
Note that the type system of the table ecosystem is richer than the one of the DataStream API. The table runtime will make sure to properly serialize the output records to the first operator of the DataStream API. Afterwards, the Types semantics of the DataStream API need to be considered.
If the input table contains a single rowtime column, it will be propagated into a stream record's timestamp. Watermarks will be propagated as well. However, it is also possible to write out the rowtime as a metadata column. See toChangelogStream(Table, Schema) for more information and examples on how to declare a Schema.
- Parameters:
  table - The Table to convert. It can be updating or insert-only.
  targetSchema - The Schema that decides about the final external representation in DataStream records.
  changelogMode - The required kinds of changes in the result changelog. An exception will be thrown if the given updating table cannot be represented in this changelog mode.
- Returns:
  The converted changelog stream of Row.
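For illustration, a hedged sketch of restricting the output to upsert mode (the clicks view, column names, and the declared key are illustrative):

    Table counts = tableEnv.sqlQuery("SELECT f0, COUNT(*) AS cnt FROM clicks GROUP BY f0");

    // UPDATE_BEFORE messages are suppressed; downstream consumers
    // deduplicate by the declared key instead.
    DataStream<Row> upserts = tableEnv.toChangelogStream(
            counts,
            Schema.newBuilder()
                    .column("f0", "STRING")
                    .column("cnt", "BIGINT")
                    .primaryKey("f0")
                    .build(),
            ChangelogMode.upsert());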
 
- 
createStatementSet
StreamStatementSet createStatementSet()
Returns a StatementSet that integrates with the Java-specific DataStream API.
It accepts pipelines defined by DML statements or Table objects. The planner can optimize all added statements together and then either submit them as one job or attach them to the underlying StreamExecutionEnvironment.
- Specified by:
  createStatementSet in interface org.apache.flink.table.api.TableEnvironment
- Returns:
  A statement set builder for the Java-specific DataStream API.
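For illustration, a hedged sketch of batching two INSERT statements and attaching them to the surrounding DataStream program (table names are illustrative):

    StreamStatementSet stmtSet = tableEnv.createStatementSet();
    stmtSet.addInsertSql("INSERT INTO sink_a SELECT * FROM source_1");
    stmtSet.addInsertSql("INSERT INTO sink_b SELECT * FROM source_2");

    // Either submit all optimized statements as one job ...
    // stmtSet.execute();

    // ... or weave them into the DataStream program so that a single
    // env.execute() call runs everything.
    stmtSet.attachAsDataStream();
    env.execute();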
 
- 
fromDataStream
@Deprecated <T> org.apache.flink.table.api.Table fromDataStream(org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, org.apache.flink.table.expressions.Expression... fields)
Deprecated. Use fromDataStream(DataStream, Schema) instead. In most cases, fromDataStream(DataStream) should already be sufficient. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can consume. The semantics might be slightly different for raw and structured types.
Converts the given DataStream into a Table with specified field names.
There are two modes for mapping original fields to the fields of the Table:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

    DataStream<Tuple2<String, Long>> stream = ...
    Table table = tableEnv.fromDataStream(
        stream,
        $("f1"), // reorder and use the original field
        $("rowtime").rowtime(), // extract the internally attached timestamp into an
                                // event-time attribute named 'rowtime'
        $("f0").as("name") // reorder and give the original field a better name
    );

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

    DataStream<Tuple2<String, Long>> stream = ...
    Table table = tableEnv.fromDataStream(
        stream,
        $("a"), // rename the first field to 'a'
        $("b"), // rename the second field to 'b'
        $("rowtime").rowtime() // extract the internally attached timestamp into an
                               // event-time attribute named 'rowtime'
    );

- Type Parameters:
  T - The type of the DataStream.
- Parameters:
  dataStream - The DataStream to be converted.
  fields - The field expressions to map original fields of the DataStream to the fields of the Table.
- Returns:
  The converted Table.
 
- 
registerDataStream
@Deprecated <T> void registerDataStream(String name, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream)
Deprecated. Use createTemporaryView(String, DataStream) instead.
Creates a view from the given DataStream. Registered views can be referenced in SQL queries.
The field names of the Table are automatically derived from the type of the DataStream.
The view is registered in the namespace of the current catalog and database. To register the view in a different catalog use createTemporaryView(String, DataStream).
Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
- Type Parameters:
  T - The type of the DataStream to register.
- Parameters:
  name - The name under which the DataStream is registered in the catalog.
  dataStream - The DataStream to register.
 
- 
createTemporaryView
@Deprecated <T> void createTemporaryView(String path, org.apache.flink.streaming.api.datastream.DataStream<T> dataStream, org.apache.flink.table.expressions.Expression... fields)
Deprecated. Use createTemporaryView(String, DataStream, Schema) instead. In most cases, createTemporaryView(String, DataStream) should already be sufficient. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can consume. The semantics might be slightly different for raw and structured types.
Creates a view from the given DataStream in a given path with specified field names. Registered views can be referenced in SQL queries.
There are two modes for mapping original fields to the fields of the view:
1. Reference input fields by name: All fields in the schema definition are referenced by name (and possibly renamed using an alias (as)). Moreover, we can define proctime and rowtime attributes at arbitrary positions using arbitrary names (except those that exist in the result schema). In this mode, fields can be reordered and projected out. This mode can be used for any input type, including POJOs.
Example:

    DataStream<Tuple2<String, Long>> stream = ...
    tableEnv.createTemporaryView(
        "cat.db.myTable",
        stream,
        $("f1"), // reorder and use the original field
        $("rowtime").rowtime(), // extract the internally attached timestamp into an
                                // event-time attribute named 'rowtime'
        $("f0").as("name") // reorder and give the original field a better name
    );

2. Reference input fields by position: In this mode, fields are simply renamed. Event-time attributes can replace the field on their position in the input data (if it is of correct type) or be appended at the end. Proctime attributes must be appended at the end. This mode can only be used if the input type has a defined field order (tuple, case class, Row) and none of the fields references a field of the input type.
Example:

    DataStream<Tuple2<String, Long>> stream = ...
    tableEnv.createTemporaryView(
        "cat.db.myTable",
        stream,
        $("a"), // rename the first field to 'a'
        $("b"), // rename the second field to 'b'
        $("rowtime").rowtime() // adds an event-time attribute named 'rowtime'
    );

Temporary objects can shadow permanent ones. If a permanent object in a given path exists, it will be inaccessible in the current session. To make the permanent object available again you can drop the corresponding temporary object.
- Type Parameters:
  T - The type of the DataStream.
- Parameters:
  path - The path under which the DataStream is created. See also the TableEnvironment class description for the format of the path.
  dataStream - The DataStream out of which to create the view.
  fields - The field expressions to map original fields of the DataStream to the fields of the view.
 
- 
toAppendStream
@Deprecated <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Deprecated. Use toDataStream(Table, Class) instead. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can produce. The semantics might be slightly different for raw and structured types. Use toDataStream(DataTypes.of(TypeInformation.of(Class))) if TypeInformation should be used as source of truth.
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
- Type Parameters:
  T - The type of the resulting DataStream.
- Parameters:
  table - The Table to convert.
  clazz - The class of the type of the resulting DataStream.
- Returns:
  The converted DataStream.
 
- 
toAppendStream
@Deprecated <T> org.apache.flink.streaming.api.datastream.DataStream<T> toAppendStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Deprecated. Use toDataStream(Table, Class) instead. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can produce. The semantics might be slightly different for raw and structured types. Use toDataStream(DataTypes.of(TypeInformation.of(Class))) if TypeInformation should be used as source of truth.
Converts the given Table into an append DataStream of a specified type.
The Table must only have insert (append) changes. If the Table is also modified by update or delete changes, the conversion will fail.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
- Type Parameters:
  T - The type of the resulting DataStream.
- Parameters:
  table - The Table to convert.
  typeInfo - The TypeInformation that specifies the type of the DataStream.
- Returns:
  The converted DataStream.
 
- 
toRetractStream
@Deprecated <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, Class<T> clazz)
Deprecated. Use toChangelogStream(Table, Schema) instead. It integrates with the new type system and supports all kinds of DataTypes and every ChangelogMode that the table runtime can produce.
Converts the given Table into a DataStream of add and retract messages. The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
- Type Parameters:
  T - The type of the requested record type.
- Parameters:
  table - The Table to convert.
  clazz - The class of the requested record type.
- Returns:
  The converted DataStream.
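For legacy context, a hedged sketch of how the Tuple2 encoding is typically consumed (the clicks view is illustrative):

    Table counts = tableEnv.sqlQuery("SELECT f0, COUNT(*) AS cnt FROM clicks GROUP BY f0");

    // true = add message, false = retract message for a previously emitted record
    DataStream<Tuple2<Boolean, Row>> retractStream =
            tableEnv.toRetractStream(counts, Row.class);
    retractStream.print();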
 
- 
toRetractStream
@Deprecated <T> org.apache.flink.streaming.api.datastream.DataStream<org.apache.flink.api.java.tuple.Tuple2<Boolean,T>> toRetractStream(org.apache.flink.table.api.Table table, org.apache.flink.api.common.typeinfo.TypeInformation<T> typeInfo)
Deprecated. Use toChangelogStream(Table, Schema) instead. It integrates with the new type system and supports all kinds of DataTypes and every ChangelogMode that the table runtime can produce.
Converts the given Table into a DataStream of add and retract messages. The message will be encoded as Tuple2. The first field is a Boolean flag, the second field holds the record of the specified type T.
A true Boolean flag indicates an add message, a false flag indicates a retract message.
The fields of the Table are mapped to DataStream fields as follows:
- Row and Tuple types: Fields are mapped by position, field types must match.
- POJO DataStream types: Fields are mapped by field name, field types must match.
- Type Parameters:
  T - The type of the requested record type.
- Parameters:
  table - The Table to convert.
  typeInfo - The TypeInformation of the requested record type.
- Returns:
  The converted DataStream.
 