Class TableImpl
- java.lang.Object
  - org.apache.flink.table.api.internal.TableImpl

- All Implemented Interfaces: Executable, Explainable<Table>, Table

@Internal public class TableImpl extends Object implements Table
Implementation for Table.
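For orientation, a minimal, hedged sketch of how a Table handle (whose concrete runtime class is TableImpl) is typically obtained; the table name "MyTable" and the environment settings are assumptions, and the classes come from org.apache.flink.table.api:
// assumed setup; "MyTable" must already be registered in the catalog
TableEnvironment tableEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inStreamingMode().build());
// from(...) returns a Table whose implementation is TableImpl
Table table = tableEnv.from("MyTable");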
Method Summary

Table addColumns(org.apache.flink.table.expressions.Expression... fields)
    Adds additional columns.
Table addOrReplaceColumns(org.apache.flink.table.expressions.Expression... fields)
    Adds additional columns.
AggregatedTable aggregate(org.apache.flink.table.expressions.Expression aggregateFunction)
    Performs a global aggregate operation with an aggregate function.
Table as(String field, String... fields)
    Renames the fields of the expression result.
Table as(org.apache.flink.table.expressions.Expression... fields)
    Renames the fields of the expression result.
static TableImpl createTable(TableEnvironmentInternal tableEnvironment, QueryOperation operationTree, OperationTreeBuilder operationTreeBuilder, FunctionLookup functionLookup)
org.apache.flink.table.functions.TemporalTableFunction createTemporalTableFunction(org.apache.flink.table.expressions.Expression timeAttribute, org.apache.flink.table.expressions.Expression primaryKey)
    Creates a TemporalTableFunction backed by this table as a history table.
Table distinct()
    Removes duplicate values and returns only distinct (different) values.
Table dropColumns(org.apache.flink.table.expressions.Expression... fields)
    Drops existing columns.
TableResult execute()
    Executes this object.
String explain(ExplainFormat format, ExplainDetail... extraDetails)
    Returns the AST of this object and the execution plan to compute the result of the given statement.
Table fetch(int fetch)
    Limits a (possibly sorted) result to the first n rows.
Table filter(org.apache.flink.table.expressions.Expression predicate)
    Filters out elements that don't pass the filter predicate.
FlatAggregateTable flatAggregate(org.apache.flink.table.expressions.Expression tableAggregateFunction)
    Performs a global flatAggregate without groupBy.
Table flatMap(org.apache.flink.table.expressions.Expression tableFunction)
    Performs a flatMap operation with a user-defined table function or built-in table function.
Table fullOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
    Joins two Tables.
QueryOperation getQueryOperation()
    Returns the underlying logical representation of this table.
org.apache.flink.table.catalog.ResolvedSchema getResolvedSchema()
    Returns the resolved schema of this table.
TableEnvironment getTableEnvironment()
GroupedTable groupBy(org.apache.flink.table.expressions.Expression... fields)
    Groups the elements on some grouping keys.
TablePipeline insertInto(String tablePath)
    Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.
TablePipeline insertInto(String tablePath, boolean overwrite)
    Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.
TablePipeline insertInto(TableDescriptor descriptor)
    Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
TablePipeline insertInto(TableDescriptor descriptor, boolean overwrite)
    Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
Table intersect(Table right)
    Intersects two Tables with duplicate records removed.
Table intersectAll(Table right)
    Intersects two Tables.
Table join(Table right)
    Joins two Tables.
Table join(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
    Joins two Tables.
Table joinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall)
    Joins this Table with a user-defined TableFunction.
Table joinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall, org.apache.flink.table.expressions.Expression joinPredicate)
    Joins this Table with a user-defined TableFunction.
Table leftOuterJoin(Table right)
    Joins two Tables.
Table leftOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
    Joins two Tables.
Table leftOuterJoinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall)
    Joins this Table with a user-defined TableFunction.
Table leftOuterJoinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall, org.apache.flink.table.expressions.Expression joinPredicate)
    Joins this Table with a user-defined TableFunction.
Table map(org.apache.flink.table.expressions.Expression mapFunction)
    Performs a map operation with a user-defined scalar function or built-in scalar function.
Table minus(Table right)
    Minus of two Tables with duplicate records removed.
Table minusAll(Table right)
    Minus of two Tables.
Table offset(int offset)
    Limits a (possibly sorted) result from an offset position.
Table orderBy(org.apache.flink.table.expressions.Expression... fields)
    Sorts the given Table.
void printSchema()
    Prints the schema of this table to the console in a summary format.
Table renameColumns(org.apache.flink.table.expressions.Expression... fields)
    Renames existing columns.
Table rightOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
    Joins two Tables.
Table select(org.apache.flink.table.expressions.Expression... fields)
    Performs a selection operation.
String toString()
Table union(Table right)
    Unions two Tables with duplicate records removed.
Table unionAll(Table right)
    Unions two Tables.
Table where(org.apache.flink.table.expressions.Expression predicate)
    Filters out elements that don't pass the filter predicate.
GroupWindowedTable window(GroupWindow groupWindow)
    Groups the records of a table by assigning them to windows defined by a time or row interval.
OverWindowedTable window(OverWindow... overWindows)
    Defines over-windows on the records of a table.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Methods inherited from interface org.apache.flink.table.api.Explainable
explain, printExplain
-
Methods inherited from interface org.apache.flink.table.api.Table
executeInsert, executeInsert, executeInsert, executeInsert, getSchema, limit, limit
Method Detail
-
getTableEnvironment
public TableEnvironment getTableEnvironment()
-
createTable
public static TableImpl createTable(TableEnvironmentInternal tableEnvironment, QueryOperation operationTree, OperationTreeBuilder operationTreeBuilder, FunctionLookup functionLookup)
-
getResolvedSchema
public org.apache.flink.table.catalog.ResolvedSchema getResolvedSchema()
Description copied from interface: Table
Returns the resolved schema of this table.
- Specified by: getResolvedSchema in interface Table
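A minimal, hedged usage sketch (assuming an existing Table named table and that the column names are read via ResolvedSchema.getColumnNames() from the catalog API):
org.apache.flink.table.catalog.ResolvedSchema schema = table.getResolvedSchema();
// list the resolved column names
java.util.List<String> columnNames = schema.getColumnNames();
columnNames.forEach(System.out::println);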
-
printSchema
public void printSchema()
Description copied from interface: Table
Prints the schema of this table to the console in a summary format.
- Specified by: printSchema in interface Table
-
getQueryOperation
public QueryOperation getQueryOperation()
Description copied from interface: Table
Returns the underlying logical representation of this table.
- Specified by: getQueryOperation in interface Table
-
select
public Table select(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Performs a selection operation. Similar to a SQL SELECT statement. The field expressions can contain complex expressions and aggregations.
Java Example:
tab.select($("key"), $("value").avg().plus(" The average").as("average"));
Scala Example:
tab.select($"key", $"value".avg + " The average" as "average")
-
createTemporalTableFunction
public org.apache.flink.table.functions.TemporalTableFunction createTemporalTableFunction(org.apache.flink.table.expressions.Expression timeAttribute, org.apache.flink.table.expressions.Expression primaryKey)
Description copied from interface: Table
Creates a TemporalTableFunction backed by this table as a history table. Temporal Tables represent a concept of a table that changes over time and for which Flink keeps track of those changes. A TemporalTableFunction provides a way to access this data. For more information, please check Flink's documentation on Temporal Tables.
Currently, TemporalTableFunctions are only supported in streaming.
- Specified by: createTemporalTableFunction in interface Table
- Parameters:
timeAttribute - Must point to a time indicator. Provides a way to compare which records are newer or older versions.
primaryKey - Defines the primary key. With a primary key it is possible to update or delete a row.
- Returns:
A TemporalTableFunction, which is an instance of TableFunction. It takes a single argument, the timeAttribute, for which it returns the matching version of the Table from which the TemporalTableFunction was created.
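A minimal, hedged usage sketch; the ratesHistory table and its columns r_proctime and r_currency are assumptions, and registering the result via TableEnvironment.createTemporarySystemFunction is one common way to use it afterwards:
// ratesHistory, r_proctime and r_currency are assumed example names
org.apache.flink.table.functions.TemporalTableFunction rates =
    ratesHistory.createTemporalTableFunction($("r_proctime"), $("r_currency"));
tableEnv.createTemporarySystemFunction("rates", rates);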
-
as
public Table as(String field, String... fields)
Description copied from interface: Table
Renames the fields of the expression result. Use this to disambiguate fields before joining two operations.
Example:
tab.as("a", "b")
-
as
public Table as(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Renames the fields of the expression result. Use this to disambiguate fields before joining two operations.
Java Example:
tab.as($("a"), $("b"))
Scala Example:
tab.as($"a", $"b")
-
filter
public Table filter(org.apache.flink.table.expressions.Expression predicate)
Description copied from interface: Table
Filters out elements that don't pass the filter predicate. Similar to a SQL WHERE clause.
Java Example:
tab.filter($("name").isEqual("Fred"));
Scala Example:
tab.filter($"name" === "Fred")
-
where
public Table where(org.apache.flink.table.expressions.Expression predicate)
Description copied from interface: Table
Filters out elements that don't pass the filter predicate. Similar to a SQL WHERE clause.
Java Example:
tab.where($("name").isEqual("Fred"));
Scala Example:
tab.where($"name" === "Fred")
-
groupBy
public GroupedTable groupBy(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Groups the elements on some grouping keys. Use this before a selection with aggregations to perform the aggregation on a per-group basis. Similar to a SQL GROUP BY statement.
Java Example:
tab.groupBy($("key")).select($("key"), $("value").avg());
Scala Example:
tab.groupBy($"key").select($"key", $"value".avg)
-
distinct
public Table distinct()
Description copied from interface: Table
Removes duplicate values and returns only distinct (different) values.
Example:
tab.select($("key"), $("value")).distinct();
-
join
public Table join(Table right)
Description copied from interface: Table
Joins two Tables. Similar to a SQL join. The fields of the two joined operations must not overlap; use as to rename fields if necessary. You can use where and select clauses after a join to further specify the behaviour of the join.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.join(right)
    .where($("a").isEqual($("b")).and($("c").isGreater(3)))
    .select($("a"), $("b"), $("d"));
-
join
public Table join(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Description copied from interface: Table
Joins two Tables. Similar to a SQL join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment.
Java Example:
left.join(right, $("a").isEqual($("b")))
    .select($("a"), $("b"), $("d"));
Scala Example:
left.join(right, $"a" === $"b")
    .select($"a", $"b", $"d")
-
leftOuterJoin
public Table leftOuterJoin(Table right)
Description copied from interface: Table
Joins two Tables. Similar to a SQL left outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Example:
left.leftOuterJoin(right)
    .select($("a"), $("b"), $("d"));
- Specified by: leftOuterJoin in interface Table
-
leftOuterJoin
public Table leftOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Description copied from interface: Table
Joins two Tables. Similar to a SQL left outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Java Example:
left.leftOuterJoin(right, $("a").isEqual($("b")))
    .select($("a"), $("b"), $("d"));
Scala Example:
left.leftOuterJoin(right, $"a" === $"b")
    .select($"a", $"b", $"d")
- Specified by: leftOuterJoin in interface Table
-
rightOuterJoin
public Table rightOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Description copied from interface: Table
Joins two Tables. Similar to a SQL right outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Java Example:
left.rightOuterJoin(right, $("a").isEqual($("b")))
    .select($("a"), $("b"), $("d"));
Scala Example:
left.rightOuterJoin(right, $"a" === $"b")
    .select($"a", $"b", $"d")
- Specified by: rightOuterJoin in interface Table
-
fullOuterJoin
public Table fullOuterJoin(Table right, org.apache.flink.table.expressions.Expression joinPredicate)
Description copied from interface: Table
Joins two Tables. Similar to a SQL full outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.
Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Java Example:
left.fullOuterJoin(right, $("a").isEqual($("b")))
    .select($("a"), $("b"), $("d"));
Scala Example:
left.fullOuterJoin(right, $"a" === $"b")
    .select($"a", $"b", $"d")
- Specified by: fullOuterJoin in interface Table
-
joinLateral
public Table joinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall)
Description copied from interface: Table
Joins this Table with a user-defined TableFunction. This join is similar to a SQL inner join with an ON TRUE predicate, but works with a table function. Each row of the table is joined with all rows produced by the table function.
Java Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}

table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"))
     .select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
    def eval(str: String): Unit = {
        str.split("#").foreach(collect)
    }
}

val split = new MySplitUDTF()
table.joinLateral(split($"c") as "s")
     .select($"a", $"b", $"c", $"s")
- Specified by: joinLateral in interface Table
-
joinLateral
public Table joinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall, org.apache.flink.table.expressions.Expression joinPredicate)
Description copied from interface: Table
Joins this Table with a user-defined TableFunction. This join is similar to a SQL inner join, but works with a table function. Each row of the table is joined with all rows produced by the table function.
Java Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}

table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"), $("a").isEqual($("s")))
     .select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
    def eval(str: String): Unit = {
        str.split("#").foreach(collect)
    }
}

val split = new MySplitUDTF()
table.joinLateral(split($"c") as "s", $"a" === $"s")
     .select($"a", $"b", $"c", $"s")
- Specified by: joinLateral in interface Table
-
leftOuterJoinLateral
public Table leftOuterJoinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall)
Description copied from interface: Table
Joins this Table with a user-defined TableFunction. This join is similar to a SQL left outer join with an ON TRUE predicate, but works with a table function. Each row of the table is joined with all rows produced by the table function. If the table function does not produce any row, the outer row is padded with nulls.
Java Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}

table.leftOuterJoinLateral(call(MySplitUDTF.class, $("c")).as("s"))
     .select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
    def eval(str: String): Unit = {
        str.split("#").foreach(collect)
    }
}

val split = new MySplitUDTF()
table.leftOuterJoinLateral(split($"c") as "s")
     .select($"a", $"b", $"c", $"s")
- Specified by: leftOuterJoinLateral in interface Table
-
leftOuterJoinLateral
public Table leftOuterJoinLateral(org.apache.flink.table.expressions.Expression tableFunctionCall, org.apache.flink.table.expressions.Expression joinPredicate)
Description copied from interface: Table
Joins this Table with a user-defined TableFunction. This join is similar to a SQL left outer join with an ON TRUE predicate, but works with a table function. Each row of the table is joined with all rows produced by the table function. If the table function does not produce any row, the outer row is padded with nulls.
Java Example:
class MySplitUDTF extends TableFunction<String> {
    public void eval(String str) {
        for (String s : str.split("#")) {
            collect(s);
        }
    }
}

table.leftOuterJoinLateral(call(MySplitUDTF.class, $("c")).as("s"), $("a").isEqual($("s")))
     .select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
    def eval(str: String): Unit = {
        str.split("#").foreach(collect)
    }
}

val split = new MySplitUDTF()
table.leftOuterJoinLateral(split($"c") as "s", $"a" === $"s")
     .select($"a", $"b", $"c", $"s")
- Specified by: leftOuterJoinLateral in interface Table
-
minus
public Table minus(Table right)
Description copied from interface: Table
Minus of two Tables with duplicate records removed. Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not exist in the right table. Duplicate records in the left table are returned exactly once, i.e., duplicates are removed. Both tables must have identical field types.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.minus(right);
-
minusAll
public Table minusAll(Table right)
Description copied from interface: Table
Minus of two Tables. Similar to a SQL EXCEPT ALL clause. MinusAll returns the records that do not exist in the right table. A record that is present n times in the left table and m times in the right table is returned (n - m) times, i.e., as many duplicates as are present in the right table are removed. Both tables must have identical field types.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.minusAll(right);
-
union
public Table union(Table right)
Description copied from interface: Table
Unions two Tables with duplicate records removed. Similar to a SQL UNION. The fields of the two union operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.union(right);
-
unionAll
public Table unionAll(Table right)
Description copied from interface: Table
Unions two Tables. Similar to a SQL UNION ALL. The fields of the two union operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.unionAll(right);
-
intersect
public Table intersect(Table right)
Description copied from interface: Table
Intersects two Tables with duplicate records removed. Intersect returns records that exist in both tables. If a record is present in one or both tables more than once, it is returned just once, i.e., the resulting table has no duplicate records. Similar to a SQL INTERSECT. The fields of the two intersect operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.intersect(right);
-
intersectAll
public Table intersectAll(Table right)
Description copied from interface: Table
Intersects two Tables. IntersectAll returns records that exist in both tables. If a record is present in both tables more than once, it is returned as many times as it is present in both tables, i.e., the resulting table might have duplicate records. Similar to a SQL INTERSECT ALL. The fields of the two intersect operations must fully overlap.
Note: Both tables must be bound to the same TableEnvironment.
Example:
left.intersectAll(right);
- Specified by: intersectAll in interface Table
-
orderBy
public Table orderBy(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Sorts the given Table. Similar to SQL ORDER BY.
The resulting Table is globally sorted across all parallel partitions.
Java Example:
tab.orderBy($("name").desc());
Scala Example:
tab.orderBy($"name".desc)
For unbounded tables, this operation requires a sorting on a time attribute or a subsequent fetch operation.
-
offset
public Table offset(int offset)
Description copied from interface: Table
Limits a (possibly sorted) result from an offset position.
This method can be combined with a preceding Table.orderBy(Expression...) call for a deterministic order and a subsequent Table.fetch(int) call to return n rows after skipping the first o rows.
Example:
// skips the first 3 rows and returns all following rows.
tab.orderBy($("name").desc()).offset(3);
// skips the first 10 rows and returns the next 5 rows.
tab.orderBy($("name").desc()).offset(10).fetch(5);
For unbounded tables, this operation requires a subsequent fetch operation.
-
fetch
public Table fetch(int fetch)
Description copied from interface: Table
Limits a (possibly sorted) result to the first n rows.
This method can be combined with a preceding Table.orderBy(Expression...) call for a deterministic order and a Table.offset(int) call to return n rows after skipping the first o rows.
Example:
// returns the first 3 records.
tab.orderBy($("name").desc()).fetch(3);
// skips the first 10 rows and returns the next 5 rows.
tab.orderBy($("name").desc()).offset(10).fetch(5);
-
window
public GroupWindowedTable window(GroupWindow groupWindow)
Description copied from interface: Table
Groups the records of a table by assigning them to windows defined by a time or row interval.
For streaming tables of infinite size, grouping into windows is required to define finite groups on which group-based aggregates can be computed.
For batch tables of finite size, windowing essentially provides shortcuts for time-based groupBy.
Note: Computing windowed aggregates on a streaming table is only a parallel operation if additional grouping attributes are added to the groupBy(...) clause. If the groupBy(...) only references a GroupWindow alias, the streamed table will be processed by a single task, i.e., with parallelism 1.
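Since the copied description carries no snippet, here is a minimal, hedged sketch of a tumbling group window; the table tab, its time attribute rowtime, and the columns key and value are assumptions (Tumble comes from org.apache.flink.table.api, $ and lit from Expressions):
// assumes: import org.apache.flink.table.api.Tumble;
// and static imports of Expressions.$ and Expressions.lit
tab.window(Tumble.over(lit(10).minutes()).on($("rowtime")).as("w"))
   .groupBy($("w"), $("key"))
   .select($("key"), $("value").sum(), $("w").start());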
-
window
public OverWindowedTable window(OverWindow... overWindows)
Description copied from interface: Table
Defines over-windows on the records of a table.
An over-window defines for each record an interval of records over which aggregation functions can be computed.
Java Example:
table
    .window(Over.partitionBy($("c")).orderBy($("rowTime")).preceding(lit(10).seconds()).as("ow"))
    .select($("c"), $("b").count().over($("ow")), $("e").sum().over($("ow")));
Scala Example:
table
    .window(Over partitionBy $"c" orderBy $"rowTime" preceding 10.seconds as "ow")
    .select($"c", $"b".count over $"ow", $"e".sum over $"ow")
Note: Computing over window aggregates on a streaming table is only a parallel operation if the window is partitioned. Otherwise, the whole stream will be processed by a single task, i.e., with parallelism 1.
Note: Over-windows for batch tables are currently not supported.
-
addColumns
public Table addColumns(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Adds additional columns. Similar to a SQL SELECT statement. The field expressions can contain complex expressions, but cannot contain aggregations. It will throw an exception if the added fields already exist.
Java Example:
tab.addColumns(
    $("a").plus(1).as("a1"),
    concat($("b"), "sunny").as("b1")
);
Scala Example:
tab.addColumns(
    $"a" + 1 as "a1",
    concat($"b", "sunny") as "b1"
)
- Specified by: addColumns in interface Table
-
addOrReplaceColumns
public Table addOrReplaceColumns(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Adds additional columns. Similar to a SQL SELECT statement. The field expressions can contain complex expressions, but cannot contain aggregations. Existing fields will be replaced. If the added fields have duplicate field names, then the last one is used.
Java Example:
tab.addOrReplaceColumns(
    $("a").plus(1).as("a1"),
    concat($("b"), "sunny").as("b1")
);
Scala Example:
tab.addOrReplaceColumns(
    $"a" + 1 as "a1",
    concat($"b", "sunny") as "b1"
)
- Specified by: addOrReplaceColumns in interface Table
-
renameColumns
public Table renameColumns(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Renames existing columns. Similar to a field alias statement. The field expressions should be alias expressions, and only existing fields can be renamed.
Java Example:
tab.renameColumns(
    $("a").as("a1"),
    $("b").as("b1")
);
Scala Example:
tab.renameColumns(
    $"a" as "a1",
    $"b" as "b1"
)
- Specified by: renameColumns in interface Table
-
dropColumns
public Table dropColumns(org.apache.flink.table.expressions.Expression... fields)
Description copied from interface: Table
Drops existing columns. The field expressions should be field reference expressions.
Java Example:
tab.dropColumns($("a"), $("b"));
Scala Example:
tab.dropColumns($"a", $"b")
- Specified by: dropColumns in interface Table
-
map
public Table map(org.apache.flink.table.expressions.Expression mapFunction)
Description copied from interface: Table
Performs a map operation with a user-defined scalar function or a built-in scalar function. The output will be flattened if the output type is a composite type.
Java Example:
tab.map(call(MyMapFunction.class, $("c")))
Scala Example:
val func = new MyMapFunction()
tab.map(func($"c"))
-
flatMap
public Table flatMap(org.apache.flink.table.expressions.Expression tableFunction)
Description copied from interface: Table
Performs a flatMap operation with a user-defined table function or a built-in table function. The output will be flattened if the output type is a composite type.
Java Example:
tab.flatMap(call(MyFlatMapFunction.class, $("c")))
Scala Example:
val func = new MyFlatMapFunction()
tab.flatMap(func($"c"))
-
aggregate
public AggregatedTable aggregate(org.apache.flink.table.expressions.Expression aggregateFunction)
Description copied from interface: Table
Performs a global aggregate operation with an aggregate function. You have to close the Table.aggregate(Expression) with a select statement. The output will be flattened if the output type is a composite type.
Java Example:
tab.aggregate(call(MyAggregateFunction.class, $("a"), $("b")).as("f0", "f1", "f2"))
   .select($("f0"), $("f1"));
Scala Example:
val aggFunc = new MyAggregateFunction
table.aggregate(aggFunc($"a", $"b") as ("f0", "f1", "f2"))
     .select($"f0", $"f1")
-
flatAggregate
public FlatAggregateTable flatAggregate(org.apache.flink.table.expressions.Expression tableAggregateFunction)
Description copied from interface: Table
Performs a global flatAggregate without a groupBy. FlatAggregate takes a TableAggregateFunction which returns multiple rows. Use a selection after the flatAggregate.
Java Example:
tab.flatAggregate(call(MyTableAggregateFunction.class, $("a"), $("b")).as("x", "y", "z"))
   .select($("x"), $("y"), $("z"));
Scala Example:
val tableAggFunc: TableAggregateFunction = new MyTableAggregateFunction
tab.flatAggregate(tableAggFunc($"a", $"b") as ("x", "y", "z"))
   .select($"x", $"y", $"z")
- Specified by: flatAggregate in interface Table
-
insertInto
public TablePipeline insertInto(String tablePath)
Description copied from interface: Table
Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.
See the documentation of TableEnvironment.useDatabase(String) or TableEnvironment.useCatalog(String) for the rules on path resolution.
Example:
Table table = tableEnv.sqlQuery("SELECT * FROM MyTable");
TablePipeline tablePipeline = table.insertInto("MySinkTable");
TableResult tableResult = tablePipeline.execute();
tableResult.await();
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().
If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).
- Specified by: insertInto in interface Table
- Parameters:
tablePath - The path of the registered table (backed by a DynamicTableSink).
- Returns:
The complete pipeline from one or more source tables to a sink table.
-
insertInto
public TablePipeline insertInto(String tablePath, boolean overwrite)
Description copied from interface: Table
Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.
See the documentation of TableEnvironment.useDatabase(String) or TableEnvironment.useCatalog(String) for the rules on path resolution.
Example:
Table table = tableEnv.sqlQuery("SELECT * FROM MyTable");
TablePipeline tablePipeline = table.insertInto("MySinkTable", true);
TableResult tableResult = tablePipeline.execute();
tableResult.await();
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().
If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).
- Specified by: insertInto in interface Table
- Parameters:
tablePath - The path of the registered table (backed by a DynamicTableSink).
overwrite - Indicates whether existing data should be overwritten.
- Returns:
The complete pipeline from one or more source tables to a sink table.
-
insertInto
public TablePipeline insertInto(TableDescriptor descriptor)
Description copied from interface: Table
Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
The descriptor won't be registered in the catalog, but it will be propagated directly in the operation tree. Note that calling this method multiple times, even with the same descriptor, results in multiple sink table instances.
This method allows declaring a Schema for the sink descriptor. The declaration is similar to a CREATE TABLE DDL in SQL and allows you to:
- overwrite automatically derived columns with a custom DataType
- add metadata columns next to the physical columns
- declare a primary key
It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
Examples:
Schema schema = Schema.newBuilder()
    .column("f0", DataTypes.STRING())
    .build();

Table table = tableEnv.from(TableDescriptor.forConnector("datagen")
    .schema(schema)
    .build());

table.insertInto(TableDescriptor.forConnector("blackhole")
    .schema(schema)
    .build());
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().
If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).
- Specified by: insertInto in interface Table
- Parameters:
descriptor - Descriptor describing the sink table into which data should be inserted.
- Returns:
The complete pipeline from one or more source tables to a sink table.
-
insertInto
public TablePipeline insertInto(TableDescriptor descriptor, boolean overwrite)
Description copied from interface: Table
Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
The descriptor won't be registered in the catalog, but it will be propagated directly in the operation tree. Note that calling this method multiple times, even with the same descriptor, results in multiple sink table instances.
This method allows declaring a Schema for the sink descriptor. The declaration is similar to a CREATE TABLE DDL in SQL and allows you to:
- overwrite automatically derived columns with a custom DataType
- add metadata columns next to the physical columns
- declare a primary key
It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
Examples:
Schema schema = Schema.newBuilder()
    .column("f0", DataTypes.STRING())
    .build();

Table table = tableEnv.from(TableDescriptor.forConnector("datagen")
    .schema(schema)
    .build());

table.insertInto(TableDescriptor.forConnector("blackhole")
    .schema(schema)
    .build(), true);
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().
If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).
- Specified by: insertInto in interface Table
- Parameters:
descriptor - Descriptor describing the sink table into which data should be inserted.
overwrite - Indicates whether existing data should be overwritten.
- Returns:
The complete pipeline from one or more source tables to a sink table.
-
execute
public TableResult execute()
Description copied from interface: Executable
Executes this object.
By default, all DML operations are executed asynchronously. Use TableResult.await() or TableResult.getJobClient() to monitor the execution. Set TableConfigOptions.TABLE_DML_SYNC for always synchronous execution.
- Specified by: execute in interface Executable
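A minimal, hedged usage sketch (assuming an existing Table named table; TableResult.print() prints the collected result on the client):
TableResult result = table.execute();
// print the result rows to stdout
result.print();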
-
explain
public String explain(ExplainFormat format, ExplainDetail... extraDetails)
Description copied from interface: Explainable
Returns the AST of this object and the execution plan to compute the result of the given statement.
- Specified by: explain in interface Explainable<Table>
- Parameters:
format - The output format of the explained plan.
extraDetails - The extra explain details which the result of this method should include, e.g. estimated cost, changelog mode for streaming.
- Returns:
The AST and the execution plan.
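A minimal, hedged usage sketch (assuming an existing Table named table; ExplainFormat.TEXT and ExplainDetail.ESTIMATED_COST are taken from org.apache.flink.table.api):
String plan = table.explain(ExplainFormat.TEXT, ExplainDetail.ESTIMATED_COST);
System.out.println(plan);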