org.apache.flink.api.java.ExecutionEnvironment.createHadoopInput(InputFormat, Class, Class, Job)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#createHadoopInput(org.apache.hadoop.mapreduce.InputFormat, Class, Class, Job) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.ExecutionEnvironment.createHadoopInput(InputFormat, Class, Class, JobConf)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#createHadoopInput(org.apache.hadoop.mapred.InputFormat, Class, Class, JobConf) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.io.CsvReader.fieldDelimiter(char)

org.apache.flink.api.java.utils.ParameterTool.fromGenericOptionsParser(String[])
    Please use org.apache.flink.hadoopcompatibility.HadoopUtils#paramsFromGenericOptionsParser(String[]) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.ExecutionEnvironment.getNumberOfExecutionRetries()

org.apache.flink.api.java.DataSet.print(String)

org.apache.flink.api.java.DataSet.printToErr(String)

org.apache.flink.api.java.ExecutionEnvironment.readHadoopFile(FileInputFormat, Class, Class, String)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#readHadoopFile(org.apache.hadoop.mapred.FileInputFormat, Class, Class, String) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.ExecutionEnvironment.readHadoopFile(FileInputFormat, Class, Class, String)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#readHadoopFile(org.apache.hadoop.mapreduce.lib.input.FileInputFormat, Class, Class, String) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.ExecutionEnvironment.readHadoopFile(FileInputFormat, Class, Class, String, Job)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#readHadoopFile(org.apache.hadoop.mapreduce.lib.input.FileInputFormat, Class, Class, String, Job) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.ExecutionEnvironment.readHadoopFile(FileInputFormat, Class, Class, String, JobConf)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#readHadoopFile(org.apache.hadoop.mapred.FileInputFormat, Class, Class, String, JobConf) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.ExecutionEnvironment.readSequenceFile(Class, Class, String)
    Please use org.apache.flink.hadoopcompatibility.HadoopInputs#readSequenceFile(Class, Class, String) from the flink-hadoop-compatibility module.

org.apache.flink.api.java.operators.SingleInputUdfOperator.returns(String)

org.apache.flink.api.java.operators.TwoInputUdfOperator.returns(String)

org.apache.flink.api.java.ExecutionEnvironment.setNumberOfExecutionRetries(int)

org.apache.flink.api.java.operators.DataSink.sortLocalOutput(int, Order)

org.apache.flink.api.java.operators.DataSink.sortLocalOutput(String, Order)

org.apache.flink.api.java.operators.CrossOperator.ProjectCross.types(Class<?>...)

org.apache.flink.api.java.operators.JoinOperator.ProjectJoin.types(Class<?>...)

org.apache.flink.api.java.operators.ProjectOperator.types(Class<?>...)
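The Hadoop-related entries above all follow the same migration pattern: instead of calling the deprecated convenience method on ExecutionEnvironment, build the input format through HadoopInputs and pass it to createInput(). A minimal sketch, assuming a Flink 1.x DataSet program with the flink-hadoop-compatibility dependency on the classpath (the path "hdfs:///input" is purely illustrative):

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.TextInputFormat;

public class HadoopInputMigration {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Deprecated style, reading a text file via the mapred API:
        // env.readHadoopFile(new TextInputFormat(),
        //         LongWritable.class, Text.class, "hdfs:///input");

        // Replacement: HadoopInputs builds the wrapping input format,
        // which is then handed to the generic createInput() method.
        DataSet<Tuple2<LongWritable, Text>> lines = env.createInput(
                HadoopInputs.readHadoopFile(
                        new TextInputFormat(),
                        LongWritable.class, Text.class,
                        "hdfs:///input"));

        lines.print();
    }
}
```

The same substitution applies to createHadoopInput and readSequenceFile: the deprecated ExecutionEnvironment overloads and the HadoopInputs factory methods take the same arguments, so the migration is mechanical once createInput() is introduced.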