Interface SparkSQL.Builder

All Superinterfaces:
    Buildable, CopyableBuilder<SparkSQL.Builder,SparkSQL>, SdkBuilder<SparkSQL.Builder,SparkSQL>, SdkPojo

Enclosing class:
    SparkSQL

public static interface SparkSQL.Builder extends SdkPojo, CopyableBuilder<SparkSQL.Builder,SparkSQL>
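A minimal sketch of typical usage, assuming the standard AWS SDK for Java 2.x Glue model classes; the node name, input node names, and query below are illustrative placeholders, not values from this page:

```java
import java.util.Arrays;

import software.amazon.awssdk.services.glue.model.SparkSQL;

// Build a SparkSQL transform node. "MyTransform" and the input node
// names are hypothetical examples; inputs() and sqlQuery() are the
// builder methods documented below.
SparkSQL sparkSql = SparkSQL.builder()
        .name("MyTransform")
        .inputs(Arrays.asList("node-1", "node-2"))
        .sqlQuery("select * from myDataSource")
        .build();
```

Because the builder is fluent, each setter returns the same `SparkSQL.Builder`, so calls can be chained in any order before `build()`.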
Method Summary

Modifier and Type    Method and Description
SparkSQL.Builder     inputs(String... inputs)
                     The data inputs identified by their node names.
SparkSQL.Builder     inputs(Collection<String> inputs)
                     The data inputs identified by their node names.
SparkSQL.Builder     name(String name)
                     The name of the transform node.
SparkSQL.Builder     outputSchemas(Collection<GlueSchema> outputSchemas)
                     Specifies the data schema for the SparkSQL transform.
SparkSQL.Builder     outputSchemas(Consumer<GlueSchema.Builder>... outputSchemas)
                     Specifies the data schema for the SparkSQL transform.
SparkSQL.Builder     outputSchemas(GlueSchema... outputSchemas)
                     Specifies the data schema for the SparkSQL transform.
SparkSQL.Builder     sqlAliases(Collection<SqlAlias> sqlAliases)
                     A list of aliases.
SparkSQL.Builder     sqlAliases(Consumer<SqlAlias.Builder>... sqlAliases)
                     A list of aliases.
SparkSQL.Builder     sqlAliases(SqlAlias... sqlAliases)
                     A list of aliases.
SparkSQL.Builder     sqlQuery(String sqlQuery)
                     A SQL query that must use Spark SQL syntax and return a single data set.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
    copy

Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
    applyMutation, build

Methods inherited from interface software.amazon.awssdk.core.SdkPojo
    equalsBySdkFields, sdkFields
Method Detail
-
name
SparkSQL.Builder name(String name)
The name of the transform node.
Parameters:
    name - The name of the transform node.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
inputs
SparkSQL.Builder inputs(Collection<String> inputs)
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
Parameters:
    inputs - The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
inputs
SparkSQL.Builder inputs(String... inputs)
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
Parameters:
    inputs - The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
sqlQuery
SparkSQL.Builder sqlQuery(String sqlQuery)
A SQL query that must use Spark SQL syntax and return a single data set.
Parameters:
    sqlQuery - A SQL query that must use Spark SQL syntax and return a single data set.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
sqlAliases
SparkSQL.Builder sqlAliases(Collection<SqlAlias> sqlAliases)
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can do: select * from SqlName, and that gets data from MyDataSource.

Parameters:
    sqlAliases - A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can do: select * from SqlName, and that gets data from MyDataSource.
Returns:
    Returns a reference to this object so that method calls can be chained together.
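A hedged sketch of this overload in use. The `from(...)` and `alias(...)` setter names on `SqlAlias.Builder` are an assumption inferred from the From and Alias fields described above; they are not defined on this page:

```java
import java.util.Collections;

import software.amazon.awssdk.services.glue.model.SparkSQL;
import software.amazon.awssdk.services.glue.model.SqlAlias;

// Map the input node "MyDataSource" to the table name "SqlName"
// so the query can reference it. Setter names are assumed from the
// From/Alias fields documented above.
SqlAlias alias = SqlAlias.builder()
        .from("MyDataSource")
        .alias("SqlName")
        .build();

SparkSQL sparkSql = SparkSQL.builder()
        .name("MyTransform")
        .inputs("MyDataSource")
        .sqlQuery("select * from SqlName")
        .sqlAliases(Collections.singletonList(alias))
        .build();
```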
-
sqlAliases
SparkSQL.Builder sqlAliases(SqlAlias... sqlAliases)
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can do: select * from SqlName, and that gets data from MyDataSource.

Parameters:
    sqlAliases - A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can do: select * from SqlName, and that gets data from MyDataSource.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
sqlAliases
SparkSQL.Builder sqlAliases(Consumer<SqlAlias.Builder>... sqlAliases)
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, you have a datasource named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can do: select * from SqlName, and that gets data from MyDataSource.

This is a convenience method that creates an instance of the SqlAlias.Builder, avoiding the need to create one manually via SqlAlias.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to sqlAliases(List<SqlAlias>).

Parameters:
    sqlAliases - a consumer that will call methods on SqlAlias.Builder
Returns:
    Returns a reference to this object so that method calls can be chained together.
See Also:
    sqlAliases(java.util.Collection<SqlAlias>)
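The consumer overload lets you skip the explicit `SqlAlias.builder()` call, as sketched below. As with the collection overload, the `from(...)`/`alias(...)` setter names are an assumption based on the From and Alias fields described above:

```java
import software.amazon.awssdk.services.glue.model.SparkSQL;

// The lambda receives a SqlAlias.Builder; the SDK calls build() on it
// when the lambda returns and collects the result into the alias list.
SparkSQL sparkSql = SparkSQL.builder()
        .name("MyTransform")
        .inputs("MyDataSource")
        .sqlQuery("select * from SqlName")
        .sqlAliases(a -> a.from("MyDataSource").alias("SqlName"))
        .build();
```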
-
outputSchemas
SparkSQL.Builder outputSchemas(Collection<GlueSchema> outputSchemas)
Specifies the data schema for the SparkSQL transform.
Parameters:
    outputSchemas - Specifies the data schema for the SparkSQL transform.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
outputSchemas
SparkSQL.Builder outputSchemas(GlueSchema... outputSchemas)
Specifies the data schema for the SparkSQL transform.
Parameters:
    outputSchemas - Specifies the data schema for the SparkSQL transform.
Returns:
    Returns a reference to this object so that method calls can be chained together.
-
outputSchemas
SparkSQL.Builder outputSchemas(Consumer<GlueSchema.Builder>... outputSchemas)
Specifies the data schema for the SparkSQL transform.
This is a convenience method that creates an instance of the GlueSchema.Builder, avoiding the need to create one manually via GlueSchema.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to outputSchemas(List<GlueSchema>).

Parameters:
    outputSchemas - a consumer that will call methods on GlueSchema.Builder
Returns:
    Returns a reference to this object so that method calls can be chained together.
See Also:
    outputSchemas(java.util.Collection<GlueSchema>)
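A sketch of declaring an output schema with this consumer overload. The `GlueStudioSchemaColumn` type and its `name(...)`/`type(...)` setters, and the column names and types below, are assumptions not shown on this page:

```java
import software.amazon.awssdk.services.glue.model.GlueStudioSchemaColumn;
import software.amazon.awssdk.services.glue.model.SparkSQL;

// Declare the schema the query is expected to produce. The lambda
// receives a GlueSchema.Builder; the SDK builds it on return.
SparkSQL sparkSql = SparkSQL.builder()
        .name("MyTransform")
        .inputs("MyDataSource")
        .sqlQuery("select id, name from SqlName")
        .outputSchemas(s -> s.columns(
                GlueStudioSchemaColumn.builder().name("id").type("long").build(),
                GlueStudioSchemaColumn.builder().name("name").type("string").build()))
        .build();
```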