AWS Data Pipeline activity objects.
Defines AWS Data Pipeline Data Formats
Defines AWS Data Pipeline Data Formats
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-dataformats.html
AWS Data Pipeline DataNode objects
AWS Data Pipeline DataNode objects
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-datanodes.html
Each data pipeline can have a default object.
The base class of all AWS Data Pipeline objects.
The base class of all AWS Data Pipeline objects.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-objects.html
AWS Data Pipeline database objects.
AWS Data Pipeline database objects.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-databases.html
An EC2 instance that will perform the work defined by a pipeline activity.
An EC2 instance that will perform the work defined by a pipeline activity.
The IAM role to use to create the EC2 instance.
The IAM role to use to control the resources that the EC2 instance can access.
The AMI version to use for the EC2 instances. For more information, see Amazon Machine Images (AMIs).
The type of EC2 instance to use for the resource pool. The default value is m1.small. The values currently supported are: c1.medium, c1.xlarge, c3.2xlarge, c3.4xlarge, c3.8xlarge, c3.large, c3.xlarge, cc1.4xlarge, cc2.8xlarge, cg1.4xlarge, cr1.8xlarge, g2.2xlarge, hi1.4xlarge, hs1.8xlarge, i2.2xlarge, i2.4xlarge, i2.8xlarge, i2.xlarge, m1.large, m1.medium, m1.small, m1.xlarge, m2.2xlarge, m2.4xlarge, m2.xlarge, m3.2xlarge, m3.xlarge, t1.micro.
A region code to specify that the resource should run in a different region. For more information, see Using a Pipeline with Resources in Multiple Regions.
The names of one or more security groups to use for the instances in the resource pool. By default, Amazon EC2 uses the default security group.
The IDs of one or more security groups to use for the instances in the resource pool. By default, Amazon EC2 uses the default security group.
Indicates whether to assign a public IP address to an instance. (An instance in a VPC can't access Amazon S3 unless it has a public IP address or a network address translation (NAT) instance with proper routing configuration.) If the instance is in EC2-Classic or a default VPC, the default value is true. Otherwise, the default value is false.
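As a minimal sketch only, assuming the standard AWS Data Pipeline JSON definition format (the id, role names, AMI id, and security group name are placeholders), an Ec2Resource object combining the fields above might look like:

{
  "id" : "MyEc2Resource",
  "type" : "Ec2Resource",
  "role" : "DataPipelineDefaultRole",
  "resourceRole" : "DataPipelineDefaultResourceRole",
  "imageId" : "ami-12345678",
  "instanceType" : "m1.medium",
  "region" : "us-west-2",
  "securityGroups" : [ "my-security-group" ],
  "associatePublicIpAddress" : "true"
}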
Runs an Amazon EMR cluster.
Runs an Amazon EMR cluster.
AWS Data Pipeline uses a different format for steps than Amazon EMR; for example, AWS Data Pipeline uses comma-separated arguments after the JAR name in the EmrActivity step field.
The input data source.
The location for the output.
Shell scripts to be run before any steps are run. To specify multiple scripts, up to 255, add multiple preStepCommand fields.
Shell scripts to be run after all steps are finished. To specify multiple scripts, up to 255, add multiple postStepCommand fields.
The Amazon EMR cluster on which to run this activity.
One or more steps for the cluster to run. To specify multiple steps, up to 255, add multiple step fields. Use comma-separated arguments after the JAR name; for example, "s3://example-bucket/MyWork.jar,arg1,arg2,arg3".
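For illustration (the object ids and bucket name are placeholders), an EmrActivity that runs on an EMR cluster and uses the comma-separated step format described above could be sketched as:

{
  "id" : "MyEmrActivity",
  "type" : "EmrActivity",
  "runsOn" : { "ref" : "MyEmrCluster" },
  "input" : { "ref" : "MyS3Input" },
  "output" : { "ref" : "MyS3Output" },
  "step" : "s3://example-bucket/MyWork.jar,arg1,arg2,arg3"
}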
Represents the configuration of an Amazon EMR cluster.
Represents the configuration of an Amazon EMR cluster. This object is used by EmrActivity to launch a cluster.
Copies data into an Amazon Redshift table.
Copies data into an Amazon Redshift table.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-redshiftcopyactivity.html
Required for AdpDataPipelineObject
Required for AdpDataPipelineObject
The input data node. The data source can be Amazon S3, DynamoDB, or Amazon Redshift.
Determines what AWS Data Pipeline does with pre-existing data in the target table that overlaps with rows in the data to be loaded. Valid values are KEEP_EXISTING, OVERWRITE_EXISTING, and TRUNCATE.
The output data node. The output location can be Amazon S3 or Amazon Redshift.
Required for AdpActivity
The SQL SELECT expression used to transform the input data.
Takes COPY parameters to pass to the Amazon Redshift data node.
Corresponds to the query_group setting in Amazon Redshift, which allows you to assign and prioritize concurrent activities based on their placement in queues. Amazon Redshift limits the number of simultaneous connections to 15.
Required for AdpActivity
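Putting these fields together, a hedged sketch of a RedshiftCopyActivity definition (all object ids are hypothetical, and runsOn and schedule are the usual activity references) might be:

{
  "id" : "MyRedshiftCopyActivity",
  "type" : "RedshiftCopyActivity",
  "input" : { "ref" : "MyS3DataNode" },
  "output" : { "ref" : "MyRedshiftDataNode" },
  "insertMode" : "KEEP_EXISTING",
  "runsOn" : { "ref" : "MyEc2Resource" },
  "schedule" : { "ref" : "DefaultSchedule" }
}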
Defines a data node using Amazon Redshift.
Defines a data node using Amazon Redshift.
If the destination table in RedshiftCopyActivity does not have a primary key defined, you can use primaryKeys to specify a list of columns that act as a merge key. However, if the Redshift table already has a primary key defined, this setting overrides the existing key.
Defines an Amazon Redshift database.
Defines an Amazon Redshift database.
The identifier provided by the user when the Amazon Redshift cluster was created. For example, if the endpoint for your Amazon Redshift cluster is mydb.example.us-east-1.redshift.amazonaws.com, the correct clusterId value is mydb. In the Amazon Redshift console, this value is "Cluster Name".
The JDBC endpoint for connecting to an Amazon Redshift instance owned by a different account than the one that owns the pipeline.
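For example, if the cluster endpoint is mydb.example.us-east-1.redshift.amazonaws.com, the clusterId is the leading "mydb" segment. A sketch of such a RedshiftDatabase object (the database name and credentials are placeholder values, and the "*" prefix marks password as a secure field):

{
  "id" : "MyRedshiftDatabase",
  "type" : "RedshiftDatabase",
  "clusterId" : "mydb",
  "databaseName" : "my_database",
  "username" : "my_user",
  "*password" : "my_password"
}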
A reference to an existing AWS Data Pipeline object.
A reference to an existing AWS Data Pipeline object.
more details: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-expressions.html
Defines the AWS Data Pipeline Resources
Defines the AWS Data Pipeline Resources
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-resources.html
Defines a data node using Amazon S3.
You must provide either a filePath or directoryPath value.
You must provide either a filePath or directoryPath value.
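A minimal sketch (bucket and key names are placeholders): an S3DataNode that points at a single object uses filePath, while one that points at a prefix would set directoryPath instead.

{
  "id" : "MyS3Input",
  "type" : "S3DataNode",
  "filePath" : "s3://example-bucket/data/input.csv"
}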
Defines the timing of a scheduled event, such as when an activity runs.
Defines the timing of a scheduled event, such as when an activity runs.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-schedule.html
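For illustration (the id and start time are placeholders, and period, the recurrence interval, is a standard Schedule field not documented above), a Schedule that fires hourly from a fixed start time might be defined as:

{
  "id" : "HourlySchedule",
  "type" : "Schedule",
  "period" : "1 hour",
  "startDateTime" : "2014-01-01T00:00:00"
}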
Runs a command on an EC2 node.
Runs a command on an EC2 node. You specify the input S3 location, the output S3 location, and the script or command to run.
The command to run. This value and any associated parameters must function in the environment from which you are running the Task Runner.
An Amazon S3 URI path for a file to download and run as a shell command. Only one of scriptUri or command should be present. scriptUri cannot use parameters; use command instead.
A list of arguments to pass to the shell script.
The input data source.
The location for the output.
Determines whether staging is enabled, which allows your shell commands to have access to the staged-data variables, such as ${INPUT1_STAGING_DIR} and ${OUTPUT1_STAGING_DIR}.
The Amazon S3 path that receives redirected output from the command. If you use the runsOn field, this must be an Amazon S3 path because of the transitory nature of the resource running your activity. However, if you specify the workerGroup field, a local file path is permitted.
The path that receives redirected system error messages from the command. If you use the runsOn field, this must be an Amazon S3 path because of the transitory nature of the resource running your activity. However, if you specify the workerGroup field, a local file path is permitted.
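Combining these fields, a hedged sketch of a ShellCommandActivity with staging enabled (the ids, paths, and the command itself are illustrative) might look like:

{
  "id" : "MyShellCommandActivity",
  "type" : "ShellCommandActivity",
  "runsOn" : { "ref" : "MyEc2Resource" },
  "input" : { "ref" : "MyS3Input" },
  "output" : { "ref" : "MyS3Output" },
  "stage" : "true",
  "command" : "grep -c \"ERROR\" ${INPUT1_STAGING_DIR}/* > ${OUTPUT1_STAGING_DIR}/errors.txt",
  "stdout" : "s3://example-bucket/logs/stdout.txt",
  "stderr" : "s3://example-bucket/logs/stderr.txt"
}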
Runs a SQL query on a database.
Runs a SQL query on a database. You specify the input table where the SQL query is run and the output table where the results are stored. If the output table doesn't exist, this operation creates a new table with that name.
The SQL script to run. For example:
insert into output select * from input where lastModified in range (?, ?)
Note that the script is not evaluated as an expression; in that situation, scriptArgument is useful.
Note that scriptUri is deliberately missing from this implementation, as there does not seem to be any use case for it at present.
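As a sketch (the database reference and other ids are placeholders), a SqlActivity that fills the two ? placeholders above with the scheduled start and end times via scriptArgument could look like:

{
  "id" : "MySqlActivity",
  "type" : "SqlActivity",
  "database" : { "ref" : "MyDatabase" },
  "script" : "insert into output select * from input where lastModified in range (?, ?)",
  "scriptArgument" : [ "#{@scheduledStartTime}", "#{@scheduledEndTime}" ],
  "runsOn" : { "ref" : "MyEc2Resource" }
}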
The date and time at which to start the scheduled pipeline runs. The only valid value is FIRST_ACTIVATION_DATE_TIME, which is assumed to be the current date and time.
The date and time to start the scheduled runs. You must use either startDateTime or startAt but not both.
A tab-delimited data format where the column separator is a tab character and the record separator is a newline character.
A tab-delimited data format where the column separator is a tab character and the record separator is a newline character.
The structure of the data file. Use column names and data types separated by a space. For example:
[ "Name STRING", "Score INT", "DateOfBirth TIMESTAMP" ]
You can omit the data type when using STRING, which is the default. Valid data types: TINYINT, SMALLINT, INT, BIGINT, BOOLEAN, FLOAT, DOUBLE, STRING, TIMESTAMP
A character, for example "\", that instructs the parser to ignore the next character.
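A sketch of a complete TSV data format object using the column and escapeChar fields described above (the id is hypothetical; the escape character is a single backslash, escaped for JSON):

{
  "id" : "MyTsvDataFormat",
  "type" : "TSV",
  "column" : [ "Name STRING", "Score INT", "DateOfBirth TIMESTAMP" ],
  "escapeChar" : "\\"
}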
Serializes an AWS Data Pipeline object to JSON.
AWS Data Pipeline activity objects.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-activities.html