The activity that copies data from one data node to another.
Activity to recursively delete files in an S3 path.
The base trait for activities that run on an Amazon EMR cluster.
Google Storage Download activity.
Google Storage Upload activity.
Runs a Hive query on an Amazon EMR cluster.
Runs a Hive query on an Amazon EMR cluster. HiveActivity makes it easier to set up an Amazon EMR activity and automatically creates Hive tables based on input data coming in from either Amazon S3 or Amazon RDS. All you need to specify is the HiveQL to run on the source data. AWS Data Pipeline automatically creates Hive tables with ${input1}, ${input2}, etc. based on the input fields in the HiveActivity object. For S3 inputs, the dataFormat field is used to create the Hive column names. For MySQL (RDS) inputs, the column names of the SQL query are used to create the Hive column names.
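As a concrete illustration of the ${input1}/${output1} convention, here is a minimal self-contained Scala sketch; the HiveActivitySketch case class is defined here purely for illustration and is not the library's actual API:

    // Hypothetical sketch: the case class stands in for the real activity type.
    case class HiveActivitySketch(inputs: Seq[String], outputs: Seq[String], hiveScript: String)

    val hive = HiveActivitySketch(
      inputs  = Seq("s3://my-bucket/clicks/"),   // staged as Hive table ${input1}
      outputs = Seq("s3://my-bucket/summary/"),  // staged as Hive table ${output1}
      hiveScript =
        """INSERT OVERWRITE TABLE ${output1}
          |SELECT userId, COUNT(1) FROM ${input1} GROUP BY userId;""".stripMargin
    )

AWS Data Pipeline resolves ${input1} and ${output1} to the staged tables at run time, so the HiveQL itself never names physical locations.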
Runs a Hive query on an Amazon EMR cluster.
Runs a Hive query on an Amazon EMR cluster. HiveCopyActivity makes it easier to copy data between Amazon S3 and DynamoDB. It accepts a HiveQL statement to filter input data from Amazon S3 or DynamoDB at the column and row level.
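A minimal sketch of the row-level filtering idea; filterSql mirrors the field name of the underlying AWS Data Pipeline HiveCopyActivity object, but the Scala shape below is an assumption, not the library's API:

    // Hypothetical sketch: only filterSql mirrors the Data Pipeline field name.
    case class HiveCopyActivitySketch(input: String, output: String, filterSql: Option[String])

    val copy = HiveCopyActivitySketch(
      input     = "dynamodb://Orders",
      output    = "s3://my-bucket/orders/",
      // Pushed into the generated Hive query as a row-level predicate:
      filterSql = Some("orderDate > unix_timestamp('2024-01-01', 'yyyy-MM-dd')")
    )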
Shell command activity that runs a given JAR.
Runs MapReduce steps on an Amazon EMR cluster.
A MapReduce step that runs on a MapReduce cluster.
PigActivity provides native support for Pig scripts in AWS Data Pipeline without the requirement to use ShellCommandActivity or EmrActivity.
PigActivity provides native support for Pig scripts in AWS Data Pipeline without the requirement to use ShellCommandActivity or EmrActivity. In addition, PigActivity supports data staging. When the stage field is set to true, AWS Data Pipeline stages the input data as a schema in Pig without additional code from the user.
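The staging behavior can be pictured with the following hedged sketch; the case class is illustrative only, while the ${input1}/${output1} references inside the Pig script follow the AWS Data Pipeline staging convention:

    // Hypothetical sketch: stage = true lets the script use staged relations.
    case class PigActivitySketch(stage: Boolean, script: String)

    val pig = PigActivitySketch(
      stage  = true,
      script =
        """part = ${input1};          -- staged input, no LOAD statement needed
          |top = LIMIT part 10;
          |STORE top INTO ${output1}; -- staged output, no physical path needed""".stripMargin
    )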
The activity trait.
The activity trait. All activities should mix in this trait.
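For example, a custom activity would pick up the shared plumbing by mixing the trait in. The member names below are assumptions made for the sketch, not the actual trait definition:

    // Hypothetical sketch of the mixin pattern.
    trait PipelineActivitySketch {
      def id: String
      def dependsOn: Seq[PipelineActivitySketch]
    }

    case class ShellSketch(
      id: String,
      command: String,
      dependsOn: Seq[PipelineActivitySketch] = Seq.empty
    ) extends PipelineActivitySketch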
Copies data directly from DynamoDB or Amazon S3 to Amazon Redshift.
Copies data directly from DynamoDB or Amazon S3 to Amazon Redshift. You can load data into a new table, or easily merge data into an existing table.
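The load-versus-merge choice corresponds to the insertMode field of the underlying AWS Data Pipeline RedshiftCopyActivity object (KEEP_EXISTING, OVERWRITE_EXISTING, TRUNCATE, APPEND); the Scala shape below is an illustrative assumption:

    // Hypothetical sketch: only insertMode mirrors the Data Pipeline field.
    case class RedshiftCopyActivitySketch(input: String, output: String, insertMode: String)

    val load = RedshiftCopyActivitySketch(
      input      = "s3://my-bucket/events/",
      output     = "analytics.events",    // target Redshift table
      insertMode = "OVERWRITE_EXISTING"   // merge into the existing table
    )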
Unloads the result of the given SQL script from Amazon Redshift to the given s3Path.
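Conceptually this wraps a Redshift UNLOAD statement; the sketch below (illustrative names, not the library's API) shows the two pieces the activity needs and the SQL it roughly amounts to:

    // Hypothetical sketch of the activity's inputs.
    case class RedshiftUnloadActivitySketch(script: String, s3Path: String)

    val unload = RedshiftUnloadActivitySketch(
      script = "SELECT userId, total FROM analytics.orders WHERE total > 100",
      s3Path = "s3://my-bucket/exports/orders/"
    )
    // Roughly equivalent SQL run on the cluster:
    //   UNLOAD ('SELECT userId, total FROM analytics.orders WHERE total > 100')
    //   TO 's3://my-bucket/exports/orders/' CREDENTIALS '...' DELIMITER ',';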
Run-time references of runnable objects.
Runs a command or script.
Runs Spark steps on a given Spark cluster with Amazon EMR.
A Spark step that runs on a Spark cluster.
Runs an SQL query on an Amazon Redshift cluster.
Runs an SQL query on an Amazon Redshift cluster. If the query writes out to a table that does not exist, a new table with that name is created.
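For instance, a script whose target table is missing would have the table created first; the sketch below uses illustrative names, not the library's API:

    // Hypothetical sketch: daily_summary is created if it does not exist.
    case class SqlActivitySketch(database: String, script: String)

    val sql = SqlActivitySketch(
      database = "warehouse",
      script   = "INSERT INTO daily_summary SELECT day, COUNT(1) FROM events GROUP BY day"
    )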
The activity that copies data from one data node to another.
It seems that both the input and output formats need to be CsvDataFormat for this copy to work properly, and it needs to be a specific variant of CSV; for more information, see:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-copyactivity.html
In our experience it is very hard to get TsvDataFormat to work, for both import and export, especially for tasks involving RedshiftCopyActivity. As a general rule of thumb, always use the default CsvDataFormat for tasks that involve both exporting to S3 and copying to Redshift.
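Following that rule of thumb, a copy would wire the same default CsvDataFormat into both data nodes; everything below is an illustrative sketch, not the library's actual API:

    // Hypothetical sketch: one shared CSV format for input and output.
    case class CsvFormatSketch(id: String)
    case class S3NodeSketch(path: String, dataFormat: CsvFormatSketch)
    case class CopyActivitySketch(input: S3NodeSketch, output: S3NodeSketch)

    val csv  = CsvFormatSketch("DefaultCsv")
    val copy = CopyActivitySketch(
      input  = S3NodeSketch("s3://my-bucket/raw/", csv),
      output = S3NodeSketch("s3://my-bucket/clean/", csv)
    )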