One or more references to other Activities that must reach the FINISHED state before this activity will start.
A Hive SQL statement fragment that filters a subset of DynamoDB or Amazon S3 data to copy. The filter should only contain predicates and not begin with a WHERE clause, because AWS Data Pipeline adds it automatically.
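For illustration, a hypothetical filter value (the column names are placeholders) contains only the predicates; AWS Data Pipeline supplies the leading WHERE keyword when it builds the Hive query:

    # Hypothetical filterSql value: predicates only, no leading WHERE keyword.
    # AWS Data Pipeline prepends WHERE automatically.
    filter_sql = "customer_id = '12345' AND order_date > '2016-01-01'"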
An Amazon S3 path capturing the Hive script that ran after all the expressions in it were evaluated, including staging information. This script is stored for troubleshooting purposes.
The ID of the object. IDs must be unique within a pipeline definition.
The input data node. This must be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.
The optional, user-defined label of the object. If you do not provide a name for an object in a pipeline definition, AWS Data Pipeline automatically duplicates the value of id.
The SNS alarm to raise when the activity fails.
The SNS alarm to raise when the activity fails to start on time.
The SNS alarm to raise when the activity succeeds.
The output data node. If input is S3DataNode, this must be DynamoDBDataNode. Otherwise, this can be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.
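A minimal sketch of the input/output pairing described above, written as Python dictionaries in the pipeline-definition JSON layout; the ids, table name, and bucket path are placeholders, and a DynamoDBExportDataFormat object is included because the input is a DynamoDBDataNode:

    # Placeholder ids, table name, and bucket path.
    input_node = {
        "id": "MyDynamoDBInput",
        "type": "DynamoDBDataNode",
        "tableName": "MyTable",
        "dataFormat": {"ref": "MyExportFormat"},  # required when using DynamoDBDataNode
    }
    export_format = {
        "id": "MyExportFormat",
        "type": "DynamoDBExportDataFormat",
        "column": ["customer_id STRING", "order_date STRING"],
    }
    output_node = {
        "id": "MyS3Output",
        "type": "S3DataNode",
        "directoryPath": "s3://example-bucket/hive-copy-output/",
    }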
A condition that must be met before the object can run. To specify multiple conditions, add multiple precondition fields. The activity cannot run until all its conditions are met.
The computational resource to run the activity or command. For example, an Amazon EC2 instance or Amazon EMR cluster.
The type of object. Use one of the predefined AWS Data Pipeline object types.
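Putting the fields together, a hedged sketch of a complete HiveCopyActivity object in the same pipeline-definition layout; every id is a placeholder, and the referenced data nodes, EMR cluster, precondition, and SNS alarm are assumed to be defined elsewhere in the same definition:

    # Hypothetical HiveCopyActivity object. Referenced ids (MyDynamoDBInput,
    # MyS3Output, MyEmrCluster, MyPrecondition, MyFailureAlarm) are assumed to
    # exist elsewhere in the pipeline definition.
    hive_copy_activity = {
        "id": "MyHiveCopyActivity",
        "name": "MyHiveCopyActivity",
        "type": "HiveCopyActivity",
        "input": {"ref": "MyDynamoDBInput"},
        "output": {"ref": "MyS3Output"},
        "runsOn": {"ref": "MyEmrCluster"},
        "filterSql": "order_date > '2016-01-01'",
        "precondition": {"ref": "MyPrecondition"},
        "onFail": {"ref": "MyFailureAlarm"},
    }

In the console or CLI JSON, this object sits in the top-level "objects" array alongside the data nodes, the export data format, and the EmrCluster resource.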
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-hivecopyactivity.html