The structure of the data file. Use column names and data types separated by a space. For example:
[ "Name STRING", "Score INT", "DateOfBirth TIMESTAMP" ]
You can omit the data type when using STRING, which is the default. Valid data types: TINYINT, SMALLINT, INT, BIGINT, BOOLEAN, FLOAT, DOUBLE, STRING, TIMESTAMP.
A character, for example "\", that instructs the parser to ignore the next character.
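As an illustration of how such an escape character behaves, the following sketch parses tab-separated text with Python's csv module; the sample data and field names are hypothetical, and Python's parser is only an analogy for the service's own parser.

```python
import csv
import io

# Hypothetical tab-separated input; "\" escapes the tab in the second
# field so the parser treats it as a literal character, not a separator.
raw = "Name\tQuote\nAda\tsaid \\\thello\n"

# escapechar="\\" tells the reader to take the character that follows
# "\" literally instead of interpreting it as a delimiter.
reader = csv.reader(io.StringIO(raw), delimiter="\t", escapechar="\\")
rows = list(reader)

# The escaped tab survives inside the second field's value.
print(rows)
```

Without the escape character, the same input would split into three fields on the second row instead of two.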
The ID of the object. IDs must be unique within a pipeline definition.
The optional, user-defined label of the object. If you do not provide a name for an object in a pipeline definition, AWS Data Pipeline automatically duplicates the value of id.
The type of object. Use one of the predefined AWS Data Pipeline object types.
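Taken together, these common fields might appear in a pipeline object like the following sketch (the id and name values are hypothetical):

```json
{
  "id": "MyDataFormatId",
  "name": "MyDataFormat",
  "type": "TSV"
}
```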
A tab-delimited data format where the column separator is a tab character and the record separator is a newline character.
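Combining the format description with the fields above, a complete TSV data format object in a pipeline definition might look like the following sketch (the id, name, and column values are hypothetical):

```json
{
  "id": "MyOutputDataFormat",
  "name": "MyOutputDataFormat",
  "type": "TSV",
  "column": [
    "Name STRING",
    "Score INT",
    "DateOfBirth TIMESTAMP"
  ],
  "escapeChar": "\\"
}
```

Note that escapeChar is a JSON string, so the backslash itself must be escaped as "\\".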