An abstract Aggregate step, which uses the list of grouping fields to partition the data and the supplied row aggregator to aggregate each partition.
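The idea behind the step can be sketched with plain Scala collections: partition rows by the grouping fields, then fold each partition with an aggregator. The row shape and names below are illustrative, not Ignition's actual API.

```scala
// Rows modeled as field-name -> value maps (a hypothetical shape, not Ignition's API).
type Row = Map[String, Any]

// Partition rows by the grouping fields, then fold each partition with the aggregator.
def aggregate[A](rows: Seq[Row], groupFields: Seq[String])(zero: A)(
    fold: (A, Row) => A): Map[Seq[Any], A] =
  rows
    .groupBy(row => groupFields.map(row(_)))
    .map { case (key, part) => key -> part.foldLeft(zero)(fold) }

val rows = Seq(
  Map[String, Any]("city" -> "NY", "amount" -> 10),
  Map[String, Any]("city" -> "NY", "amount" -> 5),
  Map[String, Any]("city" -> "LA", "amount" -> 7))

// Sum the 'amount' column per city.
val totals = aggregate(rows, Seq("city"))(0)((acc, r) => acc + r("amount").asInstanceOf[Int])
```

The real step performs the same partition-then-aggregate scheme on Spark RDDs rather than local collections.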
Adds a set of new fields, each of them either a constant, a variable, or an environment variable.
Calculates basic statistics about the specified fields.
Reads rows from Apache Cassandra.
Writes rows into a Cassandra table.
Reads CSV files.
Static data grid input.
Cassandra row writer for DataFrame objects.
Prints data to the standard output.
The default implementation of SparkRuntime.
Environment literal.
Filters the data frame based on a combination of boolean conditions against fields.
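Combining boolean conditions against fields amounts to composing row predicates. A minimal sketch of that idea, with hypothetical condition names rather than Ignition's condition DSL:

```scala
// A condition is just a row predicate; conditions combine with AND/OR (illustrative names).
type Row  = Map[String, Any]
type Cond = Row => Boolean

def and(cs: Cond*): Cond = row => cs.forall(_(row))
def or(cs: Cond*): Cond  = row => cs.exists(_(row))
def fieldEq(field: String, value: Any): Cond = _.get(field).contains(value)
def gt(field: String, limit: Int): Cond = _.get(field).exists(_.asInstanceOf[Int] > limit)

val rows = Seq(
  Map[String, Any]("name" -> "a", "score" -> 10),
  Map[String, Any]("name" -> "b", "score" -> 3))

// Keep rows where score > 5 AND (name == "a" OR name == "b").
val kept = rows.filter(and(gt("score", 5), or(fieldEq("name", "a"), fieldEq("name", "b"))))
```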
Calculates new fields based on string expressions in various dialects.
Frame Flow represents an executable job.
Base trait for all frame flow events.
Listener which will be notified on frame flow events.
Workflow step that emits DataFrame as the output.
Finds the intersection of the two DataRow RDDs.
Loads and executes a subflow stored in an external file (JSON or XML).
Performs a join of the two data frames.
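The join semantics can be illustrated with an inner join over local row collections; the actual step operates on Spark DataFrames, so the helper below is only a sketch.

```scala
type Row = Map[String, Any]

// Inner join of two row sets on the given key field (a local-collections sketch;
// Ignition's Join step works on Spark DataFrames).
def innerJoin(left: Seq[Row], right: Seq[Row], key: String): Seq[Row] = {
  val index = right.groupBy(_(key))
  for {
    l <- left
    r <- index.getOrElse(l(key), Seq.empty)
  } yield l ++ r
}

val customers = Seq(Map[String, Any]("id" -> 1, "name" -> "Ann"))
val orders    = Seq(Map[String, Any]("id" -> 1, "total" -> 42),
                    Map[String, Any]("id" -> 2, "total" -> 7))

val joined = innerJoin(customers, orders, "id")
```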
Reads a JSON file that contains a separate JSON object on each line.
Reads messages from a Kafka topic, converting each of them into a row with a single column.
Posts rows as messages onto a Kafka topic.
Reads documents from MongoDB.
Writes rows into a MongoDB collection.
Used to limit the results returned from data store queries.
Supplies helper functions for pair RDDs.
A simple passthrough.
Performs the reduceByKey() operation by grouping the rows by the selected key first, and then applying a list of reduce functions to the specified data columns.
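The group-then-reduce scheme can be sketched with local collections: build the key from the key columns, then reduce each data column within its group. Names and row shape are illustrative, not Ignition's API.

```scala
type Row = Map[String, Any]

// Group rows by the key columns, then apply a reduce function to each data column
// (a collections sketch of the reduceByKey idea).
def reduceRows(rows: Seq[Row], keys: Seq[String],
               reducers: Map[String, (Any, Any) => Any]): Seq[Row] =
  rows.groupBy(r => keys.map(r(_))).toSeq.map { case (keyVals, part) =>
    val keyPart  = keys.zip(keyVals).toMap
    val dataPart = reducers.map { case (col, f) => col -> part.map(_(col)).reduce(f) }
    keyPart ++ dataPart
  }

val sum: (Any, Any) => Any = (a, b) => a.asInstanceOf[Int] + b.asInstanceOf[Int]
val rows = Seq(
  Map[String, Any]("k" -> "x", "v" -> 1),
  Map[String, Any]("k" -> "x", "v" -> 2),
  Map[String, Any]("k" -> "y", "v" -> 5))

val reduced = reduceRows(rows, Seq("k"), Map("v" -> sum))
```

On Spark, the same shape is obtained with `rdd.reduceByKey(f)` after keying each row by the selected columns.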
Encapsulates an HTTP request.
HTTP REST client; executes one request per row.
Aggregates data rows into an arbitrary type U using Spark's aggregateByKey method.
Executes an SQL statement against the inputs.
Action performed by SelectValues step.
Modifies, deletes, or retains columns in the data rows.
Sets or drops the ignition runtime variables.
A sorting order.
Encapsulates the Spark context and SQL context, and provides helper functions to manage the Spark runtime environment.
An implicit conversion of: $".
Reads the text file into a data frame with a single column.
Writes rows to a CSV file.
Reads a folder of text files.
Merges multiple DataFrames.
Variable literal.
CQL WHERE clause.
Add Fields companion object.
Basic aggregate functions.
Basic Stats companion object.
Cassandra Input companion object.
Cassandra Output companion object.
CSV Input companion object.
Data grid companion object.
Debug output companion object.
Filter companion object.
Formula companion object.
FrameFlow companion object.
Constants and helper functions for FrameStep.
Creates FrameStep instances from XML and JSON.
Provides SubFlow common methods.
HTTP method.
Intersection companion object.
Invoke companion object.
Join companion object.
DataFrame join type.
JSON file input companion object.
Kafka Input companion object.
Kafka Output companion object.
The entry point for starting ignition frame flows.
Mongo Input companion object.
Mongo Output companion object.
Passthrough companion object.
Reduce companion object.
Reduce operations.
REST Client companion object.
SQL query companion object.
Supported actions.
Select Values companion object.
SetVariables companion object.
Text File Input companion object.
CSV output companion object.
Text Folder Input companion object.
Union companion object.
CQL WHERE companion object.
Injects row fields, environment settings and variables into the string.
Injects the fields from the specified row by replacing substrings of the form ${field}
with the value of the specified field.
Injects JVM environment variables and Spark variables by substituting e{env} and v{var} patterns in the expression.
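The three substitution patterns described above (${field}, e{env}, v{var}) can be sketched with plain regex replacement; the helper name and the behavior of leaving unknown placeholders untouched are assumptions for illustration.

```scala
import scala.util.matching.Regex

// Substitutes ${field}, e{env} and v{var} placeholders in a string; a sketch of
// the injection described above. Unknown names are left as-is.
def inject(expr: String, fields: Map[String, String],
           env: Map[String, String], vars: Map[String, String]): String = {
  def sub(s: String, pattern: Regex, values: Map[String, String]): String =
    pattern.replaceAllIn(s, m =>
      Regex.quoteReplacement(values.getOrElse(m.group(1), m.matched)))

  val withFields = sub(expr, """\$\{(\w+)\}""".r, fields)
  val withEnv    = sub(withFields, """e\{(\w+)\}""".r, env)
  sub(withEnv, """v\{(\w+)\}""".r, vars)
}

val out = inject("user=${name} home=e{HOME} run=v{runId}",
  Map("name" -> "ann"), Map("HOME" -> "/home/ann"), Map("runId" -> "42"))
```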
Data types, implicits, aliases for DataFrame-based workflows.