Used by DependentRelations to register with the parent.
Alters the table schema by adding or dropping a provided column.
Base table of this relation.
Create an index on a table.
Index identifier that goes in the catalog.
Table identifier on which the index is created.
Columns on which the index is to be created, together with the sort direction for each. The direction may be specified as None.
Options for indexes, e.g. for a column table index: ("COLOCATE_WITH" -> "CUSTOMER"); for a row table index: ("INDEX_TYPE" -> "GLOBAL HASH") or ("INDEX_TYPE" -> "UNIQUE").
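The options above are plain string key-value pairs. A minimal sketch of what such option maps look like in Scala (the table name CUSTOMER and the option keys are taken from the examples above; this is illustrative, not a call into the library):

```scala
// Illustrative option maps matching the examples in the doc above.
object IndexOptions {
  // Column table index: colocate the index with the CUSTOMER table.
  val columnTableIndex: Map[String, String] = Map("COLOCATE_WITH" -> "CUSTOMER")
  // Row table index: pick an index type.
  val globalHashIndex: Map[String, String] = Map("INDEX_TYPE" -> "GLOBAL HASH")
  val uniqueIndex: Map[String, String] = Map("INDEX_TYPE" -> "UNIQUE")
}
```

Such a map would be passed as the options argument when creating the index.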
Delete a set of rows matching the given criteria.
SQL WHERE criteria to select rows that will be deleted
number of rows deleted
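The delete contract above (select rows by criteria, return the delete count) can be sketched without the library. A Scala predicate stands in for the SQL WHERE string here, which is a simplifying assumption; the class and method names are illustrative only:

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative stand-in for the delete contract: remove matching rows
// and return the number of rows deleted.
class DeletableTable(initial: Seq[Map[String, Any]]) {
  private val rows = ArrayBuffer(initial: _*)

  // A predicate replaces the SQL WHERE criteria of the real API.
  def deleteRows(matches: Map[String, Any] => Boolean): Int = {
    val before = rows.size
    rows.filterInPlace(r => !matches(r)) // keep only non-matching rows
    before - rows.size
  }

  def rowCount: Int = rows.size
}
```

The count returned is the difference in table size, mirroring the "number of rows deleted" result documented above.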
Destroy and cleanup this relation.
Destroy and cleanup this relation. This may include, but is not limited to, dropping the external table that this relation represents.
Drops an index on this table
Index identifier
Table identifier
Drop if exists
Execute a DML SQL and return the number of rows affected.
Get a Spark plan to delete rows in the relation.
Get a Spark plan to delete rows in the relation. The result of the SparkPlan execution should be a count of the number of deleted rows.
Get the dependent child.
Get a Spark plan for insert.
Get a Spark plan for insert. The result of the SparkPlan execution should be a count of the number of inserted rows.
Get the "key" columns for the table that need to be projected out by UPDATE and DELETE operations in order to affect the selected rows.
Get a Spark plan for puts.
Get a Spark plan for puts. If the row is already present it gets updated, otherwise it gets inserted into the table represented by this relation. The result of the SparkPlan execution should be a count of the number of rows put.
Get a Spark plan to update rows in the relation.
Get a Spark plan to update rows in the relation. The result of the SparkPlan execution should be a count of the number of updated rows.
Insert a sequence of rows into the table represented by this relation.
the rows to be inserted
number of rows inserted
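A library-free sketch of the insert contract documented above (the class and method names here are illustrative, not the relation's actual API):

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative stand-in: insert appends the given rows and returns the
// number of rows inserted, matching the contract documented above.
class InsertableTable {
  private val rows = ArrayBuffer.empty[Seq[Any]]

  def insertRows(newRows: Seq[Seq[Any]]): Int = {
    rows ++= newRows
    newRows.size // number of rows inserted
  }

  def rowCount: Int = rows.size
}
```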
Name of this relation in the catalog.
Get the partitioning columns for the table, if any.
If the row is already present it gets updated; otherwise it gets inserted into the table represented by this relation.
the rows to be upserted
number of rows upserted
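The put (upsert) semantics above — update when the key already exists, insert otherwise — can be sketched with a keyed in-memory map. The names are illustrative, and keying rows explicitly is an assumption of this sketch (the real relation derives keys from its key columns):

```scala
import scala.collection.mutable.LinkedHashMap

// Illustrative stand-in for put/upsert: each keyed row is updated in
// place when the key exists, inserted otherwise.
class PutTable[K] {
  private val rows = LinkedHashMap.empty[K, Seq[Any]]

  // Update-or-insert each keyed row; return the number of rows put.
  def put(keyedRows: Seq[(K, Seq[Any])]): Int = {
    keyedRows.foreach { case (key, row) => rows(key) = row }
    keyedRows.size
  }

  def get(key: K): Option[Seq[Any]] = rows.get(key)
}
```

Note that the returned count is the number of rows put, without distinguishing inserts from updates, as in the documented contract.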
Recover/Re-create the dependent child relations.
Recover/Re-create the dependent child relations. This callback recreates the dependent relations when the ParentRelation is being created.
Used by DependentRelations to unregister from the parent.
Name of this mutable table as stored in catalog.
Return true if table already existed when the relation object was created.
Truncate the table represented by this relation.
Update a set of rows matching given criteria.
SQL WHERE criteria to select rows that will be updated
updated values for the columns being changed; must match updateColumns
the columns to be updated; must match the order of the updated values
number of rows affected
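A minimal sketch of the update contract above: the columns to update and their new values are parallel sequences, and the return value counts affected rows. A Scala predicate stands in for the SQL WHERE criteria, and all names here are illustrative assumptions, not the library's API:

```scala
import scala.collection.mutable

// Illustrative stand-in for updateRows: overwrite the named columns in
// every matching row and return the number of rows affected.
class UpdatableTable(initial: Seq[Map[String, Any]]) {
  private val rows: Seq[mutable.Map[String, Any]] =
    initial.map(r => mutable.Map(r.toSeq: _*))

  def updateRows(matches: collection.Map[String, Any] => Boolean,
                 updateColumns: Seq[String],
                 newValues: Seq[Any]): Int = {
    // The values must match updateColumns positionally, as documented.
    require(updateColumns.size == newValues.size,
      "updateColumns must match the updated values")
    var affected = 0
    for (r <- rows if matches(r)) {
      updateColumns.zip(newValues).foreach { case (c, v) => r(c) = v }
      affected += 1
    }
    affected
  }

  def row(i: Int): collection.Map[String, Any] = rows(i)
}
```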
If required, inject the key columns into the original relation.
A LogicalPlan implementation for a Snappy row table whose contents are retrieved using a JDBC URL or DataSource.