Function to take a variable number of values and create an array column from them.
the first input value.
remaining input values (variable arity).
an array column containing the input values.
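A minimal sketch of such a helper, assuming it wraps Spark's built-in `array` and `lit` functions (the name `arrayColumn` and the signature are assumptions, not the library's actual API):

```scala
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{array, lit}

// Hypothetical helper: builds an array column from a variable number of values.
def arrayColumn(value: Any, values: Any*): Column =
  array((value +: values).map(lit): _*)
```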
UDF to find and return the element of the arr sequence at the passed index. If no element is found, null is returned.
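The core of such a UDF can be sketched as follows; the string element type is an assumption, not something the original documentation specifies:

```scala
import org.apache.spark.sql.functions.udf

// Returns the element at the given index, or null when the array is null
// or the index is out of bounds.
val elementAtIndex = udf { (arr: Seq[String], index: Int) =>
  if (arr != null && index >= 0 && index < arr.length) arr(index) else null
}
```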
Spark UDF that makes a single blocking REST API call to a given URL. The result of this UDF is always produced, contains a proper error if the call failed at any stage, and never interrupts job execution (unless the UDF is called with an invalid signature).
The default timeout can be configured through the spark.network.timeout Spark configuration option.
Parameters:
Response - a struct with the following fields:
name: value
response headers (e.g. [Server: akka-http/10.1.10, Date: Tue, 07 Sep 2021 18:11:47 GMT])
Function to add a new typecast column to the input dataframe. The newly added column is a typecast version of the passed column. The typecast operation is supported for string, boolean, byte, short, int, long, float, double, decimal, date and timestamp types.
spark session
input dataframe
input column to be typecast.
data type to cast the column to.
name of the new column to be added to the dataframe.
new dataframe with the typecast column added.
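A hedged sketch of the typecast helper described above, assuming it reduces to `Column.cast` (which accepts the type name as a string); the signature mirrors the documented parameters but is an assumption:

```scala
import org.apache.spark.sql.{Column, DataFrame, SparkSession}

// Sketch: adds outputCol as a typecast copy of inputCol.
// The spark parameter mirrors the documented signature but is unused here.
def castColumn(spark: SparkSession, df: DataFrame, inputCol: Column,
               dataType: String, outputCol: String): DataFrame =
  df.withColumn(outputCol, inputCol.cast(dataType))
```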
Function that registers 4 different UDFs with the Spark registry: lookup_match, lookup_count, lookup_row and lookup. It stores the data of the input dataframe in a broadcast variable, then uses this broadcast variable inside the different lookup functions.
lookup: returns the first matching row for the given input keys.
lookup_count: returns the count of all matching rows for the given input keys.
lookup_match: returns 0 if there is no matching row and 1 if at least one row matches the given input keys.
lookup_row: returns all matching rows for the given input keys.
These lookup functions are registered for up to 10 input keys.
UDF Name
input dataframe
spark session
columns to be used as keys in lookup functions.
schema of entire row which will be stored for each matching key.
registered UDF definitions for the lookup functions; each UDF returns a different result depending on the lookup function used.
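The broadcast-and-register pattern can be sketched as follows for a single string key; the real functions accept up to 10 keys and return rows of the passed schema, so the names, key handling and return types here are simplifying assumptions:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch: collect the dataframe, group rows by the lookup key columns,
// broadcast the map, and register the four lookup UDFs against it.
def registerLookups(df: DataFrame, spark: SparkSession, keyCols: Seq[String]): Unit = {
  val grouped = df.collect().groupBy(r => keyCols.map(c => String.valueOf(r.getAs[Any](c))))
  val bc = spark.sparkContext.broadcast(grouped)
  spark.udf.register("lookup",       (k: String) => bc.value.get(Seq(k)).map(_.head.toString).orNull)
  spark.udf.register("lookup_count", (k: String) => bc.value.get(Seq(k)).map(_.length).getOrElse(0))
  spark.udf.register("lookup_match", (k: String) => if (bc.value.contains(Seq(k))) 1 else 0)
  spark.udf.register("lookup_row",   (k: String) => bc.value.get(Seq(k)).map(_.map(_.toString)).orNull)
}
```

Broadcasting the collected rows keeps the lookup data on every executor, which is why this approach only suits dataframes small enough to fit in driver and executor memory.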
Method to create a UDF that looks for the passed input double in the input dataframe. The function first loads the dataframe data into a broadcast variable and then defines a UDF that searches for the input double value in the data stored in the broadcast variable. If the input double lies between the values of col1 and col2, the corresponding row is added to the returned result; if it does not, null is returned for the current row in the result.
created UDF name
input dataframe
spark session
column whose value to be considered as minimum in comparison.
column whose value to be considered as maximum in comparison.
remaining column names to be part of result.
the registered UDF, which returns the rows corresponding to each row of the dataframe on which the range UDF is called.
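The range-lookup idea can be sketched like this, assuming the min/max columns hold doubles and the matching rows are returned in a simplified string form (the name and return encoding are assumptions):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

// Sketch: broadcast (min, max, remaining columns) triples, then register a UDF
// that returns the first row whose [min, max] range contains the input double,
// or null when no range matches.
def registerRangeLookup(name: String, df: DataFrame, spark: SparkSession,
                        minCol: String, maxCol: String, resultCols: String*): Unit = {
  val ranges = df.select((Seq(minCol, maxCol) ++ resultCols).map(col): _*)
    .collect()
    .map(r => (r.getDouble(0), r.getDouble(1), r.toSeq.drop(2)))
  val bc = spark.sparkContext.broadcast(ranges)
  spark.udf.register(name, (x: Double) => {
    val hits = bc.value.collect { case (lo, hi, rest) if x >= lo && x <= hi => rest.mkString(",") }
    if (hits.isEmpty) null else hits.head
  })
}
```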
Function to drop the passed columns from the input dataframe.
spark session
input dataframe.
list of columns to be dropped from dataframe.
new dataframe with dropped columns.
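This likely reduces to `Dataset.drop` with varargs; a minimal sketch under that assumption:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch: drops the listed columns. Spark's drop silently ignores
// column names that are not present in the dataframe.
def dropColumns(spark: SparkSession, df: DataFrame, cols: List[String]): DataFrame =
  df.drop(cols: _*)
```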
By default returns only the first matching record
Returns the last matching record
Boolean Column
Function to add a new column to the passed dataframe. The newly added column's value is decided by the presence of the inputCol value in the array comprised of value and values. If the inputCol value is found, the replaceWith value is added to the new column; otherwise the inputCol value is added.
spark session.
input dataframe.
name of new column to be added.
column name whose value is searched.
value with which to replace searched value if found.
element to be combined in array column
all values to be combined in array column for searching purpose.
dataframe with new column with column name outputCol
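The described behavior maps naturally onto `when`/`isin`; a hedged sketch assuming string-typed values (the helper name is hypothetical):

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit, when}

// Sketch: writes replaceWith to outputCol when inputCol's value matches any
// of the given values, and passes the inputCol value through otherwise.
def replaceIfIn(df: DataFrame, outputCol: String, inputCol: String,
                replaceWith: String, value: String, values: String*): DataFrame =
  df.withColumn(outputCol,
    when(col(inputCol).isin((value +: values): _*), lit(replaceWith))
      .otherwise(col(inputCol)))
```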
Function to add a new column to the passed dataframe. The newly added column's value is decided by the presence of the inputCol value in the array comprised of value, values and null. If the inputCol value is found, the replaceWith value is added to the new column; otherwise the inputCol value is added.
spark session.
input dataframe.
name of new column to be added.
column name whose value is searched.
value with which to replace searched value if found.
element to be combined in array column
all values to be combined in array column for searching purpose.
dataframe with new column with column name outputCol
Function to add a new column to the passed dataframe. The newly added column's value is decided by the presence of the inputCol value in the array comprised of value, values and null. If the inputCol value is found, null is added to the new column; otherwise the inputCol value is added.
spark session.
input dataframe.
name of new Column to be added.
column name whose value is searched.
element to be combined in array column.
all values to be combined in array column for searching purpose.
dataframe with new column with column name outputCol
UDF to find str in the input sequence toBeReplaced and return replace if found; otherwise str is returned.
UDF to find str in the input sequence toBeReplaced and return null if found; otherwise str is returned.
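Both replacement UDFs share the same membership test; a sketch of each (variable names are assumptions):

```scala
import org.apache.spark.sql.functions.udf

// Returns replace when str occurs in toBeReplaced, otherwise str unchanged.
val findAndReplace = udf { (str: String, toBeReplaced: Seq[String], replace: String) =>
  if (toBeReplaced != null && toBeReplaced.contains(str)) replace else str
}

// Returns null when str occurs in toBeReplaced, otherwise str unchanged.
val findAndNullify = udf { (str: String, toBeReplaced: Seq[String]) =>
  if (toBeReplaced != null && toBeReplaced.contains(str)) null else str
}
```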
Function to split the column colName in the input dataframe into multiple columns using the split pattern. If a prefix name is provided, each newly generated column is named with the prefix followed by the column number; otherwise the original column name is used.
spark session.
input dataframe.
column in dataframe which needs to be split into multiple columns.
regex with which column in input dataframe will be split into multiple columns.
column prefix to be used with all newly generated columns.
new dataframe with new columns whose values are generated by splitting the original column colName.
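A sketch of the splitting step using Spark's `split` function; it assumes the number of output columns is known up front and only covers the prefixed-name case:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, split}

// Sketch: splits colName on the regex pattern and materializes numCols
// new columns named <prefix>1, <prefix>2, ... Missing parts come back null.
def splitColumn(df: DataFrame, colName: String, pattern: String,
                prefix: String, numCols: Int): DataFrame = {
  val parts = split(col(colName), pattern)
  (0 until numCols).foldLeft(df) { (acc, i) =>
    acc.withColumn(s"$prefix${i + 1}", parts.getItem(i))
  }
}
```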
UDF to return the nth element from the end of the passed array of elements. If the input sequence has fewer than n elements, the first element is returned.
UDF to take the Nth element from the beginning. If the input sequence has fewer than N elements, an exception is thrown.
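The contrasting fallback behavior of the two UDFs can be sketched as follows, assuming 1-based indexing and string arrays (both are assumptions):

```scala
import org.apache.spark.sql.functions.udf

// nth element counting from the end; falls back to the first element
// when the array has fewer than n elements.
val nthFromLast = udf { (arr: Seq[String], n: Int) =>
  if (arr.length >= n) arr(arr.length - n) else arr.head
}

// nth element counting from the start; throws IndexOutOfBoundsException
// when the array has fewer than n elements.
val nthFromStart = udf { (arr: Seq[String], n: Int) =>
  arr(n - 1)
}
```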
(Since version ) see corresponding Javadoc for more information.
Utility class with different UDFs to take care of miscellaneous tasks.