com.johnsnowlabs.nlp.annotators.sda.vivekn
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
Tokens are needed to identify each word within a sentence boundary. POS tags are optionally submitted to the model in case they are needed. Lemmas are another optional annotator for some models. Bounds of sentiment are hardcoded to 0 as they would be rendered useless.
Annotations that correspond to inputAnnotationCols generated by previous annotators, if any.
Any number of annotations processed for every input annotation. Not necessarily a one-to-one relationship.
Positive: 0, Negative: 1, NA: 2
Requirement for annotator copies.
Wraps annotate so that it happens inside a SparkSQL user defined function, in order to act on org.apache.spark.sql.Column.
UDF function to be applied to inputCols, using this annotator's annotate function as part of the ML transformation.
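A minimal sketch of the idea behind this wrapper, with annotation structs simplified to plain strings; the annotate function, column names, and labels are illustrative assumptions, not the library's actual signatures:

{{{
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

object DfAnnotateSketch {
  // Hypothetical annotate function: tags each token with a naive sentiment label
  def annotate(tokens: Seq[String]): Seq[String] =
    tokens.map(t => if (Set("good", "great").contains(t.toLowerCase)) s"$t/positive" else s"$t/na")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("df-annotate-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq(Seq("this", "movie", "is", "great")).toDF("token")
    // Wrap annotate in a UDF so it can operate on a Column of token arrays
    val dfAnnotate = udf { tokens: Seq[String] => annotate(tokens) }
    df.withColumn("sentiment", dfAnnotate(col("token"))).show(truncate = false)
    spark.stop()
  }
}
}}}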
Override for additional custom schema checks
Content feature limit, to boost performance in very dirty text. Default disabled with -1.
Get content feature limit, to boost performance in very dirty text. Default disabled with -1.
Set of unique words
Get proportion of feature content to be considered relevant. Defaults to 0.5.
Input annotation columns currently in use.
Count of negative words
Gets the annotation column name this annotator will generate.
Count of positive words
Get proportion to lookahead in unimportant features. Defaults to 0.025.
Proportion of feature content to be considered relevant. Defaults to 0.5.
Input annotator types: TOKEN, DOCUMENT
Columns that contain annotations necessary to run this annotator. AnnotatorType is used for both input and output columns if not specified.
Detects negations and transforms them into the not_ form.
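An illustrative sketch of this transformation (not the library's exact implementation), assuming tokens that follow a negation word are prefixed with not_ until a punctuation boundary is reached:

{{{
object NegationSketch {
  private val negations = Set("not", "no", "never", "n't")
  private val boundary = Set(".", ",", ";", "!", "?")

  // Prefix tokens that follow a negation word with "not_" until a punctuation boundary
  def negateSequence(tokens: Seq[String]): Seq[String] = {
    var negating = false
    tokens.map { token =>
      val out = if (negating && !boundary.contains(token)) s"not_$token" else token
      if (negations.contains(token.toLowerCase)) negating = true
      else if (boundary.contains(token)) negating = false
      out
    }
  }

  def main(args: Array[String]): Unit = {
    // prints: List(this, is, not, not_a, not_good, not_movie, .)
    println(negateSequence(Seq("this", "is", "not", "a", "good", "movie", ".")))
  }
}
}}}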
negative_sentences
count of negative words
Output annotator type: SENTIMENT
positive_sentences
count of positive words
Set content feature limit, to boost performance in very dirty text. Default disabled with -1.
Set proportion of feature content to be considered relevant. Defaults to 0.5.
Overrides required annotator columns if different from the default.
Overrides the annotation column name when transforming.
Set proportion to lookahead in unimportant features. Defaults to 0.025.
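A hedged sketch of configuring these tuning parameters, assuming the setter names (setFeatureLimit, setImportantFeatureRatio, setUnimportantFeatureStep) mirror the parameters described above and that the usual setInputCols/setOutputCol methods are available:

{{{
import com.johnsnowlabs.nlp.annotators.sda.vivekn.ViveknSentimentApproach

val sentimentApproach = new ViveknSentimentApproach()
  .setInputCols("sentence", "token")
  .setOutputCol("vivekn_sentiment")
  .setFeatureLimit(-1)              // content feature limit; -1 keeps it disabled (default)
  .setImportantFeatureRatio(0.5)    // proportion of feature content considered relevant (default 0.5)
  .setUnimportantFeatureStep(0.025) // lookahead proportion in unimportant features (default 0.025)
}}}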
Given the requirements are met, this applies the ML transformation within a Pipeline or standalone. The output annotation is generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record the annotations' structural information outside of their content.
Dataset[Row]
Requirement for pipeline transformation validation. It is called on fit().
Proportion to lookahead in unimportant features. Defaults to 0.025.
Takes a Dataset and checks whether all the required annotation types are present.
to be validated
True if all the required types are present, else false
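A hedged sketch of what such a check can look like, assuming each annotator output column advertises its annotator type in the column metadata under an "annotatorType" key (an assumption about the metadata convention, not the library's exact code):

{{{
import org.apache.spark.sql.Dataset

// Returns true only when every required annotator type is advertised by some column
def validate(dataset: Dataset[_], requiredTypes: Seq[String]): Boolean = {
  val presentTypes = dataset.schema.fields
    .filter(_.metadata.contains("annotatorType"))
    .map(_.metadata.getString("annotatorType"))
    .toSet
  requiredTypes.forall(presentTypes.contains)
}
}}}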
words
Required input and expected output annotator types
Inspired by the Vivekn sentiment analysis algorithm: https://github.com/vivekn/sentiment/.
Requires sentence boundaries to give a score in context, and tokenization to make sure tokens are within bounds. Transitive requirements are also needed.
See https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/sda/vivekn for further reference on how to use this API.
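A hedged end-to-end sketch of training and applying this annotator inside a Spark ML Pipeline; the column names, toy training data, and the sentence/token input columns are illustrative assumptions:

{{{
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{SentenceDetector, Tokenizer}
import com.johnsnowlabs.nlp.annotators.sda.vivekn.ViveknSentimentApproach
import org.apache.spark.ml.Pipeline
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("vivekn-sketch").getOrCreate()
import spark.implicits._

// Toy training data with a label column consumed via setSentimentCol
val training = Seq(
  ("I really liked this movie!", "positive"),
  ("The film was horrible and boring.", "negative")
).toDF("text", "sentiment_label")

val documentAssembler = new DocumentAssembler().setInputCol("text").setOutputCol("document")
val sentenceDetector = new SentenceDetector().setInputCols("document").setOutputCol("sentence")
val tokenizer = new Tokenizer().setInputCols("sentence").setOutputCol("token")
val sentiment = new ViveknSentimentApproach()
  .setInputCols("sentence", "token")
  .setOutputCol("vivekn_sentiment")
  .setSentimentCol("sentiment_label")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDetector, tokenizer, sentiment))
val model = pipeline.fit(training)
model.transform(training).select("vivekn_sentiment.result").show(truncate = false)
}}}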