com.johnsnowlabs.nlp.annotators
internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
takes a document and annotations and produces new annotations of this annotator's annotation type
Annotations that correspond to inputAnnotationCols generated by previous annotators, if any
any number of annotations processed for every input annotation; not necessarily a one-to-one relationship
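For reference, the contract described above can be sketched as follows. This is a minimal illustration only, assuming the Annotation case class from com.johnsnowlabs.nlp; the exact trait hosting the method may differ across Spark NLP versions.

import com.johnsnowlabs.nlp.Annotation

object AnnotateSketch {
  // Minimal sketch: every input annotation may yield zero, one, or many
  // output annotations of this annotator's annotation type.
  def annotate(annotations: Seq[Annotation]): Seq[Annotation] = {
    val stopWords = Set("the", "a", "of") // illustrative values only
    annotations.filterNot(ann => stopWords.contains(ann.result.toLowerCase))
  }
}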
Whether to do a case-sensitive comparison over the stop words (Default: false)
requirement for annotators copies
Wraps annotate to happen inside SparkSQL user-defined functions in order to act with org.apache.spark.sql.Column
udf function to be applied to inputCols using this annotator's annotate function as part of ML transformation
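As an illustration of that wrapping pattern (not the actual internal implementation), a per-row Scala function can be lifted into a Spark SQL UDF so it operates on a Column during transformation; the sketch below uses plain token strings instead of Annotation rows.

import org.apache.spark.sql.functions.{col, udf}

// Lift a per-row function into a UDF so it can act on a Column.
val cleanTokensUdf = udf { (tokens: Seq[String]) =>
  val stops = Set("the", "a", "of") // hypothetical stop list
  tokens.filterNot(t => stops.contains(t.toLowerCase))
}

// Applied during a transform-like step (assuming a DataFrame df with a "tokens" column):
// df.withColumn("cleanTokens", cleanTokensUdf(col("tokens")))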
Override for additional custom schema checks
Whether to do a case-sensitive comparison over the stop words (Default: false)
input annotations columns currently used
Locale of the input for case-insensitive matching (Default: system default locale, or Locale.US if the default locale is not in available locales).
Ignored when caseSensitive is true
Gets the annotation column name this annotator is going to generate
The words to be filtered out
Input annotator type: TOKEN
columns that contain annotations necessary to run this annotator; AnnotatorType is used both as input and output columns if not specified
Locale of the input for case-insensitive matching (Default: system default locale, or Locale.US if the default locale is not in available locales).
Ignored when caseSensitive is true.
Output annotator type: TOKEN
Whether to do a case-sensitive comparison over the stop words (Default: false)
Overrides required annotators column if different than default
Locale of the input for case-insensitive matching (Default: system default locale, or Locale.US if the default locale is not in available locales).
Ignored when caseSensitive is true
Overrides annotation column name when transforming
The words to be filtered out
The words to be filtered out (Default: Stop words from MLlib)
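The parameters described above (input/output columns, stop words, case sensitivity, locale) are configured through the corresponding setters. A minimal configuration sketch, assuming the standard Spark NLP setter names (setInputCols, setOutputCol, setStopWords, setCaseSensitive, setLocale):

import com.johnsnowlabs.nlp.annotators.StopWordsCleaner

val stopWordsCleaner = new StopWordsCleaner()
  .setInputCols("token")                     // TOKEN annotations from a Tokenizer
  .setOutputCol("cleanTokens")               // new TOKEN annotations without stop words
  .setStopWords(Array("this", "is", "and"))  // overrides the MLlib default list
  .setCaseSensitive(false)                   // Default: false
  .setLocale("en_US")                        // ignored when caseSensitive is true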
Given requirements are met, this applies the ML transformation within a Pipeline or standalone. The output annotation will be generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record the annotations' structural information outside of its content.
Dataset[Row]
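A hypothetical call, assuming a DataFrame tokenized that already holds the required TOKEN annotations and the stopWordsCleaner configured above:

// `tokenized` already contains "document" and "token" annotation columns;
// transform() appends "cleanTokens" as a new column and leaves the
// previous annotation columns in place.
val cleaned = stopWordsCleaner.transform(tokenized)
cleaned.selectExpr("token.result", "cleanTokens.result").show(truncate = false)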
requirement for pipeline transformation validation. It is called on fit()
takes a Dataset and checks to see if all the required annotation types are present.
to be validated
True if all the required types are present, else false
A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
Required input and expected output annotator types
This annotator takes a sequence of strings (e.g. the output of a Tokenizer, Normalizer, Lemmatizer, and Stemmer) and drops all the stop words from the input sequences.
By default, it uses stop words from MLlib's StopWordsRemover. Stop words can also be defined by explicitly setting them with
setStopWords(value: Array[String])
or loaded from pretrained models using the pretrained method of its companion object. For available pretrained models please see the Models Hub.
For extended examples of usage, see the Spark NLP Workshop and StopWordsCleanerTestSpec.
Example
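The following is a sketch of a typical end-to-end pipeline in the style of the Spark NLP documentation. It assumes an active SparkSession available as spark and uses the aggregate imports com.johnsnowlabs.nlp.base._ and com.johnsnowlabs.nlp.annotator._; column names and sample text are illustrative.

import spark.implicits._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDetector = new SentenceDetector()
  .setInputCols("document")
  .setOutputCol("sentence")

val tokenizer = new Tokenizer()
  .setInputCols("sentence")
  .setOutputCol("token")

val stopWords = new StopWordsCleaner()
  .setInputCols("token")
  .setOutputCol("cleanTokens")
  .setCaseSensitive(false)

val pipeline = new Pipeline().setStages(Array(
  documentAssembler,
  sentenceDetector,
  tokenizer,
  stopWords
))

val data = Seq("This is my first sentence. This is my second.").toDF("text")
val result = pipeline.fit(data).transform(data)

// The cleanTokens column holds the TOKEN annotations with stop words removed.
result.selectExpr("cleanTokens.result").show(truncate = false)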