com.johnsnowlabs.nlp.annotators.spell.context
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
Takes a document and annotations and produces new annotations of this annotator's annotation type.
Annotations that correspond to inputAnnotationCols generated by previous annotators, if any.
Any number of annotations processed for every input annotation; not necessarily a one-to-one relationship.
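A minimal sketch of calling annotate directly, assuming `spellModel` is an already loaded ContextSpellCheckerModel and the TOKEN annotations are built by hand for illustration:

```scala
import com.johnsnowlabs.nlp.Annotation
import com.johnsnowlabs.nlp.AnnotatorType.TOKEN

// Hand-built TOKEN annotations for the text "the smow" (illustrative only)
val tokens = Seq(
  Annotation(TOKEN, 0, 2, "the", Map("sentence" -> "0")),
  Annotation(TOKEN, 4, 7, "smow", Map("sentence" -> "0"))
)

// Produces new TOKEN annotations with the corrections applied
val corrected: Seq[Annotation] = spellModel.annotate(tokens)
```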
What case combinations to try when generating candidates (Default: CandidateStrategy.ALL).
Classes the spell checker recognizes
If true, tokens will be compared in lowercase against the vocabulary (Default: false).
ConfigProto from TensorFlow, serialized into a byte array. Get with config_proto.SerializeToString().
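A hedged sketch of passing a serialized ConfigProto; the byte values below are illustrative only, and in practice would be produced on the Python side of TensorFlow:

```scala
import com.johnsnowlabs.nlp.annotators.spell.context.ContextSpellCheckerModel

// Illustrative bytes only; obtain real ones in Python, e.g.
//   tf.compat.v1.ConfigProto(allow_soft_placement=True).SerializeToString()
val configBytes: Array[Int] = Array(56, 1)

val spellModel = ContextSpellCheckerModel.pretrained()
  .setConfigProtoBytes(configBytes)
```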
Requirement for annotator copies.
Whether to correct special symbols or skip spell checking for them
Wraps annotate to happen inside a SparkSQL user-defined function in order to act on org.apache.spark.sql.Column.
UDF to be applied to inputCols, using this annotator's annotate function as part of the ML transformation.
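A conceptual stand-in for this pattern (not the library's internal implementation): a plain token-correction function lifted into a Spark SQL UDF so it can be applied to a DataFrame column. Both `tokenizedDf` and its "tokens" column are assumptions:

```scala
import org.apache.spark.sql.functions.{col, udf}

// Stand-in correction function; the real annotator operates on Annotation rows
val correctTokens: Seq[String] => Seq[String] =
  tokens => tokens.map(t => if (t == "smow") "snow" else t)

val correctUdf = udf(correctTokens)

// `tokenizedDf` is assumed to have an array<string> column named "tokens"
val withChecked = tokenizedDf.withColumn("checked", correctUdf(col("tokens")))
```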
Threshold perplexity for a word to be considered an error.
Override for additional custom schema checks
Controls the influence of individual word frequency in the decision (Default: 120.0f).
Input annotation columns currently used.
Gets the name of the annotation column that will be generated.
Mapping of ids to vocabulary
Input Annotator Types: TOKEN
Columns that contain annotations necessary to run this annotator. AnnotatorType is used for both input and output columns if not specified.
Maximum number of candidates for every word (Default: 6).
Maximum size for the window used to remember history prior to every correction (Default: 5).
Output Annotator Types: TOKEN
Overrides the required annotators column if different from the default.
Overrides the annotation column name when transforming.
Tradeoff between the cost of a word and a transition in the language model (Default: 18.0f).
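A sketch of tuning several of the parameters documented in this section. The values shown are the documented defaults or illustrative choices, not recommendations, and the setter names are assumed to follow the parameter names:

```scala
import com.johnsnowlabs.nlp.annotators.spell.context.{CandidateStrategy, ContextSpellCheckerModel}

val spellChecker = ContextSpellCheckerModel.pretrained()
  .setInputCols("token")
  .setOutputCol("checked")
  .setCaseStrategy(CandidateStrategy.ALL) // case combinations for candidates
  .setCompareLowcase(false)               // compare tokens in lowercase
  .setCorrectSymbols(false)               // correct special symbols or skip them
  .setGamma(120.0f)                       // influence of word frequency
  .setMaxCandidates(6)                    // candidates per word
  .setMaxWindowLen(5)                     // history window size
  .setTradeoff(18.0f)                     // word cost vs. transition cost
  .setUseNewLines(false)                  // keep paragraph-wise correction
  .setWordMaxDistance(3)                  // max edit distance for candidates
```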
Given requirements are met, this applies the ML transformation within a Pipeline or stand-alone. The output annotation will be generated as a new column; previous annotations are still available separately. Metadata is built at the schema level to record the annotations' structural information outside of their content.
Dataset[Row]
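A sketch of applying transform directly, assuming `tokenizedDf` already carries the required TOKEN annotations in a column matching the model's input columns:

```scala
// `spellChecker` and `tokenizedDf` are assumed from the sketches above
val annotated = spellChecker.transform(tokenizedDf)

// The output annotation is available as a new column alongside the inputs
annotated.selectExpr("checked.result").show(false)
```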
Requirement for pipeline transformation validation; called on fit().
Required UID for storing the annotator to disk.
When set to true, new lines will be treated as any other character (Default: false). When set to false, correction is applied to paragraphs as defined by newline characters.
Takes a Dataset and checks to see if all the required annotation types are present.
to be validated
True if all the required types are present, else false
Word frequencies from the vocabulary.
Mapping of vocabulary to ids
Maximum distance for the generated candidates for every word, minimum 1.
A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
Required input and expected output annotator types
Implements a deep-learning based Noisy Channel Model Spell Algorithm. Correction candidates are extracted combining context information and word information.
Spell checking is a sequence-to-sequence mapping problem. Given an input sequence, potentially containing a certain number of errors, ContextSpellChecker will rank correction sequences according to three things: the different correction candidates for each word (word level), the surrounding text of each word, i.e. its context (sentence level), and the relative cost of the edit operations at the character level each candidate requires (subword level). For an in-depth explanation of the module see the article Applying Context Aware Spell Checking in Spark NLP.
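Illustratively, a noisy-channel ranking can be thought of as trading off a language-model (context) cost against a channel (edit) cost. The sketch below is a simplification with made-up numbers, not the model's actual internals:

```scala
// Simplified noisy-channel score: lower is better. `tradeoff` plays the role
// of the tradeoff parameter documented above; the costs are hypothetical.
def candidateScore(lmCost: Double, editCost: Double, tradeoff: Double): Double =
  lmCost + tradeoff * editCost

// "snow" fits the context "white with ..." far better than other one-edit
// variants of "smow", so it wins despite the equal edit cost.
val snow = candidateScore(lmCost = 2.1, editCost = 1.0, tradeoff = 18.0)
val smew = candidateScore(lmCost = 9.7, editCost = 1.0, tradeoff = 18.0)
assert(snow < smew)
```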
This is the instantiated model of the ContextSpellCheckerApproach. For training your own model, please see the documentation of that class.
Pretrained models can be loaded with pretrained of the companion object:
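A minimal loading call in the standard Spark NLP style (the column names are illustrative):

```scala
val spellChecker = ContextSpellCheckerModel.pretrained()
  .setInputCols("token")
  .setOutputCol("checked")
```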
"spellcheck_dl"
, if no name is provided. For available pretrained models please see the Models Hub.For extended examples of usage, see the Spark NLP Workshop and the ContextSpellCheckerTestSpec.
Example
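A sketch of an end-to-end pipeline, assuming a running SparkSession named `spark`; the misspelling "smow" is expected to be corrected to "snow":

```scala
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.annotators.spell.context.ContextSpellCheckerModel
import org.apache.spark.ml.Pipeline
import spark.implicits._ // assumes an active SparkSession named `spark`

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("doc")

val tokenizer = new Tokenizer()
  .setInputCols("doc")
  .setOutputCol("token")

val spellChecker = ContextSpellCheckerModel.pretrained()
  .setInputCols("token")
  .setOutputCol("checked")

val pipeline = new Pipeline()
  .setStages(Array(documentAssembler, tokenizer, spellChecker))

val data = Seq("It was a cold , dreary day and the country was white with smow .").toDF("text")
val result = pipeline.fit(data).transform(data)

result.select("checked.result").show(false)
```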
See also NorvigSweetingModel and SymmetricDeleteModel for alternative approaches to spell checking.