com.johnsnowlabs.nlp.annotators.spell.norvig
Sensitivity on spell checking. Defaults to false. Might affect accuracy.
Spell checking algorithm inspired by the Norvig model.
File with a list of correct words.
Increases search at the cost of performance. Enables an extra check for word combinations; more accuracy at a performance cost.
Maximum duplicate characters in a word to consider. Defaults to 2.
Applies frequency over Hamming in intersections. When false, Hamming takes priority.
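To make the frequency-versus-Hamming trade-off concrete, here is an illustrative sketch (not Spark NLP's implementation): given equal-length candidate corrections, either the most frequent candidate wins, or the candidate closest in Hamming distance wins. The frequency table and function names are assumptions for the example.

```python
def hamming(a: str, b: str) -> int:
    """Hamming distance between equal-length words (count of differing positions)."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def best_candidate(word, candidates, frequencies, frequency_priority=True):
    """Pick a correction from equal-length candidates.

    `frequencies` maps each candidate to an assumed corpus frequency.
    When frequency_priority is True the most frequent candidate wins;
    otherwise the candidate closest in Hamming distance wins.
    """
    if frequency_priority:
        return max(candidates, key=lambda c: frequencies.get(c, 0))
    return min(candidates, key=lambda c: hamming(word, c))

# Misspelling "kitten" could become "mitten" (distance 1) or "bitter" (distance 2).
freqs = {"mitten": 50, "bitter": 400}
print(best_candidate("kitten", ["mitten", "bitter"], freqs, frequency_priority=True))   # bitter
print(best_candidate("kitten", ["mitten", "bitter"], freqs, frequency_priority=False))  # mitten
```

With frequency priority, the far more common "bitter" is chosen even though "mitten" is closer in Hamming distance.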
Sensitivity on spell checking. Defaults to false. Might affect accuracy.
Increases search at the cost of performance. Enables an extra check for word combinations.
Maximum duplicate characters in a word to consider. Defaults to 2.
Applies frequency over Hamming in intersections. When false, Hamming takes priority.
Input annotation columns currently used.
Hamming intersections to attempt. Defaults to 10.
Gets the annotation column name to be generated.
Word reduction limit. Defaults to 3.
Increases performance at the cost of accuracy. Faster but less accurate mode.
Vowel swap attempts. Defaults to 6.
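As a toy illustration of the kind of search this parameter bounds (again a sketch, not Spark NLP's code), candidates can be generated by substituting each vowel of a word with the other vowels:

```python
VOWELS = "aeiou"

def vowel_swaps(word: str) -> set:
    """All single-vowel substitutions of `word` (excluding the word itself)."""
    out = set()
    for i, ch in enumerate(word):
        if ch in VOWELS:
            for v in VOWELS:
                if v != ch:
                    out.add(word[:i] + v + word[i + 1:])
    return out

# "pet" yields the four other vowel variants.
print(sorted(vowel_swaps("pet")))  # ['pat', 'pit', 'pot', 'put']
```

Intersecting such candidates with a dictionary recovers corrections like "hallo" → "hello"; a limit on vowel-swap attempts keeps this search from exploding on long words.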
Minimum size of a word before ignoring it. Defaults to 3.
Output annotator type: TOKEN
Columns that contain annotations necessary to run this annotator. AnnotatorType is used for both input and output columns if not specified.
Hamming intersections to attempt. Defaults to 10.
Output annotator type: TOKEN
Word reduction limit. Defaults to 3.
Sensitivity on spell checking. Defaults to false. Might affect accuracy.
Path to a file with properly spelled words. tokenPattern is the regex pattern used to identify words in the text, readAs can be LINE_BY_LINE or SPARK_DATASET, with options passed to the Spark reader if the latter is set. The dictionary needs a 'tokenPattern' regex for separating words.
External path to a file with properly spelled words. tokenPattern is the regex pattern used to identify words in the text, readAs can be LINE_BY_LINE or SPARK_DATASET, with options passed to the Spark reader if the latter is set. The dictionary needs a 'tokenPattern' regex for separating words.
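The role of the tokenPattern option can be sketched as follows; this is a hypothetical illustration of regex-driven dictionary loading, not Spark NLP's loader, and `load_dictionary` is an invented name for the example:

```python
import re
from collections import Counter

def load_dictionary(lines, token_pattern=r"\S+"):
    """Accumulate word frequencies from dictionary lines using a regex tokenizer.

    `lines` stands in for a file read LINE_BY_LINE; `token_pattern` plays the
    role of the 'tokenPattern' option described above.
    """
    counts = Counter()
    for line in lines:
        counts.update(re.findall(token_pattern, line))
    return counts

freqs = load_dictionary(["hello world", "hello there"], token_pattern=r"[a-zA-Z]+")
print(freqs["hello"])  # 2
```

The same lines with a different tokenPattern yield different words, which is why the dictionary format must ship a regex that matches how its entries are written.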
Increases search at the cost of performance. Enables an extra check for word combinations.
Maximum duplicate characters in a word to consider. Defaults to 2.
Applies frequency over Hamming in intersections. When false, Hamming takes priority.
Overrides required annotator columns if different than default.
Hamming intersections to attempt. Defaults to 10.
Overrides the annotation column name when transforming.
Word reduction limit. Defaults to 3.
Increases performance at the cost of accuracy. Faster but less accurate mode.
Vowel swap attempts. Defaults to 6.
Minimum size of a word before ignoring it. Defaults to 3.
Increases performance at the cost of accuracy. Faster but less accurate mode.
Requirement for pipeline transformation validation. It is called on fit().
Takes a Dataset and checks whether all the required annotation types are present.
Dataset to be validated
True if all the required types are present, else false
Vowel swap attempts.
Vowel swap attempts. Defaults to 6
Minimum size of word before ignoring.
Minimum size of word before ignoring. Defaults to 3 ,Minimum size of word before moving on. Defaults to 3.
Required input and expected output annotator types
This annotator retrieves tokens and makes corrections automatically if they are not found in an English dictionary. Inspired by the Norvig model.
Inspired by https://github.com/wolfgarbe/SymSpell
The Symmetric Delete spelling correction algorithm reduces the complexity of edit candidate generation and dictionary lookup for a given Damerau-Levenshtein distance. It is six orders of magnitude faster (than the standard approach with deletes + transposes + replaces + inserts) and language independent.
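The contrast described above can be sketched in a toy form (this is an illustration of the two ideas, not the SymSpell library): Norvig-style generation enumerates all deletes + transposes + replaces + inserts of the query, whereas the Symmetric Delete approach precomputes deletions of every dictionary word once, so lookup only needs the (far fewer) deletions of the query.

```python
from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit away from `word` (Norvig-style candidate generation)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {L + R[1:] for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    replaces = {L + c + R[1:] for L, R in splits if R for c in ALPHABET}
    inserts = {L + c + R for L, R in splits for c in ALPHABET}
    return deletes | transposes | replaces | inserts

def deletes1(word):
    """Only the single-character deletions of `word`."""
    return {word[:i] + word[i + 1:] for i in range(len(word))}

def build_delete_index(dictionary):
    """Precompute: map every dictionary word and each of its deletes to that word."""
    index = defaultdict(set)
    for w in dictionary:
        index[w].add(w)
        for d in deletes1(w):
            index[d].add(w)
    return index

def symspell_lookup(word, index):
    """Distance-1 lookup using only deletions of the query (symmetric delete).

    A real implementation additionally verifies candidates with a true edit
    distance computation to filter out occasional false positives.
    """
    found = set(index.get(word, set()))
    for d in deletes1(word):
        found |= index.get(d, set())
    return found

index = build_delete_index({"hello", "help", "hero"})
print(sorted(symspell_lookup("helo", index)))  # ['hello', 'help', 'hero']
```

The delete-only lookup finds the same distance-1 neighbors that edits-1 generation would, but the query side touches only len(word) + 1 strings instead of tens of thousands for longer words, which is where the speedup comes from.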
See https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/spell/norvig/NorvigSweetingTestSpec.scala for further reference on how to use this API