class Tokenizer extends AnnotatorApproach[TokenizerModel]
Tokenizes raw text in document type columns into TokenizedSentence.
This class represents a non-fitted tokenizer. Fitting it will cause the internal RuleFactory to construct the rules for tokenizing from the input configuration.
Identifies tokens with tokenization open standards. A few rules allow customizing it if the defaults do not fit user needs.
For extended examples of usage, see the Examples and the Tokenizer test class.
Example:

import spark.implicits._
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.Tokenizer
import org.apache.spark.ml.Pipeline

val data = Seq("I'd like to say we didn't expect that. Jane's boyfriend.").toDF("text")

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .fit(data)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer)).fit(data)
val result = pipeline.transform(data)

result.selectExpr("token.result").show(false)
+-----------------------------------------------------------------------+
|result                                                                 |
+-----------------------------------------------------------------------+
|[I'd, like, to, say, we, didn't, expect, that, ., Jane's, boyfriend, .]|
+-----------------------------------------------------------------------+
Instance Constructors
- new Tokenizer()
- new Tokenizer(uid: String)
Type Members
- type AnnotatorType = String
- Definition Classes
- HasOutputAnnotatorType
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def $[T](param: Param[T]): T
- Attributes
- protected
- Definition Classes
- Params
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def _fit(dataset: Dataset[_], recursiveStages: Option[PipelineModel]): TokenizerModel
- Attributes
- protected
- Definition Classes
- AnnotatorApproach
- def addContextChars(v: String): Tokenizer.this.type
Add a one-character string to rip off from tokens, such as parentheses or question marks. Ignored if using prefix, infix or suffix patterns.
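For example, a minimal usage sketch (assuming a DocumentAssembler writing to a "document" column, as in the class example above; the character choice is illustrative):

import com.johnsnowlabs.nlp.annotators.Tokenizer

// Also rip off the percent sign at token boundaries.
val tokenizerWithContextChar = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .addContextChars("%")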
- def addException(value: String): Tokenizer.this.type
Add a single exception
- def addInfixPattern(value: String): Tokenizer.this.type
Add an extension pattern regex with groups to the top of the rules (these will be targeted first, from the more specific to the more general).
- def addSplitChars(v: String): Tokenizer.this.type
One-character string to split tokens inside, such as hyphens. Ignored if using infix, prefix or suffix patterns.
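For example, a minimal sketch (same pipeline assumptions as above; the split character is illustrative):

// "snake_case" should now split into "snake" and "case".
val tokenizerWithSplitChar = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .addSplitChars("_")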
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def beforeTraining(spark: SparkSession): Unit
- Definition Classes
- AnnotatorApproach
- def buildRuleFactory: RuleFactory
Build the rule factory, which combines all defined parameters to build the regex that is applied to tokens.
- val caseSensitiveExceptions: BooleanParam
Whether to follow case sensitivity in exceptions (Default: true)
- final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
- final def clear(param: Param[_]): Tokenizer.this.type
- Definition Classes
- Params
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @HotSpotIntrinsicCandidate() @native()
- val contextChars: StringArrayParam
Character list used to separate from token boundaries (Default: Array(".", ",", ";", ":", "!", "?", "*", "-", "(", ")", "\"", "'"))
- final def copy(extra: ParamMap): Estimator[TokenizerModel]
- Definition Classes
- AnnotatorApproach → Estimator → PipelineStage → Params
- def copyValues[T <: Params](to: T, extra: ParamMap): T
- Attributes
- protected
- Definition Classes
- Params
- final def defaultCopy[T <: Params](extra: ParamMap): T
- Attributes
- protected
- Definition Classes
- Params
- val description: String
Annotator that identifies points of analysis in a useful manner
- Definition Classes
- Tokenizer → AnnotatorApproach
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- val exceptions: StringArrayParam
Words that won't be affected by tokenization rules
- val exceptionsPath: ExternalResourceParam
Path to file containing list of exceptions
- def explainParam(param: Param[_]): String
- Definition Classes
- Params
- def explainParams(): String
- Definition Classes
- Params
- final def extractParamMap(): ParamMap
- Definition Classes
- Params
- final def extractParamMap(extra: ParamMap): ParamMap
- Definition Classes
- Params
- final def fit(dataset: Dataset[_]): TokenizerModel
- Definition Classes
- AnnotatorApproach → Estimator
- def fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[TokenizerModel]
- Definition Classes
- Estimator
- Annotations
- @Since("2.0.0")
- def fit(dataset: Dataset[_], paramMap: ParamMap): TokenizerModel
- Definition Classes
- Estimator
- Annotations
- @Since("2.0.0")
- def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): TokenizerModel
- Definition Classes
- Estimator
- Annotations
- @varargs() @Since("2.0.0")
- final def get[T](param: Param[T]): Option[T]
- Definition Classes
- Params
- def getCaseSensitiveExceptions(value: Boolean): Boolean
Whether to follow case sensitivity for matching exceptions in text
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def getContextChars: Array[String]
List of one-character strings to rip off from tokens, such as parentheses or question marks. Ignored if using prefix, infix or suffix patterns.
- final def getDefault[T](param: Param[T]): Option[T]
- Definition Classes
- Params
- def getExceptions: Array[String]
- def getInfixPatterns: Array[String]
Get the extension pattern regexes with groups at the top of the rules (targeted first, from the more specific to the more general).
- def getInputCols: Array[String]
- returns
input annotations columns currently used
- Definition Classes
- HasInputAnnotationCols
- def getLazyAnnotator: Boolean
- Definition Classes
- CanBeLazy
- def getMaxLength(value: Int): Int
Get the maximum allowed length for each token
- def getMinLength(value: Int): Int
Get the minimum allowed length for each token
- final def getOrDefault[T](param: Param[T]): T
- Definition Classes
- Params
- final def getOutputCol: String
Gets the annotation column name that will be generated
- Definition Classes
- HasOutputAnnotationCol
- def getParam(paramName: String): Param[Any]
- Definition Classes
- Params
- def getPrefixPattern: String
Regex to identify subtokens that come at the beginning of the token. The regex has to start with \\A and must contain groups (). Each group will become a separate token within the prefix. Defaults to non-letter characters, e.g. quotes or parentheses.
- def getSplitChars: Array[String]
List of one-character strings to split tokens inside, such as hyphens. Ignored if using infix, prefix or suffix patterns.
- def getSplitPattern: String
Regex pattern to separate from the inside of tokens. Takes priority over splitChars.
- def getSuffixPattern: String
Regex to identify subtokens that are at the end of the token. The regex has to end with \\z and must contain groups (). Each group will become a separate token within the suffix. Defaults to non-letter characters, e.g. quotes or parentheses.
- def getTargetPattern: String
Basic regex rule to identify a candidate for tokenization. Defaults to \\S+, which means anything that is not a space.
- final def hasDefault[T](param: Param[T]): Boolean
- Definition Classes
- Params
- def hasParam(paramName: String): Boolean
- Definition Classes
- Params
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- val infixPatterns: StringArrayParam
Regex patterns that match tokens within a single target. Groups identify different sub-tokens. Multiple defaults are provided.
Infix patterns must use regex groups. Note that each group will result in a separate token.
Example:

import org.apache.spark.ml.Pipeline
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.DocumentAssembler

val textDf = sqlContext.sparkContext.parallelize(Array("l'une d'un l'un, des l'extrême des l'extreme")).toDF("text")
val documentAssembler = new DocumentAssembler().setInputCol("text").setOutputCol("sentences")
val tokenizer = new Tokenizer()
  .setInputCols("sentences")
  .setOutputCol("tokens")
  .setInfixPatterns(Array("([\\p{L}\\w]+'{1})([\\p{L}\\w]+)"))
new Pipeline().setStages(Array(documentAssembler, tokenizer)).fit(textDf).transform(textDf).select("tokens.result").show(false)

This will yield:
l', une, d', un, l', un, , , des, l', extrême, des, l', extreme
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
- val inputAnnotatorTypes: Array[AnnotatorType]
Input annotator type: DOCUMENT
- Definition Classes
- Tokenizer → HasInputAnnotationCols
- final val inputCols: StringArrayParam
Columns that contain annotations necessary to run this annotator. AnnotatorType is used both as input and output columns if not specified.
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
- final def isDefined(param: Param[_]): Boolean
- Definition Classes
- Params
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def isSet(param: Param[_]): Boolean
- Definition Classes
- Params
- def isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
- val lazyAnnotator: BooleanParam
- Definition Classes
- CanBeLazy
- def log: Logger
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logDebug(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logError(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logInfo(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logName: String
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logTrace(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
- def logWarning(msg: => String): Unit
- Attributes
- protected
- Definition Classes
- Logging
- val maxLength: IntParam
Maximum allowed length for each token
- val minLength: IntParam
Minimum allowed length for each token
- def msgHelper(schema: StructType): String
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @HotSpotIntrinsicCandidate() @native()
- def onTrained(model: TokenizerModel, spark: SparkSession): Unit
- Definition Classes
- AnnotatorApproach
- val optionalInputAnnotatorTypes: Array[String]
- Definition Classes
- HasInputAnnotationCols
- val outputAnnotatorType: AnnotatorType
Output annotator type: TOKEN
- Definition Classes
- Tokenizer → HasOutputAnnotatorType
- final val outputCol: Param[String]
- Attributes
- protected
- Definition Classes
- HasOutputAnnotationCol
- lazy val params: Array[Param[_]]
- Definition Classes
- Params
- val prefixPattern: Param[String]
Regex with groups that begins with \\A to match the target prefix. Overrides the contextChars Param.
- def save(path: String): Unit
- Definition Classes
- MLWritable
- Annotations
- @throws("If the input path already exists but overwrite is not enabled.") @Since("1.6.0")
- final def set(paramPair: ParamPair[_]): Tokenizer.this.type
- Attributes
- protected
- Definition Classes
- Params
- final def set(param: String, value: Any): Tokenizer.this.type
- Attributes
- protected
- Definition Classes
- Params
- final def set[T](param: Param[T], value: T): Tokenizer.this.type
- Definition Classes
- Params
- def setCaseSensitiveExceptions(value: Boolean): Tokenizer.this.type
Whether to follow case sensitivity for matching exceptions in text
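For example, a minimal sketch (same pipeline assumptions as the class example; the exception value is illustrative):

// With case sensitivity disabled, "new york" and "New York" should both be kept intact.
val caseInsensitiveTokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .setExceptions(Array("New York"))
  .setCaseSensitiveExceptions(false)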
- def setContextChars(v: Array[String]): Tokenizer.this.type
List of one-character strings to rip off from tokens, such as parentheses or question marks. Ignored if using prefix, infix or suffix patterns.
- final def setDefault(paramPairs: ParamPair[_]*): Tokenizer.this.type
- Attributes
- protected
- Definition Classes
- Params
- final def setDefault[T](param: Param[T], value: T): Tokenizer.this.type
- Attributes
- protected[org.apache.spark.ml]
- Definition Classes
- Params
- def setExceptions(value: Array[String]): Tokenizer.this.type
List of tokens to not alter at all. Allows composite tokens like two-word tokens that the user may not want to split.
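For example, a minimal sketch keeping composite tokens intact (the exception values are illustrative):

// "New York" and "e-mail" are left as single tokens rather than being split.
val tokenizerWithExceptions = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .setExceptions(Array("New York", "e-mail"))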
- def setExceptionsPath(path: String, readAs: Format = ReadAs.TEXT, options: Map[String, String] = Map("format" -> "text")): Tokenizer.this.type
Path to txt file with list of token exceptions
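For example, a minimal sketch (the file path is hypothetical, and the file is assumed to hold one exception per line, matching the default "text" format):

// Load token exceptions from a plain-text file, keeping the default readAs and options.
val tokenizerWithExceptionFile = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .setExceptionsPath("exceptions.txt")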
- def setInfixPatterns(value: Array[String]): Tokenizer.this.type
Set a list of regex patterns that match tokens within a single target. Groups identify different sub-tokens. Multiple defaults are provided.
- final def setInputCols(value: String*): Tokenizer.this.type
- Definition Classes
- HasInputAnnotationCols
- def setInputCols(value: Array[String]): Tokenizer.this.type
Overrides required annotators column if different than default
- Definition Classes
- HasInputAnnotationCols
- def setLazyAnnotator(value: Boolean): Tokenizer.this.type
- Definition Classes
- CanBeLazy
- def setMaxLength(value: Int): Tokenizer.this.type
Set the maximum allowed length for each token
- def setMinLength(value: Int): Tokenizer.this.type
Set the minimum allowed length for each token
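For example, a minimal sketch bounding token lengths with setMinLength and setMaxLength (the bounds are illustrative):

// Tokens shorter than 3 or longer than 15 characters are dropped.
val boundedTokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .setMinLength(3)
  .setMaxLength(15)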
- final def setOutputCol(value: String): Tokenizer.this.type
Overrides annotation column name when transforming
- Definition Classes
- HasOutputAnnotationCol
- def setPrefixPattern(value: String): Tokenizer.this.type
Regex to identify subtokens that come at the beginning of the token. The regex has to start with \\A and must contain groups (). Each group will become a separate token within the prefix. Defaults to non-letter characters, e.g. quotes or parentheses.
- def setSplitChars(v: Array[String]): Tokenizer.this.type
List of one-character strings to split tokens inside, such as hyphens. Ignored if using infix, prefix or suffix patterns.
- def setSplitPattern(value: String): Tokenizer.this.type
Regex pattern to separate from the inside of tokens. Takes priority over splitChars.
- def setSuffixPattern(value: String): Tokenizer.this.type
Regex to identify subtokens that are at the end of the token. The regex has to end with \\z and must contain groups (). Each group will become a separate token within the suffix. Defaults to non-letter characters, e.g. quotes or parentheses.
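For example, a minimal sketch combining setPrefixPattern and setSuffixPattern (the regexes are illustrative, not the library defaults; note the required \\A and \\z anchors and the groups):

// Split leading and trailing non-letter, non-space characters into their own tokens.
val affixTokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")
  .setPrefixPattern("\\A([^\\s\\p{L}]+)")
  .setSuffixPattern("([^\\s\\p{L}]+)\\z")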
- def setTargetPattern(value: String): Tokenizer.this.type
Set a basic regex rule to identify token candidates in text.
- val splitChars: StringArrayParam
Character list used to separate from the inside of tokens
- val splitPattern: Param[String]
Regex pattern to separate from the inside of tokens. Takes priority over splitChars.
This pattern will be applied to the tokens that were previously extracted with the target pattern.
Example:

import org.apache.spark.ml.Pipeline
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.DocumentAssembler

val textDf = sqlContext.sparkContext.parallelize(Array("Tokens in this-text will#be#split on hashtags-and#dashes")).toDF("text")
val documentAssembler = new DocumentAssembler().setInputCol("text").setOutputCol("sentences")
val tokenizer = new Tokenizer()
  .setInputCols("sentences")
  .setOutputCol("tokens")
  .setSplitPattern("-|#")
new Pipeline().setStages(Array(documentAssembler, tokenizer)).fit(textDf).transform(textDf).select("tokens.result").show(false)

This will yield:
Tokens, in, this, text, will, be, split, on, hashtags, and, dashes
- val suffixPattern: Param[String]
Regex with groups that ends with \\z to match the target suffix. Overrides the contextChars Param.
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- val targetPattern: Param[String]
Pattern to grab from text as token candidates (Default: "\\S+", meaning anything that is not a space is matched and considered a token candidate; this causes text to be split on whitespace to yield token candidates).
This rule will be added to the BREAK_PATTERN variable, which is used to yield token candidates.
Example:

import org.apache.spark.ml.Pipeline
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.DocumentAssembler

val textDf = sqlContext.sparkContext.parallelize(Array("I only consider lowercase characters and NOT UPPERCASED and only the numbers 0,1, to 7 as tokens but not 8 or 9")).toDF("text")
val documentAssembler = new DocumentAssembler().setInputCol("text").setOutputCol("sentences")
val tokenizer = new Tokenizer()
  .setInputCols("sentences")
  .setOutputCol("tokens")
  .setTargetPattern("a-z-0-7")
new Pipeline().setStages(Array(documentAssembler, tokenizer)).fit(textDf).transform(textDf).select("tokens.result").show(false)

This will yield:
only, consider, lowercase, characters, and, and, only, the, numbers, 0, 1, to, 7, as, tokens, but, not, or
- def toString(): String
- Definition Classes
- Identifiable → AnyRef → Any
- def train(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): TokenizerModel
Clears out the rules and constructs a new rule for every combination of rules provided. The strategy is to catch one token per regex group. Users may add their own groups if they need targets to be tokenized separately from the rest.
- Definition Classes
- Tokenizer → AnnotatorApproach
- final def transformSchema(schema: StructType): StructType
Requirement for pipeline transformation validation. It is called on fit().
- Definition Classes
- AnnotatorApproach → PipelineStage
- def transformSchema(schema: StructType, logging: Boolean): StructType
- Attributes
- protected
- Definition Classes
- PipelineStage
- Annotations
- @DeveloperApi()
- val uid: String
- Definition Classes
- Tokenizer → Identifiable
- def validate(schema: StructType): Boolean
Takes a Dataset and checks to see if all the required annotation types are present.
- schema
to be validated
- returns
True if all the required types are present, else false
- Attributes
- protected
- Definition Classes
- AnnotatorApproach
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def write: MLWriter
- Definition Classes
- DefaultParamsWritable → MLWritable
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)
Inherited from AnnotatorApproach[TokenizerModel]
Inherited from CanBeLazy
Inherited from DefaultParamsWritable
Inherited from MLWritable
Inherited from HasOutputAnnotatorType
Inherited from HasOutputAnnotationCol
Inherited from HasInputAnnotationCols
Inherited from Estimator[TokenizerModel]
Inherited from PipelineStage
Inherited from Logging
Inherited from Params
Inherited from Serializable
Inherited from Identifiable
Inherited from AnyRef
Inherited from Any
Member Groups
- param: A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
- Annotator types: Required input and expected output annotator types.