class YakeModel extends AnnotatorModel[YakeModel] with HasSimpleAnnotate[YakeModel] with YakeParams
Yake is an unsupervised, corpus-independent, domain- and language-independent, single-document keyword extraction algorithm. Extracting keywords from texts has become a challenge for individuals and organizations as information grows in complexity and size. The need to automate this task so that text can be processed in a timely and adequate manner has led to the emergence of automatic keyword extraction tools. Yake is a novel feature-based system for multilingual keyword extraction that supports texts of different sizes, domains, and languages. Unlike other approaches, Yake relies on neither dictionaries nor thesauri, nor is it trained on any corpus. Instead, it follows an unsupervised approach that builds upon features extracted from the text, making it applicable to documents written in different languages without the need for further knowledge. This can be beneficial for a large number of tasks and situations where access to training corpora is limited or restricted. The algorithm makes use of the position of sentences and tokens; therefore, to use the annotator, the text should first be sent through a Sentence Boundary Detector and then a Tokenizer (see the example pipeline after the parameter list below). You can tweak the following parameters to get the best result from the annotator.
- setMinNGrams(int): Select the minimum length of an extracted keyword
- setMaxNGrams(int): Select the maximum length of an extracted keyword
- setNKeywords(int): Extract the top N keywords
- setStopWords(list): Set the list of stop words
- setThreshold(float): Each keyword is assigned a score greater than 0 (the lower the score, the better the keyword); this sets an upper bound for the keyword score
- setWindowSize(int): Yake constructs a co-occurrence matrix; this sets the window size for its construction, e.g. windowSize=2 looks at two words to both the left and right of a candidate word
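For example, a typical pipeline could look like the following sketch (column names and parameter values here are illustrative assumptions, not prescribed defaults):

import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{SentenceDetector, Tokenizer}
import com.johnsnowlabs.nlp.annotators.keyword.yake.YakeModel
import org.apache.spark.ml.Pipeline

// YAKE relies on sentence and token positions, so detect sentences before tokenizing.
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDetector = new SentenceDetector()
  .setInputCols("document")
  .setOutputCol("sentence")

val tokenizer = new Tokenizer()
  .setInputCols("sentence")
  .setOutputCol("token")

val keywords = new YakeModel()
  .setInputCols("token")
  .setOutputCol("keywords")
  .setMinNGrams(1)
  .setMaxNGrams(3)
  .setNKeywords(10)
  .setThreshold(0.6f)   // illustrative upper bound on the keyword score
  .setWindowSize(3)     // illustrative co-occurrence window

val pipeline = new Pipeline().setStages(
  Array(documentAssembler, sentenceDetector, tokenizer, keywords))

// `spark` is an active SparkSession.
import spark.implicits._
val data = Seq("Extracting keywords from texts has become a challenge as information grows in size.").toDF("text")

val result = pipeline.fit(data).transform(data)
result.selectExpr("explode(keywords.result)").show(false)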
See Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C. and Jatowt, A. (2020). YAKE! Keyword Extraction from Single Documents using Multiple Local Features. Information Sciences, Elsevier, Vol. 509, pp. 257-289.
For further reference on how to use this API, see https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/keyword/yake/YakeTestSpec.scala
Sources:
https://www.sciencedirect.com/science/article/pii/S0020025519308588
Paper abstract: As the amount of generated information grows, reading and summarizing texts of large collections turns into a challenging task. Many documents do not come with descriptive terms, thus requiring humans to generate keywords on-the-fly. The need to automate this kind of task demands the development of keyword extraction systems with the ability to automatically identify keywords within the text. One approach is to resort to machine-learning algorithms. These, however, depend on large annotated text corpora, which are not always available. An alternative solution is to consider an unsupervised approach. In this article, we describe YAKE!, a light-weight unsupervised automatic keyword extraction method which rests on statistical text features extracted from single documents to select the most relevant keywords of a text. Our system does not need to be trained on a particular set of documents, nor does it depend on dictionaries, external corpora, text size, language, or domain. To demonstrate the merits and significance of YAKE!, we compare it against ten state-of-the-art unsupervised approaches and one supervised method. Experimental results carried out on top of twenty datasets show that YAKE! significantly outperforms other unsupervised methods on texts of different sizes, languages, and domains.
Inheritance
- YakeModel
- YakeParams
- HasSimpleAnnotate
- AnnotatorModel
- CanBeLazy
- RawAnnotator
- HasOutputAnnotationCol
- HasInputAnnotationCols
- HasOutputAnnotatorType
- ParamsAndFeaturesWritable
- HasFeatures
- DefaultParamsWritable
- MLWritable
- Model
- Transformer
- PipelineStage
- Logging
- Params
- Serializable
- Serializable
- Identifiable
- AnyRef
- Any
Instance Constructors
Type Members
-
type
AnnotationContent = Seq[Row]
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
type
AnnotatorType = String
- Definition Classes
- HasOutputAnnotatorType
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
$[T](param: Param[T]): T
- Attributes
- protected
- Definition Classes
- Params
-
def
$$[T](feature: StructFeature[T]): T
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
$$[K, V](feature: MapFeature[K, V]): Map[K, V]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
$$[T](feature: SetFeature[T]): Set[T]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
$$[T](feature: ArrayFeature[T]): Array[T]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
_transform(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): DataFrame
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
def
afterAnnotate(dataset: DataFrame): DataFrame
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
def
annotate(annotations: Seq[Annotation]): Seq[Annotation]
Takes a document and annotations and produces new annotations of this annotator's annotation type.
- annotations
Annotations that correspond to inputAnnotationCols generated by previous annotators if any
- returns
any number of annotations processed for every input annotation. Not necessary one to one relationship
- Definition Classes
- YakeModel → HasSimpleAnnotate
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
- def assignTags(resultFlattenIndexed: Array[(String, Int)]): Array[(String, Int, Int, String)]
-
def
beforeAnnotate(dataset: Dataset[_]): Dataset[_]
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
def
calculateTokenScores(basicStats: Array[(String, Int)], coOccurLeftAggregate: Map[String, Map[String, Int]], coOccurRightAggregate: Map[String, Map[String, Int]]): Iterable[Token]
Calculate token scores given statistics, per the YAKE paper (an illustrative sketch follows the parameter list below):
T_Position = ln(ln(3 + Median(Sentence Index)))
T_Case = max(TF(U(t)), TF(A(t))) / ln(TF(t))
TF_Norm = TF(t) / (MeanTF + 1 * σ)
T_Rel = 1 + (DL + DR) * TF(t) / MaxTF
T_Sentence = SF(t) / #Sentences
T_Score = (T_Position * T_Rel) / (T_Case + (TF_Norm + T_Sentence) / T_Rel)
- basicStats
Basic stats
- coOccurLeftAggregate
Left co-occurrence
- coOccurRightAggregate
Right co-occurrence
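For illustration only, a minimal sketch of how these components might be combined into a final token score, per the T_Score formula above (the case class and values are assumptions, not the annotator's internal types):

// Illustrative combination of the YAKE token-score components listed above.
case class TokenComponents(tPosition: Double, tCase: Double, tfNorm: Double, tRel: Double, tSentence: Double)

def tokenScore(c: TokenComponents): Double =
  (c.tPosition * c.tRel) / (c.tCase + (c.tfNorm + c.tSentence) / c.tRel)

// Lower scores mean more relevant tokens.
println(tokenScore(TokenComponents(tPosition = 1.2, tCase = 0.5, tfNorm = 0.8, tRel = 1.1, tSentence = 0.4)))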
-
final
def
checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
-
final
def
clear(param: Param[_]): YakeModel.this.type
- Definition Classes
- Params
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
def
copy(extra: ParamMap): YakeModel
Requirement for annotator copies.
- Definition Classes
- RawAnnotator → Model → Transformer → PipelineStage → Params
-
def
copyValues[T <: Params](to: T, extra: ParamMap): T
- Attributes
- protected
- Definition Classes
- Params
-
final
def
defaultCopy[T <: Params](extra: ParamMap): T
- Attributes
- protected
- Definition Classes
- Params
-
def
dfAnnotate: UserDefinedFunction
Wraps annotate to happen inside SparkSQL user defined functions in order to act with org.apache.spark.sql.Column
- returns
udf function to be applied to inputCols using this annotator's annotate function as part of ML transformation
- Definition Classes
- HasSimpleAnnotate
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
explainParam(param: Param[_]): String
- Definition Classes
- Params
-
def
explainParams(): String
- Definition Classes
- Params
-
def
extraValidate(structType: StructType): Boolean
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
def
extraValidateMsg: String
Override for additional custom schema checks
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
final
def
extractParamMap(): ParamMap
- Definition Classes
- Params
-
final
def
extractParamMap(extra: ParamMap): ParamMap
- Definition Classes
- Params
-
val
features: ArrayBuffer[Feature[_, _, _]]
- Definition Classes
- HasFeatures
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
def
get[T](feature: StructFeature[T]): Option[T]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
get[T](feature: SetFeature[T]): Option[Set[T]]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
get[T](feature: ArrayFeature[T]): Option[Array[T]]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
get[T](param: Param[T]): Option[T]
- Definition Classes
- Params
-
def
getBasicStats(result: Array[Annotation]): Array[(String, Int)]
Calculates basic statistics, such as the total number of sentences in the document, and assigns a tag to each token.
- result
Document to annotate as array of tokens with sentence metadata
- returns
Dataframe with columns SentenceID, token, totalSentences, tag
-
def
getCandidateKeywords(sentences: Array[(String, Int, Int, String)]): Map[String, Int]
Generate candidate keywords (an illustrative sketch follows the parameter list below).
- sentences
sentences as a list
- returns
candidate keywords
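As a rough sketch of the idea (not this method's exact implementation), candidates can be thought of as contiguous n-grams within a sentence, bounded by minNGrams and maxNGrams and not starting or ending with a stop word:

// Illustrative candidate generation over a single tokenized sentence.
def candidateNGrams(sentenceTokens: Seq[String], minN: Int, maxN: Int, stopWords: Set[String]): Seq[String] =
  (minN to maxN).flatMap { n =>
    sentenceTokens.sliding(n)
      .filter(_.size == n)
      .filter(gram => !stopWords.contains(gram.head.toLowerCase) && !stopWords.contains(gram.last.toLowerCase))
      .map(_.mkString(" "))
  }

candidateNGrams(Seq("keyword", "extraction", "from", "single", "documents"), 1, 3, Set("from"))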
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
getCoOccurrence(sentences: ListBuffer[ListBuffer[String]], left: Boolean): Map[String, Map[String, Int]]
Calculate co-occurrence from left to right given a window size (an illustrative sketch follows the parameter list below).
- sentences
Sentences as lists of tokens
- returns
Co-occurrence for token x from left to right as a Map
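A simple sketch of windowed co-occurrence counting (illustrative; the nested-map layout mirrors the return type above, and windowSize is assumed to come from setWindowSize):

import scala.collection.mutable

// For each token, count how often other tokens appear within `windowSize` positions to its left.
def leftCoOccurrence(sentences: Seq[Seq[String]], windowSize: Int): Map[String, Map[String, Int]] = {
  val counts = mutable.Map.empty[String, mutable.Map[String, Int]]
  for (sentence <- sentences; i <- sentence.indices) {
    val inner = counts.getOrElseUpdate(sentence(i), mutable.Map.empty[String, Int])
    for (neighbor <- sentence.slice(math.max(0, i - windowSize), i))
      inner(neighbor) = inner.getOrElse(neighbor, 0) + 1
  }
  counts.map { case (token, m) => token -> m.toMap }.toMap
}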
-
final
def
getDefault[T](param: Param[T]): Option[T]
- Definition Classes
- Params
-
def
getInputCols: Array[String]
- returns
input annotations columns currently used
- Definition Classes
- HasInputAnnotationCols
-
def
getKeywords(candidate: Map[String, Int], tokens: Iterable[Token]): ListMap[String, Double]
Extract keywords (an illustrative sketch of the paper's candidate scoring follows the parameter list below).
- candidate
candidate keywords
- tokens
tokens with scores
- returns
keywords
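The YAKE paper scores a candidate keyword from its tokens' scores as S(kw) = prod(S(t)) / (TF(kw) * (1 + sum(S(t)))); a hedged sketch of that formula (not necessarily this method's exact internals):

// Candidate keyword score per the YAKE paper: lower is better.
def keywordScore(tokenScores: Seq[Double], keywordFrequency: Int): Double =
  tokenScores.product / (keywordFrequency * (1 + tokenScores.sum))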
-
def
getLazyAnnotator: Boolean
- Definition Classes
- CanBeLazy
-
final
def
getOrDefault[T](param: Param[T]): T
- Definition Classes
- Params
-
final
def
getOutputCol: String
Gets the annotation column name that will be generated.
- Definition Classes
- HasOutputAnnotationCol
-
def
getParam(paramName: String): Param[Any]
- Definition Classes
- Params
-
def
getSentences(tokenizedArray: Array[Annotation]): ListBuffer[ListBuffer[String]]
Separate sentences given tokens with sentence metadata
- tokenizedArray
Tokens with sentence metadata
- returns
separated sentences
-
def
getStopWords: Array[String]
- Definition Classes
- YakeParams
-
final
def
hasDefault[T](param: Param[T]): Boolean
- Definition Classes
- Params
-
def
hasParam(paramName: String): Boolean
- Definition Classes
- Params
-
def
hasParent: Boolean
- Definition Classes
- Model
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
val
inputAnnotatorTypes: Array[AnnotatorType]
Annotator reference id. Used to identify elements in metadata or to refer to this annotator type.
- Definition Classes
- YakeModel → HasInputAnnotationCols
-
final
val
inputCols: StringArrayParam
Columns that contain annotations necessary to run this annotator. AnnotatorType is used as both input and output columns if not specified.
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
-
final
def
isDefined(param: Param[_]): Boolean
- Definition Classes
- Params
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
final
def
isSet(param: Param[_]): Boolean
- Definition Classes
- Params
-
def
isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
val
lazyAnnotator: BooleanParam
- Definition Classes
- CanBeLazy
-
def
log: Logger
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logName: String
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
val
maxNGrams: IntParam
- Definition Classes
- YakeParams
-
val
minNGrams: IntParam
- Definition Classes
- YakeParams
-
def
msgHelper(schema: StructType): String
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
-
val
nKeywords: IntParam
- Definition Classes
- YakeParams
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def
onWrite(path: String, spark: SparkSession): Unit
- Attributes
- protected
- Definition Classes
- ParamsAndFeaturesWritable
-
val
outputAnnotatorType: AnnotatorType
- Definition Classes
- YakeModel → HasOutputAnnotatorType
-
final
val
outputCol: Param[String]
- Attributes
- protected
- Definition Classes
- HasOutputAnnotationCol
-
lazy val
params: Array[Param[_]]
- Definition Classes
- Params
-
var
parent: Estimator[YakeModel]
- Definition Classes
- Model
-
def
processSentences(annotations: Seq[Annotation]): Seq[Annotation]
Execute the YAKE algorithm for each sentence
- annotations
token array to annotate
- returns
annotated token array
-
def
save(path: String): Unit
- Definition Classes
- MLWritable
- Annotations
- @Since( "1.6.0" ) @throws( ... )
-
def
set[T](feature: StructFeature[T], value: T): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
set[K, V](feature: MapFeature[K, V], value: Map[K, V]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
set[T](feature: SetFeature[T], value: Set[T]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
set[T](feature: ArrayFeature[T], value: Array[T]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
set(paramPair: ParamPair[_]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
set(param: String, value: Any): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
set[T](param: Param[T], value: T): YakeModel.this.type
- Definition Classes
- Params
-
def
setDefault[T](feature: StructFeature[T], value: () ⇒ T): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
setDefault(paramPairs: ParamPair[_]*): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
setDefault[T](param: Param[T], value: T): YakeModel.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
setInputCols(value: String*): YakeModel.this.type
- Definition Classes
- HasInputAnnotationCols
-
final
def
setInputCols(value: Array[String]): YakeModel.this.type
Overrides required annotators column if different than default
- Definition Classes
- HasInputAnnotationCols
-
def
setLazyAnnotator(value: Boolean): YakeModel.this.type
- Definition Classes
- CanBeLazy
-
def
setMaxNGrams(value: Int): YakeModel.this.type
- Definition Classes
- YakeParams
-
def
setMinNGrams(value: Int): YakeModel.this.type
- Definition Classes
- YakeParams
-
def
setNKeywords(value: Int): YakeModel.this.type
- Definition Classes
- YakeParams
-
final
def
setOutputCol(value: String): YakeModel.this.type
Overrides annotation column name when transforming
- Definition Classes
- HasOutputAnnotationCol
-
def
setParent(parent: Estimator[YakeModel]): YakeModel
- Definition Classes
- Model
-
def
setStopWords(value: Array[String]): YakeModel.this.type
- Definition Classes
- YakeParams
-
def
setThreshold(value: Float): YakeModel.this.type
- Definition Classes
- YakeParams
-
def
setWindowSize(value: Int): YakeModel.this.type
- Definition Classes
- YakeParams
-
val
stopWords: StringArrayParam
- Definition Classes
- YakeParams
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
val
threshold: FloatParam
- Definition Classes
- YakeParams
-
def
toString(): String
- Definition Classes
- Identifiable → AnyRef → Any
-
final
def
transform(dataset: Dataset[_]): DataFrame
Given requirements are met, this applies the ML transformation within a Pipeline or standalone. Output annotations are generated as a new column; previous annotations remain available separately. Metadata is built at schema level to record the annotations' structural information outside their content. See the usage sketch below.
- dataset
Dataset[Row]
- Definition Classes
- AnnotatorModel → Transformer
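For instance, assuming `tokenizedDf` already contains the annotation columns required by setInputCols and `keywords` is a configured YakeModel (both names are illustrative):

val withKeywords = keywords.transform(tokenizedDf)
withKeywords.select("keywords.result").show(false)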
-
def
transform(dataset: Dataset[_], paramMap: ParamMap): DataFrame
- Definition Classes
- Transformer
- Annotations
- @Since( "2.0.0" )
-
def
transform(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): DataFrame
- Definition Classes
- Transformer
- Annotations
- @Since( "2.0.0" ) @varargs()
-
final
def
transformSchema(schema: StructType): StructType
Requirement for pipeline transformation validation. It is called on fit().
- Definition Classes
- RawAnnotator → PipelineStage
-
def
transformSchema(schema: StructType, logging: Boolean): StructType
- Attributes
- protected
- Definition Classes
- PipelineStage
- Annotations
- @DeveloperApi()
-
val
uid: String
- Definition Classes
- YakeModel → Identifiable
-
def
validate(schema: StructType): Boolean
takes a Dataset and checks to see if all the required annotation types are present.
- schema
to be validated
- returns
True if all the required types are present, else false
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
val
windowSize: IntParam
- Definition Classes
- YakeParams
-
def
wrapColumnMetadata(col: Column): Column
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
def
write: MLWriter
- Definition Classes
- ParamsAndFeaturesWritable → DefaultParamsWritable → MLWritable