com.johnsnowlabs.nlp.embeddings
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.
Takes a document and annotations and produces new annotations of this annotator's annotation type.
Annotations that correspond to inputAnnotationCols generated by previous annotators, if any
Any number of annotations processed for every input annotation. Not necessarily a one-to-one relationship.
Size of every batch (Default depends on model).
Whether to ignore case in index lookups (Default depends on model)
ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString().
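As a sketch, the serialized bytes can be passed to the annotator as an Array[Int]; the values below are placeholders, not a real serialized config, and would in practice come from e.g. Python's tf.compat.v1.ConfigProto().SerializeToString():
{{{
import com.johnsnowlabs.nlp.embeddings.DistilBertEmbeddings

// Placeholder bytes: in practice, serialize a TensorFlow ConfigProto
// elsewhere (e.g. Python: tf.compat.v1.ConfigProto().SerializeToString())
// and pass the resulting bytes here.
val serializedConfig: Array[Int] = Array(50, 2, 32, 1)

val embeddings = DistilBertEmbeddings.pretrained()
  .setInputCols("document", "token")
  .setOutputCol("embeddings")
  .setConfigProtoBytes(serializedConfig)
}}}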
Requirement for annotator copies.
Number of embedding dimensions (Default depends on model)
Override for additional custom schema checks
Size of every batch.
Input annotation columns currently in use
Gets the annotation column name that will be generated.
Input Annotator Types: DOCUMENT, TOKEN
Columns that contain annotations necessary to run this annotator. AnnotatorType is used for both input and output columns if not specified.
Max sentence length to process (Default: 128)
Output Annotator Types: WORD_EMBEDDINGS
Size of every batch.
Whether to lowercase tokens or not
Set the embeddings dimension for the DistilBERT model. This can only be set when the model is saved for the first time; afterwards the dimension is not changeable, as it comes from the DistilBERT config file.
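A configuration sketch chaining the setters described above; the values are illustrative, not recommendations:
{{{
import com.johnsnowlabs.nlp.embeddings.DistilBertEmbeddings

val embeddings = DistilBertEmbeddings.pretrained()
  .setInputCols("document", "token")
  .setOutputCol("embeddings")
  .setBatchSize(8)           // size of every batch
  .setCaseSensitive(true)    // don't lowercase tokens before index lookups
  .setMaxSentenceLength(128) // max sentence length to process
  .setDimension(768)         // only effective before the model is first saved
}}}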
Overrides required annotators column if different than default
Overrides annotation column name when transforming
It contains TF model signatures for the loaded saved model
Unique identifier for storage (Default: this.uid)
Given requirements are met, this applies the ML transformation within a Pipeline or stand-alone. The output annotation will be generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record annotation structural information outside its content.
Dataset[Row]
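A minimal transform sketch, assuming documentAssembler, tokenizer and embeddings stages as defined in the Example section below:
{{{
import org.apache.spark.ml.Pipeline

val pipeline = new Pipeline()
  .setStages(Array(documentAssembler, tokenizer, embeddings))

// transform() appends the output annotations as a new "embeddings" column;
// the upstream "document" and "token" columns remain available, and the
// annotation structure is recorded in the schema metadata.
val result = pipeline.fit(data).transform(data)
}}}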
Requirement for pipeline transformation validation. It is called on fit().
Takes a Dataset and checks to see if all the required annotation types are present.
to be validated
True if all the required types are present, else false
Vocabulary used to encode the words to IDs with WordPieceEncoder
A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
Pretrained models can be loaded with pretrained of the companion object. The default model is "distilbert_base_cased", if no name is provided. For available pretrained models please see the Models Hub.
For extended examples of usage, see the Spark NLP Workshop and the DistilBertEmbeddingsTestSpec. Models from the HuggingFace 🤗 Transformers library are also compatible with Spark NLP 🚀. The Spark NLP Workshop example shows how to import them: https://github.com/JohnSnowLabs/spark-nlp/discussions/5669.
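For instance, a loading sketch using the companion object (the model is downloaded from the Models Hub on first use):
{{{
import com.johnsnowlabs.nlp.embeddings.DistilBertEmbeddings

// Loads "distilbert_base_cased" when no name is given
val embeddings = DistilBertEmbeddings.pretrained()
  .setInputCols("document", "token")
  .setOutputCol("embeddings")

// Or request a specific model and language explicitly
val embeddingsCased = DistilBertEmbeddings.pretrained("distilbert_base_cased", "en")
  .setInputCols("document", "token")
  .setOutputCol("embeddings")
}}}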
The DistilBERT model was proposed in the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter.
Paper Abstract:
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
Tips:
- DistilBERT doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]).
- DistilBERT doesn't have options to select the input positions (position_ids input). This could be added if necessary though, just let us know if you need this option.
Example
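A minimal end-to-end pipeline sketch; the column names and the EmbeddingsFinisher stage follow common Spark NLP usage:
{{{
import spark.implicits._
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.embeddings.DistilBertEmbeddings
import com.johnsnowlabs.nlp.EmbeddingsFinisher
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = DistilBertEmbeddings.pretrained()
  .setInputCols("document", "token")
  .setOutputCol("embeddings")

// Convert the WORD_EMBEDDINGS annotations into plain vectors
val embeddingsFinisher = new EmbeddingsFinisher()
  .setInputCols("embeddings")
  .setOutputCols("finished_embeddings")
  .setOutputAsVector(true)

val pipeline = new Pipeline().setStages(Array(
  documentAssembler,
  tokenizer,
  embeddings,
  embeddingsFinisher
))

val data = Seq("This is a sentence.").toDF("text")
val result = pipeline.fit(data).transform(data)

result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
}}}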
See the Annotators Main Page for a list of transformer-based embeddings