Class

com.johnsnowlabs.ml.tensorflow

TensorflowBertClassification

class TensorflowBertClassification extends Serializable with TensorflowForClassification

Linear Supertypes
TensorflowForClassification, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new TensorflowBertClassification(tensorflowWrapper: TensorflowWrapper, sentenceStartTokenId: Int, sentenceEndTokenId: Int, configProtoBytes: Option[Array[Byte]] = None, tags: Map[String, Int], signatures: Option[Map[String, String]] = None, vocabulary: Map[String, Int])

    tensorflowWrapper

    BERT model wrapped in a TensorflowWrapper

    sentenceStartTokenId

    ID of the sentence-start token

    sentenceEndTokenId

    ID of the sentence-end token

    configProtoBytes

    Configuration for the TensorFlow session

    tags

    Labels the model was trained with, in order

    signatures

    TensorFlow v2 signatures used in Spark NLP
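
    For orientation, a minimal sketch of what the tags and vocabulary maps passed to this constructor might look like. All names and ids below are illustrative assumptions, not values taken from a real model; in practice the vocabulary comes from the saved BERT model's vocab file and the special-token ids are looked up from it.

    // Hypothetical label-to-id map (tags) for a token classifier, in training order
    val tags: Map[String, Int] =
      Map("O" -> 0, "B-PER" -> 1, "I-PER" -> 2, "B-LOC" -> 3, "I-LOC" -> 4)

    // Hypothetical slice of a WordPiece vocabulary (real BERT vocabularies hold ~30k entries)
    val vocabulary: Map[String, Int] =
      Map("[PAD]" -> 0, "[CLS]" -> 101, "[SEP]" -> 102, "john" -> 2198, "##son" -> 3385)

    // Special-token ids passed as sentenceStartTokenId / sentenceEndTokenId
    val sentenceStartTokenId: Int = vocabulary("[CLS]")
    val sentenceEndTokenId: Int   = vocabulary("[SEP]")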

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. val _tfBertSignatures: Map[String, String]

  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def calculateSoftmax(scores: Array[Float]): Array[Float]

    Definition Classes
    TensorflowForClassification
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def encode(sentences: Seq[(WordpieceTokenizedSentence, Int)], maxSequenceLength: Int): Seq[Array[Int]]

    Encodes the input sequences into index IDs, adding padding where necessary (see the sketch after this member list).

    Definition Classes
    TensorflowForClassification
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. def findIndexedToken(tokenizedSentences: Seq[TokenizedSentence], sentence: (WordpieceTokenizedSentence, Int), tokenPiece: TokenPiece): Option[IndexedToken]

  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  15. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  16. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  17. final def notify(): Unit

    Definition Classes
    AnyRef
  18. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  19. def predict(tokenizedSentences: Seq[TokenizedSentence], batchSize: Int, maxSentenceLength: Int, caseSensitive: Boolean, tags: Map[String, Int]): Seq[Annotation]

    Definition Classes
    TensorflowForClassification
  20. def predictSequence(tokenizedSentences: Seq[TokenizedSentence], sentences: Seq[Sentence], batchSize: Int, maxSentenceLength: Int, caseSensitive: Boolean, coalesceSentences: Boolean = false, tags: Map[String, Int]): Seq[Annotation]

  21. val sentenceEndTokenId: Int

    ID of the sentence-end token

    Definition Classes
    TensorflowBertClassification → TensorflowForClassification
  22. val sentencePadTokenId: Int

    Attributes
    protected
    Definition Classes
    TensorflowBertClassification → TensorflowForClassification
  23. val sentenceStartTokenId: Int

    ID of the sentence-start token

    Definition Classes
    TensorflowBertClassification → TensorflowForClassification
  24. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  25. def tag(batch: Seq[Array[Int]]): Seq[Array[Array[Float]]]

  26. def tagSequence(batch: Seq[Array[Int]]): Array[Array[Float]]

  27. val tensorflowWrapper: TensorflowWrapper

    BERT model wrapped in a TensorflowWrapper

  28. def toString(): String

    Definition Classes
    AnyRef → Any
  29. def tokenizeWithAlignment(sentences: Seq[TokenizedSentence], maxSeqLength: Int, caseSensitive: Boolean): Seq[WordpieceTokenizedSentence]

  30. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. def wordAndSpanLevelAlignmentWithTokenizer(tokenLogits: Array[Array[Float]], tokenizedSentences: Seq[TokenizedSentence], sentence: (WordpieceTokenizedSentence, Int), tags: Map[String, Int]): Seq[Annotation]

    Word-level and span-level alignment with the tokenizer, as described at https://github.com/google-research/bert#tokenization (see the sketch after this member list).

    ### Input
    orig_tokens = ["John", "Johanson", "'s", "house"]
    labels = ["NNP", "NNP", "POS", "NN"]

    # bert_tokens == ["[CLS]", "john", "johan", "##son", "'", "s", "house", "[SEP]"]
    # orig_to_tok_map == [1, 2, 4, 6]

    Definition Classes
    TensorflowForClassification
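
    A few of the operations listed above are easy to illustrate in isolation. First, calculateSoftmax (entry 6) turns a row of raw logits into a probability distribution; a minimal, self-contained sketch of that computation (not the library's exact implementation):

    // Softmax over one row of logits: exponentiate, then normalise to sum to 1
    def softmax(scores: Array[Float]): Array[Float] = {
      val exps = scores.map(s => math.exp(s.toDouble))
      val sum  = exps.sum
      exps.map(e => (e / sum).toFloat)
    }

    softmax(Array(1.0f, 2.0f, 3.0f)) // ≈ Array(0.09f, 0.24f, 0.67f)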
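
    Next, encode (entry 8) pads each wordpiece-tokenized sentence to maxSequenceLength and wraps it in the sentence-start and sentence-end tokens. A simplified sketch of the idea, reusing the illustrative token ids from the constructor example above (the real method works on WordpieceTokenizedSentence objects rather than raw id arrays):

    // Simplified encoding: [CLS] + token ids (truncated to fit) + [SEP], padded with [PAD]
    def encodeIds(tokenIds: Array[Int], maxSequenceLength: Int,
                  startId: Int = 101, endId: Int = 102, padId: Int = 0): Array[Int] = {
      val body    = tokenIds.take(maxSequenceLength - 2)
      val encoded = Array(startId) ++ body ++ Array(endId)
      encoded ++ Array.fill(maxSequenceLength - encoded.length)(padId)
    }

    encodeIds(Array(2198, 3385), maxSequenceLength = 6) // Array(101, 2198, 3385, 102, 0, 0)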
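
    Finally, wordAndSpanLevelAlignmentWithTokenizer (entry 33) maps wordpiece-level predictions back to the original tokens. A sketch of the underlying idea, using the orig_to_tok_map example from that member's documentation (the logit values are made up for illustration; the real method produces Annotation objects):

    // Original tokens, their wordpieces, and the index of each token's first wordpiece
    val origTokens   = Array("John", "Johanson", "'s", "house")
    val bertTokens   = Array("[CLS]", "john", "johan", "##son", "'", "s", "house", "[SEP]")
    val origToTokMap = Array(1, 2, 4, 6)

    // One row of label scores per wordpiece position (hypothetical values)
    val tokenLogits: Array[Array[Float]] =
      Array.fill(bertTokens.length)(Array(0.1f, 0.8f, 0.1f))

    // Keep only the rows at each original token's first wordpiece: one prediction per word
    val wordLevelLogits: Array[Array[Float]] = origToTokMap.map(i => tokenLogits(i))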

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
