package lamp.nn.bert

Type members

Classlikes

case class BertEncoder(tokenEmbedding: Embedding, segmentEmbedding: Embedding, positionalEmbedding: Constant, blocks: Seq[TransformerEncoderBlock]) extends GenericModule[(Variable, Variable), Variable]

BertEncoder module

Input is (tokens, segments) where tokens and segments are both (batch, num tokens) long tensors.

Output is (batch, num tokens, out dimension)
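
For orientation, a minimal shape-level sketch of applying an encoder. This is a hedged illustration, not library documentation: it assumes forward accepts the (tokens, segments) tuple under an implicit lamp.Scope, and it elides how the encoder and tensor values are obtained.

  import lamp.Scope
  import lamp.autograd.Variable
  import lamp.nn.bert.BertEncoder

  // Sketch: apply a BertEncoder instance to a batch. How `encoder`,
  // `tokens` and `segments` are obtained is elided; shapes follow the text above.
  def encode(encoder: BertEncoder, tokens: Variable, segments: Variable)(implicit
      scope: Scope
  ): Variable =
    // tokens, segments: (batch, num tokens) long tensors
    // result: (batch, num tokens, out dimension)
    encoder.forward((tokens, segments))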

Companion:
object
object BertEncoder
Companion:
class
case class BertLoss(pretrain: BertPretrainModule, mlmLoss: LossFunction, wholeSentenceLoss: LossFunction) extends GenericModule[BertLossInput, Variable]
Companion:
object
object BertLoss
Companion:
class
case class BertLossInput(input: BertPretrainInput, maskedLanguageModelTarget: STen, wholeSentenceTarget: STen)

Input to BertLoss module

  • input: feature data; see the documentation of BertPretrainInput
  • maskedLanguageModelTarget: long tensor of size (batch size, masked positions (variable)). Values are the true tokens masked out at the positions in input.positions.
  • wholeSentenceTarget: float tensor of size (batch size). Values are the ground-truth targets of the whole-sentence loss, which is a BCEWithLogitLoss; they are floats in [0, 1].
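
A hedged sketch of how these fields feed BertLoss. The module and tensor values are elided; forward under an implicit lamp.Scope is assumed, and the name lossStep is illustrative.

  import lamp.{STen, Scope}
  import lamp.autograd.Variable
  import lamp.nn.bert.{BertLoss, BertLossInput, BertPretrainInput}

  // Sketch: assemble a BertLossInput and compute the combined pretraining loss.
  def lossStep(
      loss: BertLoss,
      features: BertPretrainInput, // see BertPretrainInput below
      mlmTarget: STen,             // (batch size, masked positions) long
      sentenceTarget: STen         // (batch size) float in [0, 1]
  )(implicit scope: Scope): Variable =
    loss.forward(BertLossInput(features, mlmTarget, sentenceTarget))
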
Companion:
object
object BertLossInput
Companion:
class
case class BertPretrainInput(tokens: Constant, segments: Constant, positions: STen)

Input for BERT pretrain module

  • tokens: long tensor of size (batch, sequence length). Sequence length includes the cls and sep tokens. Values are tokens of the input vocabulary plus 4 additional control tokens: cls, sep, pad, mask. The first token must be cls.
  • segments: long tensor of size (batch, sequence length). Values are segment tokens.
  • positions: long tensor of size (batch, mask size (variable)). Values are indices in [0, sequence length) selecting masked sequence positions. They never select positions of cls, sep, or pad.
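
A hedged construction sketch. It assumes lamp.autograd.const lifts an STen into a Constant under an implicit Scope; the tensor values themselves are elided with ???, and makeInput is an illustrative name.

  import lamp.{STen, Scope}
  import lamp.autograd.const
  import lamp.nn.bert.BertPretrainInput

  // Sketch: wrap raw tensors into a BertPretrainInput.
  def makeInput(implicit scope: Scope): BertPretrainInput = {
    val tokenIds: STen   = ??? // (batch, sequence length) long; first token is cls
    val segmentIds: STen = ??? // (batch, sequence length) long
    val maskedPos: STen  = ??? // (batch, mask size) long; indices in [0, sequence length)
    BertPretrainInput(
      tokens = const(tokenIds),
      segments = const(segmentIds),
      positions = maskedPos
    )
  }
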
Companion:
object
object BertPretrainInput
Companion:
class
case class BertPretrainModule(encoder: BertEncoder, mlm: MaskedLanguageModelModule, wholeSentenceBinaryClassifier: MLP) extends GenericModule[BertPretrainInput, BertPretrainOutput]
Companion:
object
object BertPretrainModule
Companion:
class
case class BertPretrainOutput(encoded: Variable, languageModelScores: Variable, wholeSentenceBinaryClassifierScore: Variable)

Output of the BERT pretrain module

  • encoded: float tensor of size (batch, sequence length, embedding dimension); holds the per-token embeddings
  • languageModelScores: float tensor of size (batch, masked positions, vocabulary size); holds per-token log probability distributions (from logSoftMax) over the vocabulary at the masked positions
  • wholeSentenceBinaryClassifierScore: float tensor of size (batch); holds the output score of the whole-sentence prediction task, suitable for BCEWithLogitLoss
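
A hedged sketch of one forward pass and the resulting shapes. The module and input values are elided; an implicit lamp.Scope on forward is assumed.

  import lamp.Scope
  import lamp.nn.bert.{BertPretrainInput, BertPretrainModule, BertPretrainOutput}

  // Sketch: one forward pass through the pretraining heads.
  def pretrain(module: BertPretrainModule, input: BertPretrainInput)(implicit
      scope: Scope
  ): BertPretrainOutput = {
    val out = module.forward(input)
    // out.encoded:                            (batch, sequence length, embedding dimension)
    // out.languageModelScores:                (batch, masked positions, vocabulary size)
    // out.wholeSentenceBinaryClassifierScore: (batch)
    out
  }
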
case class MaskedLanguageModelModule(mlp: MLP) extends GenericModule[(Variable, STen), Variable]

Masked Language Model module.

Input is (embedding, positions) where embedding is a float tensor of size (batch, num tokens, embedding dim) and positions is a long tensor of size (batch, max num tokens) indicating which positions to make predictions on.

Output is (batch, num positions, vocabulary size).

Companion:
object
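
A minimal sketch of applying the module to its (embedding, positions) input. Values are elided, an implicit lamp.Scope on forward is assumed, and scoreMasked is an illustrative name.

  import lamp.{STen, Scope}
  import lamp.autograd.Variable
  import lamp.nn.bert.MaskedLanguageModelModule

  // Sketch: score the masked positions of a batch of embeddings.
  def scoreMasked(
      mlm: MaskedLanguageModelModule,
      embedding: Variable, // (batch, num tokens, embedding dim)
      positions: STen      // (batch, max num tokens) long
  )(implicit scope: Scope): Variable =
    // result: (batch, num positions, vocabulary size)
    mlm.forward((embedding, positions))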