This class is used to calculate ELMo embeddings for sequence batches of TokenizedSentences. The embeddings are taken from the TF Hub module below, which exposes the following output layers:
https://tfhub.dev/google/elmo/3
* word_emb: the character-based word representations, with shape [batch_size, max_length, 512].
* lstm_outputs1: the first LSTM hidden state, with shape [batch_size, max_length, 1024].
* lstm_outputs2: the second LSTM hidden state, with shape [batch_size, max_length, 1024].
* elmo: the weighted sum of the 3 layers, where the weights are trainable. This tensor has shape [batch_size, max_length, 1024].
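As a rough illustration (separate from this class), the sketch below assumes TensorFlow 2.x with tensorflow_hub installed and shows how the four output layers listed above can be read directly from the TF Hub module; the exact signature call convention may differ between TensorFlow versions.

```python
# Minimal sketch, assuming TensorFlow 2.x and tensorflow_hub are available.
# It loads the ELMo module referenced above and reads its four output layers.
import tensorflow as tf
import tensorflow_hub as hub

# Load the TF1-format ELMo module through the TF2 loading API.
elmo = hub.load("https://tfhub.dev/google/elmo/3")

# The "default" signature takes untokenized sentences as a string tensor.
# (Calling the signature positionally is a common community pattern; some
# TF versions may require keyword arguments instead.)
sentences = tf.constant(["the cat sat on the mat"])
outputs = elmo.signatures["default"](sentences)

word_emb = outputs["word_emb"]            # [batch_size, max_length, 512]
lstm_outputs1 = outputs["lstm_outputs1"]  # [batch_size, max_length, 1024]
lstm_outputs2 = outputs["lstm_outputs2"]  # [batch_size, max_length, 1024]
elmo_weighted = outputs["elmo"]           # [batch_size, max_length, 1024]
```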