lamp.nn

package lamp.nn

Provides building blocks for neural networks.

Notable types:

Optimizers:

  • nn.AdamW, nn.RAdam, nn.SGDW, nn.Shampoo, nn.Yogi (documented below)

Modules facilitating composing other modules:

  • nn.Sequential composes a homogeneous list of modules (analogous to List)
  • nn.sequence composes a heterogeneous list of modules (analogous to tuples)
  • nn.EitherModule composes two modules in a scala.Either

Examples of neural network building blocks, layers etc.:

  • nn.Linear, nn.Conv1D, nn.Conv2D, nn.BatchNorm, nn.Dropout, nn.Embedding, nn.RNN, nn.GRU, nn.LSTM, nn.MultiheadAttention, nn.TransformerEncoder (see their entries below; a composition sketch follows)
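A minimal composition sketch (assumptions of this sketch, not of the API: STenOptions.d as tensor options, relu/tanh operations on Variable, and sequence overloads returning Seq2/Seq3; Sequential, sequence, Fun, Linear and initLinear are all documented on this page):

import lamp._          // Scope, STen, STenOptions
import lamp.autograd._ // Variable, Constant
import lamp.nn._

Scope.root { implicit scope =>
  val tOpt = STenOptions.d // double precision tensor options (assumed helper)

  // Heterogeneous composition with nn.sequence (analogous to tuples):
  // Linear -> relu -> Linear, member types are preserved statically.
  val net = sequence(
    Linear(weights = initLinear(4, 8, tOpt), bias = None),
    Fun(implicit scope => _.relu),
    Linear(weights = initLinear(8, 2, tOpt), bias = None)
  )

  // Homogeneous composition with nn.Sequential (analogous to List):
  val activations = Sequential(
    Fun(implicit scope => _.relu),
    Fun(implicit scope => _.tanh)
  )
}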

Type members

Classlikes

object AdamW
case class AdamW(parameters: Seq[(STen, PTag)], weightDecay: OptimizerHyperparameter, learningRate: OptimizerHyperparameter, beta1: OptimizerHyperparameter, beta2: OptimizerHyperparameter, eps: Double, clip: Option[Double], debias: Boolean) extends Optimizer
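A hedged construction sketch using the constructor above; nn.simple (documented at the bottom of this page) wraps a fixed hyperparameter value, and the parameters argument is a stand-in for the optimizable tensors of a model:

import lamp._
import lamp.nn._

// Sketch only: how the Seq[(STen, PTag)] list is obtained from a model is elided.
def makeAdamW(parameters: Seq[(STen, PTag)]): AdamW =
  AdamW(
    parameters = parameters,
    weightDecay = simple(1e-5),  // fixed hyperparameter via nn.simple
    learningRate = simple(1e-3),
    beta1 = simple(0.9),
    beta2 = simple(0.999),
    eps = 1e-8,
    clip = Some(1d),             // optional global gradient clipping threshold
    debias = true
  )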
case class AdversarialTraining(eps: Double) extends LossCalculation[Variable]
object Attention
case class AttentionDecoder[T, M <: StatefulModule[Variable, Variable, T], M0 <: Module](decoder: M & StatefulModule[Variable, Variable, T], embedding: M0 & Module, stateToKey: T => Variable, keyValue: Variable, tokens: Variable, padToken: Long) extends StatefulModule[Variable, Variable, T]
case class BatchNorm(weight: Constant, bias: Constant, runningMean: Constant, runningVar: Constant, training: Boolean, momentum: Double, eps: Double, forceTrain: Boolean, forceEval: Boolean, evalIfBatchSizeIsOne: Boolean) extends Module
object BatchNorm
case class BatchNorm2D(weight: Constant, bias: Constant, runningMean: Constant, runningVar: Constant, training: Boolean, momentum: Double, eps: Double) extends Module
object BatchNorm2D
case class Conv1D(weights: Constant, bias: Constant, stride: Long, padding: Long, dilation: Long, groups: Long) extends Module
object Conv1D
case class Conv2D(weights: Constant, bias: Constant, stride: Long, padding: Long, dilation: Long, groups: Long) extends Module
object Conv2D
case class Conv2DTransposed(weights: Constant, bias: Constant, stride: Long, padding: Long, dilation: Long) extends Module
object Conv2DTransposed
case class Debug(fun: (STen, Boolean, Boolean) => Unit) extends Module
object Debug
case class DependentHyperparameter(default: Double)(pf: PartialFunction[PTag, Double]) extends OptimizerHyperparameter
case class Dropout(prob: Double, training: Boolean) extends Module
object Dropout
case class EitherModule[A, B, M1 <: GenericModule[A, B], M2 <: GenericModule[A, B]](members: Either[M1 & GenericModule[A, B], M2 & GenericModule[A, B]]) extends GenericModule[A, B]
object EitherModule
case class Embedding(weights: Constant) extends Module

Learnable mapping from classes to dense vectors. Equivalent to L * W where L is the n x C one-hot encoded matrix of the classes, * is matrix multiplication, and W is the C x dim dense matrix. W is learnable; L is never computed directly. C is the number of classes, n is the size of the batch.

Input is a long tensor with values in [0, C-1]. The input shape is arbitrary (*). The output shape is (* x D) where D is the embedding dimension.
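
A shape sketch (reusing initLinear, documented under Value members below, purely as a C x D parameter initializer is this example's choice, not a requirement):

import lamp._
import lamp.autograd._
import lamp.nn._

Scope.root { implicit scope =>
  val tOpt = STenOptions.d
  // W: 10 x 4, i.e. C = 10 classes embedded into D = 4 dimensions
  val emb = Embedding(weights = initLinear(10, 4, tOpt))
  // forward input: long tensor of indices in [0, 9], arbitrary shape (*)
  // forward output: shape (* x 4), one dense row of W per index
}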

object Embedding
case class FreeRunningRNN[T, M <: StatefulModule[Variable, Variable, T]](module: M & StatefulModule[Variable, Variable, T], timeSteps: Int) extends StatefulModule[Variable, Variable, T]

Wraps a (sequence x batch) long -> (sequence x batch x dim) double stateful module and runs it in greedy (argmax) generation mode over timeSteps steps.

object FreeRunningRNN
case class Fun(fun: Scope => Variable => Variable) extends Module
object Fun
case class GRU(weightXh: Constant, weightHh: Constant, weightXr: Constant, weightXz: Constant, weightHr: Constant, weightHz: Constant, biasR: Constant, biasZ: Constant, biasH: Constant) extends StatefulModule[Variable, Variable, Option[Variable]]

Inputs of size (sequence length * batch * in dim). Outputs of size (sequence length * batch * hidden dim).

object GRU
case class GenericFun[A, B](fun: Scope => A => B) extends GenericModule[A, B]
object GenericFun
object GenericModule
trait GenericModule[A, B]

Base type of modules.

Modules are functions of type (Seq[lamp.autograd.Constant], A) => B, where the Seq[lamp.autograd.Constant] arguments are optimizable parameters and A is a non-optimizable input.

Modules provide a way to build composite functions while also keeping track of the parameter list of the composite function.

Example:

case object Weights extends LeafTag
case object Bias extends LeafTag

case class Linear(weights: Constant, bias: Option[Constant]) extends Module {

  // The optimizable parameters of this module, each marked with a PTag
  override val state = List(
    weights -> Weights
  ) ++ bias.toList.map(b => (b, Bias))

  // x: (batch x in), returns (batch x out)
  def forward[S: Sc](x: Variable): Variable = {
    val v = x.mm(weights)
    bias.map(_ + v).getOrElse(v)
  }
}
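
A hypothetical forward pass with this module (Scope.root, STen.rand and the autograd const helper are assumed here; initLinear is documented under Value members below):

Scope.root { implicit scope =>
  val tOpt = STenOptions.d
  val linear = Linear(weights = initLinear(3, 2, tOpt), bias = None)
  val x = const(STen.rand(List(8L, 3L), tOpt)) // batch of 8 samples, 3 features
  val y = linear.forward(x)                    // shape: 8 x 2
}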

Some other attributes of modules are attached by type classes, e.g. the nn.TrainingMode and nn.Load type classes.

Type parameters:

  • A: the argument type of the module
  • B: the value type of the module

See also:

nn.Module is an alias for simple Variable => Variable modules

trait InitState[M, C]

Type class describing how to initialize the state of recurrent neural networks.

object InitState
implicit class InitStateSyntax[M, C](m: M)(implicit is: InitState[M, C])
case class LSTM(weightXi: Constant, weightXf: Constant, weightXo: Constant, weightHi: Constant, weightHf: Constant, weightHo: Constant, weightXc: Constant, weightHc: Constant, biasI: Constant, biasF: Constant, biasO: Constant, biasC: Constant) extends StatefulModule[Variable, Variable, Option[(Variable, Variable)]]

Inputs of size (sequence length * batch * vocab). Outputs of size (sequence length * batch * output dim).

object LSTM
case class LayerNorm(scale: Constant, bias: Constant, eps: Double, normalizedShape: List[Long]) extends Module
object LayerNorm
trait LeafTag extends PTag
case class LiftedModule[M <: Module](mod: M & Module) extends StatefulModule[Variable, Variable, Unit]
object LiftedModule
case class Linear(weights: Constant, bias: Option[Constant]) extends Module
object Linear
trait Load[M]

Type class describing how to load the contents of the state of modules from external tensors.

object Load
implicit class LoadSyntax[M](m: M)(implicit evidence$2: Load[M])

trait LossCalculation[I]

Loss and gradient calculation.

Takes samples, target, module, and loss function, and computes the loss and the gradients.

object MLP

Factory for multilayer fully connected feed-forward networks.

Returned network has the following repeated structure: [linear -> batchnorm -> nonlinearity -> dropout]*

The last block does not include the nonlinearity and the dropout. See the construction sketch after the parameter list below.

Value parameters:

  • dropout: dropout applied to each block
  • hidden: list of hidden dimensions
  • in: input dimensions
  • out: output dimensions
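
A hedged invocation sketch. Only the value parameters documented above are shown by name; the tOpt tensor-options parameter is this sketch's assumption, and the full signature of MLP.apply may differ:

Scope.root { implicit scope =>
  // 32 input dims -> two hidden layers of 64 units -> 2 output dims
  val net = MLP(
    in = 32,
    out = 2,
    hidden = List(64, 64),
    tOpt = STenOptions.d, // assumed parameter for tensor allocation options
    dropout = 0.1
  )
}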

case class MappedState[A, B, C, D, M <: StatefulModule[A, B, C]](statefulModule: M & StatefulModule[A, B, C], map: C => D) extends StatefulModule2[A, B, C, D]
object MappedState
case class ModelWithOptimizer[I, M <: GenericModule[I, Variable]](model: SupervisedModel[I, M], optimizer: Optimizer)
case class MultiheadAttention(wQ: Constant, wK: Constant, wV: Constant, wO: Constant, dropout: Double, train: Boolean, numHeads: Int, padToken: Long, linearized: Boolean) extends GenericModule[(Variable, Variable, Variable, STen), Variable]

Multi-head scaled dot product attention module.

Input: (query, key, value, tokens) where

  • query: batch x num queries x query dim
  • key: batch x num k-v x key dim
  • value: batch x num k-v x value dim
  • tokens: batch x num queries, long type

The tokens input carries over padding information so that padded positions can be ignored.
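
A hedged construction sketch (the square dModel x dModel projection shapes and the particular sizes are this example's assumptions):

Scope.root { implicit scope =>
  val tOpt = STenOptions.d
  val dModel = 16 // example dimension, chosen divisible by numHeads
  val mha = MultiheadAttention(
    wQ = initLinear(dModel, dModel, tOpt),
    wK = initLinear(dModel, dModel, tOpt),
    wV = initLinear(dModel, dModel, tOpt),
    wO = initLinear(dModel, dModel, tOpt),
    dropout = 0.1,
    train = true,
    numHeads = 4,
    padToken = 0L,     // token value to treat as padding
    linearized = false // presumably selects a linearized attention variant
  )
  // forward input:  (query, key, value, tokens) with the shapes listed above
  // forward output: batch x num queries x output dim
}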

object MultiheadAttention
case object NoTag extends LeafTag
trait Optimizer
trait PTag

A small trait to mark parameters for unique identification.

object PTag
class PerturbedLossCalculation[I](noiseLevel: Double) extends LossCalculation[I]

Evaluates the gradient at the current point + eps, where eps is I * N(0, noiseLevel).

object RAdam
case class RAdam(parameters: Seq[(STen, PTag)], weightDecay: OptimizerHyperparameter, learningRate: OptimizerHyperparameter, beta1: OptimizerHyperparameter, beta2: OptimizerHyperparameter, eps: Double, clip: Option[Double]) extends Optimizer

Rectified Adam optimizer algorithm.

case class RNN(weightXh: Constant, weightHh: Constant, biasH: Constant) extends StatefulModule[Variable, Variable, Option[Variable]]

Inputs of size (sequence length * batch * in dim). Outputs of size (sequence length * batch * hidden dim).

object RNN
case class Recursive[A, M <: GenericModule[A, A]](member: M & GenericModule[A, A], n: Int) extends GenericModule[A, A]
object Recursive
case class ResidualModule[M <: Module](transform: M & Module) extends Module
object ResidualModule
object SGDW
case class SGDW(parameters: Seq[(STen, PTag)], learningRate: OptimizerHyperparameter, weightDecay: OptimizerHyperparameter, momentum: Option[OptimizerHyperparameter], clip: Option[Double]) extends Optimizer
case class Seq2[T1, T2, T3, M1 <: GenericModule[T1, T2], M2 <: GenericModule[T2, T3]](m1: M1 & GenericModule[T1, T2], m2: M2 & GenericModule[T2, T3]) extends GenericModule[T1, T3]
object Seq2
case class Seq2Seq[S0, S1, M1 <: StatefulModule2[Variable, Variable, S0, S1], M2 <: StatefulModule[Variable, Variable, S1]](encoder: M1 & StatefulModule2[Variable, Variable, S0, S1], decoder: M2 & StatefulModule[Variable, Variable, S1]) extends StatefulModule2[(Variable, Variable), Variable, S0, S1]
object Seq2Seq
case class Seq2SeqWithAttention[S0, S1, M0 <: Module, M1 <: StatefulModule2[Variable, Variable, S0, S1], M2 <: StatefulModule[Variable, Variable, S1]](destinationEmbedding: M0 & Module, encoder: M1 & StatefulModule2[Variable, Variable, S0, S1], decoder: M2 & StatefulModule[Variable, Variable, S1], padToken: Long)(stateToKey: S1 => Variable) extends StatefulModule2[(Variable, Variable), Variable, S0, S1]
object Seq2SeqWithAttention
case class Seq3[T1, T2, T3, T4, M1 <: GenericModule[T1, T2], M2 <: GenericModule[T2, T3], M3 <: GenericModule[T3, T4]](m1: M1 & GenericModule[T1, T2], m2: M2 & GenericModule[T2, T3], m3: M3 & GenericModule[T3, T4]) extends GenericModule[T1, T4]
object Seq3
case class Seq4[T1, T2, T3, T4, T5, M1 <: GenericModule[T1, T2], M2 <: GenericModule[T2, T3], M3 <: GenericModule[T3, T4], M4 <: GenericModule[T4, T5]](m1: M1 & GenericModule[T1, T2], m2: M2 & GenericModule[T2, T3], m3: M3 & GenericModule[T3, T4], m4: M4 & GenericModule[T4, T5]) extends GenericModule[T1, T5]
object Seq4
case class Seq5[T1, T2, T3, T4, T5, T6, M1 <: GenericModule[T1, T2], M2 <: GenericModule[T2, T3], M3 <: GenericModule[T3, T4], M4 <: GenericModule[T4, T5], M5 <: GenericModule[T5, T6]](m1: M1 & GenericModule[T1, T2], m2: M2 & GenericModule[T2, T3], m3: M3 & GenericModule[T3, T4], m4: M4 & GenericModule[T4, T5], m5: M5 & GenericModule[T5, T6]) extends GenericModule[T1, T6]
object Seq5
case class Seq6[T1, T2, T3, T4, T5, T6, T7, M1 <: GenericModule[T1, T2], M2 <: GenericModule[T2, T3], M3 <: GenericModule[T3, T4], M4 <: GenericModule[T4, T5], M5 <: GenericModule[T5, T6], M6 <: GenericModule[T6, T7]](m1: M1 & GenericModule[T1, T2], m2: M2 & GenericModule[T2, T3], m3: M3 & GenericModule[T3, T4], m4: M4 & GenericModule[T4, T5], m5: M5 & GenericModule[T5, T6], m6: M6 & GenericModule[T6, T7]) extends GenericModule[T1, T7]
object Seq6
case class SeqLinear(weight: Constant, bias: Constant) extends Module

Inputs of size (sequence length * batch * in dim). Outputs of size (sequence length * batch * output dim). Applies a linear function to each time step.

object SeqLinear
case class Sequential[A, M <: GenericModule[A, A]](members: M & GenericModule[A, A]*) extends GenericModule[A, A]
object Sequential
object Shampoo
case class Shampoo(parameters: Seq[(STen, PTag)], learningRate: OptimizerHyperparameter, clip: Option[Double], eps: Double, diagonalThreshold: Int, updatePreconditionerEveryNIterations: Int, momentum: OptimizerHyperparameter) extends Optimizer
case class StatefulSeq2[T1, T2, T3, S1, S2, M1 <: StatefulModule[T1, T2, S1], M2 <: StatefulModule[T2, T3, S2]](m1: M1 & StatefulModule[T1, T2, S1], m2: M2 & StatefulModule[T2, T3, S2]) extends StatefulModule[T1, T3, (S1, S2)]
object StatefulSeq2
case class StatefulSeq3[T1, T2, T3, T4, S1, S2, S3, M1 <: StatefulModule[T1, T2, S1], M2 <: StatefulModule[T2, T3, S2], M3 <: StatefulModule[T3, T4, S3]](m1: M1 & StatefulModule[T1, T2, S1], m2: M2 & StatefulModule[T2, T3, S2], m3: M3 & StatefulModule[T3, T4, S3]) extends StatefulModule[T1, T4, (S1, S2, S3)]
object StatefulSeq3
case class StatefulSeq4[T1, T2, T3, T4, T5, S1, S2, S3, S4, M1 <: StatefulModule[T1, T2, S1], M2 <: StatefulModule[T2, T3, S2], M3 <: StatefulModule[T3, T4, S3], M4 <: StatefulModule[T4, T5, S4]](m1: M1 & StatefulModule[T1, T2, S1], m2: M2 & StatefulModule[T2, T3, S2], m3: M3 & StatefulModule[T3, T4, S3], m4: M4 & StatefulModule[T4, T5, S4]) extends StatefulModule[T1, T5, (S1, S2, S3, S4)]
object StatefulSeq4
case class StatefulSeq5[T1, T2, T3, T4, T5, T6, S1, S2, S3, S4, S5, M1 <: StatefulModule[T1, T2, S1], M2 <: StatefulModule[T2, T3, S2], M3 <: StatefulModule[T3, T4, S3], M4 <: StatefulModule[T4, T5, S4], M5 <: StatefulModule[T5, T6, S5]](m1: M1 & StatefulModule[T1, T2, S1], m2: M2 & StatefulModule[T2, T3, S2], m3: M3 & StatefulModule[T3, T4, S3], m4: M4 & StatefulModule[T4, T5, S4], m5: M5 & StatefulModule[T5, T6, S5]) extends StatefulModule[T1, T6, (S1, S2, S3, S4, S5)]
object StatefulSeq5
case class SupervisedModel[I, M <: GenericModule[I, Variable]](module: M & GenericModule[I, Variable], lossFunction: LossFunction, lossCalculation: LossCalculation[I], printMemoryAllocations: Boolean)(implicit tm: TrainingMode[M])
implicit class ToLift[M <: Module](mod: M & Module)
implicit class ToMappedState[A, B, C, M <: StatefulModule[A, B, C]](mod: M & StatefulModule[A, B, C])
implicit class ToUnlift[A, B, C, D, M <: StatefulModule2[A, B, C, D]](mod: M & StatefulModule2[A, B, C, D])(implicit is: InitState[M, C])
implicit class ToWithInit[A, B, C, M <: StatefulModule[A, B, C]](mod: M & StatefulModule[A, B, C])
trait TrainingMode[M]

Type class describing how to switch a module into training or evaluation mode.

object TrainingMode
implicit class TrainingModeSyntax[M](m: M)(implicit evidence$1: TrainingMode[M])
case class TransformerEmbedding(embedding: Embedding, addPositionalEmbedding: Boolean, positionalEmbedding: Constant) extends GenericModule[Variable, (Variable, STen)]

Gradients are not computed for positionalEmbedding.

object TransformerEmbedding
case class TransformerEncoder(blocks: Seq[TransformerEncoderBlock]) extends GenericModule[(Variable, STen), Variable]

TransformerEncoder module.

Input is (data, tokens) where data is a (batch, num tokens, in dimension) double tensor and tokens is a (batch, num tokens) long tensor.

Output is (batch, num tokens, out dimension).

The sole purpose of tokens is to carry over the padding information.

object TransformerEncoder
case class TransformerEncoderBlock(attention: MultiheadAttention, layerNorm1: LayerNorm, layerNorm2: LayerNorm, w1: Constant, b1: Constant, w2: Constant, b2: Constant, dropout: Double, train: Boolean) extends GenericModule[(Variable, STen), Variable]

A single block of the transformer encoder as defined in Fig 10.7.1 in d2l v0.16.

case class UnliftedModule[A, B, C, D, M <: StatefulModule2[A, B, C, D]](statefulModule: M & StatefulModule2[A, B, C, D])(implicit init: InitState[M, C]) extends GenericModule[A, B]
object UnliftedModule
case class WeightNormLinear(weightsV: Constant, weightsG: Constant, bias: Option[Constant]) extends Module
object WeightNormLinear
case class WithInit[A, B, C, M <: StatefulModule[A, B, C]](module: M & StatefulModule[A, B, C], init: C) extends StatefulModule[A, B, C]
object WithInit
object Yogi
case class Yogi(parameters: Seq[(STen, PTag)], weightDecay: OptimizerHyperparameter, learningRate: OptimizerHyperparameter, beta1: OptimizerHyperparameter, beta2: OptimizerHyperparameter, eps: Double, clip: Option[Double], debias: Boolean) extends Optimizer

The Yogi optimizer algorithm. The decoupled weight decay term is added following https://arxiv.org/pdf/1711.05101.pdf.
object sequence
case class simple(v: Double) extends OptimizerHyperparameter

Types

type StatefulModule[A, B, C] = GenericModule[(A, C), (B, C)]
type StatefulModule2[A, B, C, D] = GenericModule[(A, C), (B, D)]
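
Reading the aliases: a stateful module is an ordinary GenericModule over (input, state) pairs, so a forward pass threads the state explicitly. A minimal sketch with the RNN module documented above (the call convention is inferred from the alias; passing None for the initial state is an assumption of this sketch):

import lamp._
import lamp.autograd._
import lamp.nn._

// One step through a StatefulModule: pass (input, state), get (output, nextState)
def step(
    rnn: RNN,               // a StatefulModule[Variable, Variable, Option[Variable]]
    x: Variable,            // sequence x batch x in dim
    state: Option[Variable] // hidden state carried between calls
)(implicit scope: Scope): (Variable, Option[Variable]) =
  rnn.forward((x, state))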

Value members

Concrete methods

def gradientClippingInPlace(gradients: Seq[Option[STen]], theta: Double): Unit
def initLinear[S : Sc](in: Int, out: Int, tOpt: STenOptions): Constant
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load](t1: T1, t2: T2, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load, T7 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load, T7 <: GenericModule[_, _] : Load, T8 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load, T7 <: GenericModule[_, _] : Load, T8 <: GenericModule[_, _] : Load, T9 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load, T7 <: GenericModule[_, _] : Load, T8 <: GenericModule[_, _] : Load, T9 <: GenericModule[_, _] : Load, T10 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load, T7 <: GenericModule[_, _] : Load, T8 <: GenericModule[_, _] : Load, T9 <: GenericModule[_, _] : Load, T10 <: GenericModule[_, _] : Load, T11 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, tensors: Seq[STen]): Unit
def loadMultiple[T1 <: GenericModule[_, _] : Load, T2 <: GenericModule[_, _] : Load, T3 <: GenericModule[_, _] : Load, T4 <: GenericModule[_, _] : Load, T5 <: GenericModule[_, _] : Load, T6 <: GenericModule[_, _] : Load, T7 <: GenericModule[_, _] : Load, T8 <: GenericModule[_, _] : Load, T9 <: GenericModule[_, _] : Load, T10 <: GenericModule[_, _] : Load, T11 <: GenericModule[_, _] : Load, T12 <: GenericModule[_, _] : Load](t1: T1, t2: T2, t3: T3, t4: T4, t5: T5, t6: T6, t7: T7, t8: T8, t9: T9, t10: T10, t11: T11, t12: T12, tensors: Seq[STen]): Unit
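
A sketch of restoring the state of two modules from a flat sequence of tensors (where the tensors come from, e.g. a deserialized checkpoint, is elided; the Load instances are assumed to come from the module companions):

import lamp._
import lamp.nn._

// Consumes `tensors` in order: first m1's state, then m2's state
def restore(m1: Linear, m2: Linear, tensors: Seq[STen]): Unit =
  loadMultiple(m1, m2, tensors)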

Implicits

final implicit def InitStateSyntax[M, C](m: M)(implicit is: InitState[M, C]): InitStateSyntax[M, C]
final implicit def LoadSyntax[M : Load](m: M): LoadSyntax[M]
final implicit def ToLift[M <: Module](mod: M & Module): ToLift[M]
final implicit def ToMappedState[A, B, C, M <: StatefulModule[A, B, C]](mod: M & StatefulModule[A, B, C]): ToMappedState[A, B, C, M]
final implicit def ToUnlift[A, B, C, D, M <: StatefulModule2[A, B, C, D]](mod: M & StatefulModule2[A, B, C, D])(implicit is: InitState[M, C]): ToUnlift[A, B, C, D, M]
final implicit def ToWithInit[A, B, C, M <: StatefulModule[A, B, C]](mod: M & StatefulModule[A, B, C]): ToWithInit[A, B, C, M]
final implicit def TrainingModeSyntax[M : TrainingMode](m: M): TrainingModeSyntax[M]