case class BatchNorm2D(scope: Scope, input: Variable, weight: Variable, bias: Variable, runningMean: STen, runningVar: STen, training: Boolean, momentum: Double, eps: Double) extends Op with Product with Serializable
Batch Norm 2D. The 0th dimension holds the samples and the 1st dimension holds the features; every other dimension is averaged out.
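As a dependency-free sketch of the normalization this op performs (illustrative only; the names `BatchNormSketch` and `batchNorm2d` are assumptions for this example, not lamp API, and lamp delegates the real computation to native kernels): for an input of shape (N, C, H, W), statistics are taken per feature channel over the sample and spatial dimensions.

```scala
// Minimal sketch of training-mode batch normalization over a (N, C, H, W)
// input, using plain nested arrays in place of tensors. Not lamp API.
object BatchNormSketch {
  def batchNorm2d(
      x: Array[Array[Array[Array[Double]]]], // indexed as x(sample)(feature)(h)(w)
      weight: Array[Double],                 // learned scale, one per feature
      bias: Array[Double],                   // learned shift, one per feature
      eps: Double = 1e-5
  ): Array[Array[Array[Array[Double]]]] = {
    val n = x.length
    val c = x(0).length
    val h = x(0)(0).length
    val w = x(0)(0)(0).length
    val count = (n * h * w).toDouble
    // statistics are computed per feature (dimension 1); samples (dimension 0)
    // and all remaining dimensions are averaged out
    val mean = Array.tabulate(c) { ci =>
      x.map(_(ci).map(_.sum).sum).sum / count
    }
    val variance = Array.tabulate(c) { ci =>
      x.map(_(ci).map(_.map(v => (v - mean(ci)) * (v - mean(ci))).sum).sum).sum / count
    }
    // normalize, then apply the per-feature affine transform
    Array.tabulate(n, c, h, w) { (ni, ci, hi, wi) =>
      (x(ni)(ci)(hi)(wi) - mean(ci)) / math.sqrt(variance(ci) + eps) * weight(ci) + bias(ci)
    }
  }
}
```

After normalization each feature channel has approximately zero mean and unit variance across the batch, before `weight` and `bias` are applied.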
Linear Supertypes
Serializable, Product, Equals, Op, AnyRef, Any
Instance Constructors
- new BatchNorm2D(scope: Scope, input: Variable, weight: Variable, bias: Variable, runningMean: STen, runningVar: STen, training: Boolean, momentum: Double, eps: Double)
Value Members
- Members inherited from AnyRef and Any: !=, ==, ##, asInstanceOf, isInstanceOf, clone, eq, ne, finalize, getClass, notify, notifyAll, synchronized, wait
- val bias: Variable
- val eps: Double
- val expectedShape: List[Long]
- val input: Variable
- val inputShape: List[Long]
- val momentum: Double
- val output: Tensor
- val params: List[(Variable, (STen, STen) ⇒ Unit)]
Implementation of the backward pass.
A list of input variables paired with an anonymous function computing the respective partial derivative. With the notation in the documentation of the trait lamp.autograd.Op: dy/dw2 => dy/dw2 * dw2/dw1. The first argument of the anonymous function is the incoming partial derivative (dy/dw2); the second argument is the output tensor into which the result (dy/dw2 * dw2/dw1) is accumulated (added). If the operation does not support computing the partial derivative for some of its arguments, that argument is not included in this list.
- Definition Classes
- BatchNorm2D → Op
- See also
The documentation of the trait lamp.autograd.Op for more details and an example.
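To make the accumulation contract concrete, here is a minimal, library-free sketch mirroring the shape of params: each input is paired with a closure that receives the incoming partial derivative and adds its contribution into a gradient buffer. `ToyMult` and the use of Array[Double] in place of Variable/STen are assumptions for this illustration, not lamp API.

```scala
// Toy illustration of the params contract: every closure receives the
// incoming partial derivative and ADDS its contribution into the given
// gradient buffer, rather than overwriting it.
object ParamsSketch {
  // y = a * b elementwise, so dy/da = b and dy/db = a
  final case class ToyMult(a: Array[Double], b: Array[Double]) {
    val value: Array[Double] = a.zip(b).map { case (x, y) => x * y }

    // mirrors Op.params: each input paired with an accumulating closure
    val params: List[(Array[Double], (Array[Double], Array[Double]) => Unit)] =
      List(
        (a, (incoming: Array[Double], out: Array[Double]) =>
          for (i <- out.indices) out(i) += incoming(i) * b(i)),
        (b, (incoming: Array[Double], out: Array[Double]) =>
          for (i <- out.indices) out(i) += incoming(i) * a(i))
      )
  }
}
```

Invoking a closure twice accumulates into the buffer rather than overwriting it, matching the "accumulated (added)" semantics described above.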
- val runningMean: STen
- val runningVar: STen
- val saveInvstd: Tensor
- val saveMean: Tensor
- val scope: Scope
- val training: Boolean
- val value: Variable
The value of this operation.
- Definition Classes
- BatchNorm2D → Op
- val weight: Variable