Packages

  • package lamp

    Lamp provides utilities to build state-of-the-art machine learning applications.

    Overview

    Notable types and packages:

    • lamp.STen is a memory-managed wrapper around aten.ATen, an off-heap, native n-dimensional array backed by libtorch.
    • lamp.autograd implements reverse mode automatic differentiation.
    • lamp.nn contains neural network building blocks, see e.g. lamp.nn.Linear.
    • lamp.data.IOLoops implements a training loop and other data-related abstractions.
    • lamp.knn implements k-nearest neighbor search on the CPU and GPU.
    • lamp.umap.Umap implements the UMAP dimension reduction algorithm.
    • lamp.onnx implements serialization of computation graphs into ONNX format.
    • lamp.io contains CSV and NPY readers.
    How to get data into lamp

    Use one of the file readers in lamp.io or one of the factories in lamp.STen$.
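
    For example, a minimal sketch of copying a JVM array into an off-heap tensor, assuming the usual Scope.root entry point of lamp's memory management:

      import lamp._

      Scope.root { implicit scope =>
        // copy a JVM array into off-heap memory as a 2x2 double tensor on the CPU
        val a = STen.fromDoubleArray(Array(1d, 2d, 3d, 4d), List(2L, 2L), CPU, DoublePrecision)
        // factories take an implicit Scope; tensors are released when the scope closes
        val b = STen.ones(List(2L, 2L), STen.dOptions)
        println(STen.cat(List(a, b), 0L))
      }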

    How to define a custom neural network layer

    See the documentation on lamp.nn.GenericModule

    How to compose neural network layers

    See the documentation on lamp.nn

    How to train models

    See the training loops in lamp.data.IOLoops

    Definition Classes
    root

object STen extends Serializable

Companion object of lamp.STen

  • The STen.fromDoubleArray, STen.fromLongArray and STen.fromFloatArray factory methods copy data from JVM arrays into off-heap memory and create an STen instance.

  • There are similar factories which take Saddle data structures.
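
A minimal sketch of the array factories (assuming an implicit Scope, e.g. from Scope.root):

  import lamp._

  Scope.root { implicit scope =>
    // shapes are given as Seq[Long]; data is copied from the JVM arrays into off-heap memory
    val longs  = STen.fromLongArray(Array(0L, 1L, 2L), List(3L), CPU)
    val floats = STen.fromFloatArray(Array(1f, 2f, 3f, 4f), List(2L, 2L), CPU)
    println(longs)
    println(floats)
  }
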
Linear Supertypes
Serializable, AnyRef, Any

Type Members

  1. implicit class OwnedSyntax extends AnyRef

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addOut(out: STen, self: STen, other: STen, alpha: Double): Unit
  5. def addcmulOut(out: STen, self: STen, tensor1: STen, tensor2: STen, alpha: Double): Unit
  6. def addmmOut(out: STen, self: STen, mat1: STen, mat2: STen, beta: Double, alpha: Double): Unit
  7. def apply[S](vs: Double*)(implicit arg0: Sc[S]): STen

    Returns a 1D tensor containing the given values
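
    A minimal usage sketch (assuming an implicit Scope, e.g. from Scope.root):

      import lamp._

      Scope.root { implicit scope =>
        val t = STen(1d, 2d, 3d) // 1D tensor of length 3
        println(t)
      }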

  8. def arange[S](start: Double, end: Double, step: Double, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  9. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  10. def bmmOut(out: STen, self: STen, other: STen): Unit
  11. def cat[S](tensors: Seq[STen], dim: Long)(implicit arg0: Sc[S]): STen
  12. def catOut(out: STen, tensors: Seq[STen], dim: Int): Unit
  13. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  14. val dOptions: STenOptions

    A tensor option specifying CPU and double

  15. def divOut(out: STen, self: STen, other: STen): Unit
  16. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  17. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  18. def eye[S](n: Int, m: Int, tensorOptions: STenOptions)(implicit arg0: Sc[S]): STen
  19. def eye[S](n: Int, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  20. val fOptions: STenOptions

    A tensor option specifying CPU and float

  21. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  22. def free(value: Tensor): STen

    Wraps a tensor without registering it to any scope. Memory may leak.
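
    A hedged sketch; rawTensor stands for a hypothetical aten.Tensor obtained elsewhere (e.g. from native code):

      import aten.Tensor
      import lamp._

      def wrapUnmanaged(rawTensor: Tensor): STen =
        // not registered to any Scope: the caller owns the lifetime of the
        // underlying native tensor, otherwise its memory leaks
        STen.free(rawTensor)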

  23. def fromDoubleArray[S](ar: Array[Double], dim: Seq[Long], device: Device, precision: FloatingPointPrecision)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  24. def fromFile[S](path: String, offset: Long, length: Long, scalarTypeByte: Byte, pin: Boolean)(implicit arg0: Sc[S]): STen

    Creates a tensor directly from a file. Memory maps the file into host memory; data is not passed through the JVM. The returned tensor is always on the CPU device.

    path: file path
    offset: byte offset into the file; must be page aligned (usually a multiple of 4096)
    length: byte length of the data
    scalarTypeByte: scalar type (long = 4, half = 5, float = 6, double = 7)
    pin: if true, the mapped segment will be page locked with mlock(2)
    returns: tensor on the CPU
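
    A minimal sketch (assuming an implicit Scope and a hypothetical file /tmp/data.bin that stores 1000 raw float values, i.e. 4000 bytes, starting at byte 0):

      import lamp._

      Scope.root { implicit scope =>
        val t = STen.fromFile(
          path = "/tmp/data.bin",
          offset = 0L,        // must be page aligned
          length = 4000L,     // byte length of the mapped data
          scalarTypeByte = 6, // 6 = float, see the table above
          pin = false
        )
        println(t) // float tensor on the CPU
      }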

  25. def fromFloatArray[S](ar: Array[Float], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  26. def fromFloatMat[S](m: Mat[Float], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  27. def fromLongArray[S](ar: Array[Long], dim: Seq[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  28. def fromLongMat[S](m: Mat[Long], cuda: Boolean = false)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape, on CUDA if cuda is true, otherwise on the CPU

  29. def fromLongMat[S](m: Mat[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  30. def fromLongVec[S](m: Vec[Long], cuda: Boolean = false)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content, on CUDA if cuda is true, otherwise on the CPU

  31. def fromLongVec[S](m: Vec[Long], device: Device)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  32. def fromMat[S](m: Mat[Double], device: Device, precision: FloatingPointPrecision)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  33. def fromMat[S](m: Mat[Double], cuda: Boolean = false)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape, on CUDA if cuda is true, otherwise on the CPU

  34. def fromVec[S](m: Vec[Double], device: Device, precision: FloatingPointPrecision)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content and shape on the given device

  35. def fromVec[S](m: Vec[Double], cuda: Boolean = false)(implicit arg0: Sc[S]): STen

    Returns a tensor with the given content, on CUDA if cuda is true, otherwise on the CPU
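
    A minimal sketch of the Saddle-based factories (assuming Saddle's standard Mat and Vec constructors and an implicit Scope):

      import org.saddle._
      import lamp._

      Scope.root { implicit scope =>
        val m = Mat(2, 2, Array(1d, 2d, 3d, 4d)) // 2x2 Saddle matrix
        val v = Vec(1d, 2d, 3d)

        val tm = STen.fromMat(m, CPU, DoublePrecision) // 2x2 double tensor on the CPU
        val tv = STen.fromVec(v, cuda = false)         // stays on the CPU
        println(tm)
        println(tv)
      }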

  36. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  37. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  38. def indexSelectOut(out: STen, self: STen, dim: Int, index: STen): Unit
  39. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  40. def l1_loss_backward[S](gradOutput: STen, self: STen, target: STen, reduction: Long)(implicit arg0: Sc[S]): STen
  41. val lOptions: STenOptions

    A tensor option specifying CPU and long

  42. def linspace[S](start: Double, end: Double, steps: Long, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  43. def meanOut(out: STen, self: STen, dim: Seq[Int], keepDim: Boolean): Unit
  44. def mmOut(out: STen, self: STen, other: STen): Unit
  45. def mse_loss[S](self: STen, target: STen, reduction: Long)(implicit arg0: Sc[S]): STen
  46. def mse_loss_backward[S](gradOutput: STen, self: STen, target: STen, reduction: Long)(implicit arg0: Sc[S]): STen
  47. def mulOut(out: STen, self: STen, other: STen): Unit
  48. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  49. def normal[S](mean: Double, std: Double, size: Seq[Long], options: STenOptions)(implicit arg0: Sc[S]): STen
  50. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  51. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  52. def ones[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  53. def onesLike[S](tensor: STen)(implicit arg0: Sc[S]): STen
  54. def onesLike[S](tensor: Tensor)(implicit arg0: Sc[S]): STen
  55. def owned(value: Tensor)(implicit scope: Scope): STen

    Wraps an aten.Tensor and registers it to the given scope
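
    A minimal sketch; rawTensor stands for a hypothetical aten.Tensor produced elsewhere:

      import aten.Tensor
      import lamp._

      def wrapManaged(rawTensor: Tensor)(implicit scope: Scope): STen =
        STen.owned(rawTensor) // registered to the scope; released when the scope closes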

  56. def powOut(out: STen, self: STen, other: STen): Unit
  57. def powOut(out: STen, self: STen, other: Double): Unit
  58. def rand[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  59. def randint[S](low: Long, high: Long, size: Seq[Long], tensorOptions: STenOptions)(implicit arg0: Sc[S]): STen
  60. def randint[S](high: Long, size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  61. def randn[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  62. def randperm[S](n: Long, tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  63. def remainderOut(out: STen, self: STen, other: Double): Unit
  64. def remainderOut(out: STen, self: STen, other: STen): Unit
  65. def scalarDouble[S](value: Double, options: STenOptions)(implicit arg0: Sc[S]): STen
  66. def scalarLong(value: Long, options: STenOptions)(implicit scope: Scope): STen
  67. def softplus_backward[S](gradOutput: STen, self: STen, beta: Double, threshold: Double, output: STen)(implicit arg0: Sc[S]): STen
  68. def sparse_coo[S](indices: STen, values: STen, dim: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  69. def stack[S](tensors: Seq[STen], dim: Long)(implicit arg0: Sc[S]): STen
  70. def subOut(out: STen, self: STen, other: STen, alpha: Double): Unit
  71. def sumOut(out: STen, self: STen, dim: Seq[Int], keepDim: Boolean): Unit
  72. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  73. def tanh_backward[S](gradOutput: STen, output: STen)(implicit arg0: Sc[S]): STen
  74. def tensorsFromFile[S](path: String, offset: Long, length: Long, pin: Boolean, tensors: List[(Byte, Long, Long)])(implicit arg0: Sc[S]): Vector[STen]

    Creates tensors directly from a file. Memory maps the file into host memory; data is not passed through the JVM. The returned tensors are always on the CPU device.

    path: file path
    offset: byte offset into the file; must be page aligned (usually a multiple of 4096)
    length: byte length of the data (all tensors in total)
    pin: if true, the mapped segment will be page locked with mlock(2)
    tensors: list of (scalarType, byte offset, byte length) descriptors; byte offsets must be aligned to 8
    returns: tensors on the CPU
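
    A minimal sketch (assuming an implicit Scope and a hypothetical file /tmp/tensors.bin holding two float tensors back to back: 256 floats, 1024 bytes, followed by 64 floats, 256 bytes):

      import lamp._

      Scope.root { implicit scope =>
        val ts = STen.tensorsFromFile(
          path = "/tmp/tensors.bin",
          offset = 0L,    // page aligned offset of the mapped region
          length = 1280L, // total byte length of both tensors
          pin = false,
          tensors = List(
            (6: Byte, 0L, 1024L),  // (scalarType = float, byte offset, byte length)
            (6: Byte, 1024L, 256L) // offsets must be aligned to 8
          )
        )
        ts.foreach(println)
      }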

  75. def toString(): String
    Definition Classes
    AnyRef → Any
  76. def to_dense_backward[S](gradOutput: STen, input: STen)(implicit arg0: Sc[S]): STen
  77. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  78. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  79. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  80. def where[S](condition: Tensor, self: STen, other: STen)(implicit arg0: Sc[S]): STen
  81. def where[S](condition: STen, self: STen, other: STen)(implicit arg0: Sc[S]): STen
  82. def zeros[S](size: Seq[Long], tensorOptions: STenOptions = STen.dOptions)(implicit arg0: Sc[S]): STen
  83. def zerosLike[S](tensor: STen)(implicit arg0: Sc[S]): STen
  84. def zerosLike[S](tensor: Tensor)(implicit arg0: Sc[S]): STen
