STen

object STen

Companion object of lamp.STen

Companion: class STen

Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any

Self type: STen.type

Type members

Classlikes

implicit class OwnedSyntax(t: Tensor)

Inherited types

type MirroredElemLabels <: Tuple

The names of the product elements

Inherited from:
Mirror
type MirroredLabel <: String

The name of the type

Inherited from:
Mirror

Value members

Concrete methods

def addOut(out: STen, self: STen, other: STen, alpha: Double): Unit
def addcmulOut(out: STen, self: STen, tensor1: STen, tensor2: STen, alpha: Double): Unit
def addmmOut(out: STen, self: STen, mat1: STen, mat2: STen, beta: Double, alpha: Double): Unit
def arange[S : Sc](start: Double, end: Double, step: Double, tensorOptions: STenOptions): STen
def arange_l[S : Sc](start: Long, end: Long, step: Long, tensorOptions: STenOptions): STen
def atan2[S : Sc](y: STen, x: STen): STen
def bmmOut(out: STen, self: STen, other: STen): Unit
def cat[S : Sc](tensors: Seq[STen], dim: Long): STen
def catOut(out: STen, tensors: Seq[STen], dim: Int): Unit
def divOut(out: STen, self: STen, other: STen): Unit
def eye[S : Sc](n: Int, tensorOptions: STenOptions): STen
def eye[S : Sc](n: Int, m: Int, tensorOptions: STenOptions): STen
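
The *Out variants above write their result into a preallocated out tensor instead of allocating a fresh one, which helps avoid allocations in hot loops. A minimal sketch, assuming the STenOptions.d shorthand for CPU double options:

  import lamp._

  Scope.root { implicit scope =>
    val a   = STen.ones(List(2L, 3L), STenOptions.d)
    val b   = STen.ones(List(2L, 3L), STenOptions.d)
    val out = STen.zeros(List(2L, 3L), STenOptions.d)

    STen.addOut(out, a, b, 1.0) // out = a + 1.0 * b, written in place
    STen.mulOut(out, a, b)      // out = a * b, elementwise, reusing the buffer
  }
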
def free(value: Tensor): STen

Wraps a tensor without registering it to any scope.

Memory may leak.
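
The difference from owned (below) in a minimal sketch; rawTensor and rawTensor2 stand in for tensors obtained elsewhere, and the manual value.release() call is an assumption about the underlying aten binding:

  import lamp._
  import aten.Tensor

  def example(rawTensor: Tensor, rawTensor2: Tensor): Unit = {
    Scope.root { implicit scope =>
      val managed = STen.owned(rawTensor) // released when the scope closes
    }
    val unmanaged = STen.free(rawTensor2) // never released automatically
    unmanaged.value.release()             // caller must free the native memory
  }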

def fromDoubleArray[S : Sc](ar: Array[Double], dim: Seq[Long], device: Device, precision: FloatingPointPrecision): STen

Returns a tensor with the given content and shape on the given device
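
For example, to build a 2x2 double tensor on the CPU (the float and long variants below follow the same pattern):

  import lamp._

  Scope.root { implicit scope =>
    val t = STen.fromDoubleArray(
      Array(1d, 2d, 3d, 4d),
      dim = List(2L, 2L),
      device = CPU,
      precision = DoublePrecision
    )
    // t is a 2x2 CPU tensor holding [[1, 2], [3, 4]]
  }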

def fromFile[S : Sc](path: String, offset: Long, length: Long, scalarTypeByte: Byte, pin: Boolean): STen

Create tensor directly from file. Memory maps a file into host memory. Data is not passed through the JVM. Returned tensor is always on the CPU device.

Value parameters:
  length: byte length of the data
  offset: byte offset into the file; must be page aligned (usually a multiple of 4096)
  path: file path
  pin: if true, the mapped segment will be page locked with mlock(2)
  scalarTypeByte: scalar type (long=4, half=5, float=6, double=7)

Returns:
  tensor on CPU
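
A hedged sketch; the file name and layout are hypothetical, and scalarTypeByte = 7 selects double per the table above:

  import lamp._

  Scope.root { implicit scope =>
    // hypothetical file: 1000 raw doubles starting at byte 0 of data.bin
    val t = STen.fromFile(
      path = "data.bin",
      offset = 0L,         // page aligned
      length = 1000L * 8L, // 1000 doubles, 8 bytes each
      scalarTypeByte = 7,  // double
      pin = false
    )
    // t lives on the CPU, backed by the memory-mapped file
  }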

def fromFloatArray[S : Sc](ar: Array[Float], dim: Seq[Long], device: Device): STen

Returns a tensor with the given content and shape on the given device

def fromLongArray[S : Sc](ar: Array[Long], dim: Seq[Long], device: Device): STen

Returns a tensor with the given content and shape on the given device

def fromLongArray[S : Sc](ar: Array[Long]): STen

Returns a 1D tensor on the CPU with the given content.
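
The same pattern for the float and long variants; the single-argument fromLongArray overload is a convenience for the 1D case:

  import lamp._

  Scope.root { implicit scope =>
    val f  = STen.fromFloatArray(Array(1f, 2f, 3f, 4f), List(2L, 2L), CPU)
    val l  = STen.fromLongArray(Array(1L, 2L, 3L), List(3L), CPU)
    val l1 = STen.fromLongArray(Array(1L, 2L, 3L)) // 1D, on the CPU
  }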

def indexCopyOut(out: STen, self: STen, dim: Int, index: STen, source: STen): Unit
def indexSelectOut(out: STen, self: STen, dim: Int, index: STen): Unit
def l1_loss_backward[S : Sc](gradOutput: STen, self: STen, target: STen, reduction: Long): STen
def linspace[S : Sc](start: Double, end: Double, steps: Long, tensorOptions: STenOptions): STen
def lstsq[S : Sc](A: STen, B: STen): (STen, STen, STen, STen)
def meanOut(out: STen, self: STen, dim: Seq[Int], keepDim: Boolean): Unit
def mmOut(out: STen, self: STen, other: STen): Unit
def mse_loss[S : Sc](self: STen, target: STen, reduction: Long): STen
def mse_loss_backward[S : Sc](gradOutput: STen, self: STen, target: STen, reduction: Long): STen
def mulOut(out: STen, self: STen, other: STen): Unit
def ncclBoadcast(tensors: Seq[(STen, NcclComm)]): Unit

Broadcasts the tensor on the root rank to the clique.

Blocks until all peers execute the broadcast. Takes a list of tensors for the case where a single thread manages multiple GPUs.

def ncclInitComm(nRanks: Int, myRank: Int, myDevice: Int, ncclUniqueId: NcclUniqueId): NcclComm

Blocks until all peers join the clique.

def ncclReduce(inputs: Seq[(STen, NcclComm)], output: STen, rootRank: Int): Unit

Reduction with +. The output must be on the root rank.

Blocks until all peers execute the reduce. Takes a list of tensors for the case where a single thread manages multiple GPUs.
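
A sketch of how the three NCCL entry points fit together, not runnable as-is: it assumes a two-GPU clique, that uid is a NcclUniqueId created on rank 0 and distributed to the peers out of band, that gpuTensor and result are CUDA tensors prepared by the caller, and that NcclComm and NcclUniqueId live in the aten package:

  import lamp._
  import aten.{NcclComm, NcclUniqueId}

  def onEachRank(rank: Int, uid: NcclUniqueId, gpuTensor: STen, result: STen): Unit = {
    // blocks until both peers have joined
    val comm: NcclComm =
      STen.ncclInitComm(nRanks = 2, myRank = rank, myDevice = rank, ncclUniqueId = uid)

    // the root's data reaches every peer; blocks until all peers call it
    STen.ncclBoadcast(List((gpuTensor, comm)))

    // sum-reduce onto rank 0; result must live on the root rank
    STen.ncclReduce(List((gpuTensor, comm)), result, rootRank = 0)
  }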

def normal[S : Sc](mean: Double, std: Double, size: Seq[Long], options: STenOptions): STen
def ones[S : Sc](size: Seq[Long], tensorOptions: STenOptions): STen
def onesLike[S : Sc](tensor: Tensor): STen
def onesLike[S : Sc](tensor: STen): STen
def owned(value: Tensor)(implicit scope: Scope): STen

Wraps an aten.Tensor and registers it to the given scope.

def powOut(out: STen, self: STen, other: Double): Unit
def powOut(out: STen, self: STen, other: STen): Unit
def rand[S : Sc](size: Seq[Long], tensorOptions: STenOptions): STen
def randint[S : Sc](high: Long, size: Seq[Long], tensorOptions: STenOptions): STen
def randint[S : Sc](low: Long, high: Long, size: Seq[Long], tensorOptions: STenOptions): STen
def randn[S : Sc](size: Seq[Long], tensorOptions: STenOptions): STen
def randperm[S : Sc](n: Long, tensorOptions: STenOptions): STen
def remainderOut(out: STen, self: STen, other: STen): Unit
def remainderOut(out: STen, self: STen, other: Double): Unit
def scalarDouble[S : Sc](value: Double, options: STenOptions): STen
def scalarLong(value: Long, options: STenOptions)(implicit scope: Scope): STen
def softplus_backward[S : Sc](gradOutput: STen, self: STen, beta: Double, threshold: Double): STen
def sparse_coo[S : Sc](indices: STen, values: STen, dim: Seq[Long], tensorOptions: STenOptions): STen
def stack[S : Sc](tensors: Seq[STen], dim: Long): STen
def subOut(out: STen, self: STen, other: STen, alpha: Double): Unit
def sumOut(out: STen, self: STen, dim: Seq[Int], keepDim: Boolean): Unit
def tanh_backward[S : Sc](gradOutput: STen, output: STen): STen
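
As an illustration of the factory methods listed above (a minimal sketch, assuming the STenOptions.d and STenOptions.l shorthands for CPU double and long options):

  import lamp._

  Scope.root { implicit scope =>
    val u = STen.rand(List(3L, 3L), STenOptions.d)           // uniform on [0, 1)
    val g = STen.normal(0d, 1d, List(3L, 3L), STenOptions.d) // mean 0, std 1
    val p = STen.randperm(10L, STenOptions.l)                // permutation of 0..9
    val r = STen.arange(0d, 10d, 2d, STenOptions.d)          // 0, 2, 4, 6, 8
  }
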
def tensorsFromFile[S : Sc](path: String, offset: Long, length: Long, pin: Boolean, tensors: List[(Byte, Long, Long)]): Vector[STen]

Create tensors directly from file. Memory maps a file into host memory. Data is not passed through the JVM. Returned tensors are always on the CPU device.

Value parameters:
  length: byte length of the data (all tensors in total)
  offset: byte offset into the file; must be page aligned (usually a multiple of 4096)
  path: file path
  pin: if true, the mapped segment will be page locked with mlock(2)
  tensors: list of tensors as (scalarType, byte offset, byte length); byte offsets must be aligned to 8

Returns:
  tensors on CPU
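
A hedged sketch of reading two tensors from one mapped file; the file layout is hypothetical, and the scalar type byte 6 selects float per the fromFile table above:

  import lamp._

  Scope.root { implicit scope =>
    // hypothetical layout: two float tensors of 512 elements each,
    // packed back to back in tensors.bin
    val ts: Vector[STen] = STen.tensorsFromFile(
      path = "tensors.bin",
      offset = 0L,    // page aligned
      length = 4096L, // total byte length of both tensors
      pin = false,
      tensors = List(
        (6: Byte, 0L, 2048L),   // (scalarType, byte offset, byte length)
        (6: Byte, 2048L, 2048L) // offsets aligned to 8
      )
    )
  }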

def to_dense_backward[S : Sc](gradOutput: STen, input: STen): STen
def triangularSolve[S : Sc](b: STen, A: STen, upper: Boolean, transpose: Boolean, uniTriangular: Boolean): STen
def where[S : Sc](condition: STen, self: STen, other: STen): STen
def where[S : Sc](condition: Tensor, self: STen, other: STen): STen
def zeros[S : Sc](size: Seq[Long], tensorOptions: STenOptions): STen
def zerosLike[S : Sc](tensor: Tensor): STen
def zerosLike[S : Sc](tensor: STen): STen

Concrete fields

A tensor option specifying CPU and byte
A tensor option specifying CPU and double
A tensor option specifying CPU and float
A tensor option specifying CPU and long

Implicits

final implicit def OwnedSyntax(t: Tensor): OwnedSyntax
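
This enables wrapping a raw aten.Tensor with method syntax inside a scope. A minimal sketch; the owned member name on OwnedSyntax is an assumption consistent with the class name and with STen.owned above:

  import lamp._
  import aten.Tensor

  def wrap(raw: Tensor)(implicit scope: Scope): STen =
    raw.owned // equivalent to STen.owned(raw); released when the scope closes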