Class SDLoss


  • public class SDLoss
    extends SDOps
    • Constructor Detail

      • SDLoss

        public SDLoss​(SameDiff sameDiff)
    • Method Detail

      • absoluteDifference

        public SDVariable absoluteDifference​(SDVariable label,
                                             SDVariable predictions,
                                             SDVariable weights,
                                             LossReduce lossReduce)
        Absolute difference loss: sum_i abs( label[i] - predictions[i] )
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output loss variable (NUMERIC type)
      • absoluteDifference

        public SDVariable absoluteDifference​(String name,
                                             SDVariable label,
                                             SDVariable predictions,
                                             SDVariable weights,
                                             LossReduce lossReduce)
        Absolute difference loss: sum_i abs( label[i] - predictions[i] )
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output loss variable (NUMERIC type)
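
        A minimal usage sketch (assuming this method is reached through a SameDiff instance's loss() namespace, with the usual org.nd4j imports for SameDiff, SDVariable, LossReduce and Nd4j; variable names and values are illustrative only):

          SameDiff sd = SameDiff.create();
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[]{0.0f, 1.0f, 2.0f}));
          SDVariable pred  = sd.var("pred",  Nd4j.createFromArray(new float[]{0.5f, 1.5f, 1.0f}));
          // Passing null for weights uses a weight of 1.0 for every element, as documented above
          SDVariable loss = sd.loss().absoluteDifference("absDiff", label, pred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT);
          System.out.println(loss.eval());   // scalar loss under the default reduction
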
      • absoluteDifference

        public SDVariable absoluteDifference​(SDVariable label,
                                             SDVariable predictions,
                                             SDVariable weights)
        Absolute difference loss: sum_i abs( label[i] - predictions[i] )
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output loss variable (NUMERIC type)
      • absoluteDifference

        public SDVariable absoluteDifference​(String name,
                                             SDVariable label,
                                             SDVariable predictions,
                                             SDVariable weights)
        Absolute difference loss: sum_i abs( label[i] - predictions[i] )
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output loss variable (NUMERIC type)
      • cosineDistance

        public SDVariable cosineDistance​(SDVariable label,
                                         SDVariable predictions,
                                         SDVariable weights,
                                         LossReduce lossReduce,
                                         int dimension)
        Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is
        equivalent to cosine distance when both the predictions and labels are normalized.
        Note: This loss function assumes that both the predictions and labels are normalized to have unit L2 norm.
        If this is not the case, you should normalize them first by dividing by the L2 norm (norm2(String, SDVariable, boolean, int...))
        along the cosine distance dimension (with keepDims=true).
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        dimension - Dimension to perform the cosine distance over
        Returns:
        output Cosine distance loss (NUMERIC type)
      • cosineDistance

        public SDVariable cosineDistance​(String name,
                                         SDVariable label,
                                         SDVariable predictions,
                                         SDVariable weights,
                                         LossReduce lossReduce,
                                         int dimension)
        Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is
        equivalent to cosine distance when both the predictions and labels are normalized.
        Note: This loss function assumes that both the predictions and labels are normalized to have unit L2 norm.
        If this is not the case, you should normalize them first by dividing by the L2 norm (norm2(String, SDVariable, boolean, int...))
        along the cosine distance dimension (with keepDims=true).
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        dimension - Dimension to perform the cosine distance over
        Returns:
        output Cosine distance loss (NUMERIC type)
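
        A minimal usage sketch (same assumptions as the absoluteDifference example above; the rows below are illustrative and already unit L2-normalized along dimension 1, as the note above requires):

          SameDiff sd = SameDiff.create();
          // Each row is one example; rows are already unit L2-normalized along dimension 1
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[][]{{1f, 0f}, {0f, 1f}}));
          SDVariable pred  = sd.var("pred",  Nd4j.createFromArray(new float[][]{{0.6f, 0.8f}, {0f, 1f}}));
          SDVariable loss = sd.loss().cosineDistance("cosDist", label, pred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT, 1);   // dimension = 1
          System.out.println(loss.eval());
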
      • cosineDistance

        public SDVariable cosineDistance​(SDVariable label,
                                         SDVariable predictions,
                                         SDVariable weights,
                                         int dimension)
        Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is
        equivalent to cosine distance when both the predictions and labels are normalized.
        Note: This loss function assumes that both the predictions and labels are normalized to have unit L2 norm.
        If this is not the case, you should normalize them first by dividing by the L2 norm (norm2(String, SDVariable, boolean, int...))
        along the cosine distance dimension (with keepDims=true).
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        dimension - Dimension to perform the cosine distance over
        Returns:
        output Cosine distance loss (NUMERIC type)
      • cosineDistance

        public SDVariable cosineDistance​(String name,
                                         SDVariable label,
                                         SDVariable predictions,
                                         SDVariable weights,
                                         int dimension)
        Cosine distance loss: 1 - cosineSimilarity(x,y) or 1 - sum_i label[i] * prediction[i], which is
        equivalent to cosine distance when both the predictions and labels are normalized.
        Note: This loss function assumes that both the predictions and labels are normalized to have unit L2 norm.
        If this is not the case, you should normalize them first by dividing by the L2 norm (norm2(String, SDVariable, boolean, int...))
        along the cosine distance dimension (with keepDims=true).
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        dimension - Dimension to perform the cosine distance over
        Returns:
        output Cosine distance loss (NUMERIC type)
      • ctcLoss

        public SDVariable ctcLoss​(SDVariable targetLabels,
                                  SDVariable logitInput,
                                  SDVariable targetLabelLengths,
                                  SDVariable logitInputLengths)
        CTC Loss: Connectionist Temporal Classification Loss. See:
        https://dl.acm.org/citation.cfm?id=1143891
        Parameters:
        targetLabels - Label array (NUMERIC type)
        logitInput - Inputs (NUMERIC type)
        targetLabelLengths - Length of the target label (NUMERIC type)
        logitInputLengths - Length of the input (NUMERIC type)
        Returns:
        output Ctc loss (NUMERIC type)
      • ctcLoss

        public SDVariable ctcLoss​(String name,
                                  SDVariable targetLabels,
                                  SDVariable logitInput,
                                  SDVariable targetLabelLengths,
                                  SDVariable logitInputLengths)
        CTC Loss: Connectionist Temporal Classification Loss. See:
        https://dl.acm.org/citation.cfm?id=1143891
        Parameters:
        name - Name for the output variable. May be null
        targetLabels - Label array (NUMERIC type)
        logitInput - Inputs (NUMERIC type)
        targetLabelLengths - Length of the target label (NUMERIC type)
        logitInputLengths - Length of the input (NUMERIC type)
        Returns:
        output Ctc loss (NUMERIC type)
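
        A shape-oriented sketch only: the shape conventions assumed here (logits of shape [minibatch, timeSteps, numClasses], targetLabels of shape [minibatch, maxLabelLength], and per-example length vectors) should be verified against the CTC op in the version in use; all names and values are illustrative.

          SameDiff sd = SameDiff.create();
          int mb = 2, timeSteps = 10, numClasses = 5;
          // Assumed shapes: logits [mb, timeSteps, numClasses], targetLabels [mb, maxLabelLength]
          SDVariable logits = sd.var("logits", Nd4j.rand(new int[]{mb, timeSteps, numClasses}));
          SDVariable targetLabels = sd.var("targetLabels",
                  Nd4j.createFromArray(new float[][]{{1, 2, 3, 1}, {2, 2, 1, 0}}));
          SDVariable labelLengths = sd.var("labelLengths", Nd4j.createFromArray(new float[]{4, 3}));
          SDVariable inputLengths = sd.var("inputLengths", Nd4j.createFromArray(new float[]{10, 10}));
          // Adds the CTC loss to the graph; one loss value per example
          SDVariable ctc = sd.loss().ctcLoss("ctc", targetLabels, logits, labelLengths, inputLengths);
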
      • hingeLoss

        public SDVariable hingeLoss​(SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights,
                                    LossReduce lossReduce)
        Hinge loss: a loss function used for training classifiers.
        Implements L = max(0, 1 - t * predictions), where t is the label after internal conversion to {-1,1}
        from the user-specified {0,1} values. Note that labels should be provided with values in {0,1}.
        Parameters:
        label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output Loss variable (NUMERIC type)
      • hingeLoss

        public SDVariable hingeLoss​(String name,
                                    SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights,
                                    LossReduce lossReduce)
        Hinge loss: a loss function used for training classifiers.
        Implements L = max(0, 1 - t * predictions), where t is the label after internal conversion to {-1,1}
        from the user-specified {0,1} values. Note that labels should be provided with values in {0,1}.
        Parameters:
        name - Name for the output variable. May be null
        label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output Loss variable (NUMERIC type)
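
        A minimal usage sketch (same assumptions as the earlier examples; labels use {0,1} as documented, and all values are illustrative):

          SameDiff sd = SameDiff.create();
          // Labels use {0,1}; the op converts them to {-1,1} internally
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[]{1f, 0f, 1f, 0f}));
          SDVariable pred  = sd.var("pred",  Nd4j.createFromArray(new float[]{2.5f, -1.0f, 0.2f, 0.7f}));
          SDVariable loss = sd.loss().hingeLoss("hinge", label, pred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT);
          System.out.println(loss.eval());   // average of max(0, 1 - t * pred) over the 4 elements
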
      • hingeLoss

        public SDVariable hingeLoss​(SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights)
        Hinge loss: a loss function used for training classifiers.
        Implements L = max(0, 1 - t * predictions), where t is the label after internal conversion to {-1,1}
        from the user-specified {0,1} values. Note that labels should be provided with values in {0,1}.
        Parameters:
        label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • hingeLoss

        public SDVariable hingeLoss​(String name,
                                    SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights)
        Hinge loss: a loss function used for training classifiers.
        Implements L = max(0, 1 - t * predictions), where t is the label after internal conversion to {-1,1}
        from the user-specified {0,1} values. Note that labels should be provided with values in {0,1}.
        Parameters:
        name - Name for the output variable. May be null
        label - Label array. Each value should be 0.0 or 1.0 (internally -1 to 1 is used) (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • huberLoss

        public SDVariable huberLoss​(SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights,
                                    LossReduce lossReduce,
                                    double delta)
        Huber loss function, used for robust regression. It is similar to both squared error loss and absolute difference loss,
        but is less sensitive to outliers than squared error loss.
        Huber loss implements:

        L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
        L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise

        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        delta - Loss function delta value: the threshold at which the loss changes from quadratic to linear (see formula above)
        Returns:
        output Huber loss (NUMERIC type)
      • huberLoss

        public SDVariable huberLoss​(String name,
                                    SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights,
                                    LossReduce lossReduce,
                                    double delta)
        Huber loss function, used for robust regression. It is similar to both squared error loss and absolute difference loss,
        but is less sensitive to outliers than squared error loss.
        Huber loss implements:

        L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
        L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise

        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        delta - Loss function delta value: the threshold at which the loss changes from quadratic to linear (see formula above)
        Returns:
        output Huber loss (NUMERIC type)
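
        A minimal usage sketch (same assumptions as the earlier examples; values and the delta of 1.0 are illustrative):

          SameDiff sd = SameDiff.create();
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[]{0f, 1f, 10f}));
          SDVariable pred  = sd.var("pred",  Nd4j.createFromArray(new float[]{0.2f, 1.5f, 2f}));
          double delta = 1.0;   // errors smaller than delta are penalized quadratically; larger errors linearly
          SDVariable loss = sd.loss().huberLoss("huber", label, pred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT, delta);
          System.out.println(loss.eval());
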
      • huberLoss

        public SDVariable huberLoss​(SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights,
                                    double delta)
        Huber loss function, used for robust regression. It is similar to both squared error loss and absolute difference loss,
        but is less sensitive to outliers than squared error loss.
        Huber loss implements:

        L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
        L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise

        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        delta - Loss function delta value: the threshold at which the loss changes from quadratic to linear (see formula above)
        Returns:
        output Huber loss (NUMERIC type)
      • huberLoss

        public SDVariable huberLoss​(String name,
                                    SDVariable label,
                                    SDVariable predictions,
                                    SDVariable weights,
                                    double delta)
        Huber loss function, used for robust regression. It is similar to both squared error loss and absolute difference loss,
        but is less sensitive to outliers than squared error loss.
        Huber loss implements:

        L = 0.5 * (label[i] - predictions[i])^2 if abs(label[i] - predictions[i]) < delta
        L = delta * abs(label[i] - predictions[i]) - 0.5 * delta^2 otherwise

        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        delta - Loss function delta value: the threshold at which the loss changes from quadratic to linear (see formula above)
        Returns:
        output Huber loss (NUMERIC type)
      • l2Loss

        public SDVariable l2Loss​(SDVariable var)
        L2 loss: 1/2 * sum(x^2)
        Parameters:
        var - Variable to calculate L2 loss of (NUMERIC type)
        Returns:
        output L2 loss (NUMERIC type)
      • l2Loss

        public SDVariable l2Loss​(String name,
                                 SDVariable var)
        L2 loss: 1/2 * sum(x^2)
        Parameters:
        name - Name for the output variable. May be null
        var - Variable to calculate L2 loss of (NUMERIC type)
        Returns:
        output L2 loss (NUMERIC type)
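
        A minimal usage sketch (same assumptions as the earlier examples; the values are illustrative):

          SameDiff sd = SameDiff.create();
          SDVariable w = sd.var("w", Nd4j.createFromArray(new float[]{1f, -2f, 3f}));
          SDVariable reg = sd.loss().l2Loss("l2", w);
          System.out.println(reg.eval());   // 0.5 * (1 + 4 + 9) = 7.0
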
      • logLoss

        public SDVariable logLoss​(SDVariable label,
                                  SDVariable predictions,
                                  SDVariable weights,
                                  LossReduce lossReduce,
                                  double epsilon)
        Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. Implements:
        -1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        epsilon - Epsilon value added inside the logarithms for numerical stability
        Returns:
        output Log loss (NUMERIC type)
      • logLoss

        public SDVariable logLoss​(String name,
                                  SDVariable label,
                                  SDVariable predictions,
                                  SDVariable weights,
                                  LossReduce lossReduce,
                                  double epsilon)
        Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. Implements:
        -1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        epsilon - Epsilon value added inside the logarithms for numerical stability
        Returns:
        output Log loss (NUMERIC type)
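
        A minimal usage sketch (same assumptions as the earlier examples; the probabilities and the epsilon of 1e-7 are illustrative):

          SameDiff sd = SameDiff.create();
          // Predictions are probabilities in (0, 1); epsilon guards against log(0)
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[]{1f, 0f, 1f}));
          SDVariable prob  = sd.var("prob",  Nd4j.createFromArray(new float[]{0.9f, 0.2f, 0.6f}));
          SDVariable loss = sd.loss().logLoss("logLoss", label, prob, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT, 1e-7);
          System.out.println(loss.eval());
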
      • logLoss

        public SDVariable logLoss​(SDVariable label,
                                  SDVariable predictions)
        Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. Implements:
        -1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        Returns:
        output Log loss (NUMERIC type)
      • logLoss

        public SDVariable logLoss​(String name,
                                  SDVariable label,
                                  SDVariable predictions)
        Log loss, i.e., binary cross entropy loss, usually used for binary multi-label classification. Implements:
        -1/numExamples * sum_i (labels[i] * log(predictions[i] + epsilon) + (1-labels[i]) * log(1-predictions[i] + epsilon))
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        Returns:
        output Log loss (NUMERIC type)
      • logPoisson

        public SDVariable logPoisson​(SDVariable label,
                                     SDVariable predictions,
                                     SDVariable weights,
                                     LossReduce lossReduce,
                                     boolean full)
        Log Poisson loss: a loss function used for training classifiers.
        Implements L = exp(c) - z * c, where c is log(predictions) and z is the labels.
        Parameters:
        label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
        predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        full - Boolean flag. true for logPoissonFull, false for logPoisson
        Returns:
        output Loss variable (NUMERIC type)
      • logPoisson

        public SDVariable logPoisson​(String name,
                                     SDVariable label,
                                     SDVariable predictions,
                                     SDVariable weights,
                                     LossReduce lossReduce,
                                     boolean full)
        Log Poisson loss: a loss function used for training classifiers.
        Implements L = exp(c) - z * c, where c is log(predictions) and z is the labels.
        Parameters:
        name - Name for the output variable. May be null
        label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
        predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        full - Boolean flag. true for logPoissonFull, false for logPoisson
        Returns:
        output Loss variable (NUMERIC type)
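
        A minimal usage sketch (same assumptions as the earlier examples; note that, as documented above, the predictions must already be in log space, and all values here are illustrative):

          SameDiff sd = SameDiff.create();
          // Predictions must already be log(x) of the actual predictions
          SDVariable label   = sd.var("label",   Nd4j.createFromArray(new float[]{1f, 0f, 1f}));
          SDVariable logPred = sd.var("logPred", Nd4j.createFromArray(new float[]{0.1f, -1.0f, 0.5f}));
          SDVariable loss = sd.loss().logPoisson("logPoisson", label, logPred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT, false);   // full = false, see the flag above
          System.out.println(loss.eval());
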
      • logPoisson

        public SDVariable logPoisson​(SDVariable label,
                                     SDVariable predictions,
                                     SDVariable weights,
                                     boolean full)
        Log Poisson loss: a loss function used for training classifiers.
        Implements L = exp(c) - z * c, where c is log(predictions) and z is the labels.
        Parameters:
        label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
        predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        full - Boolean flag. true for logPoissonFull, false for logPoisson
        Returns:
        output Loss variable (NUMERIC type)
      • logPoisson

        public SDVariable logPoisson​(String name,
                                     SDVariable label,
                                     SDVariable predictions,
                                     SDVariable weights,
                                     boolean full)
        Log Poisson loss: a loss function used for training classifiers.
        Implements L = exp(c) - z * c, where c is log(predictions) and z is the labels.
        Parameters:
        name - Name for the output variable. May be null
        label - Label array. Each value should be 0.0 or 1.0 (NUMERIC type)
        predictions - Predictions array (has to be log(x) of actual predictions) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        full - Boolean flag. true for logPoissonFull, false for logPoisson
        Returns:
        output Loss variable (NUMERIC type)
      • meanPairwiseSquaredError

        public SDVariable meanPairwiseSquaredError​(SDVariable label,
                                                   SDVariable predictions,
                                                   SDVariable weights,
                                                   LossReduce lossReduce)
        Mean pairwise squared error.
        MPWSE loss compares the pairwise differences within the predictions array to the corresponding pairwise differences within the labels array.
        For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is:
        [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output Loss variable, scalar output (NUMERIC type)
      • meanPairwiseSquaredError

        public SDVariable meanPairwiseSquaredError​(String name,
                                                   SDVariable label,
                                                   SDVariable predictions,
                                                   SDVariable weights,
                                                   LossReduce lossReduce)
        Mean pairwise squared error.
        MPWSE loss compares the pairwise differences within the predictions array to the corresponding pairwise differences within the labels array.
        For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is:
        [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output Loss variable, scalar output (NUMERIC type)
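
        A minimal usage sketch (same assumptions as the earlier examples; the [batchSize, nOut] shape and values are illustrative):

          SameDiff sd = SameDiff.create();
          // Shape [batchSize, nOut]; weights (if provided) must be a scalar or have shape [batchSize]
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[][]{{0f, 1f, 2f}, {1f, 1f, 1f}}));
          SDVariable pred  = sd.var("pred",  Nd4j.createFromArray(new float[][]{{0f, 2f, 4f}, {1f, 0f, 1f}}));
          SDVariable loss = sd.loss().meanPairwiseSquaredError("mpwse", label, pred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT);
          System.out.println(loss.eval());   // scalar output
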
      • meanPairwiseSquaredError

        public SDVariable meanPairwiseSquaredError​(SDVariable label,
                                                   SDVariable predictions,
                                                   SDVariable weights)
        Mean pairwise squared error.
        MPWSE loss compares the pairwise differences within the predictions array to the corresponding pairwise differences within the labels array.
        For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is:
        [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
        Returns:
        output Loss variable, scalar output (NUMERIC type)
      • meanPairwiseSquaredError

        public SDVariable meanPairwiseSquaredError​(String name,
                                                   SDVariable label,
                                                   SDVariable predictions,
                                                   SDVariable weights)
        Mean pairwise squared error.
        MPWSE loss compares the pairwise differences within the predictions array to the corresponding pairwise differences within the labels array.
        For example, if predictions = [p0, p1, p2] and labels = [l0, l1, l2], then MPWSE is:
        [((p0-p1) - (l0-l1))^2 + ((p0-p2) - (l0-l2))^2 + ((p1-p2) - (l1-l2))^2] / 3
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used. Must be either null, scalar, or have shape [batchSize] (NUMERIC type)
        Returns:
        output Loss variable, scalar output (NUMERIC type)
      • meanSquaredError

        public SDVariable meanSquaredError​(SDVariable label,
                                           SDVariable predictions,
                                           SDVariable weights,
                                           LossReduce lossReduce)
        Mean squared error loss function. Implements (label[i] - prediction[i])^2 - i.e., squared error on a per-element basis.
        When averaged (using LossReduce#MEAN_BY_WEIGHT or LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT (the default))
        this is the mean squared error loss function.
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output Loss variable (NUMERIC type)
      • meanSquaredError

        public SDVariable meanSquaredError​(String name,
                                           SDVariable label,
                                           SDVariable predictions,
                                           SDVariable weights,
                                           LossReduce lossReduce)
        Mean squared error loss function. Implements (label[i] - prediction[i])^2 - i.e., squared error on a per-element basis.
        When averaged (using LossReduce#MEAN_BY_WEIGHT or LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT (the default))
        this is the mean squared error loss function.
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        Returns:
        output Loss variable (NUMERIC type)
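
        A minimal usage sketch (same assumptions as the earlier examples; the values are illustrative):

          SameDiff sd = SameDiff.create();
          SDVariable label = sd.var("label", Nd4j.createFromArray(new float[]{1f, 2f, 3f}));
          SDVariable pred  = sd.var("pred",  Nd4j.createFromArray(new float[]{1.5f, 2f, 2f}));
          SDVariable mse = sd.loss().meanSquaredError("mse", label, pred, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT);
          System.out.println(mse.eval());   // mean of the per-element squared errors
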
      • meanSquaredError

        public SDVariable meanSquaredError​(SDVariable label,
                                           SDVariable predictions,
                                           SDVariable weights)
        Mean squared error loss function. Implements (label[i] - prediction[i])^2 - i.e., squared error on a per-element basis.
        When averaged (using LossReduce#MEAN_BY_WEIGHT or LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT (the default))
        this is the mean squared error loss function.
        Parameters:
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • meanSquaredError

        public SDVariable meanSquaredError​(String name,
                                           SDVariable label,
                                           SDVariable predictions,
                                           SDVariable weights)
        Mean squared error loss function. Implements (label[i] - prediction[i])^2 - i.e., squared error on a per-element basis.
        When averaged (using LossReduce#MEAN_BY_WEIGHT or LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT (the default))
        this is the mean squared error loss function.
        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictions - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • sigmoidCrossEntropy

        public SDVariable sigmoidCrossEntropy​(SDVariable label,
                                              SDVariable predictionLogits,
                                              SDVariable weights,
                                              LossReduce lossReduce,
                                              double labelSmoothing)
        Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
        and implements the binary cross entropy loss function. This implementation is numerically more stable than using
        a separate sigmoid activation function followed by a log loss (binary cross entropy) loss function.
        Implements:
        -1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
        though this is done in a mathematically equivalent but more numerically stable form.

        When label smoothing is > 0, the following label smoothing is used:

        numClasses = labels.size(1);
        label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing

        Parameters:
        label - Label array (NUMERIC type)
        predictionLogits - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        labelSmoothing - Label smoothing value. Default value: 0
        Returns:
        output Loss variable (NUMERIC type)
      • sigmoidCrossEntropy

        public SDVariable sigmoidCrossEntropy​(String name,
                                              SDVariable label,
                                              SDVariable predictionLogits,
                                              SDVariable weights,
                                              LossReduce lossReduce,
                                              double labelSmoothing)
        Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
        and implements the binary cross entropy loss function. This implementation is numerically more stable than using
        a separate sigmoid activation function followed by a log loss (binary cross entropy) loss function.
        Implements:
        -1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
        though this is done in a mathematically equivalent but more numerically stable form.

        When label smoothing is > 0, the following label smoothing is used:

        numClasses = labels.size(1);
        label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing

        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictionLogits - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        labelSmoothing - Label smoothing value. Default value: 0
        Returns:
        output Loss variable (NUMERIC type)
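
        A minimal usage sketch (same assumptions as the earlier examples; the logits and labels below are illustrative):

          SameDiff sd = SameDiff.create();
          // predictionLogits are raw pre-sigmoid scores, not probabilities
          SDVariable label  = sd.var("label",  Nd4j.createFromArray(new float[][]{{1f, 0f}, {0f, 1f}}));
          SDVariable logits = sd.var("logits", Nd4j.createFromArray(new float[][]{{2.0f, -1.0f}, {-0.5f, 1.5f}}));
          SDVariable loss = sd.loss().sigmoidCrossEntropy("sigmoidXent", label, logits, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT, 0.0);   // labelSmoothing = 0 (the default)
          System.out.println(loss.eval());
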
      • sigmoidCrossEntropy

        public SDVariable sigmoidCrossEntropy​(SDVariable label,
                                              SDVariable predictionLogits,
                                              SDVariable weights)
        Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
        and implements the binary cross entropy loss function. This implementation is numerically more stable than using
        a separate sigmoid activation function followed by a log loss (binary cross entropy) loss function.
        Implements:
        -1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
        though this is done in a mathematically equivalent but more numerically stable form.

        When label smoothing is > 0, the following label smoothing is used:

        numClasses = labels.size(1);
        label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing

        Parameters:
        label - Label array (NUMERIC type)
        predictionLogits - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • sigmoidCrossEntropy

        public SDVariable sigmoidCrossEntropy​(String name,
                                              SDVariable label,
                                              SDVariable predictionLogits,
                                              SDVariable weights)
        Sigmoid cross entropy: applies the sigmoid activation function to the input logits (the "pre-sigmoid predictions")
        and implements the binary cross entropy loss function. This implementation is numerically more stable than using
        a separate sigmoid activation function followed by a log loss (binary cross entropy) loss function.
        Implements:
        -1/numExamples * sum_i (labels[i] * log(sigmoid(logits[i])) + (1-labels[i]) * log(1-sigmoid(logits[i])))
        though this is done in a mathematically equivalent but more numerically stable form.

        When label smoothing is > 0, the following label smoothing is used:

        numClasses = labels.size(1);
        label = (1.0 - labelSmoothing) * label + 0.5 * labelSmoothing

        Parameters:
        name - Name for the output variable. May be null
        label - Label array (NUMERIC type)
        predictionLogits - Predictions array (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • softmaxCrossEntropy

        public SDVariable softmaxCrossEntropy​(SDVariable oneHotLabels,
                                              SDVariable logitPredictions,
                                              SDVariable weights,
                                              LossReduce lossReduce,
                                              double labelSmoothing)
        Applies the softmax activation function to the input, then implements the multi-class cross entropy loss:
        -sum_c label[c] * log(p[c]), where p = softmax(logits)
        If LossReduce#NONE is used, the returned loss has shape [numExamples] for [numExamples, numClasses] predictions/labels;
        otherwise, the output is a scalar.


        When label smoothing is > 0, the following label smoothing is used:


        numClasses = labels.size(1);
        oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

        Parameters:
        oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
        logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        labelSmoothing - Label smoothing value. Default value: 0
        Returns:
        output Loss variable (NUMERIC type)
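
        A minimal usage sketch (same assumptions as the earlier examples; the one-hot labels and logits below are illustrative):

          SameDiff sd = SameDiff.create();
          // One-hot labels with the same [minibatch, numClasses] shape as the pre-softmax logits
          SDVariable oneHot = sd.var("oneHotLabels", Nd4j.createFromArray(new float[][]{{1f, 0f, 0f}, {0f, 0f, 1f}}));
          SDVariable logits = sd.var("logits",       Nd4j.createFromArray(new float[][]{{2f, 1f, 0f}, {0f, 1f, 3f}}));
          SDVariable loss = sd.loss().softmaxCrossEntropy("softmaxXent", oneHot, logits, null,
                  LossReduce.MEAN_BY_NONZERO_WEIGHT_COUNT, 0.0);   // labelSmoothing = 0 (the default)
          System.out.println(loss.eval());
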
      • softmaxCrossEntropy

        public SDVariable softmaxCrossEntropy​(String name,
                                              SDVariable oneHotLabels,
                                              SDVariable logitPredictions,
                                              SDVariable weights,
                                              LossReduce lossReduce,
                                              double labelSmoothing)
        Applies the softmax activation function to the input, then implements the multi-class cross entropy loss:
        -sum_c label[c] * log(p[c]), where p = softmax(logits)
        If LossReduce#NONE is used, the returned loss has shape [numExamples] for [numExamples, numClasses] predictions/labels;
        otherwise, the output is a scalar.


        When label smoothing is > 0, the following label smoothing is used:


        numClasses = labels.size(1);
        oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

        Parameters:
        name - Name for the output variable. May be null
        oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
        logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        lossReduce - Reduction type for the loss. See LossReduce for more details. Default: LossReduce#MEAN_BY_NONZERO_WEIGHT_COUNT
        labelSmoothing - Label smoothing value. Default value: 0
        Returns:
        output Loss variable (NUMERIC type)
      • softmaxCrossEntropy

        public SDVariable softmaxCrossEntropy​(SDVariable oneHotLabels,
                                              SDVariable logitPredictions,
                                              SDVariable weights)
        Applies the softmax activation function to the input, then implements the multi-class cross entropy loss:
        -sum_c label[c] * log(p[c]), where p = softmax(logits)
        If LossReduce#NONE is used, the returned loss has shape [numExamples] for [numExamples, numClasses] predictions/labels;
        otherwise, the output is a scalar.


        When label smoothing is > 0, the following label smoothing is used:


        numClasses = labels.size(1);
        oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

        Parameters:
        oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
        logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • softmaxCrossEntropy

        public SDVariable softmaxCrossEntropy​(String name,
                                              SDVariable oneHotLabels,
                                              SDVariable logitPredictions,
                                              SDVariable weights)
        Applies the softmax activation function to the input, then implements the multi-class cross entropy loss:
        -sum_c label[c] * log(p[c]), where p = softmax(logits)
        If LossReduce#NONE is used, the returned loss has shape [numExamples] for [numExamples, numClasses] predictions/labels;
        otherwise, the output is a scalar.


        When label smoothing is > 0, the following label smoothing is used:


        numClasses = labels.size(1);
        oneHotLabel = (1.0 - labelSmoothing) * oneHotLabels + labelSmoothing/numClasses

        Parameters:
        name - Name for the output variable. May be null
        oneHotLabels - Label array. Should be one-hot per example and same shape as predictions (for example, [mb, nOut]) (NUMERIC type)
        logitPredictions - Predictions array (pre-softmax) (NUMERIC type)
        weights - Weights array. May be null. If null, a weight of 1.0 is used (NUMERIC type)
        Returns:
        output Loss variable (NUMERIC type)
      • sparseSoftmaxCrossEntropy

        public SDVariable sparseSoftmaxCrossEntropy​(SDVariable logits,
                                                    SDVariable labels)
        As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce), but the labels variable
        is represented as an integer array instead of the equivalent one-hot array.
        That is, if the logits have rank N, the labels have rank N-1.
        Parameters:
        logits - Logits array ("pre-softmax activations") (NUMERIC type)
        labels - Labels array. Must be an integer type. (INT type)
        Returns:
        output Softmax cross entropy (NUMERIC type)
      • sparseSoftmaxCrossEntropy

        public SDVariable sparseSoftmaxCrossEntropy​(String name,
                                                    SDVariable logits,
                                                    SDVariable labels)
        As per softmaxCrossEntropy(String, SDVariable, SDVariable, LossReduce), but the labels variable
        is represented as an integer array instead of the equivalent one-hot array.
        That is, if the logits have rank N, the labels have rank N-1.
        Parameters:
        name - Name for the output variable. May be null
        logits - Logits array ("pre-softmax activations") (NUMERIC type)
        labels - Labels array. Must be an integer type. (INT type)
        Returns:
        output Softmax cross entropy (NUMERIC type)
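
        A minimal usage sketch (same assumptions as the earlier examples; it mirrors the softmaxCrossEntropy sketch above, but with integer class indices instead of one-hot labels, and all values are illustrative):

          SameDiff sd = SameDiff.create();
          // Labels are class indices of rank N-1 relative to the rank-N logits
          SDVariable logits = sd.var("logits", Nd4j.createFromArray(new float[][]{{2f, 1f, 0f}, {0f, 1f, 3f}}));
          SDVariable labels = sd.constant("labels", Nd4j.createFromArray(new int[]{0, 2}));
          SDVariable loss = sd.loss().sparseSoftmaxCrossEntropy("sparseXent", logits, labels);
          System.out.println(loss.eval());
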