Package onnx

Class OnnxMl.TrainingInfoProto.Builder

  • All Implemented Interfaces:
    Cloneable, OnnxMl.TrainingInfoProtoOrBuilder, org.nd4j.shade.protobuf.Message.Builder, org.nd4j.shade.protobuf.MessageLite.Builder, org.nd4j.shade.protobuf.MessageLiteOrBuilder, org.nd4j.shade.protobuf.MessageOrBuilder
  • Enclosing class:
    OnnxMl.TrainingInfoProto

    public static final class OnnxMl.TrainingInfoProto.Builder
    extends org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
    implements OnnxMl.TrainingInfoProtoOrBuilder
     Training information
     TrainingInfoProto stores information for training a model.
     In particular, this defines two functionalities: an initialization-step
     and a training-algorithm-step. Initialization resets the model
     back to its original state, as if no training had been performed.
     The training algorithm improves the model based on input data.
     The semantics of the initialization-step is that the initializers
     in ModelProto.graph and in TrainingInfoProto.algorithm are first
     initialized as specified by the initializers in the graph, and then
     updated by the "initialization_binding" in every instance in
     ModelProto.training_info.
     The field "algorithm" defines a computation graph which represents a
     training algorithm's step. After the execution of a
     TrainingInfoProto.algorithm, the initializers specified by "update_binding"
     may be immediately updated. If the targeted training algorithm contains
     consecutive update steps (such as block coordinate descent methods),
     the user needs to create a TrainingInfoProto for each step.
     
    Protobuf type onnx.TrainingInfoProto
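    For orientation, a minimal usage sketch (assuming the standard protobuf-generated
    newBuilder() factory on OnnxMl.TrainingInfoProto and OnnxMl.GraphProto.getDefaultInstance();
    the binding fields are illustrated with the addInitializationBinding/addUpdateBinding
    methods further below):

        // Build a TrainingInfoProto with placeholder (empty) graphs.
        OnnxMl.TrainingInfoProto trainingInfo = OnnxMl.TrainingInfoProto.newBuilder()
            .setInitialization(OnnxMl.GraphProto.getDefaultInstance()) // initialization-step graph
            .setAlgorithm(OnnxMl.GraphProto.getDefaultInstance())      // training-algorithm-step graph
            .build();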
    • Method Detail

      • getDescriptor

        public static final org.nd4j.shade.protobuf.Descriptors.Descriptor getDescriptor()
      • internalGetFieldAccessorTable

        protected org.nd4j.shade.protobuf.GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
        Specified by:
        internalGetFieldAccessorTable in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • clear

        public OnnxMl.TrainingInfoProto.Builder clear()
        Specified by:
        clear in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        clear in interface org.nd4j.shade.protobuf.MessageLite.Builder
        Overrides:
        clear in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • getDescriptorForType

        public org.nd4j.shade.protobuf.Descriptors.Descriptor getDescriptorForType()
        Specified by:
        getDescriptorForType in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        getDescriptorForType in interface org.nd4j.shade.protobuf.MessageOrBuilder
        Overrides:
        getDescriptorForType in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • getDefaultInstanceForType

        public OnnxMl.TrainingInfoProto getDefaultInstanceForType()
        Specified by:
        getDefaultInstanceForType in interface org.nd4j.shade.protobuf.MessageLiteOrBuilder
        Specified by:
        getDefaultInstanceForType in interface org.nd4j.shade.protobuf.MessageOrBuilder
      • build

        public OnnxMl.TrainingInfoProto build()
        Specified by:
        build in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        build in interface org.nd4j.shade.protobuf.MessageLite.Builder
      • buildPartial

        public OnnxMl.TrainingInfoProto buildPartial()
        Specified by:
        buildPartial in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        buildPartial in interface org.nd4j.shade.protobuf.MessageLite.Builder
      • clone

        public OnnxMl.TrainingInfoProto.Builder clone()
        Specified by:
        clone in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        clone in interface org.nd4j.shade.protobuf.MessageLite.Builder
        Overrides:
        clone in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • clearField

        public OnnxMl.TrainingInfoProto.Builder clearField​(org.nd4j.shade.protobuf.Descriptors.FieldDescriptor field)
        Specified by:
        clearField in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        clearField in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • clearOneof

        public OnnxMl.TrainingInfoProto.Builder clearOneof​(org.nd4j.shade.protobuf.Descriptors.OneofDescriptor oneof)
        Specified by:
        clearOneof in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        clearOneof in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • setRepeatedField

        public OnnxMl.TrainingInfoProto.Builder setRepeatedField​(org.nd4j.shade.protobuf.Descriptors.FieldDescriptor field,
                                                                 int index,
                                                                 Object value)
        Specified by:
        setRepeatedField in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        setRepeatedField in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • addRepeatedField

        public OnnxMl.TrainingInfoProto.Builder addRepeatedField​(org.nd4j.shade.protobuf.Descriptors.FieldDescriptor field,
                                                                 Object value)
        Specified by:
        addRepeatedField in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        addRepeatedField in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • isInitialized

        public final boolean isInitialized()
        Specified by:
        isInitialized in interface org.nd4j.shade.protobuf.MessageLiteOrBuilder
        Overrides:
        isInitialized in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • mergeFrom

        public OnnxMl.TrainingInfoProto.Builder mergeFrom​(org.nd4j.shade.protobuf.CodedInputStream input,
                                                          org.nd4j.shade.protobuf.ExtensionRegistryLite extensionRegistry)
                                                   throws IOException
        Specified by:
        mergeFrom in interface org.nd4j.shade.protobuf.Message.Builder
        Specified by:
        mergeFrom in interface org.nd4j.shade.protobuf.MessageLite.Builder
        Overrides:
        mergeFrom in class org.nd4j.shade.protobuf.AbstractMessage.Builder<OnnxMl.TrainingInfoProto.Builder>
        Throws:
        IOException
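        A hedged parsing sketch (assumes a byte[] named bytes holding a serialized
        onnx.TrainingInfoProto and the standard CodedInputStream/ExtensionRegistryLite
        factories of the shaded protobuf runtime; mergeFrom may throw IOException):

            OnnxMl.TrainingInfoProto.Builder builder = OnnxMl.TrainingInfoProto.newBuilder();
            org.nd4j.shade.protobuf.CodedInputStream input =
                org.nd4j.shade.protobuf.CodedInputStream.newInstance(bytes);
            // Merge the wire-format data into the builder, then freeze it into a message.
            builder.mergeFrom(input, org.nd4j.shade.protobuf.ExtensionRegistryLite.getEmptyRegistry());
            OnnxMl.TrainingInfoProto proto = builder.build();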
      • hasInitialization

        public boolean hasInitialization()
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
        Specified by:
        hasInitialization in interface OnnxMl.TrainingInfoProtoOrBuilder
        Returns:
        Whether the initialization field is set.
      • getInitialization

        public OnnxMl.GraphProto getInitialization()
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
        Specified by:
        getInitialization in interface OnnxMl.TrainingInfoProtoOrBuilder
        Returns:
        The initialization.
      • setInitialization

        public OnnxMl.TrainingInfoProto.Builder setInitialization​(OnnxMl.GraphProto value)
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
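        As a sketch of typical use (assuming the standard generated builders for
        OnnxMl.GraphProto and OnnxMl.NodeProto; tensor names are illustrative), an
        initialization graph with a single RandomNormal node whose output is later
        bound to an initializer via "initialization_binding":

            OnnxMl.GraphProto init = OnnxMl.GraphProto.newBuilder()
                .setName("initialization")
                .addNode(OnnxMl.NodeProto.newBuilder()
                    .setOpType("RandomNormal")   // random initializer op
                    .addOutput("W_initial")      // bound to initializer "W" elsewhere
                    /* dtype/shape attributes omitted for brevity */)
                .build();
            builder.setInitialization(init);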
      • setInitialization

        public OnnxMl.TrainingInfoProto.Builder setInitialization​(OnnxMl.GraphProto.Builder builderForValue)
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
      • mergeInitialization

        public OnnxMl.TrainingInfoProto.Builder mergeInitialization​(OnnxMl.GraphProto value)
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
      • clearInitialization

        public OnnxMl.TrainingInfoProto.Builder clearInitialization()
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
      • getInitializationBuilder

        public OnnxMl.GraphProto.Builder getInitializationBuilder()
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
      • getInitializationOrBuilder

        public OnnxMl.GraphProtoOrBuilder getInitializationOrBuilder()
         This field describes a graph to compute the initial tensors
         upon starting the training process. Initialization graph has no input
         and can have multiple outputs. Usually, trainable tensors in neural
         networks are randomly initialized. To achieve that, for each tensor,
         the user can put a random number operator such as RandomNormal or
         RandomUniform in TrainingInfoProto.initialization.node and assign its
         random output to the specific tensor using "initialization_binding".
         This graph can also set the initializers in "algorithm" in the same
         TrainingInfoProto; a use case is resetting the number of training
         iterations to zero.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Thus, no initializer would be changed by default.
         
        .onnx.GraphProto initialization = 1;
        Specified by:
        getInitializationOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilder
      • hasAlgorithm

        public boolean hasAlgorithm()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
        Specified by:
        hasAlgorithm in interface OnnxMl.TrainingInfoProtoOrBuilder
        Returns:
        Whether the algorithm field is set.
      • getAlgorithm

        public OnnxMl.GraphProto getAlgorithm()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
        Specified by:
        getAlgorithm in interface OnnxMl.TrainingInfoProtoOrBuilder
        Returns:
        The algorithm.
      • setAlgorithm

        public OnnxMl.TrainingInfoProto.Builder setAlgorithm​(OnnxMl.GraphProto value)
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
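        Following the combination example above, a sketch of a one-node "algorithm"
        graph (assuming the standard generated OnnxMl.NodeProto builder; the names
        tensor_d, delta and tensor_e are illustrative):

            OnnxMl.GraphProto algorithm = OnnxMl.GraphProto.newBuilder()
                .setName("algorithm")
                .addNode(OnnxMl.NodeProto.newBuilder()
                    .setOpType("Add")
                    .addInput("tensor_d")   // produced by the inference graph
                    .addInput("delta")      // e.g. an initializer owned by this graph
                    .addOutput("tensor_e")) // may be assigned back via "update_binding"
                .build();
            builder.setAlgorithm(algorithm);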
      • setAlgorithm

        public OnnxMl.TrainingInfoProto.Builder setAlgorithm​(OnnxMl.GraphProto.Builder builderForValue)
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
      • mergeAlgorithm

        public OnnxMl.TrainingInfoProto.Builder mergeAlgorithm​(OnnxMl.GraphProto value)
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
      • clearAlgorithm

        public OnnxMl.TrainingInfoProto.Builder clearAlgorithm()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
      • getAlgorithmBuilder

        public OnnxMl.GraphProto.Builder getAlgorithmBuilder()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
      • getAlgorithmOrBuilder

        public OnnxMl.GraphProtoOrBuilder getAlgorithmOrBuilder()
         This field represents a training algorithm step. Given required inputs,
         it computes outputs to update initializers in its own or inference graph's
         initializer lists. In general, this field contains loss node, gradient node,
         optimizer node, increment of iteration count.
         An execution of the training algorithm step is performed by executing the
         graph obtained by combining the inference graph (namely "ModelProto.graph")
         and the "algorithm" graph. That is, the actual the actual
         input/initializer/output/node/value_info/sparse_initializer list of
         the training graph is the concatenation of
         "ModelProto.graph.input/initializer/output/node/value_info/sparse_initializer"
         and "algorithm.input/initializer/output/node/value_info/sparse_initializer"
         in that order. This combined graph must satisfy the normal ONNX conditions.
         Now, let's provide a visualization of graph combination for clarity.
         Let the inference graph (i.e., "ModelProto.graph") be
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d
         and the "algorithm" graph be
            tensor_d -> Add -> tensor_e
         The combination process results in
            tensor_a, tensor_b -> MatMul -> tensor_c -> Sigmoid -> tensor_d -> Add -> tensor_e
         Notice that an input of a node in the "algorithm" graph may reference the
         output of a node in the inference graph (but not the other way round). Also, inference
         nodes cannot reference inputs of "algorithm". With these restrictions, the inference graph
         can always be run independently without training information.
         By default, this field is an empty graph and its evaluation does not
         produce any output. Evaluating the default training step never
         updates any initializers.
         
        .onnx.GraphProto algorithm = 2;
        Specified by:
        getAlgorithmOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilder
      • getInitializationBindingList

        public List<OnnxMl.StringStringEntryProto> getInitializationBindingList()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
        Specified by:
        getInitializationBindingList in interface OnnxMl.TrainingInfoProtoOrBuilder
      • getInitializationBindingCount

        public int getInitializationBindingCount()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
        Specified by:
        getInitializationBindingCount in interface OnnxMl.TrainingInfoProtoOrBuilder
      • getInitializationBinding

        public OnnxMl.StringStringEntryProto getInitializationBinding​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
        Specified by:
        getInitializationBinding in interface OnnxMl.TrainingInfoProtoOrBuilder
      • setInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder setInitializationBinding​(int index,
                                                                         OnnxMl.StringStringEntryProto value)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • setInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder setInitializationBinding​(int index,
                                                                         OnnxMl.StringStringEntryProto.Builder builderForValue)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • addInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder addInitializationBinding​(OnnxMl.StringStringEntryProto value)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
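        A hedged sketch of adding one binding (assumes setKey/setValue on the generated
        OnnxMl.StringStringEntryProto.Builder; "W" and "W_initial" are illustrative names):

            // Assign the initialization output "W_initial" to initializer "W".
            builder.addInitializationBinding(OnnxMl.StringStringEntryProto.newBuilder()
                .setKey("W")             // initializer to be (re)initialized
                .setValue("W_initial")); // output of the "initialization" graph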
      • addInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder addInitializationBinding​(int index,
                                                                         OnnxMl.StringStringEntryProto value)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • addInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder addInitializationBinding​(OnnxMl.StringStringEntryProto.Builder builderForValue)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • addInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder addInitializationBinding​(int index,
                                                                         OnnxMl.StringStringEntryProto.Builder builderForValue)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • addAllInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder addAllInitializationBinding​(Iterable<? extends OnnxMl.StringStringEntryProto> values)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • clearInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder clearInitializationBinding()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • removeInitializationBinding

        public OnnxMl.TrainingInfoProto.Builder removeInitializationBinding​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBindingBuilder

        public OnnxMl.StringStringEntryProto.Builder getInitializationBindingBuilder​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBindingOrBuilder

        public OnnxMl.StringStringEntryProtoOrBuilder getInitializationBindingOrBuilder​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
        Specified by:
        getInitializationBindingOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilder
      • getInitializationBindingOrBuilderList

        public List<? extends OnnxMl.StringStringEntryProtoOrBuilder> getInitializationBindingOrBuilderList()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
        Specified by:
        getInitializationBindingOrBuilderList in interface OnnxMl.TrainingInfoProtoOrBuilder
      • addInitializationBindingBuilder

        public OnnxMl.StringStringEntryProto.Builder addInitializationBindingBuilder()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • addInitializationBindingBuilder

        public OnnxMl.StringStringEntryProto.Builder addInitializationBindingBuilder​(int index)
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getInitializationBindingBuilderList

        public List<OnnxMl.StringStringEntryProto.Builder> getInitializationBindingBuilderList()
         This field specifies the bindings from the outputs of "initialization" to
         some initializers in "ModelProto.graph.initializer" and
         the "algorithm.initializer" in the same TrainingInfoProto.
         See "update_binding" below for details.
         By default, this field is empty and no initializer would be changed
         by the execution of "initialization".
         
        repeated .onnx.StringStringEntryProto initialization_binding = 3;
      • getUpdateBindingList

        public List<OnnxMl.StringStringEntryProto> getUpdateBindingList()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies some behaviors
         as described below.
          1. We have only unique keys in all "update_binding"s so that two
             variables may not have the same name. This ensures that one
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initializer_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.graph),
         and number of training iterations (in TrainingInfoProto.graph).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
        Specified by:
        getUpdateBindingList in interface OnnxMl.TrainingInfoProtoOrBuilder
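        For inspection, a small sketch that lists the configured bindings (assumes
        getKey()/getValue() on the generated OnnxMl.StringStringEntryProto):

            for (OnnxMl.StringStringEntryProto binding : builder.getUpdateBindingList()) {
                // key = mutable variable, value = algorithm output assigned to it
                System.out.println(binding.getKey() + " <- " + binding.getValue());
            }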
      • getUpdateBindingCount

        public int getUpdateBindingCount()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies some behaviors
         as described below.
          1. We have only unique keys in all "update_binding"s so that two
             variables may not have the same name. This ensures that one
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initializer_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.graph),
         and number of training iterations (in TrainingInfoProto.graph).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
        Specified by:
        getUpdateBindingCount in interface OnnxMl.TrainingInfoProtoOrBuilder
      • getUpdateBinding

        public OnnxMl.StringStringEntryProto getUpdateBinding​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies some behaviors
         as described below.
          1. We have only unique keys in all "update_binding"s so that two
             variables may not have the same name. This ensures that one
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initializer_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.graph),
         and number of training iterations (in TrainingInfoProto.graph).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
        Specified by:
        getUpdateBinding in interface OnnxMl.TrainingInfoProtoOrBuilder
      • setUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder setUpdateBinding​(int index,
                                                                 OnnxMl.StringStringEntryProto value)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies some behaviors
         as described below.
          1. We have only unique keys in all "update_binding"s so that two
             variables may not have the same name. This ensures that one
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initializer_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.graph),
         and number of training iterations (in TrainingInfoProto.graph).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • setUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder setUpdateBinding​(int index,
                                                                 OnnxMl.StringStringEntryProto.Builder builderForValue)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies some behaviors
         as described below.
          1. We have only unique keys in all "update_binding"s so that two
             variables may not have the same name. This ensures that one
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initializer_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.graph),
         and number of training iterations (in TrainingInfoProto.graph).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • addUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder addUpdateBinding​(OnnxMl.StringStringEntryProto value)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
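         A minimal sketch of the pattern described above, assuming the training graph
         produces an output named "W_new" that holds W - r * g; both "W" and "W_new"
         are placeholder names:

           // Bind the algorithm output "W_new" to the initializer "W", so that after
           // one training step the runtime assigns W_new back to W.
           OnnxMl.StringStringEntryProto binding = OnnxMl.StringStringEntryProto.newBuilder()
                   .setKey("W")
                   .setValue("W_new")
                   .build();
           OnnxMl.TrainingInfoProto.Builder trainingInfo = OnnxMl.TrainingInfoProto.newBuilder()
                   .addUpdateBinding(binding);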
      • addUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder addUpdateBinding​(int index,
                                                                 OnnxMl.StringStringEntryProto value)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • addUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder addUpdateBinding​(OnnxMl.StringStringEntryProto.Builder builderForValue)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • addUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder addUpdateBinding​(int index,
                                                                 OnnxMl.StringStringEntryProto.Builder builderForValue)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • addAllUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder addAllUpdateBinding​(Iterable<? extends OnnxMl.StringStringEntryProto> values)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
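         When several tensors are trained at once, the bindings can be collected first
         and appended in one call. The helper below is a sketch only; the method name
         addBindings and the map contents are hypothetical, and the usual generated
         setKey/setValue accessors on OnnxMl.StringStringEntryProto.Builder are assumed:

           // Convert variable-name -> output-name pairs into StringStringEntryProto
           // entries and append them all to update_binding in one call.
           static void addBindings(OnnxMl.TrainingInfoProto.Builder trainingInfo,
                                   java.util.Map<String, String> bindings) {
               java.util.List<OnnxMl.StringStringEntryProto> entries = new java.util.ArrayList<>();
               for (java.util.Map.Entry<String, String> e : bindings.entrySet()) {
                   entries.add(OnnxMl.StringStringEntryProto.newBuilder()
                           .setKey(e.getKey())       // initializer to update, e.g. "W"
                           .setValue(e.getValue())   // output holding its new value, e.g. "W_new"
                           .build());
               }
               trainingInfo.addAllUpdateBinding(entries);
           }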
      • clearUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder clearUpdateBinding()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • removeUpdateBinding

        public OnnxMl.TrainingInfoProto.Builder removeUpdateBinding​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
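         A sketch of removal by key rather than by raw index, assuming a builder variable
         named trainingInfo and the standard generated accessors getUpdateBindingCount()
         and getUpdateBinding(int) declared by OnnxMl.TrainingInfoProtoOrBuilder; the key
         "W" is a placeholder:

           // Drop the binding whose key is "W", if present; later entries shift left.
           for (int i = 0; i < trainingInfo.getUpdateBindingCount(); i++) {
               if ("W".equals(trainingInfo.getUpdateBinding(i).getKey())) {
                   trainingInfo.removeUpdateBinding(i);
                   break;
               }
           }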
      • getUpdateBindingBuilder

        public OnnxMl.StringStringEntryProto.Builder getUpdateBindingBuilder​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • getUpdateBindingOrBuilder

        public OnnxMl.StringStringEntryProtoOrBuilder getUpdateBindingOrBuilder​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
        Specified by:
        getUpdateBindingOrBuilder in interface OnnxMl.TrainingInfoProtoOrBuilder
      • getUpdateBindingOrBuilderList

        public List<? extends OnnxMl.StringStringEntryProtoOrBuilder> getUpdateBindingOrBuilderList()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
        Specified by:
        getUpdateBindingOrBuilderList in interface OnnxMl.TrainingInfoProtoOrBuilder
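         A sketch of a read-only pass over the configured bindings through the OrBuilder
         list, assuming a builder variable named trainingInfo:

           // Print each "initializer <- algorithm output" pair without creating
           // nested builders for the entries.
           for (OnnxMl.StringStringEntryProtoOrBuilder b : trainingInfo.getUpdateBindingOrBuilderList()) {
               System.out.println(b.getKey() + " <- " + b.getValue());
           }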
      • addUpdateBindingBuilder

        public OnnxMl.StringStringEntryProto.Builder addUpdateBindingBuilder()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
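         A sketch that populates the returned sub-builder directly, assuming chained
         setKey/setValue on OnnxMl.StringStringEntryProto.Builder; the names
         "iteration_count" and "iteration_count_out" are placeholders:

           // addUpdateBindingBuilder() appends an empty entry and returns its builder,
           // which is then filled in place.
           trainingInfo.addUpdateBindingBuilder()
                   .setKey("iteration_count")
                   .setValue("iteration_count_out");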
      • addUpdateBindingBuilder

        public OnnxMl.StringStringEntryProto.Builder addUpdateBindingBuilder​(int index)
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
      • getUpdateBindingBuilderList

        public List<OnnxMl.StringStringEntryProto.Builder> getUpdateBindingBuilderList()
         Gradient-based training is usually an iterative procedure. In one gradient
         descent iteration, we apply
         x = x - r * g
         where "x" is the optimized tensor, "r" stands for learning rate, and "g" is
         gradient of "x" with respect to a chosen loss. To avoid adding assignments
         into the training graph, we split the update equation into
         y = x - r * g
         x = y
         The user needs to save "y = x - r * g" into TrainingInfoProto.algorithm. To
         tell that "y" should be assigned to "x", the field "update_binding" may
         contain a key-value pair of strings, "x" (key of StringStringEntryProto)
         and "y" (value of StringStringEntryProto).
         For a neural network with multiple trainable (mutable) tensors, there can
         be multiple key-value pairs in "update_binding".
         The initializers that appear as keys in "update_binding" are considered
         mutable variables. This implies the behaviors
         described below.
          1. Keys are unique across all "update_binding"s, so no two
             entries may bind the same variable name. This ensures that each
             variable is assigned at most once.
          2. The keys must appear in names of "ModelProto.graph.initializer" or
             "TrainingInfoProto.algorithm.initializer".
          3. The values must be output names of "algorithm" or "ModelProto.graph.output".
          4. Mutable variables are initialized to the value specified by the
             corresponding initializer, and then potentially updated by
             "initialization_binding"s and "update_binding"s in "TrainingInfoProto"s.
         This field usually contains the names of trainable tensors
         (in ModelProto.graph), optimizer states such as momentums in advanced
         stochastic gradient methods (in TrainingInfoProto.algorithm),
         and the number of training iterations (in TrainingInfoProto.algorithm).
         By default, this field is empty and no initializer would be changed
         by the execution of "algorithm".
         
        repeated .onnx.StringStringEntryProto update_binding = 4;
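         A sketch that edits existing entries in place through their builders, assuming
         (as is usual for protobuf-generated nested builders) that changes made through
         the returned builders are reflected in the parent trainingInfo builder; the
         "_v2" suffix is a placeholder:

           // Rename every bound output by appending a suffix.
           for (OnnxMl.StringStringEntryProto.Builder b : trainingInfo.getUpdateBindingBuilderList()) {
               b.setValue(b.getValue() + "_v2");
           }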
      • setUnknownFields

        public final OnnxMl.TrainingInfoProto.Builder setUnknownFields​(org.nd4j.shade.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        setUnknownFields in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        setUnknownFields in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>
      • mergeUnknownFields

        public final OnnxMl.TrainingInfoProto.Builder mergeUnknownFields​(org.nd4j.shade.protobuf.UnknownFieldSet unknownFields)
        Specified by:
        mergeUnknownFields in interface org.nd4j.shade.protobuf.Message.Builder
        Overrides:
        mergeUnknownFields in class org.nd4j.shade.protobuf.GeneratedMessageV3.Builder<OnnxMl.TrainingInfoProto.Builder>